image model
Recently Published Documents


TOTAL DOCUMENTS

315
(FIVE YEARS 61)

H-INDEX

27
(FIVE YEARS 2)

Author(s):  
Jie Yuan ◽  
Yuan Ji ◽  
Zhou Zhu ◽  
Liya Huang ◽  
Junfeng Qian ◽  
...  

To address the large errors and poor performance of traditional methods for checking progressive-image model-matching information, an automatic checking method based on machine learning is proposed. The generation process of progressive images is analyzed to obtain target image samples. On this basis, a machine-learning algorithm is used to segment the progressive image samples. Within each segmented region, crawler technology automatically collects the model-matching information, and under the constraints of the image model-matching checking standard, the matching information is checked automatically with respect to geometric structure, image content, and other aspects. Experimental results show that the checking error of the proposed method is reduced by 0.687 Mb and the quality of the progressive image is improved.


2021 ◽  
Vol 8 (Supplement_1) ◽  
pp. S279-S279
Author(s):  
Emily Mu ◽  
Sarah Jabbour ◽  
Michael Sjoding ◽  
John Guttag ◽  
Jenna Wiens ◽  
...  

Abstract Background Infectious respiratory-tract pathogens are a common trigger of healthcare capacity strain, e.g., the COVID-19 pandemic. Patient risk-stratification models that identify low-risk patients can help improve patient care processes and allocate limited resources. Many existing deterioration indices are based entirely on structured data from the Electronic Health Record (EHR) and ignore important information from other data sources; however, chest radiographs have been shown to be helpful in predicting the progression of respiratory diseases. We developed a joint EHR and chest X-ray (CXR) modeling method and applied it to identify low-risk COVID-19+ patients within the first 48 hours of hospital admission. Methods All COVID-19+ patients admitted to a large urban hospital between March 2020 and February 2021 were included. We trained an image model on large public chest radiograph datasets and fine-tuned it to predict acute dyspnea using a cohort from the same hospital. We then combined this image model with two existing EHR deterioration indices to predict the risk of a COVID-19+ patient being intubated, receiving a nasal cannula, or being treated with a vasopressor. We evaluated the models' ability to identify low-risk patients using the positive predictive value (PPV). Results The image-augmented deterioration index identified 12% of 716 COVID-19+ patients as low risk with a PPV of 0.95 in the first 48 hours of admission. In contrast, when used individually, the EHR and CXR models each identified roughly 3% of patients at a PPV of 0.95. [Figure: Predicting Low-Risk Patients. Aggregated predictions for COVID-19+ patients within the first 48 hours of admission, shown with an exponentially weighted moving average and 95% CIs. Each plot shows the number of patients flagged as low risk by lowest aggregated prediction and the resulting accuracy for that fraction of patients; the bottom plot compares the fused MCURES model to the MCURES model, and the top plot compares the fused EDI model to the EDI model.] Conclusion Our multi-modal models identified far more patients at low risk of COVID-19 deterioration than models trained on either modality alone. This indicates the importance of combining structured data with chest X-rays when building a deterioration index for infectious respiratory-tract diseases. Disclosures All Authors: No reported disclosures
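The low-risk flagging and PPV evaluation described above can be sketched as follows. This is not the authors' code: the averaging fusion of the two risk scores and the flag-the-lowest-fraction rule are illustrative assumptions, and the toy cohort is made up.

```python
# Hedged sketch: fuse an EHR deterioration score with a CXR model
# score, flag the lowest-scoring fraction of patients as "low risk",
# and compute the PPV of that flag (a flag is correct when the
# patient did NOT deteriorate).

def fuse_scores(ehr_scores, cxr_scores, w=0.5):
    """Convex combination of two per-patient risk scores in [0, 1]."""
    return [w * e + (1 - w) * c for e, c in zip(ehr_scores, cxr_scores)]

def low_risk_ppv(scores, deteriorated, fraction=0.12):
    """Flag the lowest `fraction` of patients by score; return (n_flagged, PPV)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n_flag = max(1, int(round(fraction * len(scores))))
    flagged = order[:n_flag]
    correct = sum(1 for i in flagged if not deteriorated[i])
    return n_flag, correct / n_flag

# Toy cohort: per-patient risk scores and outcomes (1 = deteriorated).
ehr = [0.1, 0.8, 0.3, 0.9, 0.2, 0.7, 0.4, 0.6]
cxr = [0.2, 0.9, 0.1, 0.8, 0.1, 0.6, 0.5, 0.7]
outcome = [0, 1, 0, 1, 0, 1, 0, 1]

fused = fuse_scores(ehr, cxr)
n, ppv = low_risk_ppv(fused, outcome, fraction=0.25)
```

In this toy cohort, the two lowest fused scores belong to patients who did not deteriorate, so the PPV of the low-risk flag is 1.0.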


2021 ◽  
Vol 265 ◽  
pp. 118719
Author(s):  
Xiansheng Liu ◽  
Hadiatullah Hadiatullah ◽  
Xun Zhang ◽  
Jürgen Schnelle-Kreis ◽  
Xiaohu Zhang ◽  
...  

2021 ◽  
Author(s):  
Dmitrii Kriukov ◽  
Nikita Koritskiy ◽  
Igor Kozlovskii ◽  
Mark Zaretckii ◽  
Mariia Bazarevich ◽  
...  

The increasing interest in chromatin conformation inside the nucleus and the availability of genome-wide experimental data make it possible to develop computational methods that increase the quality of the data and thus overcome the limitations of high experimental costs. Here we develop a deep-learning approach for increasing Hi-C data resolution by incorporating additional information about the genome sequence. The approach utilizes two different deep-learning models: an image-to-image model, which enhances Hi-C resolution on its own, and a sequence-to-image model, which uses additional information about the underlying genome sequence for further resolution improvement. Both models are combined with a simple head model that provides a more accurate enhancement of the initial low-resolution Hi-C data. The code is freely available in a GitHub repository: https://github.com/koritsky/DL2021 HI-C
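The idea of a simple head model that fuses the two enhancement outputs can be sketched as below. The paper does not specify the head's architecture; here it is assumed, for illustration, to be a linear per-pixel combination fit by least squares against high-resolution targets, and the Hi-C maps are random stand-ins.

```python
import numpy as np

# Hedged sketch: combine the outputs of an image-to-image model and a
# sequence-to-image model with a linear "head" that learns one weight
# per source (plus a bias) by least squares against the target map.

def fit_head(pred_img, pred_seq, target):
    """Learn (w_img, w_seq, bias) minimizing the squared error of
    w_img*pred_img + w_seq*pred_seq + bias against the target."""
    X = np.stack([pred_img.ravel(), pred_seq.ravel(),
                  np.ones(pred_img.size)], axis=1)
    w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    return w

def apply_head(pred_img, pred_seq, w):
    return w[0] * pred_img + w[1] * pred_seq + w[2]

rng = np.random.default_rng(0)
hi_res = rng.random((16, 16))                            # stand-in high-res map
out_img = hi_res + 0.1 * rng.standard_normal((16, 16))   # image-model output
out_seq = hi_res + 0.2 * rng.standard_normal((16, 16))   # sequence-model output

w = fit_head(out_img, out_seq, hi_res)
fused = apply_head(out_img, out_seq, w)

err_img = np.mean((out_img - hi_res) ** 2)
err_fused = np.mean((fused - hi_res) ** 2)
```

Because the weights (1, 0, 0) reproduce the image-model output exactly, the least-squares fused map can never fit the target worse than either input alone on the data it was fit to.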


2021 ◽  
Author(s):  
Khan Baykaner ◽  
Mona Xu ◽  
Lucas Bordeaux ◽  
Feng Gu ◽  
Balaji Selvaraj ◽  
...  

ABSTRACT Whole slide images (WSIs) contain rich pathology information which can be used to diagnose cancer, characterize the tumour microenvironment (TME), assess patient prognosis, and provide insights into whether a patient is likely to respond to a given treatment. However, since WSI availability is generally scarce during early-stage clinical trials, the applicability of deep learning models to new and ongoing drug development in early stages is typically limited. WSIs available in public repositories, such as The Cancer Genome Atlas (TCGA), enable an unsupervised pretraining approach that helps alleviate data scarcity. Pretrained models can also be utilised for a range of downstream applications such as automated annotation, quality control (QC), and similar-image search.

In this work we present DIME (Drug-development Image Model Embeddings), a pipeline for training image patch embeddings for WSIs via self-supervised learning. We compare inpainting and contrastive learning approaches for embedding training in the DIME pipeline, and demonstrate state-of-the-art performance at image patch clustering. In addition, we show that the resultant embeddings allow effective downstream patch classifiers to be trained with relatively few WSIs, and apply this to an AstraZeneca-sponsored phase III clinical trial. We also highlight the importance of effective colour normalisation for implementing histopathology analysis pipelines, regardless of the core learning algorithm. Finally, we show via subjective exploration of embedding spaces that the DIME pipeline clusters interesting histopathological artefacts, suggesting a possible role for the method in QC pipelines. By clustering image patches according to underlying morphopathologic features, DIME supports subsequent qualitative exploration by pathologists and has the potential to inform and expedite biomarker discovery and drug development.
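The similar-image search built on patch embeddings can be sketched as a cosine-similarity nearest-neighbour query. This is not the DIME implementation; the embedding dimension and the toy embeddings are made up for illustration.

```python
import numpy as np

# Hedged sketch: given per-patch embeddings, L2-normalize the rows so
# dot products equal cosine similarity, then return the k patches most
# similar to a query patch -- the basic operation behind similar-image
# search and patch clustering.

def normalize(E):
    """Row-wise L2 normalization."""
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def most_similar(E, query_idx, k=2):
    """Indices of the k patches most similar to patch `query_idx`."""
    En = normalize(E)
    sims = En @ En[query_idx]
    sims[query_idx] = -np.inf          # exclude the query itself
    return np.argsort(sims)[::-1][:k]

rng = np.random.default_rng(1)
emb = rng.standard_normal((100, 64))   # 100 patch embeddings, dim 64
emb[7] = emb[3] + 0.01 * rng.standard_normal(64)  # near-duplicate of patch 3

neighbours = most_similar(emb, 3, k=2)
```

Patch 7 was constructed as a slightly perturbed copy of patch 3, so it comes back as the top match; in a QC setting, tight clusters of such near-duplicates are exactly the artefact groups the abstract describes.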


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2296
Author(s):  
Hyun-Tae Choi ◽  
Byung-Woo Hong

The development of convolutional neural networks for deep learning has contributed significantly to image classification and segmentation. High performance in supervised image segmentation requires a large amount of ground-truth data, which is costly to produce, so unsupervised approaches are actively being studied. The Mumford–Shah and Chan–Vese models are well-known unsupervised image segmentation models; however, because they are based on pixel intensities, they cannot separate the foreground and background of an image. In this paper, we propose a weakly supervised model for image segmentation that combines these segmentation models (the Mumford–Shah model and the Chan–Vese model) with classification. The segmentation model (i.e., the Mumford–Shah or Chan–Vese model) finds a base image mask for classification, and the classification network uses the mask from the segmentation model. With the classification network, the output mask of the segmentation model changes in the direction that increases the performance of the classification network. In addition, the mask can naturally distinguish the foreground and background of images. Our experiments show that our segmentation model, integrated with a classifier, can segment the input image into foreground and background using only the image's class label, i.e., an image-level label.
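The intensity-based energy underlying these models can be illustrated with the two-region Chan–Vese data term: given a binary mask, the optimal region constants are the mean intensities inside and outside, and the energy is the summed squared deviation from them. This minimal sketch omits the curve-length regularizer of the full model.

```python
import numpy as np

# Hedged sketch: two-region Chan-Vese data term. A mask that matches a
# piecewise-constant image achieves zero energy; a mismatched mask
# mixes intensities within its regions and pays a higher energy.

def chan_vese_energy(image, mask):
    inside, outside = image[mask], image[~mask]
    c1 = inside.mean() if inside.size else 0.0   # optimal inside constant
    c2 = outside.mean() if outside.size else 0.0 # optimal outside constant
    return np.sum((inside - c1) ** 2) + np.sum((outside - c2) ** 2)

# Toy image: bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

good_mask = img > 0.5                    # mask matching the square
bad_mask = np.zeros_like(good_mask)
bad_mask[:16, :] = True                  # mask splitting the image in half

e_good = chan_vese_energy(img, good_mask)
e_bad = chan_vese_energy(img, bad_mask)
```

The matching mask yields zero energy while the mismatched one does not, which also shows the model's limitation noted above: the energy depends only on pixel intensities, not on which region is semantically "foreground".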


2021 ◽  
Vol 13 (2) ◽  
pp. 60
Author(s):  
Korosando Fransiskus ◽  
Manggu Ngguna Raji ◽  
Elvira Fransiska

ABSTRACT This study aims to describe the application of the image model and the MYOB accounting program versus the lecture method, and the differences in learning outcomes of class XI students at SMK Yos Sudarso Ende. This experimental study uses observation and tests, analysed with a quantitative approach. Improved learning outcomes were tested with N-Gain and analysed by t-test. Initial test results: the average value of the experimental group was 8.50 and of the control group 8.67; the standard deviation of the experimental group (S1) was 2.505 and of the control group 1.670. Post-test results: the average value of the experimental group was 19.42 and of the control group 14.92; the standard deviation (S2) of the experimental group was 2.938 and of the control group 3.343. The experimental group's N-Gain fell 58.33% into the "moderate" classification and 41.7% into the "high" classification. The control group's N-Gain fell 33.33% into the "low", 58.33% into the "moderate", and 8.33% into the "high" classification. The t-test shows t_count > t_table (3.503 > 1.717), so H0 is rejected. In conclusion, there are differences in learning outcomes between the experimental group and the control group.
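The N-Gain statistic used above is Hake's normalized gain, g = (post − pre) / (max − pre), with the conventional low/moderate/high bands. The maximum score of 25 used below is purely an assumption for illustration; the abstract does not state the instrument's maximum.

```python
# Hedged sketch: normalized gain (N-Gain) per Hake's formula, applied
# to the group means reported in the abstract. The maximum score (25)
# is a hypothetical value, not taken from the study.

def n_gain(pre, post, max_score):
    return (post - pre) / (max_score - pre)

def classify(g):
    """Common N-Gain bands: high (g >= 0.7), moderate (0.3 <= g < 0.7), low."""
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "moderate"
    return "low"

g_exp = n_gain(8.50, 19.42, 25)   # experimental group means
g_ctl = n_gain(8.67, 14.92, 25)   # control group means
```

Under this assumed maximum, both group-mean gains land in the "moderate" band, with the experimental group's gain clearly larger, consistent with the per-student classifications reported.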


2021 ◽  
Vol 17 (24) ◽  
pp. 1
Author(s):  
Egle Urvelyte ◽  
Aidas Perminas

This paper tests hypothesized psychosocial predictor factors (general unconditional acceptance, body acceptance by others, body function appreciation) of positive body image among 812 female students aged between 18 and 35 years. Path analysis procedures in the Mplus Version 7 program were used to evaluate the positive body image model. The model indicated that greater perceived body acceptance by others was linked to greater body function appreciation, which in turn was linked to higher positive body image. Perceived body acceptance by others was also directly related to higher positive body image. General unconditional acceptance did not predict greater body function appreciation in female students. The findings reveal some important factors predicting positive body image, and these results can be used to improve positive body image interventions.


2021 ◽  
Author(s):  
Erik Lindgren ◽  
Christopher Zach

Abstract Within many quality-critical industries, e.g. the aerospace industry, industrial X-ray inspection is an essential as well as a resource-intense part of quality control. In such industries, X-ray image interpretation is typically still done by humans, so increasing the automatization of interpretation would be of great value. We claim that safe automatic interpretation of industrial X-ray images requires a robust confidence estimation with respect to out-of-distribution (OOD) data. In this work we explored whether such a confidence estimation can be achieved by comparing input images with a model of the accepted images. For the image model we derived an autoencoder, which we trained unsupervised on a public dataset of X-ray images of metal fusion welds. We achieved a true positive rate of 80–90% at a 4% false positive rate, and correctly detected an OOD data example as an anomaly.
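The reconstruction-error confidence estimation described above can be sketched as follows. This is not the authors' autoencoder: a PCA-style linear autoencoder on synthetic data stands in for their model of accepted images, and the threshold rule is an assumption that mirrors the 4% false-positive operating point.

```python
import numpy as np

# Hedged sketch: OOD detection by reconstruction error. A model of the
# "accepted" images (here: top principal components of in-distribution
# data) reconstructs in-distribution inputs well; inputs that
# reconstruct poorly are flagged as out-of-distribution.

rng = np.random.default_rng(2)

# In-distribution data lives near a low-dimensional subspace.
basis = rng.standard_normal((4, 32))
train = rng.standard_normal((200, 4)) @ basis
held_out = rng.standard_normal((100, 4)) @ basis

# "Encoder/decoder": project onto the top 4 principal components.
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:4]

def recon_error(x):
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.mean((x - x_hat) ** 2, axis=-1)

# Threshold at the 96th percentile of held-out errors (~4% FPR).
threshold = np.quantile(recon_error(held_out), 0.96)

ood = rng.standard_normal((50, 32))    # off-subspace OOD samples
tpr = np.mean(recon_error(ood) > threshold)
```

Because the OOD samples do not lie near the learned subspace, their reconstruction errors exceed the threshold almost surely, giving a high true-positive rate at the fixed false-positive rate, the same trade-off the abstract reports.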

