fully automatic: Recently Published Documents

TOTAL DOCUMENTS: 2093 (five years: 557)
H-INDEX: 64 (five years: 10)

Diagnostics, 2022, Vol 12 (1), pp. 123
Author(s): Rania Almajalid, Ming Zhang, Juan Shan

In the medical sector, three-dimensional (3D) imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used. 3D MRI is a non-invasive way of studying the soft-tissue structures of the knee joint in osteoarthritis studies. Identifying the bone structure first can greatly improve the accuracy of segmenting structures such as cartilage, bone marrow lesions, and the meniscus. U-net is a convolutional neural network originally designed to segment biological images with limited training data. The input of the original U-net is a single 2D image and the output is a binary 2D image. In this study, we modified the U-net model to identify the knee bone structures from 3D MRI, which is a sequence of 2D slices. A fully automatic model is proposed to detect and segment the knee bones. The proposed model was trained, tested, and validated using 99 knee MRI cases, where each case consists of 160 2D slices for a single knee scan. To evaluate the model's performance, the similarity, Dice coefficient (DICE), and area error metrics were calculated. Separate models were trained for the individual knee bone components (tibia, femur, and patella), along with a combined model for segmenting all the knee bones. Using the whole MRI sequence (160 slices), the method first detects the beginning and ending bone slices and then segments the bone structures for all the slices in between. On the testing set, the detection model achieved 98.79% accuracy and the segmentation model achieved a DICE of 96.94% and a similarity of 93.98%. The proposed method outperforms several state-of-the-art methods in terms of DICE score on the same dataset, exceeding U-net by 3.68%, SegNet by 14.45%, and FCN-8 by 2.34%.
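As a point of reference for the DICE evaluation described above, here is a minimal sketch of how a Dice coefficient can be computed for binary segmentation masks (plain NumPy; the function and array names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = bone, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative usage: per-slice DICE over a 160-slice knee scan, then averaged.
# `pred_volume` and `gt_volume` are assumed to be (160, H, W) binary arrays.
# scores = [dice_coefficient(pred_volume[i], gt_volume[i]) for i in range(160)]
# print(np.mean(scores))
```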


Author(s): Julien Issa, Raphael Olszewski, Marta Dyszkiewicz-Konwińska

This systematic review aims to identify the available semi-automatic and fully automatic algorithms for inferior alveolar canal localization and to present their diagnostic accuracy. Articles on inferior alveolar nerve/canal localization using artificial-intelligence-based methods (semi-automated and fully automated) were collected electronically from five databases (PubMed, Medline, Web of Science, Cochrane, and Scopus). Two independent reviewers screened the titles and abstracts of the collected records, stored in EndNote X7, against the inclusion criteria. The included articles were then critically appraised with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Of the 990 initially collected articles, seven studies were included after deduplication and screening against the exclusion criteria. In total, 1288 human cone-beam computed tomography (CBCT) scans were investigated for inferior alveolar canal localization using different algorithms, and the results were compared with manual tracing performed by experts in the field. The reported diagnostic accuracy values of the algorithms were extracted. A wide range of testing measures was implemented across the analyzed studies, while some of the expected indices were missing from the results. Future studies should follow the new artificial intelligence guidelines to ensure proper methodology, reporting, results, and validation.


2022, Vol 115, pp. 108190
Author(s): Joaquim de Moura, Jorge Novo, Marcos Ortega
Keyword(s): X-Ray

2022, pp. 100007
Author(s): Hisako Katano, Nobutake Ozeki, Hideyuki Koga, Kenji Suzuki, Jun Masumoto, ...

2021, Vol 14 (4), pp. 1-17
Author(s): Dilawar Ali, Steven Verstockt, Nico Van De Weghe

Rephotography is the process of recapturing a photograph of a location from the same perspective in which it was captured earlier. A rephotographed image is the best way to visualize and study the social changes of a location over time. Traditionally, only expert artists and photographers have been able to produce a rephotograph of a specific location. The manual editing or human-eye judgment used to generate rephotographs requires a great deal of precision and effort and is not always accurate. In the era of computer science and deep learning, computer vision techniques make it easier and faster to perform precise operations on an image. Many research methodologies have been proposed for rephotography, but none of them is fully automatic. Some of these techniques require manual input by the user or need multiple images of the same location together with 3D point cloud data, while others only offer suggestions to the user on how to perform rephotography. In historical records and archives, most of the time only a single 2D image of a given location is available. Computational rephotography is challenging when only one image of a location, captured at a different time, is available, because it is difficult to recover the accurate perspective of a single 2D historical image. Moreover, in building rephotography, alignments and regular shapes must be maintained. The features of a building may change over time, and in most cases it is not possible to use a feature detection algorithm to detect the key features. In this paper, we propose a methodology to rephotograph house images by combining deep learning and traditional computer vision techniques. The purpose of this research is to rephotograph an image of the past based on a single image. This work will be helpful not only for computer scientists but also for history and cultural heritage scholars studying the social changes of a location over a specific time period, and it will allow users to go back in time and see how a specific place looked in the past. We achieved good, fully automatic rephotography results based on façade segmentation using only a single image.
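As an illustration of the traditional computer-vision half of such a pipeline, here is a minimal sketch of aligning a modern photograph to a historical viewpoint with a homography estimated from corresponding façade points (OpenCV; in practice the correspondences would come from a segmentation or matching step, and all names and coordinates here are illustrative assumptions, not the authors' method):

```python
import cv2
import numpy as np

def align_to_historical(modern_bgr, modern_pts, historical_pts, out_size):
    """Warp the modern photo so the façade corners land on the historical ones.

    modern_pts, historical_pts: (N, 2) arrays of corresponding points
    (e.g. façade corners derived from a segmentation mask), N >= 4.
    """
    H, _ = cv2.findHomography(np.asarray(modern_pts, dtype=np.float32),
                              np.asarray(historical_pts, dtype=np.float32),
                              method=cv2.RANSAC)
    return cv2.warpPerspective(modern_bgr, H, out_size)

# Illustrative usage with hypothetical file names and corner coordinates:
# modern = cv2.imread("modern_house.jpg")
# warped = align_to_historical(modern, modern_corners, historical_corners, (800, 600))
# cv2.imwrite("rephotograph_aligned.jpg", warped)
```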


2021, Vol 4 (3)
Author(s): Kaisa Vitikainen, Maarit Koponen

The demand for intralingual subtitles for television and video content is increasing. In Finland, major broadcasting companies are required to provide intralingual subtitles for all or a portion of their programming in Finnish and Swedish, excluding certain live events. To meet this need, technology could offer solutions in the form of automatic speech recognition and subtitle generation. Although fully automatic subtitles may not be of sufficient quality to be accepted by the target audience, they can be a useful tool for the subtitler. This article presents research conducted as part of the MeMAD project, in which automatically generated Finnish subtitles were tested in professional workflows with four subtitlers. We discuss observations regarding the effect of automation on productivity, based on experiments in which participants subtitled short video clips from scratch, by respeaking, and by post-editing automatically generated subtitles, as well as the subtitlers' experience, based on feedback collected with questionnaires and interviews.

Lay summary: This article discusses how technology can help create subtitles for television programmes and videos. Subtitles in the same language as the content help the Deaf and the hard-of-hearing to access television programmes and videos. They are also useful, for example, for language learning or for watching videos in noisy places. Demand for subtitles is growing, and many countries also have laws that require same-language subtitles. For example, major broadcasters in Finland must offer same-language subtitles for some programmes in Finnish and Swedish. However, broadcasters usually have limited time and money for subtitling. One useful tool could be speech recognition technology, which automatically converts speech to text. Subtitles made with speech recognition alone are not good enough yet and need to be edited. We used speech recognition to automatically produce same-language subtitles in Finnish. Four professional subtitlers edited them to create subtitles for short videos. We measured the time and the number of keystrokes they needed for this task and compared the approaches to see whether automation made subtitling faster. We also asked how the participants felt about using automatic subtitles in their work. This study shows that speech recognition can be a useful tool for subtitlers, but the quality and usability of the technology are important.
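The MeMAD pipeline itself is not described here, but as a rough illustration of the "automatic subtitle generation" step, here is a minimal sketch that turns timestamped ASR segments into SRT subtitle blocks ready for post-editing (all function names and the example segments are assumptions, not the project's actual code):

```python
def to_srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """segments: iterable of (start_sec, end_sec, text) tuples from an ASR system."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Illustrative usage with made-up ASR output:
# print(segments_to_srt([(0.0, 2.4, "Hyvää iltaa."), (2.6, 5.1, "Tervetuloa uutisiin.")]))
```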


Algorithms, 2021, Vol 15 (1), pp. 2
Author(s): Luka Grubišić, Domagoj Lacmanović, Josip Tambača

This paper presents an algorithm for fully automatic mesh generation for the finite element analysis of ships and offshore structures. The quality requirements on the mesh generator are imposed by the acceptance criteria of the classification societies as well as by the need to avoid shear locking when using low-degree shell elements. The meshing algorithm generates quadrilateral-dominated meshes (consisting of quads and triangles), and the mesh quality requirements mandate that quadrilaterals with internal angles close to 90° are preferred. The geometry is described by a dictionary containing points, rods, surfaces, and openings. The first part of the proposed method is an algorithm that automatically cleans the geometry. The corrected geometry is then meshed by the frontal Delaunay mesh generator as implemented in the gmsh package. We present a heuristic method to precondition the cross field of the frontal quadrilateral mesher. In addition, the influence of the order in which the plates are meshed is explored as a preconditioning step.
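The paper relies on gmsh's frontal Delaunay mesher with recombination into quads; as a rough sketch of how a quad-dominated mesh can be requested through the gmsh Python API (a toy rectangular plate stands in for the real ship geometry, and the option values are illustrative assumptions, not the authors' settings):

```python
import gmsh

# Toy stand-in for a ship plate: mesh a rectangle with a quad-dominated,
# frontal-Delaunay mesh, roughly mirroring the options discussed in the paper.
gmsh.initialize()
gmsh.model.add("plate")
gmsh.model.occ.addRectangle(0.0, 0.0, 0.0, 2.0, 1.0)  # x, y, z, dx, dy
gmsh.model.occ.synchronize()

gmsh.option.setNumber("Mesh.Algorithm", 6)               # 6 = Frontal-Delaunay
gmsh.option.setNumber("Mesh.RecombineAll", 1)            # recombine triangles into quads
gmsh.option.setNumber("Mesh.RecombinationAlgorithm", 1)  # 1 = Blossom
gmsh.option.setNumber("Mesh.MeshSizeMax", 0.1)           # illustrative target element size

gmsh.model.mesh.generate(2)
gmsh.write("plate_quad_dominated.msh")
gmsh.finalize()
```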


Author(s): S. Bütüner, E. Şehirli

Abstract. The use of computers and software in the biomedical field has been increasing, and applications for doctors, clinicians, scientists, and other users have been developed in recent years. Manual, semi-automatic, and fully automatic applications for bone fracture detection are among the important studies in this field. Image segmentation, one of the image preprocessing steps in bone fracture detection, is an important step for obtaining successful results with high accuracy. In this study, the Otsu thresholding, active contour, k-means, fuzzy c-means, Niblack thresholding, and max-min thresholding range (MMTR) methods are applied to bone images obtained from Karabük University Training and Research Hospital. When no filters are applied to the images to remove noise, the most successful method is k-means, with specificity and accuracy of 89.55% and 83.31%, respectively. The Niblack thresholding method has the highest sensitivity, at 92.45%.
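Of the methods compared above, Otsu thresholding and k-means clustering are readily available in OpenCV; the following is a minimal, hedged sketch of both (the function names, cluster count, and input file are illustrative assumptions, not the study's code):

```python
import cv2
import numpy as np

def otsu_segment(gray: np.ndarray) -> np.ndarray:
    """Binary mask via Otsu's global threshold (expects an 8-bit grayscale image)."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def kmeans_segment(gray: np.ndarray, k: int = 2) -> np.ndarray:
    """Cluster pixel intensities with k-means and return the per-pixel label image."""
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(gray.shape)

# Illustrative usage with a hypothetical file name:
# gray = cv2.imread("bone_xray.png", cv2.IMREAD_GRAYSCALE)
# otsu_mask = otsu_segment(gray)
# kmeans_labels = kmeans_segment(gray, k=2)
```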


2021
Author(s): Dominik Hirling, Peter Horvath

Cell segmentation is a fundamental problem in biology for which convolutional neural networks currently yield the best results. In this paper, we present HarmonicNet, a network that modifies the popular StarDist and SplineDist architectures. While StarDist and SplineDist describe an object by the lengths of equiangular rays and by control points, respectively, our network utilizes Fourier descriptors, predicting for every pixel in the image a coefficient vector that implicitly defines the resulting segmentation. We evaluate our model on three different datasets and show that Fourier descriptors can achieve a high level of accuracy with a small number of coefficients. HarmonicNet is also capable of accurately segmenting objects that are not star-shaped, a case where StarDist performs suboptimally according to our experiments.
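HarmonicNet's architecture is not reproduced here, but as a hedged sketch of the underlying idea of describing a closed contour with a small number of complex Fourier coefficients (plain NumPy; the function names and truncation order are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fourier_descriptors(contour_xy: np.ndarray, order: int = 8) -> np.ndarray:
    """Low-order Fourier descriptors of a closed 2D contour.

    contour_xy: (N, 2) array of boundary points ordered along the contour.
    Keeps frequencies -order..order (2*order + 1 complex coefficients).
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # encode points as complex numbers
    coeffs = np.fft.fft(z) / len(z)
    return np.concatenate([coeffs[:order + 1], coeffs[-order:]])

def reconstruct_contour(descriptors: np.ndarray, order: int, n_points: int = 200) -> np.ndarray:
    """Resample an approximate contour from the truncated descriptors."""
    spectrum = np.zeros(n_points, dtype=complex)
    spectrum[:order + 1] = descriptors[:order + 1]   # frequencies 0..order
    spectrum[-order:] = descriptors[order + 1:]      # frequencies -order..-1
    z = np.fft.ifft(spectrum) * n_points
    return np.stack([z.real, z.imag], axis=1)

# Illustrative usage: `cell_boundary` is assumed to be an (N, 2) contour array.
# desc = fourier_descriptors(cell_boundary, order=8)
# approx = reconstruct_contour(desc, order=8)
```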

