additional image
Recently Published Documents


TOTAL DOCUMENTS: 51 (FIVE YEARS: 22)

H-INDEX: 6 (FIVE YEARS: 1)

Polymers ◽  
2021 ◽  
Vol 13 (20) ◽  
pp. 3469
Author(s):  
Franziska Hensel ◽  
Andreas Koenig ◽  
Hans-Martin Doerfler ◽  
Florian Fuchs ◽  
Martin Rosentritt ◽  
...  

The aim of this in vitro study was to analyse the performance of CAD/CAM resin-based composites (RBCs) for the fabrication of long-term temporary fixed dental prostheses (FDPs) and to compare their long-term stability with that of other commercially available materials. Four CAD/CAM materials [Structur CAD (SC), VITA CAD-Temp (CT), Grandio disc (GD), and Lava Esthetic (LE)] and two direct RBCs [Structur 3 (S3) and LuxaCrown (LC)] were used to fabricate three-unit FDPs. 10/20 FDPs were subjected to thermal cycling and mechanical loading in a chewing simulator, and 10/20 FDPs were stored in distilled water. Two FDPs of each material underwent additional image diagnostics prior to and after chewing simulation. Fracture loads were measured and the data were statistically analysed. SC is suitable for use as a long-term temporary (two-year) three-unit FDP. In comparison to CT, SC featured significantly higher breaking forces (SC > 800 N; CT < 600 N) and significantly lower surface wear of the antagonists, while the abrasion of the FDP was similar. The high breaking forces (1100–1327 N) of GD and its small difference in flexural strength compared to LE showed that GD might also be used for the fabrication of three-unit FDPs. With the exception of S3, all analysed direct and indirect materials are suitable for the fabrication of temporary FDPs.


2021 ◽  
Vol 974 (8) ◽  
pp. 27-35
Author(s):  
A.A. Alyabyev ◽  
K.A. Litvintcev ◽  
A.A. Kobzev

The geodesic method of measuring the coordinates of characteristic points is the main method used in urban cadastral works (including complex ones). The introduction of digital aerial cameras and unmanned aerial vehicles, together with improved hardware and software for image processing, makes it possible to achieve the required accuracy (10 cm in plan coordinates) with the photogrammetric method. Stereo models and orthomosaics are the output products of this technology that are used for measurements. Because creating an orthomosaic requires additional image conversion processes, which may cause a loss of accuracy and introduce perspective distortions of tall objects, orthomosaics cannot be used to determine the coordinates of characteristic points. It is therefore proposed to use a stereo model, i.e. a three-dimensional high-precision image of the terrain, as the product for measuring characteristic points in cadastral works. The results of experiments and the experience of production work show that the geodesic and stereophotogrammetric methods are equally accurate for the real estate cadastre. At the same time, the stereophotogrammetric method has some additional advantages.


Author(s):  
А.И. Максимов

In this paper, a method for multi-frame super-resolution is proposed. It uses the recovery error values at each point of every frame to form the resulting high-resolution image. The method combines the results of many years of the author's research in the field of image and video quality enhancement. The proposed method was developed for applied tasks of forensic video analysis and is intended to improve the visual quality of a flat local object located close to the centre of the frame. The method consists of three stages. The first stage is an optimal super-resolution recovery of each frame using a continuous-discrete observation model; during this stage, the recovery errors are stored in an additional image channel. The second stage is geometric registration of the recovered frames, with the geometric transformation also applied to the additional channel. The final stage is a weighted fusion of the frames that is optimal with respect to the mean-squared-error criterion. The advantages of the proposed method are the estimation of the error of the restored image at every point and the treatment of image degradations in the continuous domain. An experimental study of the method's reconstruction error was carried out, and the results were compared with a baseline that does not use the novel features of the proposed method: averaging fusion of linearly interpolated frames. Linear interpolation was chosen because it also fits into the filtering model of image recovery used in the method's first stage. The obtained results show that the proposed method outperforms the baseline.
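The fusion stage described above admits a simple reading: if each registered frame carries a per-pixel recovery-error estimate, a minimum-mean-squared-error combination weights each pixel by the inverse of its error variance. The NumPy sketch below illustrates that inverse-variance weighting; it is an assumption standing in for the author's exact optimal fusing rule, and the function and parameter names are hypothetical.

```python
import numpy as np

def fuse_frames(frames, error_maps, eps=1e-8):
    """Fuse geometrically registered frames into one image, weighting each
    pixel by the inverse of its estimated recovery error variance.

    frames     : list of HxW arrays (already registered to a common grid)
    error_maps : list of HxW arrays of per-pixel recovery error estimates
                 (treated here as standard-deviation-like values)
    """
    frames = np.stack(frames, axis=0).astype(np.float64)
    errors = np.stack(error_maps, axis=0).astype(np.float64)

    # Inverse-variance weights: pixels restored with smaller error
    # contribute more to the fused result.
    weights = 1.0 / (errors ** 2 + eps)
    fused = (weights * frames).sum(axis=0) / weights.sum(axis=0)
    return fused
```

A plain averaging fusion, as in the baseline mentioned in the abstract, corresponds to setting all weights equal.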


2021 ◽  
Vol 3 (2) ◽  
pp. 136-141
Author(s):  
Arvi Razanata ◽  
Prawito Prajitno ◽  
Djarwani Soeharso Soejoko

Cardiac CT acquisition is usually performed with an additional image series using a contrast medium injected into the body, which is then reconstructed by a radiologist with integrated CT scan software to determine the morphology and volume of the heart and coronary arteries. In practice, the data obtained from the hospital are raw data without contours segmented by a radiologist. For automation purposes, a dataset is needed as input for further program development. This study focuses on evaluating the segmentation of cardiac CT images using the Otsu threshold and an active contour algorithm, with the aim of building a dataset for heart volume quantification that can be used interactively as an alternative to integrated CT scan software. 2D contrast-enhanced cardiac CT images from 6 patients were processed with image processing techniques implemented in Matlab. Of the 689 slices used, (73.75 ± 19.41)% of the cardiac CT slices were segmented properly, (19.15 ± 19.61)% of the segmented slices included the spine, (1.36 ± 0.98)% of the slices did not include the whole heart region, and (16.58 ± 15.26)% of the slices included other organs; the consistency of the measurements was confirmed by an inter-observer variability of r = 0.9941. These errors are attributed to the influence of the patient's body geometry, in particular when the body diameter tends to be thin.
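The original pipeline was implemented in Matlab; as a rough illustration of the two segmentation steps named above (Otsu threshold followed by an active contour), here is a minimal scikit-image sketch. The morphological Chan-Vese contour, the intensity normalisation, and the iteration count are assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import morphological_chan_vese

def segment_heart_slice(ct_slice):
    """Rough heart segmentation on one contrast-enhanced CT slice:
    Otsu threshold for an initial mask, then an active contour
    (morphological Chan-Vese) to refine the boundary."""
    img = ct_slice.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)

    # Stage 1: global Otsu threshold separates contrast-enhanced
    # structures from the darker background.
    init_mask = img > threshold_otsu(img)

    # Stage 2: active contour refinement, seeded with the Otsu mask.
    refined = morphological_chan_vese(img, 50, init_level_set=init_mask)
    return refined.astype(bool)
```

Slices that also capture the spine or other organs, as reported in the abstract, would still need additional masking or region selection on top of this basic step.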


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Hiranya Jayakody ◽  
Paul Petrie ◽  
Hugo Jan de Boer ◽  
Mark Whitty

Abstract. Background: Stomata analysis using microscope imagery provides important insight into plant physiology, health and the surrounding environmental conditions. Plant scientists are now able to conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, thereby limiting general applicability. In addition, existing methods only generate bounding boxes around detected stomata, which requires users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm which can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique and magnification level. Results: The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour space biases arising from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries. The feature pyramid network embedded in the Mask R-CNN is utilised to identify stomata at different scales. Finally, a statistical filter is implemented at the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show the generalisability of the solution. Results indicated that the proposed approach can detect stomata with a precision, recall, and F-score of 95.10%, 83.34%, and 88.61%, respectively. A separate test comparing estimated stomata boundary values with manually measured data showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach. Conclusions: The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis whilst eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be directly deployed in Google Colab or any other Tensorflow environment.
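The abstract does not spell out the form of the statistical filter applied to the Mask R-CNN output, so the sketch below only illustrates the general idea with an assumed rule: reject detections with low confidence or with a mask area that is a strong outlier within the image. All names and thresholds are hypothetical.

```python
import numpy as np

def statistical_filter(masks, scores, min_score=0.5, z_max=2.5):
    """Drop likely false positives from instance-segmentation output.

    Assumes (hypothetically) that true stomata within one image have
    broadly similar areas, so a detection whose mask area is a strong
    outlier (|z| > z_max) or whose confidence score is low is rejected.
    masks  : list of HxW boolean arrays
    scores : per-detection confidence scores
    """
    areas = np.array([m.sum() for m in masks], dtype=np.float64)
    scores = np.asarray(scores, dtype=np.float64)

    mu, sigma = areas.mean(), areas.std() + 1e-9
    z = np.abs(areas - mu) / sigma

    keep = (scores >= min_score) & (z <= z_max)
    return [m for m, k in zip(masks, keep) if k]
```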


PLoS Genetics ◽  
2021 ◽  
Vol 17 (1) ◽  
pp. e1009304
Author(s):  
Hannah Vicars ◽  
Travis Karg ◽  
Brandt Warecki ◽  
Ian Bast ◽  
William Sullivan

Although kinetochores normally play a key role in sister chromatid separation and segregation, chromosome fragments lacking kinetochores (acentrics) can in some cases separate and segregate successfully. In Drosophila neuroblasts, acentric chromosomes undergo delayed, but otherwise normal sister separation, revealing the existence of kinetochore-independent mechanisms driving sister chromosome separation. Bulk cohesin removal from the acentric is not delayed, suggesting factors other than cohesin are responsible for the delay in acentric sister separation. In contrast to intact kinetochore-bearing chromosomes, we discovered that acentrics align parallel as well as perpendicular to the mitotic spindle. In addition, sister acentrics undergo unconventional patterns of separation. For example, rather than the simultaneous separation of sisters, acentrics oriented parallel to the spindle often slide past one another toward opposing poles. To identify the mechanisms driving acentric separation, we screened 117 RNAi gene knockdowns for synthetic lethality with acentric chromosome fragments. In addition to well-established DNA repair and checkpoint mutants, this candidate screen identified synthetic lethality with X-chromosome-derived acentric fragments in knockdowns of Greatwall (cell cycle kinase), EB1 (microtubule plus-end tracking protein), and Map205 (microtubule-stabilizing protein). Additional image-based screening revealed that reductions in Topoisomerase II levels disrupted sister acentric separation. Intriguingly, live imaging revealed that knockdowns of EB1, Map205, and Greatwall preferentially disrupted the sliding mode of sister acentric separation. Based on our analysis of EB1 localization and knockdown phenotypes, we propose that in the absence of a kinetochore, microtubule plus-end dynamics provide the force to resolve DNA catenations required for sister separation.


Author(s):  
Andrew Babichev ◽  
Vladimir Alexandrovich Frolov

In this paper we propose an exemplar-based 3D texture synthesis method which, unlike existing neural network approaches, preserves structural elements of the texture. The proposed approach does this by accounting for additional image properties that capture the structure, using a specially constructed error function to train the neural network. Thanks to the proposed solution, we can apply a 2D texture to any 3D model (even one without texture coordinates) by synthesizing a high-quality 3D texture and using the local or world-space position of the surface instead of 2D texture coordinates (fig. 1). Our solution is based on introducing three different error components into the process of fitting the neural network, which helps to preserve the desired properties of the generated texture. The first component enforces the structural similarity of the generated texture and the sample, the second component increases the diversity of the generated textures, and the third prevents abrupt transitions between individual pixels.
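As a purely illustrative sketch of how the three error components could be combined during network fitting, the PyTorch snippet below sums a structure term, a diversity term, and a smoothness term that penalises abrupt transitions between neighbouring texels. The weights and the concrete form of the smoothness term are assumptions; the paper's own loss terms are not reproduced here.

```python
import torch

def smoothness_term(texture):
    """Penalise abrupt transitions between neighbouring texels of a
    generated 3D texture, given as a tensor of shape [C, D, H, W]."""
    dz = (texture[:, 1:, :, :] - texture[:, :-1, :, :]).abs().mean()
    dy = (texture[:, :, 1:, :] - texture[:, :, :-1, :]).abs().mean()
    dx = (texture[:, :, :, 1:] - texture[:, :, :, :-1]).abs().mean()
    return dz + dy + dx

def total_loss(structure_term, diversity_term, texture,
               w_struct=1.0, w_div=0.5, w_smooth=0.1):
    """Weighted sum of the three components named in the abstract;
    the weights and term definitions are placeholders."""
    return (w_struct * structure_term
            + w_div * diversity_term
            + w_smooth * smoothness_term(texture))
```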


2020 ◽  
Author(s):  
Hiranya Samanga Jayakody ◽  
Paul Petrie ◽  
Hugo de Boer ◽  
Mark Whitty

Abstract. Background: Stomata analysis using microscope imagery provides important insight into plant physiology, health and the surrounding environmental conditions. Plant scientists are now able to conduct automated high-throughput analysis of stomata in microscope data; however, existing detection methods are sensitive to the appearance of stomata in the training images, thereby limiting general applicability. In addition, existing methods only generate bounding boxes around detected stomata, which requires users to implement additional image processing steps to study stomata morphology. In this paper, we develop a fully automated, robust stomata detection algorithm which can also identify individual stomata boundaries regardless of the plant species, sample collection method, imaging technique and magnification level. Results: The proposed solution consists of three stages. First, the input image is pre-processed to remove any colour space biases arising from different sample collection and imaging techniques. Then, a Mask R-CNN is applied to estimate individual stomata boundaries. The feature pyramid network embedded in the Mask R-CNN is utilised to identify stomata at different scales. Finally, a statistical filter is implemented at the Mask R-CNN output to reduce the number of false positives generated by the network. The algorithm was tested using 16 datasets from 12 sources, containing over 60,000 stomata. For the first time in this domain, the proposed solution was tested against 7 microscope datasets never seen by the algorithm to show the generalisability of the solution. Results indicated that the proposed approach can detect stomata with a precision, recall, and F-score of 95.10%, 83.34%, and 88.61%, respectively. A separate test comparing estimated stomata boundary values with manually measured data showed that the proposed method has an IoU score of 0.70, a 7% improvement over the bounding-box approach. Conclusions: The proposed method shows robust performance across multiple microscope image datasets of different quality and scale. This generalised stomata detection algorithm allows plant scientists to conduct stomata analysis whilst eliminating the need to re-label and re-train for each new dataset. The open-source code shared with this project can be directly deployed in Google Colab or any other Tensorflow environment.


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Hanying Wang ◽  
Haitao Xiong ◽  
Yuanyuan Cai

In recent years, image style transfer has been greatly improved by deep learning technology. However, when applied directly to clothing style transfer, current methods do not allow users to control the local transfer position within an image, such as selecting a specific T-shirt or pair of trousers on a figure, and they cannot perfectly preserve the clothing shape. Therefore, this paper proposes an interactive, localized image style transfer method designed for clothes. We introduce an additional image, called the outline image, which is extracted from the content image by an interactive algorithm. The interaction consists simply of dragging a rectangle around the desired clothing. We then introduce an outline loss function based on the distance transform of the outline image, which preserves the clothing shape. To smooth and denoise the boundary region, total variation regularization is employed. The proposed method constrains the new style to be generated only in the desired clothing region rather than in the whole image, including the background. As a result, the original clothing shape is preserved in the generated images. Experimental results show impressive generated clothing images and demonstrate that this is a good approach to designing clothes.
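To make the two ingredients named above concrete, the sketch below shows one plausible form of an outline loss built on the distance transform of the outline image, plus a standard total variation regulariser. This is an assumption-laden illustration, not the paper's exact loss; the mask and image layouts are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def outline_loss(stylized_mask, outline_image):
    """Hypothetical outline loss: penalise stylized pixels that fall far
    outside the clothing outline, using the Euclidean distance transform
    of the binary outline image as a per-pixel cost map."""
    # Distance of each pixel to the nearest clothing pixel;
    # zero inside the outlined clothing region.
    dist = distance_transform_edt(outline_image == 0)
    return float((stylized_mask * dist).sum() / (stylized_mask.sum() + 1e-9))

def total_variation(img):
    """Total variation regulariser that smooths and denoises the
    boundary region of a generated H x W x C image."""
    dh = np.abs(img[1:, :, :] - img[:-1, :, :]).sum()
    dw = np.abs(img[:, 1:, :] - img[:, :-1, :]).sum()
    return float(dh + dw)
```

In an optimisation loop these two terms would be weighted and added to the usual content and style losses, so that style gradients outside the outlined region are suppressed.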


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243923
Author(s):  
Michael A. Beck ◽  
Chen-Yi Liu ◽  
Christopher P. Bidinosti ◽  
Christopher J. Henry ◽  
Cara M. Godee ◽  
...  

A lack of sufficient training data, both in terms of variety and quantity, is often the bottleneck in the development of machine learning (ML) applications in any domain. For agricultural applications, ML-based models designed to perform tasks such as autonomous plant classification will typically be coupled to just one or perhaps a few plant species. As a consequence, each crop-specific task is very likely to require its own specialized training data, and the question of how to serve this need for data now often overshadows the more routine exercise of actually training such models. To tackle this problem, we have developed an embedded robotic system to automatically generate and label large datasets of plant images for ML applications in agriculture. The system can image plants from virtually any angle, thereby ensuring a wide variety of data; and with an imaging rate of up to one image per second, it can produce labeled datasets on the scale of thousands to tens of thousands of images per day. As such, this system offers an important alternative to time- and cost-intensive methods of manual generation and labeling. Furthermore, the use of a uniform background made of blue keying fabric enables additional image processing techniques such as background replacement and image segmentation. It also helps in the training process, essentially forcing the model to focus on the plant features and eliminating random correlations. To demonstrate the capabilities of our system, we generated a dataset of over 34,000 labeled images, with which we trained an ML model to distinguish grasses from non-grasses in test data from a variety of sources. We now plan to generate much larger datasets of Canadian crop plants and weeds that will be made publicly available in the hope of further enabling ML applications in the agriculture sector.
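The blue keying fabric makes background replacement and segmentation essentially a chroma-key operation. The OpenCV sketch below shows the idea; the HSV thresholds are placeholders that would need tuning to the actual fabric and lighting, and the function is not part of the authors' released code.

```python
import cv2
import numpy as np

def segment_plant(image_bgr, lower_hsv=(90, 60, 60), upper_hsv=(130, 255, 255)):
    """Separate a plant from a blue keying-fabric background.

    The HSV thresholds are rough placeholders for a blue screen.
    Returns the plant mask and an image with the background replaced
    by plain white (any other scene could be composited instead)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    background = cv2.inRange(hsv,
                             np.array(lower_hsv, dtype=np.uint8),
                             np.array(upper_hsv, dtype=np.uint8))
    plant_mask = cv2.bitwise_not(background)

    result = image_bgr.copy()
    result[plant_mask == 0] = (255, 255, 255)
    return plant_mask, result
```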

