A Real-Time Approach for Automatic Food Quality Assessment Based on Shape Analysis

Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Andrea Prati

Product sorting is a task of paramount importance for the agricultural industry of many countries. An accurate quality check ensures that good products are not wasted and that rotten, broken, and bent food is properly discarded, which is extremely important for food production chains. Such sorting and quality controls are often performed with consolidated instruments, since simple systems are easier to maintain and validate, and they speed up processing in terms of production-line speed and products per second. Moreover, industries often lack the advanced training required for more sophisticated solutions. As a result, the sorting task for many food products is done mainly using color information alone: sorting machines typically detect the color response of products to specific LEDs with various light wavelengths. Unfortunately, a color check is often not enough to detect some very common defects. The shape of a product, instead, reveals many important defects and is highly reliable in detecting foreign objects mixed with food. Shape can also be used to take detailed measurements of a product, such as its area, length, width, and anisotropy. This paper proposes a complete treatment of the problem of sorting food by its shape. It addresses real-world concerns such as accuracy, execution time, and latency, and it provides an overview of a full system used on state-of-the-art measurement machines.

2021 ◽  
Vol 17 (4) ◽  
pp. 1-20
Author(s):  
Serena Wang ◽  
Maya Gupta ◽  
Seungil You

Given a classifier ensemble and a dataset, many examples can be confidently and accurately classified after only a subset of the base models in the ensemble has been evaluated. Dynamically deciding to classify early can reduce both mean latency and CPU usage without harming the accuracy of the original ensemble. To achieve such gains, we propose jointly optimizing the evaluation order of the base models and the early-stopping thresholds. The proposed objective is a combinatorial optimization problem, but we provide a greedy algorithm that achieves a 4-approximation of the optimal solution under certain assumptions, which is also the best achievable polynomial-time approximation bound. Experiments on benchmark and real-world problems show that the proposed Quit When You Can (QWYC) algorithm can speed up average evaluation time by 1.8–2.7 times even on jointly trained ensembles, which are more difficult to speed up than independently or sequentially trained ensembles. QWYC's joint optimization of ordering and thresholds also performed better in experiments than previous fixed orderings, including the ordering of gradient boosted trees.
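The early-exit idea can be sketched in a few lines. This is a minimal illustration of stopping ensemble evaluation once a partial score clears per-step thresholds, in the spirit of QWYC; the toy models, the additive score aggregation, and the threshold values are illustrative assumptions, not the paper's exact formulation.

```python
def early_exit_predict(models, thresholds, x):
    """Evaluate base models in order; quit as soon as the partial
    score is confidently below/above the exit thresholds."""
    score = 0.0
    for i, model in enumerate(models):
        score += model(x)
        if i < len(thresholds):          # no threshold after the last model
            lo, hi = thresholds[i]
            if score <= lo:              # confidently negative: quit early
                return 0, score
            if score >= hi:              # confidently positive: quit early
                return 1, score
    return (1 if score >= 0 else 0), score   # full-ensemble fallback

# Toy usage: three "base models" as simple callables.
models = [lambda x: x - 1.0, lambda x: 2.0 * x, lambda x: -0.5]
thresholds = [(-3.0, 3.0), (-2.0, 2.0)]
label, s = early_exit_predict(models, thresholds, x=4.0)
```

With `x=4.0` the first model alone pushes the score past the upper threshold, so the remaining two models are never evaluated; the joint choice of ordering and thresholds determines how often that happens.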


2021 ◽  
Author(s):  
Mohammad Shehab ◽  
Laith Abualigah

Abstract The Multi-Verse Optimizer (MVO) algorithm is one of the recent metaheuristic algorithms used to solve various problems in different fields. However, MVO suffers from a lack of diversity, which may cause trapping in local minima and premature convergence. This paper introduces two steps to improve the basic MVO algorithm. The first step uses opposition-based learning (OBL) in MVO, called OMVO. OBL helps to speed up the search and improves the selection of better candidate solutions in basic MVO. The second step, called OMVOD, combines the disruption operator (DO) with OMVO to improve the consistency of the chosen solution by providing a chance to solve the given problem with a high fitness value while also increasing diversity. To test the performance of the proposed models, fifteen CEC 2015 benchmark functions, thirty CEC 2017 benchmark functions, and seven CEC 2011 real-world problems were used in both phases of the enhancement.
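The OBL component can be illustrated on its own: for a candidate x in bounds [lb, ub], its opposite is lb + ub − x, and the fitter of the two is kept. The bounds, fitness function, and population below are illustrative, and the full OMVO/OMVOD update rules are omitted.

```python
import random

def obl_step(population, fitness, lb, ub):
    """For each candidate, form its opposite point and keep the fitter one
    (minimisation): a single opposition-based learning pass."""
    improved = []
    for x in population:
        opposite = [lb + ub - xi for xi in x]   # opposite solution
        improved.append(min(x, opposite, key=fitness))
    return improved

# Toy usage: minimise a shifted sphere function on [-5, 5]^2.
random.seed(0)
shifted = lambda x: sum((xi - 2.0) ** 2 for xi in x)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(4)]
pop = obl_step(pop, shifted, lb=-5.0, ub=5.0)
```

Evaluating both a candidate and its opposite roughly doubles the chance of starting near the optimum, which is why OBL is commonly used to seed or refresh metaheuristic populations.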


2020 ◽  
Vol 34 (05) ◽  
pp. 8457-8463
Author(s):  
Dabiao Ma ◽  
Zhiba Su ◽  
Wenxuan Wang ◽  
Yuhao Lu

End-to-end text-to-speech (TTS) systems can greatly improve the quality of synthesised speech, but they usually suffer from high latency due to their auto-regressive structure, and the synthesised speech may also exhibit error modes such as repeated words, mispronunciations, and skipped words. In this paper, we propose a novel non-autoregressive, fully parallel end-to-end TTS system (FPETS). It utilizes a new alignment model and the recently proposed U-shaped convolutional structure, UFANS. Unlike an RNN, UFANS can capture long-term information in a fully parallel manner. A trainable position encoding and a two-step training strategy are used to learn better alignments. Experimental results show that FPETS exploits the power of parallel computation and achieves a significant inference speed-up compared with state-of-the-art end-to-end TTS systems. More specifically, FPETS is 600X faster than Tacotron2, 50X faster than DCTTS, and 10X faster than Deep Voice3, and it can generate audio of equal or better quality with fewer errors compared with other systems. As far as we know, FPETS is the first end-to-end TTS system that is fully parallel.


2020 ◽  
Vol 9 (5) ◽  
pp. 315
Author(s):  
John Hall ◽  
Lakin Wecker ◽  
Benjamin Ulmer ◽  
Faramarz Samavati

The amount of information collected about the Earth has become extremely large. With this information comes the demand for integration, processing, visualization and distribution of this data so that it can be leveraged to solve real-world problems. To address this issue, a carefully designed information structure is needed that stores all of the information about the Earth in a convenient format such that it can be easily used to solve a wide variety of problems. The idea we explore is to create a Discrete Global Grid System (DGGS) using a Disdyakis Triacontahedron (DT) as the initial polyhedron. We have adapted a simple, closed-form, equal-area projection to reduce distortion and speed up queries. We have derived an efficient, closed-form inverse for this projection that can be used in important DGGS queries. The resulting construction is indexed using an atlas of connectivity maps. Using some simple modular arithmetic, we can then address point-to-cell, neighbourhood and hierarchical queries on the grid, allowing for these queries to be performed in constant time. We have evaluated the angular distortion created by our DGGS by comparing it to a traditional icosahedron DGGS using a similar projection. We demonstrate that our grid reduces angular distortion while allowing for real-time rendering of data across the globe.
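As a loose illustration of why such hierarchical queries run in constant time, consider a generic 1-to-4 triangle refinement with integer cell indices. This is a quadtree-style indexing sketch, not the authors' DT connectivity-map atlas; the index layout is an assumption for illustration only.

```python
def children(cell):
    """The four sub-triangle indices of a cell at the next resolution,
    under a 1-to-4 refinement with contiguous child indices."""
    return [4 * cell + k for k in range(4)]

def parent(cell):
    """Constant-time parent lookup via integer division."""
    return cell // 4

# Toy usage: every child of a cell maps back to that cell.
cell = 13
kids = children(cell)
```

Because both directions are a single arithmetic operation (a multiply-add or an integer division), traversing resolutions never requires searching the grid, which is the essence of constant-time hierarchical addressing.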


Author(s):  
V. J Manzo

In this chapter, we will write a program that randomly generates diatonic pitches at a specified tempo. We will learn how to filter chromatic notes to those of a specific mode by using stored data about scales. By the end of this chapter, you will have created a patch that composes diatonic music with a simple rhythm. We will also learn about adding objects in order to expand the Max language. For this chapter, you will need access to the companion files for this book. New words are added to the English language all the time. If you open a dictionary from the 1950s, you're not going to find commonly used words like ringtone or spyware. There was a need to incorporate these words into the language because they serve a specific function. In the same regard, the Max programming language can also be expanded through the development of external objects created by third parties. Third-party objects are often referenced in journal articles, forum posts on the Cycling '74 website (cycling74.com), and a number of repository websites for Max objects. A brief, and by no means comprehensive, list of repository websites is given at the end of this chapter. Of course, at this point, we're still learning the Max language apart from adding external objects. So why are we discussing external objects at all? Well, the beauty of Max is that you can expand the language easily to include objects that can speed up your process. For example, in Chapter 4, we made scales and chords by following a number of steps. To speed up the process of implementing scales and chords in my own patches, I developed a set of external objects called the Modal Object Library that contains objects for quickly building chords and scales. Since you already know how to build scales and chords in Max, using these objects will speed up our process of using scales and chords in our patches instead of having to copy large chunks of objects from old patches whenever you want to implement a particular scale.
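The chapter builds this filtering graphically in Max; as a language-agnostic sketch of the same idea, here is the chromatic-to-diatonic filter expressed in Python. The stored scale table, MIDI note range, and root note are illustrative assumptions.

```python
import random

# Pitch classes of the major (Ionian) mode, stored as scale data.
MAJOR = {0, 2, 4, 5, 7, 9, 11}

def diatonic_notes(n, root=60, low=48, high=84, seed=1):
    """Generate n random MIDI pitches, keeping only those whose pitch
    class (relative to the root) belongs to the mode."""
    rng = random.Random(seed)
    notes = []
    while len(notes) < n:
        pitch = rng.randint(low, high)
        if (pitch - root) % 12 in MAJOR:   # filter out chromatic notes
            notes.append(pitch)
    return notes

# Toy usage: an 8-note diatonic melody in C major (root = MIDI 60).
melody = diatonic_notes(8)
```

The filtering step is exactly what the patch does with its stored scale data: chromatic candidates are generated freely, and only those matching the mode pass through.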


2020 ◽  
Vol 28 (1) ◽  
pp. 134-149
Author(s):  
Dragan Simić ◽  
Jovana Gajić ◽  
Vladimir Ilin ◽  
Svetislav D Simić ◽  
Svetlana Simić

Abstract A vast number of real-world problems can be associated with multi-criteria decision-making (MCDM). This paper discusses MCDM in the agricultural industry. A hybrid method combining the analytic hierarchy process (AHP), ELECTRE I, and a genetic algorithm is proposed here, and it is shown how such a model can be used as a complete ranking model. The proposed hybrid bio-inspired method is applied to a real-world data set collected from the agricultural industry in Serbia.


2021 ◽  
Vol 6 (1) ◽  
pp. 807-818
Author(s):  
Yohanes Gunawan ◽  
Kukuh Tri Margono ◽  
Romy Rizky ◽  
Nandy Putra ◽  
Rizal Al Faqih ◽  
...  

Abstract The unpredictable weather in Indonesia makes the conventional coffee-bean drying process, which usually uses solar energy as a heat source, less effective. This experiment examined the performance of coffee-bean drying using low-temperature geothermal energy (LTGE) together with solar energy as the energy source. A heat pipe heat exchanger, consisting of 42 straight heat pipes in a staggered configuration, was used to extract the LTGE. The heat pipes are 700 mm long with a 10 mm outside diameter and a filling ratio of 50%, and are fitted with 181 aluminum fins measuring 76 mm × 345 mm × 0.105 mm. LTGE was simulated using water heated by three heaters and circulated by a pump. Meanwhile, to simulate the conventional drying process, a solar air collector made of 20 mm thick polyurethane sheets, with length × width × height of 160 cm × 76 cm × 10 cm, was used in this study. A 0.3 mm thick galvalume zinc sheet coated matte black was installed throughout the inner wall of the container. The results showed that drying with both LTGE and solar energy is faster than with solar or geothermal energy alone; the hybrid system shortens the coffee-bean drying time by about 23% compared with solar energy only.


Author(s):  
W. Bernard

In comparison to many other fields of ultrastructural research in cell biology, the successful exploration of genes and gene activity with the electron microscope in higher organisms is a late conquest. Nucleic acid molecules of prokaryotes could be successfully visualized as early as the beginning of the sixties, thanks to the Kleinschmidt spreading technique, and much basic information was obtained concerning the shape, length, and molecular weight of viral, mitochondrial, and chloroplast nucleic acids. Later, additional methods revealed denaturation profiles and the distinction between single- and double-strandedness, and the use of heteroduplexes led to gene mapping of relatively simple systems, carried out in close connection with other methods of molecular genetics.


Author(s):  
Brian Cross

A relatively new entry in the field of microscopy is the Scanning X-Ray Fluorescence Microscope (SXRFM). Using this type of instrument (e.g. the Kevex Omicron X-ray Microprobe), one can obtain multiple elemental x-ray images from the analysis of materials which show heterogeneity. The SXRFM obtains images by collimating an x-ray beam (e.g. 100 μm diameter) and then scanning the sample with a high-speed x-y stage. To speed up image acquisition, data are acquired "on-the-fly" by slew-scanning the stage along the x-axis, like a TV or SEM scan. To reduce the overhead from "fly-back," the images can be acquired by bi-directional scanning of the x-axis, which results in very little overhead from re-positioning the sample stage. The image acquisition rate is dominated by the x-ray acquisition rate; therefore, the total x-ray image acquisition rate of the SXRFM is very comparable to that of an SEM. Although the x-ray spatial resolution of the SXRFM is worse than that of an SEM (say 100 vs. 2 μm), there are several other advantages.
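The bi-directional scan pattern described above can be sketched in a few lines: odd rows are traversed right-to-left so the stage never flies back to the start of a row. The grid dimensions below are illustrative.

```python
def serpentine_scan(rows, cols):
    """Yield (row, col) stage positions with alternating x direction,
    eliminating fly-back repositioning between rows."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

# Toy usage: a 2 x 3 grid, row 0 left-to-right, row 1 right-to-left.
path = list(serpentine_scan(2, 3))
```

Each position in the path is adjacent to the previous one, so the only stage motion between x-ray acquisitions is a single step, which is why the acquisition rate ends up dominated by x-ray counting rather than stage movement.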


Author(s):  
A. G. Jackson ◽  
M. Rowe

Diffraction intensities from intermetallic compounds are, in the kinematic approximation, proportional to the scattering amplitude of the element doing the scattering. More detailed calculations have shown that site symmetry and occupation by various atom species also affect the intensity in a diffracted beam [1]. Hence, by measuring the intensities of beams, or their ratios, the occupancy can be estimated. Measurement of the intensity values also allows structure calculations to be made to determine the spatial distribution of the potentials doing the scattering. Thermal effects are also present as a background contribution. Inelastic effects such as loss or absorption/excitation complicate the intensity behavior, and dynamical theory is required to estimate the intensity value. The dynamic range of currents in diffracted beams can be 10^4 or 10^5:1. Hence, detection of such information requires a means of collecting the intensity over a signal-to-noise range beyond that obtainable with a single film plate, which has a S/N of about 10^3:1. Although such a collection system is not available currently, a simple system consisting of instrumentation on an existing STEM can be used as a proof of concept; it has a S/N of about 255:1, limited by the 8-bit pixel attributes used in the electronics. Use of 24-bit pixel attributes would easily allow the desired noise range to be attained in the processing instrumentation. The S/N of the scintillator used by the photoelectron sensor is about 10^6:1, well beyond the S/N goal. The trade-off that must be made is the time for acquiring the signal, since the pattern can be obtained in seconds using film plates, compared to 10 to 20 minutes for a pattern acquired using the digital scan. Parallel acquisition would, of course, speed up this process immensely.
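The bit-depth figures quoted above follow directly from the number of distinct levels an n-bit pixel can represent; a quick check:

```python
def dynamic_range(bits):
    """Largest value representable in an unsigned n-bit pixel,
    i.e. the best-case dynamic range relative to one count."""
    return 2 ** bits - 1

r8 = dynamic_range(8)     # the 8-bit STEM proof of concept: 255:1
r24 = dynamic_range(24)   # 24-bit attributes: ~1.7e7:1
```

So 8-bit pixels cap the S/N at 255:1, while 24-bit pixels span well beyond the 10^5:1 dynamic range of the diffracted beams.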

