processing efficiency
Recently Published Documents

TOTAL DOCUMENTS: 870 (FIVE YEARS: 407)
H-INDEX: 40 (FIVE YEARS: 10)

Meat Science, 2022, Vol 184, pp. 108675. Author(s): Elaine M. LaRoche, Wan Jun Wu, Patricia Garcia, Baohui Song, Colin K.Y. Chun, ...

PLoS ONE, 2022, Vol 17 (1), pp. e0261172. Author(s): Erika Wauthia, Fabien D’Hondt, Wivine Blekic, Laurent Lefebvre, Laurence Ris, ...

Background. Cognitive models indicate that social anxiety disorder (SAD) is caused and maintained by biased attentional processing of threatening information. This study investigated whether socially anxious children present impaired attentional engagement with, and disengagement from, negative emotional faces, as well as the underlying event-related potential responses. Methods and findings. Fifteen children with high levels of social anxiety (HSA; 9 boys; mean age = 9.99 y; SD = 1.14) and twenty children with low social anxiety (LSA; 16 boys; mean age = 10.47 y; SD = 1.17) performed a spatial cueing task in which they had to detect targets following neutral/disgusted faces in a valid or invalid location. No group effect was found on reaction times (p > .05). However, electrophysiological data showed lower P3a amplitude in HSA children than in the LSA group when processing facial stimuli. HSA children also showed larger N2 amplitudes for valid-disgusted targets and a larger P3a amplitude for invalid-disgusted ones. Conclusion. At the electrophysiological level, our results support the hypothesis of attentional disengagement difficulties in SAD children. They also support the idea that high levels of social anxiety are associated with cognitive control impairments and affect processing efficiency more than performance effectiveness.
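For readers unfamiliar with the measures, the N2 and P3a values discussed above are typically obtained as mean voltages within a latency window of stimulus-locked EEG epochs. A minimal NumPy sketch of that computation; the sampling rate, windows, and random placeholder data are assumptions for illustration, not the study's recordings:

```python
import numpy as np

fs = 500       # sampling rate in Hz (assumed)
t0 = -0.2      # epoch start relative to stimulus onset, in s (assumed)

def mean_amplitude(epochs, win_start, win_end):
    """Average voltage across trials within a latency window (seconds)."""
    i0 = int((win_start - t0) * fs)
    i1 = int((win_end - t0) * fs)
    return epochs[:, i0:i1].mean()

# Placeholder epochs: 40 trials x 1 s at one electrode (random noise).
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 1.0, size=(40, 500))

n2 = mean_amplitude(epochs, 0.20, 0.35)    # assumed N2 window
p3a = mean_amplitude(epochs, 0.30, 0.45)   # assumed P3a window
print(round(float(n2), 3), round(float(p3a), 3))
```

In practice the latency windows and electrodes are fixed a priori or chosen from the grand-average waveform before group comparisons.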


2022, Vol 51 (4), pp. 849-858. Author(s): Liubov Plotnikova, Igor Plotnikov, Pavel Ivanov, Andrey Semenov, Irina Plotnikova, ...

Introduction. Products containing natural extracts are in great demand. However, poor production technologies make them too expensive to satisfy consumer demand. As a result, a variety of intensification methods have been developed to increase the economic efficiency of extraction, e.g., low-frequency mechanical vibrations. However, frozen raw materials have to be processed at low temperatures, which makes the method less efficient. The research objective was to intensify extraction from frozen berries in a vibration tray device by increasing the temperature of the system of interacting phases. Study objects and methods. The research involved frozen cranberries and blueberries, which grow everywhere in Western Siberia and are rich in vitamins and minerals. The berries were subjected to slow freezing at –18°C, which destroyed the cell structure and increased the processing efficiency. The study was carried out in a lab device with a vibrating tray; all parameters were measured by standard methods. Results and discussion. The extraction device was equipped with a jacket into which a coolant was fed, i.e., water at 55°C. A preliminary series of experiments revealed two negative effects of supplying coolant into the jacket: first, the surface layer started to thaw, which reduced the efficiency of grinding; second, the processing time increased. A new method was developed to solve these problems: the coolant was supplied at the end of the grinding, with the timing of the supply depending on the type of raw material. The processes inside the device depended on two factors: the frequency of vibrations of the tray and the diameter of the holes in the tray. These factors could be adjusted to intensify the process, but they increased the power costs and energy consumption. A series of experiments determined the optimal values of these parameters, and a mathematical analysis yielded regression equations describing how the main parameters affected the destruction time and power costs. The established optimal process parameters made it possible to determine the minimal time of the destruction process: 2.5 min for cranberries and 1.5 min for blueberries. The minimal power consumption was 17.8 W for cranberries and 11.7 W for blueberries. Conclusion. The research increased the economic efficiency of the technological process of natural extraction, which can reduce the cost of the finished product and increase its availability. The values of the process parameters can be used to design new similar devices and serve as practical recommendations for berry extraction in vibration tray devices.
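The abstract does not give its regression equations or raw measurements; purely as an illustrative sketch, destruction time can be regressed on tray vibration frequency and hole diameter by ordinary least squares. Every data point and unit below is an invented placeholder:

```python
import numpy as np

# Invented placeholder data: vibration frequency (Hz), hole diameter (mm),
# destruction time (min). Not the study's measurements.
f = np.array([10.0, 10.0, 15.0, 15.0, 20.0, 20.0])
d = np.array([2.0, 4.0, 2.0, 4.0, 2.0, 4.0])
t = np.array([4.0, 3.6, 3.0, 2.6, 2.7, 2.4])

# Design matrix for the regression t ≈ b0 + b1*f + b2*d + b3*f*d
X = np.column_stack([np.ones_like(f), f, d, f * d])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)

def predict(freq, diam):
    """Predicted destruction time at a given frequency and hole diameter."""
    return coef @ np.array([1.0, freq, diam, freq * diam])

print(predict(20.0, 4.0))
```

With equations of this form, the minimum over the feasible parameter range can then be located by evaluating the fitted surface, which is the role the abstract assigns to its optimal-parameter analysis.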


2022. Author(s): Virginia A. Marchman, Melanie Ashland, Elizabeth C. Loi, Kat Adams Shannon, Mónica Munévar, ...

Associations between children’s early language processing efficiency and later language, literacy, and non-verbal outcomes shed light on the extent to which early information processing skills support later learning across domains. Examining whether the strengths of associations are similar in typically developing and at-risk populations provides an additional lens into the varying routes to learning that children take across development. We compared patterns of associations between early language processing efficiency (accuracy and reaction time) in the looking-while-listening (LWL) task and school-relevant skills in children born full-term (FT) and preterm (PT). Participants (n = 94; 49 FT, 45 PT) were assessed in the LWL task at 18 months (corrected for degree of prematurity in the PT group) and on standardized tests of expressive language, pre-literacy (print knowledge and phonological awareness), and non-verbal IQ at 4½ years. Early language processing efficiency was associated with later language and pre-literacy outcomes (R² change ranged from 7.1 to 19.8, p < 0.01) to a similar extent in PT and FT children, controlling for age at test and SES, suggesting similar mechanisms of learning in these domains for PT and FT children. However, birth group moderated the association between reaction time and non-verbal IQ (R² change = 4.5, p < 0.05), such that an association was found in the PT but not the FT group. This finding suggests that the information processing skills reflected in the efficiency of real-time language processing may be recruited to support learning in a broader range of domains in the PT group than in the FT group.
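The R² change statistic reported above comes from hierarchical regression: fit a baseline model with the covariates (age at test, SES), then add the predictor of interest (e.g., LWL reaction time) and take the difference in R². A toy NumPy sketch on synthetic data; the variable scales and effect sizes are assumptions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 94
age = rng.normal(54, 2, n)      # age at test in months (assumed scale)
ses = rng.normal(0, 1, n)       # standardized SES (assumed)
rt = rng.normal(800, 120, n)    # LWL reaction time in ms (assumed scale)
outcome = 0.3 * ses - 0.01 * rt + rng.normal(0, 1, n)  # synthetic outcome

def r_squared(predictors, y):
    """R² of an OLS fit with intercept, given a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([age, ses], outcome)        # covariates only
r2_full = r_squared([age, ses, rt], outcome)    # covariates + predictor
print(f"R2 change: {r2_full - r2_base:.3f}")
```

Because the baseline model is nested in the full model, the R² change is non-negative; its significance is assessed with an F-test in standard practice.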


2022. Author(s): Virginia A. Marchman, Melanie Ashland, Elizabeth C. Loi, Mónica Munévar, Kat Adams Shannon, ...

• Associations between early language processing efficiency in toddlerhood and later standardized test performance inform the extent to which information processing skills support learning across domains.
• Comparing patterns of associations in children from different clinical groups (e.g., children born full term and preterm) further informs whether neurobiological risk alters developmental pathways.
• Early language processing efficiency was associated with language and pre-literacy outcomes to a similar extent for preterm and full term children, suggesting similar underlying mechanisms.
• The association between processing speed and non-verbal IQ differed by group; processing speed supports learning in a broader range of domains in preterm than in full term children.


2022, Vol 12. Author(s): Sietske van Viersen, Athanassios Protopapas, Peter F. de Jong

In this study, we investigated how word- and text-level processes contribute to different types of reading fluency measures, aiming to increase our understanding of the underlying processes necessary for fluent reading. The sample included 73 Dutch Grade 3 children, who were assessed on serial word reading rate (familiar words), word-list reading fluency (increasingly difficult words), and sentence reading fluency. Word-level processes were individual word recognition speed (discrete word reading) and sequential processing efficiency (serial digit naming). Text-level processes were receptive vocabulary and syntactic skills. The results showed that word- and text-level processes combined accounted for a comparable amount of variance in all fluency outcomes. Both word-level processes were moderate predictors of all fluency outcomes. However, vocabulary only moderately predicted sentence reading fluency, and syntactic skills contributed to sentence reading fluency only indirectly, through vocabulary. The findings indicate that, besides individual word recognition speed, sequential processing efficiency plays a crucial role in reading fluency across measures. Additionally, text-level processes come into play as the complexity and context availability of fluency measures increase, although the exact timing requires further study. Findings are discussed in terms of future directions and their possible value for diagnostic assessment and intervention of reading difficulties.
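The indirect-contribution claim above (syntax acting through vocabulary) is typically quantified with the product-of-coefficients approach: the path from syntax to vocabulary (a) times the path from vocabulary to fluency controlling for syntax (b). A toy sketch on synthetic data; the effect sizes are invented, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 73  # matching the sample size above; the data themselves are synthetic
syntax = rng.normal(0, 1, n)
vocab = 0.6 * syntax + rng.normal(0, 1, n)      # assumed path a = 0.6
fluency = 0.5 * vocab + rng.normal(0, 1, n)     # assumed path b = 0.5

def ols(predictors, y):
    """OLS coefficients with intercept first, given a list of predictors."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols([syntax], vocab)[1]            # syntax -> vocabulary
b = ols([vocab, syntax], fluency)[1]   # vocabulary -> fluency, syntax controlled
print(f"indirect effect (a*b): {a * b:.3f}")
```

In applied work the a*b product is usually accompanied by a bootstrap confidence interval rather than a point estimate alone.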


Author(s): Rui Zhang

Current translation quality evaluation systems rely on a combination of manual review and text comparison, which is inefficient and produces large evaluation errors. To address these defects, a Japanese translation quality evaluation system based on a deep neural network algorithm is designed. To improve the system's processing efficiency, the USB 3.0 communication module of the hardware system is optimized. On top of the hardware design, a reference translation map is used to extend the reference translations of the Japanese text. Evaluation indexes for over- and under-translation are set, and Japanese translation quality is evaluated after the parameters are determined by training the deep neural network on the sample set. Functional tests show that the system's average data transmission and processing time improves by about 31.27%, the evaluation error interval is smaller, and the evaluation is more reliable.
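The paper's exact over- and under-translation indexes are not specified in the abstract; one simple possibility, shown here purely as an assumed illustration, scores token coverage of the candidate translation against a reference: under-translation as the fraction of reference tokens left unmatched, over-translation as the fraction of candidate tokens unsupported by the reference.

```python
from collections import Counter

def coverage_indexes(candidate, reference):
    """Assumed toy metrics: (over-translation, under-translation) rates."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    matched = sum((cand & ref).values())   # multiset intersection of tokens
    under = 1 - matched / max(sum(ref.values()), 1)
    over = 1 - matched / max(sum(cand.values()), 1)
    return over, under

# The repeated "sat" is over-translated; "on the mat" is under-translated.
over, under = coverage_indexes("the cat sat sat", "the cat sat on the mat")
print(over, under)  # → 0.25 0.5
```

Counter's `&` operator takes the minimum count per token, so a word repeated in the candidate beyond its reference count is penalized as over-translation.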


Sensors, 2022, Vol 22 (2), pp. 471. Author(s): Piotr Perek, Aleksander Mielczarek, Dariusz Makowski

In recent years, cinematography and other digital content creators have been eagerly turning to Three-Dimensional (3D) imaging technology. The creators of movies, games, and augmented reality applications are aware of this technology's advantages, possibilities, and new means of expression. The development of electronic and IT technologies enables ever-higher quality of the recorded 3D image and many possibilities for its correction and modification in post-production. However, preparing a correct 3D image that does not cause perception problems for the viewer is still a complex and demanding task. Therefore, planning and then ensuring the correct parameters and quality of the recorded 3D video is essential. Despite better post-production techniques, fixing errors in a captured image can be difficult, time-consuming, and sometimes impossible. Detecting errors typical for stereo vision related to the depth of the image (e.g., depth budget violation, stereoscopic window violation) during recording allows for their correction already on the film set, e.g., by different scene layouts and/or different camera configurations. The paper presents a prototype of an independent, non-invasive diagnostic system that supports the film crew in calibrating stereoscopic cameras, as well as in analysing 3D depth while working on a film set. The system acquires full HD video streams from professional cameras using Serial Digital Interface (SDI), synchronises them, and estimates and analyses the disparity map. Objective depth analysis using computer tools while recording scenes allows stereographers to immediately spot errors in the 3D image, primarily related to violation of the viewing comfort zone. The paper also describes an efficient method of analysing a 3D video using a Graphics Processing Unit (GPU). The main steps of the proposed solution are uncalibrated rectification and disparity map estimation. The algorithms selected and implemented for this system do not require knowledge of intrinsic and extrinsic camera parameters. Thus, they can be used in non-cooperative environments, such as a film set, where the camera configuration often changes. Both are implemented on a GPU to improve data processing efficiency. The paper presents the evaluation of the algorithms' accuracy, as well as a comparison of the performance of two implementations, with and without GPU acceleration. The described GPU-based method makes the system efficient and easy to use: it can process a full HD video stream at a speed of several frames per second.
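The abstract does not detail the GPU-accelerated rectification and disparity algorithms; the toy CPU sketch below illustrates only the core idea of disparity estimation by block matching along rectified scanlines with a sum-of-absolute-differences cost, not the system's actual implementation:

```python
import numpy as np

def disparity_row(left_row, right_row, block=3, max_disp=8):
    """Per-pixel disparity for one rectified scanline pair (SAD matching)."""
    w = len(left_row)
    disp = np.zeros(w, dtype=int)
    for x in range(block, w - block):
        patch = left_row[x - block:x + block + 1]
        best, best_cost = 0, np.inf
        for d in range(0, min(max_disp, x - block) + 1):
            cand = right_row[x - d - block:x - d + block + 1]
            cost = np.abs(patch - cand).sum()  # sum of absolute differences
            if cost < best_cost:
                best, best_cost = d, cost
        disp[x] = best
    return disp

# A step edge shifted by 4 pixels between the views yields disparity 4.
left = np.r_[np.zeros(16), np.ones(16)].astype(float)
right = np.roll(left, -4)
print(disparity_row(left, right)[16])  # → 4
```

Real systems add sub-pixel refinement, left-right consistency checks, and cost aggregation; on a GPU, the per-pixel cost evaluations parallelize naturally, which is the source of the speedup the paper reports.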


Author(s): Jiatong Meng, Yucheng Chen

Traditional quasi-social relationship type prediction models obtain their results by analyzing and clustering raw data directly, so the predictions are easily disturbed by noisy data, and their low processing efficiency and accuracy become increasingly apparent as the amount of user data grows. To address these problems, this research constructs a prediction model of user quasi-social relationship type based on social media text big data. After pre-processing the collected social media text data, the interference data that would degrade the model's prediction accuracy are removed. The interaction information in the text data is mined based on the principle of similarity calculation, and semantic analysis and sentiment annotation are performed on the information content. On the basis of a BP neural network, a prediction model of the user's quasi-social relationship type is constructed. Performance tests show that the average prediction accuracy of the constructed model is 89.84%, and the model has lower time complexity and higher processing efficiency than other traditional models.
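A "BP neural network" is a multilayer perceptron trained by error backpropagation. A self-contained toy version on an XOR-like synthetic task; the architecture, features, and data are invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-feature inputs with binary labels (XOR pattern).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass: hidden activations
    out = sigmoid(h @ W2 + b2)          # forward pass: predictions
    grad_out = out - y                  # cross-entropy + sigmoid gradient
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= 0.1 * h.T @ grad_out; b2 -= 0.1 * grad_out.sum(0)
    W1 -= 0.1 * X.T @ grad_h;   b1 -= 0.1 * grad_h.sum(0)

print((out.round().ravel() == y.ravel()).mean())  # training accuracy
```

The paper's model would replace the toy features with the mined interaction, semantic, and sentiment features described above.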


2022, Vol 2022, pp. 1-11. Author(s): Jijun Wang, Yi Yuan, Guoxiang Li

This paper studies the processing of digital media images using a diffusion equation, increasing image contrast by stretching or extending the distribution of the image's luminance data to obtain clearer digital media images. The image enhancement algorithm based on nonlinear diffusion filtering adds a velocity term to the diffusion function using a coupled denoising model, which smooths the diffusion of the original model; the interferogram is solved numerically with the help of numerical simulation to verify the denoising effect before and after the model correction. To meet real-time requirements in video surveillance, the paper focuses on optimizing the algorithm implementation, including software pipeline optimization, operation unit balancing, single-instruction-multiple-data optimization, arithmetic operation optimization, and on-chip storage optimization. These optimizations enable the nonlinear diffusion filter-based image enhancement algorithm to achieve high processing efficiency on the C674x DSP, with a processing speed of 25 frames per second for 640 × 480 video images. Finally, the mean saliency value of superpixel blocks is calculated in superpixel units, and the image is segmented into objects and background by combining this with the Otsu threshold segmentation algorithm. The proposed algorithm is tested on several sets of remote sensing images, with the Markov random field model and a fully convolutional network (FCN) algorithm used for comparison. Qualitative and quantitative comparison of the experimental results shows that the proposed algorithm has an obvious practical effect on contrast enhancement of digital media images and has certain practicality and superiority.
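Otsu's method, used in the segmentation step above, picks the grey-level threshold that maximizes between-class variance of the image histogram. A minimal NumPy sketch on a synthetic bimodal image (the superpixel saliency stage and the diffusion filtering are omitted here):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 0-255 threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))     # cumulative mean per threshold
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))      # NaNs at omega = 0 or 1 ignored

# Bimodal toy image: dark background near 50, bright objects near 200.
rng = np.random.default_rng(0)
img = np.clip(np.r_[rng.normal(50, 10, 5000),
                    rng.normal(200, 10, 5000)], 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # lands between the two modes
```

Pixels above the returned threshold are labeled object and the rest background, which is the binary split the abstract combines with superpixel saliency.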

