Distortion reconstruction in S-ducts from wall static pressure measurements

2018 ◽  
Vol 28 (5) ◽  
pp. 1134-1155 ◽  
Author(s):  
Pierre Grenson ◽  
Eric Garnier

Purpose This paper reports attempts to predict "on-the-fly" flow distortion in the engine entrance plane of a highly curved S-duct from wall static pressure measurements. Such a technology would be indispensable for triggering active flow control devices that mitigate the intense flow separations occurring in specific flight conditions. Design/methodology/approach Different reconstruction algorithms are evaluated on the basis of data extracted from a Zonal Detached Eddy Simulation (ZDES) of a well-documented S-duct (Garnier et al., AIAA J., 2015). Contrary to RANS methods, such a hybrid approach makes unsteady distortions available, which is information necessary for assessing reconstruction algorithms. Findings The best reconstruction accuracy is obtained with the artificial neural network (ANN), but the improvement over the classical linear stochastic estimation (LSE) is minor. The different inlet distortion coefficients are not reconstructed with the same accuracy. The KA2 coefficient is identified as the most suitable for activating the control device. Originality/value LSE and its second-order variant (quadratic stochastic estimation [QSE]) are applied to reconstruct instantaneous stagnation pressure in the flow field. The potential improvement of an ANN-based algorithm is also evaluated. The statistical link between the wall sensors and the 40-Kulite rake sensors is carefully discussed, and the accuracy of the reconstruction of the most commonly used distortion coefficients (DC60, RDI, CDI and KA2) is quantified for each estimation technique.
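The linear stochastic estimation the abstract benchmarks amounts to a least-squares linear map from wall sensors to rake probes. A minimal sketch follows, with synthetic data: the snapshot count, the number of wall sensors and the sensor layout are illustrative assumptions, and only the 40-probe rake count comes from the text.

```python
import numpy as np

# Illustrative linear stochastic estimation (LSE): learn a linear map from
# wall static pressure sensors to rake stagnation pressures by least squares.
# Sensor counts and data are synthetic; only the 40-probe rake follows the text.
rng = np.random.default_rng(0)
n_snapshots, n_wall, n_rake = 500, 8, 40

true_map = rng.normal(size=(n_wall, n_rake))
p_wall = rng.normal(size=(n_snapshots, n_wall))                 # wall sensors
p_rake = p_wall @ true_map + 0.1 * rng.normal(size=(n_snapshots, n_rake))

# LSE coefficients minimize ||p_wall @ A - p_rake||_F^2 over the training snapshots
A, *_ = np.linalg.lstsq(p_wall, p_rake, rcond=None)

# "On-the-fly" reconstruction of the rake plane from a new wall measurement
p_new = rng.normal(size=(1, n_wall))
p_est = p_new @ A
print(p_est.shape)  # (1, 40)
```

A QSE variant would simply augment `p_wall` with its pairwise products before the same least-squares solve.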

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Johan Economou Lundeberg ◽  
Jenny Oddstig ◽  
Ulrika Bitzén ◽  
Elin Trägårdh

Abstract Background Lung cancer is one of the most common cancers in the world. Early detection and correct staging are fundamental for treatment and prognosis. Positron emission tomography with computed tomography (PET/CT) is recommended clinically. Silicon (Si) photomultiplier (PM)-based PET technology and new reconstruction algorithms are expected to increase the detection of small lesions and enable earlier detection of pathologies, including metastatic spread. The aim of this study was to compare the diagnostic performance of a SiPM-based PET/CT (including a new block-sequential regularized expectation maximization (BSREM) reconstruction algorithm) with a conventional PM-based PET/CT using a conventional ordered subset expectation maximization (OSEM) reconstruction algorithm. The focus was patients admitted for 18F-fluorodeoxyglucose (FDG) PET/CT for initial diagnosis and staging of suspected lung cancer. Patients were scanned on both a SiPM-based PET/CT (Discovery MI; GE Healthcare, Milwaukee, MI, USA) and a PM-based PET/CT (Discovery 690; GE Healthcare, Milwaukee, MI, USA). Standardized uptake values (SUV) and image interpretation were compared between the two systems. Image interpretations were further compared with histopathology when available. Results Seventeen patients referred for suspected lung cancer were included in our single-injection, dual-imaging study. No statistically significant differences in SUVmax of suspected malignant primary tumours were found between the two PET/CT systems. SUVmax in suspected malignant intrathoracic lymph nodes was 10% higher on the SiPM-based system (p = 0.026). Good consistency between the PET/CT systems (14/17 cases) was found when comparing simplified TNM staging. The available histology results did not reveal any obvious differences between the systems.
Conclusion In a clinical setting, the new SiPM-based PET/CT system with a new BSREM reconstruction algorithm provided a higher SUVmax for suspected lymph node metastases compared to the PM-based system. However, no improvement in lung cancer detection was seen.


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 164
Author(s):  
Dongxu Wu ◽  
Fusheng Liang ◽  
Chengwei Kang ◽  
Fengzhou Fang

Optical interferometry plays an important role in topographical surface measurement and characterization in precision/ultra-precision manufacturing. An appropriate surface reconstruction algorithm is essential for obtaining accurate topography information from the digitized interferograms. However, the performance of a surface reconstruction algorithm in interferometric measurements is influenced by environmental disturbances and system noise. This paper presents a comparative analysis of three algorithms commonly used for coherence envelope detection in vertical scanning interferometry: the centroid method, the fast Fourier transform (FFT) and the Hilbert transform (HT). Numerical analysis and experimental studies were carried out to evaluate the performance of the different envelope detection algorithms in terms of measurement accuracy, speed and noise resistance. Step height standards were measured using a developed interferometer, and the step profiles were reconstructed by the different algorithms. The results show that the centroid method has a higher measurement speed than the FFT and HT methods but provides acceptable measurement accuracy only at low noise levels. The FFT and HT methods outperform the centroid method in terms of noise immunity and measurement accuracy. Although the FFT and HT methods provide similar measurement accuracy, the HT method offers a superior measurement speed.
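The Hilbert-transform envelope detection compared above can be sketched on a simulated white-light interferogram. The fringe period and coherence length below are arbitrary illustration values, not the paper's instrument parameters; the analytic signal is built directly with an FFT.

```python
import numpy as np

# Hilbert-transform (HT) envelope detection on a simulated white-light
# interferogram; fringe period and coherence length are illustrative only.
z = np.linspace(-5.0, 5.0, 1024)                  # scan positions (a.u.)
envelope_true = np.exp(-(z / 1.5) ** 2)           # Gaussian coherence envelope
interferogram = envelope_true * np.cos(2 * np.pi * z / 0.6)

# Analytic signal via FFT: keep DC and Nyquist, double positive frequencies,
# zero negative frequencies; its magnitude is the coherence envelope.
spec = np.fft.fft(interferogram)
weights = np.zeros(len(z))
weights[0] = weights[len(z) // 2] = 1.0
weights[1 : len(z) // 2] = 2.0
envelope_est = np.abs(np.fft.ifft(spec * weights))

# The envelope peak locates the surface height at this pixel.
peak_position = z[np.argmax(envelope_est)]
```

The FFT method differs mainly in filtering around the carrier frequency before inverse transforming, while the centroid method skips the transform entirely, which is why it is faster but noisier.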


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Robert Peter Reimer ◽  
Konstantin Klein ◽  
Miriam Rinneburger ◽  
David Zopfs ◽  
Simon Lennartz ◽  
...  

Abstract Computed tomography in suspected urolithiasis provides information about the presence, location and size of stones. Stone size in particular is a key parameter in treatment decisions; however, data on the impact of reformation and measurement strategies are sparse. This study aimed to investigate the influence of different image reformations, slice thicknesses and window settings on stone size measurements. Reference sizes of 47 kidney stones, representative of clinically encountered compositions, were measured manually using a digital caliper (Man-M). Afterwards, the stones were placed in a 3D-printed, semi-anthropomorphic phantom and scanned using a low-dose protocol (CTDIvol 2 mGy). Images were reconstructed using hybrid-iterative and model-based iterative reconstruction algorithms (HIR, MBIR) with different slice thicknesses. Two independent readers measured the largest stone diameter on axial images (2 mm and 5 mm) and multiplanar reformations (based upon 0.67 mm reconstructions) using different window settings (soft tissue and bone). Statistics were conducted using ANOVA with correction for multiple comparisons. Overall, stone size in CT was underestimated compared to Man-M (8.8 ± 2.9 vs. 7.7 ± 2.7 mm, p < 0.05), yet closely correlated (r = 0.70). Reconstruction algorithm and slice thickness did not significantly impact measurements (p > 0.05), while image reformation and window settings did (p < 0.05). CT measurements using multiplanar reformation with a bone window setting showed the closest agreement with Man-M (8.7 ± 3.1 vs. 8.8 ± 2.9 mm, p < 0.05, r = 0.83). Manual CT-based stone size measurements are most accurate on multiplanar image reformations with a bone window setting, while measurements on axial planes with different slice thicknesses underestimate true stone size. This procedure is therefore recommended when measurements impact treatment decisions.


AIAA Journal ◽  
2017 ◽  
Vol 55 (6) ◽  
pp. 1893-1908 ◽  
Author(s):  
Daniel Gil-Prieto ◽  
David G. MacManus ◽  
Pavlos K. Zachos ◽  
Geoffrey Tanguy ◽  
François Wilson ◽  
...  

2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. We have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
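The two ingredients named above, soft-threshold filtering and the momentum factor of the fast iterative shrinkage thresholding algorithm (FISTA), can be sketched in isolation; the threshold value and the toy coefficient vector are illustrative, not taken from the paper.

```python
import numpy as np

# Soft-threshold filtering (STF): the shrinkage operator at the core of many
# CS-based reconstruction updates. Threshold and data are illustrative.
def soft_threshold(x, t):
    # Shrink every coefficient toward zero by t; magnitudes below t become zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

coeffs = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
shrunk = soft_threshold(coeffs, 0.5)    # [-1.0, 0.0, 0.0, 0.0, 1.5]

# FISTA-style momentum factor used to accelerate such iterations:
# t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2, starting from t_1 = 1.
t_k = 1.0
t_next = (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2)) / 2.0
```

In a full reconstruction loop, the shrinkage step would be applied to the image differences after each ordered-subsets transmission update, with the momentum factor blending consecutive iterates.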


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jano Jiménez-Barreto ◽  
Natalia Rubio ◽  
Sebastian Molinillo

Purpose Drawing on self-determination theory, assemblage theory and the customer experience literature, this paper aims to develop a framework for understanding motivational customer experiences with chatbots. Design/methodology/approach This paper uses a multimethod approach to examine the interaction between individuals and airlines’ chatbots. Three components of self-determined interaction with the chatbot (competence, autonomy and relatedness) and five components of the customer–chatbot experience (sensory, intellectual, affective, behavioral and social) are analyzed qualitatively and quantitatively. Findings The findings confirm the direct influence of self-determined interaction on customer experience and the direct effects of these two constructs on participants’ attitudes toward and satisfaction with the chatbot. The model also supports the mediating roles of customer experience and attitude toward the chatbot. Practical implications This paper offers managers a broad understanding of individuals’ interactions with chatbots through three elements: motivation to use chatbots, experiential responses and individuals’ valuation of whether the interactions have amplified (or limited) the outcomes obtained from the experience. Originality/value This paper contributes to the hospitality and tourism literature with a hybrid approach that reflects current theoretical developments regarding human- and interaction-centric interpretations of the customer experience with chatbots.


Kybernetes ◽  
2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Naurin Farooq Khan ◽  
Naveed Ikram ◽  
Hajra Murtaza ◽  
Muhammad Aslam Asadi

Purpose This study aims to investigate cybersecurity awareness, manifested as protective behavior, to explain self-disclosure on social networking sites. The disclosure of information about oneself is associated with benefits as well as privacy risks. Individuals self-disclose to gain social capital and display protective behaviors to evade privacy risks through careful cost–benefit calculation of disclosing information. Design/methodology/approach This study explores the role of cyber protection behavior in predicting self-disclosure, along with demographic (age and gender) and digital divide (frequency of Internet access) variables, through a face-to-face survey. Data were collected from 284 participants. The model is validated using multiple hierarchical regression along with an artificial intelligence approach. Findings The results revealed that cyber protection behavior significantly explains the variance in self-disclosure behavior. The complementary use of five machine learning (ML) algorithms further validated the model. The ML algorithms predicted self-disclosure with an area under the curve of 0.74 and an F1 measure of 0.70. Practical implications The findings suggest that costs associated with self-disclosure can be mitigated by educating individuals to heighten their cybersecurity awareness through cybersecurity training programs. Originality/value This study uses a hybrid approach to assess the influence of cyber protection behavior on self-disclosure using expectancy valence theory (EVT).
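The two evaluation metrics reported above, area under the ROC curve and F1, can be computed from scratch on toy predictions. The labels and scores below are made up for illustration; the study's actual features and classifiers are not reproduced.

```python
# Toy binary labels and classifier scores (illustrative values only).
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.3, 0.8, 0.4, 0.2, 0.6, 0.7, 0.1]

# ROC AUC via the rank (Mann-Whitney) formulation: the fraction of
# positive/negative pairs the classifier orders correctly, ties counted half.
pos = [s for s, y in zip(y_score, y_true) if y == 1]
neg = [s for s, y in zip(y_score, y_true) if y == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

# F1 at a 0.5 decision threshold: harmonic mean of precision and recall.
y_pred = [int(s >= 0.5) for s in y_score]
tp = sum(p and t for p, t in zip(y_pred, y_true))
fp = sum(p and not t for p, t in zip(y_pred, y_true))
fn = sum(not p and t for p, t in zip(y_pred, y_true))
f1 = 2 * tp / (2 * tp + fp + fn)
print(auc, f1)  # 0.9375 0.75
```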


2019 ◽  
Vol 37 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Sudarsana Desul ◽  
Madurai Meenachi N. ◽  
Thejas Venkatesh ◽  
Vijitha Gunta ◽  
Gowtham R. ◽  
...  

Purpose The ontology of a domain mainly consists of a set of concepts and their semantic relations. It is typically constructed and maintained using ontology editors with substantial human intervention. It is desirable to perform this task automatically, which has led to the development of ontology learning techniques. One of the main challenges of ontology learning from text is to identify key concepts in the documents. A wide range of techniques for key concept extraction has been proposed, but they suffer from low accuracy, poor performance, limited flexibility and applicability only to specific domains. The purpose of this study is to explore a new method to extract key concepts and to apply it to literature in the nuclear domain. Design/methodology/approach In this article, a novel method for key concept extraction is proposed and applied to documents from the nuclear domain. A hybrid approach was used, combining domain knowledge, syntactic and named-entity knowledge, and statistics-based methods. The performance of the developed method was evaluated against ground truth obtained by two-out-of-three voting among three domain experts, using 120 documents retrieved from the SCOPUS database. Findings The work reported pertains to extracting concepts from a set of selected documents and aids the search for documents relating to given concepts. The results of a case study indicated that the developed method demonstrates better metrics than Text2Onto and CFinder. The method described is capable of extracting valid key concepts from a set of candidates with long phrases. Research limitations/implications The present study is restricted to literature in the English language and applied to documents from the nuclear domain. It has the potential to be extended to other domains. Practical implications The work carried out in the current study has the potential to lead to updates of the International Nuclear Information System thesaurus for ontology in the nuclear domain. This can lead to efficient search methods. Originality/value This work is the first attempt to automatically extract key concepts from nuclear documents. The proposed approach addresses most of the problems existing in current methods and thereby improves performance.
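The two-out-of-three expert voting used above to build the evaluation ground truth is simple to sketch; the candidate terms and expert votes below are made up for illustration.

```python
# Illustrative 2-of-3 expert voting for key-concept ground truth;
# the candidate terms and the expert annotations are hypothetical.
def majority_vote(votes):
    # A candidate counts as a key concept if at least 2 of the 3 experts agree.
    return sum(votes) >= 2

annotations = {
    "neutron flux":    (1, 1, 0),
    "reactor coolant": (1, 1, 1),
    "annual report":   (0, 1, 0),
}
key_concepts = [term for term, votes in annotations.items() if majority_vote(votes)]
print(key_concepts)  # ['neutron flux', 'reactor coolant']
```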

