Smart Luggage System

Author(s):  
Himani Sunariya

Travel has been part of human life since ancient times, and luggage plays a significant role when moving from one place to another. As means of transportation multiplied, traveling short distances and across oceans became easier, but the chances of losing luggage increased as well. To reduce the mishandling of luggage, the authors propose installing a smart tracking system in the luggage so that it can be tracked in real time with the help of a microcontroller system that is wearable and convenient. The proposed system is designed using wireless communication techniques. The luggage tracking system is thus developed to avoid the loss or mishandling of a traveler's baggage, which causes stress for travelers. To further increase security, an RFID locking system is introduced, raising the level of safety by one more degree. The proposed system consists of an Arduino microcontroller connected to the luggage, with a GPS module that provides location details to a GSM module. The retrieved luggage-location data is processed into a cloud database, which helps in finding the location of the luggage in real time. The luggage owner is provided with a user ID and password for tracking the luggage; if the user needs to know the status of their luggage, they can log into their ID and identify the position of the lost baggage. In addition, an RFID system is used to design the locking mechanism, which increases luggage security. This luggage tracking system reports where your bag is and where it has been. The system uses geographic position and time information from the Global Positioning System satellites.
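
To make the data flow concrete, here is a minimal server-side sketch of the workflow described above, assuming the GSM module relays each GPS fix as a comma-separated string and a SQLite table stands in for the cloud database; the message format, field names, and function names are illustrative, not taken from the paper.

```python
# Minimal sketch of the server-side tracking workflow (assumptions:
# SQLite stands in for the cloud database; the GSM module posts fixes
# as "user_id,lat,lon,timestamp" strings -- both are illustrative).
import sqlite3

db = sqlite3.connect("luggage.db")
db.execute("""CREATE TABLE IF NOT EXISTS fixes
              (user_id TEXT, lat REAL, lon REAL, ts TEXT)""")

def store_fix(raw: str) -> None:
    """Parse one GPS report relayed over GSM and persist it."""
    user_id, lat, lon, ts = raw.split(",")
    db.execute("INSERT INTO fixes VALUES (?, ?, ?, ?)",
               (user_id, float(lat), float(lon), ts))
    db.commit()

def last_known_position(user_id: str):
    """Return the latest stored fix for a logged-in owner."""
    return db.execute(
        "SELECT lat, lon, ts FROM fixes WHERE user_id = ? "
        "ORDER BY ts DESC LIMIT 1", (user_id,)).fetchone()

store_fix("owner42,48.3539,11.7861,2021-06-01T10:15:00Z")
print(last_known_position("owner42"))
```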

2016 · Vol 9 (10) · pp. 5037-5051
Author(s):
Klaus-Peter Heue, Melanie Coldewey-Egbers, Andy Delcloo, Christophe Lerot, Diego Loyola, ...

Abstract. In preparation for the TROPOMI/S5P launch in early 2017, a tropospheric ozone retrieval based on the convective cloud differential method was developed. For intensive tests we applied the algorithm to the total ozone columns and cloud data of the satellite instruments GOME, SCIAMACHY, OMI, GOME-2A and GOME-2B, thereby generating a 20-year time series (1995–2015) of tropospheric column ozone. To obtain a consistent total ozone data set for all sensors, one common retrieval algorithm, namely GODFITv3, was applied, and the L1 reflectances were also soft calibrated. The total ozone columns and the cloud data were input into the tropospheric ozone retrieval. However, the tropical tropospheric column ozone (TCO) for the individual instruments still showed small differences, and we therefore harmonised the data set. For this purpose, a multilinear function was fitted to the averaged difference between SCIAMACHY's TCO and those from the other sensors, and the original TCO was corrected by the fitted offset. GOME-2B data were corrected relative to the harmonised data from OMI and GOME-2A. The harmonisation leads to better agreement between the different instruments. A direct comparison of the TCO in the overlapping periods also shows that GOME-2A agrees much better with SCIAMACHY after the harmonisation; the improvements for OMI were small. Based on the harmonised observations, we created a merged data product containing the TCO from July 1995 to December 2015. A first application of this 20-year record is a trend analysis. The tropical trend is 0.7 ± 0.12 DU decade⁻¹. Regionally, the trends reach up to 1.8 DU decade⁻¹, as on the African Atlantic coast, while over the western Pacific the tropospheric ozone declined over the last 20 years by up to 0.8 DU decade⁻¹. The tropical tropospheric data record will be extended in the future with TROPOMI/S5P data, where the TCO is part of the operational products.
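
As an illustration of the harmonisation step, the sketch below fits a simple offset-plus-drift model to the difference between a synthetic reference series (standing in for SCIAMACHY) and another sensor's TCO, then subtracts the fitted offset; the paper's actual multilinear function may contain additional terms, and all values here are synthetic.

```python
# Sketch of the harmonisation: fit a smooth function to the mean TCO
# difference between a reference sensor and another instrument, then
# subtract the fitted offset. A linear-in-time fit is assumed here for
# illustration only.
import numpy as np

def harmonise(t, tco_other, tco_ref):
    """t: decimal years; tco_*: tropospheric column ozone in DU."""
    diff = tco_other - tco_ref                 # averaged difference series
    A = np.column_stack([np.ones_like(t), t])  # constant offset + drift
    coeffs, *_ = np.linalg.lstsq(A, diff, rcond=None)
    return tco_other - A @ coeffs              # corrected toward reference

t = np.linspace(2003.0, 2012.0, 120)
tco_ref = 28 + 2 * np.sin(2 * np.pi * t)           # synthetic reference
tco_omi = tco_ref + 1.5 + 0.05 * (t - 2003.0)      # biased + drifting sensor
print(np.abs(harmonise(t, tco_omi, tco_ref) - tco_ref).max())
```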


Author(s):  
D. E. Becker

An efficient, robust, and widely applicable technique is presented for computational synthesis of high-resolution, wide-area images of a specimen from a series of overlapping partial views. This technique can also be used to combine the results of various forms of image analysis, such as segmentation, automated cell counting, deblurring, and neuron tracing, to generate representations that are equivalent to processing the large wide-area image rather than the individual partial views. This can be a first step towards quantitation of the higher-level tissue architecture. The computational approach overcomes mechanical limitations of microscope stages, such as hysteresis and backlash, and automates a procedure that is currently done manually. One application is the high-resolution visualization and/or quantitation of large batches of specimens that are much wider than the field of view of the microscope. The automated montage synthesis begins by computing a concise set of landmark points for each partial view. The type of landmarks used can vary greatly depending on the images of interest. In many cases, image analysis performed on each data set can provide useful landmarks. Even when no such “natural” landmarks are available, image processing can often provide useful landmarks.
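
As a sketch of the registration idea, the code below estimates the translation that aligns two overlapping partial views from matched landmark points; the correspondences are assumed known here, since the abstract leaves the landmark type and matching strategy application-dependent.

```python
# Illustrative landmark-based registration step for montage synthesis.
# Assumption: landmark correspondences between adjacent views are given.
import numpy as np

def estimate_offset(landmarks_a, landmarks_b):
    """Robust translation estimate from matched landmark pairs; the
    median resists a few false matches better than the mean."""
    d = np.asarray(landmarks_b) - np.asarray(landmarks_a)
    return np.median(d, axis=0)

def place_views(pairwise_offsets):
    """Chain pairwise offsets along the acquisition order to get the
    absolute position of each tile in montage coordinates."""
    return np.cumsum(np.vstack([[0.0, 0.0], pairwise_offsets]), axis=0)

a = np.array([[10, 12], [40, 80], [200, 150]])   # landmarks in view 1
b = a + [482, 0]          # same landmarks in view 2 (hypothetical overlap)
off = estimate_offset(a, b)
print(off, place_views([off]))
```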


2020

BACKGROUND: This paper deals with the territorial distribution of alcohol and drug addiction mortality at the level of the districts of the Slovak Republic. AIM: The aim of the paper is to explore the relations within the administrative territorial division of the Slovak Republic, that is, between the individual districts, and hence to reveal possibly hidden patterns in alcohol and drug mortality. METHODS: The analysis is split into two parts, one for females and one for males. The standardised mortality rate is computed through a sequence of mathematical relations. The Euclidean distance is employed to compute the similarity within each pair of districts across the whole data set, and a cluster analysis is performed in which clusters are formed from the mutual distances between districts. The data are collected from the database of the Statistical Office of the Slovak Republic for all districts of the Slovak Republic, covering the period from 1996 to 2015. RESULTS: The most substantial finding is that the Slovak Republic exhibits considerable regional disparities in mortality, expressed by the standardised mortality rate computed specifically for the diagnoses assigned to alcohol and drug addictions. However, the outcomes differ between the sexes. The Bratislava III District occupies by far the most extreme position and forms its own cluster for both sexes. The Topoľčany District holds a similarly extreme position for males. All the Bratislava districts remain notably dissimilar from one another. By contrast, the development of regional disparities among the districts over time appears markedly heterogeneous. CONCLUSIONS: There are considerable regional discrepancies across the districts of the Slovak Republic. Hence, it is necessary to create a common platform for addressing this issue.
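
A minimal sketch of the described workflow, assuming each district is represented by its vector of yearly standardised mortality rates (1996–2015); the district names are real, but the rates below are synthetic placeholders, not figures from the Statistical Office.

```python
# Pairwise Euclidean distances between districts' standardised
# mortality rate series, followed by agglomerative clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
districts = ["Bratislava III", "Topolcany", "Kosice I", "Zilina"]
smr = rng.gamma(2.0, 5.0, size=(len(districts), 20))  # 20 years each (synthetic)

dist = pdist(smr, metric="euclidean")   # similarity within each pair
tree = linkage(dist, method="ward")     # build the cluster hierarchy
labels = fcluster(tree, t=2, criterion="maxclust")

for name, lab in zip(districts, labels):
    print(f"{name}: cluster {lab}")
```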


1992
Author(s):
Rupert S. Hawkins, K. F. Heideman, Ira G. Smotroff

2013 · Vol 796 · pp. 513-518
Author(s):
Rong Jin, Bing Fei Gu, Guo Lian Liu

In this paper, 110 female undergraduates at Soochow University are measured using a 3D non-contact measurement system and manual measurement. The 3D point cloud data of the human body are taken as the research object in reverse-engineering software, and secondary development of the point cloud data is carried out on the basis of optimized point cloud data. In accordance with the definition of the chest width points and other feature points, and within the operational constraints of the three-dimensional point cloud data, the width, thickness, and length dimensions of the curve through the chest width point are measured. Body types are classified by choosing as the classification index the ratio between the thickness and the width of the curve. The generation rules of the chest curve are determined for each type using the linear regression method, so that the human arm model can be established automatically by the computer. Thereby, individual modeling of the female upper-body mannequin can be improved effectively.
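
The classification and per-type regression might look like the following sketch; the ratio thresholds, type names, and the regression target (curve length as a function of chest width) are illustrative assumptions, since the paper derives them from the 110 measured subjects.

```python
# Body-type classification by thickness-to-width ratio, plus a linear
# chest-curve generation rule fitted per type. All numbers are synthetic.
import numpy as np

def body_type(thickness, width, cuts=(0.70, 0.80)):
    """Classify by the thickness/width ratio of the chest curve;
    the cut points are hypothetical, not from the paper."""
    r = thickness / width
    if r < cuts[0]:
        return "flat"
    return "medium" if r < cuts[1] else "round"

def fit_curve_rule(widths, curve_lengths):
    """Linear rule: chest-curve length as a function of chest width,
    fitted separately for each body type."""
    slope, intercept = np.polyfit(widths, curve_lengths, deg=1)
    return slope, intercept

w = np.array([27.0, 28.5, 30.1, 31.4])   # chest widths, cm (synthetic)
c = np.array([33.2, 35.0, 36.9, 38.6])   # curve lengths, cm (synthetic)
print(body_type(22.4, 30.1), fit_curve_rule(w, c))
```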


2021
Author(s):
Gabriela Chaves, Danielle Monteiro, Virgilio José Martins Ferreira

Abstract Commingled production nodes are standard practice in the industry, combining multiple segments into one. This practice is adopted at the subsurface or surface to reduce costs, elements (e.g. pipes), and space. However, it leads to one problem: determining the rates of the individual elements. In the platform scenario this problem is recurrently solved with the back-allocation approach, where the total platform flowrate is used to obtain the individual wells' flowrates. The wells' flowrates are crucial for monitoring, managing, and making operational decisions in order to optimize field production. This work combines outflow (well and flowline) simulation, reservoir inflow, algorithms, and an optimization problem to calculate the wells' flowrates and report a status for the current well state. A well labeled as unsuited indicates that either the input data, the well model, or the well itself is not behaving as expected. The well status is valuable operational information that can be interpreted, for instance, as indicating the need for a new well test, or as a reliability rate for simulation runs. The well flowrates are calculated for three scenarios: probable, minimum, and maximum. Real-time data are used as input, and production well tests are used to tune and update the well models and parameters routinely. The methodology was applied to a representative offshore oil field with 14 producing wells over a two-year production period. The back-allocation methodology showed robustness in all cases, labeling the wells properly, calculating the flowrates, and honoring the platform flowrate.
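
A simplified sketch of the back-allocation idea follows: the measured platform rate is distributed across wells in proportion to their simulated rates, and a well is flagged when its allocated rate falls outside the simulated minimum-maximum envelope. The proportional split and the flagging rule are assumptions for illustration; the paper solves an optimization problem instead.

```python
# Toy back-allocation: honor the platform total while splitting it
# across wells, then label wells against their simulated rate envelope.
import numpy as np

def back_allocate(platform_rate, model_rates):
    """Scale simulated well rates so they sum to the platform total."""
    model_rates = np.asarray(model_rates, dtype=float)
    return platform_rate * model_rates / model_rates.sum()

def well_status(allocated, q_min, q_max):
    """'ok' if the allocated rate lies within the simulated envelope,
    'unsuited' otherwise (hypothetical rule for illustration)."""
    inside = (np.asarray(q_min) <= allocated) & (allocated <= np.asarray(q_max))
    return np.where(inside, "ok", "unsuited")

model = [1200.0, 950.0, 400.0]           # probable rates per well, bbl/d
alloc = back_allocate(2400.0, model)     # measured platform rate, bbl/d
print(alloc, well_status(alloc, [1000, 800, 300], [1400, 1100, 500]))
```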


2021 · Vol 2 (4) · pp. 91-101
Author(s):
Saleha Ilhaam

The term strategic essentialism, coined by Spivak, is generally understood as “a political strategy whereby differences (within a group) are temporarily downplayed and unity assumed for the sake of achieving political goals.” Essentialism, on the other hand, holds that everything in this world has an intrinsic and immutable essence of its own. The attribution of a particular “nature” to one group of people by way of sexism, culturalization, and ethnification is strongly linked to the idea of essentialism. Mulk Raj Anand’s Bakha is branded an outcast by the institutionalized hierarchy of caste practice. He is essentialized as an untouchable by attributing to him the characteristics of dirt and filth. However, unlike other untouchables, Bakha can apprehend the difference between the cultured and the uncultured, dirt and cleanliness. Through an analysis of Anand’s “Untouchable,” the present article aims to bring to the forefront the horrid destruction of the individual self that stems from misrepresentations of personality. Through strategic essentialism, it unravels Bakha’s contrasting nature as opposed to his pariah class, defied by his remarkable inner character and etiquette. The term condemns the essentialist categories of human existence. It has been applied to decontextualize and deconstruct the inaccurately essentialized identity of Bakha, which has made him part of a group to which he does not actually belong.


2018 · Vol 34 (3) · pp. 1247-1266
Author(s):
Hua Kang, Henry V. Burton, Haoxiang Miao

Post-earthquake recovery models can be used as decision support tools for pre-event planning. However, due to a lack of available data, there have been very few opportunities to validate and/or calibrate these models. This paper describes the use of building damage, permitting, and repair data from the 2014 South Napa Earthquake to evaluate a stochastic process post-earthquake recovery model. Damage data were obtained for 1,470 buildings, and permitting and repair time data were obtained for a subset (456) of those buildings. A “blind” prediction is shown to adequately capture the shape of the recovery trajectory despite overpredicting the overall pace of the recovery. Using the mean time to permit and repair time from the acquired data set significantly improves the accuracy of the recovery prediction. A generalized model is formulated by establishing statistical relationships between key time parameters and endogenous and exogenous factors that have been shown to influence the pace of recovery.
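
One way to picture a stochastic recovery model of this kind is the Monte Carlo sketch below, which draws per-building time-to-permit and repair durations from lognormal distributions and accumulates the fraction of recovered buildings over time; the distribution family and parameters are placeholders, not the values calibrated from the South Napa data.

```python
# One realization of a building-stock recovery trajectory under
# assumed (placeholder) lognormal permit and repair time distributions.
import numpy as np

rng = np.random.default_rng(1)
n_buildings = 1470   # matches the damage data set size in the abstract

t_permit = rng.lognormal(mean=4.0, sigma=0.8, size=n_buildings)  # days
t_repair = rng.lognormal(mean=4.5, sigma=0.6, size=n_buildings)  # days
t_done = t_permit + t_repair          # a building recovers after both

days = np.arange(0, 1095)             # three years after the event
trajectory = (t_done[None, :] <= days[:, None]).mean(axis=1)

print(f"50% of buildings recovered by day {days[trajectory >= 0.5][0]}")
```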


2018 · Vol 15 (6) · pp. 172988141881470
Author(s):
Nezih Ergin Özkucur, H. Levent Akın

Self-localization in autonomous robots is one of the fundamental issues in the development of intelligent robots, and processing raw sensory information into useful features is an integral part of this problem. In a typical scenario, there are several choices for the feature extraction algorithm, and each has its weaknesses and strengths depending on the characteristics of the environment. In this work, we introduce a localization algorithm that is capable of capturing the quality of a feature type based on the local environment and makes a soft selection of feature types across different regions. A batch expectation–maximization algorithm is developed for both discrete and Monte Carlo localization models, exploiting the probabilistic pose estimations of the robot without requiring ground-truth poses and treating the different observation types as blackbox algorithms. We tested our method in simulations, on data collected from an indoor environment with a custom robot platform, and on a public data set. The results are compared with the individual feature types as well as a naive fusion strategy.
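
The soft-selection idea can be sketched as per-region mixture weights over feature types, updated in an EM fashion from observation likelihoods; the likelihood values below are synthetic placeholders, whereas in the paper they come from blackbox observation models inside the localization filter.

```python
# EM-style soft selection of feature types for one region.
# Rows of `lik`: observations; columns: feature types (synthetic values).
import numpy as np

def e_step(weights, likelihoods):
    """Responsibility of each feature type for each observation."""
    r = weights * likelihoods
    return r / r.sum(axis=1, keepdims=True)

def m_step(resp):
    """Re-estimate the region's mixture weights over feature types."""
    return resp.mean(axis=0)

weights = np.full(3, 1 / 3)              # e.g. line, corner, color features
lik = np.array([[0.9, 0.2, 0.4],
                [0.8, 0.1, 0.5],
                [0.7, 0.3, 0.6]])        # synthetic observation likelihoods
for _ in range(10):
    weights = m_step(e_step(weights, lik))
print(weights)  # mass shifts toward the locally most reliable feature type
```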

