MTKeras: An Automated Metamorphic Testing Platform

Author(s):  
Yelin Liu ◽  
Zhi Quan Zhou ◽  
Tsong Yueh Chen ◽  
Yang Liu ◽  
Dave Towey

This paper presents an automated, domain-independent metamorphic testing platform called MTKeras. We report on an investigation demonstrating the effectiveness and usability of MTKeras through five case studies in the four domains of image classification, sentiment analysis, search engines, and database management systems. We also report on the effectiveness of combining metamorphic relation (input) patterns within individual metamorphic relations, which enhances the failure-finding ability of the individual relations. The results of our experiments support both the combining of patterns and the use of MTKeras. The research reported in this paper shows the applicability of metamorphic relation patterns, and introduces a practical tool for the research community.
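MTKeras's own API is not shown in the abstract, so the following is only a generic sketch of what a metamorphic relation (MR) looks like in one of the listed domains. The toy `search` function and the subset MR are assumptions for illustration, not MTKeras code.

```python
# Illustrative metamorphic relation for a search engine:
# adding a search term can only narrow the result set, so
# search(A AND B) must be a subset of search(A).

def search(corpus, terms):
    """Toy conjunctive search: return documents containing every term."""
    return [doc for doc in corpus if all(t in doc.split() for t in terms)]

def mr_subset_holds(corpus, terms):
    """Check the MR: the narrowed result set is contained in the broad one."""
    narrowed = search(corpus, terms)       # follow-up (source + extra terms)
    broadened = search(corpus, terms[:1])  # source query, first term only
    return all(doc in broadened for doc in narrowed)

corpus = ["deep learning models", "testing deep systems", "metamorphic testing"]
print(mr_subset_holds(corpus, ["deep", "learning"]))  # → True
```

A violation of such a relation signals a failure without needing an oracle for any individual query, which is the core idea metamorphic testing platforms automate.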

Author(s):  
Rebecca PRICE ◽  
Christine DE LILLE ◽  
Cara WRIGLEY ◽  
Kees DORST

There is an increasing need for organizations to adapt to rapid changes in society. This need requires organizations, and the leaders within them, to explore, recognize, build, and exploit new capabilities. Researching such capabilities has drawn attention from the design management research community in recent years. Predominantly, research contributions have focused on perspectives of innovation and the strategic application of design, with the researcher distanced from context. Descriptive and evaluative case studies of past organizational leadership have been vital in building momentum for the design movement. However, there is now a need to progress toward prescriptive and explorative research perspectives that embrace context through practice and the simultaneous research of design. Therefore, the aim of this track is to lead and progress discussion on research methodologies that support the research community in developing explorative and prescriptive research methodologies for context-oriented organizational research. This track brings together a group of diverse international researchers and practitioners to fuel discussion on design approaches and the subsequent outcomes of prescriptive and explorative research methodologies.


Author(s):  
Viktoriya Yu. Ukhanova

In pig breeding, a large share of the production cost is accounted for by electricity and heat, and reducing energy costs is one of the industry's important tasks. Given the constant increase in electricity tariffs and the significant energy required to create and maintain a microclimate, it is important to identify and study modern technologies and equipment for pig farming. (Research purpose) The research purpose is to analyze modern available technologies and equipment for agricultural production, including pig farming, that can reduce the cost of production. (Materials and methods) The authors used a method for determining standard indicators of energy-resource consumption in the pig industry, based on a calculation-and-analytical approach. The article presents an analysis of the automated microclimate management systems produced by OWEN for pig breeding complexes, taking into account the individual characteristics of farms. (Results and discussion) Automated microclimate management systems can improve the productivity of pig farms; reduce the cost of electrical and thermal energy; reduce feed consumption and the number of animal diseases caused by hypothermia, high humidity, or high room temperature; and monitor the chemical composition of the air. The article considers three options for creating a microclimate in rooms with animals, taking into account the individual characteristics of enterprises. (Conclusions) The profitability of pig production depends on the level of technical equipment of farms, automation, and the use of energy-saving technologies and equipment. The use of innovations in agricultural production makes it possible to increase labor productivity up to three times, and savings due to reduced feed costs can reach several million rubles a year.


Author(s):  
Susanna Braund ◽  
Zara Martirosova Torlone

The introduction describes the broad landscape of translation of Virgil from both the theoretical and the practical perspectives. It then explains the genesis of the volume and indicates how the individual chapters, each one of which is summarized, fit into the complex tapestry of Virgilian translation activity through the centuries and across the world. The volume editors indicate points of connection between the chapters in order to render the whole greater than the sum of its parts. Braund and Torlone emphasize that a project such as this could look like a (rather large) collection of case studies; they therefore consider it important to extrapolate larger phenomena from the specifics presented here.


2021 ◽  
Vol 17 (2) ◽  
pp. 1-27
Author(s):  
Morteza Hosseini ◽  
Tinoosh Mohsenin

This article presents a low-power, programmable, domain-specific manycore accelerator, the Binarized neural Network Manycore Accelerator (BiNMAC), which adopts and efficiently executes binary-precision weight/activation neural network models. Such networks have compact models in which weights are constrained to only 1 bit, so several can be packed into one memory entry, minimizing the memory footprint. Packing weights also facilitates single-instruction, multiple-data execution with simple circuitry, maximizing performance and efficiency. The proposed BiNMAC has lightweight cores that support domain-specific instructions, and a router-based memory access architecture that helps with efficient implementation of layers in binary-precision weight/activation neural networks of appropriate size. With only 3.73% area and 1.98% average power overhead, respectively, novel instructions such as Combined Population-Count-XNOR, Patch-Select, and Bit-based Accumulation are added to the instruction set architecture of the BiNMAC; each replaces the execution cycles of a frequently used function with 1 clock cycle where it would otherwise have taken 54, 4, and 3 clock cycles, respectively. Additionally, customized logic is added to every core to transpose 16×16-bit blocks of memory on a bit-level basis, which expedites reshaping intermediate data to be well-aligned for bitwise operations. A 64-cluster architecture of the BiNMAC is fully placed and routed in 65-nm TSMC CMOS technology, where a single cluster occupies an area of 0.53 mm² with an average power of 232 mW at a 1-GHz clock frequency and 1.1 V. The 64-cluster architecture takes 36.5 mm² of area and, if fully exploited, consumes a total power of 16.4 W and can perform 1,360 Giga Operations Per Second (GOPS) while providing full programmability.
To demonstrate its scalability, four binarized case studies, including ResNet-20 and LeNet-5 for high-performance image classification, as well as a ConvNet and a multilayer perceptron for low-power physiological applications, were implemented on BiNMAC. The implementation results indicate that the population-count instruction alone can expedite performance by approximately 5×. When the other new instructions are added to a RISC machine with an existing population-count instruction, performance increases by 58% on average. To compare the performance of the BiNMAC with other commercial off-the-shelf platforms, the case studies with their double-precision floating-point models are also implemented on the NVIDIA Jetson TX2 SoC (CPU+GPU). The results indicate that, within a margin of ~2.1%–9.5% accuracy loss, BiNMAC on average outperforms the TX2 GPU by approximately 1.9× (or 7.5× with fabrication technology scaled) in energy consumption for image classification applications. On low-power settings and within a margin of ~3.7%–5.5% accuracy loss compared to the ARM Cortex-A57 CPU implementation, BiNMAC is roughly 9.7×–17.2× (or 38.8×–68.8× with fabrication technology scaled) more energy efficient for physiological applications while meeting the application deadline.
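The fused Population-Count-XNOR instruction accelerates the core kernel of binarized networks: a dot product of ±1 vectors reduces to XNOR of packed bit words followed by a population count. The following pure-Python sketch illustrates that arithmetic identity (the 16-bit word width and packing scheme are assumptions for illustration, not BiNMAC's actual microarchitecture).

```python
# Binarized dot product via XNOR + popcount, the operation BiNMAC's
# Combined Population-Count-XNOR instruction executes in one cycle.

WORD = 16  # bits packed per memory entry (assumed word width)

def pack(bits):
    """Pack a list of {0, 1} bits (0 encodes -1, 1 encodes +1) into a word."""
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word

def binary_dot(w_word, a_word, n_bits=WORD):
    """Dot product of two packed +/-1 vectors.
    matches = popcount(~(w XOR a)); result = 2*matches - n."""
    xnor = ~(w_word ^ a_word) & ((1 << n_bits) - 1)  # 1 where bits agree
    matches = bin(xnor).count("1")                   # population count
    return 2 * matches - n_bits                      # map back to +/-1 domain

w = pack([1, 0, 1, 1] * 4)  # 16 packed weight bits
a = pack([1, 1, 1, 0] * 4)  # 16 packed activation bits
print(binary_dot(w, a))     # → 0 (8 agreements, 8 disagreements)
```

In hardware, the XNOR and the popcount tree are what replace a multiply-accumulate, which is why packing weights to 1 bit yields such large energy savings.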


2020 ◽  
Vol 26 (4) ◽  
pp. 405-425
Author(s):  
Javed Miandad ◽  
Margaret M. Darrow ◽  
Michael D. Hendricks ◽  
Ronald P. Daanen

ABSTRACT This study presents a new methodology to identify landslide and landslide-susceptible locations in Interior Alaska using only geomorphic properties from light detection and ranging (LiDAR) derivatives (i.e., slope, profile curvature, and roughness) and the normalized difference vegetation index (NDVI), focusing on the effect of different resolutions of LiDAR images. We developed a semi-automated object-oriented image classification approach in ArcGIS 10.5 and prepared a landslide inventory from visual observation of hillshade images. The multistage workflow included combining derivatives from 1-, 2.5-, and 5-m-resolution LiDAR, image segmentation, image classification using a support vector machine classifier, and image generalization to clean false positives. We assessed classification accuracy by generating confusion matrix tables. Analysis of the results indicated that LiDAR image scale played an important role in the classification, and that the use of NDVI generated better results. Overall, the LiDAR 5-m-resolution image with NDVI generated the best results, with a kappa value of 0.55 and an overall accuracy of 83 percent. The LiDAR 1-m-resolution image with NDVI generated the highest producer accuracy, 73 percent, in identifying landslide locations. We produced a combined overlay map by summing the individual classified maps; it delineated landslide objects better than the individual maps. The combined classified map from 1-, 2.5-, and 5-m-resolution LiDAR with NDVI generated producer accuracies of 60, 80, and 86 percent and user accuracies of 39, 51, and 98 percent for landslide, landslide-susceptible, and stable locations, respectively, with an overall accuracy of 84 percent and a kappa value of 0.58. This semi-automated object-oriented image classification approach demonstrated potential as a viable tool with further refinement and/or in combination with additional data sources.
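The accuracy assessment described above derives overall accuracy and Cohen's kappa from a confusion matrix. A minimal sketch of that computation follows; the three-class matrix and its counts are hypothetical, not the study's data.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix,
# where cm[i][j] = number of samples of true class i predicted as class j.

def accuracy_and_kappa(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(n)) / total
    # Expected chance agreement from the row and column marginals
    expected = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(n)
    ) / total ** 2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

cm = [[60, 30, 10],  # true landslide
      [20, 70, 10],  # true landslide-susceptible
      [5,  5,  90]]  # true stable
oa, k = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {k:.3f}")
```

Producer and user accuracies come from the same matrix: the diagonal entry divided by its row sum (producer) or its column sum (user).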


2021 ◽  
Author(s):  
Andrew Imrie ◽  
Maciej Kozlowski ◽  
Omar Torky ◽  
Aditya Arie Wijaya

Abstract Monitoring pipe corrosion is one of the critical aspects of well intervention. Such analysis is used to evaluate and justify any remedial actions that prolong the longevity of the well. Typical corrosion evaluation methods for tubulars use multifinger caliper tools that provide high-resolution measurements of the internal condition of the pipe. Routinely, this data is then analyzed and interpreted with respect to the manufacturer's nominal specification for each tubular. However, this relies on assumptions about the outer diameter of the tubular, which may add uncertainty and lead to incorrect calculation of the true metal thickness. This paper highlights cases where the integration of such a tool with electromagnetic (EM) thickness data adds value in discovering the true condition of both the first tubular and the outer casings. These case studies demonstrate the use of a multireceiver, multitransmitter EM metal thickness tool operating at multiple simultaneous frequencies. It measures the individual wall thickness across multiple strings (up to five) and operates continuously, making measurements in the frequency domain. This tool was combined with a multifinger caliper to provide a complete and efficient single-trip diagnosis of tubing and casing integrity. The combination of multifinger caliper and EM metal thickness results gives both internal and external corrosion as well as the metal thickness of the first and outer tubular strings. The paper highlights multiple case studies, including: i) successful detection of several areas of metal loss (greater than 32%) on the outer string, which correlated with areas of the mobile salt formation; ii) overlapping defects in two tubulars; and iii) cases where a multifinger caliper alone does not provide an accurate indication of the true wall thickness.
The final case highlights the advantages of integrating multiple tubular integrity tools when determining the condition of the casing wall. Metal thickness tools operating on EM principles benefit from a slim outer-diameter design that allows them to pass through restrictions which would typically block ultrasonic scanning thickness tools. Additionally, EM tools are unaffected by the type of fluid in the wellbore and by any non-ferrous scale buildup that may be present on the inside of the tubular wall. Combining complementary multifinger caliper technology with EM thickness measurements provides two independent sensors for a complete assessment of the well architecture.


2017 ◽  
Vol 12 (2) ◽  
pp. 289-313
Author(s):  
Claire Farago

Abstract Five interrelated case studies from the sixteenth to the twentieth centuries develop the dynamic contrast between portraiture and pictorial genres newly invented in and about Latin America that do not represent their subjects as individuals despite the descriptive focus on the particular. From Jean de Léry’s genre-defining proto-ethnographic text (1578) about the Tupinamba of Brazil to the treatment of the Creole upper class in New Spain as persons whose individuality deserves to be memorialized in contrast to the Mestizaje, African, and Indian underclass objectified as types deserving of scientific study, hierarchical distinctions between portraiture and ethnographic images can be framed in historical terms around the Aristotelian categories of the universal, the individual, and the particular. There are also some intriguing examples that destabilize these inherited distinctions, such as Puerto Rican artist José Campeche’s disturbing and poignant image of a deformed child, Juan Pantaléon Aviles, 1808; and an imaginary portrait of Moctezuma II, c. 1697, based on an ethnographic image, attributed to the leading Mexican painter Antonio Rodriguez. These anomalies serve to focus the study on the hegemonic position accorded to the viewing subject as actually precarious and unstable, always ripe for reinterpretation at the receiving end of European culture.

