Setup Planning Automation of Turned Parts Based on STEP-NC Standard

Author(s):  
P. Alizadehdehkohneh ◽  
M. R. Razfar

STEP-NC (ISO 14649), the extension of the STEP (ISO 10303) standard developed for CNC controllers, is a feature-based data model. STEP-NC extends the STEP neutral data standard for CAD data, using modern geometric constructs to define device-independent tool paths and CAM-independent volume-removal features. This paper presents an automatic setup planning module integrated into a CAPP system for rotational parts to be machined on a lathe. The developed system determines the possible setup combinations necessary for complete machining of the part. The applied methodology takes into consideration constraints such as the geometry of both the stock and the final part, the geometry and capacity of the chuck, and the part tolerances. Finally, tolerance-chart analysis is applied to the sets of surfaces to be machined within each setup. The output can then be used to augment the STEP-NC physical file.
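A minimal sketch of the kind of grouping such a setup-planning module performs (the feature names, the single access-direction attribute, and the two-setup split are illustrative assumptions, not the paper's algorithm, which also weighs chuck geometry and tolerances):

from dataclasses import dataclass
from typing import List

@dataclass
class Feature:
    # A machining feature on the rotational part and the chuck end
    # from which a tool can reach it: +1 = right end, -1 = left end.
    name: str
    access_direction: int

def plan_setups(features: List[Feature]) -> List[List[Feature]]:
    # Group features into setups by the end of the part that must face
    # the tool; each non-empty group becomes one chucking on the lathe.
    right = [f for f in features if f.access_direction > 0]
    left = [f for f in features if f.access_direction < 0]
    return [group for group in (right, left) if group]

part = [Feature("outer_diameter", +1),
        Feature("front_face", +1),
        Feature("back_bore", -1)]
for i, setup in enumerate(plan_setups(part), start=1):
    print(f"Setup {i}: {[f.name for f in setup]}")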

Author(s):  
Matt Woodburn ◽  
Gabriele Droege ◽  
Sharon Grant ◽  
Quentin Groom ◽  
Janeen Jones ◽  
...  

The utopian vision is of a future where a digital representation of each object in our collections is accessible through the internet and sustainably linked to other digital resources. This is a long-term goal, however, and in the meantime there is an urgent need to share data about our collections at a higher level with a range of stakeholders (Woodburn et al. 2020). To sustainably achieve this, and to aggregate this information across all natural science collections, the data need to be standardised (Johnston and Robinson 2002). To this end, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Interest Group has developed a data standard for describing collections, which is approaching formal review for ratification as a new TDWG standard. It proposes 20 classes (Suppl. material 1) and over 100 properties that can be used to describe, categorise, quantify, link and track digital representations of natural science collections, from high-level approximations to detailed breakdowns depending on the purpose of a particular implementation. The wide range of use cases identified for representing collection description data means that a flexible approach to the standard and the underlying modelling concepts is essential. These are centred on the 'ObjectGroup' (Fig. 1), a class that may represent any group (of any size) of physical collection objects which have one or more common characteristics. This generic definition of the 'collection' in 'collection descriptions' is an important factor in making the standard flexible enough to support the breadth of use cases. For any use case or implementation, only a subset of classes and properties within the standard is likely to be relevant. In some cases, this subset may have little overlap with those selected for other use cases. This additional need for flexibility means that very few classes and properties, representing the core concepts, are proposed to be mandatory. Metrics, facts and narratives are represented in a normalised structure using an extended MeasurementOrFact class, so that these can be user-defined rather than constrained to a set identified by the standard. Finally, rather than a rigid underlying data model as part of the normative standard, documentation will be developed to provide guidance on how the classes in the standard may be related and quantified according to relational, dimensional and graph-like models. So, in summary, the standard has, by design, been made flexible enough to be used in a number of different ways. The corresponding risk is that it could be used in ways that may not deliver what is needed in terms of outputs, manageability and interoperability with other resources of collection-level or object-level data. To mitigate this, it is key for any new implementer of the standard to establish how it should be used in that particular instance, and to define any necessary constraints within the wider scope of the standard and model. This is the concept of the 'collection description scheme': a profile that defines elements such as which classes and properties should be included, which should be mandatory, and which should be repeatable; which controlled vocabularies and hierarchies should be used to make the data interoperable; how the collections should be broken down into individual ObjectGroups and interlinked; and how the various classes should be related to each other.
Various factors might influence these decisions, including the types of information that are relevant to the use case, whether quantitative metrics need to be captured and aggregated across collection descriptions, and how many resources can be dedicated to amassing and maintaining the data. This process has particular relevance to the Distributed System of Scientific Collections (DiSSCo) consortium, the design of which incorporates use cases for storing, interlinking and reporting on the collections of its member institutions. These include helping users of the European Loans and Visits System (ELViS) (Islam 2020) to discover specimens for physical and digital loans by providing descriptions and breakdowns of the collections of holding institutions, and monitoring digitisation progress across European collections through a dynamic Collections Digitisation Dashboard. In addition, DiSSCo will be part of a global collections data ecosystem requiring interoperation with other infrastructures such as the GBIF (Global Biodiversity Information Facility) Registry of Scientific Collections, the CETAF (Consortium of European Taxonomic Facilities) Registry of Collections and Index Herbariorum. In this presentation, we will introduce the draft standard and discuss the process of defining new collection description schemes using the standard and data model, and focus on DiSSCo requirements as examples of real-world collection descriptions use cases.
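As a concrete illustration, here is a minimal sketch of an ObjectGroup carrying user-defined MeasurementOrFact entries, plus a simple scheme-conformance check. Only the 'ObjectGroup' and 'MeasurementOrFact' class names come from the draft standard; every field name, the 'objectCount' metric, and the validation logic are illustrative assumptions:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MeasurementOrFact:
    # User-defined metric, fact or narrative in a normalised structure.
    measurement_type: str
    value: str
    unit: Optional[str] = None

@dataclass
class ObjectGroup:
    # Any group (of any size) of physical collection objects that share
    # one or more common characteristics.
    identifier: str
    description: str
    measurements: List[MeasurementOrFact] = field(default_factory=list)

# A 'collection description scheme' narrows the standard for one
# implementation, e.g. by making a metric mandatory (hypothetical here).
REQUIRED_METRICS = {"objectCount"}

def conforms_to_scheme(group: ObjectGroup) -> bool:
    present = {m.measurement_type for m in group.measurements}
    return REQUIRED_METRICS <= present

herbarium = ObjectGroup(
    "og-001", "European vascular plant sheets",
    [MeasurementOrFact("objectCount", "120000", "sheets")])
print(conforms_to_scheme(herbarium))  # True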


1999 ◽  
Vol 7 (2) ◽  
pp. 159-176 ◽  
Author(s):  
Thu-Hua Liu ◽  
Amy J. C. Trappey ◽  
Jen-Bin Shyu

Author(s):  
Matt Woodburn ◽  
Deborah L Paul ◽  
Wouter Addink ◽  
Steven J Baskauf ◽  
Stanley Blum ◽  
...  

Digitisation and publication of museum specimen data are happening worldwide, but are far from complete. Museums can start by sharing what they know about their holdings at a higher level, long before each object has its own record. Information about what is held in collections worldwide is needed by many stakeholders including collections managers, funders, researchers, policy-makers, industry, and educators. To aggregate this information from collections, the data need to be standardised (Johnston and Robinson 2002). So, the Biodiversity Information Standards (TDWG) Collection Descriptions (CD) Task Group is developing a data standard for describing collections, which gives the ability to provide: automated metrics, using standardised collection descriptions and/or data derived from specimen datasets (e.g., counts of specimens), and a global registry of physical collections (i.e., digitised or non-digitised). Outputs will include a data model to underpin the new standard, and guidance and reference implementations for the practical use of the standard in institutional and collaborative data infrastructures. The Task Group employs a community-driven approach to standard development. With international participation, workshops at the Natural History Museum (London 2019) and the MOBILISE workshop (Warsaw 2020) allowed over 50 people to contribute to this work. Our group organized online "barbecues" (BBQs) so that many more could contribute to standard definitions and address data model design challenges. Cloud-based tools (e.g., GitHub, Google Sheets) are used to organise and publish the group's work and make it easy to participate. A Wikibase instance is also used to test and demonstrate the model using real data. There are a range of global, regional, and national initiatives interested in the standard (see Task Group charter). Some, like GRSciColl (now at the Global Biodiversity Information Facility (GBIF)), Index Herbariorum (IH), and the iDigBio US Collections List, are existing catalogues. Others, including the Consortium of European Taxonomic Facilities (CETAF) and the Distributed System of Scientific Collections (DiSSCo), include collection descriptions as a key part of their near-term development plans. As part of the EU-funded SYNTHESYS+ project, GBIF organized a virtual workshop, Advancing the Catalogue of the World's Natural History Collections, to get international input for such a resource that would use this CD standard. Some major complexities present themselves in designing a standardised approach to representing collection descriptions data. It is not the first time that the natural science collections community has tried to address them (see the TDWG Natural Collections Description standard). Beyond the natural sciences, the library community in particular has given thought to this (Heaney 2001, Johnston and Robinson 2002), noting significant difficulties. One hurdle is that collections may be broken down into different degrees of granularity according to different criteria, and may also overlap, so that a single object can be represented in more than one collection description. Managing statistics such as numbers of objects is complex due to data gaps and variable degrees of certainty about collection contents.
It also takes considerable effort from collections staff to generate structured data about their undigitised holdings. We need to support simple, high-level collection summaries as well as detailed quantitative data, and to be able to update as needed. We need a simple approach, but one that can also handle the complexities of data, scope, and social needs, for digitised and undigitised collections. The data standard itself is a defined set of classes and properties that can be used to represent groups of collection objects and their associated information. These incorporate common characteristics ('dimensions') by which we want to describe, group and break down our collections, metrics for quantifying those collections, and properties such as persistent identifiers for tracking collections and managing their digital counterparts. Existing terms from other standards (e.g. Darwin Core, ABCD) are re-used where possible. The data model (Fig. 1) underpinning the standard defines the relationships between those different classes, and ensures that the structure as well as the content are comparable across different datasets. It centres on the core concept of an 'object group', representing a set of physical objects that is defined by one or more dimensions (e.g., taxonomy and geographic origin), and linked to other entities such as the holding institution. Quantitative data about the group's contents (e.g. counts of objects or taxa) are attached to it, along with more qualitative information describing the contents of the group as a whole. In this presentation, we will describe the draft standard and data model with examples of early adoption for real-world and example data. We will also discuss the vision of how the new standard may be adopted and its potential impact on collection discoverability across the collections community.
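A minimal sketch of how dimension-defined object groups and their metrics might be rolled up in practice (the dimension names, counts, and aggregation logic below are illustrative assumptions, not part of the draft standard):

from collections import defaultdict

# Hypothetical object groups, each defined by two dimensions (taxonomy
# and geographic origin) and carrying a specimen count as its metric.
object_groups = [
    {"taxon": "Insecta", "region": "Africa", "specimen_count": 250_000},
    {"taxon": "Insecta", "region": "Europe", "specimen_count": 400_000},
    {"taxon": "Aves",    "region": "Africa", "specimen_count": 80_000},
]

def aggregate(groups, dimension):
    # Roll specimen counts up along one dimension. Note: if groups
    # overlapped (one object in several groups), a naive roll-up like
    # this would double-count, which is one of the complexities noted above.
    totals = defaultdict(int)
    for g in groups:
        totals[g[dimension]] += g["specimen_count"]
    return dict(totals)

print(aggregate(object_groups, "taxon"))   # {'Insecta': 650000, 'Aves': 80000}
print(aggregate(object_groups, "region"))  # {'Africa': 330000, 'Europe': 400000}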


Author(s):  
Peter C. G. Veenstra

The Pipeline Open Data Standard (PODS) Association develops and advances global pipeline data standards and best practices supporting data management and reporting for the oil and gas industry. This presentation provides an overview of the PODS Association and a detailed overview of the transformed PODS Pipeline Data Model resulting from the PODS Next Generation initiative. The PODS Association's Next Generation, or Next Gen, initiative is focused on a complete re-design and modernization of the PODS Pipeline Data Model. The re-design is driven by PODS Association Strategy objectives as defined in its 2016–2019 Strategic Plan and reflects nearly 20 years of PODS Pipeline Data Model implementation experience and lessons learned. The Next Gen Data Model is designed to be the system of record for pipeline centerlines and pressurized containment assets for the safe transport of product, allowing pipeline operators to:

• achieve greater agility to build and extend the data model,
• respond to new business requirements,
• interoperate through standard data models and a consistent application interface,
• share data within and between organizations using well-defined data exchange specifications,
• optimize performance for management of bulk loading, reroutes, inspection data and history.

The presentation will introduce the Next Gen Data Model design principles and its conceptual, logical and physical structures, with a focus on transformational changes from prior versions of the Model. Support for multiple platforms, including but not limited to Esri ArcGIS, open-source GIS and relational database management systems, will be described. Alignment with Esri's ArcGIS Platform and ArcGIS for Pipeline Referencing (APR) will be a main topic of discussion, along with how PODS Next Gen can be leveraged to benefit pipeline integrity, risk assessment, reporting and data maintenance. The end goal of a PODS implementation is the realization of efficient data management, transfer and exchange that makes the operation of a pipeline safer and more cost-effective.
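Models of this kind locate assets and events by measure along a pipeline centerline (linear referencing). The following sketch, with assumed function and variable names rather than actual PODS schema elements, shows the basic interpolation such a system performs:

import math

def locate(centerline, measure):
    # Walk the centerline's segments, accumulating length, until the
    # requested measure falls inside a segment; then interpolate.
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(centerline, centerline[1:]):
        segment = math.hypot(x2 - x1, y2 - y1)
        if travelled + segment >= measure:
            t = (measure - travelled) / segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += segment
    raise ValueError("measure exceeds centerline length")

# A toy centerline and an event (e.g. an inspection anomaly) at measure 120.
line = [(0, 0), (100, 0), (100, 50)]
print(locate(line, 120.0))  # (100.0, 20.0)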


Author(s):  
Jian Liang ◽  
Jami J. Shah ◽  
Susan D. Urban ◽  
Edward Harter ◽  
Thomas Bluhm

Abstract This paper addresses the data modeling aspect of an ongoing research project aimed at developing an integrated product data-sharing environment (IPDE) based on the international standard STEP. The data model involves multiple design and analysis applications, including Product Design, Computational Fluid Dynamics (CFD) Analysis, Finite Element Analysis (FEA), and feature-based manufacturability evaluation. The model makes use of multiple STEP Application Protocols (AP203, 209, 214, 224) and integrated resources from different parts of STEP. Extensions beyond STEP were necessary for CFD analysis, parametric geometry and constraints. The latter were imported from the ENGEN Data Model, and a new STEP-compliant model was created for CFD. The consolidated data model used for a prototype is presented in this paper.
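A minimal sketch of what a consolidated record tying these applications together might look like (the class and field names are illustrative placeholders, not the paper's EXPRESS schemas or the actual AP entity definitions):

from dataclasses import dataclass
from typing import Optional

@dataclass
class DesignGeometry:
    # Stand-in for AP203-style shape data.
    step_file: str

@dataclass
class FeaModel:
    # Stand-in for AP209-style analysis discretisation.
    mesh_file: str

@dataclass
class CfdModel:
    # Stand-in for the paper's STEP-compliant CFD extension.
    solver_input: str

@dataclass
class ProductRecord:
    # One consolidated record tying the applications to a single product.
    part_id: str
    design: DesignGeometry
    fea: Optional[FeaModel] = None
    cfd: Optional[CfdModel] = None

bracket = ProductRecord("PN-1042", DesignGeometry("bracket.stp"),
                        fea=FeaModel("bracket.msh"),
                        cfd=CfdModel("bracket.cas"))
print(bracket.part_id, bracket.cfd.solver_input)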


Author(s):  
S. M. Mahbub Murshed ◽  
Jami J. Shah ◽  
Vadivel Jagasivamani ◽  
Ayman Wasfy ◽  
David W. Hislop

This paper presents a new assembly model, named Open Assembly Model Plus (OAM+), to support legacy systems engineering (LSE). LSE is a collection of technologies for prolonging the life of old mechanical systems. The Rapid Re-Engineering System (RRES), a subsystem of LSE, is geared towards the fast production of redesigned parts customized to the manufacturing capability available. RRES requires the extraction of initial part geometry, parameters, interfacing constraints, kinematic constraints, and technical function. These specifications need to be imprinted on the CAD model before iterative redesign, so a CAD data model is needed that can carry all the functional constraints. A detailed comparison of all the available assembly models shows that none of them provides all these requirements in one place. The assembly-feature-based, object-oriented assembly model OAM+ was developed to meet these requirements in one model. OAM+ can be used to perform kinematic and force analysis, and can exchange feature data between different modules of RRES using the N-Rep feature definition language. OAM+ is based on part and assembly features in N-Rep.
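A minimal sketch of the kind of structure such an assembly-feature-based model implies (the class names and the revolute-joint example are illustrative assumptions, not the OAM+ schema itself):

from dataclasses import dataclass, field
from typing import List

@dataclass
class AssemblyFeature:
    # A geometric feature that participates in a mate, e.g. a pin or bore.
    name: str

@dataclass
class Part:
    name: str
    features: List[AssemblyFeature] = field(default_factory=list)

@dataclass
class KinematicConstraint:
    # Pairs two assembly features and records the relation they impose.
    kind: str  # e.g. "revolute", "fixed"
    feature_a: AssemblyFeature
    feature_b: AssemblyFeature

shaft = Part("shaft", [AssemblyFeature("journal")])
housing = Part("housing", [AssemblyFeature("bore")])
joint = KinematicConstraint("revolute", shaft.features[0], housing.features[0])
print(f"{joint.kind}: {joint.feature_a.name} <-> {joint.feature_b.name}")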

