Special Issue on Product Model and International Standard STEP for CAD Data Exchange. STEP Applications and Data Exchange System.

1993 ◽  
Vol 59 (12) ◽  
pp. 1937-1942
Author(s):  
Toshio KOJIMA


Author(s):  
Steven A. Ryan

Abstract: This status report provides a current overview of the work progressing toward the development of an international Standard for the Exchange of Product model data (STEP). STEP has the potential to revolutionize the exchange of product definition data. The current state of the art in product data exchange requires knowledge of both the sending and receiving systems for a reliable exchange to occur. STEP is built on the premise that product model data can be exchanged and shared without knowledge of either the sending or the receiving system. The first release of STEP as a Draft International Standard will occur in 1992. The capability of that release will provide a strong basis for system designers and integrators to develop STEP-compliant products that can support a significant portion of the product definition data exchanged today between and within businesses.


1994 ◽  
Vol 10 (01) ◽  
pp. 39-50
Author(s):  
Richard H. Lovdahl ◽  
Douglas J. Martin ◽  
Michael A. Polini ◽  
Ron W. Wood ◽  
Michael L. Gerardi ◽  
...  

This paper presents the purpose, approach, goals and progress of the tasks that make up the standard for a digital Ship Product Model. The Navy/Industry Digital Data Exchange Standards Committee (NIDDESC) Standards will be a part of the Standard for the Exchange of Product Model Data (STEP) International Standard. The STEP standard has a layered architecture in which basic core definitions are used by many industry and product specific standards such as the NIDDESC Standards.


1997 ◽  
Vol 13 (02) ◽  
pp. 111-124
Author(s):  
Jeff Wyman ◽  
Dan Wooley ◽  
Burt Gischner ◽  
Joyce Howell

Effective exchange of product model data is essential for future competition in the global marketplace. Many efforts have been undertaken in recent years to establish a transfer mechanism for product model data in the shipbuilding industry. These include the development of the STEP standard, the creation of the NIDDESC Application Protocols, and the efforts of the European NEUTRABAS and MARITIME projects. The ARPA/MARITECH project for "Development of STEP Ship Product Model Database and Translators for Data Exchange Between Shipyards" provides a unique opportunity to implement the still-developing standards for product model exchange and to enable their use for data exchange between the major US shipyards. The project will create and populate a prototype product model database, develop translators for the exchange of product model data between shipyards, and facilitate adoption of the shipbuilding application protocols as part of the emerging international standard (STEP). These ambitious goals are being undertaken by a consortium of US shipbuilders, their CAD vendors, and STEP experts. The participants will help develop a product model data exchange capability for the entire shipbuilding industry, while enhancing their own ability to compete in the global marketplace.
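STEP exchange files of the kind these translators produce are encoded in the plain-text ISO 10303-21 ("Part 21") format: a HEADER section followed by a DATA section of numbered entity instances. The sketch below, assuming a tiny invented sample (the entity names such as SHIP_STRUCTURE_SCHEMA and PLATE are illustrative, not drawn from an actual NIDDESC application protocol), shows how a tool might tally entity instances by type from the DATA section:

```python
import re
from collections import Counter

# A minimal, invented Part 21 file for illustration only.
SAMPLE = """ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('example ship model'),'2;1');
FILE_NAME('hull.stp','1997-01-01',('author'),('yard'),'','','');
FILE_SCHEMA(('SHIP_STRUCTURE_SCHEMA'));
ENDSEC;
DATA;
#1=CARTESIAN_POINT('',(0.,0.,0.));
#2=CARTESIAN_POINT('',(1.,0.,0.));
#3=PLATE('deck plate',#1,#2);
ENDSEC;
END-ISO-10303-21;
"""

def entity_counts(text):
    """Count entity instances by type within the DATA section."""
    data = text.split('DATA;')[1].split('ENDSEC;')[0]
    # Each instance has the form "#<id>=<ENTITY_NAME>(...);"
    return Counter(re.findall(r'#\d+\s*=\s*([A-Z_0-9]+)', data))

print(entity_counts(SAMPLE))
```

A real translator would validate the instances against the EXPRESS schema named in FILE_SCHEMA rather than simply counting them; this sketch only illustrates the file's surface structure.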


2020 ◽  
Vol 51 (2) ◽  
pp. 479-493
Author(s):  
Jenny A. Roberts ◽  
Evelyn P. Altenberg ◽  
Madison Hunter

Purpose: The results of automatic machine scoring of the Index of Productive Syntax from the Computerized Language ANalysis (CLAN) tools of the Child Language Data Exchange System of TalkBank (MacWhinney, 2000) were compared to manual scoring to determine the accuracy of the machine-scored method.
Method: Twenty transcripts of 10 children from archival data of the Weismer Corpus from the Child Language Data Exchange System at 30 and 42 months were examined. Measures of absolute point difference and point-to-point accuracy were compared, as well as points erroneously given and missed. Two new measures for evaluating automatic scoring of the Index of Productive Syntax were introduced: Machine Item Accuracy (MIA) and Cascade Failure Rate; these measures further analyze points erroneously given and missed. Differences in total scores, subscale scores, and individual structures were also reported.
Results: Mean absolute point difference between machine and hand scoring was 3.65, point-to-point agreement was 72.6%, and MIA was 74.9%. There were large differences in subscales, with the Noun Phrase and Verb Phrase subscales generally providing greater accuracy and agreement than the Question/Negation and Sentence Structures subscales. There were significantly more erroneous than missed items in machine scoring, attributed to problems of mistagging of elements, imprecise search patterns, and other errors. Cascade failure resulted in an average of 4.65 points lost per transcript.
Conclusions: The CLAN program showed relatively inaccurate outcomes in comparison to manual scoring on both traditional and new measures of accuracy. Recommendations for improvement of the program include accounting for second exemplar violations and applying cascaded credit, among other suggestions. It was proposed that research on machine-scored syntax routinely report accuracy measures detailing erroneous and missed scores, including MIA, so that researchers and clinicians are aware of the limitations of a machine-scoring program.
Supplemental Material: https://doi.org/10.23641/asha.11984364
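The traditional accuracy measures named in this abstract are simple arithmetic over per-item scores. The sketch below computes them for two invented scoring vectors (the numbers are illustrative only, not data from the study; the definitions of "erroneously given" and "missed" points follow the plain reading of the abstract, not the authors' published formulas):

```python
# Hypothetical per-item scores: hand (manual) vs. machine scoring.
hand    = [1, 2, 0, 1, 2, 0, 1, 2]
machine = [1, 2, 1, 1, 0, 0, 1, 2]

# Absolute point difference between total scores.
abs_point_diff = abs(sum(hand) - sum(machine))

# Point-to-point agreement: fraction of items scored identically.
agreement = sum(h == m for h, m in zip(hand, machine)) / len(hand)

# Points erroneously given (machine over-credits) vs. missed
# (machine under-credits), summed item by item.
erroneous = sum(max(m - h, 0) for h, m in zip(hand, machine))
missed    = sum(max(h - m, 0) for h, m in zip(hand, machine))

print(abs_point_diff, agreement, erroneous, missed)
```

MIA and Cascade Failure Rate are defined by the authors in terms of these erroneous and missed counts; their exact formulas are not given in the abstract, so they are not reproduced here.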

