Parametric Generator for Architectural and Urban 3D Objects

Author(s):  
Renato Saleri Lunazzi

The authors developed and finalized a specific tool able to model the global structure of architectural objects through a morphological and semantic description of their finite elements. This discrete conceptual model, still under study, was refined during the geometric modeling of the “Vieux Lyon” district, which contains a high level of morpho-stylistic disparity. Future developments should increase the genericity of its descriptive power, accommodating even wider morphological and/or stylistic variety. Its purpose is not to create a “universal modeler,” but to offer a simple tool able to quickly describe a majority of standard architectural objects that comply with a set of standard parametric definition rules.
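As a rough illustration of this kind of parametric description (not the authors' actual model), the sketch below derives window placements on a facade from a handful of parameters; the rule, function name, and default values are all hypothetical:

```python
def facade_windows(width, height, floors, bays, margin=0.5):
    """Hypothetical parametric rule: one window centred in each bay of
    each floor, inside a uniform margin. Dimensions in metres."""
    cell_w = (width - 2 * margin) / bays
    cell_h = (height - 2 * margin) / floors
    return [
        (margin + (col + 0.5) * cell_w, margin + (row + 0.5) * cell_h)
        for row in range(floors)
        for col in range(bays)
    ]
```

A semantic description of a real facade would attach further attributes (window type, style, mouldings) to each generated element; the point here is only that a few parameters can drive the whole layout.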

Author(s):  
Robert H. Sturges ◽  
Jui-Te Yang

Abstract: In support of the effort to bring downstream issues to the attention of the designer as parts take shape, an analysis system is being built to extract features relevant to the assembly process, such as the dimensions, shape, and symmetry of an object. These features can be applied to a model during the downstream process to evaluate handling and assemblability. In this paper, we focus on the acquisition phase of the assembly process and employ a Design for Assembly (DFA) evaluation to quantify factors in this process. The capabilities of a non-homogeneous, non-manifold boundary-representation geometric modeling system are used with an Index of Difficulty (ID) that represents the dexterity and time required to assemble a product. A series of algorithms based on the high-level abstractions of loop and link is developed to extract features of parts that are difficult to orient, which is one of the DFA criteria. Examples for testing the robustness of the algorithms are given. Problems related to nearly symmetric outlines are also discussed.
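The paper's loop/link algorithms are not reproduced in this abstract, but the flavour of a symmetry check on a part outline can be sketched in a few lines. The brute-force rotation test below is a simplified stand-in for the authors' method, and the tolerance parameter hints at why nearly symmetric outlines are problematic:

```python
import math

def rotational_symmetry_order(points, tol=1e-6):
    """Largest k such that rotating the vertex set by 360/k degrees
    about its centroid maps it onto itself (k = 1 means asymmetric)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    centered = [(x - cx, y - cy) for x, y in points]

    def matches(rotated):
        # every rotated vertex must coincide with some original vertex
        return all(
            any(math.hypot(rx - x, ry - y) < tol for x, y in centered)
            for rx, ry in rotated
        )

    best = 1
    for k in range(2, n + 1):
        a = 2 * math.pi / k
        c, s = math.cos(a), math.sin(a)
        rotated = [(c * x - s * y, s * x + c * y) for x, y in centered]
        if matches(rotated):
            best = k
    return best
```

A part whose outline scores k = 1 needs a specific orientation before insertion, which raises its DFA handling difficulty; a high k means many orientations are acceptable.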


Author(s):  
Matias Javier Oliva ◽  
Pablo Andrés García ◽  
Enrique Mario Spinelli ◽  
Alejandro Luis Veiga

Real-time acquisition and processing of electroencephalographic signals have promising applications in the implementation of brain-computer interfaces. These interfaces allow the user to control a device without performing motor actions, and are usually made up of a biopotential acquisition stage and a personal computer (PC). This structure is very flexible and appropriate for research, but for end users it is necessary to migrate to an embedded system, eliminating the PC from the scheme. The strict real-time processing requirements of such systems justify the choice of a system-on-chip field-programmable gate array (SoC-FPGA) for the implementation. This article proposes a platform for the acquisition and processing of electroencephalographic signals using this type of device, which combines the parallelism and speed of an FPGA with the simplicity of a general-purpose processor on a single chip. In this scheme, the FPGA is in charge of real-time operation, acquiring and processing the signals, while the processor handles the high-level tasks, with the interconnection between processing elements provided by buses integrated into the chip. The proposed scheme was used to implement a brain-computer interface based on steady-state visual evoked potentials (SSVEP), which was used to command a speller. The first tests of the system show that a selection time of 5 seconds per command can be achieved. The time delay between the user’s selection and the system response has been estimated at 343 µs.
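The article's FPGA pipeline is not detailed in the abstract, but the core SSVEP decision, namely which flickering-stimulus frequency dominates the recorded EEG, can be illustrated with the Goertzel algorithm, a common choice for single-bin spectral power and well suited to hardware implementation. The sampling rate and stimulus frequencies below are hypothetical, not taken from the paper:

```python
import math

def goertzel_power(samples, fs, f):
    """Signal power at frequency f (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * f / fs)          # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def classify_ssvep(samples, fs, stimulus_freqs):
    """Pick the stimulus frequency with the strongest response."""
    return max(stimulus_freqs, key=lambda f: goertzel_power(samples, fs, f))
```

Because Goertzel only needs one multiply-accumulate recurrence per sample and per frequency, it maps naturally onto FPGA logic while the soft processor handles speller logic and the user interface.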


2004 ◽  
Vol 11 (33) ◽  
Author(s):  
Aske Simon Christensen ◽  
Christian Kirkegaard ◽  
Anders Møller

We show that it is possible to extend a general-purpose programming language with a convenient high-level data-type for manipulating XML documents while permitting (1) precise static analysis for guaranteeing validity of the constructed XML documents relative to the given DTD schemas, and (2) a runtime system where the operations can be performed efficiently. The system, named Xact, is based on a notion of immutable XML templates and uses XPath for deconstructing documents. A companion paper presents the program analysis; this paper focuses on the efficient runtime representation.
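Xact itself extends Java, but the idea of constructing a document and deconstructing it with XPath can be mimicked with Python's standard library, which supports a limited XPath subset; note that, unlike Xact's immutable templates, ElementTree trees are mutable, so this is an analogy rather than an equivalent:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<catalog>"
    "<book id='b1'><title>Static Analysis</title></book>"
    "<book id='b2'><title>Runtime Systems</title></book>"
    "</catalog>"
)

# Deconstruction: select nodes with the XPath subset ElementTree supports
titles = [t.text for t in doc.findall("./book/title")]

# Construction: build a new document from the extracted pieces
out = ET.Element("titles")
for t in titles:
    ET.SubElement(out, "title").text = t
```

What Xact adds over such ad hoc manipulation is the static guarantee, via the companion program analysis, that every document built this way is valid with respect to the given DTD.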


2019 ◽  
Author(s):  
J-Donald Tournier ◽  
Robert Smith ◽  
David Raffelt ◽  
Rami Tabbara ◽  
Thijs Dhollander ◽  
...  

Abstract: MRtrix3 is an open-source, cross-platform software package for medical image processing, analysis and visualization, with a particular emphasis on the investigation of the brain using diffusion MRI. It is implemented using a fast, modular and flexible general-purpose code framework for image data access and manipulation, enabling efficient development of new applications, whilst retaining high computational performance and a consistent command-line interface between applications. In this article, we provide a high-level overview of the features of the MRtrix3 framework and general-purpose image processing applications provided with the software.


Author(s):  
Aleksandr V. Babkin ◽  
Elena V. Shkarupeta ◽  
Vladimir A. Plotnikov ◽  
...  

Ten years after Industry 4.0 was first introduced at the Hannover trade fair as a concept for improving the efficiency of German industry, the European Commission announced a new industrial evolution, Industry 5.0, and presented an updated vision of it as the result of attaining a triad of sustainability, human-centricity and resilience. At the nexus of the fourth and fifth industrial evolutions, new objects arise: intelligent cyber-social ecosystems that use the strengths of cyber-physical ecosystems, changing under the influence of end-to-end digital technologies combined with human and artificial intelligence. The purpose of this research is to present a conceptual model of an intelligent (“smart”) cyber-social ecosystem based on multimodal hyperspace under the conditions of Industry 5.0. The research methodology includes systems science and metasystemic, ecosystemic, value-based, and cyber-socio-techno-cognitive approaches, as well as the concepts of platforms, the creator economy, and Open Innovation 2.0 based on the quadruple-helix innovation model. As a result of this research, the evolution of the establishment and development of the ecosystemic paradigm in economic science is shown. The study describes a cognitive transition from the cyber-physical systems of Industry 4.0 to intelligent cyber-social ecosystems as objects of Industry 5.0. A conceptual model has been developed in which a cyber-social ecosystem is introduced as an ecosystem of a new metalevel (a “metasystem”), evolving under the conditions of the transition from Industry 4.0 to Industry 5.0 on the basis of the cyber-social values of human-centricity, sustainability and resilience. The model is notable for its high level of cybernetic hyperconvergence and its socio-ecosystemic, technological and cognitive modality, aimed at achieving ethical social goals and sustainable welfare for all humanity and each individual person, taking planetary capacity into account.


Author(s):  
А. С. Семин ◽  
С. И. Вахрушев

The construction of buildings and structures for the oil industry is a complex technological process that includes earthworks. Construction machinery for earthworks must satisfy both the technical requirements of the tasks under the given external conditions and economic criteria. The article considers the optimal selection of a set of construction machines for excavating a pit for a vertical steel tank (RVS). A program has been developed to search for the optimal set of machines under conditions of complete certainty. The calculation is based on Dijkstra's dynamic programming method. The reduced cost per cubic meter of construction output was chosen as the optimization criterion. The calculation is presented for each stage of the work. Based on its results, a network graph was constructed and the optimal set of machines was selected. The calculation program is written in Python, a high-level general-purpose programming language.
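A minimal sketch of the stage-wise optimization described above, assuming hypothetical machines and per-cubic-meter costs (the paper's actual data and cost model are not given here): each construction stage forms a layer of a network graph, and Dijkstra's shortest-path algorithm selects the cheapest machine per stage.

```python
import heapq

def cheapest_machine_set(stage_options):
    """stage_options[i] maps machine name -> reduced cost per m^3 at stage i.
    Returns (total_cost, machine per stage) via Dijkstra over a layered
    graph: virtual source -> stage 0 -> stage 1 -> ... -> last stage."""
    n = len(stage_options)
    dist = {(-1, None): 0.0}          # node = (stage index, machine)
    prev = {}
    pq = [(0.0, -1, None)]
    best = None
    while pq:
        d, stage, machine = heapq.heappop(pq)
        if d > dist.get((stage, machine), float("inf")):
            continue                   # stale queue entry
        if stage == n - 1:             # first last-stage pop is optimal
            best = (stage, machine)
            break
        for nxt, cost in stage_options[stage + 1].items():
            nd = d + cost
            key = (stage + 1, nxt)
            if nd < dist.get(key, float("inf")):
                dist[key] = nd
                prev[key] = (stage, machine)
                heapq.heappush(pq, (nd, stage + 1, nxt))
    # walk predecessors back to the source to recover the machine set
    path, node = [], best
    while node and node[0] >= 0:
        path.append(node[1])
        node = prev[node]
    return dist[best], list(reversed(path))
```

With independent stage costs this reduces to taking the per-stage minimum; the graph formulation pays off once edge costs depend on the pairing of machines between consecutive stages (e.g. an excavator whose output rate constrains the trucks that can serve it).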


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive volumes of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
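As a toy illustration of the kind of numerical workload such libraries accelerate, here is a batch gradient-descent linear fit in pure Python; in practice the two inner loops are exactly what NumPy vectorizes and GPU libraries parallelize, which is the performance gap the survey discusses:

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Fit y = w*x + b by batch gradient descent (pure-Python toy).
    Minimizes mean squared error over the given points."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y   # prediction error on this point
            gw += err * x           # gradient w.r.t. the slope
            gb += err               # gradient w.r.t. the intercept
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b
```

The same computation expressed over NumPy arrays (or handed to a GPU) replaces the per-point Python loop with a handful of array operations, which is where the "low-level libraries behind clean high-level APIs" trade-off becomes concrete.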


2020 ◽  
Vol 54 (4) ◽  
pp. 409-435
Author(s):  
Paolo Manghi ◽  
Claudio Atzori ◽  
Michele De Bonis ◽  
Alessia Bardi

Purpose: Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects and funders. Depending on the target users, access can vary from searching and browsing content to the consumption of statistics for monitoring and feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although deduplication of graphs is a known and current problem, existing solutions are dedicated to specific scenarios, operate on flat collections, address local topology-driven challenges, and therefore cannot be re-used in other contexts.
Design/methodology/approach: This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture and its implementation as a service used within the OpenAIRE infrastructure system, and reports figures from real-case experiments.
Findings: GDup provides the functionalities required to deliver a fully fledged entity-deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators, and algorithms for identifying and merging duplicates to obtain a disambiguated output graph.
Originality/value: To our knowledge, GDup is the only system in the literature that offers an integrated, general-purpose solution for the deduplication of graphs while targeting big-data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.
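GDup's actual algorithms are not given in the abstract; the sketch below shows the generic shape of such a deduplication workflow, namely blocking to limit pairwise comparisons, a similarity measure, and transitive merging via union-find, in plain Python. The blocking key and title-similarity measure are illustrative stand-ins, not GDup's:

```python
from collections import defaultdict
from difflib import SequenceMatcher

class UnionFind:
    """Disjoint sets with path halving; merging is transitive by design."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def deduplicate(records, threshold=0.9):
    """records: {id: title}. Returns groups of ids judged duplicates.
    Blocking on the first 4 title characters keeps comparisons tractable."""
    blocks = defaultdict(list)
    for rid, title in records.items():
        blocks[title.lower()[:4]].append(rid)
    uf = UnionFind()
    for ids in blocks.values():
        for i, a in enumerate(ids):
            uf.find(a)                      # register singletons too
            for b in ids[i + 1:]:
                sim = SequenceMatcher(None, records[a].lower(),
                                      records[b].lower()).ratio()
                if sim >= threshold:
                    uf.union(a, b)
    groups = defaultdict(set)
    for rid in records:
        groups[uf.find(rid)].add(rid)
    return sorted(map(sorted, groups.values()))
```

In a production system like GDup, the merged groups would additionally be reconciled against a Ground Truth and curator feedback before the disambiguated graph is emitted.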


Author(s):  
José-Fernando Diez-Higuera ◽  
Francisco-Javier Diaz-Pernas

In the last few years, because of the rapid growth of the Internet, general-purpose clients have achieved a high level of popularity for the static consultation of text and pictures. This is the case of the World Wide Web (i.e., Web browsers). Using a hypertext system, Web users can select and read on their computers information from all around the world, with no requirement other than an Internet connection and a navigation program. For a long time, the information available on the Internet consisted of written texts and 2D pictures (i.e., static information). This sort of information suited many publications, but it was highly unsatisfactory for others, such as those related to objects of art, where real volume and interactivity with the user are of great importance. Here, the possibility of including 3D information in Web pages makes real sense.

