From Softimage to Postimage

Leonardo ◽  
2017 ◽  
Vol 50 (1) ◽  
pp. 72-73 ◽  
Author(s):  
Ingrid Hoelzl ◽  
Rémi Marie

With the digital revolution, the photographic paradigm of the image has been supplemented with an algorithmic paradigm. The result is a new kind of image capable of gathering, computing, merging and displaying heterogeneous data in real time; no longer a solid representation of a solid world but a softimage—a programmable database view. In today’s neurosciences and machine vision, the very concept of “image” as a stable visual entity becomes questionable. As a result, the authors propose that the definition of the image must be radically expanded and its humanist and subjective frame abandoned: The posthuman image—which the authors propose to call the postimage—is a collaborative image created through the process of distributed vision involving humans, animals and machines.

1995 ◽  
Vol 34 (05) ◽  
pp. 475-488
Author(s):  
B. Seroussi ◽  
J. F. Boisvieux ◽  
V. Morice

Abstract: The monitoring and treatment of patients in a care unit is a complex task in which even the most experienced clinicians can make errors. A hemato-oncology department in which patients undergo chemotherapy asked for a computerized system able to provide intelligent and continuous support in this task. One issue in building such a system is defining a control architecture able to manage, in real time, a treatment plan containing prescriptions and protocols whose temporal constraints are expressed in various ways; that is, an architecture that supervises the treatment, including controlling the timely execution of prescriptions and suggesting modifications to the plan according to the patient’s evolving condition. The system built to address these issues, called SEPIA, has to manage the dynamic processes involved in patient care. Its role is to generate, in real time, commands for the patient’s care (execution of tests, administration of drugs) from a plan, and to monitor the patient’s state so that it may propose actions updating the plan. The necessity of an explicit time representation is shown. We propose a time structure that is linear towards the past, with precise and absolute dates, and open towards the future, with imprecise and relative dates. Relative temporal scales are introduced to facilitate knowledge representation and access.
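The proposed time structure can be illustrated with a small sketch: past events carry precise, absolute dates, while planned future events are expressed relative to other events with an imprecision window. This is a minimal Python illustration of that idea, not SEPIA's actual data model; all names and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Past events: precise, absolute dates. Future events: imprecise,
# relative dates (an offset window from a reference event).

@dataclass
class PastEvent:
    label: str
    at: datetime                 # precise, absolute

@dataclass
class FutureEvent:
    label: str
    relative_to: str             # label of the reference event
    earliest: timedelta          # lower bound of the offset
    latest: timedelta            # upper bound of the offset

def admissible_window(plan, ev):
    """Resolve a planned event to an absolute time window once its
    reference event has actually occurred."""
    ref = plan[ev.relative_to]   # must be a PastEvent by now
    return ref.at + ev.earliest, ref.at + ev.latest

# Usage: a drug administration planned 24-36 h after a blood test.
plan = {"blood_test": PastEvent("blood_test", datetime(2024, 3, 1, 8, 0))}
dose = FutureEvent("dose_1", "blood_test",
                   timedelta(hours=24), timedelta(hours=36))
print(admissible_window(plan, dose))
```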


2021 ◽  
pp. 1-27
Author(s):  
D. Sartori ◽  
F. Quagliotti ◽  
M.J. Rutherford ◽  
K.P. Valavanis

Abstract: Backstepping represents a promising control law for fixed-wing Unmanned Aerial Vehicles (UAVs). Its non-linearity and its adaptation capabilities guarantee adequate control performance over the whole flight envelope, even when the aircraft model is affected by parametric uncertainties. In the literature, several works apply backstepping controllers to various aspects of fixed-wing UAV flight. Unfortunately, many of them have not been implemented in a real-time controller, and only a few attempt simultaneous longitudinal and lateral–directional aircraft control. In this paper, an existing backstepping approach able to control longitudinal and lateral–directional motions is adapted into a control strategy suitable for small UAV autopilots. Rapidly changing inner-loop variables are controlled with non-adaptive backstepping, while slower outer-loop navigation variables are controlled with a Proportional–Integral–Derivative (PID) scheme. The controller is evaluated through numerical simulations for two very different fixed-wing aircraft performing complex manoeuvres. The controller’s behaviour under model parametric uncertainties or in the presence of noise is also tested. The performance of a real-time implementation on a microcontroller is evaluated through hardware-in-the-loop simulation.
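The outer half of the inner/outer split described above can be sketched as a standard discrete PID loop driving a slow navigation variable. This is a generic textbook PID, not the paper's controller; the gains and the one-line plant model are illustrative assumptions.

```python
class PID:
    """Discrete PID controller for a slow outer-loop variable."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Usage: track a 100 m altitude command against a crude first-order
# plant; in the paper's scheme this output would become a reference
# for the backstepping inner loop rather than act on the plant directly.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)
alt, cmd = 0.0, 100.0
for _ in range(2000):
    u = pid.update(cmd, alt)
    alt += 0.02 * u              # toy plant response, illustration only
print(round(alt, 1))
```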


Author(s):  
Jahwan Koo ◽  
Nawab Muhammad Faseeh Qureshi ◽  
Isma Farah Siddiqui ◽  
Asad Abbas ◽  
Ali Kashif Bashir

Abstract: Real-time data streaming fetches live sensory segments of a dataset in a heterogeneous distributed computing environment. This process assembles data chunks at a rapid encapsulation rate through a streaming technique that bundles sensor segments into multiple micro-batches and extracts them into a repository. Recently, the acquisition process has been enhanced with an additional feature for exchanging IoT devices’ datasets, which comprise two components: (i) sensory data and (ii) metadata. The sensory data includes record information, while the metadata consists of logs, heterogeneous events, and routing-path tables used to transmit micro-batch streams into the repository. The real-time acquisition procedure uses a Directed Acyclic Graph (DAG) to extract live query outcomes from in-place micro-batches through MapReduce stages and returns a result set. However, several bottlenecks affect performance during execution: (i) only homogeneous micro-batches are formed, (ii) dataset diversification is complex, (iii) heterogeneous data tuples must be processed, and (iv) the DAG workflow is strictly linear. As a result, processing latency is high and extracting event-enabled IoT datasets incurs additional cost. Thus, a Spark cluster that processes Resilient Distributed Datasets (RDDs) at a fast pace in Random Access Memory (RAM) falls short of the expected robustness when processing IoT streams in a distributed computing environment. This paper presents an IoT-enabled Directed Acyclic Graph (I-DAG) technique that labels micro-batches at the stage of building a stream event and arranges stream elements with event labels. In the next step, heterogeneous stream events are processed through the I-DAG workflow, which supports non-linear DAG operations for extracting query results in a Spark cluster. The performance evaluation shows that I-DAG resolves the homogeneous-only stream-event issue and provides an effective solution for heterogeneous stream events over IoT-enabled datasets in Spark clusters.
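The labelling step that I-DAG performs when building a stream event can be approximated, for illustration, with standard PySpark Structured Streaming: each incoming tuple is tagged with an event label so that downstream stages can branch on the label instead of assuming homogeneous micro-batches. This is a hedged sketch, not the authors' implementation; the socket source, field layout and label names are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, split, when

spark = SparkSession.builder.appName("event-labelling").getOrCreate()

# Assumed input: lines "<device_id>,<kind>,<value>" on a local socket.
raw = (spark.readStream.format("socket")
       .option("host", "localhost").option("port", 9999).load())

fields = split(raw["value"], ",")
events = raw.select(
    fields.getItem(0).alias("device_id"),
    fields.getItem(1).alias("kind"),
    fields.getItem(2).cast("double").alias("reading"),
)

# Tag heterogeneous tuples with an event label so later stages can
# branch on it rather than assume a homogeneous micro-batch.
labelled = events.withColumn(
    "event_label",
    when(col("kind") == "sensor", "sensory-data")
    .when(col("kind") == "log", "metadata")
    .otherwise("unknown"),
)

query = (labelled.writeStream.outputMode("append")
         .format("console").start())
query.awaitTermination()
```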


2005 ◽  
Vol 56 (8-9) ◽  
pp. 831-842 ◽  
Author(s):  
Monica Carfagni ◽  
Rocco Furferi ◽  
Lapo Governi

2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the most essential human senses, playing a major role in how people perceive their surrounding environment. For people with visual impairment, however, that perception is very different: they are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guiding system that addresses the navigation problem of visually impaired people so that they can travel without difficulty. The system helps by detecting objects and giving the necessary information about them: what the object is, its location, the detection precision, its distance from the user, and so on. All this information is conveyed through audio commands so that users can navigate freely anywhere, anytime, with little or no assistance. Object detection is done using the You Only Look Once (YOLO) algorithm. Because capturing the video and sending it to the main module must happen at high speed, a Graphics Processing Unit (GPU) is used; this raises the overall speed of the system and lets the visually impaired user receive the necessary instructions as quickly as possible. The process starts with capturing real-time video, sending it for analysis and processing, and obtaining the computed results, which are conveyed to the user by means of a hearing aid. As a result, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
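The capture-detect-speak pipeline described above can be sketched with OpenCV's DNN module and a text-to-speech engine. This is a minimal illustration, not the authors' system: the YOLOv3 model files, the CUDA backend flags (which require a CUDA-enabled OpenCV build) and the pyttsx3 speech engine are assumptions standing in for whatever the paper used.

```python
import cv2
import pyttsx3

# Load a pretrained YOLOv3 model (file names assumed) and push
# inference onto the GPU, mirroring the paper's use of a GPU.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
classes = open("coco.names").read().strip().split("\n")
speaker = pyttsx3.init()

cap = cv2.VideoCapture(0)                  # real-time camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    for out in outputs:
        for det in out:                    # det = [cx, cy, w, h, obj, scores...]
            scores = det[5:]
            cls = int(scores.argmax())
            if scores[cls] > 0.5:
                # Crude location cue from the normalized box centre.
                side = "left" if det[0] < 0.5 else "right"
                speaker.say(f"{classes[cls]} on the {side}")
    speaker.runAndWait()                   # audio feedback for this frame
cap.release()
```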

