Artificial Intelligence Software and Hardware Platforms

Author(s):  
Rajesh Gadiyar ◽  
Tong Zhang ◽  
Ananth Sankaranarayanan


Ergodesign ◽
2020 ◽  
Vol 2020 (1) ◽  
pp. 19-24
Author(s):  
Igor Pestov ◽  
Polina Shinkareva ◽  
Sofia Kosheleva ◽  
Maxim Burmistrov

This article presents a hardware-software system for access control and management built on the Arduino Uno and Raspberry Pi hardware platforms. The developed system collects access data and stores it in a database. Because the complex can be carried and deployed anywhere, it offers high mobility.
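As a rough illustration (not taken from the article), the Raspberry Pi side of such a complex is often a small serial-to-database bridge. The sketch below assumes the Arduino Uno emits one comma-separated badge event per line over USB serial; the device path, baud rate, and table schema are placeholders, not the authors' design.

```python
# Hypothetical Raspberry Pi side of an Arduino-based access-control logger.
# Assumes the Arduino Uno prints one "card_id,granted" line per badge event.
import sqlite3
import serial  # pyserial (pip install pyserial)

PORT = "/dev/ttyACM0"   # illustrative device path
BAUD = 9600

def main():
    db = sqlite3.connect("access_log.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        "ts DATETIME DEFAULT CURRENT_TIMESTAMP, card_id TEXT, granted INTEGER)"
    )
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if "," not in line:
                continue  # skip empty or malformed reads
            card_id, granted = line.split(",", 1)
            db.execute(
                "INSERT INTO events (card_id, granted) VALUES (?, ?)",
                (card_id, int(granted)),
            )
            db.commit()  # persist each event immediately

if __name__ == "__main__":
    main()
```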


2021 ◽  
Vol 10 (1) ◽  
pp. 13
Author(s):  
Claudia Campolo ◽  
Giacomo Genovese ◽  
Antonio Iera ◽  
Antonella Molinaro

Several booming Internet of Things (IoT) applications rely on advanced artificial intelligence (AI) and, in particular, machine learning (ML) algorithms to assist users and make decisions on their behalf in a large variety of contexts, such as smart homes, smart cities, and smart factories. Although the traditional approach is to deploy such compute-intensive algorithms in the centralized cloud, the recent proliferation of low-cost, AI-powered microcontrollers and consumer devices paves the way for spreading intelligence pervasively along the cloud-to-things continuum. The take-off of this promising vision may be hindered by the resource constraints of IoT devices and by the heterogeneity of (mostly proprietary) AI-embedded software and hardware platforms. In this paper, we propose a solution for distributed AI deployment at the deep edge, which lays its foundation in the IoT virtualization concept. We design a virtualization layer, hosted at the network edge, that is in charge of the semantic description of AI-embedded IoT devices and can hence expose as well as augment their cognitive capabilities in order to feed intelligent IoT applications. The proposal has been devised with the twofold aim of (i) relieving the pressure on constrained devices that are solicited by multiple parties interested in accessing their generated data and inference results, and (ii) targeting interoperability among AI-powered platforms. A Proof-of-Concept (PoC) is provided to showcase the viability and advantages of the proposed solution.
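To make the virtualization idea concrete, here is a minimal sketch, not the paper's implementation: an edge-hosted proxy that holds a semantic description of an AI-embedded device and caches its inference results, so that repeated requests from multiple parties do not reach the constrained device. The field names, cache policy, and stand-in query function are all assumptions.

```python
# Illustrative sketch of the virtualization idea: an edge registry holds a
# semantic description of each AI-embedded device and caches its inference
# results so that multiple consumers do not repeatedly wake the device.
# All field names and the cache policy are assumptions, not the paper's schema.
import time

class EdgeDeviceProxy:
    def __init__(self, device_id, capabilities, query_device, ttl_s=5.0):
        self.device_id = device_id
        self.capabilities = capabilities      # semantic description: task, model, ...
        self._query_device = query_device     # callable that hits the real device
        self._ttl_s = ttl_s
        self._cache = {}                      # input -> (timestamp, result)

    def infer(self, sample):
        hit = self._cache.get(sample)
        if hit and time.time() - hit[0] < self._ttl_s:
            return hit[1]                     # served from the edge; device stays idle
        result = self._query_device(sample)
        self._cache[sample] = (time.time(), result)
        return result

proxy = EdgeDeviceProxy(
    device_id="cam-01",
    capabilities={"task": "image-classification", "model": "mobilenet-int8"},
    query_device=lambda s: "person",          # stand-in for the real MCU call
)
print(proxy.infer("frame-42"))  # first call reaches the device
print(proxy.infer("frame-42"))  # second call is answered by the edge cache
```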


2018 ◽  
Vol 9 (2) ◽  
pp. 257-274
Author(s):  
Ririen Kusumawati

Computer technology has advanced at an incredible pace, with software and hardware vendors competing to meet customers' needs. This research intends to spread knowledge of information technology, specifically of artificial intelligence. The concept of artificial intelligence is to adopt and imitate human form, character, and habit and to implement them on a computer. Using a natural approach, the research investigates whether artificial intelligence (AI) will produce a duplication of God's creation. Another important motivation for AI research is to create a computer that is smart and able to understand the working of the human brain. AI has accordingly become more practical, with faster CPUs, cheaper mass memory, and sophisticated software tools. The concept of integrating AI science, a collaborative art among sub-fields of technology, will stimulate further AI research and remain an interesting topic for researchers developing AI technology in the future.


Author(s):  
Stefan Richter ◽  
Norbert Kuhn ◽  
Stefan Naumann ◽  
Michael Schmidt

Many governmental institutions have started to provide their customers with access to governmental documents by electronic means, considerably changing the way authorities and citizens interact. It is therefore worthwhile to look at both the chances and the risks that this process of change implies for disabled citizens. Under various laws and legal directives, governmental authorities have a particular responsibility to consider the needs of handicapped persons, and they need to apply appropriate techniques for these groups to avoid an "Accessibility Divide". The discussion builds on the observation that governmental processes are mostly based on the exchange of forms between authorities and citizens. The authors distinguish three scenarios for such processes, with paper as the means of transport at one end and completely electronic treatment at the other. For each scenario there exist tools that improve accessibility for people with certain disabilities, ranging from standard technologies such as magnified characters for Web access, through assistive technologies such as document cameras, to more sophisticated integrated solutions for handling forms and government processes. This chapter focuses on approaches that provide access to governmental processes for people with visual impairments, elderly people, illiterates, and immigrants. It also sees a chance to enable electronic government processes in developing countries, where citizens have less experience with IT-based processes. The main part of the chapter describes an approach that combines scanned images of paper-based forms containing textual information with text-to-speech synthesis, yielding an audio-visual document representation. It exploits standard XML-based document formats and web service technology to achieve independence from particular software and hardware platforms. This is also helpful for conventional governmental processes, because people in the groups named above often have problems accessing non-digitized information as well, for instance when they have to read announcements in public administration offices.
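As a rough stand-in for that approach (the chapter itself builds on XML document formats and web services, which are not reproduced here), the following sketch pairs OCR with offline speech synthesis to turn a scanned form into an audio-visual representation; pytesseract and pyttsx3 are assumed substitutes for the authors' actual stack.

```python
# A minimal stand-in for the chapter's pipeline: extract the text layer from a
# scanned form image and read it aloud. pytesseract/pyttsx3 are assumptions;
# the chapter itself builds on XML document formats and web services instead.
import pytesseract           # OCR wrapper around Tesseract
import pyttsx3               # offline text-to-speech engine
from PIL import Image

def read_form_aloud(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)   # slower speech for better intelligibility
    engine.say(text)
    engine.runAndWait()
    return text                       # keep the text for on-screen highlighting

if __name__ == "__main__":
    print(read_form_aloud("scanned_form.png"))  # illustrative file name
```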


Author(s):  
Rajasvaran Logeswaran ◽  
Li-Choo Chen

This paper investigates the performance of two proposed load balancing algorithms for Object-Oriented Distributed Service Architectures (DSA), which are open and flexible, enabling rapid and easy development of new applications on various kinds of software and hardware platforms and catering for telecommunications and distributed medical applications. The proposed algorithms, namely the Node Status Algorithm and the Random Sender Initiated Algorithm, were developed as solutions to the performance problems faced by the DSA. Their performance has been tested and compared against baseline load balancing algorithms, namely the Random Algorithm and the Shortest Queue Algorithm. Simulation results show that both proposed algorithms outperform the baselines, especially under heavily loaded conditions. The paper discusses the mechanisms of the algorithms and reports on the investigations carried out in comparing the load balancing algorithms implemented on a DSA-based network, which is useful for the distributed computing requirements of the medical field.
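The abstract does not specify the internals of the Node Status and Random Sender Initiated algorithms, so the toy simulation below only reconstructs the two named baselines, the Random Algorithm and the Shortest Queue Algorithm, to illustrate why queue-aware dispatching wins under heavy load. All parameters are invented for illustration.

```python
# Toy comparison of the two baseline policies named in the paper; the proposed
# algorithms are not specified in the abstract and are not reproduced here.
import random

def simulate(policy, n_nodes=4, ticks=5_000, arrivals_per_tick=4, seed=1):
    """Each tick, jobs arrive and are dispatched; each node then serves one job."""
    rng = random.Random(seed)
    queues = [0] * n_nodes
    total_backlog = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if policy == "random":
                i = rng.randrange(n_nodes)                       # Random Algorithm
            else:
                i = min(range(n_nodes), key=queues.__getitem__)  # Shortest Queue
            queues[i] += 1
        queues = [max(0, q - 1) for q in queues]                 # one service per node
        total_backlog += sum(queues)
    return total_backlog / ticks

# At full load (4 arrivals/tick on 4 nodes), random dispatch lets backlog build
# up on unlucky nodes, while shortest-queue keeps the system balanced.
for policy in ("random", "shortest_queue"):
    print(f"{policy}: mean backlog = {simulate(policy):.1f} jobs")
```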


2011 ◽  
pp. 42-87
Author(s):  
Ashutosh Deshmukh

The Internet spins a vast web of information across the globe. Data and information flow freely, available to anyone for learning, understanding, and analysis. Organizations can cooperate across departments, regions, and countries; ERP II and ECM herald the era of intra- and inter-business collaboration. Sounds wonderful, so what is the problem? The problem is as old as mainframe vs. PC and Windows vs. Macintosh: data can move freely but are not standardized. Data streams carry no universal meaning; consequently, data are not understood by all systems, easily analyzed, translated across different languages, or human readable. Specialized hardware and software are needed for data decoding, and if the required tools are not available, then you are out of luck. This problem is not confined to the Internet. A great deal of money (by one estimate, almost 20% of the U.S. gross national product) is spent on generating new information, and more than 90% of this information resides in documents, not in databases. Businesses in the U.S. produce approximately 100 billion documents per year, stored in various formats across a range of computer systems. These disparate storage formats cause severe problems in accessing, searching, and distributing the information.

Any solution (a combination of information technology products and services) that manages information across diverse software and hardware platforms must address a few key requirements. First, the solution should be transparent to users; technical details should not be their concern. Second, users should be able to save data and information in the desired format, for example databases, text files, or proprietary formats. Third, the solution must retrieve data and information intelligently, being knowledgeable about the meaning of the information itself. Finally, it should be capable of delivering the desired output in print, on screen, on the Web, or in CD/DVD format.
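A small example of the remedy this points toward: self-describing markup lets any platform with a standard parser recover both the data and its meaning, with no specialized decoding tools. The tag names below are illustrative, not from the chapter.

```python
# The core complaint is data without universal meaning. Self-describing markup
# such as XML is one remedy: any platform with a standard parser can recover
# the structure. Tag names here are illustrative only.
import xml.etree.ElementTree as ET

invoice = ET.Element("invoice", currency="USD")
ET.SubElement(invoice, "customer").text = "Acme Corp"
ET.SubElement(invoice, "amount").text = "1250.00"

document = ET.tostring(invoice, encoding="unicode")
print(document)  # <invoice currency="USD"><customer>Acme Corp</customer>...

# Any other system can re-read it with nothing but a standard XML parser.
parsed = ET.fromstring(document)
print(parsed.find("customer").text, parsed.get("currency"))
```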


Author(s):  
David Mulvaney ◽  
Ian Sillitoe ◽  
Erick Swere ◽  
Yang Wang ◽  
Zhenhuan Zhu

Author(s):  
Konstantin Ryabinin ◽  
Svetlana Chuprina ◽  
Ivan Labutin

In the last decade, advances in software and hardware have increased interest in conducting experiments in the neurosciences, especially those related to human-machine interaction. Many mature and popular platforms support experiments in this area, including systems for presenting stimuli. However, these solutions often lack high-level adaptability to specific conditions, specific experiment setups, and the third-party software and hardware that may be involved in experimental pipelines. This paper presents an adaptable solution, based on ontology engineering, for creating and tuning EEG-based brain-computer interfaces. The solution relies on the ontology-driven SciVi visual analytics platform developed earlier. In the present work, we introduce new capabilities of SciVi that enable organizing pipelines for neuroscience-related experiments, including the presentation of audio-visual stimuli as well as the retrieval, processing, and analysis of EEG data. The distinctive feature of our approach is its use of ontological descriptions of both the neural interface and the processing tools involved. This increases the semantic power of experiments, simplifies the reuse of pipeline parts between experiments, and allows data acquisition, storage, processing, and visualization to be distributed automatically across computing nodes in the network, balancing the computational load and accommodating various hardware platforms, EEG devices, and stimulus controllers.
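SciVi's ontology machinery is not reproduced here; purely as an illustration of the kind of processing node such a pipeline might host, the sketch below estimates alpha-band power from one EEG channel with Welch's method. The sampling rate, band edges, and synthetic signal are assumptions.

```python
# One processing node such a pipeline might host: estimating alpha-band
# (8-12 Hz) power from an EEG channel via Welch's method. The sampling rate
# and band edges are illustrative; SciVi's ontology-driven nodes differ.
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg: np.ndarray, fs: float = 250.0) -> float:
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-second windows
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))  # integrate PSD

# Synthetic 10 s recording: a 10 Hz rhythm buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 250.0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"alpha power: {alpha_band_power(signal):.3f}")
```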


Author(s):  
Ram Chander

Preservation of digital resources in the 21st century has been a great challenge for library and information professionals. Digital libraries have been built all over the world, and libraries are engaged in creating and maintaining them. One of the main challenges in maintaining digital libraries is digital preservation: ensuring that digital records are filed and remain available through time. Digital preservation has long been a concern of the library and information community, and preserving digital documents has become ever more obviously necessary because of the fragility of digital data and the obsolescence of software and hardware platforms. The present chapter focuses on digital preservation strategies, policies, functions, current activities, and guidelines.
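One concrete preservation function behind such strategies is fixity checking: recording a checksum when a digital object is ingested and re-verifying it later to detect silent corruption. The sketch below is a generic illustration; the manifest format and file paths are invented.

```python
# Fixity checking, a routine digital-preservation function: record a checksum
# at ingest, re-verify it later to detect silent corruption or bit rot.
# The manifest format and paths are illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def record_fixity(files, manifest="fixity.json"):
    Path(manifest).write_text(
        json.dumps({str(p): sha256_of(Path(p)) for p in files}, indent=2)
    )

def verify_fixity(manifest="fixity.json"):
    stored = json.loads(Path(manifest).read_text())
    return {p: sha256_of(Path(p)) == digest for p, digest in stored.items()}
```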

