Mini-ATX Computer System in Vehicle

2015 ◽  
Vol 1115 ◽  
pp. 484-487 ◽  
Author(s):  
Muhammad Sami ◽  
Akram M. Zeki

The aim of this study is to design and assemble a car computer system by customizing and building a Linux kernel and environment to run compatibly and efficiently on a mini-ITX computer. The objective is to create a customized, lightweight GNU/Linux operating system for use on an in-vehicle computer. The system also optimizes the size and set of functionalities most likely to be implemented in a car computer system.

Keywords: mini-ATX, CarPC, Linux, Ubuntu, Qt, QML

2021 ◽  
Vol 17 (2) ◽  
Author(s):  
Kisron Kisron ◽  
Bima Sena Bayu Dewantara ◽  
Hary Oktavianto

In a vision-based real-time detection system using computer vision, the most important consideration is computation time. In general, a detection system runs heavy algorithms that strain the performance of a computer system, especially when the computer must handle two or more different detection processes. This paper presents an effort to improve the performance of the trash detection system and the target-partner detection system of a trash-bin robot with social interaction capabilities. The trash detection system uses a combination of the Haar Cascade algorithm, Histogram of Oriented Gradients (HOG), and the Gray-Level Co-occurrence Matrix (GLCM). Meanwhile, the target-partner detection system uses a combination of depth information and the Histogram of Oriented Gradients (HOG) algorithm. The Robot Operating System (ROS) is used to run each system as a separate module, which utilizes all available computer system resources while reducing computation time. As a result, on the ROS platform the trash detection system runs at 7.003 fps and the human target detection system at 8.515 fps. In line with the increase in fps, the metrics also improve: in trash detection, accuracy increases to 77%, precision to 87.80%, recall to 82.75%, and F1-score to 85.20%; in human target detection, accuracy increases to 81%, precision to 91.46%, recall to 86.20%, and F1-score to 88.42%.
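The idea of splitting detection pipelines into separate modules so they can use all available cores can be illustrated with a minimal Python sketch. This is an assumption-laden analogue, not the paper's ROS implementation: the two detector functions are hypothetical stand-ins, and Python's `multiprocessing` plays the role ROS nodes play in the paper.

```python
from multiprocessing import Process, Queue

def trash_detector(frames, out):
    # Stand-in for the Haar Cascade + HOG + GLCM trash detector.
    for f in frames:
        out.put(("trash", f))

def partner_detector(frames, out):
    # Stand-in for the depth + HOG target-partner detector.
    for f in frames:
        out.put(("partner", f))

def run_parallel(n_frames=5):
    """Run both detectors as separate OS processes, so each can occupy
    its own core, the way ROS runs each module as its own node."""
    results = Queue()
    frames = range(n_frames)
    procs = [Process(target=trash_detector, args=(frames, results)),
             Process(target=partner_detector, args=(frames, results))]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid blocking on a full pipe.
    detections = [results.get() for _ in range(2 * n_frames)]
    for p in procs:
        p.join()
    return detections

if __name__ == "__main__":
    print(len(run_parallel()))
```

Because the two pipelines share nothing but a result queue, the operating system is free to schedule them on different cores, which is the source of the fps gains the paper reports.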


1989 ◽  
Vol 11 (3) ◽  
pp. 119-123 ◽  
Author(s):  
Arthur A. Eggert ◽  
Kenneth A. Emmerich ◽  
Thomas J. Blankenheim ◽  
Gary J. Smulka

Improvements in the performance of a laboratory computer system do not necessarily require the replacement of major portions of the system, and may not require the acquisition of any hardware at all. Major bottlenecks may exist in the way the operating system manages its resources and in the algorithm used for timesharing decisions. Moreover, significant throughput improvements may be attainable by switching to a faster storage device if substantial disk activity is performed. In this study, the fractions of time used for each type of task a laboratory computer system performs (e.g. application programs, disk transfer, queue cycler) are defined and measured. Methods for reducing the time fractions of the various types of overhead are evaluated through before-and-after studies. The combined results of the three studies indicated that a 50% improvement could be gained through system tuning and faster storage, without replacement of the computer itself.
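The time-fraction bookkeeping described above can be sketched in a few lines; the task categories and sample durations below are illustrative assumptions, not data from the study.

```python
from collections import defaultdict

def time_fractions(samples):
    """Given (task_type, seconds) records from system monitoring,
    return each task type's fraction of total observed time."""
    totals = defaultdict(float)
    for task, seconds in samples:
        totals[task] += seconds
    grand = sum(totals.values())
    return {task: t / grand for task, t in totals.items()}

# Hypothetical monitoring records totalling 100 s of observed time.
samples = [("application", 42.0), ("disk_transfer", 30.0),
           ("queue_cycler", 18.0), ("application", 10.0)]
print(time_fractions(samples))  # application: 0.52, disk_transfer: 0.30, ...
```

Measuring these fractions before and after a tuning change is exactly the before-and-after methodology the study uses to attribute throughput gains to specific kinds of overhead.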


First Monday ◽  
2005 ◽  
Author(s):  
Jae Yun Moon ◽  
Lee Sproull

This paper provides a historical account of how the Linux operating system kernel was developed, from three different perspectives. Each focuses on different critical factors in its success at the individual, group, and community levels. The technical and management decisions of Linus Torvalds the individual were critical in laying the groundwork for a collaborative software development project that has lasted almost a decade. The contributions of volunteer programmers distributed worldwide enabled the development of an operating system on a par with proprietary operating systems. The Linux electronic community was the organizing structure that coordinated the efforts of the individual programmers. The paper concludes by summarizing the factors important in the successful distributed development of the Linux kernel, and the implications for organizationally managed distributed work arrangements.


1976 ◽  
Vol 2 (4) ◽  
pp. 54-64 ◽  
Author(s):  
B. Veldstra ◽  
J.M.H. Dassen

2018 ◽  
Vol 1 (2) ◽  
pp. 86-93 ◽  
Author(s):  
I Putu Agus Eka Pratama ◽  
Anak Agung Bagus Arya Wiradarma

The Linux operating system is known for its open-source character, meaning everyone is free to develop Linux using the available source code. The result of such development is called a Linux distribution (distro). There are various Linux distributions for various purposes; one of them is Kali Linux, a distro developed for penetration testing of computer system security. Kali Linux provides a variety of tools to perform its functions. However, users who want the functionality of Kali Linux without replacing the distro they already use can turn to Katoolin. Katoolin provides convenience and flexibility for users who want to use Kali Linux tools for penetration testing of computer system security without replacing their current distro or doing a full install of Kali Linux. One case study that can be solved using a Kali Linux based tool available through Katoolin is reverse engineering. The case study was solved using apktool, one of the tools in Katoolin's Reverse Engineering category.


2014 ◽  
Vol 666 ◽  
pp. 69-76 ◽  
Author(s):  
Tao Tao ◽  
Hui Yi Zhang ◽  
Xiao Zheng ◽  
Zhi Xiang Yuan ◽  
Xiu Jun Wang

Because the traditional single-chip microcomputer system lacks the management capabilities of a modern operating system, a technique for AC synchronous sampling was presented and implemented, based on the embedded microprocessor S3C2410 and a real-time Linux operating system kernel. The technique divides the sampling process into two parts: dynamic frequency tracking and equal-interval synchronous sampling. Multiple tasks with different priorities are used to sample the signal and process the data; all tasks are scheduled by the preemptive Linux kernel. The technique reduces the conflict between precision and real-time behavior. The hardware and software architecture, the method of building a real-time operating system on the Linux kernel, signal frequency measurement, signal sampling, and harmonic computation are introduced in detail.
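The two-part scheme (track the fundamental frequency, then sample a whole number of periods and compute harmonics) can be illustrated with a minimal sketch in plain Python. This is only an illustration of the idea, not the paper's embedded S3C2410 implementation; the sampling rate, test signal, and DFT-bin bookkeeping are assumptions.

```python
import math

def estimate_frequency(samples, fs):
    """Dynamic frequency tracking: estimate the fundamental frequency
    from the rising zero crossings of a sampled AC waveform."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 2:
        return None
    samples_per_period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return fs / samples_per_period

def harmonic_amplitude(samples, k):
    """Amplitude at DFT bin k by direct summation. With an integer
    number P of fundamental periods in the buffer, harmonic h of the
    fundamental sits at bin k = h * P."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

if __name__ == "__main__":
    fs, f0, n = 6400, 50, 1280          # 10 periods of a 50 Hz signal
    sig = [math.sin(2 * math.pi * f0 * i / fs)
           + 0.2 * math.sin(2 * math.pi * 3 * f0 * i / fs)
           for i in range(n)]
    print(estimate_frequency(sig, fs))   # ~50 Hz
    print(harmonic_amplitude(sig, 10))   # fundamental (bin 10), ~1.0
    print(harmonic_amplitude(sig, 30))   # 3rd harmonic (bin 30), ~0.2
```

Sampling an integer number of periods, which is what the tracked frequency makes possible, is what keeps the harmonic estimates free of spectral leakage.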


Author(s):  
Yair Wiseman ◽  
Joel Isaacson ◽  
Eliad Lubovsky ◽  
Pinchas Weisberg

The Linux kernel stack has a fixed size, and there is no mechanism to prevent the kernel from overflowing it. Attackers can exploit this weakness to place unwanted information in the memory of the operating system and gain control over the system. To prevent this problem, the authors introduce a dynamically sized kernel stack that can be integrated into the standard Linux kernel. The well-known paging mechanism is reused, with some changes, to enable the kernel stack to grow.


Author(s):  
Jitendra Kumar Rai ◽  
Atul Negi ◽  
Rajeev Wankar ◽  
K. D. Nayak

Sharing resources such as caches and memory buses between the cores of multi-core processors may cause performance bottlenecks for running programs. In this paper, the authors describe a meta-scheduler that adapts process scheduling decisions to reduce contention for shared L2 caches on multi-core processors. The meta-scheduler takes into account the multi-core topology as well as the L2-cache-related characteristics of the processes. Using a model generated by machine learning, it predicts the L2 cache behavior, i.e., the solo-run L2-cache stress, of the programs. It runs in user mode and guides the underlying operating system process scheduler toward intelligent placement of processes that reduces contention for shared L2 caches. In their experiments, the authors observed up to 12 percent speedup in individual as well as overall performance when using the meta-scheduler, compared to the default process scheduler (the Completely Fair Scheduler) of the Linux kernel.
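The placement idea can be sketched as follows. The stress scores, the pairing heuristic, and the function name are illustrative assumptions, not the authors' meta-scheduler: a common contention-aware heuristic is to co-locate a high-stress process with a low-stress one on each pair of cores that shares an L2 cache, so no cache is hammered by two heavy processes at once.

```python
def pair_by_cache_stress(stress):
    """Given predicted solo-run L2-cache stress per process, pair the
    heaviest remaining process with the lightest, so each L2-sharing
    core pair hosts one 'heavy' and one 'light' process."""
    ranked = sorted(stress, key=stress.get, reverse=True)
    pairs = []
    while len(ranked) >= 2:
        pairs.append((ranked.pop(0), ranked.pop()))  # heaviest + lightest
    return pairs

# Hypothetical predicted stress scores for four processes.
stress = {"A": 0.9, "B": 0.1, "C": 0.7, "D": 0.3}
print(pair_by_cache_stress(stress))  # [('A', 'B'), ('C', 'D')]
```

On Linux, each resulting pair could then be pinned to an L2-sharing pair of cores with `os.sched_setaffinity`, while the prediction model itself (built from hardware performance counters in the paper's setting) is omitted here.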

