The Design and Research of a Telemedicine Consultation System Based on SIP

2012 ◽  
Vol 457-458 ◽  
pp. 834-840
Author(s):  
Xiang Hua Sun

This paper introduces the design of a Telemedicine Consultation System (TCS), describes its chief functions and structure, and analyses the key techniques used to realize the system: the complete SIP protocol mechanism handles session establishment and termination; the media part adopts JMF (Java Media Framework) for audio and video data acquisition, processing, sending, receiving, and playback; the graphical interface is implemented with Java Swing; and image compression together with audio/video processing technology completes the system.
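Since SIP is a plain-text protocol, the session-setup side can be illustrated without a full stack. The minimal Python sketch below (the paper's system is Java/JMF; Python is used here only for illustration) builds an RFC 3261-style INVITE and sends it over UDP. All addresses, tags and the Call-ID are placeholder values, not details from the paper, and a real TCS endpoint would use a complete SIP stack rather than raw sockets.

```python
import socket
import uuid

def build_invite(caller, callee, local_ip, local_port):
    """Assemble a minimal RFC 3261 INVITE request (no SDP body)."""
    return (
        f"INVITE sip:{callee} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{caller}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{callee}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}\r\n"
        f"CSeq: 1 INVITE\r\n"
        f"Contact: <sip:{caller.split('@')[0]}@{local_ip}:{local_port}>\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

if __name__ == "__main__":
    # Placeholder addresses (TEST-NET range) for illustration only.
    request = build_invite("doctor@clinic.example", "patient@home.example",
                           "192.0.2.10", 5060)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(request.encode("ascii"), ("192.0.2.20", 5060))  # remote UA
    print(request)
```

A 200 OK response followed by an ACK would complete the session setup; a BYE request terminates it, which is the SIP mechanism the paper relies on.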

Author(s):  
Haixu Xi ◽  
Feiyue Ye ◽  
Sheng He ◽  
Yijun Liu ◽  
Hongfen Jiang

Batch processes are common in traffic video data processing, for instance in traffic video image processing and intelligent transportation, and batch processing can raise efficiency and conserve resources. However, owing to limited research on the conditions of traffic video data processing, batch processing activities in this area remain minimally examined. In this study we developed a workflow system that employs functional dependency mining over a relational database. The Bayesian network, a focus area of data mining, provides users with an intuitive means of expressing causality, and graph theory is also used in data mining. The proposed approach relies on relational database functional dependencies to remove redundant attributes, reduce interference, and select an attribute order. In selective hidden naive Bayes (SHNB) this attribute order is used only once; to account for the influence of hidden naive Bayes (HNB), we introduce it twice rather than using one pair of HNB. We additionally designed and implemented algorithms for mining dependencies from the execution log of batch traffic video processing.
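The abstract does not spell out the SHNB construction, so the sketch below is only a generic hidden-naive-Bayes illustration in Python: each attribute is given a "hidden parent" that mixes the other attributes with weights proportional to conditional mutual information, which is the idea HNB builds on. The class name, toy data, and smoothing choices are all assumptions.

```python
import numpy as np

def cond_mutual_info(X, i, j, y):
    """Conditional mutual information I(A_i; A_j | C) from empirical counts."""
    mi = 0.0
    for c in np.unique(y):
        mask = (y == c)
        pc = mask.mean()
        xi, xj = X[mask, i], X[mask, j]
        for a in np.unique(xi):
            for b in np.unique(xj):
                p_ab = np.mean((xi == a) & (xj == b))
                if p_ab > 0:
                    mi += pc * p_ab * np.log(p_ab / (np.mean(xi == a) * np.mean(xj == b)))
    return mi

class ToyHNB:
    """Hidden naive Bayes: each attribute gets a 'hidden parent' that is a
    CMI-weighted mixture of the conditional estimates from all other attributes."""

    def fit(self, X, y):
        self.classes, self.X, self.y = np.unique(y), X, y
        d = X.shape[1]
        W = np.zeros((d, d))
        for i in range(d):
            for j in range(d):
                if i != j:
                    W[i, j] = cond_mutual_info(X, i, j, y)
        self.W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
        return self

    def _p(self, mask, col, val):
        # Laplace-smoothed estimate of P(A_col = val | mask).
        sel = self.X[mask, col]
        k = len(np.unique(self.X[:, col]))
        return (np.sum(sel == val) + 1.0) / (len(sel) + k)

    def predict(self, x):
        scores = {}
        for c in self.classes:
            mc = (self.y == c)
            logp = np.log(mc.mean())
            for i, v in enumerate(x):
                # Hidden parent: mixture over conditioning attributes j != i.
                p = sum(self.W[i, j] * self._p(mc & (self.X[:, j] == x[j]), i, v)
                        for j in range(len(x)) if j != i)
                logp += np.log(max(p, 1e-12))
            scores[c] = logp
        return max(scores, key=scores.get)

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0]]); y = np.array([0, 0, 1, 1])
print(ToyHNB().fit(X, y).predict(np.array([0, 1])))
```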


Author(s):  
Dr. Manish L Jivtode

Web services are applications that allow communication between devices over the internet independently of the underlying technology. The devices use standardized eXtensible Markup Language (XML) for information exchange: a client invokes a web service by sending an XML message and gets back an XML response message. A number of communication protocols for web services use the XML format, such as Web Services Flow Language (WSFL) and Blocks Extensible Exchange Protocol (BEEP). Simple Object Access Protocol (SOAP) and Representational State Transfer (REST) are widely used options for accessing web services, though they are not directly comparable: SOAP is a communications protocol, while REST is a set of architectural principles for data transmission. In this paper, data sizes of 1 KB, 2 KB, 4 KB, 8 KB and 16 KB were tested for both audio and video, and results were obtained for the CRUD methods. The encryption and decryption timings, in milliseconds/seconds, were recorded by programming the extensibility points of a WCF REST web service in the Azure cloud.
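A minimal sketch of the kind of measurement described, assuming a hypothetical REST endpoint: it times CRUD round-trips for each payload size using Python's requests library. The base URL and resource layout are invented, and the paper's WCF extensibility-point instrumentation is not reproduced.

```python
import time
import requests  # pip install requests

BASE = "https://example.azurewebsites.net/api/media"  # hypothetical endpoint
SIZES_KB = [1, 2, 4, 8, 16]

def time_crud(kind):
    """Time POST/GET/PUT/DELETE round-trips for each payload size."""
    for kb in SIZES_KB:
        payload = b"x" * (kb * 1024)  # dummy audio/video bytes
        url = f"{BASE}/{kind}/test-item"
        for verb, call in [("POST", lambda: requests.post(url, data=payload)),
                           ("GET", lambda: requests.get(url)),
                           ("PUT", lambda: requests.put(url, data=payload)),
                           ("DELETE", lambda: requests.delete(url))]:
            t0 = time.perf_counter()
            call()
            print(f"{kind} {kb}KB {verb}: {(time.perf_counter() - t0) * 1000:.1f} ms")

time_crud("audio")
time_crud("video")
```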


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
Yu. I. Katkov ◽  
O. S. Zvenigorodsky ◽  
O. V. Zinchenko ◽  
V. V. Onyshchenko ◽  
...  

The article is devoted to the topical issue of finding new effective compression methods and improving existing widespread ones, in order to reduce computational complexity and improve the quality of the restored image, which is important for the introduction of cloud technologies. The problem is stated as follows: to increase the efficiency of cloud storage, it is necessary to determine methods for reducing the information redundancy of digital images by fractal compression of video content, and to make recommendations on applying these methods to various practical problems. The necessity of storing high-quality video information in the new HDTV formats (2K, 4K, 8K) in cloud storage, to meet the existing needs of users, is substantiated. It is shown that the processing and transmission of high-quality video raises the problem of reducing the redundancy of video data (image compression) while preserving the desired quality of the image restored for the user. In cloud storage this problem historically stems from the contradiction between consumer requirements for image quality, on the one hand, and the volumes of video data that must be transmitted over communication channels and processed on data-center servers, together with the ways of reducing their redundancy, on the other. The solution is traditionally sought in effective technologies for compressing and archiving video information. An analysis of video compression methods and of digital video compression technology, which reduces the amount of data used to represent the video stream, is performed. Approaches to image compression in cloud storage are shown that preserve, or only slightly reduce, the amount of data while providing the user with the specified quality of the restored image. A classification of special compression methods, lossless and lossy, is provided. Based on the analysis, it is concluded that lossy compression methods are advisable for storing high-quality video information in the new HDTV 2K, 4K, 8K formats in cloud storage. The processing of video images and their encoding and compression on the basis of fractal image compression is substantiated, and recommendations for the implementation of these methods are given.
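As a concrete reference for the fractal approach the article recommends, here is a toy partitioned-iterated-function-system (PIFS) encoder/decoder in Python. It keeps only the core idea, fitting each range block with a contracted, affinely adjusted domain block, and omits the isometries, quadtree partitioning and entropy coding of practical fractal codecs; the block sizes are illustrative assumptions.

```python
import numpy as np

def encode_fractal(img, rb=4):
    """Toy PIFS encoder: for each rb x rb range block, brute-force the domain
    block (twice the size, downsampled) and affine map s*D + o that fit best."""
    h, w = img.shape
    db = 2 * rb
    domains = []  # pool of downsampled domain blocks on a coarse grid
    for y in range(0, h - db + 1, db):
        for x in range(0, w - db + 1, db):
            d = img[y:y+db, x:x+db].reshape(rb, 2, rb, 2).mean(axis=(1, 3))
            domains.append(((y, x), d))
    code = []
    for y in range(0, h, rb):
        for x in range(0, w, rb):
            rv = img[y:y+rb, x:x+rb].astype(float).flatten()
            best = None
            for pos, d in domains:
                dv = d.flatten()
                var = dv.var()
                # Least-squares contrast s and brightness o for rv ~ s*dv + o.
                s = ((dv * rv).mean() - dv.mean() * rv.mean()) / var if var else 0.0
                o = rv.mean() - s * dv.mean()
                err = ((s * dv + o - rv) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, pos, s, o)
            code.append((y, x, *best[1:]))
    return code

def decode_fractal(code, shape, rb=4, iters=8):
    """Iterate the contractive maps from an arbitrary starting image."""
    img = np.full(shape, 128.0)
    db = 2 * rb
    for _ in range(iters):
        out = np.empty_like(img)
        for y, x, (dy, dx), s, o in code:
            d = img[dy:dy+db, dx:dx+db].reshape(rb, 2, rb, 2).mean(axis=(1, 3))
            out[y:y+rb, x:x+rb] = s * d + o
        img = out
    return img

img = np.random.randint(0, 256, (32, 32)).astype(float)  # toy grayscale image
rec = decode_fractal(encode_fractal(img), img.shape)
print(np.abs(rec - img).mean())
```

The decoder illustrates the defining property of fractal codes: the stored maps are contractive, so iterating them from any starting image converges toward the encoded one.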


Author(s):  
Vladimir Barannik ◽  
Andrii Krasnorutsky ◽  
Sergii Shulgin ◽  
Valerii Yeroshenko ◽  
Yevhenii Sidchenko ◽  
...  

The subject of research in the article is the processes of video image processing using an orthogonal transformation for data transmission in information and telecommunication networks. The aim is to build a method for compressing video images while maintaining the efficiency of their delivery at a given informative probability, providing a gain in the delivery time of compressed video images and the necessary level of availability and authenticity when transmitting video data, with strict statistical constraints preserved and a controlled loss of quality. Task: to study the known algorithms for selective processing of static video at the stages of approximation and statistical coding of the data, based on the JPEG platform. The methods used are an algorithm based on the JPEG platform, methods of approximation by orthogonal transformation of information blocks, and arithmetic coding. The solution of the scientific task is a set of developed methods for reducing the computational complexity of the transformations (compression and decompression) of static video images in equipment for processing visual information signals, which increases the efficiency of information delivery. The following results were obtained. A method of video image compression preserving the efficiency of delivery at the set informative probability is developed. It fulfills the set requirements while preserving structural-statistical economy, providing a gain in the time to deliver compressed images, relative to known methods, of up to 2 times on average. This gain arises because, with only a slight difference in the compression ratio of highly saturated images compared to the JPEG 2000 method, the processing time of the developed method is lower by at least 34%. Moreover, as the volume of transmitted images and the data transmission speed in the communication channel grow, the gain in delivery time for the developed method increases. Here the loss of quality of the compressed/restored image does not exceed 2% by RMS, or is no worse than 45 dB by PSNR, which is imperceptible to the human eye. Conclusions. The scientific novelty of the obtained results is as follows: for the first time, a method of classification (separate) coding (compression) of the high-frequency and low-frequency components of the Walsh transformants of video images is proposed and investigated, which makes it possible to take into account their different dynamic ranges and to reduce their statistical redundancy using arithmetic coding. This method ensures the necessary level of availability and authenticity when transmitting video data while maintaining strict statistical constraints, and it fulfills the set task of increasing the efficiency of information delivery. In addition, the method for reducing the time complexity of converting highly saturated video images through their representation by transformants of the discrete Walsh transform was further developed. It is substantiated that a promising direction for improving image compression methods is the application of orthogonal transformations based on integer piecewise-constant functions, together with integer arithmetic coding of the transformant values. It is also substantiated that the joint use of the Walsh transform and arithmetic coding reduces the time of compression and recovery of images and removes additional statistical redundancy.

To further increase the degree of compression, a classification coding of the low-frequency and high-frequency components of the Walsh transformants is developed. It is shown that an additional reduction of statistical redundancy in the arrays of low-frequency components of the Walsh transformants is achieved through their difference representation. Recommendations are substantiated for the parameters of the compression method that yield the lowest total time of information delivery.
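For reference, here is a small Python sketch of the building blocks named in the abstract: a sequency-ordered Walsh transform of an 8x8 block and a naive split of the transformant into low-frequency and high-frequency components. The block size and the 2x2 "low-frequency" corner are illustrative assumptions; the paper's classification parameters and its arithmetic coding stage are not reproduced.

```python
import numpy as np
from scipy.linalg import hadamard  # natural (Hadamard) ordering

def walsh_matrix(n):
    """Sequency-ordered Walsh matrix: Hadamard rows sorted by sign changes."""
    h = hadamard(n)
    sequency = (np.diff(h, axis=1) != 0).sum(axis=1)
    return h[np.argsort(sequency)]

def wht2d(block, inverse=False):
    """2-D Walsh transform of an N x N block (N a power of two). Real codecs
    use integer butterflies; a matrix product is enough for illustration."""
    n = block.shape[0]
    w = walsh_matrix(n)
    return (w.T @ block @ w if inverse else w @ block @ w.T) / n

def split_components(t, k=2):
    """Illustrative classification: with sequency ordering the top-left k x k
    corner holds the low-frequency components (t[0, 0] is the block mean)."""
    low = t[:k, :k].copy()
    high = t.copy()
    high[:k, :k] = 0.0
    return low, high

block = np.arange(64, dtype=float).reshape(8, 8)
t = wht2d(block)
low, high = split_components(t)
print(np.allclose(wht2d(t, inverse=True), block))  # exact reconstruction
```

Because Walsh basis functions are piecewise-constant with values +1 and -1, the forward transform needs only additions and subtractions, which is the source of the time-complexity advantage over DCT-based JPEG platforms that the abstract describes.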


2013 ◽  
Vol 7 (3) ◽  
pp. 683-685
Author(s):  
Anil Mishra ◽  
Ms. Savita Shiwani

Images are an important part of today's digital world. However, due to the large quantity of data needed to represent modern imagery, the storage of such data can be expensive, so work on efficient image storage (image compression) has the potential to reduce storage costs and enable new applications. Lossless image compression has uses in medical, scientific, and professional video processing applications. Compression is a process in which data of a given size is reduced to a smaller size; storing and sending images in their original form presents a problem in terms of storage space and transmission speed, so compression is efficient for storage and transmission purposes. In this paper we describe a new lossless adaptive prediction-based algorithm for continuous-tone images, in which spatial redundancy exists. Our approach is to develop a new backward-adaptive prediction technique that reduces the spatial redundancy in an image. The new prediction technique, the Modified Gradient Adjusted Predictor (MGAP), is based on the prediction method used in Context-based Adaptive Lossless Image Coding (CALIC). An adaptive selection method that selects the predictor within a slope bin by minimum entropy improves the compression performance.
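The abstract does not detail the MGAP modification, so the Python sketch below implements the standard GAP rule from CALIC that MGAP starts from, using the commonly cited thresholds (80/32/8); the border clamping and the toy image are assumptions made for a self-contained example.

```python
import numpy as np

def gap_predict(img, y, x):
    """Standard Gradient Adjusted Predictor (GAP) from CALIC; MGAP modifies
    this rule. Neighbours W, N, NE, NW, WW, NN, NNE are clamped at borders."""
    p = lambda dy, dx: float(img[max(y - dy, 0), max(x - dx, 0)])
    right = lambda dy: float(img[max(y - dy, 0), min(x + 1, img.shape[1] - 1)])
    W, N, NW, WW, NN = p(0, 1), p(1, 0), p(1, 1), p(0, 2), p(2, 0)
    NE, NNE = right(1), right(2)
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient
    if dv - dh > 80:
        return W  # sharp horizontal edge
    if dh - dv > 80:
        return N  # sharp vertical edge
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        return (pred + W) / 2
    if dv - dh > 8:
        return (3 * pred + W) / 4
    if dh - dv > 32:
        return (pred + N) / 2
    if dh - dv > 8:
        return (3 * pred + N) / 4
    return pred

img = np.arange(25, dtype=np.uint8).reshape(5, 5)   # toy continuous-tone patch
residual = img[2, 2] - gap_predict(img, 2, 2)       # encoder codes the residual
print(residual)
```

The lossless coder transmits only the residuals; because the prediction is backward-adaptive (computed from already-decoded pixels), the decoder can reproduce the same prediction without side information.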


2021 ◽  
Vol 26 (2) ◽  
pp. 172-183
Author(s):  
E.S. Yanakova ◽  
G.T. Macharadze ◽  
L.G. Gagarina ◽  
A.A. Shvachko ◽  
...  

A turn from homogeneous to heterogeneous architectures makes it possible to gain in efficiency, size, weight and power consumption, which is especially important for embedded solutions. However, developing parallel software for heterogeneous computer systems is a rather complex task due to the requirements of high efficiency, easy programming and scalability. In the paper, the efficiency of parallel-pipelined processing of video information in multiprocessor heterogeneous systems on a chip (SoC) containing accelerators such as DSP, GPU, ISP, VDP and VPU has been investigated. A typical scheme of parallel-pipelined processing of video data using various accelerators is presented, and a scheme of parallel-pipelined video processing on the heterogeneous SoC 1892VM248 has been developed. Methods for efficient parallel-pipelined processing of video data in heterogeneous SoCs, spanning the operating-system level, the programming-technology level and the application level, are proposed. A comparative analysis of the most common programming technologies, such as OpenCL, OpenMP, MPI and OpenAMP, has been performed. The analysis shows that, depending on the final purpose of the device, two programming paradigms should be applied: one based on OpenCL technology (for the embedded system) and one based on MPI technology (for inter-cell and inter-processor interaction). The results obtained for parallel-pipelined processing within a face-recognition task have confirmed the effectiveness of the chosen solutions.
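Below is a minimal Python sketch of the parallel-pipelined pattern the paper investigates, with OS processes and queues standing in for the SoC accelerators and their interconnect. The stage functions are toy stand-ins, and neither OpenCL nor MPI (the technologies the paper compares) is used here; the point is only that frames flow through the stages concurrently.

```python
import multiprocessing as mp

def stage(fn, q_in, q_out):
    """One pipeline stage: a stand-in for an accelerator (ISP, DSP, GPU...)."""
    for item in iter(q_in.get, None):   # None is the shutdown sentinel
        q_out.put(fn(item))
    q_out.put(None)                     # propagate shutdown downstream

def capture(frame_id):
    return {"id": frame_id, "pixels": [frame_id] * 4}

def isp(f):   # fake denoise/scale step
    f["pixels"] = [p * 2 for p in f["pixels"]]
    return f

def dsp(f):   # fake analytics step
    f["feature"] = sum(f["pixels"])
    return f

if __name__ == "__main__":
    q1, q2, q3 = mp.Queue(), mp.Queue(), mp.Queue()
    workers = [mp.Process(target=stage, args=(isp, q1, q2)),
               mp.Process(target=stage, args=(dsp, q2, q3))]
    for w in workers:
        w.start()
    for frame in range(5):       # new frames enter while earlier ones are
        q1.put(capture(frame))   # still being processed downstream
    q1.put(None)
    for f in iter(q3.get, None):
        print(f["id"], f["feature"])
    for w in workers:
        w.join()
```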


Symmetry ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 911 ◽  
Author(s):  
Md Azher Uddin ◽  
Aftab Alam ◽  
Nguyen Anh Tu ◽  
Md Siyamul Islam ◽  
Young-Koo Lee

In recent years, the number of intelligent CCTV cameras installed in public places for surveillance has increased enormously, and as a result a large amount of video data is produced every moment. This situation creates a growing demand for distributed processing of large-scale video data. In an intelligent video analytics platform, a submitted unstructured video passes through several multidisciplinary algorithms with the aim of extracting insights and making them searchable and understandable for both human and machine. Video analytics has applications ranging from surveillance to video content management, and various industrial and scholarly solutions exist in this space. However, most existing solutions rely on a traditional client/server framework to perform face and object recognition and lack support for more complex application scenarios; they are rarely built in a scalable manner using distributed computing; they do not provide low-level distributed video processing APIs (Application Programming Interfaces); and they fail to address a complete service-oriented ecosystem that meets the growing demands of consumers, researchers and developers. To overcome these issues, in this paper we propose a distributed video analytics framework for intelligent video surveillance known as SIAT. The proposed framework can process both real-time video streams and batch video data; each real-time stream also corresponds to batch processing data, so this work correlates with the symmetry concept. Furthermore, we introduce a distributed video processing library on top of Spark. SIAT exploits state-of-the-art distributed computing technologies to ensure scalability, effectiveness and fault tolerance. Lastly, we implement and evaluate the proposed framework to validate our claims.
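SIAT's own APIs are not given in the abstract, so the sketch below shows only the underlying Spark pattern such a library builds on: frame batches distributed with parallelize, a per-frame analytics map, and a filter, in PySpark. The frame data and the extracted feature are toy stand-ins for decoded video and a real detector.

```python
from pyspark.sql import SparkSession
import numpy as np

spark = SparkSession.builder.appName("video-analytics-sketch").getOrCreate()
sc = spark.sparkContext

# Hypothetical stand-in for decoded video: (frame_id, pixel array) pairs.
frames = [(i, np.random.randint(0, 256, (4, 4), dtype=np.uint8).tolist())
          for i in range(1000)]

def extract_feature(frame):
    """Per-frame analytics stage; a real pipeline would run detection here."""
    frame_id, pixels = frame
    return frame_id, float(np.mean(pixels))   # toy feature: mean intensity

features = (sc.parallelize(frames, numSlices=8)   # distribute frame batches
              .map(extract_feature)               # runs on the executors
              .filter(lambda kv: kv[1] > 100.0)   # keep 'interesting' frames
              .collect())
print(len(features))
spark.stop()
```

The same map/filter stages can be reused over a streaming source, which mirrors the real-time/batch correspondence the paper describes.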


Author(s):  
Sébastien Lefèvre

Video processing and segmentation are important stages for multimedia data mining, especially given the advance and diversity of the video data available. The aim of this chapter is to introduce researchers, especially new ones, to video representation, processing, and segmentation techniques. It offers an easy and smooth introduction, followed by the principles of video structure and representation, and then a state of the art of segmentation techniques focusing on shot detection. Performance evaluation and common issues are also discussed before concluding the chapter.
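A common baseline among the shot-detection techniques such a survey covers is the histogram difference between consecutive frames. Below is a short OpenCV/Python sketch along those lines; the bin count and threshold are illustrative choices, and the input file name is hypothetical.

```python
import cv2  # pip install opencv-python

def detect_shots(path, threshold=0.5):
    """Histogram-difference shot detection: flag a boundary whenever the
    correlation between consecutive frame histograms drops below threshold."""
    cap = cv2.VideoCapture(path)
    prev_hist, boundaries, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:
                boundaries.append(idx)   # abrupt change: likely a cut
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

print(detect_shots("sample.mp4"))  # hypothetical input file
```

Histogram comparison catches abrupt cuts; gradual transitions (fades, dissolves) need the more elaborate techniques the chapter reviews.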


2012 ◽  
Vol 262 ◽  
pp. 157-162
Author(s):  
Chong Gu ◽  
Zhan Jun Si

With the rapid development of modern video technology, the range of video applications is increasing, including online video conferencing, online classrooms, online medical services, etc. However, because the quantity of video data is large, video has to be compressed and encoded appropriately, and the encoding process may introduce distortions into the video quality. How to evaluate video quality efficiently and accurately is therefore essential in the fields of video processing, video quality monitoring, and multimedia video applications. In this article, subjective, objective, and comprehensive evaluation methods of video quality are introduced; a video quality assessment system was completed; and four ITU-recommended videos were encoded in five different formats and evaluated by the Degradation Category Rating (DCR) and Structural Similarity (SSIM) methods. After that, comprehensive evaluations with weights were applied. The results show that the data of all three evaluations have good consistency; H.264 is the best encoding method, followed by Xvid and WMV8; and the higher the encoding bit rate, the better the evaluations, although compared with 1000 kbps, the subjective and objective evaluation scores at 1400 kbps did not improve noticeably. The whole process can also evaluate new encoding methods, is applicable to high-definition video, and plays a significant role in promoting video quality evaluation and video encoding.
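For reference, the objective metric used above can be computed directly from its definition. The Python sketch below evaluates a single-window (global) SSIM between a reference and a distorted image; standard implementations instead average a sliding-window SSIM map, and the toy images here are assumptions.

```python
import numpy as np

def ssim(x, y, L=255):
    """Single-window SSIM: ((2*mx*my+C1)(2*cov+C2)) /
    ((mx^2+my^2+C1)(vx+vy+C2)) with C1=(0.01L)^2, C2=(0.03L)^2."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

ref = np.tile(np.arange(256, dtype=np.uint8), (64, 1))            # toy reference
dist = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)   # 'encoded' copy
print(f"SSIM = {ssim(ref, dist):.3f}")  # 1.0 means identical images
```

Unlike pixel-wise error measures, SSIM combines luminance, contrast and structure terms, which is why it tracks subjective DCR scores more closely.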

