Ensured Quality of Admitted Voice Calls through Authorized VoIP Networks

Author(s):  
T. Subashri ◽  
B. Gokul Vinoth Kumar ◽  
V. Vaidehi
Author(s):  
Daniel Schlosser ◽  
Michael Jarschel ◽  
Valentin Burger ◽  
Rastin Pries

2021 ◽  
Vol 16 ◽  
pp. 655-667
Author(s):  
Ivan Ganchev ◽  
Zhanlin Ji

In this paper, a new vision is presented for highly personalized, customized, and contextualized real-time recommendation of services to mobile users (consumers), considering the current consumer, network, and service context. A smart service recommendation system is elaborated, which builds and dynamically manages personal consumer profiles to facilitate and optimize the service discovery and recommendation process in support of consumers’ choices, thereby achieving the best quality of experience (QoE) as perceived by those consumers when utilizing different mobile services. The algorithm-driven recommended mobile services, accessible anytime-anywhere-anyhow through any kind of mobile device via heterogeneous wireless access networks, range from typical telecommunication services (e.g., outgoing voice calls) to Internet services (e.g., multimedia streaming). These algorithms may be further adapted and expanded to cover more sophisticated services addressing the consumer’s health and security needs, for example finding (and dynamically changing, if required) the most 'healthy' or 'secure' driving/biking/jogging/walking route to follow so as to avoid areas posing particular, consumer-specific health or safety risks.


2012 ◽  
Vol 23 (1) ◽  
pp. 85-128 ◽  
Author(s):  
Geetanjali Sharma

In cellular networks, a single queue is generally considered for each cell; some authors have proposed a model with a dedicated queue for each transceiver in the cell. We extend the idea of a dedicated queue per transceiver with sub-rating channels to improve the quality of service (QoS) of the system. In this paper we compare three models. In Model I, guard channels give priority to handoff attempts and a finite buffer gives priority to handoff data attempts. Model II adds the sub-rating channel scheme (SCS), in which a full-rate channel in a blocked cell is temporarily divided into two half-rate channels, one serving the originating call and the other serving the handoff call. In Model III we propose a dedicated queue for each transceiver in the cell with sub-rating. The fixed channel assignment scheme is considered for all models. The probabilities of handoff failure, blocking of new calls, forced termination of handoff calls, and non-completed calls are calculated for assumed values of the arrival rates of new data calls and new voice calls, the channel buffer size, and the service rates. The numerical results are compared and analyzed to validate the proposed models.
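The guard-channel idea underlying Model I can be illustrated with a small birth-death computation (a minimal sketch assuming an M/M/c/c loss model; the function name and parameter values are ours, not the paper's):

```python
def guard_channel_blocking(c, g, lam_new, lam_hand, mu):
    """Steady-state probabilities for a single cell with c channels,
    of which g are guard channels reserved for handoff calls.
    New calls are admitted only while fewer than c - g channels are busy;
    handoffs are admitted while any channel is free."""
    # Unnormalised birth-death probabilities p[k] ~ P(k channels busy)
    p = [1.0]
    for k in range(c):
        arrival = lam_new + lam_hand if k < c - g else lam_hand
        p.append(p[-1] * arrival / ((k + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    p_block_new = sum(p[c - g:])   # new calls blocked once guards kick in
    p_fail_hand = p[c]             # handoffs fail only when all channels busy
    return p_block_new, p_fail_hand

pb, pf = guard_channel_blocking(c=10, g=2, lam_new=4.0, lam_hand=1.0, mu=1.0)
# Reserving guard channels makes handoff failure rarer than new-call blocking
```

The asymmetry pb > pf is the whole point of the scheme: forced termination of an ongoing call is considered worse than blocking a new one.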


Signal processing finds applications in many fields of engineering, such as communications and multimedia processing (audio, speech, and video compression). Compression techniques constantly strive to retain the highest media quality at the lowest bit rate so that communication bandwidth utilization is maximized. The most basic form of end-user communication in the mobile industry has been the voice call. Speech captured at the microphone and sampled at 8 or 16 kHz with 16 bits per sample represents 128 or 256 kbit/s of raw data; transmitting all of it is a very inefficient use of the communication channel. There are multiple ways to encode the speech input at lower bit rates using source coding and waveform coding techniques. This paper is scoped to practically simulate some basic and advanced signal processing concepts and apply them in the speech signal processing domain to minimize the vocoder's packet-exchange bit rate. Existing techniques are studied and a new scheme is proposed that reduces the vocoder's packet-exchange bit rate further.
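The raw PCM bit-rate arithmetic can be checked directly (a small illustrative computation; the 12.2 kbit/s figure is a typical AMR-NB vocoder rate used for comparison, not a value from this paper):

```python
def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample):
    """Raw (uncompressed) PCM bit rate in kbit/s."""
    return sample_rate_hz * bits_per_sample / 1000

narrowband = pcm_bitrate_kbps(8_000, 16)    # 8 kHz sampling -> 128 kbit/s
wideband = pcm_bitrate_kbps(16_000, 16)     # 16 kHz sampling -> 256 kbit/s

# A typical vocoder rate (kbit/s), for scale; source coding gets an
# order-of-magnitude reduction over raw PCM
amr_nb_max = 12.2
compression_factor = narrowband / amr_nb_max
```

This is why source coding matters: even the highest-rate classic narrowband vocoder modes carry roughly a tenth of the raw PCM bits.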


Author(s):  
Samhan K ◽  
A. H. EL Fawal ◽  
M. Ammad-Uddin ◽  
Mansour A ◽  
...  

Recently, the coronavirus pandemic has caused widespread panic around the world. Modern technologies can be used to monitor and control this highly contagious disease. A plausible solution is to equip each patient who is diagnosed with or suspected of having COVID-19 with sensors that can monitor various healthcare and location parameters and report them to the desired facility to control the spread of the disease. However, the simultaneous communication of numerous sensors installed in the majority of an area’s population places a huge burden on existing Long-Term Evolution (LTE) networks. The existing network becomes oversaturated because it has to manage two kinds of traffic in addition to normal traffic (text, voice, and video): healthcare traffic generated by a large number of sensors deployed over a huge population, and extra traffic generated by people contacting their family members via video or voice calls. In pandemics, e-healthcare traffic is critical and should not suffer packet loss or latency due to network overload. In this research, we studied the performance of existing networks under various conditions and predicted the severity of network degradation in an emergency. We proposed and evaluated three schemes (doubling bandwidth, combining LTE-A and LTE-M networks, and request queuing) for ensuring quality of service (QoS) of healthcare sensor (HCS) network traffic without perturbation from routine human-to-human (H2H) or machine-to-machine (M2M) communications. Finally, we simulated all proposed schemes and compared them with existing network scenarios. The results show that doubling the bandwidth yields an SCR of 100% for all traffic, the same as the queuing strategy. When HCS traffic is prioritized, its SCR reaches 100%, while H2H and M2M traffic record 73%. With the hybrid LTE-A and LTE-M network, HCS and H2H traffic record 100% and M2M traffic records 70%.
After analyzing the results, we conclude that the proposed queuing scheme performs well under all conditions and provides the best QoS for HCS traffic.
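The request-queuing idea of prioritizing HCS traffic can be sketched as a strict-priority admission loop (an illustrative toy model, not the authors' simulator; the class names follow the abstract, the capacities and arrival rates are made up):

```python
from collections import deque

# Strict priority: HCS requests are always served before H2H, then M2M.
PRIORITY = ["HCS", "H2H", "M2M"]

def success_ratios(arrivals_per_slot, capacity_per_slot, slots):
    """arrivals_per_slot: dict class -> requests arriving each time slot.
    Returns per-class success ratio (served / offered) after `slots` slots."""
    queues = {c: deque() for c in PRIORITY}
    served = {c: 0 for c in PRIORITY}
    offered = {c: 0 for c in PRIORITY}
    for _ in range(slots):
        for c in PRIORITY:
            for _ in range(arrivals_per_slot.get(c, 0)):
                queues[c].append(1)
                offered[c] += 1
        cap = capacity_per_slot
        for c in PRIORITY:              # drain queues in priority order
            while cap > 0 and queues[c]:
                queues[c].popleft()
                served[c] += 1
                cap -= 1
    return {c: served[c] / offered[c] for c in PRIORITY if offered[c]}

# Offered load (12 requests/slot) exceeds capacity (8/slot): the overload
# is absorbed entirely by the lowest-priority M2M class
scr = success_ratios({"HCS": 3, "H2H": 4, "M2M": 5},
                     capacity_per_slot=8, slots=100)
```

Under overload, the sketch reproduces the qualitative result in the abstract: prioritized HCS traffic keeps full service while best-effort M2M degrades.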


Poor voice quality in VoIP communication is a common and frustrating experience: users cannot communicate efficiently, and most people find it difficult to think straight when a call echoes. Beyond the frustration, the caller’s money, time, effort, and energy are wasted without compensation of any kind. Users are also frustrated when voice messages are not received, transmitted, or understood correctly. Given the need for voice quality, a call is of little value without proper communication. This study aims to reduce the threat of bad calls and improve the quality of voice calls. In some real telecom environments with long echo delays, the adaptive filter length must be raised to a high value; however, the resulting computational complexity makes this approach inefficient. In this study, we propose a solution that uses a computational formula to compensate for long echo, delay, packet loss, jitter, and noise. The model was developed using MATLAB 2019b. This approach demonstrated good performance in terms of both voice quality and system speed.
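Echo compensation of the kind described is typically built on an adaptive filter. The sketch below uses a standard normalised-LMS (NLMS) canceller as an assumed stand-in for the paper's computational formula (written in Python rather than the paper's MATLAB, purely for illustration; all names and parameter values are ours):

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, taps=64, mu=0.5, eps=1e-8):
    """Normalised LMS adaptive filter: estimates the echo path from the
    far-end signal and subtracts the echo estimate from the microphone
    signal, returning the residual (echo-cancelled) signal."""
    w = np.zeros(taps)                       # echo-path estimate
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]        # most recent samples first
        echo_hat = w @ x                     # predicted echo
        e = mic[n] - echo_hat                # residual after cancellation
        w += mu * e * x / (x @ x + eps)      # normalised gradient update
        out[n] = e
    return out

# Synthetic check: microphone hears the far-end signal through a sparse
# two-tap echo path; after adaptation the residual echo power collapses
rng = np.random.default_rng(0)
far = rng.standard_normal(4000)
path = np.zeros(64); path[5] = 0.6; path[20] = -0.3
mic = np.convolve(far, path)[:4000]
residual = nlms_echo_canceller(far, mic)
```

Note the trade-off the abstract points at: the filter length `taps` must cover the echo delay, and the per-sample cost grows with it, which is exactly why long-delay environments make plain adaptive cancellation expensive.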


Author(s):  
K. T. Tokuyasu

During past investigations of immunoferritin localization of intracellular antigens in ultrathin frozen sections, we found that the degree of negative staining required to delineate ultrastructural details was often too dense for the recognition of ferritin particles. The quality of positive staining of ultrathin frozen sections, on the other hand, has generally been far inferior to that attainable in conventional plastic-embedded sections, particularly in the definition of membranes. As we discussed before, a main cause of this difficulty seemed to be the vulnerability of frozen sections to the damaging effects of air-water surface tension at the time of drying of the sections. Indeed, we found that the quality of positive staining is greatly improved when positively stained frozen sections are protected against the effects of surface tension by embedding them in thin layers of mechanically stable materials at the time of drying (unpublished).


Author(s):  
L. D. Jackel

Most production electron beam lithography systems can pattern minimum features a few tenths of a micron across. Linewidth in these systems is usually limited by the quality of the exposing beam and by electron scattering in the resist and substrate. By using a smaller spot along with exposure techniques that minimize scattering and its effects, laboratory e-beam lithography systems can now make features hundredths of a micron wide on standard substrate material. This talk will outline some of these high-resolution e-beam lithography techniques. We first consider parameters of the exposure process that limit resolution in organic resists. For concreteness, suppose that we have a “positive” resist in which exposing electrons break bonds in the resist molecules, thus increasing the exposed resist's solubility in a developer. The attainable resolution is obviously limited by the overall width of the exposing beam, but the spatial distribution of the beam intensity, the beam “profile”, also contributes to the resolution. Depending on the local electron dose, more or fewer resist bonds are broken, resulting in slower or faster dissolution in the developer.


Author(s):  
G. Lehmpfuhl

In electron microscopic investigations of crystalline specimens, direct observation of the electron diffraction pattern gives additional information about the specimen. The quality of this information depends on the quality of the crystals or of the crystal area contributing to the diffraction pattern. By selected-area diffraction in a conventional electron microscope, specimen areas as small as 1 µm in diameter can be investigated. It is well known that crystal areas of that size, which must be thin enough (on the order of 1000 Å) for electron microscopic investigation, are normally somewhat distorted by bending, or are not homogeneous. Furthermore, the crystal surface is not well defined over such a large area. These facts reduce the information available in the diffraction pattern. The intensity of a diffraction spot, for example, depends on the crystal thickness; if the thickness is not uniform over the investigated area, one observes an averaged intensity, so that the intensity distribution in the diffraction pattern cannot be used for an analysis unless additional information is available.

