Recently Published Documents

2022 · Vol 18 (2) · pp. 1-21 · Yubo Yan, Panlong Yang, Jie Xiong, Xiang-Yang Li

The global IoT market is growing rapidly, with a massive number of IoT/wearable devices deployed around us and even on our bodies. This trend brings more users that upload data frequently and in a timely manner to the APs. Previous work mainly focuses on improving the up-link throughput. However, supporting more concurrent users is actually more important than improving the throughput of each individual user, as IoT devices may not require very high transmission rates but are usually large in number. In the current state of the art (up-link MU-MIMO), the number of concurrent transmissions is either confined to the number of antennas at an AP (the node degree of freedom, node-DoF), or the APs must be clock-synchronized over cables to support more concurrent transmissions. Cable-synchronized APs, however, still incur a very high collaboration overhead, prohibiting real-life adoption. We therefore propose novel schemes that remove the cable-synchronization constraint while still supporting more concurrent users than the node-DoF limit, and at the same time minimize the collaboration overhead. In this paper, we design, implement, and experimentally evaluate OpenCarrier, the first distributed system to break the user limitation in up-link MU-MIMO networks with coordinated APs. Our experiments demonstrate that OpenCarrier supports up to five concurrent high-throughput up-link transmissions in a MU-MIMO network with 2-antenna APs.
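The node-DoF limit the abstract refers to comes from standard multi-antenna detection: a 2-antenna AP can separate at most two concurrent streams, for instance with a zero-forcing filter. The sketch below shows that baseline only (it is not OpenCarrier's scheme); the channel matrix and BPSK symbols are synthetic:

```python
import numpy as np

# Toy zero-forcing detector for a 2-antenna AP receiving from 2 users.
# H[i, j] is the (complex) channel gain from user j to antenna i.
rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

symbols = np.array([1 + 0j, -1 + 0j])   # BPSK symbols, one per user
y = H @ symbols                          # noiseless received signal

W = np.linalg.pinv(H)                    # zero-forcing filter
est = W @ y                              # recovered per-user symbols

detected = np.sign(est.real)
print(detected)                          # → [ 1. -1.]
```

With more concurrent users than antennas, H becomes wide and the pseudo-inverse can no longer separate the streams, which is exactly the limit the paper works around.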

2022 · Vol 54 (9) · pp. 1-40 · Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li

Active learning (AL) attempts to maximize a model’s performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large data supply to optimize a massive number of parameters if the model is to learn how to extract high-quality features. In recent years, thanks to the rapid development of internet technology, we have entered an era of information abundance characterized by massive amounts of available data. As a result, DL has attracted significant attention from researchers and has developed rapidly. Compared with DL, however, researchers have shown relatively little interest in AL. This is mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, so early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to a large number of publicly available annotated datasets. However, acquiring a large number of high-quality annotated datasets consumes a lot of manpower, making DL unfeasible in fields that require high levels of expertise (such as speech recognition, information extraction, and medical imaging). AL is therefore gradually coming to receive the attention it is due, and it is natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result of such investigations, deep active learning (DeepAL) has emerged. Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DeepAL-related work; this article aims to fill that gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we analyze and summarize the development of DeepAL from an application perspective. Finally, we discuss the confusion and problems associated with DeepAL and provide some possible directions for development.
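To make the AL query step concrete, here is a minimal sketch of pool-based uncertainty sampling (least confidence), the classic strategy that most DeepAL methods build on. The probability matrix stands in for any model's predictions over an unlabelled pool and is randomly generated here:

```python
import numpy as np

# Minimal least-confidence query selection: pick the pool samples whose
# most-likely class the model is least sure about, and send them to the
# human annotator.
rng = np.random.default_rng(1)
proba = rng.dirichlet(np.ones(3), size=10)   # 10 pool samples, 3 classes

confidence = proba.max(axis=1)               # most-likely-class probability
query_idx = np.argsort(confidence)[:2]       # the 2 least confident samples

print(query_idx)   # indices to label next
```

In a full AL loop this selection alternates with retraining the model on the growing labelled set, which is where the annotation savings come from.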

2022 · pp. 64-86 · Rafik El Amine Ghobrini, Fatima Zohra Benzert, Hanane Sarnou

With the new wave of young-minded, digitally fluent, and tech-tethered instructors come new creative e-pathways that build up novel e-pedagogies. More than ever, innovative e-teaching modalities are needed to navigate the intricate socially networked abyss in which students and teachers alike have chosen to function during this pandemic period. That is why frameworks, however nascent they might be, are required to steer the learning ship across more than one social media platform to meet specific educational ends. In this light, this chapter presents a descriptive, unobtrusive study conducted to map out the innovative e-mode of grammar instruction of a secondary-school e-tutor who tutored a massive number of e-tutees simultaneously on Instagram and YouTube in a first phase, subsequently adding Facebook to the e-instructional process. The findings unveil a framework for leveraging, or educationalizing, certain features of these cloud-based outlets concurrently to reach a more optimized e-method of multi-platform tutoring.

2022 · pp. 280-299 · Moiz Mansoor, Muhammad Waqar Khan, Syed Sajjad Hussain Rizvi, Manzoor Ahmed Hashmani, Muhammad Zubair

Software engineering has been an active working area for many decades, evolving in a bi-folded manner: first research, and subsequently development. Since its inception, a massive number of variants and methods of software engineering have been proposed, designed primarily to cater to the time-varying needs of modern approaches. In this connection, Global Software Engineering (GSE) is one of the growing trends in the modern software industry. At the same time, the employment of Agile development methodologies has gained significant attention in the literature. This has created a rationale for exploring and adopting Agile development methodology in GSE, where it has gained rigorous attention as an alternative to traditional software development methodologies. This paper presents a comprehensive review of the adoption of modern Agile practices in GSE. In addition, the strengths and limitations of each approach are highlighted. Finally, the open areas in this domain are submitted as one of the deliverables of this work.

2021 · Bin Zhang, Yang Wu, Xiaojing Zhang, Ming Ma

In current salient object detection networks, the most popular method is the U-shape structure. However, its massive number of parameters leads to high consumption of computing and storage resources, making it infeasible to deploy on devices with limited memory. Shallower networks do not maintain the same accuracy as the U-shape structure, while deeper structures with more parameters do not converge to a global minimum loss quickly. To overcome these disadvantages, we propose a new deep convolutional network architecture with three contributions: (1) using smaller convolutional neural networks (CNNs) to compress the model in our improved salient object features compression and reinforcement extraction module (ISFCREM), reducing the number of model parameters; (2) introducing a channel attention mechanism that weighs different channels to improve the ability of feature representation; and (3) applying a new optimizer that accumulates long-term gradient information during training to adaptively tune the learning rate. The results demonstrate that the proposed method can compress the model to nearly 1/3 of its original size without losing accuracy, while converging faster and more smoothly than other models on six widely used salient object detection datasets. Our code is published in
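Contribution (2) can be illustrated with a squeeze-and-excitation-style channel attention block: squeeze each channel to a scalar by global average pooling, pass it through a small gating function, and rescale the channels. The shapes, activations, and random weights below are illustrative assumptions, not the paper's exact ISFCREM design:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    # feat: (C, H, W) feature map
    squeeze = feat.mean(axis=(1, 2))              # (C,) per-channel statistic
    excite = sigmoid(w2 @ np.tanh(w1 @ squeeze))  # (C,) gate in (0, 1)
    return feat * excite[:, None, None]           # reweighted feature map

C = 4
rng = np.random.default_rng(2)
feat = rng.normal(size=(C, 8, 8))
w1 = rng.normal(size=(C // 2, C))   # reduction layer
w2 = rng.normal(size=(C, C // 2))   # restoration layer

out = channel_attention(feat, w1, w2)
print(out.shape)   # → (4, 8, 8)
```

Because the gate stays in (0, 1), informative channels are preserved while weak ones are attenuated, at a parameter cost that is tiny compared with the convolutional layers being compressed.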

2021 · Vol 2021 · pp. 1-10 · Chen Cui, Shuang Wu, Zhenyong Wang, Qing Guo, Wei Xiang

The Internet of Things (IoT), which is expected to support a massive number of devices, is a promising communication scenario. Usually, the data of different devices has different reliability requirements, so channel codes with the unequal error protection (UEP) property are rather appealing for such applications. Due to the power-constrained nature of IoT services, most data is carried in short packets; therefore, the channel codes must be of short length. Consequently, how to transmit such nonuniform data from multiple sources efficiently and reliably becomes an issue to be solved urgently. To address this issue, this paper proposes a distributed coding scheme based on polar codes that provides the UEP property. The distributed polar codes are realized by a groundbreaking method of combining noisy coded bits. With the proposed coding scheme, the various data from multiple sources can be recovered with a single common decoder, and various reliability levels can be achieved, thus providing UEP. Finally, simulation results show that the proposed coding scheme is viable.
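For readers unfamiliar with polar codes, the encoder underlying such schemes is the recursive Arikan transform over GF(2). The sketch below shows plain polar encoding for N = 8; the choice of "reliable" bit positions (which is how UEP is obtained, by placing high-priority data on more reliable synthetic channels) is purely illustrative and is not the paper's construction:

```python
import numpy as np

# Recursive polar encoding with the Arikan kernel F = [[1, 0], [1, 1]]:
# x = [enc(u_upper XOR u_lower), enc(u_lower)], applied down to length 1.
def polar_encode(u):
    n = len(u)
    if n == 1:
        return u.copy()
    left = polar_encode(u[: n // 2] ^ u[n // 2:])
    right = polar_encode(u[n // 2:])
    return np.concatenate([left, right])

u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]   # data on the (illustratively chosen) indices
x = polar_encode(u)
print(x)   # length-8 codeword over GF(2)
```

A handy sanity check is that this transform is its own inverse over GF(2), so encoding the codeword again returns the original bit vector.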

Sensors · 2021 · Vol 21 (24) · pp. 8320 · Abebe Diro, Naveen Chilamkurti, Van-Doan Nguyen, Will Heyne

The Internet of Things (IoT) consists of a massive number of smart devices capable of data collection, storage, processing, and communication. The adoption of the IoT has brought about tremendous innovation opportunities in industries, homes, the environment, and businesses. However, the inherent vulnerabilities of the IoT have sparked concerns over its wide adoption and application. Unlike traditional information technology (IT) systems, the IoT environment is challenging to secure due to the resource constraints, heterogeneity, and distributed nature of the smart devices. This makes it impossible to apply host-based prevention mechanisms such as anti-malware and anti-virus software. These challenges and the nature of IoT applications call for a monitoring system, such as anomaly detection, at both the device and network levels, beyond the organisational boundary. This suggests that an anomaly detection system is better positioned to secure IoT devices than any other security mechanism. In this paper, we aim to provide an in-depth review of existing work on machine-learning-based anomaly detection solutions for protecting IoT systems. We also indicate that blockchain-based anomaly detection systems can collaboratively learn effective machine learning models to detect anomalies.
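As a toy illustration of the device-level anomaly detection the survey covers (not any specific system from it), a profile of normal behaviour can be fitted from benign readings and deviations flagged; real systems would use far richer features and models:

```python
import numpy as np

# Fit a simple statistical profile of a device's normal traffic, then flag
# readings that deviate from it by more than k standard deviations.
def fit_profile(normal):
    return normal.mean(), normal.std()

def is_anomaly(x, mu, sigma, k=3.0):
    return abs(x - mu) > k * sigma

normal_traffic = np.array([10.0, 11.0, 9.5, 10.5, 10.2, 9.8])  # packets/s
mu, sigma = fit_profile(normal_traffic)

print(is_anomaly(10.4, mu, sigma))   # → False (within the normal range)
print(is_anomaly(55.0, mu, sigma))   # → True  (flagged, e.g. a flood)
```

The appeal for IoT is that the detector needs no signatures installed on the constrained device itself; the profile can be learned and evaluated at the network level.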

2021 · Xi Tom Zhang, Runpeng Harris Han

A massive number of transcriptomic profiles of blood samples from COVID-19 patients has been produced since the COVID-19 pandemic began; however, this big data from primary studies has not been well integrated by machine learning approaches. Taking advantage of modern machine learning algorithms, we collected and integrated single-cell RNA-seq (scRNA-seq) data from three independent studies, identified genes potentially useful for interpreting severity, and developed AImmune, a high-performance deep-learning-based deconvolution model that predicts the proportions of seven different immune cell types from bulk RNA-seq results of human peripheral blood mononuclear cells. This novel approach can be used for clinical blood testing of COVID-19, on the grounds that previous research shows that mRNA alterations in blood-derived PBMCs may serve as a severity indicator. Assessed on real-world data sets, the AImmune model outperformed the most recognized immune profiling model, CIBERSORTx. The presented study shows that the results obtained by the true scRNA-seq route can be consistently reproduced through the new AImmune approach, indicating its potential to replace the costly scRNA-seq technique for the analysis of circulating blood cells for both clinical and research purposes.
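The deconvolution task AImmune addresses can be stated compactly: bulk expression is modelled as a signature-weighted mixture of per-cell-type profiles, and the goal is to recover the mixing proportions. A plain least-squares solve on synthetic data stands in for the paper's deep model here; the signature matrix is random, not a real gene panel:

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes, n_types = 50, 7
signatures = rng.uniform(size=(n_genes, n_types))   # genes x cell types

true_prop = rng.dirichlet(np.ones(n_types))         # ground-truth fractions
bulk = signatures @ true_prop                        # simulated bulk RNA-seq

# Recover proportions: solve, clip to non-negative, renormalise to sum 1.
est, *_ = np.linalg.lstsq(signatures, bulk, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()

print(np.allclose(est, true_prop, atol=1e-6))        # → True
```

On real, noisy bulk data this linear route degrades, which is the gap learned deconvolution models such as AImmune (and CIBERSORTx before it) aim to close.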

2021 · Vol 10 (6) · pp. 3249-3255 · Ahmad A. A. Solyman, Khalid Yahya

Given the massive potential of 5G communication networks and their foreseeable evolution, what should there be in 6G that is not in 5G or its long-term evolution? 6G communication networks are expected to integrate terrestrial, aerial, and maritime communications into a robust network that is faster and more reliable and can support a massive number of devices with ultra-low-latency requirements. This article presents a complete overview of potential 6G communication networks. The major contribution of this study is a broad overview of the key performance indicators (KPIs) of 6G networks, covering the latest progress in the principal areas of research and application, along with the open challenges.

2021 · Vijay Shankar Sharma, N. C. Barwar

Nowadays, data is increasing exponentially with advances in data science. Every digital footprint generates an enormous amount of data, which is further processed to produce information for different end-user applications. A number of technologies are available to handle such enormous amounts of data; Hadoop/HDFS is one big-data handling technology. HDFS can easily handle large files, but when it must deal with a massive number of small files, its performance degrades. In this paper we propose a novel technique, Hash Based Archive File (HBAF), that can solve the small-file problem of HDFS. The proposed technique can read the final index files partly, which reduces the memory load on the Name Node, and it offers file-appending capability after creation of the archive.
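The idea of a hash-partitioned archive index can be sketched in a few lines: many small files are packed into one archive, and the index is split by a hash of the file name so a lookup loads only one index part rather than the whole index. Everything below (partition count, in-memory archive, md5 partitioning) is an illustrative toy, not HBAF's actual on-disk format:

```python
import hashlib

N_PARTS = 4  # number of index partitions (illustrative)

def part_of(name):
    # Hash the file name to pick which index partition holds its entry.
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % N_PARTS

archive = bytearray()                             # packed small files
index_parts = [dict() for _ in range(N_PARTS)]    # name -> (offset, size)

def append(name, data):
    # Appending works after archive creation: new bytes go at the end.
    offset = len(archive)
    archive.extend(data)
    index_parts[part_of(name)][name] = (offset, len(data))

def read(name):
    # Only one index partition is consulted per lookup.
    offset, size = index_parts[part_of(name)][name]
    return bytes(archive[offset:offset + size])

append("a.txt", b"hello")
append("b.txt", b"world")
print(read("a.txt"))   # → b'hello'
```

The memory saving comes from the partitioning: the Name Node (or client) can hold one part of the index at a time instead of one entry per small file for the whole corpus.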
