massive scale
Recently Published Documents

Total documents: 472 (five years: 204)
H-index: 28 (five years: 8)

2022
Author(s): Anu Jagannath, Jithin Jagannath, Prem Sagar Pattanshetty Vasanth Kumar

Fifth generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, bringing together massive numbers of diverse IoT devices that occupy the radio frequency (RF) spectrum. Along with spectrum crunch and throughput challenges, such a massive scale of wireless devices exposes unprecedented threat surfaces. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by the relevance of this subject in future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, ranging from the traditional view to the most recent deep learning (DL) based algorithms. Existing surveys have mostly offered a constrained presentation of wireless fingerprinting approaches, leaving many aspects untold. Here we mitigate this by addressing every aspect - background on signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion of datasets, and potential research avenues - necessary to elucidate this topic to the reader in an encyclopedic manner.


F1000Research, 2022, Vol 11, pp. 17
Author(s): Shohel Sayeed, Abu Fuad Ahmad, Tan Choo Peng

The Internet of Things (IoT) is leading the physical and digital worlds of technology to converge. Real-time, massive-scale connections produce a large amount of versatile data, which is where Big Data comes into the picture. Big Data refers to large, diverse sets of information with dimensions that go beyond the capabilities of widely used database management systems or standard data processing software tools. Almost every big dataset is dirty: it may contain missing data, mistyping, inaccuracies, and many other issues that degrade analytics performance. One of the biggest challenges in Big Data analytics is discovering and repairing dirty data; failure to do so can lead to inaccurate analytics results and unreliable conclusions. We experimented with different missing-value imputation techniques and compared machine learning (ML) model performance across imputation methods. We propose a hybrid model for missing-value imputation that combines ML and sample-based statistical techniques. We then continued with the best imputed dataset, chosen on the basis of ML model performance, for feature engineering and hyperparameter tuning, using k-means clustering and principal component analysis. The evaluated outcome improved dramatically: the XGBoost model achieved very good results, with a root mean squared logarithmic error (RMSLE) of around 0.125. To guard against overfitting, we used K-fold cross-validation.
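The sample-based statistical half of such a hybrid imputation scheme can be sketched in plain Python. This is a minimal illustration, not the authors' method: the function name, the use of `None` as the missing-value marker, and the sample-mean fill are all assumptions made here for the example.

```python
import random
import statistics

def sample_mean_impute(column, sample_frac=0.5, seed=0):
    """Fill missing entries (None) in a numeric column.

    Each missing value is replaced by the mean of a random sample of
    the observed values, rather than the single global mean, so that
    repeated imputations retain some of the column's variability.
    """
    rng = random.Random(seed)
    observed = [x for x in column if x is not None]
    if not observed:
        raise ValueError("cannot impute an entirely missing column")
    # Sample size: a fraction of the observed values, at least one.
    k = max(1, int(len(observed) * sample_frac))
    filled = []
    for x in column:
        if x is None:
            filled.append(statistics.mean(rng.sample(observed, k)))
        else:
            filled.append(x)
    return filled
```

In the full hybrid scheme described in the abstract, a fill like this would be compared against ML-based imputers, keeping whichever imputed dataset yields the best downstream model performance.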


2022
Author(s): Charlie Saillard, Flore Delecourt, Benoit Schmauch, Olivier Moindrot, Magali Svrcek, ...

Pancreatic ductal adenocarcinoma (PDAC) is a highly heterogeneous and plastic tumor with distinct transcriptomic molecular subtypes that hold great prognostic and theranostic value. We developed PACpAInt, a multistep approach using deep learning models to determine tumor cell type and molecular phenotype on routine histological preparations, at a resolution that makes it possible to decipher complete intratumor heterogeneity on a massive scale never achieved before. PACpAInt effectively identified molecular subtypes at the slide level in three validation cohorts and had independent prognostic value. It identified inter-slide heterogeneity within a case in 39% of tumors, which impacted survival. Diving to the cell level, PACpAInt identified pure classical and basal-like main subtypes, as well as an intermediary phenotype and hybrid tumors that co-carried both classical and basal-like phenotypes. These novel artificial intelligence-based subtypes, together with the proportion of basal-like cells within a tumor, had a strong prognostic impact.




2021, Vol 27, pp. 135-165
Author(s): Vidya Prahassacitta, Harkristuti Harkrisnowo

In a democratic society, the criminalisation of spreading disinformation is deemed a violation of freedom of expression. The development of information and communication technology, specifically the Internet, has changed people's perceptions of both disinformation and freedom of expression. This research critically analyses criminal law intervention against disinformation and freedom of expression in Indonesia. It is documentary research using a comparative approach, analysing laws and regulations on disinformation in Indonesia, Germany, and Singapore. For Indonesian law, it focuses on the provisions of Articles 14 and 15 of Law No. 1/1946, which criminalise disinformation in the public sphere. The research shows that Indonesia needs a new approach to the criminal prohibition of spreading disinformation, and recommends that criminal law intervention be limited to disinformation that is spread on a massive scale and causes significant harm.


2021, Vol 119 (1), pp. e2025334119
Author(s): Ferenc Huszár, Sofia Ira Ktena, Conor O’Brien, Luca Belli, Andrew Schlaikjer, ...

Content on Twitter’s home timeline is selected and ordered by personalization algorithms. By consistently ranking certain content higher, these algorithms may amplify some messages while reducing the visibility of others. There has been intense public and scholarly debate about the possibility that some political groups benefit more from algorithmic amplification than others. We provide quantitative evidence from a long-running, massive-scale randomized experiment on the Twitter platform that committed a randomized control group, including nearly 2 million daily active accounts, to a reverse-chronological content feed free of algorithmic personalization. We present two sets of findings. First, we studied tweets by elected legislators from major political parties in seven countries. Our results reveal a remarkably consistent trend: in six of the seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings, on the US media landscape, revealed that algorithmic amplification favors right-leaning news sources. We further examined whether the algorithms amplify far-left and far-right political groups more than moderate ones; contrary to prevailing public belief, we did not find evidence to support this hypothesis. We hope our findings will contribute to an evidence-based debate on the role personalization algorithms play in shaping political content consumption.
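The treatment/control comparison at the heart of this experiment can be illustrated with a toy calculation. This is a deliberate simplification: the function name and the example numbers are hypothetical, and the paper's actual amplification statistic is more involved than a simple reach ratio.

```python
def amplification_ratio(treatment_reach, control_reach):
    """Toy amplification measure.

    Compares how much a set of tweets is seen by users on the
    personalized timeline (treatment group) versus users held on the
    reverse-chronological timeline (control group). A ratio above 1.0
    means the content reaches relatively more users under the
    personalization algorithm than under the chronological baseline.
    """
    if control_reach <= 0:
        raise ValueError("control reach must be positive")
    return treatment_reach / control_reach

# Hypothetical reach counts, purely for illustration:
group_a = amplification_ratio(treatment_reach=1200, control_reach=1000)
group_b = amplification_ratio(treatment_reach=1500, control_reach=1000)
# Comparing group_a and group_b mirrors the paper's question of whether
# one political group is amplified more than another, relative to the
# same chronological control.
```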


2021, Vol 7 (1), pp. 6-19
Author(s): Robert Boucaut

This article applies a persona studies approach to the case study of the Academy Awards. Key literature is used to situate an ‘Oscar’ persona within existing conceptualisations from the discipline. Oscar represents a composite persona that encapsulates an event, its broadcast, an Academy of individuals, and a larger discursive industry. It is a non-human persona coloured by distinctly human elements; it is collectively constructed on a massive scale, a process that invites constant contestation. Drawing on these theorisations, I conduct a textual analysis to reach a persona reading of Oscar. As collective authors of the persona, members of the Academy, associated performers, and discursive contributors employ three distinct and consistent persona strategies: the Functional, the Spiritual, and the Ironic. Oscar’s taste-making function is enabled by extravagant staging, tempered by expressions of philanthropy, yet performed with ironic self-effacement. The cumulative effect of these three performances gives Oscar manoeuvrability across the requirements of the different cultural contexts of each year. As well as providing a unique prism for understanding the Oscars as an institution, this work demarcates different levels of collective persona construction, challenging notions of central authority in production and performance and accounting for the ongoing constructive work of publics.


Author(s): Ms. S. Thangappa, Ms. M. Annalakshmi, Dr. M. Sivakumar

Productive resources, which are vital for the economic development of nations, are inherently scarce. Capital, an important productive resource, is abundant in industrialized economies. Structural adjustments in developing economies following the introduction of globalization in 1991 enabled the Indian economy to attract these productive resources on a massive scale. As the world's second most populous nation, India has been unable to exploit its labour resources fully because of the scarce availability of capital. This is one of the main reasons why India has not achieved the desired level of economic growth. However, the flow of capital to India during the post-reform period is encouraging; as a result, India has recently achieved GDP growth of 7.2% per annum. But the uncontrolled growth of urbanization and industrialization, the expansion and massive intensification of agriculture, and the destruction of forests have created heavy pressure on land, forests, water, and biodiversity. In the era of globalization, water has come to be treated as an economic good because of higher demand. Water quality problems arise from extractive industries as well as from various manufacturing and agricultural production processes. Various pollutants are generated as by-products in the production of pesticides, leather goods, detergents, plastics, and pulp and paper. These pollutants have led to major environmental issues such as forest and agricultural land degradation; resource depletion (water, minerals, forests, sand, rocks, etc.); environmental degradation; public health problems; loss of biodiversity; loss of resilience in ecosystems; and threats to livelihood security for the poor. In recent years there has been growing concern about environmental degradation, pollution, and climate change, as they affect the future development of both developing and developed countries.
In 1992, representatives of over 150 countries met in Rio de Janeiro, Brazil, to discuss environmental issues and their implications for the future development of the world. This meeting is called the ‘Earth Summit’ or the United Nations Conference on Environment and Development (UNCED).


2021
Author(s): Finn Sorger

The stadium is a blend of commercialism, functionality, regulation and iconicism. At the height of its power, a stadium is an unrivalled example of the Sublime functioning at a massive scale for a collective and for individuals simultaneously. Every year, large-scale stadia are built for events such as the Olympic Games or World Cups, only to become underused or even abandoned after the event has finished. Despite this, such facilities continue to be built. This thesis argues that the challenge, then, is to design the Sublime into post-event stadium architecture. It explores architectural design methods that invest post-event stadia with the Sublime, aiming to intensify the Sublime, often found at the height of the event, in post-event situations. Explorations in scale and programming are used to test such intensifications. Can the Sublime - which, to paraphrase Burke, is “the strongest of emotions causing astonishment because of unimagined eloquence, greatness, significance, or power, and which is experienced by the user as awe, wonder or even dread, fear and terror” - be found after the event? This research uses iterative design experimentation to tease out the Sublime at three scales: that of an installation, then a domestic scale, and then an urban-public scale. It ultimately looks to create a project that uses the Sublime as the main driver and design criterion for a stadium that is as effective at low capacity as it is at full capacity.

