web 3.0
Recently Published Documents


TOTAL DOCUMENTS

449
(FIVE YEARS 134)

H-INDEX

15
(FIVE YEARS 2)

Semantic Web technology is not as new as most of us assume; it has evolved over the years, and "Linked Data" is the name recently given to the Semantic Web. The Semantic Web is a continuation of Web 2.0 and is intended to replace existing technologies. It builds on Natural Language Processing and offers solutions to many prevailing issues. Web 3.0 is the version of the Semantic Web that caters to the information needs of half of the population on Earth. This paper links two pressing current concerns, information security and the online education enforced by COVID-19, with the Semantic Web. The need for steganography on the Semantic Web is discussed in detail, since encryption alone is inadequate to provide protection. Web 2.0 issues concerning online education, and the Semantic Web's solutions to them, are also discussed. An extensive literature survey has been conducted on the architecture of Web 3.0, the history of online education, and security architecture. Finally, the Semantic Web is here to stay, and data hiding combined with encryption makes it robust.
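The combination the abstract argues for, encryption plus data hiding, can be illustrated with a minimal sketch: XOR encryption followed by least-significant-bit (LSB) embedding into a cover buffer. This is a toy illustration under invented names and data, not the scheme from the paper.

```python
# Toy encrypt-then-hide sketch (illustrative only, not the paper's method):
# XOR encryption, then LSB steganography over a byte buffer standing in
# for image pixel data.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(cover: bytearray, payload: bytes) -> bytearray:
    # spread the payload bits, LSB-first per byte, over the cover's LSBs
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for payload"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract_lsb(stego: bytearray, n_bytes: int) -> bytes:
    # read the LSBs back and reassemble bytes in the same bit order
    bits = [stego[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(n_bytes)
    )

key = b"k3y"
secret = b"semantic web"
cipher = xor_encrypt(secret, key)
cover = bytearray(range(256))          # stand-in for image pixel bytes
stego = embed_lsb(cover, cipher)
recovered = xor_encrypt(extract_lsb(stego, len(cipher)), key)
# recovered == b"semantic web"
```

Even if an attacker notices the hidden channel, the extracted payload is still ciphertext, which is the robustness argument the abstract makes for layering the two techniques.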


Author(s):  
Shweta S. Aladakatti ◽  
S. Senthil Kumar

The web era has evolved and the industry strives to work better every day. The constant need for data to be accessible at any moment keeps expanding, and with this expansion, designing meaningful query techniques for the web has become a major concern. To transmit rich semantics, machines and programs must be able to reach the correct information and make adequate connections; this problem is addressed with the emergence of Web 3.0, as the Semantic Web grows and amasses an immense amount of information to process, posing a giant data-management challenge if an ideal result is to be delivered whenever needed. Accordingly, in this article we present a framework for managing huge amounts of information using MapReduce structures, which internally help an engine retrieve information through parallel processing with smaller map jobs and link-discovery measures. Similarity calculations can be challenging; this work runs five similarity detection algorithms and measures the time each takes, revealing patterns that guide the choice of algorithm. The proposed framework is built on the most recent and widespread data format, JSON, and uses the HIVE query language to obtain and process the information according to the customer's needs, together with algorithms for link discovery. Finally, the results are made available on a web page that lets a user load JSON data and make connections between dataset 1 and dataset 2. The results, examined over two different sets, show that the proposed approach interlinks significantly faster; regardless of how large the data is, the time required does not grow radically.
The results demonstrate that the interlinking of dataset 1 and dataset 2 is most notable using LD and JW, both of which require near-ideal time. This paper has automated the interlinking process via a web page, where customers can merge two datasets that need to be associated and used.
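Assuming the abstract's "LD" denotes Levenshtein distance (a common reading that the text does not spell out; "JW" would analogously be Jaro-Winkler), the interlinking step can be sketched as a similarity join over two toy JSON datasets. The field names, records, and threshold below are invented.

```python
# Sketch of record interlinking by string similarity, assuming "LD" means
# Levenshtein distance. Datasets, field names, and the 0.8 threshold are
# invented for illustration.
import json

def levenshtein(a: str, b: str) -> int:
    # classic two-row dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    # normalize distance into a [0, 1] similarity score
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

dataset1 = json.loads('[{"name": "New Delhi"}, {"name": "Mumbai"}]')
dataset2 = json.loads('[{"label": "Mumbay"}, {"label": "Chennai"}]')

# link every pair of records whose labels are at least 80% similar
links = [(r1["name"], r2["label"])
         for r1 in dataset1 for r2 in dataset2
         if similarity(r1["name"], r2["label"]) >= 0.8]
# links == [("Mumbai", "Mumbay")]
```

In a MapReduce setting, the pairwise comparisons in the final comprehension are what the smaller map jobs would parallelize, with the link list assembled in the reduce phase.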


Author(s):  
Сергей Владимирович Володенков ◽  
Сергей Николаевич Федорченко
Keyword(s):  

The article identifies the key challenges, risks and limitations of the digitalization of contemporary civic-political activism with respect to the democratization of traditional political institutions and civil society. The key task of the study is a critical analysis of current practices of civic-political activism that use digital platforms. The authors substantiate the claim that, under conditions of global technological transformation and the transition to the Web 3.0 paradigm, the substantive and functional characteristics of civic-political activism will change significantly, generating new effects. It is shown that the parameters of digital civic-political activism are largely determined by the design of the digital platforms used: the possibilities, formats, mechanics, algorithms and communication features provided within the existing digital infrastructure. The authors conclude that, in functional terms, the design of civic-political activism is not autonomous but derived from the functionality and algorithmic models of the platforms, and that affordances depend to a significant degree on the platform's software algorithms. At the same time, the algorithms used by digital corporations and institutions of power do not provide interfaces that foster the digital self-determination of citizens. To support this claim, the authors conduct a SWOT analysis of several digital platforms for civic-political activism. The article also discusses the problem of the simulation and simulacrization of digital activism. It is shown that, through their informational activity and a stable, loyal audience, civic-political activists can influence the agenda in their own interests, shifting the audience's picture of reality in the desired direction.
At the same time, the informational encapsulation of online users through their involvement in the activities of groups and communities of civic-political activists can form the socio-political beliefs and ideas that subsequently become the basis for programming the expected behavior of individuals in reality. Based on the research, the paper identifies the key components of digital infrastructures for civic-political activism and presents possible scenarios for the development of this sphere. Acknowledgment. The study was carried out within the framework of the Development Program of the Interdisciplinary Scientific and Educational School of Lomonosov Moscow State University, "Preservation of the World Cultural and Historical Heritage".


2021 ◽  
Vol 18 (2) ◽  
pp. 51-68
Author(s):  
Dragana Vuković Vojnović

In this paper, we investigate the main characteristics underlying noun + noun collocations in the English and Serbian language of tourism. Their morpho-syntactic, semantic and communicative features are contrasted and compared in the two languages. Firstly, we compiled two comparable corpora in English and Serbian from the tourism websites of Great Britain and Serbia. Based on their normalized frequencies per 10,000 words, key noun + noun collocations were extracted, using TermoStat Web 3.0 and AntConc. The results showed certain similarities in terms of the prevailing topics in the two corpora, based on the analysis of key noun + noun collocations. However, we found major differences in the two languages in terms of their morpho-syntactic features, communicative focus and the relationship of the collocates. The results of the study have implications for English for Tourism education, tourism discourse studies, language typology and lexicography.
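The ranking measure the study relies on, frequency normalized per 10,000 words, is a simple computation that can be sketched as follows. The toy corpus and the plain-bigram extraction are invented stand-ins for the POS-tagged output of TermoStat Web 3.0 or AntConc.

```python
# Sketch of normalized frequency per 10,000 words for candidate
# noun + noun collocations. The corpus is an invented toy; a real study
# would extract bigrams from a POS-tagged tourism corpus.
from collections import Counter

corpus = ("tourist attraction city break tourist attraction "
          "heritage site city centre tourist attraction").split()

# count adjacent word pairs as candidate noun + noun collocations
bigrams = Counter(zip(corpus, corpus[1:]))
total_words = len(corpus)

# normalize each raw count to occurrences per 10,000 words, which makes
# frequencies comparable across corpora of different sizes
normalized = {
    " ".join(bg): count / total_words * 10_000
    for bg, count in bigrams.items()
}
# "tourist attraction" occurs 3 times in 12 words -> 2500.0 per 10k
```

Normalizing by corpus size is what allows the English and Serbian corpora, which will rarely be the same length, to be compared on a single frequency scale.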


2021 ◽  
pp. 125-134
Author(s):  
Tatyana Victorovna Voloshina ◽  
◽  
Tatyana Eduardovna Sizikova ◽  

In this article, the authors note that at present, especially in education, Internet technologies are intensifying the already ongoing development of a new content thinking, along with the need to develop reflection, beginning in early childhood and extending to artificial intelligence. The purpose of the article is an analytical study of the changes taking place in the thinking of consumers of Internet technologies. The authors note that under the current acceleration of scientific and technological progress and the growth of global information networks, qualitative changes are occurring in the mental activity of the modern person. The article examines topical issues related to the development of a new type of thinking, content thinking, conditioned by web technologies. Drawing on the methodological principles of systematicity, determinism and development, the authors examine in detail the Internet technologies Web 3.0 and Web 4.0 and their influence on the psychological content of the ontological foundations of human life. They identify the main characteristics of Web 3.0 and Web 4.0 content: semantic structure, cooperativity, clustering, ample opportunities for consumer self-expression, self-developing basic personal content, a self-correcting system, effective and convenient information management, accessibility, simplicity and maximum convenience, the development and use of additional opportunities, real-time human resource management, crystallization, and the maximum possible consumer protection. The authors show the influence of Internet technologies 3.0 and 4.0 on the psychological content of the ontological foundations of human life, including the new content thinking. The method of such thinking is synthetic deduction, and the method for developing it is the authors' method of "synergetic deduction 3.0".


Author(s):  
Divyansh Shankar Mishra ◽  
Abhinav Agarwal ◽  
B. P. Swathi ◽  
K C. Akshay

The idea of semantically linked data, and the subsequent use of this linked data in modern computer applications, has been one of the most important aspects of Web 3.0. However, realizing this aspect has been challenging due to the difficulties of building knowledge bases and of using formal languages to query them. In this regard, SPARQL, a recursive acronym for SPARQL Protocol and RDF Query Language, is the most popular formal language for querying Linked Open Data and Resource Description Framework databases. Nonetheless, writing SPARQL queries is known to be difficult, even for experts. Natural-language query formalization, which semantically parses natural-language queries into their formal-language equivalents, has been an essential step in overcoming this steep learning curve. Recent work in the field has applied artificial intelligence (AI) techniques to language modelling with adequate accuracy. This paper discusses a design for creating a closed-domain ontology, which is then used by an AI-powered chatbot that incorporates natural-language query formalization for querying linked data, using Rasa for entity extraction after intent recognition. A precision-recall analysis is performed using built-in Rasa tools in conjunction with our own testing parameters; the system achieves a precision of 0.78, recall of 0.79 and F1-score of 0.79, which are better than the current state of the art.
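The precision-recall analysis reported above reduces to the standard definitions, which can be sketched in a few lines. The counts below are invented for illustration and do not reproduce the paper's evaluation.

```python
# Standard precision / recall / F1 from confusion counts. The counts are
# invented; they do not come from the paper's Rasa evaluation.
def prf1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp)          # fraction of produced answers that are correct
    recall = tp / (tp + fn)             # fraction of expected answers that were produced
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# e.g. 78 correctly formalized queries, 22 spurious, 21 missed
p, r, f = prf1(tp=78, fp=22, fn=21)
```

With precision and recall this close together, the F1-score sits almost exactly between them, which is consistent with the triple of figures the abstract reports.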


One Health ◽  
2021 ◽  
pp. 100357
Author(s):  
Sarah Valentin ◽  
Elena Arsevska ◽  
Julien Rabatel ◽  
Sylvain Falala ◽  
Alizé Mercier ◽  
...  

2021 ◽  
pp. 47-52
Author(s):  
George Zarkadakis

Social exclusion, data exploitation, surveillance, and economic inequality on the web are mainly technological problems. The current web of centralized social media clouds delivers by design a winner-takes-all digital economy that stifles innovation and exacerbates power asymmetries between citizens, governments, and technology oligopolies. To fix the digital economy, we need a new, decentralized web where citizens are empowered to own their data, participate in disintermediated peer-to-peer marketplaces, and influence policy-making decisions by means of innovative applications of participatory and deliberative democracy. By reimagining "web 3.0" as a cloud commonwealth of networked virtual machines leveraging blockchains and sharing code, it is possible to design new digital business models where all stakeholders and participants, including users, can share the bounty of the Fourth Industrial Revolution fairly.


Author(s):  
Leila Jane Brum Lage Sena Guimarães ◽  
Eliane Cristina de Freitas Rocha
Keyword(s):  

Introduction: A defining characteristic of the web 3.0 environment is the presentation of personalized data to its users through algorithms that act as artifacts mediating the users' relations with the network. In this scenario of cooperation between people and computers inherent to web 3.0, Information Science contributes by reflecting on the informational subject and their context, especially through the information-practices approach. Objective: To build a theoretical bridge between the aims of design thinking and the objectives of studying situated practices of technology appropriation by users. Methodology: Through an exploratory narrative literature review, a theoretical basis is developed for the construction of mediating artifacts on web 3.0, grounded in the study of information practices combined with the methodological approach of design thinking, as an alternative for the participatory, human-centered development of new contexts of social organization and relations on the web. Results: Conceptual relations are drawn between web 3.0, demediation and mediating artifacts. The design thinking methodology is presented and related to the user-studies approach of information practices. Conclusion: Building mediating artifacts for web 3.0 requires a study approach centered on the interaction context of its users, through a methodology sensitive to context modeling, as design thinking proposes. The theoretical claim is that design thinking is congruent with the information-practices approach to information users and may be an innovative approach for the field of Information Science.


Author(s):  
Usha Yadav ◽  
Neelam Duhan

With the evolution of Web 3.0, the traditional search algorithms of Web 2.0 will become obsolete and underperform in retrieving precise and accurate information from the growing Semantic Web. It is reasonable to presume that common users possess no understanding of the ontology used in a knowledge base, or of SPARQL queries; providing easy access to this enormous knowledge base to users of every level is therefore challenging, since their ability to formulate structured queries such as SPARQL varies widely. In this paper, a Semantic-Web-based search methodology is proposed that converts a natural-language user query into a SPARQL query, which can be directed at a domain-ontology-based knowledge base. Each query word is mapped to the relevant concepts or relations in the ontology, and a score is assigned to each mapping to identify the best possible mapping for query generation. The mappings with the highest scores, together with interrogatives and other function words, are used to formulate the user query as a SPARQL query. If no result is retrieved from the knowledge base, then instead of returning null to the user, the query is directed to Web 3.0: the top "k" documents are converted into RDF format using the Text2Onto tool, and a corpus of semantically structured web documents is built. Alongside this, a semantic crawl agent gathers <Subject-Predicate-Object> triples from the semantic wiki. A Term Frequency matrix and a Co-occurrence matrix are computed over the corpus, followed by Singular Value Decomposition (SVD), to find the results relevant to the user query. The evaluation shows that the proposed system is efficient in terms of execution time, precision, recall and F-measure.
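The final ranking step, a term-frequency matrix reduced by SVD (essentially latent semantic analysis), can be sketched as follows. The documents, query, and rank cutoff are invented; the paper's pipeline would build the matrix from the Text2Onto-derived RDF corpus instead.

```python
# Sketch of SVD-based ranking over a term-frequency matrix (latent
# semantic analysis). Documents, query, and the rank-2 cutoff are
# invented for illustration.
import numpy as np

docs = ["semantic web ontology query",
        "sparql query knowledge base",
        "tourism travel hotel booking"]
vocab = sorted({w for d in docs for w in d.split()})

# rows = documents, columns = vocabulary terms, entries = raw counts
tf = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# rank-2 SVD projects documents into a latent semantic space
U, S, Vt = np.linalg.svd(tf, full_matrices=False)
doc_vecs = U[:, :2] * S[:2]

query = "sparql ontology"
q_tf = np.array([query.split().count(w) for w in vocab], float)
q_vec = q_tf @ Vt[:2].T        # fold the query into the same space

# cosine similarity between the query and each document
scores = doc_vecs @ q_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12)
ranking = np.argsort(-scores)  # most relevant documents first
```

Because the third document shares no terms with the query, it lands near zero similarity in the latent space and ranks last, while the two Semantic-Web documents rank above it even though neither matches the query exactly.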

