2017 ◽  
Vol 22 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Matthew T. McCarthy

The web of linked data, otherwise known as the semantic web, is a system in which information is structured and interlinked to provide meaningful content to artificial intelligence (AI) algorithms. As the complex interactions between digital personae and these algorithms mediate access to information, it becomes necessary to understand how these classification and knowledge systems are developed. What are the processes by which those systems come to represent the world, and how are the controversies that arise in their creation overcome? As a global form, the semantic web is an assemblage of many interlinked classification and knowledge systems, which are themselves assemblages. Through the perspectives of global assemblage theory, critical code studies and practice theory, I analyse netnographic data of one such assemblage. Schema.org is but one component of the larger global assemblage of the semantic web, and as such is an emergent articulation of different knowledges, interests and networks of actors. This articulation comes together to tame the profusion of things, seeking stability in representation, but in the process it faces and produces more instability. Furthermore, this production of instability contributes to the emergence of new assemblages that have similar aims.
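For readers unfamiliar with how schema.org data actually appears on the web, a minimal sketch may help: schema.org terms are most commonly published as JSON-LD, which crawlers and AI algorithms parse out of pages. The record below is an invented illustration, not taken from the article.

```python
import json

# Hypothetical schema.org description of a scholarly article, expressed as
# JSON-LD (the serialization usually embedded in a page's <script> tag).
article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "name": "The web of linked data",
    "author": {"@type": "Person", "name": "Matthew T. McCarthy"},
}

# Serialize to the JSON-LD string a semantic-web crawler would ingest.
jsonld = json.dumps(article, indent=2)
print(jsonld)
```

Because the vocabulary is shared, any consumer that understands schema.org's `ScholarlyArticle` type can interpret this record without coordination with its publisher, which is precisely the interlinking the abstract describes.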


Author(s):  
Amit Chauhan

The advent of the Web was a defining moment in the evolution of education and e-Learning. Web 1.0, which emerged almost three decades ago, was the precursor to the Web 3.0 that has reshaped education and learning today. The evolution to Web 3.0 has become synonymous with the “Semantic Web” and “Artificial Intelligence” (AI). AI makes it possible to deliver custom content to learners based on their learning behavior and preferences. As a result of these developments, learners have been empowered and have at their disposal a range of AI-powered Web tools and technologies to pursue and accomplish their learning goals. This chapter traces the evolution and impact of Web 3.0 and AI on e-Learning and their role in empowering the learner and transforming the future of education and learning. The chapter will be of interest to educators and learners exploring techniques that improve the quality of education and learning outcomes.


2015 ◽  
Vol 21 (5) ◽  
pp. 661-664
Author(s):  
ZORNITSA KOZAREVA ◽  
VIVI NASTASE ◽  
RADA MIHALCEA

Graph structures naturally model connections. In natural language processing (NLP), connections are ubiquitous, at anything from small to web scale. We find them between words – as grammatical, collocation or semantic relations – contributing to the overall meaning, and maintaining the cohesive structure of the text and the discourse unity. We find them between concepts in ontologies or other knowledge repositories – since the early days of artificial intelligence, associative or semantic networks have been proposed and used as knowledge stores, because they naturally capture the language units and relations between them, and allow for a variety of inference and reasoning processes, simulating some of the functionalities of the human mind. We find them between complete texts or web pages, and between entities in a social network, where they model relations at the web scale. Beyond the more often encountered ‘regular’ graphs, hypergraphs have also appeared in our field to model relations between more than two units.
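As a concrete sketch of the word-level graphs described above, one common construction is a co-occurrence graph: words are nodes, and an edge links words that appear in the same sentence. The toy corpus below is invented for illustration.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: pre-tokenized sentences (an assumption for the example).
sentences = [
    ["graphs", "model", "connections"],
    ["words", "form", "semantic", "connections"],
]

# Adjacency structure: word -> set of words it co-occurs with.
graph = defaultdict(set)
for sent in sentences:
    for u, v in combinations(sent, 2):  # link every word pair in a sentence
        graph[u].add(v)
        graph[v].add(u)

# Neighbors of "connections" across both sentences.
print(sorted(graph["connections"]))
# -> ['form', 'graphs', 'model', 'semantic', 'words']
```

The same adjacency structure scales conceptually from word graphs to ontology graphs and web-page link graphs; only the nodes and the relation being recorded change.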


2019 ◽  
Vol 36 (1) ◽  
pp. 3-11
Author(s):  
Pompeu Casanovas ◽  
Jianfu Chen ◽  
David Wishart

We introduce both the new inception of Law in Context – A Socio-legal Journal and the continuing issue of LiC 36 (1). The editorial provides a brief historical account of the Journal since its inception in the early 1980s, in the context of the evolution of the Law & Society movement. It also describes the changes produced in the digital age by the emergence of the Web of Data, Big Data, and the Internet of Things. The convergence between Law & Society and Artificial Intelligence & Law is also discussed. Finally, we briefly introduce the articles included in this issue.


De Musica ◽  
2021 ◽  
Author(s):  
Alessandro Bertinetto

This paper considers the issue of musical improvisational interactions in the digital era by pursuing the following three steps. 1) I will raise the question of the meaning and value of liveness, and in particular of live musical improvisation, in the age of the internet and discuss some effects of the so-called digital revolution on improvisation practices. 2) Then I will suggest that the interactions made possible by the web can be understood as kinds of live improvisational practices and I will briefly outline how such practices also involve musical improvisation. 3) Finally, I will focus on some aesthetic and philosophical aspects of new kinds of live improvisation made possible by recent progress in artificial intelligence research.


Author(s):  
José Luiz Andrade Duizith ◽  
Lizandro Kirst Da Silva ◽  
Daniel Ribeiro Brahm ◽  
Gustavo Tagliassuchi ◽  
Stanley Loh

This work presents a Virtual Assistant (VA) whose main goal is to supply information to Website users. The VA is a software system that interacts with people through a Web browser, receiving textual questions and answering them automatically without human intervention. The VA supplies information by looking for similar questions in a knowledge base and giving the corresponding answer. Artificial Intelligence techniques are employed in this matching process to compare the user’s question against the questions stored in the base. The main advantage of using the VA is to minimize information overload when users get lost in Websites. The VA can guide users across web pages or supply information directly. This is especially important for customers visiting an enterprise site, looking for products, services or prices, or needing information about some topic. The VA can also help in Knowledge Management processes inside enterprises, offering people an easy way to store and retrieve knowledge. An extra advantage is a reduced Call Center structure, since the VA can be distributed to customers on a CD-ROM. Furthermore, the VA provides Webmasters with statistics about the usage of the VA (most-asked themes, number of visitors, conversation time).
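The matching step described above can be sketched very simply: vectorize the user's question and each stored question, score their similarity, and return the answer attached to the best match. The tiny knowledge base and the bag-of-words cosine measure below are illustrative assumptions, not the chapter's actual technique.

```python
import math
from collections import Counter

# Hypothetical knowledge base: stored question -> canned answer.
KB = {
    "what are your opening hours": "We are open 9am-6pm, Monday to Friday.",
    "how much does shipping cost": "Shipping is free for orders over $50.",
}

def vectorize(text):
    """Bag-of-words term counts for a lowercase, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def answer(question):
    """Return the answer of the most similar stored question, if any match."""
    scores = {q: cosine(vectorize(question), vectorize(q)) for q in KB}
    best = max(scores, key=scores.get)
    return KB[best] if scores[best] > 0 else "Sorry, I don't know."

print(answer("what are the opening hours"))
# -> We are open 9am-6pm, Monday to Friday.
```

A production assistant would add stemming, synonym handling, and a similarity threshold below which it escalates to a human, but the retrieve-by-similarity loop is the core of the design.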


2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
WenNing Wu ◽  
ZhengHong Deng

Wi-Fi-enabled information terminals have become enormously faster and more powerful thanks to this technology’s rapid advancement, and artificial intelligence (AI) has flourished alongside it. AI has been applied in a wide range of societal contexts and has had a significant impact on education. Using big data to support multistage views of every subject of opinion helps recognize the unique characteristics of each aspect and improves the suitability of social network governance. As public opinion in colleges and universities becomes an increasingly important vehicle for public expression, this paper explores public opinion using a web crawler and a Convolutional Neural Network (CNN) model. The web crawler is used to gather data provided by college and university students and organize it along different dimensions. Because the CNN has robust data-analysis capability, the proposed model uses it to analyse public opinion. Data are preprocessed with an oversampling method to maximize classification performance. By associating descriptions and making comprehensive use of information such as user influence, comment stances, topics, and comment times, the approach suggests guidance for various schemes and helps enhance the effectiveness and targeting of social network governance. Experiments, carried out in Python, show that the proposed method predicts students’ positive and negative opinions with a lower error rate than existing methods.
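The oversampling preprocessing step mentioned above is worth making concrete: when one opinion class is rare, random samples of the minority class are duplicated until the classes are balanced, so the classifier is not dominated by the majority class. The dataset and labels below are invented for illustration; this is a generic random-oversampling sketch, not the paper's exact procedure.

```python
import random

random.seed(0)  # reproducible example

# Invented imbalanced dataset: (text, label) with 8 positive, 2 negative.
data = [("post about exams", 1)] * 8 + [("complaint about wifi", 0)] * 2

pos = [d for d in data if d[1] == 1]
neg = [d for d in data if d[1] == 0]
minority, majority = (neg, pos) if len(neg) < len(pos) else (pos, neg)

# Resample the minority class with replacement up to the majority size.
balanced = majority + [random.choice(minority) for _ in range(len(majority))]

print(len(balanced), sum(1 for _, y in balanced if y == 0))
# -> 16 8
```

After this step both classes contribute equally to training; more refined schemes (e.g. synthetic-sample methods such as SMOTE) interpolate new minority examples instead of duplicating existing ones.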


Author(s):  
Ben Choi

Web mining aims at searching, organizing, and extracting information on the Web, and search engines focus on searching. The next stage of Web mining is the organization of Web contents, which will then facilitate the extraction of useful information from the Web. This chapter focuses on organizing Web contents. Since the majority of Web contents are stored in the form of Web pages, the chapter concentrates on techniques for automatically organizing Web pages into categories. Various artificial intelligence techniques have been used; the most successful are classification and clustering, and this chapter focuses on clustering. Clustering is well suited for Web mining because it automatically organizes Web pages into categories, each containing Web pages with similar contents. However, one problem in clustering is the lack of general methods for automatically determining the number of categories or clusters, and until now no such method suitable for Web page clustering has existed. To address this problem, this chapter describes a method to discover a constant factor that characterizes the Web domain and proposes a new method for automatically determining the number of clusters in Web page datasets. It also proposes a new bi-directional hierarchical clustering algorithm, which arranges individual Web pages into clusters, then arranges the clusters into larger clusters, and so on, until the average inter-cluster similarity approaches the constant factor. With the constant factor and the algorithm together, this chapter provides a new clustering system suitable for mining the Web.
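The stopping rule described above can be sketched generically: merge the most similar pair of clusters, agglomeratively, until the average inter-cluster similarity drops to a chosen constant. The 1-D "pages", the similarity measure, and the constant below are invented for illustration; this is not the chapter's bi-directional algorithm, only the similarity-threshold idea behind it.

```python
import itertools

def sim(a, b):
    """Toy similarity between two 1-D 'page' positions (assumed measure)."""
    return 1.0 / (1.0 + abs(a - b))

def centroid(c):
    return sum(c) / len(c)

def avg_intercluster_sim(clusters):
    pairs = list(itertools.combinations(clusters, 2))
    return sum(sim(centroid(a), centroid(b)) for a, b in pairs) / len(pairs)

def cluster(points, constant=0.25):
    """Agglomerate until average inter-cluster similarity <= constant."""
    clusters = [[p] for p in points]
    while len(clusters) > 2 and avg_intercluster_sim(clusters) > constant:
        # Merge the most similar pair of clusters.
        a, b = max(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: sim(centroid(clusters[ij[0]]),
                                      centroid(clusters[ij[1]])))
        clusters[a] += clusters.pop(b)
    return clusters

print(cluster([1.0, 1.1, 5.0, 5.2, 9.0]))
# -> [[1.0, 1.1], [5.0, 5.2], [9.0]]
```

The point of the chapter's constant factor is that, for the Web domain, this threshold need not be hand-tuned per dataset; here it is simply a fixed illustrative parameter.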

