The algorithm and the flow: Netflix, machine learning and recommendation algorithms

Intexto ◽  
2019 ◽  
pp. 166-184
Author(s):  
João Damasceno Martins Ladeira

This article discusses the Netflix recommendation system, seeking to understand these techniques as part of contemporary strategies for the reorganization of television and audiovisual media. It problematizes a technology indispensable to these suggestions: the tools of artificial intelligence, aiming to infer the questions of cultural impact inscribed in this technique. These recommendations are analyzed in their relationship with the form once decisive for the constitution of broadcasting: the television flow. The text investigates the meaning of such influential tools in the definition of a television based on the manipulation of collections rather than on predetermined programming, decided prior to the transmission of content. The conclusion explores the consequences of these archives, which concede to the user a sensation of choice in tension with the mechanical character of those images.

2020 ◽  
Vol 25 (2) ◽  
pp. 7-13
Author(s):  
Zhangozha A.R.

Using the online game Akinator as an example, the article considers the basic principles on which programs of this type are built. Effective techniques are proposed by which artificial intelligence systems can build logical inferences that allow them to identify an unknown subject from its description (predicate). To confirm the hypotheses considered, a terminological analysis of the author's proposed definition of the program "Akinator" is carried out. Starting from the assumptions in the author's definition, the article supplements it with the definitions presented by other researchers and analyzes their constituent theses. Finally, proposals are made for next steps in improving the program. The Akinator program became, in its time, one of the most famous online games using artificial intelligence. Although this was never stated directly, it was clear to experts in the field of artificial intelligence that the program uses the techniques of expert systems and is built on inference rules. Expert systems have since lost ground to neural networks within artificial intelligence; however, the case considered in this article concerns techniques that draw on both directions: hybrid systems. Games for filling semantics interact with the user, expanding their semantic base (knowledge base), and use certain strategies to achieve the best result. The playful form of such semantics-filling programs benefits researchers by involving a large number of players. The article examines the techniques used by the Akinator program and suggests possible future modifications. The study focuses, first of all, on how the program's knowledge base is built: it consists of incomplete sets, which can be filled and adjusted through further iterations of the program's runs. An important assumption is that the order of the questions the program asks during the game plays a key role, because it determines the program's strategy. The program was found to be guided by the principles of nonmonotonic logic: the assumptions it constructs are not final and can be rejected during the game. The three main approaches to acquiring semantics proposed by Jakub Šimko and Mária Bieliková are considered: expert work, crowdsourcing, and machine learning. With respect to machine learning, Akinator's use of it to build an effective game strategy makes the program an instance of hybrid systems, which combine the principles of the two main directions in artificial intelligence programs: expert systems and neural networks.
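For illustration, here is a minimal sketch of the question-selection strategy the article attributes to Akinator-style games: a knowledge base of per-character answer probabilities (deliberately soft, so conclusions stay defeasible in the nonmonotonic spirit described above), Bayesian updating after each answer, and questions chosen by expected information gain. The characters, questions, and probabilities are invented for illustration; Akinator's actual knowledge base and strategy are proprietary.

```python
import math

# Toy knowledge base: P(answer "yes" | character) for each question.
# Values away from 0 and 1 model the incomplete, adjustable entries the
# article describes; they can be refined after each game (crowdsourcing).
KB = {
    "Mario":      {"is_fictional": 0.95, "is_female": 0.05, "wears_hat": 0.90},
    "Lara Croft": {"is_fictional": 0.95, "is_female": 0.95, "wears_hat": 0.20},
    "Marie Curie": {"is_fictional": 0.05, "is_female": 0.95, "wears_hat": 0.30},
}
QUESTIONS = ["is_fictional", "is_female", "wears_hat"]

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def best_question(posterior, asked):
    """Pick the unasked question with the highest expected information gain."""
    best, best_gain = None, -1.0
    h = entropy(posterior.values())
    for q in QUESTIONS:
        if q in asked:
            continue
        p_yes = sum(posterior[c] * KB[c][q] for c in posterior)
        gain = h
        for ans_yes, p_ans in ((True, p_yes), (False, 1 - p_yes)):
            if p_ans == 0:
                continue
            cond = {c: posterior[c] * (KB[c][q] if ans_yes else 1 - KB[c][q]) / p_ans
                    for c in posterior}
            gain -= p_ans * entropy(cond.values())
        if gain > best_gain:
            best, best_gain = q, gain
    return best

def update(posterior, q, answer_yes):
    """Bayesian update; soft likelihoods keep earlier conclusions revisable."""
    new = {c: posterior[c] * (KB[c][q] if answer_yes else 1 - KB[c][q])
           for c in posterior}
    z = sum(new.values())
    return {c: p / z for c, p in new.items()}

posterior = {c: 1 / len(KB) for c in KB}   # uniform prior over candidates
asked = set()
for answer in (True, True):                # simulated user answers: yes, yes
    q = best_question(posterior, asked)
    asked.add(q)
    posterior = update(posterior, q, answer)
print(max(posterior, key=posterior.get))   # current best guess
```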


2019 ◽  
Vol 5 (2) ◽  
pp. 205630511984752 ◽  
Author(s):  
Jonathan Sterne ◽  
Elena Razlogova

This article proposes a contextualist approach to machine learning and aesthetics, using LANDR, an online platform that offers automated music mastering and that trumpets its use of supervised machine learning, branded as artificial intelligence (AI). Increasingly, machine learning will become an integral part of the processing of sounds and images, shaping the way our culture sounds, looks, and feels. Yet we cannot know exactly how much of a role or what role machine learning plays in LANDR. To parochialize the machine learning part of what LANDR does, this study spirals in from bigger contexts to smaller ones: LANDR’s place between the new media industry and the mastering industry; the music scene in its home city, Montreal, Quebec; LANDR use by DIY musicians and independent engineers; and, finally, the LANDR interface and the sound it produces in use. While LANDR claims to automate the work of mastering engineers, it appears to expand and morph the definition of mastering itself: it devalues people’s aesthetic labor as it establishes higher standards for recordings online. And unlike many other new media firms, LANDR’s connection to its local music scene has been essential to the company’s development, growth, and authority, even as it has since moved on from that scene, and even as the relationship was never fully reciprocal.


2019 ◽  
Vol 87 (2) ◽  
pp. 27-29
Author(s):  
Meagan Wiederman

Artificial intelligence (AI) is the ability of any device to take an input, like that of its environment, and work to achieve a desired output. Some advancements in AI have focused on replicating the human brain in machinery. This is being made possible by the Human Connectome Project: an initiative to map all the connections between neurons within the brain. A full replication of the thinking brain would inherently create something that could be argued to be a thinking machine. However, it is more interesting to question whether a non-biologically faithful AI could be considered a thinking machine. Under Turing’s definition of ‘thinking’, a machine can be said to pass for thinking if it can be mistaken for a human when responding in writing from a “black box,” where it cannot be viewed. Backpropagation, an error-minimizing algorithm used to program AI for feature detection, has no biological counterpart yet is prevalent in AI. The recent success of backpropagation demonstrates that biological faithfulness is not required for deep learning or ‘thought’ in a machine. Backpropagation has been used in medical imaging compression algorithms and in pharmacological modelling.
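As a concrete illustration of the abstract’s central claim, the following toy network learns XOR by backpropagation alone: errors are propagated backward through the chain rule, with no biological counterpart anywhere in the loop. This is a generic textbook sketch, not code from the article.

```python
import numpy as np

# Minimal backpropagation sketch: a 2-layer sigmoid network learning XOR.
# Backprop is the chain rule applied layer by layer to minimize squared error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input,
    # computing the gradient of the squared error at each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```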


2021 ◽  
Vol 18 (1) ◽  
pp. 27-35
Author(s):  
Roman B. Kupriyanov ◽  
Dmitry L. Agranat ◽  
Ruslan S. Suleymanov

Problem and goal. Solutions were developed and tested for building individual educational trajectories of students, focused on improving the educational process by forming a personalized set of recommendations from the optional disciplines. Methodology. Data mining and machine learning methods were used to process both numeric and textual data. Approaches based on collaborative and content filtering were used to generate recommendations for students. Results. The developed system was tested across several periods of elective course selection, in which 4,769 first- and second-year students took part. A set of recommendations was automatically generated for each student, and the quality of the recommendations was then evaluated as the percentage of students who used them. According to the test results, the recommendations were used by 1,976 students, or 41.43% of all participants. Conclusion. The study produced a recommendation system that automatically ranks elective disciplines and forms a personalized set of recommendations for each student, based on their interests, for building individual educational trajectories.
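The article does not publish its algorithm, but a minimal sketch of the two named ingredients, collaborative filtering and content filtering, might look as follows. The enrollment matrix, course tags, and the blending weight alpha are invented for illustration.

```python
import numpy as np

# Hypothetical data: rows = students, columns = elective courses (1 = taken).
ratings = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 1],
], dtype=float)
# Hypothetical content descriptors for each course.
course_tags = [{"ai", "python"}, {"ai", "ethics"}, {"python", "data"}, {"ethics", "law"}]

def collaborative_scores(ratings, student):
    """User-based collaborative filtering: weight peers by cosine similarity."""
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[student] / (norms * norms[student] + 1e-9)
    sims[student] = 0.0                      # exclude the student themselves
    return sims @ ratings / (sims.sum() + 1e-9)

def content_scores(ratings, student):
    """Content filtering: Jaccard overlap with tags of courses already taken."""
    taken = {t for c, r in enumerate(ratings[student]) if r for t in course_tags[c]}
    return np.array([len(taken & tags) / len(taken | tags) if taken | tags else 0.0
                     for tags in course_tags])

def recommend(student, alpha=0.5):
    """Blend both signals and return the top-ranked unseen elective."""
    score = alpha * collaborative_scores(ratings, student) \
          + (1 - alpha) * content_scores(ratings, student)
    score[ratings[student] > 0] = -np.inf    # don't re-recommend taken courses
    return int(np.argmax(score))

print(recommend(0))  # index of the suggested elective for student 0
```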


10.2196/18752 ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. e18752
Author(s):  
Nariman Ammar ◽  
Arash Shaban-Nejad

Background. The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the same is not true of the data. Accordingly, it is a complex problem to build a training set and develop machine-learning models from these studies. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach for promoting credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, thinking about how machine learning could benefit from knowledge graphs that combine “common sense” knowledge as well as semantic reasoning and causality models is a potential solution to this problem.

Objective. In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance.

Methods. We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology.

Results. To showcase the framework functionalities, we present a prototype design and demonstrate the main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and the usefulness of the implementation.

Conclusions. This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.
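To make the explainability claim concrete, here is a toy sketch of a knowledge-graph recommender that returns the chain of triples behind each suggestion, so the decision criteria are explicit. The triples, relation names, and rule are invented stand-ins; the authors' actual ontology and DialogFlow agent are far richer.

```python
# A toy knowledge-graph recommender: every suggestion carries the chain of
# triples that produced it, which is the "explainable" part. All entities
# and relations below are invented for illustration.
TRIPLES = [
    ("patient_1", "experienced", "parental_incarceration"),
    ("parental_incarceration", "is_a", "adverse_childhood_experience"),
    ("adverse_childhood_experience", "elevates_risk_of", "anxiety"),
    ("anxiety", "mitigated_by", "school_counseling_referral"),
]

def recommend_with_explanation(patient):
    """Follow experienced -> is_a -> elevates_risk_of -> mitigated_by chains."""
    index = {}
    for s, p, o in TRIPLES:
        index.setdefault((s, p), []).append(o)
    results = []
    for exp in index.get((patient, "experienced"), []):
        for cls in index.get((exp, "is_a"), []):
            for risk in index.get((cls, "elevates_risk_of"), []):
                for action in index.get((risk, "mitigated_by"), []):
                    results.append((action,
                        f"{patient} experienced {exp} (a {cls}), which "
                        f"elevates the risk of {risk}; {action} mitigates it."))
    return results

for action, why in recommend_with_explanation("patient_1"):
    print(action, "|", why)
```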


2019 ◽  
Vol 9 (3) ◽  
pp. 11
Author(s):  
Zdenko Kodelja

The question of whether machine learning is real learning is ambiguous, because the term “real learning” can be understood in two different ways. Firstly, it can be understood as learning that actually exists and is, as such, opposed to something that only appears to be learning, or is misleadingly called learning despite being something else, something that is different from learning. Secondly, it can be understood as the highest form of human learning, which presupposes that an agent understands what is learned and acquires new knowledge as a justified true belief. As a result, there are also two opposite answers to the question of whether machine learning is real learning. Some experts in the field of machine learning, which is a subset of artificial intelligence, claim that machine learning is in fact learning and not something else, while some others – including philosophers – reject the claim that machine learning is real learning. For them, real learning means the highest form of human learning. The main purpose of this paper is to present and discuss, very briefly and in a simplifying manner, certain interpretations of human and machine learning, on the one hand, and the problem of real learning, on the other, in order to make it clearer that the answer to the question of whether machine learning is real learning depends on the definition of learning.


Author(s):  
M. Stashevskaya

The article studies existing views on the economic content of big data. From among the views within which authors define big data, three approaches are formulated: descriptive-model, utility-digital, and complex-technological. Against the background of the large-scale spread of digital technologies (machine learning, cloud computing, artificial intelligence, augmented and virtual reality, etc.), all of which function thanks to big data, the study of its economic essence is becoming especially relevant. The article finds that big data underpins economic activity in the digital economy, and proposes a definition of big data as a resource of the digital economy.


Author(s):  
Rohit Rastogi ◽  
Prabhat Yadav ◽  
Jayash Raj Singh Yadav

There are music recommendation systems and music providers that are well explored and commonly used, but they are generally based on simple similarity calculations and manually tagged parameters. This project proposes a music recommendation system based on detecting, automatically computing, and classifying the user's emotion. Music is recommended based on the emotion expressed and the mood of the user. Like artist and genre, the user's emotion can be a crucial recommendation signal for music listeners. The moods into which the system classifies images are happy, neutral, and sad. The system pre-sorts songs by genre into the above-mentioned categories. This research project advances the music industry with the help of machine learning and artificial intelligence: it reduces the hassle of selecting songs in leisure time by detecting the user's emotion from the provided input and automatically playing songs that match the detected mood.
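A hedged sketch of the pipeline the abstract describes: classify a face image into happy, neutral, or sad, then play from the bucket pre-sorted for that mood. The detector below is a stub standing in for whatever image classifier the project uses, and the playlists are invented.

```python
import random

# Mood buckets of pre-sorted songs (invented titles for illustration).
PLAYLISTS = {
    "happy":   ["upbeat_pop_01", "dance_mix_07"],
    "neutral": ["lofi_beats_03", "acoustic_05"],
    "sad":     ["slow_ballad_02", "ambient_09"],
}

def detect_emotion(image_bytes):
    """Stand-in for a real image classifier (e.g., a CNN over face crops).
    Here we just pretend the model returned these class probabilities."""
    probs = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
    return max(probs, key=probs.get)

def recommend_song(image_bytes):
    mood = detect_emotion(image_bytes)           # happy / neutral / sad
    return mood, random.choice(PLAYLISTS[mood])  # pick from that mood's bucket

mood, song = recommend_song(b"...webcam frame...")
print(f"Detected mood: {mood}; now playing: {song}")
```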


Author(s):  
Stavros Pitoglou

Machine learning, closely related to artificial intelligence and standing at the intersection of computer science and mathematical statistics, comes in handy when the truth is hiding in a place the human brain cannot access. For any prediction or assessment problem, the more complicated the issue, and the harder it is for the human mind to grasp the inherent causalities and patterns and apply conventional methods toward an acceptable solution, the more fertile the field of application machine learning can find. This chapter’s purpose is to give a general, non-technical definition of machine learning, provide a review of its latest implementations in the healthcare domain, and add to the ongoing discussion on this subject. It suggests the active involvement of entities beyond the already active academic community in the quest for solutions that “exploit” existing datasets and can be applied in daily practice, embedded inside the software processes already in use.


2020 ◽  
Vol 11 (1) ◽  
pp. 18
Author(s):  
Diogo Cardoso ◽  
Luís Ferreira

The growing competitiveness of the market, coupled with the increase in automation driven by the advent of Industry 4.0, highlights the importance of maintenance within organizations. At the same time, the amount of data that can be extracted from industrial systems has increased exponentially due to the proliferation of sensors, transmission devices, and data storage via the Internet of Things. These data, when processed and analyzed, can provide valuable information and knowledge about the equipment, allowing a move towards predictive maintenance. Maintenance is fundamental to a company’s competitiveness, since actions taken at this level have a direct impact on aspects such as cost and product quality; equipment failures therefore need to be identified and resolved. Artificial Intelligence tools, in particular Machine Learning, exhibit enormous potential in the analysis of the large amounts of data now readily available, aiming to improve the availability of systems, reduce maintenance costs, and increase operational performance and support in decision making. In this dissertation, Artificial Intelligence tools, more specifically Machine Learning, are applied to a dataset made available online; the specifics of this implementation are analyzed, and methodologies are defined, in order to provide information and tools to the maintenance area.
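As a generic illustration of this workflow (the dissertation's dataset is not named here), the following sketch trains a scikit-learn classifier on synthetic sensor readings to flag likely failures; the features, coefficients, and data are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for IoT sensor data: sensor readings in,
# failure/no-failure prediction out.
rng = np.random.default_rng(42)
n = 1000
temperature = rng.normal(60, 10, n)   # degrees C
vibration = rng.normal(0.3, 0.1, n)   # g RMS
# Failures become likelier as temperature and vibration rise (toy model).
p_fail = 1 / (1 + np.exp(-(0.1 * (temperature - 70) + 8 * (vibration - 0.4))))
failed = rng.random(n) < p_fail

X = np.column_stack([temperature, vibration])
X_train, X_test, y_train, y_test = train_test_split(X, failed, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```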

