Algorithmic Disclosure Rules

Author(s):  
Fabiana Di Porto

During the past decade, a small but rapidly growing number of Law & Tech scholars have been applying algorithmic methods in their legal research. This Article does so too, with the aim of rescuing disclosure regulation from failure: a normative strategy that has long been considered dead by legal scholars, yet conspicuously abused by rule-makers. Existing proposals to revive disclosure duties, however, focus either on industry policies (e.g. seeking to reduce consumers' costs of reading) or on rulemaking (e.g. simplifying linguistic intricacies). But failure may well depend on both. Therefore, this Article develops a 'comprehensive approach', suggesting the use of computational tools to cope with linguistic and behavioral failures at both the enactment and implementation phases of disclosure duties, thus filling a void in the Law & Tech scholarship. Specifically, it outlines how algorithmic tools can be used in a holistic manner to address the many failures of disclosures, from rulemaking in parliament to consumer screens. It suggests a multi-layered design in which lawmakers deploy three tools to produce optimal disclosure rules: machine learning, natural language processing, and behavioral experimentation through regulatory sandboxes. To clarify how and why these tasks should be performed, disclosures in the contexts of online contract terms and online privacy are taken as examples. Because algorithmic rulemaking is frequently met with well-justified skepticism, problems of its compatibility with legitimacy, efficacy and proportionality are also discussed.
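As one illustration of the natural-language-processing layer of the proposed design, the sketch below scores draft disclosure clauses for linguistic complexity with a simple Flesch Reading Ease estimate. The clauses, the threshold, and the metric choice are hypothetical illustrations, not the Article's prescribed method.

```python
# Illustrative sketch: flagging linguistically complex disclosure clauses with a
# simple readability metric (Flesch Reading Ease). Clause texts and the threshold
# are hypothetical; a real rulemaking pipeline would combine this with richer NLP
# features and behavioral testing.
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

clauses = {  # hypothetical draft disclosure clauses
    "data_sharing": "We may share your personal data with affiliated third parties "
                    "insofar as such disclosure is necessitated by legitimate interests.",
    "cookies": "We use cookies. You can turn them off in your browser settings.",
}

for name, text in clauses.items():
    score = flesch_reading_ease(text)
    flag = "REVIEW" if score < 50 else "ok"   # threshold chosen only for illustration
    print(f"{name}: Flesch={score:.1f} -> {flag}")
```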

Author(s):  
Nana Ampah ◽  
Matthew Sadiku ◽  
Omonowo Momoh ◽  
Sarhan Musa

Computational humanities lies at the intersection of computing technologies and the disciplines of the humanities. Research in this field has steadily increased in recent years. Humanities researchers employ computational tools for textual search, large-database analysis, data mining, network mapping, and natural language processing, which opens up new realms for analysis and understanding. This paper provides a brief introduction to computational humanities.
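A minimal sketch of two of the tasks mentioned above, textual search and term-frequency analysis, applied to a toy corpus. The documents, stopword list, and query are placeholders, not drawn from any particular humanities project.

```python
# Minimal sketch of term-frequency analysis and naive textual search over a
# tiny placeholder corpus; real projects would work on digitized archives and
# add richer NLP (lemmatization, named entities, network mapping).
import re
from collections import Counter

corpus = {
    "letter_1842": "The harvest failed again this year and the village gathered to pray.",
    "diary_1843": "Rain ruined the harvest; the village council met twice this month.",
}

STOPWORDS = {"the", "and", "this", "to", "a", "of", "in"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

# term frequencies across the whole corpus
freq = Counter(w for doc in corpus.values() for w in tokenize(doc))
print("Most frequent terms:", freq.most_common(5))

# naive textual search: which documents mention a query term?
query = "harvest"
hits = [name for name, doc in corpus.items() if query in tokenize(doc)]
print(f"Documents mentioning '{query}':", hits)
```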


Author(s):  
Prof. Ahlam Ansari ◽  
Fakhruddin Bootwala ◽  
Owais Madhia ◽  
Anas Lakdawala

Artificial intelligence, machine learning, and deep learning are increasingly used to build conversational agents that impersonate a human and provide the user with a human-like experience. A conversational software agent that uses natural language processing is called a chatbot, and chatbots are widely used for interacting with users and providing appropriate, satisfactory answers. In this paper we analyze and compare various chatbots, assigning each a score on different parameters. We asked each chatbot the same questions and evaluated whether each answer was satisfactory. The analysis is based on user experience rather than on an examination of each chatbot's software. This paper shows that even though chatbot performance has improved considerably, there is still quite a lot of room for improvement.
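A hedged sketch of the kind of user-experience scoring described above: each chatbot receives a satisfaction score per question, and the scores are averaged into a ranking. The chatbot names, questions, and scores are invented placeholders, not the paper's actual parameters or results.

```python
# Hypothetical scoring grid: scores[chatbot][question] = satisfaction on a 0-5
# scale, aggregated into a simple average-based ranking.
from statistics import mean

scores = {
    "Chatbot A": {"greeting": 5, "weather": 4, "booking": 2, "small_talk": 3},
    "Chatbot B": {"greeting": 4, "weather": 5, "booking": 4, "small_talk": 2},
    "Chatbot C": {"greeting": 3, "weather": 3, "booking": 3, "small_talk": 4},
}

ranking = sorted(
    ((name, mean(answers.values())) for name, answers in scores.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, avg in ranking:
    print(f"{name}: average satisfaction {avg:.2f} / 5")
```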


Author(s):  
Rafael Jiménez ◽  
Vicente García ◽  
Karla Olmos-Sánchez ◽  
Alan Ponce ◽  
Jorge Rodas-Osollo

Social networks have evolved from sites for interacting with friends into platforms where people, artists, brands, and even presidents interact with crowds daily. Airlines are among the companies that use social networks such as Twitter to communicate with their clients through messages with offers, travel recommendations, videos of collaborations with YouTubers, and surveys. Among the many replies to airline tweets are users' suggestions on how to improve their services or processes. These recommendations are essential, since the success of many companies is based on offering what the client wants or needs. A database was created from tweets sent by users to airline accounts on Twitter between July 30 and August 8, 2019. Natural language processing techniques were used to preprocess the data. The latest classification results using Naive Bayes show an accuracy of 72.44%.
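A sketch of the preprocessing and Naive Bayes classification step described above, using scikit-learn. The tweets and labels are toy placeholders rather than the paper's dataset, and the exact preprocessing pipeline is assumed.

```python
# Sketch: clean tweet text, vectorize it, and train a Naive Bayes classifier
# to separate suggestion tweets (1) from other tweets (0). Data is placeholder.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def clean_tweet(text):
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # drop URLs
    text = re.sub(r"[@#]\w+", " ", text)        # drop mentions and hashtags
    return re.sub(r"[^a-z\s]", " ", text)

tweets = [
    "please add more legroom on long flights",
    "you should let us choose seats for free",
    "my flight was delayed three hours",
    "lost my luggage again, terrible service",
    "it would be great if the app showed gate changes",
    "thanks for the smooth boarding today",
]
labels = [1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    [clean_tweet(t) for t in tweets], labels, test_size=0.33, random_state=42
)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```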


Author(s):  
Rohan Pandey ◽  
Vaibhav Gautam ◽  
Ridam Pal ◽  
Harsh Bandhey ◽  
Lovedeep Singh Dhingra ◽  
...  

BACKGROUND: The COVID-19 pandemic has uncovered the potential of digital misinformation in shaping the health of nations. The deluge of unverified information that spreads faster than the epidemic itself is an unprecedented phenomenon that has put millions of lives in danger. Mitigating this 'infodemic' requires strong health messaging systems that are engaging, vernacular, scalable, effective, and able to continuously learn the new patterns of misinformation.
OBJECTIVE: We created WashKaro, a multi-pronged intervention for mitigating misinformation through conversational AI, machine translation, and natural language processing. WashKaro provides the right information, matched against WHO guidelines through AI, and delivers it in the right format in local languages.
METHODS: We theorize (i) an NLP-based AI engine that could continuously incorporate user feedback to improve the relevance of information, (ii) bite-sized audio in the local language to improve penetrance in a country with skewed gender literacy ratios, and (iii) conversational yet interactive AI engagement with users towards increased health awareness in the community.
RESULTS: A total of 5026 people downloaded the app during the study window; of those, 1545 were active users. Our study shows that 3.4 times more females than males engaged with the app in Hindi, the relevance of AI-filtered news content doubled within 45 days of continuous machine learning, and the prudence of the integrated AI chatbot "Satya" increased, demonstrating the usefulness of an mHealth platform for mitigating health misinformation.
CONCLUSIONS: We conclude that a multi-pronged machine learning application delivering vernacular bite-sized audios and conversational AI is an effective approach to mitigating health misinformation.
CLINICALTRIAL: Not applicable.
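One way the information-matching step could look, as a hedged sketch rather than the actual WashKaro pipeline: a user message is matched against WHO-style guideline snippets with TF-IDF cosine similarity. The snippets and query are placeholders; the real system additionally handles Hindi content, audio delivery, and continuous feedback learning.

```python
# Illustrative sketch (not the WashKaro implementation): retrieve the most
# similar guideline snippet for a user question via TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guidelines = [  # placeholder WHO-style snippets
    "Wash your hands frequently with soap and water for at least 20 seconds.",
    "Maintain physical distance from people who are coughing or sneezing.",
    "Masks should cover the nose and mouth and be worn in crowded places.",
]

def best_guideline(user_message):
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(guidelines + [user_message])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    idx = sims.argmax()
    return guidelines[idx], float(sims[idx])

answer, score = best_guideline("how long should I wash my hands?")
print(f"Matched guideline (similarity {score:.2f}): {answer}")
```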


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose roots some trace to Kurt Gödel's unprovable computational statements of 1931, is now often equated with deep learning or machine learning. AI is defined as a computing machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision-making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 48 (4) ◽  
pp. 41-44
Author(s):  
Dena Markudova ◽  
Martino Trevisan ◽  
Paolo Garza ◽  
Michela Meo ◽  
Maurizio M. Munafo ◽  
...  

With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. It is therefore fundamental that the network adopt traffic management policies that ensure an appropriate Quality of Experience to users of RTC applications. A key step is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
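A hedged sketch of the core idea, assuming a bag of contacted domain names per call as input: the domains are treated as text, turned into character n-gram features, and fed to a classifier. The domain lists, labels, and model choice are illustrative assumptions, not the authors' actual feature engineering or system.

```python
# Sketch: classify the RTC application from the domains contacted before a call,
# using character n-gram TF-IDF features. Samples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# each sample: domains contacted before call setup, joined into one "document"
samples = [
    "teams.microsoft.com config.teams.microsoft.com login.microsoftonline.com",
    "statics.teams.cdn.office.net teams.microsoft.com",
    "zoom.us us04web.zoom.us zoomcdn.net",
    "us05web.zoom.us zoom.us",
    "meet.google.com clients4.google.com mtalk.google.com",
    "meet.google.com accounts.google.com",
]
labels = ["teams", "teams", "zoom", "zoom", "meet", "meet"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(samples, labels)

print(model.predict(["web.zoom.us zoomcdn.net"]))  # expected: ['zoom']
```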

