Public parts, resocialized autonomous communal life

2021 ◽  
pp. 147807712110390
Author(s):  
David Rodrigues Silva Dória ◽  
Keshav Ramaswami ◽  
Mollie Claypool ◽  
Gilles Retsin

Commoning embodies the product of social contracts and behaviors between groups of individuals. In the case of social housing and the establishment of physical domains for life, commoning is the intersection of these contracts with the municipal restrictions and policies that permit or prohibit them. Via a platform-based project entitled Public Parts (2020), this article presents positions on the reification of the common through a set of design methodologies and implementations of automation. The platform seeks to subvert typical platform models to decrease ownership, increase access, and produce a new form of communal autonomous life amongst the individuals who constitute the rapidly expanding freelance, work-from-home, and gig economies. Furthermore, this text investigates the consequences of merging domestic space with artificial intelligence by implementing machine learning to reconfigure spaces and programs. The problems that arise from deploying machine learning algorithms involve the collection, usage, and ownership of data. Through the physical design of space, and a central AI that manages the platform and the automated management of space, the core objective of Public Parts is to reify the common through architecture and collectively owned data.

Author(s):  
Peyakunta Bhargavi ◽  
Singaraju Jyothi

The moment we live in today demands the convergence of cloud computing, fog computing, machine learning, and the IoT to explore new technological solutions. Fog computing is an emerging architecture intended to alleviate the network burden on the cloud and the core network by moving resource-intensive functionalities such as computation, communication, storage, and analytics closer to end users. Machine learning is a subfield of computer science and a type of artificial intelligence (AI) that gives machines the ability to learn without explicit programming. The IoT can make decisions and take actions autonomously based on algorithmic sensing of acquired sensor data. These embedded capabilities span the entire spectrum of algorithmic approaches associated with machine learning. Here the authors explore how machine learning methods have been used to deploy object detection and text detection in images, and how they can be incorporated to better fulfill the requirements of fog computing.
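The edge-side inference pattern the abstract describes can be sketched minimally: a model is trained once centrally, then each fog/edge node classifies its own sensor data locally and uplinks only a compact label instead of raw frames. The `edge_infer` name and the use of scikit-learn's bundled digits dataset are illustrative assumptions, not from the paper.

```python
# Minimal sketch: a lightweight classifier running at a (simulated)
# edge node, so raw sensor frames never leave the device.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# "Cloud" step: train the model once, centrally.
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

def edge_infer(frame):
    """Run detection locally at the edge; only the label is uplinked."""
    return int(model.predict(frame.reshape(1, -1))[0])

# Each edge node sends back a compact label instead of the raw image.
labels = [edge_infer(x) for x in X_test[:5]]
accuracy = model.score(X_test, y_test)
```

The point of the sketch is the split of responsibilities: training stays in the cloud, while the bandwidth-heavy inference step moves to the edge.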


2013 ◽  
Vol 4 (3) ◽  
pp. 18-41 ◽  
Author(s):  
Yan Wu ◽  
Robin Gandhi ◽  
Harvey Siy

Those who do not learn from past vulnerabilities are bound to repeat them. Consequently, there have been several research efforts to enumerate and categorize software weaknesses that lead to vulnerabilities. The Common Weakness Enumeration (CWE) is a community-developed dictionary of software weakness types and their relationships, designed to consolidate these efforts. Yet, aggregating and classifying natural language vulnerability reports with respect to weakness standards is currently a painstaking manual effort. In this paper, the authors present a semi-automated process for annotating vulnerability information with semantic concepts that are traceable to CWE identifiers. The authors present an information-processing pipeline to parse natural language vulnerability reports. The resulting terms are used for learning the syntactic cues in these reports that are indicators for corresponding standard weakness definitions. Finally, the results of multiple machine learning algorithms are compared individually as well as collectively to semi-automatically annotate new vulnerability reports.
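The core of such a pipeline can be sketched as text classification: report terms become features, and a model learns which cues indicate which CWE identifier. This is a hedged illustration of the general idea, not the authors' pipeline; the tiny report corpus below is invented for demonstration.

```python
# Illustrative sketch: map free-text vulnerability reports to CWE
# identifiers with TF-IDF features and a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "buffer overflow when copying user input into fixed-size stack array",
    "stack overflow caused by unbounded strcpy of attacker data",
    "SQL injection via unsanitized id parameter in login query",
    "attacker can inject SQL through the search field",
    "cross-site scripting: script tag reflected in error page",
    "reflected XSS in the comment form allows script execution",
]
labels = ["CWE-120", "CWE-120", "CWE-89", "CWE-89", "CWE-79", "CWE-79"]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(reports, labels)

# Annotate a new report with the closest standard weakness type.
predicted = pipeline.predict(
    ["unchecked strcpy into a fixed buffer overflows the stack"])[0]
```

A real system would, as the paper describes, combine several such learners and keep a human in the loop to validate the suggested annotations.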


With the rapid development of artificial intelligence, various machine learning algorithms have been widely applied to the task of football match result prediction and have achieved certain results. However, traditional machine learning methods usually upload the results of previous matches to a cloud server in a centralized manner, which causes problems such as network congestion, server computing pressure, and computing delay. This paper proposes a football match result prediction method based on edge computing and machine learning technology. Specifically, we first extract game data from the results of previous matches to construct the common features and characteristic features, respectively. Then, the feature extraction and classification tasks are deployed to multiple edge nodes. Finally, the results from all the edge nodes are uploaded to the cloud server and fused to make a decision. Experimental results demonstrate the effectiveness of the proposed method.
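The fusion step at the cloud server can be sketched as a simple majority vote over the labels returned by the edge nodes. This is an assumption about the fusion rule for illustration only; the paper does not specify it, and the node votes below are invented.

```python
# Hedged sketch of the cloud-side fusion step: several edge nodes each
# classify a match locally; the cloud server fuses their votes.
from collections import Counter

def fuse_predictions(edge_predictions):
    """Majority vote over per-node labels (home win / draw / away win)."""
    counts = Counter(edge_predictions)
    return counts.most_common(1)[0][0]

# Labels uploaded by three edge nodes for the same fixture.
node_votes = ["home_win", "home_win", "draw"]
decision = fuse_predictions(node_votes)
```

Only the compact per-node labels travel to the cloud, which is exactly how the scheme avoids the congestion of centralized uploads.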


2003 ◽  
Vol 12 (02) ◽  
pp. 241-273 ◽  
Author(s):  
ANA L. C. BAZZAN ◽  
ROGÉRIO DUARTE ◽  
ABNER N. PITINGA ◽  
LUCIANA F. SCHROEDER ◽  
FARLON DE A. SOUTO ◽  
...  

This work reports on the ATUCG environment (Agent-based environmenT for aUtomatiC annotation of Genomes). It consists of three layers, each having several agents in charge of performing repetitive and time-consuming tasks. Layer I aims at automating the tasks behind the process of finding ORFs (Open Reading Frames). Layer II (the core of our approach) is associated with three main tasks: extraction and formatting of data, automatic annotation of data regarding profiles or families of proteins, and generation and validation of rules to automatically annotate the Keywords field in the SWISS-PROT database. Layer III permits the user to check the correctness of the automatic annotation. This environment is being designed with the sequencing of Mycoplasma hyopneumoniae in mind; thus, examples are presented using data from organisms of the Mycoplasmataceae family. We have concentrated development on Layer II because it is the most general layer and because it focuses on machine learning algorithms, a characteristic that is not usual in annotation systems. Results regarding this layer show that with learning (individual or collaborative), agents are able to generate annotation rules that achieve better results than those reported in the literature.
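One common way an agent can generate human-checkable annotation rules, as Layer II does for the Keywords field, is to fit a decision tree and read its branches back out as if-then rules. This is a generic sketch of that idea, not the ATUCG code; the protein features and keyword labels below are invented placeholders.

```python
# Sketch: learn if-then annotation rules from labeled protein features
# with a decision tree, then export them in human-readable form so a
# user (Layer III in the paper's terms) can validate them.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [has_profile_A, has_profile_B, sequence_length_bucket]
X = [[1, 0, 0], [1, 0, 1], [0, 1, 0], [0, 1, 1], [0, 0, 0]]
y = ["Transferase", "Transferase", "Hydrolase", "Hydrolase", "Unknown"]

tree = DecisionTreeClassifier(random_state=0).fit(X, y)

# Human-readable rules an annotation agent could apply or validate.
rules = export_text(
    tree, feature_names=["profile_A", "profile_B", "len_bucket"])
```

The exported rules make the learned annotation logic auditable, which matches the paper's emphasis on validating rules before applying them.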


Breast cancer has become one of the common diseases not only in women but also in a few men. According to research, the mortality rate among females has increased mainly because of breast cancer tumors. One out of every eight women and one out of every thousand men are diagnosed with breast cancer. Breast cancer tumors are mainly classified into two types: benign tumors, which are non-cancerous, and malignant tumors, which are cancerous. To determine which type of tumor a patient has, accurate and early diagnosis is a crucial step. Machine learning (ML) algorithms have been used to develop and train models for classifying the type of tumor. For accurate and better classification, several ML classification algorithms were trained and tested on the collected dataset. Algorithms such as Naïve Bayes, Random Forest, K-Nearest Neighbors, and SVM have already shown good accuracy for tumor classification. When we implemented the Multilayer Perceptron (MLP) algorithm, it gave the best accuracy among all, both during training and testing, i.e., 97%. Exact classification using this model will help doctors diagnose the type of tumor in patients quickly and accurately.


Author(s):  
Peipei Jiang ◽  
Liailun Chen ◽  
Min-Feng Wang

Each language is a system of understanding and skills that allows language users to interact and express thoughts, hypotheses, feelings, wishes, and all that needs to be expressed. Linguistics is the study of these structures in all respects: the composition, usage, and sociology of language in particular are the core of linguistics. Machine learning is the research area that allows machines to learn without being explicitly programmed. In linguistics, the structure of writing is understood to be a foundation for many distinct business applications, and it is probably most useful when incorporated with machine learning methods. Research shows that besides text tagging and algorithm training, there are major problems in the field of big data. This article provides a collaborative approach (transfer learning integrated into a recurrent neural network) to analyze the distinct kinds of writing between the language's linear and non-computational sides, and to enhance granularity. The outcome demonstrates stronger incorporation of granularity into the language from both sides. Comparative results of machine learning algorithms are used to determine the best way to analyze and interpret the structure of the language.


The world is transforming in a digital era. However, the field of medicine has long been resistant to technology. Recently, the advent of newer technologies like machine learning has catalyzed their adoption into healthcare. The blending of technology and medicine is facilitating a wealth of innovation that continues to improve lives. Within the realm of possibility, machine learning can discover various trends in a dataset, and it is practiced globally on various medical conditions to predict results, diagnose, analyze, treat, and recover. Machine learning is aiding the battle against Covid-19. For instance, a face scanner that uses ML can detect whether a person has a fever. Similarly, data from wearable technology like the Apple Watch and Fitbit can be used to detect changes in resting heart rate patterns, which helps in detecting coronavirus. According to a study by the Hindustan Times, the number of cases is rapidly increasing. Careful risk assessments should identify hotspots and clusters, and continued efforts should be made to further strengthen capacities to respond, especially at sub-national levels. The core public health measures for the Covid-19 response remain: rapidly detect, test, isolate, treat, and trace all contacts. The work presented in this paper represents a system that predicts the number of coronavirus cases in the upcoming days as well as the possibility of infection in a particular person based on symptoms. The work focuses on Linear Regression and SVM models for predicting the curve of active cases; SVM is least affected by noisy data and is not prone to overfitting. To diagnose a person, our application poses certain questions that need to be answered. Based on these, a KNN model provides the maximum-likelihood result of a person being infected or not. Tracking and monitoring in the course of such a pandemic helps us be prepared.
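The KNN diagnosis step described above can be sketched simply: questionnaire answers become a binary symptom vector, and KNN returns the majority label among the k nearest past cases. The symptom table below is invented toy data for illustration, not clinical data, and the four symptom columns are assumptions rather than the paper's actual questionnaire.

```python
# Hedged sketch of the questionnaire-based KNN diagnosis step.
from sklearn.neighbors import KNeighborsClassifier

# Columns: [fever, dry_cough, fatigue, loss_of_smell]
X = [
    [1, 1, 1, 1], [1, 1, 0, 1], [1, 0, 1, 1],  # past confirmed cases
    [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0],  # past negative cases
]
y = [1, 1, 1, 0, 0, 0]  # 1 = likely infected, 0 = likely not

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A new respondent reporting fever, dry cough, and loss of smell.
likelihood = knn.predict_proba([[1, 1, 0, 1]])[0][1]
prediction = int(knn.predict([[1, 1, 0, 1]])[0])
```

`predict_proba` exposes the fraction of positive neighbors, which is the "maximum likelihood result" the abstract mentions.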


2018 ◽  
Vol 9 (6) ◽  
pp. 497-506
Author(s):  
Paola Savona

Machine learning algorithms play a significant role in the digital economy. They suggest products and services to clients, select friends and news, give navigation advice to drivers, and make translations. Moreover, learning algorithms are increasingly used to make important decisions about individuals. Companies, for example, rely on machine learning to approve loans, evaluate investments, calculate insurance risks, evaluate workers’ performance, or select people to hire. Governments use it to detect terrorists and prevent future attacks, target citizens or places for police scrutiny, select taxpayers for audit, detect fraud, grant or deny visas, and more. The influence of machine learning on administrative decision-making might grow rapidly in the near future. The paper analyses the opportunities and risks involved in relying on learning algorithms to support or make administrative decisions, with the aim of understanding the challenges that the use of those tools poses to the core principles of the rule of law.


Author(s):  
Duraipandian M. ◽  
Vinothkanna R.

Mobile devices have gained an imperative predominance in our daily routines by keeping us seamlessly connected to the real world. Most mobile devices are built on Android, whose security mechanism is entirely permission-based, controlling applications' access to the core details of the device and the user. Even after understanding the permission system, mobile users are often ignorant of the common threats: swayed by an application's popularity, they proceed with installation unaware of the application developer's targets. The aim of this paper is to devise malware detection with automatic permission granting employing machine learning techniques. Different machine learning methods are engaged in malware detection and analyzed. The results are observed to identify the approaches that best aid in enhancing user awareness and reducing malware threats by detecting malicious applications.
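A common way to apply ML to this problem, sketched here as an illustration rather than the paper's system, is to represent each app by a binary vector of the permissions it requests and train a classifier to flag likely malware. The permission matrix and labels below are invented toy data.

```python
# Illustrative sketch: permission-vector features for malware detection.
from sklearn.ensemble import RandomForestClassifier

PERMISSIONS = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "CAMERA"]

# Rows: apps; 1 = permission requested. Labels: 1 = malware, 0 = benign.
X = [
    [1, 1, 1, 0], [1, 1, 1, 1], [1, 0, 1, 0],  # malware-like profiles
    [0, 0, 1, 0], [0, 0, 1, 1], [0, 1, 0, 0],  # benign-like profiles
]
y = [1, 1, 1, 0, 0, 0]

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Screen a new app that asks for SMS + contacts + internet access.
flagged = int(forest.predict([[1, 1, 1, 0]])[0])
```

Such a screen could run at install time, which is where the paper's goal of raising user awareness before granting permissions comes in.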

