Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning

2019 ◽  
Vol 20 (1) ◽  
pp. 83-121 ◽  
Author(s):  
Mireille Hildebrandt

Abstract: This Article takes the perspective of law and philosophy, integrating insights from computer science. First, I will argue that in the era of big data analytics we need an understanding of privacy that is capable of protecting what is uncountable, incalculable or incomputable about individual persons. To instigate this new dimension of the right to privacy, I expand previous work on the relational nature of privacy, and the productive indeterminacy of human identity it implies, into an ecological understanding of privacy, taking into account the technological environment that mediates the constitution of human identity. Second, I will investigate how machine learning actually works, detecting a series of design choices that inform the accuracy of the outcome, each entailing trade-offs that determine the relevance, validity and reliability of the algorithm’s accuracy for real life problems. I argue that incomputability does not call for a rejection of machine learning per se but calls for a research design that enables those who will be affected by the algorithms to become involved and to learn how machines learn — resulting in a better understanding of their potential and limitations. A better understanding of the limitations that are inherent in machine learning will deflate some of the eschatological expectations, and provide for better decision-making about whether and if so how to implement machine learning in specific domains or contexts. I will highlight how a reliable research design aligns with purpose limitation as core to its methodological integrity. This Article, then, advocates a practice of “agonistic machine learning” that will contribute to responsible decisions about the integration of data-driven applications into our environments while simultaneously bringing them under the Rule of Law. This should also provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.

Author(s):  
Dimitar Christozov ◽  
Katia Rasheva-Yordanova

The article shares the authors' experiences in training bachelor-level students to explore Big Data applications in solving contemporary problems. The article discusses curriculum issues and pedagogical techniques connected to developing Big Data competencies. The following objectives are targeted: the importance and impact of making rational, data-driven decisions in the Big Data era; the complexity of developing and exploring a Big Data application in solving real-life problems; learning skills to adopt and explore emerging technologies; and knowledge and skills to interpret and communicate the results of data analysis by combining domain knowledge with system expertise. The curriculum covers: the two general uses of Big Data analytics applications, which are well distinguished from the point of view of the end user's objectives (presenting and visualizing data via aggregation and summarization [data warehousing: data cubes, dashboards, etc.] and learning from data [data mining techniques]); the organization of data sources, in particular the distinction between Master Data and Operational Data; the Extract-Transform-Load (ETL) process; and informing vs. misinforming, including the issue of over-trust vs. under-trust of obtained analytical results.
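A minimal sketch of the Extract-Transform-Load (ETL) step mentioned in the curriculum might look as follows (Python with pandas; the column names, toy records, and output file are hypothetical illustrations, not material from the course):

# Minimal ETL sketch: extract raw operational records, transform them, and load
# an aggregated summary suitable for a dashboard or data cube.
# Column names and the toy data are hypothetical placeholders.
import pandas as pd

def extract() -> pd.DataFrame:
    # Extract: in practice this would read from an operational source (CSV, database, API).
    return pd.DataFrame({
        "region": ["north", "south", "north", None],
        "amount": [120.0, 80.0, 95.0, 40.0],
    })

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete records and aggregate into a summary table.
    cleaned = raw.dropna(subset=["region", "amount"])
    return cleaned.groupby("region", as_index=False)["amount"].sum()

def load(summary: pd.DataFrame, path: str) -> None:
    # Load: write the summary into the analytical store (here, a CSV file).
    summary.to_csv(path, index=False)

if __name__ == "__main__":
    load(transform(extract()), "sales_summary.csv")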


2020 ◽  
Vol 28 (1) ◽  
pp. 95-108 ◽  
Author(s):  
Daniel Cinalli ◽  
Luis Martí ◽  
Nayat Sanchez-Pi ◽  
Ana Cristina Bicharra Garcia

Abstract: Evolutionary multi-objective optimization algorithms (EMOAs) have been successfully applied to many real-life problems. EMOAs approximate the set of trade-offs between multiple conflicting objectives, known as the Pareto optimal set. Reference point approaches can ease the optimization process by highlighting relevant areas of the Pareto set and support decision makers in arriving at a more confident evaluation. One important drawback of these approaches is that they require in-depth knowledge of the problem being solved in order to function correctly. Collective intelligence has been put forward as an alternative for dealing with such situations. This paper extends some well-known EMOAs to incorporate collective preferences and interactive techniques. In addition, two new preference-based multi-objective optimization performance indicators are introduced in order to analyze the results produced by the proposed algorithms in the comparative experiments carried out.
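To illustrate the reference point idea described above, the following Python sketch filters a set of candidate solutions to the non-dominated ones and then highlights those closest to a decision maker's reference point (a generic illustration under assumed minimization objectives, not the authors' algorithms; the objective values and reference point are hypothetical):

# Sketch of reference-point-guided selection over a set of objective vectors.
# All data are hypothetical; minimization is assumed for every objective.
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in all objectives and strictly better in at least one.
    return np.all(a <= b) and np.any(a < b)

def non_dominated(points):
    # Keep only points not dominated by any other point (Pareto front approximation).
    return np.array([p for i, p in enumerate(points)
                     if not any(dominates(q, p) for j, q in enumerate(points) if j != i)])

def closest_to_reference(front, reference, k=3):
    # Highlight the k front members nearest to the decision maker's reference point.
    distances = np.linalg.norm(front - reference, axis=1)
    return front[np.argsort(distances)[:k]]

if __name__ == "__main__":
    objectives = np.random.rand(50, 2)   # hypothetical bi-objective values
    reference = np.array([0.2, 0.3])     # hypothetical reference point
    front = non_dominated(objectives)
    print(closest_to_reference(front, reference))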


Author(s):  
Paulo Mendes ◽  
José Eugenio Leal

Outsourcing can be a very effective strategy for increasing operational performance and improving customer service while minimizing capital investment, freeing up capital for other important projects that, in line with the company strategy, will increase revenue and profitability. However, when outsourcing is not performed in the right way, as several examples in the marketplace show, it can also decrease performance and hurt customer service, reducing company competitiveness. It is therefore critical to establish a robust Outsourcing Execution Process to reduce the risks of vendor failure due to lack of operational capability, weak performance management, or a conflict of culture between the 3PL and the company, to name just a few possible real-life problems. This chapter provides a broad and up-to-date introduction to transportation and distribution operations and, based on a literature review and the authors' practical experience, reviews several best practices that support outsourcing execution in transportation and distribution operations.


2020 ◽  
Vol 8 (4) ◽  
pp. 421-431
Author(s):  
Nathan Clark ◽  
Kristoffer Albris

The use of digital technologies, social media platforms, and (big) data analytics is reshaping crisis management in the 21st century. In turn, the sharing, collecting, and monitoring of personal and potentially sensitive data during crises has become a central matter of interest and concern which governments, emergency management and humanitarian professionals, and researchers are increasingly addressing. This article asks if these rapidly advancing challenges can be governed in the same ways that data is governed in periods of normalcy. By applying a political realist perspective, we argue that governing data in crises is challenged by state interests and by the complexity of other actors with interests of their own. The article focuses on three key issues: 1) vital interests of the data subject vis-à-vis the right to privacy; 2) the possibilities and limits of an international or global policy on data protection vis-à-vis the interests of states; and 3) the complexity of actors involved in the protection of data. In doing so, we highlight a number of recent cases in which the problems of governing data in crises have become visible.


Author(s):  
Jean Philippe Pierre Décieux

Knowledge co-production is a solution-oriented approach to analysing real-life problems such as making the right decision in a given scenario. The most popular examples come from evidence-based policymaking contexts. Political decisions made in this way rely on specialist expertise co-produced in organisations that can be characterised as Hybrid Fora. However, despite the rise in popularity of Hybrid Fora and evidence-based policymaking processes, only a few studies analyse the factors influencing knowledge co-production in these contexts. The case study presented here addresses this new area of research through a documentary analysis and 11 expert interviews, both analysed via qualitative content analysis. First, the study reconstructs how knowledge is produced within an Expert Group of the European Commission. Second, it reflects on how the produced knowledge is de facto included as “evidence” in the decision-making processes of the relevant policy area. The results of this study show that in this expert group, pragmatic and extra-scientific criteria, such as specific stakes and interests as well as the group hierarchy, controlled the process of knowledge co-production. Moreover, the knowledge produced by the interaction of experts within the examined Expert Group seems to have a more symbolic, policy-orientated function, rather than being specifically used as decision-making evidence.


2022 ◽  
Vol 22 (1) ◽  
pp. 1-28
Author(s):  
Menatalla Abououf ◽  
Shakti Singh ◽  
Hadi Otrok ◽  
Rabeb Mizouni ◽  
Ernesto Damiani

With the advent of mobile crowdsourcing (MCS) systems and their applications, the selection of the right crowd is gaining utmost importance. The increasing variability in the context of MCS tasks makes the selection of not only capable but also willing workers crucial for a high task completion rate. Most existing MCS selection frameworks rely primarily on reputation-based feedback mechanisms to assess the level of commitment of potential workers. Such frameworks select workers with high reputation scores but without any contextual awareness of the workers or the task at the time of selection. This may lead to an unfair selection of workers who will not perform the task. Hence, reputation on its own only gives an approximation of workers' behaviors, since it assumes that workers always behave consistently regardless of the situational context. However, following the concept of cross-situational consistency, where people tend to show similar behavior in similar situations and behave differently in disparate ones, this work proposes a novel recruitment system for MCS based on behavioral profiling. The proposed approach uses machine learning to predict the probability of workers performing a given task, based on their learned behavioral models. Subsequently, a group-based selection mechanism based on a genetic algorithm uses these behavioral models, in combination with a reputation-based model, to recruit a group of workers that maximizes the quality of recruitment for the tasks. Simulations based on a real-life dataset show that considering human behavior in varying situations improves the quality of recruitment achieved by the tasks and their completion confidence when compared with a benchmark that relies solely on reputation.
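The core idea of combining a learned behavioral probability with a reputation score when ranking candidate workers could be sketched roughly as follows (a simplified Python illustration; the weighting, the greedy selection standing in for the genetic algorithm, and the worker data are all hypothetical, not the authors' formulation):

# Sketch: rank candidate workers by a score that combines a model-predicted
# probability of performing the task with a reputation score.
# The weight alpha and all worker data are hypothetical.
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    predicted_probability: float  # output of a behavioral model for this task's context
    reputation: float             # normalized reputation score in [0, 1]

def recruitment_score(w: Worker, alpha: float = 0.6) -> float:
    # Convex combination of the behavioral prediction and the reputation score.
    return alpha * w.predicted_probability + (1 - alpha) * w.reputation

def recruit(workers, group_size):
    # Greedy stand-in for the group-based (genetic-algorithm) selection step.
    return sorted(workers, key=recruitment_score, reverse=True)[:group_size]

if __name__ == "__main__":
    pool = [Worker("w1", 0.9, 0.4), Worker("w2", 0.5, 0.95), Worker("w3", 0.7, 0.7)]
    print([w.name for w in recruit(pool, group_size=2)])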


Energies ◽  
2021 ◽  
Vol 14 (17) ◽  
pp. 5410
Author(s):  
Mahmoud Abdelkader Bashery Abbass ◽  
Mohamed Hamdy

One of the biggest problems in applying machine learning (ML) in the energy and buildings field is ML users' lack of experience in implementing each ML algorithm correctly in real-life applications, because each algorithm has prerequisites for its use and specific problems or applications for which it is suited. Hence, this paper introduces a generic pipeline to guide ML users in this field in selecting the best-fitting algorithm for their particular application and in implementing the selected algorithm correctly to achieve the best performance. The introduced pipeline is built on (1) reviewing the most popular attempts to establish ML pipelines for energy and buildings, noting the drawbacks of each so that they can be avoided in the proposed pipeline; (2) reviewing the most popular ML algorithms in the energy and buildings field and linking them with possible applications in that field in one layout; (3) a full description of the proposed pipeline, explaining how to implement it and its environmental impact in improving energy management systems for different countries; and (4) implementing the pipeline on real data (CBECS) to prove its applicability.
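The kind of generic pipeline described above, from preprocessing through algorithm choice to evaluation, can be sketched minimally as follows (a scikit-learn example on synthetic data; the features, model choice, and metric are illustrative assumptions, not drawn from the paper or the CBECS data):

# Rough sketch of a generic ML pipeline: preprocessing, model fitting, evaluation.
# Synthetic data stands in for building-energy records; everything here is illustrative.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. floor area, occupancy, climate features (hypothetical)
y = X @ np.array([3.0, -1.0, 0.5, 2.0]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),              # prerequisite handling for the chosen algorithm
    ("model", GradientBoostingRegressor()),   # one possible algorithm choice
])
pipeline.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, pipeline.predict(X_test)))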


2021 ◽  
Vol 36 (1) ◽  
pp. 583-589
Author(s):  
Suraya Masrom ◽  
Thuraiya Mohd ◽  
Nur Syafiqah Jamil

Researchers and industry players acknowledge that machine learning applications are useful in assisting humans in solving many kinds of real-life problems, including in the real estate and property industry. In this paper, we present the empirical steps for implementing machine learning approaches in the prediction of green building prices. Green buildings conserve natural resources and reduce the negative impact of building development. This paper reports on the data collection method, the preliminary data analysis with statistical methods, and the experimental implementation of the machine learning models from training and validation through to testing. The results show that tree-based machine learning produced better performance on the green building properties, which was further tested with another five hold-out data points. The testing results show that the tree-based machine learning scheme was able to predict green building prices higher than the observed prices for eight out of the ten cases, within acceptable valuation ranges.
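A minimal sketch of the kind of tree-based price-prediction workflow described, with training, testing, and a small hold-out check, might look as follows (Python with scikit-learn; the synthetic features, prices, and split sizes are hypothetical, not the authors' green-building dataset):

# Sketch: train a tree-based regressor, evaluate it, then check a small hold-out set.
# The data here are synthetic placeholders for green-building property records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                                            # hypothetical property attributes
y = 100 + 20 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2.0, size=200)   # hypothetical prices

# Reserve a small hold-out set, then split the rest into training and testing.
X_rest, X_holdout, y_rest, y_holdout = train_test_split(X, y, test_size=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.2, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("test R2:", r2_score(y_test, model.predict(X_test)))
print("hold-out predicted vs observed:", list(zip(model.predict(X_holdout), y_holdout)))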


2021 ◽  
Vol 5 (5) ◽  
pp. 700-713
Author(s):  
Md. Kowsher ◽  
Imran Hossen ◽  
Anik Tahabilder ◽  
Nusrat Jahan Prottasha ◽  
Kaiser Habib ◽  
...  

Machine learning models are very popular nowadays for providing rigorous solutions to complicated real-life problems. There are three main domains: supervised, unsupervised, and reinforcement learning. Supervised learning mainly deals with regression and classification. Several types of classification algorithms exist, each built on a different basis, and classification performance varies with the dataset velocity and the algorithm selected. In this article, we have focused on developing a model of an angular nature that performs supervised classification. Here, we use two shifting vectors, the Support Direction Vector (SDV) and the Support Origin Vector (SOV), to form a linear function that measures the cosine angle with both the target-class data and the non-target-class data. Considering the target data points, the linear function takes a position that minimizes its angle with the target-class data and maximizes its angle with the non-target-class data. The positional error of the linear function is modelled as a loss function, which is iteratively optimized using the gradient descent algorithm. To justify the acceptability of this method, we have implemented the model on three different standard datasets, where it showed accuracy comparable with existing standard supervised classification algorithms. Doi: 10.28991/esj-2021-01306
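A hedged sketch of an angle-based linear classifier of this general flavor, trained with gradient descent, is given below (a simplification in Python that learns only a direction vector against a fixed origin vector; it is not the authors' exact SDV/SOV formulation, and all data are synthetic):

# Sketch of an angle-based classifier: a direction vector w is pushed toward small
# angles with target-class points and large angles with non-target points, measured
# relative to an origin vector o. Illustrative simplification only.
import numpy as np

def cos_sim(w, v):
    return (w @ v) / (np.linalg.norm(w) * np.linalg.norm(v))

def cos_grad(w, v):
    # Gradient of cos(w, v) with respect to w.
    wn, vn = np.linalg.norm(w), np.linalg.norm(v)
    return v / (wn * vn) - (w @ v) * w / (wn ** 3 * vn)

def fit(X, y, lr=0.1, epochs=200):
    o = X.mean(axis=0)                  # fixed origin vector (simplifying assumption)
    w = np.ones(X.shape[1])             # initial direction vector
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for x, label in zip(X, y):
            v = x - o
            # Decrease the angle (increase cosine) for target points, and vice versa.
            grad += -cos_grad(w, v) if label == 1 else cos_grad(w, v)
        w -= lr * grad / len(X)
    return w, o

def predict(w, o, X):
    return np.array([1 if cos_sim(w, x - o) > 0 else 0 for x in X])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
    y = np.array([1] * 50 + [0] * 50)
    w, o = fit(X, y)
    print("training accuracy:", (predict(w, o, X) == y).mean())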


Author(s):  
Constantine Loum

The challenge of doing research in the social sciences and other disciplines is anchored in the dilemma of finding the right research design to pursue an inquiry path leading to trustworthy evidence. Designing Research in the Social Sciences (Maggetti, Gilardi, & Radaelli, 2013) is an elucidative narrative, adding a strong voice in helping novice and seasoned researchers redirect their thoughts and research actions into meaningful efforts to find balance (trade-offs) in research implementation. This new tome is not the usual ‘cookbook’ in the research design arena; rather, it focuses the mind on appreciating the craft of research, from understanding the social sciences, concepts, causal analysis, and related statistical designs to the features that make up the world of social research. It is a new dawn.

