What the Machine Saw: some questions on the ethics of computer vision and machine learning to investigate human remains trafficking

Author(s):  
Damien Huffer ◽  
Cristina Wood ◽  
Shawn Graham

This article represents the next step in our ongoing effort to understand the online human remains trade: how, why and where it exists on social media. It expands upon initial research into the 'rhetoric' and structure behind the use and manipulation of images and text by this collecting community, topics explored using Google Inception v.3, TensorFlow, etc. (Huffer and Graham 2017; 2018). The current research goes beyond that work to address the ethical and moral dilemmas that can confound the use of new technology to classify and sort thousands of images. The categories used to 'train' the machine are determined by the researchers themselves, but to what extent can current image classification methods be broken to create false positives or false negatives when attempting to classify images taken from social media sales records as either old, authentic items or recent forgeries made using remains sourced from unknown locations? What potential do they have to be exploited by dealers or forgers as a way to 'authenticate the market'? Analysing the data obtained when 'scraping' images or text relevant to cultural property trafficking of any kind involves the use of machine learning and neural network analysis, the ethics of which are themselves complicated. Here, we discuss these issues around two case studies: the ongoing repatriation case of Abraham Ulrikab, and an example of what it looks like when the classifier is deliberately broken.
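The 'deliberate breaking' of a classifier that the second case study refers to can be illustrated with a standard adversarial-perturbation technique. The sketch below is not the authors' method; it is a minimal example, assuming TensorFlow 2.x and the pretrained Inception v.3 model the abstract mentions, of the fast gradient sign method (FGSM), which nudges an image just enough to produce a false classification.

```python
# Minimal FGSM sketch (illustrative, not the authors' pipeline): an
# imperceptible perturbation flips the label a pretrained classifier assigns.
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return an adversarial copy of `image` (shape [1, 299, 299, 3], in [-1, 1])."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(true_label, prediction)
    gradient = tape.gradient(loss, image)
    # Step in the direction that maximizes the loss, then clip to valid range.
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, -1.0, 1.0)
```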


Heritage ◽  
2020 ◽  
Vol 3 (2) ◽  
pp. 208-227 ◽  
Author(s):  
Shawn Graham ◽  
Damien Huffer ◽  
Jeff Blackadar

It is possible to purchase human remains via Instagram. We present an experiment using computer vision and automated annotation of over ten thousand photographs from Instagram, connected with the buying and selling of human remains, in order to develop a distant view of the sensory affect of these photos: What macroscopic patterns exist, and how do these relate to the self-presentation of these individual vendors? Using Microsoft’s Azure cloud computing and machine learning services, we annotate and then visualize the co-occurrence of tags as a series of networks, giving us that macroscopic view. Vendors are clearly trying to mimic ‘museum’-like experiences, with differing degrees of effectiveness. This approach may therefore be useful for even larger-scale investigations of this trade beyond this single social media platform.
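For readers curious about the mechanics, the following is a hedged sketch of the pipeline the abstract describes: tagging photographs with Microsoft's Azure Computer Vision service and turning tag co-occurrence into a network. The endpoint, key, file names, and confidence threshold are placeholders, and the published study's exact settings may differ.

```python
# Hedged sketch: Azure image tagging -> tag co-occurrence network.
from itertools import combinations

import networkx as nx
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

graph = nx.Graph()
for path in ["photo_001.jpg", "photo_002.jpg"]:  # stand-ins for the ~10k photos
    with open(path, "rb") as image:
        tags = [t.name for t in client.tag_image_in_stream(image).tags
                if t.confidence > 0.5]
    # Every pair of tags appearing on the same photo strengthens an edge.
    for a, b in combinations(sorted(set(tags)), 2):
        weight = graph[a][b]["weight"] + 1 if graph.has_edge(a, b) else 1
        graph.add_edge(a, b, weight=weight)

nx.write_gexf(graph, "tag_cooccurrence.gexf")  # e.g. for visualization in Gephi
```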


Author(s):  
Navid Asadizanjani ◽  
Sachin Gattigowda ◽  
Mark Tehranipoor ◽  
Domenic Forte ◽  
Nathan Dunn

Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (ICs) infiltrate the global market. There is an ongoing effort in experimental and national labs inside the United States to detect and prevent such counterfeits as efficiently as possible. However, a piece is still missing: the means to automatically detect counterfeit ICs and properly keep records of those detected. Here, we introduce a web application that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.
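As an illustration of pairing image processing with machine learning for defect detection, here is a minimal sketch using HOG texture features and a support vector machine. The paper's actual features and models are not specified here, so the file names and labels are hypothetical.

```python
# Illustrative sketch: texture features from IC package photos + an SVM.
import numpy as np
from skimage.feature import hog
from skimage.io import imread
from skimage.transform import resize
from sklearn.svm import SVC

def ic_features(path):
    """Load an IC package photo in grayscale and extract HOG texture features."""
    image = resize(imread(path, as_gray=True), (128, 128))
    return hog(image, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Hypothetical labelled corpus of known-authentic and known-counterfeit parts.
paths = ["ic_0001.png", "ic_0002.png", "ic_0003.png", "ic_0004.png"]
labels = np.array([0, 0, 1, 1])  # 0 = authentic, 1 = counterfeit

X = np.array([ic_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:1]))  # classify a suspect part's photo
```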


2020 ◽  
Author(s):  
Shreya Reddy ◽  
Lisa Ewen ◽  
Pankti Patel ◽  
Prerak Patel ◽  
Ankit Kundal ◽  
...  

As bots become more prevalent and smarter in the modern age of the internet, it becomes ever more important that they be identified and removed. Recent research has held up machine learning methods as accurate and the gold standard of bot identification on social media. Unfortunately, machine learning models do not come without drawbacks, such as lengthy training times, difficult feature selection, and overwhelming pre-processing tasks. To overcome these difficulties, we propose a blockchain framework for bot identification. At the current time it is unknown how this method will perform, but it serves to highlight an overwhelming gap in research in this area.
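Since the abstract only proposes the framework, the following is a speculative sketch of its simplest ingredient: a hash-chained ledger in which each block records a bot-identification verdict. All field names are invented for illustration.

```python
# Speculative sketch of a hash-chained ledger of bot-identification verdicts.
import hashlib
import json
import time

def make_block(account, verdict, prev_hash):
    """Create a ledger entry whose hash commits to the previous entry."""
    block = {
        "account": account,      # social media account under review (invented field)
        "verdict": verdict,      # e.g. "bot" or "human" (invented field)
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

chain = [make_block("genesis", "n/a", "0" * 64)]
chain.append(make_block("@suspect_account", "bot", chain[-1]["hash"]))

# Tampering with an earlier block breaks every later prev_hash link,
# which is the property that would make such a shared record auditable.
```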


2021 ◽  
Vol 40 (5) ◽  
pp. 9361-9382 ◽  
Author(s):  
Naeem Iqbal ◽  
Rashid Ahmad ◽  
Faisal Jamil ◽  
Do-Hyeun Kim

Quality prediction plays an essential role in the business outcome of a product and, given its business interest, has been studied extensively in the last few years. With the advancement of machine learning (ML) techniques and the advent of robust and sophisticated ML algorithms, it has become feasible to analyze the factors influencing the success of movies. This paper presents a hybrid-features prediction model based on pre-released and social media data features, using multiple ML techniques to predict the quality of pre-released movies for effective business resource planning. This study aims to integrate pre-released and social media data features to form a hybrid features-based movie quality prediction (MQP) model. The proposed model comprises two experiments: (i) predicting movie quality using the original set of features and (ii) developing a subset of features based on the principal component analysis (PCA) technique to predict a movie's success class. This work employs and implements different ML-based classification models, such as Decision Tree (DT), Support Vector Machines with linear and quadratic kernels (L-SVM and Q-SVM), Logistic Regression (LR), Bagged Tree (BT) and Boosted Tree (BOT), to predict the quality of movies. Different performance measures are utilized to evaluate the proposed ML-based classification models, such as Accuracy (AC), Precision (PR), Recall (RE), and F-Measure (FM). The experimental results reveal that the BT and BOT classifiers performed accurately and produced high accuracy compared to the other classifiers (DT, LR, L-SVM, and Q-SVM), achieving accuracies of 90.1% and 89.7% respectively, which shows the efficiency of the proposed MQP model compared to other state-of-the-art techniques. The proposed work is also compared with existing prediction models, and the experimental results indicate that the proposed MQP model performs slightly better. The experimental results will help the movie industry formulate business resources effectively, such as investment, number of screens, and release date planning.
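A minimal sketch of the paper's second experiment might look like the following: reduce the feature set with principal component analysis, then compare several of the named classifiers on the same metrics the authors report. The data here is synthetic; the real model would use the pre-released and social media features described above.

```python
# Sketch under assumptions: PCA-reduced features, several classifiers compared.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "DT": DecisionTreeClassifier(),
    "L-SVM": SVC(kernel="linear"),
    "Q-SVM": SVC(kernel="poly", degree=2),  # quadratic kernel
    "LR": LogisticRegression(max_iter=1000),
    "BT": BaggingClassifier(DecisionTreeClassifier()),  # bagged trees
    "BOT": GradientBoostingClassifier(),                # boosted trees
}
for name, model in models.items():
    pipeline = make_pipeline(PCA(n_components=10), model)
    pipeline.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, pipeline.predict(X_test)))
```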


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Suppawong Tuarob ◽  
Poom Wettayakorn ◽  
Ponpat Phetchai ◽  
Siripong Traivijitkhun ◽  
Sunghoon Lim ◽  
...  

The explosion of online information with the recent advent of digital technology in information processing, information storing, information sharing, natural language processing, and text mining techniques has enabled stock investors to uncover market movement and volatility from heterogeneous content. For example, a typical stock market investor reads the news, explores market sentiment, and analyzes technical details in order to make a sound decision prior to purchasing or selling a particular company's stock. However, capturing a dynamic stock market trend is challenging owing to high fluctuation and the non-stationary nature of the stock market. Although existing studies have attempted to enhance stock prediction, few have provided a complete decision-support system for investors to retrieve real-time data from multiple sources and extract insightful information for sound decision-making. To address the above challenge, we propose a unified solution for data collection, analysis, and visualization in real-time stock market prediction to retrieve and process relevant financial data from news articles, social media, and company technical information. We aim to provide not only useful information for stock investors but also meaningful visualization that enables investors to effectively interpret storyline events affecting stock prices. Specifically, we utilize an ensemble stacking of diversified machine-learning-based estimators and innovative contextual feature engineering to predict the next day's stock prices. Experimental results show that our proposed stock forecasting method outperforms a traditional baseline with an average mean absolute percentage error of 0.93. Our findings confirm that leveraging an ensemble scheme of machine learning methods with contextual information improves stock prediction performance. Finally, our study could be further extended to a wide variety of innovative financial applications that seek to incorporate external insight from contextual information such as large-scale online news articles and social media data.
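The ensemble-stacking idea can be sketched as follows: diversified base regressors whose predictions a meta-learner combines, scored with the mean absolute percentage error (MAPE) the paper reports. This is an assumption-laden illustration using scikit-learn, with synthetic stand-ins for the news, social media, and technical features.

```python
# Hedged sketch: stacked regressors scored with MAPE on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=20, noise=5.0, random_state=0)
y = y + 200  # shift the synthetic "prices" positive so MAPE is well defined
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("svr", SVR()),
    ],
    final_estimator=Ridge(),  # meta-learner over the base predictions
)
stack.fit(X_train, y_train)
print("MAPE:", mean_absolute_percentage_error(y_test, stack.predict(X_test)))
```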


Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt order in society for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell fake news apart from true news. We use the LIAR and ISOT datasets. We extract highly correlated news data from the entire dataset using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ autoencoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
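Two of the steps the abstract describes can be sketched as follows: (i) extracting highly correlated articles with cosine similarity over text vectors, and (ii) training an autoencoder whose reconstruction behaviour can help separate in-distribution from out-of-distribution news. TF-IDF stands in for the paper's word embeddings, and the toy corpus is invented.

```python
# Sketch under assumptions: cosine-similarity filtering + a small autoencoder.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from tensorflow import keras

corpus = [  # invented toy corpus
    "senate passes the budget bill after long debate",
    "the budget bill passes the senate following debate",
    "miracle cure discovered, doctors hate this trick",
]
X = TfidfVectorizer().fit_transform(corpus).toarray()

# (i) keep articles whose similarity to some other article exceeds a threshold
sim = cosine_similarity(X)
np.fill_diagonal(sim, 0.0)
correlated = X[sim.max(axis=1) > 0.3]

# (ii) an autoencoder trained to reconstruct the correlated articles;
# out-of-distribution items tend to reconstruct poorly
dim = X.shape[1]
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(dim,)),
    keras.layers.Dense(8, activation="relu"),    # bottleneck encoding
    keras.layers.Dense(dim, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(correlated, correlated, epochs=50, verbose=0)

errors = np.mean((X - autoencoder.predict(X)) ** 2, axis=1)
print("reconstruction error per article:", errors)
```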

