A Digital Diagnostic Aide for Skincare: The Role of Computer Vision and Machine Learning in Revealing Skin Texture Changes

Author(s):  
Jaya Shankar Vuppalapati ◽  
Santosh Kedari ◽  
Anitha Ilapakurti ◽  
Chandrasekar Vuppalapati ◽  
Sharat Kedari ◽  
...  
AI & Society ◽  
2020 ◽  
Author(s):  
Nicolas Malevé

Abstract Computer vision aims to produce an understanding of a digital image’s content and to generate or transform images through software. Today, a significant share of computer vision algorithms rely on machine-learning techniques that require large amounts of data assembled into collections known as data sets. To build these data sets, a large population of precarious workers labels and classifies photographs around the clock at high speed. For computers to learn how to see, a scale articulates macro and micro dimensions: the millions of images culled from the internet with the few milliseconds given to the workers to perform a task for which they are paid a few cents. This paper engages in detail with the production of this scale and the labour it relies on: its elaboration. This elaboration does not only require hands and retinas; it also crucially mobilises the photographic apparatus. To understand the specific character of the scale created by computer vision scientists, the paper compares it with a previous enterprise of scaling, Malraux’s Le Musée Imaginaire, where photography was used as a device to undo the boundaries of the museum’s collection and open it to unlimited access to the world’s visual production. Drawing on Douglas Crimp’s argument that the “musée imaginaire”, a hyperbole of the museum, relied simultaneously on the active role of the photographic apparatus for its existence and on its negation, the paper identifies a similar problem in computer vision’s understanding of photography. The double dismissal of the role played by the workers and of the agency of the photographic apparatus in the elaboration of computer vision foregrounds the inherent fragility of the edifice of machine vision and the need to rethink its scale.
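
The macro/micro articulation the abstract describes can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions only; the abstract gives orders of magnitude (millions of images, moments per task, a few cents per task), not exact numbers.

# All figures are illustrative assumptions, not data from the paper.
num_images = 10_000_000        # "millions of images culled from the internet" (assumed count)
time_per_task_s = 1.0          # assumed time a worker spends per labelling task, in seconds
cents_per_task = 1.0           # "a few cents" per task (assumed value)

total_worker_hours = num_images * time_per_task_s / 3600
total_cost_usd = num_images * cents_per_task / 100

print(f"Aggregate labelling time: {total_worker_hours:,.0f} worker-hours")
print(f"Aggregate labelling cost: ${total_cost_usd:,.0f}")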


2021 ◽  
Vol 110 ◽  
pp. 103854
Author(s):  
Nelson Silva ◽  
Dajie Zhang ◽  
Tomas Kulvicius ◽  
Alexander Gail ◽  
Carla Barreiros ◽  
...  

2020 ◽  
Author(s):  
Marc Philipp Bahlke ◽  
Natnael Mogos ◽  
Jonny Proppe ◽  
Carmen Herrmann

Heisenberg exchange spin coupling between metal centers is essential for describing and understanding the electronic structure of many molecular catalysts, metalloenzymes, and molecular magnets with potential applications in information technology. We explore the machine-learnability of exchange spin coupling, which has not been studied previously. We employ Gaussian process regression since it can potentially deal with small training sets (as likely associated with the rather complex molecular structures required for exploring spin coupling) and since it provides uncertainty estimates (“error bars”) along with predicted values. We compare a range of descriptors and kernels for 257 small dicopper complexes and find that a simple descriptor based on chemical intuition, consisting only of copper-bridge angles and copper-copper distances, clearly outperforms several more sophisticated descriptors when it comes to extrapolating towards larger, experimentally relevant complexes. Exchange spin coupling turns out to be about as easy to learn as the polarizability, whereas dipole moments are much harder to learn. The strength of the sophisticated descriptors lies in their ability to linearize structure-property relationships, to the point that a simple linear ridge regression performs just as well as the kernel-based machine-learning model for our small dicopper data set. The superior extrapolation performance of the simple descriptor is unique to exchange spin coupling, reinforcing the crucial role of choosing a suitable descriptor and highlighting the interesting question of the role of chemical intuition vs. systematic or automated selection of features for machine learning in chemistry and materials science.
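
As a minimal sketch of the approach described above (not the authors' actual pipeline), the following uses scikit-learn's Gaussian process regression on the simple, chemically intuitive descriptor of copper-bridge-copper angles and copper-copper distances. All numerical values, the kernel choice, and the hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: each row is [Cu-bridge-Cu angle (degrees), Cu...Cu distance (Angstrom)];
# targets are exchange spin couplings J (cm^-1). All values are made up for illustration.
X_train = np.array([
    [97.5, 2.95],
    [101.2, 3.02],
    [95.8, 2.90],
    [104.0, 3.10],
    [99.3, 2.98],
])
y_train = np.array([-120.0, -310.0, -60.0, -450.0, -200.0])

# RBF kernel with one length scale per descriptor dimension, plus a noise term.
kernel = RBF(length_scale=[5.0, 0.1]) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X_train, y_train)

# Predict J for a new (hypothetical) dicopper complex; GPR also returns an uncertainty estimate.
X_new = np.array([[100.0, 3.00]])
mean, std = gpr.predict(X_new, return_std=True)
print(f"Predicted J: {mean[0]:.1f} cm^-1 (+/- {std[0]:.1f})")

The abstract's point about linearization could be probed in the same setting by swapping the Gaussian process for sklearn.linear_model.Ridge on the more sophisticated descriptors and comparing prediction errors.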


2020 ◽  
Author(s):  
Siva Kumar Jonnavithula ◽  
Abhilash Kumar Jha ◽  
Modepalli Kavitha ◽  
Singaraju Srinivasulu

Author(s):  
Xin (Shane) Wang ◽  
Jun Hyun (Joseph) Ryoo ◽  
Neil Bendle ◽  
Praveen K. Kopalle

Data ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 12
Author(s):  
Helder F. Castro ◽  
Jaime S. Cardoso ◽  
Maria T. Andrade

The ever-growing capabilities of computers have enabled the pursuit of Computer Vision through Machine Learning (MLCV). ML tools require large amounts of information to learn from (ML datasets). These are costly to produce but have received little attention regarding standardization. This prevents the cooperative production and exploitation of these resources, impedes countless synergies, and hinders ML research. No global view exists of the MLCV dataset tissue, and acquiring one is fundamental to enabling standardization. We provide an extensive survey of the evolution and current state of MLCV datasets (1994 to 2019) for a set of specific CV areas, as well as a quantitative and qualitative analysis of the results. Data were gathered from online scientific databases (e.g., Google Scholar, CiteSeerX). We reveal the heterogeneous plethora of datasets that makes up the MLCV dataset tissue; their continuous growth in volume and complexity; the specificities of how their media and metadata components have evolved across a range of aspects; and the fact that MLCV progress requires the construction of a global, standardized MLCV “library” for structuring, manipulating, and sharing these resources. Accordingly, we formulate a novel interpretation of this dataset collective as a global tissue of synthetic cognitive visual memories and define the immediately necessary steps to advance its standardization and integration.
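
As a purely illustrative sketch of the kind of standardized record such a “library” could hold, the following defines a minimal dataset descriptor. The field names and the example entry are assumptions for illustration, not a schema proposed in the survey.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MLCVDatasetRecord:
    name: str                       # dataset name
    year: int                       # year of release
    cv_area: str                    # e.g. "object recognition", "action recognition"
    media_types: List[str]          # e.g. ["image"], ["video"]
    num_items: int                  # number of media items
    annotation_types: List[str] = field(default_factory=list)  # e.g. ["class labels", "bounding boxes"]
    access_url: str = ""            # where the dataset can be obtained

# Hypothetical record illustrating how entries could be structured, compared, and shared.
example = MLCVDatasetRecord(
    name="ExampleScenes",
    year=2019,
    cv_area="scene recognition",
    media_types=["image"],
    num_items=100_000,
    annotation_types=["class labels"],
)
print(example)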


Author(s):  
Doris Xin ◽  
Eva Yiwei Wu ◽  
Doris Jung-Lin Lee ◽  
Niloufar Salehi ◽  
Aditya Parameswaran