Cortical Microcircuits from a Generative Vision Model

2018 ◽  
Author(s):  
Dileep George ◽  
Alexander Lavin ◽  
J. Swaroop Guntupalli ◽  
David Mely ◽  
Nick Hay ◽  
...  

Abstract
Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence. The theoretical setting of Bayesian inference has been suggested as a framework for understanding cortical computation. Based on a recently published generative model for visual inference (George et al., 2017), we derive a family of anatomically instantiated and functional cortical circuit models. In contrast to simplistic models of Bayesian inference, the underlying generative model's representational choices are validated with real-world tasks that require efficient inference and strong generalization. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints. The derived model suggests precise functional roles for the feedforward, feedback, and lateral connections observed in different laminae and columns, and assigns a computational role for the path through the thalamus.

2020 ◽  
Author(s):  
Dileep George ◽  
Miguel Lázaro-Gredilla ◽  
Wolfgang Lehrach ◽  
Antoine Dedieu ◽  
Guangyao Zhou

Abstract
Understanding the information processing roles of cortical circuits is an outstanding problem in neuroscience and artificial intelligence. Theory-driven efforts will be required to tease apart the functional logic of cortical circuits from the vast amounts of experimental data on cortical connectivity and physiology. Although the theoretical setting of Bayesian inference has been suggested as a framework for understanding cortical computation, making precise and falsifiable biological mappings requires models that tackle the challenge of real-world tasks. Based on a recent generative model, Recursive Cortical Networks, which demonstrated excellent performance on visual task benchmarks, we derive a family of anatomically instantiated and functional cortical circuit models. Efficient inference and generalization guided the representational choices in the original computational model. The cortical circuit model is derived by systematically comparing the computational requirements of this model with known anatomical constraints. The derived model suggests precise functional roles for the feed-forward, feedback, and lateral connections observed in different laminae and columns, assigns a computational role for the path through the thalamus, predicts the interactions between blobs and inter-blobs, and offers an algorithmic explanation for the innate inter-laminar connectivity between clonal neurons within a cortical column. The model also explains several visual phenomena, including the subjective contour effect and the neon-color spreading effect, with circuit-level precision. Our work paves a new path forward in understanding the logic of cortical and thalamic circuits.
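The circuit mapping in both versions of this work rests on treating perception as probabilistic inference in a generative model. Purely as an illustration of that framing, and not of the Recursive Cortical Network itself, the sketch below runs max-product belief propagation on a toy three-variable chain; the potentials, shapes, and variable names are all invented for the example.

```python
# Toy illustration of Bayesian (max-product) inference on a chain-structured
# generative model. This is NOT the paper's model; all numbers are made up.
import numpy as np

# Unary evidence for 3 hidden variables, each with 2 states (rows: variables).
unary = np.array([[0.9, 0.1],
                  [0.4, 0.6],
                  [0.2, 0.8]])

# Shared pairwise compatibility favouring neighbours that agree.
pairwise = np.array([[0.8, 0.2],
                     [0.2, 0.8]])

n = len(unary)
fwd = [np.ones(2) for _ in range(n)]   # message arriving at node i from i-1
bwd = [np.ones(2) for _ in range(n)]   # message arriving at node i from i+1

for i in range(1, n):                  # forward (max-product) pass
    fwd[i] = np.max((unary[i - 1] * fwd[i - 1])[:, None] * pairwise, axis=0)
for i in range(n - 2, -1, -1):         # backward (max-product) pass
    bwd[i] = np.max((unary[i + 1] * bwd[i + 1])[None, :] * pairwise, axis=1)

# Unnormalised max-marginals and the MAP assignment.
beliefs = unary * np.stack(fwd) * np.stack(bwd)
print("MAP assignment:", beliefs.argmax(axis=1))   # [0 1 1] for these potentials
```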


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Albert T. Young ◽  
Kristen Fernandez ◽  
Jacob Pfau ◽  
Rasika Reddy ◽  
Nhat Anh Cao ◽  
...  

Abstract
Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world non-curated images by applying computational “stress tests”. Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false positive or negative predictions for 6.5–22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinic readiness.
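A minimal sketch of the kind of "stress test" described: re-running a trained classifier on simple transformations of the same lesion image and flagging predictions that flip across the decision threshold. The `predict_malignant` callable, the 0.5 threshold, and the transform set are placeholders, not the authors' code.

```python
# Hypothetical rotation/flip stress test for a melanoma classifier.
from PIL import Image

def stress_test(image_path, predict_malignant, threshold=0.5):
    """Return per-variant malignancy scores and whether the label flips."""
    img = Image.open(image_path).convert("RGB")
    variants = {
        "original": img,
        "rot90": img.rotate(90, expand=True),
        "rot180": img.rotate(180, expand=True),
        "flipped": img.transpose(Image.FLIP_LEFT_RIGHT),
    }
    scores = {name: predict_malignant(v) for name, v in variants.items()}
    labels = {name: s >= threshold for name, s in scores.items()}
    unstable = len(set(labels.values())) > 1   # any disagreement across variants
    return scores, unstable

# Usage (model function is assumed): scores, unstable = stress_test("lesion.jpg", my_model_fn)
# A lesion would count toward the reported 6.5-22% instability if `unstable` is True.
```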


2021 ◽  
pp. 146144482199380
Author(s):  
Donghee Shin

How much do anthropomorphisms influence users' perceptions of whether they are conversing with a human or an algorithm in a chatbot environment? We develop a cognitive model using the constructs of anthropomorphism and explainability to explain user experiences with conversational journalism (CJ) in the context of chatbot news. We examine how users perceive anthropomorphic and explanatory cues, and how these stimuli influence user perception of and attitudes toward CJ. Anthropomorphic explanations of why and how certain items are recommended afford users a sense of humanness, which then affects trust and emotional assurance. Perceived humanness triggers a two-step flow of interaction: it defines the baseline against which users judge the qualities of CJ, and it shapes their intention to interact with chatbots. We develop practical implications relevant to chatbots and ascertain the significance of humanness as a social cue in CJ. We offer a theoretical lens through which to characterize humanness as a key mechanism of human–artificial intelligence (AI) interaction, whose eventual goal is for humans to perceive AI as human. Our results contribute to a better understanding of human–chatbot interaction in CJ by illustrating how humans interact with chatbots and explaining why humans accept this form of CJ.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by building on existing work in their respective fields.
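The survey catalogues formal fairness definitions. As one concrete, standard example of such definitions (not taken from the paper itself), the sketch below computes statistical parity difference and equal-opportunity difference from predicted labels and a binary protected attribute; the toy data and function names are invented.

```python
# Two widely used group-fairness measures, shown on invented toy data.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(yhat=1 | group=1) - P(yhat=1 | group=0); 0 means demographic parity."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical protected attribute
print(statistical_parity_difference(y_pred, group))        # 0.25
print(equal_opportunity_difference(y_true, y_pred, group)) # ~0.33
```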


Diagnosis ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Taro Shimizu

Abstract
Diagnostic errors are an internationally recognized patient safety concern, and leading causes are faulty data gathering and faulty information processing. Obtaining a full and accurate history from the patient is the foundation for timely and accurate diagnosis. A key concept underlying ideal history acquisition is “history clarification,” meaning that the history is clarified to be depicted as clearly as a video, with the chronology being accurately reproduced. A novel approach is presented to improve history-taking, involving six dimensions: Courtesy, Control, Compassion, Curiosity, Clear mind, and Concentration, the ‘6 C’s’. We report a case that illustrates how the 6C approach can improve diagnosis, especially in relation to artificial intelligence tools that assist with differential diagnosis.


2021 ◽  
Vol 22 ◽  
pp. 101573
Author(s):  
Pranav Ajmera ◽  
Amit Kharat ◽  
Rajesh Botchu ◽  
Harun Gupta ◽  
Viraj Kulkarni

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Andre Esteva ◽  
Katherine Chou ◽  
Serena Yeung ◽  
Nikhil Naik ◽  
Ali Madani ◽  
...  

Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles that must be overcome for real-world clinical deployment of these technologies.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sachin Modgil ◽  
Shivam Gupta ◽  
Rébecca Stekelorum ◽  
Issam Laguir

Purpose
COVID-19 has pushed many supply chains to re-think and strengthen their resilience and how it can help organisations survive in difficult times. Considering the availability of data and the huge number of supply chains that had their weak links exposed during COVID-19, the objective of the study is to employ artificial intelligence to develop supply chain resilience to withstand extreme disruptions such as COVID-19.
Design/methodology/approach
We adopted a qualitative approach for interviewing respondents using a semi-structured interview schedule through the lens of organisational information processing theory. A total of 31 respondents from the supply chain and information systems field shared their views on employing artificial intelligence (AI) for supply chain resilience during COVID-19. We used a process of open, axial and selective coding to extract interrelated themes and proposals that resulted in the establishment of our framework.
Findings
An AI-facilitated supply chain helps systematically develop resilience in its structure and network. Resilient supply chains in dynamic settings and during extreme disruption scenarios are capable of recognising (sensing risks, degree of localisation, failure modes and data trends), analysing (what-if scenarios, realistic customer demand, stress test simulation and constraints), reconfiguring (automation, re-alignment of a network, tracking effort, physical security threats and control) and activating (establishing operating rules, contingency management, managing demand volatility and mitigating supply chain shock) operations quickly.
Research limitations/implications
As the present research was conducted through semi-structured qualitative interviews to understand the role of AI in supply chain resilience during COVID-19, the respondents may have an inclination towards a specific role of AI due to their limited exposure.
Practical implications
Supply chain managers can utilise data to embed the required degree of resilience in their supply chains by considering the proposed framework elements and phases.
Originality/value
The present research contributes a framework that presents a four-phased, structured and systematic platform considering the required information processing capabilities to recognise, analyse, reconfigure and activate phases to ensure supply chain resilience.


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remain a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.


2013 ◽  
Vol 718-720 ◽  
pp. 2422-2426
Author(s):  
Ming Gou ◽  
Jing Yang

The test database of students' health is analyzed with an artificial intelligence expert system, used as an information-processing tool, in order to create a scientific model of student exercise prescription. The aim is to start from the study of each individual student and thereby realize an optimized development of every student's quality potential.
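The abstract does not describe the rule base itself; purely as an illustration of the expert-system approach it names, the sketch below maps a student's fitness-test record to exercise recommendations through a few if-then rules. Every field, threshold, and rule is hypothetical.

```python
# Purely illustrative rule-based sketch of an exercise-prescription expert
# system; the paper's actual rule base and database schema are not reproduced.

def prescribe(record):
    """Map one student's fitness-test record to exercise recommendations."""
    recommendations = []
    if record.get("endurance_run_score", 100) < 60:
        recommendations.append("aerobic base training 3x/week")
    if record.get("bmi", 22) >= 28:
        recommendations.append("low-impact cardio plus dietary counselling")
    if record.get("sit_ups_per_minute", 40) < 25:
        recommendations.append("core strength circuit 2x/week")
    if not recommendations:
        recommendations.append("maintain current activity level")
    return recommendations

# Usage with a hypothetical record from the student health test database:
student = {"endurance_run_score": 55, "bmi": 29, "sit_ups_per_minute": 30}
print(prescribe(student))
```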

