Intersubjectivity and Materiality: A Multimodal Perspective

2016 ◽  
Vol 5 (1) ◽  
pp. 1-14 ◽  
Author(s):  
Jesse Pirini

Abstract Researchers seeking to analyse how intersubjectivity is established and maintained face significant challenges. The purpose of this article is to provide theoretical/methodological tools that begin to address these challenges. I develop these tools by applying several concepts from multimodal (inter)action analysis to an excerpt taken from the beginning of a tutoring session, drawn from a wider data set of nine one-to-one tutoring sessions. Focusing on co-produced higher-level actions as an analytic site of intersubjectivity, I show that lower-level actions that co-constitute a higher-level action can be delineated into tiers of materiality. I identify three tiers of materiality: durable, adjustable and fleeting. I introduce the theoretical/methodological tool

2009 ◽  
Vol 19 (03) ◽  
pp. 383-397 ◽  
Author(s):  
ANNE BENOIT ◽  
YVES ROBERT ◽  
ERIC THIERRY

In this paper, we explore the problem of mapping linear chain applications onto large-scale heterogeneous platforms. A series of data sets enter the input stage and progress from stage to stage until the final result is computed. An important optimization criterion in such a framework is the latency, or makespan, which measures the response time of the system to process a single data set entirely. For such applications, which are representative of a broad class of real-life applications, we can consider one-to-one mappings, in which each stage is mapped onto a single processor. However, to reduce the communication cost, it is natural to group stages into intervals. The interval mapping problem can be solved in a straightforward way if the platform has homogeneous communications: the whole chain is grouped into a single interval, which in turn is mapped onto the fastest processor. But the problem becomes harder on a fully heterogeneous platform. Indeed, we prove the NP-completeness of this problem. Furthermore, we prove that neither the interval mapping problem nor the similar one-to-one mapping problem can be approximated in polynomial time within any constant factor (unless P = NP).
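The latency of an interval mapping can be illustrated concretely. A minimal sketch with hypothetical per-stage workloads, processor speeds and link bandwidths (a simplified cost model, not necessarily the one used in the paper):

```python
def interval_latency(workloads, data_sizes, intervals, speeds, bandwidths):
    """Latency (makespan) of an interval mapping of a linear chain.

    workloads[i]  -- computation cost of stage i
    data_sizes[k] -- data volume sent from interval k to interval k+1
    intervals     -- list of (first_stage, last_stage) pairs, inclusive
    speeds[k]     -- speed of the processor assigned to interval k
    bandwidths[k] -- bandwidth of the link feeding interval k+1
    """
    latency = 0.0
    for k, (first, last) in enumerate(intervals):
        latency += sum(workloads[first:last + 1]) / speeds[k]  # compute time
        if k + 1 < len(intervals):  # communication to the next interval
            latency += data_sizes[k] / bandwidths[k]
    return latency

workloads = [4.0, 2.0, 6.0, 3.0]
# Homogeneous-communication case: the whole chain on the fastest processor
single = interval_latency(workloads, [], [(0, 3)], [5.0], [])
# Two intervals on two processors, paying one inter-processor transfer
split = interval_latency(workloads, [10.0], [(0, 1), (2, 3)], [5.0, 3.0], [2.0])
```

With a slow link, splitting the chain can easily be worse than the single-interval mapping, which is why heterogeneous communications make the problem hard.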


2020 ◽  
Author(s):  
Maria-Anna Trapotsi ◽  
Ian Barrett ◽  
Lewis Mervin ◽  
Avid M. Afzal ◽  
Noé Sturm ◽  
...  

The understanding of the Mechanism-of-Action (MoA) of compounds and the prediction of potential drug targets play an important role in small-molecule drug discovery. The aim of this work was to compare chemical and cell morphology information for bioactivity prediction. The comparison was performed using bioactivity data from the ExCAPE database, image data from the Cell Painting data set (the largest publicly available data set of cell images, with approximately 30,000 compound perturbations) and Extended Connectivity Fingerprints (ECFPs), using the multitask Bayesian Matrix Factorisation (BMF) approach Macau. We found that BMF Macau and Random Forest (RF) performance was overall similar when ECFP fingerprints were used as compound descriptors. However, BMF Macau outperformed RF in 155 out of 224 target classes (69.20%) when image data was used as compound information. Using BMF Macau, 100 (about 45%) and 90 (about 40%) of the 224 targets were predicted with high predictive performance (AUC > 0.8) with ECFP data and image data as side information, respectively. Some targets were better predicted with image data as side information, such as β-catenin, and others with fingerprint-based side information, such as proteins belonging to the G-Protein Coupled Receptor 1 family, which could be rationalized from the underlying data distributions in each descriptor domain. In conclusion, both cell morphology changes and structural chemical information carry information about compound bioactivity, which is also partially complementary, and can hence contribute to in silico mechanism-of-action analysis.
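The AUC > 0.8 criterion used to flag well-predicted targets is straightforward to reproduce. A minimal sketch using the rank-sum (Mann-Whitney) identity for the AUC; the per-target values shown are hypothetical illustrations, not the paper's results:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-target AUCs (illustrative values only)
per_target_auc = {"beta-catenin": 0.83, "GPCR family 1 member": 0.74}
well_predicted = [t for t, a in per_target_auc.items() if a > 0.8]
```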


2020 ◽  
Vol 39 (5) ◽  
pp. 6733-6740 ◽  
Author(s):  
Zeliang Zhang

Artificial intelligence technology has been widely applied in big data analysis tasks such as data classification. In this paper, the application of the support vector machine (SVM) method from machine learning to multi-class classification problems was analyzed. To improve classification performance, an improved one-to-one SVM multi-classification method was designed by combining SVM with the K-nearest neighbor (KNN) method. The method was then tested on the UCI public data set, the Statlog statistical data set and actual data. On the UCI and Statlog data sets, the overall classification accuracies of the one-to-many SVM, the one-to-one SVM and the improved one-to-one SVM were 72.5%, 77.25% and 91.5%, respectively. On transformer fault data, the total classification accuracies of the neural network, the decision tree, the basic one-to-one SVM, the directed-acyclic-graph improved one-to-one SVM, the fuzzy-decision improved one-to-one SVM and the improved one-to-one SVM proposed in this study were 83.98%, 84.55%, 74.07%, 81.5%, 82.68% and 92.9%, respectively, demonstrating that the improved one-to-one SVM has good reliability. This study provides a theoretical basis for the application of machine learning methods to big data analysis.
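The hybrid idea, one-vs-one voting with KNN used where the vote is ambiguous, can be sketched independently of any SVM library. Here the pairwise classifiers are abstracted as callables, and the tie-breaking rule is our own reconstruction of how such a combination might work, not the paper's exact algorithm:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3, classes=None):
    """Plain k-nearest-neighbour vote, optionally restricted to `classes`."""
    pool = [(sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
            for xi, yi in zip(train_X, train_y)
            if classes is None or yi in classes]
    pool.sort(key=lambda item: item[0])  # nearest first (squared distance)
    return Counter(yi for _, yi in pool[:k]).most_common(1)[0][0]

def ovo_with_knn(pairwise, train_X, train_y, x, k=3):
    """One-vs-one majority vote over pairwise classifiers (callables that
    return the winning class for sample x); if several classes tie for
    the most votes, fall back to KNN restricted to the tied classes."""
    votes = Counter(clf(x) for clf in pairwise)
    best = max(votes.values())
    tied = [c for c, v in votes.items() if v == best]
    if len(tied) == 1:
        return tied[0]
    return knn_predict(train_X, train_y, x, k, classes=set(tied))

# Toy training data for the KNN fallback
train_X = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 10), (10, 11)]
train_y = [0, 0, 1, 1, 2, 2]
```

The classic weakness of one-vs-one voting is exactly this kind of circular tie (A beats B, B beats C, C beats A), which is where a local KNN decision can help.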


Data ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 89 ◽  
Author(s):  
Ren

The Collatz conjecture is also known as the 3x + 1 conjecture. To verify the conjecture, we designed an algorithm that can output the reduced dynamics of an integer (the 3x + 1 and x/2 computations that occur from a starting integer to the first integer smaller than it) and its original dynamics (from a starting integer to 1). Notably, the starting integer has no upper bound: extremely large integers about 100,000 bits long, e.g., 2^100000 − 1, can be verified for the Collatz conjecture, far beyond the current verification bound (about 2^60). We analyze the properties of these data (e.g., reduced dynamics) and discover the following laws: a reduced dynamics is periodic, with period equal to its length; and the count of x/2 steps equals the smallest integer not less than the count of (3x + 1)/2 steps times ln(1.5)/ln(2). Besides, we observe that all integers are partitioned regularly, half and half, iteratively as reduced dynamics lengthen; thus, given a reduced dynamics, a proposed algorithm can compute the residue class that exhibits it. This creates a one-to-one mapping between a reduced dynamics and a residue class. These observations from data reveal properties of reduced dynamics, which are proved mathematically in our other papers (see references). If it can be proved that every integer has a reduced dynamics, then every integer will have an original dynamics (i.e., the Collatz conjecture will be true). The data set includes the reduced dynamics of all odd positive integers in [3, 99999999] congruent to 3 modulo 4, the original dynamics of some extremely large integers, and all computer source code in C that implements our proposed algorithms for generating the data (i.e., reduced or original dynamics).
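The reduced dynamics described above can be computed directly, since Python's arbitrary-precision integers impose no upper bound on the starting integer. A minimal sketch (note: the paper counts 3x + 1 together with the halving that must follow it as a single (3x + 1)/2 step, whereas this sketch records the two steps separately):

```python
def reduced_dynamics(n):
    """Sequence of steps from starting integer n down to the first value
    smaller than n: 'I' marks a 3x + 1 step, 'O' marks an x/2 step.
    Python's unbounded integers allow inputs such as 2**100000 - 1,
    though the descent for such inputs may take a long time."""
    assert n >= 2
    steps, x = [], n
    while x >= n:
        if x % 2:                 # odd: apply 3x + 1
            x = 3 * x + 1
            steps.append('I')
        else:                     # even: halve
            x //= 2
            steps.append('O')
    return ''.join(steps)
```

For example, starting at 3 the trajectory 3 → 10 → 5 → 16 → 8 → 4 → 2 first drops below 3 at 2, so the reduced dynamics is "IOIOOO".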


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1785 ◽  
Author(s):  
Liu Shuai ◽  
Liu Yuanning ◽  
Zhu Xiaodong ◽  
Huo Guang ◽  
Wu Zukang ◽  
...  

Due to the unsteady morphology of heterogeneous irises generated by a variety of devices and environments, traditional statistical-learning or cognitive-learning methods designed for a single iris source are not effective. Traditional iris recognition divides the whole process into several statistically guided steps, which cannot capture the correlations between those steps. The size and situational-classification constraints of existing iris data sets make it difficult to meet the requirements of learning methods under a single deep learning framework. Therefore, aiming at one-to-one iris certification scenarios, this paper proposes a heterogeneous iris one-to-one certification method for universal sensors based on quality fuzzy inference and a multi-feature entropy fusion lightweight neural network. The method is divided into an evaluation module and a certification module. The evaluation module allows different devices to design a quality fuzzy concept inference system and an iris quality knowledge concept construction mechanism, transforming human logical cognition into digital concepts and selecting appropriate concepts to judge iris quality according to different quality requirements, so as to obtain a recognizable iris. The certification module is a lightweight neural network based on statistical learning ideas and a multi-source feature fusion mechanism. The information entropy of the iris feature label was used to set entropy-based feature category labels and to design the certification module's functions according to those labels. As the requirements for the number and quality of irises change, the category labels in the certification module were dynamically adjusted through a feedback learning mechanism. This paper uses iris data collected from three different sensors in the JLU (Jilin University) iris library. The experimental results show that, for lightweight multi-state irises, the above problems are ameliorated to a certain extent by this method.


Rheumatology ◽  
2021 ◽  
Vol 60 (Supplement_1) ◽  
Author(s):  
Alice Berry ◽  
Susan Bridgewater ◽  
Bryan Abbott ◽  
Jo Adams ◽  
Emma Dures

Abstract Background/Aims  Patients with inflammatory arthritis report fatigue as a primary symptom that affects everyday life. FREE-IA (Fatigue - Reducing its Effects through individualised support Episodes in Inflammatory Arthritis) is a feasibility study of a brief intervention (2-4 sessions of 20-30 minutes) designed to reduce fatigue impact. The intervention, designed with patients and health professionals, is delivered by rheumatology practitioners in one-to-one sessions, following training and using a manual. The aim of this process evaluation was to understand the perspectives of patients and practitioners in FREE-IA. Methods  One-to-one telephone interviews were conducted with patients who had received the intervention and practitioners who had delivered it. Interviews were audio-recorded, transcribed and anonymised. An inductive thematic analysis approach was used to identify and analyse patterns within each data set. Results  Twenty-two patients and eight practitioners across the five sites participated. We identified four patient and three practitioner themes. Patient themes: Collaborative, non-judgemental consultations: participants reported positive relationships in which their fatigue was validated, and they were able to reflect. They expressed their preference for a responsive, flexible approach to sessions, rather than a rigid, ‘protocolised’ approach. Relevant and useful, but not ground-breaking: participants appreciated the opportunity to tailor content to their individual priorities. They found it helpful to visualise fatigue and identified daily diaries as useful. Although the content was not seen as ground-breaking, it provided focus. Insights and self-awareness: sessions increased participants’ awareness of lifestyle factors and patterns influencing their fatigue, which increased their sense of control and confidence to manage fatigue.
Degrees of openness to change: sessions prompted some participants to engage in positive behaviour change or make plans for changes. However, some participants expressed frustration, explaining that it was not the right time because their lives were complicated. Practitioner themes: Engagement with the intervention: practitioners liked training face-to-face with peers and their enjoyment of the intervention increased with experience of delivery. However, for practitioners with extensive experience of providing fatigue support, the low level of treatment intensity and the manualised approach limited the perceived usefulness of the intervention. Research versus clinical practice: practitioners expressed concern about fitting sessions into clinic appointments, and it was often a challenge to offer patients a follow-up session within the proposed two-week time frame. Collaborating with patients: practitioners reported that many patients were willing to try the tools and strategies. While some practitioners followed the manual in a linear way, others used it more flexibly. Conclusion  There is potential for this brief fatigue intervention to benefit patients. Future research will focus on flexibility to fit with local services and creating educational learning resources for practitioners to use in a range of contexts. Disclosure  A. Berry: None. S. Bridgewater: None. B. Abbott: None. J. Adams: None. E. Dures: None.


2018 ◽  
Author(s):  
Alessio Basti ◽  
Marieke Mur ◽  
Nikolaus Kriegeskorte ◽  
Vittorio Pizzella ◽  
Laura Marzetti ◽  
...  

Abstract Most connectivity metrics in neuroimaging research reduce multivariate activity patterns in regions of interest (ROIs) to one dimension, which leads to a loss of information. Importantly, it prevents us from investigating the transformations between patterns in different ROIs. Here, we applied linear estimation theory to robustly estimate the linear transformations between multivariate fMRI patterns with a cross-validated Tikhonov regularisation approach. We derived three novel metrics that describe different features of these voxel-by-voxel mappings: goodness-of-fit, sparsity and pattern deformation. The goodness-of-fit describes the degree to which the patterns in an output region can be described as a linear transformation of the patterns in an input region. The sparsity metric, which relies on a Monte Carlo procedure, was introduced to test whether the transformation mostly consists of one-to-one mappings between voxels in different regions. Furthermore, we defined a metric for pattern deformation, i.e. the degree to which the transformation rotates or rescales the input patterns. As a proof of concept, we applied these metrics to an event-related fMRI data set of four subjects that has been used in previous studies. We focused on the transformations from early visual cortex (EVC) to inferior temporal cortex (ITC), fusiform face area (FFA) and parahippocampal place area (PPA). Our results suggest that the estimated linear mappings are able to explain a significant amount of variance in the three output ROIs. The transformation from EVC to ITC shows the highest goodness-of-fit, and those from EVC to FFA and PPA show the expected preference for faces and places, as well as for animate and inanimate objects, respectively. The pattern transformations are sparse, but sparsity is lower than would be expected for one-to-one mappings, suggesting the presence of one-to-few voxel mappings. ITC, FFA and PPA patterns are not simple rotations of an EVC pattern, indicating that the corresponding transformations amplify or dampen certain dimensions of the input patterns. While our results are based on only a small number of subjects, they show that our pattern transformation metrics can describe novel aspects of multivariate functional connectivity in neuroimaging data.
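The core estimation step, a Tikhonov-regularised linear mapping between two pattern matrices, can be sketched with NumPy. This is a minimal sketch: the regularisation strength alpha would be chosen by cross-validation in the paper's approach, and the variance-based fit measure here is our own simplification of its goodness-of-fit metric:

```python
import numpy as np

def fit_linear_mapping(X, Y, alpha=1.0):
    """Tikhonov-regularised estimate of T in Y ≈ X @ T, where rows of X
    and Y are trial-wise fMRI patterns and columns are voxels of the
    input and output ROI respectively (ridge-regression normal equations)."""
    n_vox = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_vox), X.T @ Y)

def goodness_of_fit(X, Y, T):
    """Fraction of variance in the output patterns explained by X @ T."""
    residual = Y - X @ T
    return 1.0 - residual.var() / Y.var()

# Synthetic check: recover a known mapping from noiseless data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))       # 50 trials, 5 input voxels
T_true = rng.normal(size=(5, 4))   # mapping to 4 output voxels
Y = X @ T_true
T_hat = fit_linear_mapping(X, Y, alpha=1e-8)
```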


2014 ◽  
Vol 3 (2) ◽  
Author(s):  
Jesse Pirini

Abstract During the activities of everyday life, social actors always produce multiple simultaneous higher-level actions. These necessarily operate at different levels of attention and awareness. Modal density is a methodological tool that can be used to analyse the attention/awareness of social actors in relation to the higher-level actions they produce, positioning actions in the foreground, midground and background of attention. Using modal density to analyse an opening and a closing in high school tutoring sessions, I show social actors transitioning into and out of producing the same higher-level actions at the foreground of their attention/awareness. Through this analysis I identify two potentially unique aspects of one-to-one tutoring. First, I show one way that a tutor helps a student take on the practices of being a good student; second, I show the influence that students have over tutoring. I argue that movements into and out of a shared focus of attention are potentially useful sites for the analysis of social interaction.


1994 ◽  
Vol 144 ◽  
pp. 139-141 ◽  
Author(s):  
J. Rybák ◽  
V. Rušin ◽  
M. Rybanský

Abstract Fe XIV 530.3 nm coronal emission line observations have been used to estimate the rotation of the green solar corona. A homogeneous data set, created from measurements of the world-wide coronagraphic network, has been examined with the help of correlation analysis to reveal the averaged synodic rotation period as a function of latitude and time over the epoch from 1947 to 1991. The values of the synodic rotation period obtained for this epoch are 27.52 ± 0.12 days for the whole range of latitudes and 26.95 ± 0.21 days for the latitude band ±30°. A differential rotation of the green solar corona, with local period maxima around ±60° and a minimum of the rotation period at the equator, was confirmed. No clear cyclic variation of the rotation has been found for the examined epoch, but monotonic trends over some time intervals are present. A detailed investigation of the original data and their correlation functions has shown that the existence of sufficiently reliable tracers is not evident for the whole set of examined data. This should be taken into account in future, more precise estimations of the green corona rotation period.
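The correlation-analysis idea, recovering a rotation period as the lag at which a daily intensity series best correlates with a shifted copy of itself, can be sketched on synthetic data. The lag window and the toy cosine signal below are our own assumptions, not the study's actual procedure or data:

```python
import numpy as np

def rotation_period(series, lag_range=(20, 35)):
    """Synodic rotation period estimated as the lag (in days) at which a
    daily intensity series best correlates with a shifted copy of itself,
    searched over a plausible window of lags."""
    s = np.asarray(series, dtype=float)
    s = s - s.mean()
    lags = list(range(*lag_range))
    corrs = [np.corrcoef(s[:-lag], s[lag:])[0, 1] for lag in lags]
    return lags[int(np.argmax(corrs))]

# A synthetic tracer signal rotating with a 27-day period
days = np.arange(365)
signal = np.cos(2 * np.pi * days / 27.0)
```

On real coronagraphic data, the peak correlation would be far weaker, which is the abstract's point about the scarcity of sufficiently reliable tracers.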

