cognitive tool
Recently Published Documents


TOTAL DOCUMENTS

141
(FIVE YEARS 28)

H-INDEX

14
(FIVE YEARS 2)

Author(s):  
Nadegda Biyushkina ◽  
Sergey Kodan

The authors address one of the problems of interaction between the natural sciences and the humanities: the use of the theoretical provisions of synergetics and of the theories of dissipative structures and turbulence in studying problems of the history of state and law through the prism of a bifurcation approach. The article discusses general conceptual issues in understanding bifurcation and the bifurcation approach as tools for studying social processes and institutions. Special attention is paid to the methodological design and the possibilities of its use as a cognitive tool in research practices.


LingVaria ◽  
2021 ◽  
Vol 16 (2(32)) ◽  
pp. 71-80
Author(s):  
Justyna Winiarska

Is It True that “If You Run Ahead of Yourself, You Cannot Go Very Far”? Image Schemata and Aphorisms

The author uses a cognitive tool called image schemata to analyse aphorisms. The schemata originate in early bodily experience and make it possible to ground the phenomenon of linguistic meaning there. The aphorism is defined not only as a linguistic fact but as a conceptual structure based on an axiological clash, which results from profiling opposite values in the schemata used. Considering the language-values relationship, the article adopts a cognitive linguistics approach, which claims that valuation is an immanent part of symbolic language units and must not be relegated to the area of pragmatics. Following Krzeszowski’s concept, the author assumes that preconceptual schemata interact with the SCALE schema. The hearer or reader of a self-contradictory expression must reinterpret it using metaphorical meanings, which are readily available thanks to conceptual metaphors that include image schemata in their source domains.


Author(s):  
Keke Wu

Data visualization leverages the human visual system to enhance cognition: it helps a person quickly and accurately see the trends, outliers, and patterns in data. Yet using visualization requires a viewer to read abstract imagery, estimate statistics, and retain information. These processes typically function differently for people with Intellectual and Developmental Disabilities (IDD) and have created a barrier that keeps them from accessing data. Preliminary findings from our graphical perception experiment suggest that people with IDD use different strategies to reason with data and are more sensitive to the design of a data visualization than non-IDD populations. This article discusses several implications of that study and lays out actionable steps toward turning data visualization into a universal cognitive tool for people with varying cognitive abilities.


2021 ◽  
Author(s):  
Tharunya Danabal ◽  
Neethi Sarah John ◽  
Abhijeet Pramod Ghawade ◽  
Pranjal Padharinath Ahire

Abstract

The focus of this work is on developing a cognitive tool that predicts the most frequent HSE hazards with the highest potential severity levels. The tool identifies these risks by running a natural language processing algorithm on HSE leading- and lagging-indicator reports submitted to an oilfield services company's global HSE reporting system. The purpose of the tool is to prioritize proactive actions and provide focus to raise workforce awareness. A natural language processing algorithm was developed to identify priority HSE risks based on potential severity levels and frequency of occurrence. The algorithm uses vectorization, compression, and clustering methods to categorize the risks by potential severity and frequency using a formulated risk index methodology. In the pilot study, a user interface was developed to configure the frequency and the number of prioritized HSE risks communicated from the tool to employees who opted to receive the information in a given location. From this pilot study using data reported in the company's online HSE reporting system, the algorithm successfully identified five priority HSE risks across different hazard categories based on the risk index. Using a high volume of reporting data, the risk index factored in multiple coefficients, such as severity levels, frequency, and cluster tightness, to prioritize the HSE risks. The observations at each stage of the developed algorithm are as follows (a code sketch of the pipeline appears after this abstract):

- In the data cleaning stage, all stop words (such as a, and, the) were removed, followed by tokenization to divide the text of the HSE reports into tokens and remove punctuation.
- In the vectorization stage, vectors were formed using the Term Frequency - Inverse Document Frequency (TF-IDF) method.
- In the compression stage, an autoencoder removed noise from the input data.
- In the agglomerative clustering stage, HSE reports with similar words were grouped into clusters, and the number of clusters generated per category ranged from three to five.

The novelty of this approach is its ability to prioritize a location's HSE risks using an algorithm built on natural language processing techniques. This cognitive tool treats reported HSE information as data, identifying and flagging priority HSE risks by factoring in the frequency of similar reports and their associated severity levels. The proof of concept has demonstrated the potential of the tool. The next stage would be to test its predictive capabilities for injury prevention.
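Because the abstract describes the pipeline stage by stage, a minimal sketch of its shape may help. It uses scikit-learn; the report snippets, cluster count, and severity weights are invented placeholders, and truncated SVD stands in for the autoencoder described in the paper.

```python
from collections import Counter

from sklearn.cluster import AgglomerativeClustering
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical report snippets; the real input would be the company's HSE reports.
reports = [
    "worker slipped on wet walkway near pump room",
    "slip hazard reported on walkway after rain",
    "forklift reversed without spotter in laydown area",
    "vehicle moved in yard with no banksman present",
    "dropped object fell from scaffold deck",
]

# Cleaning + vectorization: the vectorizer lowercases, tokenizes, strips
# punctuation, and removes English stop words before computing TF-IDF.
X = TfidfVectorizer(stop_words="english").fit_transform(reports)

# Compression: the paper uses an autoencoder; truncated SVD is a simpler
# stand-in that likewise maps sparse TF-IDF vectors to a dense,
# low-dimensional representation.
X_dense = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)

# Agglomerative clustering: group reports that use similar vocabulary.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X_dense)

# Risk index sketch: weight each cluster by report frequency times an
# assumed severity coefficient (the paper also factors in cluster tightness).
severity = {0: 2, 1: 3, 2: 4}  # hypothetical severity level per cluster
freq = Counter(labels)
risk_index = {int(c): freq[c] * severity[c] for c in freq}
print(sorted(risk_index.items(), key=lambda kv: -kv[1]))
```

On real data the cluster count and severity coefficients would come from the paper's risk index methodology rather than being fixed by hand as they are here.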


2021 ◽  
Vol 53 (8S) ◽  
pp. 385-385
Author(s):  
Abigail H. Feder ◽  
Nathan Kegel ◽  
Alicia Trbovich ◽  
Shawn R. Eagle ◽  
Alicia Kissinger-Knox ◽  
...  

2021 ◽  
pp. 004711782110339
Author(s):  
Gustav Meibauer

Following scholarship on IR’s ‘historical turn’ as well as on neorealism and neoclassical realism, this article finds fault in particular with neorealism’s implicit reliance on the historically contingent but incompletely conceptualised transmission of systemic factors into state behaviour. Instead, it suggests that neoclassical realism (NCR) is well suited to leveraging ‘history’ in systematic and general explanation. The article interrogates two routes towards a historically sensitive NCR (intervening variables and structural modifiers) and how they enable different operationalisations of ‘history’ as a sequence of events, a cognitive tool, or a collective narrative. The first route suggests that history underpins concepts and variables currently used by neoclassical realists. Here, history is more easily operationalised and allows a clearer view of learning and emulation processes. It is also more clearly scoped, and therefore less ‘costly’ in terms of paradigmatic distinctiveness. The second route, in which history modifies structural incentives and constraints, is more theoretically challenging, especially in terms of differentiating NCR from constructivist approaches, but lends itself to theorising systemic change. Both routes provide fruitful avenues for realist theorising, can serve to emancipate NCR from neorealism in IR, and foster cross-paradigmatic dialogue. Examining how ‘history’ can be leveraged in realism allows interrogating how other ‘mainstream’, positivist approaches can and should leverage historical contingency, context, and evidence to explain international processes and outcomes.


2021 ◽  
Vol 5 (1) ◽  
pp. 1-13
Author(s):  
Maxim Polyakov ◽  
Igor Khanin ◽  
Gennadiy Shevchenko ◽  
Vladimir Bilozubenko

Due to the large volumes of empirical digitized data, a critical challenge is to identify their hidden and unobvious patterns, making it possible to gain new knowledge. To make efficient use of data mining (DM) methods, one must know their capabilities and the limits of their application as a cognitive tool. The paper aims to specify the capabilities and limits of DM methods within the methodology of scientific cognition. This will enhance the efficiency of these DM methods both for experts in the field and for professionals in other fields who analyze empirical data. It is proposed to supplement the existing classification of cognitive levels with the level of the empirical regularity (ER), or provisional hypothesis. If an ER is generated using a DM software algorithm, it can be called a man-machine hypothesis. Thereby, the place of DM in the classification of the levels of empirical cognition is determined. The paper draws up a scheme illustrating the relationship between the cognitive levels, which supplements the well-known schemes of their classification, demonstrates the maximum capabilities of DM methods, and also shows the possibility of a transition from practice to the scientific method through the generation of ERs, and further from ERs to hypotheses, and from hypotheses to the scientific method. In terms of the methodology of scientific cognition, the most critical finding is that the limitation of any DM method is the level of the ER: as a result of applying any software developed on the basis of DM methods, the level of cognition achieved is the ER level.
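A toy illustration of the paper's point may make it concrete: a mining algorithm can surface an empirical regularity, but turning that ER into tested knowledge remains a human, scientific-method step. The data below are synthetic and the variable names are invented.

```python
# Synthetic data with a hidden linear pattern; a DM algorithm can detect
# the regularity, but cannot explain or validate it.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 5, 500)                 # hypothetical predictor
sales = 3.0 * temperature + rng.normal(0, 10, 500)   # hidden linear relation

# The "DM step": the algorithm surfaces a strong correlation in the data.
r = np.corrcoef(temperature, sales)[0, 1]
print(f"empirical regularity found: r = {r:.2f}")

# Per the paper, this is where DM stops: the ER (a man-machine hypothesis)
# still has to be explained, generalized, and tested to become knowledge.
```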


Author(s):  
Jin Woo Lee ◽  
Shanna R. Daly ◽  
Aileen Huang-Saad ◽  
Gabriella Rodriguez ◽  
Quinton DeVries ◽  
...  

2021 ◽  
Vol 273 ◽  
pp. 11042
Author(s):  
Natalia Kryukova ◽  
Elena Aleksandrova ◽  
Elena Isakova

The article presents an ongoing study of visual metaphors in terms of a cognitive approach. The authors view visual metaphor as a cognitive tool that structures the perception of the world. The perception of visual metaphors is analysed by means of multimodal transcription, which makes it possible to decipher the semiotic codes that produce meaning. The multimodal transcript is made by applying a system of 14 semiotic codes represented by the verbal and non-verbal elements of the visual metaphor. The metaphors are divided into motionless and motion ones, depending on the type of channel engaged in perceiving and processing the information. Depending on the type of visual metaphor, different semiotic codes are transcribed in the structure of the metaphor.
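As a data structure, a multimodal transcript of this kind might look like the minimal sketch below. The abstract does not enumerate the paper's 14 semiotic codes, so the code names and values here are hypothetical examples, not the authors' system.

```python
# A minimal sketch of a multimodal transcript record; the semiotic code
# names and example values are hypothetical, not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class MetaphorTranscript:
    source: str                    # where the visual metaphor appears
    kind: str                      # "motionless" or "motion"
    codes: dict[str, str] = field(default_factory=dict)  # semiotic codes


transcript = MetaphorTranscript(
    source="print advertisement",
    kind="motionless",
    codes={
        "verbal_text": "slogan anchoring the image",       # hypothetical code
        "colour": "red dominates, signalling urgency",     # hypothetical code
        "composition": "product centred, eye-level angle", # hypothetical code
    },
)
print(transcript.kind, list(transcript.codes))
```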

