Abstract— Credit card fraud is a serious problem for e-commerce retailers, with UK merchants reporting losses of $574.2M in 2020. As a result, effective fraud detection systems must be in place to ensure that payments are processed securely in an online environment. According to the literature, the detection of credit card fraud is challenging due to dataset imbalance (genuine versus fraudulent transactions), real-time processing requirements, and the dynamic behavior of fraudsters and customers. This paper proposes that machine learning could be an effective solution for combating credit card fraud. Research suggests that machine learning techniques can play a role in overcoming the identified challenges, both directly and indirectly, while ensuring a high detection rate of fraudulent transactions. Even though both supervised and unsupervised machine learning algorithms have been suggested, the flaws in both methods point to the necessity for hybrid approaches.
Event-driven neuromorphic imagers have a number of attractive properties, including low power consumption, high dynamic range, the ability to detect fast events, low memory consumption, and low bandwidth requirements. One of the biggest challenges with using event-driven imagery is that the field of event data processing is still embryonic. In contrast, decades of effort have been invested in the analysis of frame-based imagery. Hybrid approaches for applying established frame-based analysis techniques to event-driven imagery have been studied since event-driven imagers came into existence. However, the process of forming frames from event-driven imagery has not been studied in detail. This work presents a principled digital coded exposure approach for forming frames from event-driven imagery that is inspired by the physics exploited in a conventional camera featuring a shutter. The technique described in this work provides a fundamental tool for understanding the temporal information content that contributes to the formation of a frame from event-driven imagery data. Event-driven imagery allows an arbitrary virtual digital shutter function to be applied on a pixel-by-pixel basis to form the final frame, enabling careful control of the spatio-temporal information captured in the frame. Furthermore, unlike with a conventional physical camera, event-driven imagery can be formed into any variety of possible frames in post-processing, after the data is captured, and the coded-exposure virtual shutter functions can assume arbitrary values, including positive, negative, real, and complex values. The coded exposure approach also enables applications of industrial interest, such as digital stroboscopy, without any additional hardware.
The ability to form frames from event-driven imagery in a principled manner opens up new possibilities for applying conventional frame-based image processing techniques to event-driven imagery.
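The coded-exposure idea above can be sketched as a per-pixel weighted accumulation of events, where each event's contribution is weighted by a virtual shutter function evaluated at its timestamp. The event-tuple layout and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def coded_exposure_frame(events, shutter, height, width):
    """Accumulate events into a frame, weighting each event by a
    virtual shutter function evaluated at its timestamp.
    `events` is assumed to be (timestamp, x, y, polarity) tuples;
    the frame is complex because the shutter may take complex values."""
    frame = np.zeros((height, width), dtype=complex)
    for t, x, y, polarity in events:
        frame[y, x] += polarity * shutter(t)
    return frame

# A boxcar shutter reproduces a conventional 10 ms exposure window;
# other choices (flutter codes, sinusoids) are equally valid.
box = lambda t: 1.0 if 0.0 <= t < 0.01 else 0.0

events = [(0.002, 3, 5, +1), (0.004, 3, 5, +1), (0.02, 1, 1, -1)]
frame = coded_exposure_frame(events, box, height=8, width=8)
# frame[5, 3] accumulates both in-window events; the event at t=0.02
# falls outside the shutter window and contributes nothing.
```

A sinusoidal or complex-exponential shutter in place of `box` would realize the stroboscopy-style applications mentioned above, with no change to the accumulation loop.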
The automated generation of radiology reports from X-rays has tremendous potential to enhance the clinical diagnosis of diseases in patients. A research direction gaining increasing attention involves the use of hybrid approaches based on natural language processing and computer vision techniques to create automated medical report generation systems. An automated report generator producing radiology reports would significantly reduce the burden on doctors and assist them in writing manual reports. Because the sensitivity of chest X-ray (CXR) findings provided by existing techniques is not adequately accurate, producing comprehensive explanations for medical images remains a difficult task. A novel approach was proposed to address this issue, based on the continuous integration of convolutional neural networks and long short-term memory networks for detecting diseases, followed by an attention mechanism for sequence generation based on those diseases. Experimental results obtained using the Indiana University CXR and MIMIC-CXR datasets showed that the proposed model attained state-of-the-art performance compared with baseline solutions. BLEU-1, BLEU-2, BLEU-3, and BLEU-4 were used as the evaluation metrics.
The global financial crisis of 2008, the bank bailouts that followed, and the associated corporate impunity sparked a renewed interest in the concept of the structural power of business and the question of “who rules?” in capitalist societies. This new wave of scholarship mitigated some of the problems of the original, theory-driven discussions from the 1970s and 1980s. But despite significant advances in the empirical identification of business power, we lack a unified framework for studying its working mechanisms. So-called hybrid approaches, which draw on both instrumental and structural power in their analyses, display high potential for such a unified and easily applicable framework. We build on this hybrid tradition and propose a novel model that integrates instrumental and structural power analysis into a basic framework. With this, we recalibrate the often rigid division between instrumental and structural forms of power and emphasize the role of perceptions as key to understanding the dynamics of business power over time. We illustrate this parsimonious framework with an analysis of the Dutch government's 2018 plan to abolish a dividend tax, which would have benefited a number of large multinationals but collapsed before implementation.
This paper comprehensively reviews the spiral dynamics optimization (SDO) algorithm and investigates its characteristics. The SDO algorithm is one of the most straightforward physics-based optimization algorithms and has been successfully applied in a broad range of fields. This paper describes recent advances in the SDO algorithm, including its adaptive, improved, and hybrid variants. The growth of the SDO algorithm and its application in various areas, theoretical analyses, and comparisons with its predecessors and other algorithms are also described in detail. A detailed description of different spiral paths, their characteristics, and the use of these spiral approaches in developing and improving other optimization algorithms is comprehensively presented. The review concludes by summarizing current work on the SDO algorithm, highlighting its shortcomings, and suggesting possible future research perspectives.
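As background to the review, the spiral update at the heart of SDO rotates each candidate solution about the current best point while contracting toward it, so the population traces a logarithmic spiral into the incumbent optimum. The following minimal 2-D sketch uses illustrative parameter values for the contraction rate `r` and rotation angle `theta`; it is not drawn from any specific variant discussed in the review:

```python
import numpy as np

def spiral_step(x, center, r=0.95, theta=np.pi / 4):
    """One 2-D spiral dynamics update: rotate point `x` about the
    current best solution `center` by angle theta while contracting
    toward it by factor r (0 < r < 1)."""
    R = r * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    return center + R @ (x - center)

# A candidate point spirals in toward the best-known solution:
# after k steps its distance to `best` has shrunk by a factor r**k.
x = np.array([4.0, 0.0])
best = np.array([0.0, 0.0])
for _ in range(50):
    x = spiral_step(x, best)
```

In the full algorithm, `best` is re-selected each iteration from the whole population, and the adaptive and hybrid variants surveyed in the review typically modify `r` and `theta` over time or combine this step with other search operators.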
Address matching continues to play a central role at various levels, through geocoding and data integration from different sources, with a view to promoting activities such as urban planning, location-based services, and the construction of databases like those used in census operations. However, the task of address matching continues to face several challenges, such as non-standard or incomplete address records or addresses written in more complex languages. In order to better understand how current limitations can be overcome, this paper conducted a systematic literature review focused on automated approaches to address matching and their evolution across time. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, resulting in a final set of 41 papers published between 2002 and 2021, the great majority after 2017, with Chinese authors leading the way. The main findings revealed a consistent move from more traditional approaches to deep learning methods based on semantics, encoder-decoder architectures, and attention mechanisms, as well as the very recent adoption of hybrid approaches making increased use of spatial constraints and entities. The adoption of evolutionary approaches and privacy-preserving methods stands out among the research gaps to address in future studies.
Manual inspection of infrastructure damage such as building cracks is difficult because of concerns about the objectivity and reliability of assessments and the high demands on time and cost. Inspection can be automated by using unmanned aerial vehicles (UAVs) to capture aerial imagery of the damage. Numerous computer vision-based approaches have been applied to crack detection, but their limitations can be overcome by using hybrid approaches based on artificial intelligence (AI) and machine learning (ML) techniques. Convolutional neural networks (CNNs), an application of the deep learning (DL) method, display remarkable potential for automatically detecting image features such as damage and are less sensitive to image noise. A modified deep hierarchical CNN architecture is used in this study for crack detection and damage assessment in civil infrastructure. The proposed architecture is based on 16 convolution layers and a cycle generative adversarial network (CycleGAN). For this study, crack images were collected using UAVs and from open-source images of mid- to high-rise buildings (five stories and above) constructed during 2000 in Sydney, Australia. Conventionally, a CNN utilizes only the last convolution layer; our proposed network, however, makes use of multiple layers. Another important component of the proposed CNN architecture is the application of guided filtering (GF) and conditional random fields (CRFs) to refine the predicted outputs and obtain reliable results. Benchmarking data (600 images) of damage to Sydney-based buildings were used to test the proposed architecture. The proposed deep hierarchical CNN architecture produced superior performance when evaluated using five methods: the GF method, the baseline (BN) method, Deep-Crack BN, Deep-Crack GF, and SegNet.
Overall, the GF method outperformed all other methods, as indicated by its global accuracy (0.990), class-average accuracy (0.939), mean intersection over union across all classes (IoU) (0.879), precision (0.838), recall (0.879), and F-score (0.8581) values. Overall, the proposed CNN architecture provides the advantages of reduced noise, highly integrated supervision of features, adequate learning, and aggregation of both multi-scale and multi-level features during training, along with refinement of the overall output predictions.
Purpose: Our research aims to design a hybrid approach to the calibration of an attribute impact vector that guarantees its completeness in cases where other approaches cannot. Design/methodology/approach: Real estate mass appraisal aims to value a large number of properties by means of a specialised algorithm, and various methods can be applied for this purpose. We present the Szczecin Algorithm of Real Estate Mass Appraisal (SAREMA) and four methods for calibrating an attribute impact vector, and we demonstrate their application on an example of 318 residential properties in Szczecin, Poland. Findings: We compare the results of appraisals obtained with the hybrid approach with those obtained using the three remaining approaches. If the database is complete and reliable, the econometric and statistical approaches can be recommended, because they are based on quantitative measures of the relationships between attribute values and properties' unit values. However, when the database is incomplete, the expert and, subsequently, hybrid approaches are used as supplements. Originality/value: The application of the hybrid approach ensures that the calibration system of an attribute impact vector is always complete, because it incorporates the expert approach, which can be used even when the database precludes approaches based on quantitative measures of the relationship between the unit real estate value and attribute values.