Artificial Bee Colony-Based Approach for Privacy Preservation of Medical Data

Author(s):  
Shivlal Mewada ◽  
Sita Sharan Gautam ◽  
Pradeep Sharma

A large amount of data is generated by healthcare applications and medical equipment. These data are transferred from one piece of equipment to another and are sometimes also communicated over a global network. Hence, security and privacy preservation are major concerns in the healthcare sector. Traditional anonymization algorithms are viable for the sanitization process, but not for the restoration task. In this work, an artificial bee colony (ABC)-based privacy-preserving model is developed to address these issues. In the proposed model, an ABC-based algorithm is adopted to generate the optimal key for the sanitization of sensitive information. The effectiveness of the proposed model is tested through restoration analysis. Furthermore, several popular attacks are also considered when evaluating the performance of the proposed privacy-preserving model. Simulation results of the proposed model are compared with those of popular existing privacy-preserving models. The results show that the proposed model preserves sensitive information efficiently.
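The abstract does not specify the key encoding or fitness function, so the following is only a minimal sketch of the employed/onlooker/scout loop that drives an ABC optimizer; the candidate "keys" are real vectors in [0, 1] and the fitness is a toy stand-in for a real sanitization/restoration quality measure.

```python
import random

def abc_optimize(fitness, dim, n_bees=20, limit=10, iters=100):
    """Minimal artificial bee colony loop (illustrative, not the paper's exact variant)."""
    # Initialise food sources (candidate keys) uniformly in [0, 1]^dim.
    sources = [[random.random() for _ in range(dim)] for _ in range(n_bees)]
    trials = [0] * n_bees
    best = max(sources, key=fitness)

    for _ in range(iters):
        # Employed/onlooker phases: perturb one dimension toward a random neighbour.
        for i in range(n_bees):
            k = random.randrange(n_bees)
            j = random.randrange(dim)
            cand = sources[i][:]
            cand[j] += random.uniform(-1, 1) * (sources[i][j] - sources[k][j])
            cand[j] = min(1.0, max(0.0, cand[j]))
            if fitness(cand) > fitness(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout phase: abandon stagnant sources and re-seed them randomly.
        for i in range(n_bees):
            if trials[i] > limit:
                sources[i] = [random.random() for _ in range(dim)]
                trials[i] = 0
        best = max(sources + [best], key=fitness)
    return best

# Toy fitness: prefer keys whose mean is close to 0.5 (a stand-in for a real
# sanitization-quality measure, which the abstract does not give).
key = abc_optimize(lambda v: -abs(sum(v) / len(v) - 0.5), dim=8)
```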


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 73
Author(s):  
Kaixiang Zhu ◽  
Lily D. Li ◽  
Michael Li

Although educational timetabling problems have been studied for decades, one instance of this, the school timetabling problem (STP), has not developed as quickly as the examination and course timetabling problems due to its diversity and complexity. In addition, most STP research has focused only on educators' availabilities when studying the educator aspect; educators' preferences and expertise have not been taken into consideration. To fill this gap, this paper proposes a conceptual model for the school timetabling problem that considers educators' availabilities, preferences, and expertise as a whole. Based on a common real-world school timetabling scenario, the artificial bee colony (ABC) algorithm is adapted to this study, as research shows its applicability to examination and course timetabling problems. A virtual search space is introduced to the proposed model to deal with the large search space. The proposed approach is simulated with a large, randomly generated dataset. The experimental results demonstrate that the proposed approach is able to solve the STP and handle a large dataset in an ordinary computing hardware environment, which significantly reduces computational costs. Compared to the traditional constraint programming method, the proposed approach is more effective and provides more satisfactory solutions by considering educators' availabilities, preferences, and expertise levels.
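As a rough illustration of how availability, preference, and expertise could enter a single fitness function for the ABC search, here is a hedged sketch; the data layout, weights, and function names are assumptions, not the paper's formulation.

```python
def timetable_fitness(assignment, availability, preference, expertise,
                      w_pref=1.0, w_exp=2.0):
    """Score a timetable: availability is a hard constraint; preference and
    expertise are weighted soft terms (the weights here are assumed)."""
    score = 0.0
    for educator, slot, subject in assignment:
        if slot not in availability[educator]:
            return float("-inf")  # hard violation: educator not available
        score += w_pref * preference[educator].get(slot, 0.0)
        score += w_exp * expertise[educator].get(subject, 0.0)
    return score

availability = {"A": {1, 2}, "B": {2}}
preference = {"A": {1: 1.0}, "B": {2: 0.5}}           # liked time slots
expertise = {"A": {"math": 0.9}, "B": {"math": 0.4}}  # subject skill levels
good = [("A", 1, "math"), ("B", 2, "math")]
bad = [("A", 3, "math")]  # slot 3 is outside A's availability
```

A metaheuristic such as ABC would then search over candidate assignments, maximizing this score.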


2020 ◽  
Vol 144 ◽  
pp. 113097
Author(s):  
Akbar Telikani ◽  
Amir H. Gandomi ◽  
Asadollah Shahbahrami ◽  
Mohammad Naderi Dehkordi

2018 ◽  
Vol 1 (4) ◽  
pp. e32 ◽  
Author(s):  
Chandramohan Dhasarathan ◽  
Rajaguru Dayalan ◽  
Vengattaram Thirumal ◽  
Dhavachelvan Ponnurangam

2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Wei Chen

Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem, which takes transaction costs and the diversification degree of the portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return measures investment return, the uncertain variance of the return measures investment risk, and the entropy measures the diversification degree of the portfolio. To solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm.
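One way such a mean-variance-entropy adjusting model is commonly written is sketched below; the symbols, the transaction-cost term, and which pieces appear as objective versus constraints are assumptions, since the abstract does not spell them out.

```latex
% x_i: adjusted weight of security i, x_i^0: current weight,
% \xi_i: uncertain return, c_i: transaction-cost rate (all notation assumed)
\max_{x}\quad E\Big[\sum_{i=1}^{n}\xi_i x_i\Big]
             -\sum_{i=1}^{n}c_i\,\lvert x_i-x_i^{0}\rvert
\qquad\text{s.t.}\qquad
V\Big[\sum_{i=1}^{n}\xi_i x_i\Big]\le\sigma^{2},\quad
-\sum_{i=1}^{n}x_i\ln x_i\ \ge\ \delta,\quad
\sum_{i=1}^{n}x_i=1,\quad x_i\ge 0
```

Here the entropy constraint enforces a minimum diversification level $\delta$, and the variance constraint caps risk at $\sigma^{2}$.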


2014 ◽  
Vol 14 (1) ◽  
pp. 52-71 ◽  
Author(s):  
A. Geetha Mary ◽  
D. P. Acharjya ◽  
N. Ch. S. N. Iyengar

Abstract In the present age of the Internet, data are accumulated at a dramatic pace. The accumulated huge data have no relevance unless they provide useful information pertaining to the interests of the organization. The real challenge, however, lies in hiding sensitive information in order to preserve privacy. Therefore, attribute reduction becomes an important aspect of handling such a huge database: eliminating superfluous or redundant data enables sensitive rules to be hidden efficiently before the data are disclosed to the public. In this paper we propose a privacy-preserving model to hide sensitive fuzzy association rules. Our model uses two processes, a pre-process and a post-process, to mine fuzzified association rules and to hide sensitive rules. Experimental results demonstrate the viability of the proposed research.
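For context, fuzzified association rules replace crisp item counts with membership degrees. A minimal sketch of fuzzy support over such transactions follows; the triangular membership function and item names are illustrative, not the paper's.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: a = left foot, b = peak, c = right foot."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_support(transactions, items):
    """Fuzzy support: mean over transactions of the min membership in the itemset."""
    total = sum(min(t.get(i, 0.0) for i in items) for t in transactions)
    return total / len(transactions)

# Each transaction stores fuzzified attribute degrees (hypothetical medical items).
txns = [{"high_sugar": 0.8, "high_bp": 0.6},
        {"high_sugar": 0.4, "high_bp": 0.9},
        {"high_sugar": 0.0, "high_bp": 0.7}]
s = fuzzy_support(txns, ["high_sugar", "high_bp"])  # (0.6 + 0.4 + 0.0) / 3
```

A rule is then "hidden" by perturbing the degrees of its items until its fuzzy support (or confidence) falls below the mining threshold.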


2019 ◽  
Vol 8 (2) ◽  
pp. 3813-3817

Our research is highly topical, as it relates to the privacy issues that grow day by day with the rise of social networking in society. In this paper we address the problem of disclosing an individual's information. The methodology relies mainly on K-anonymity with the idea of minimal generalization, which ensures that the release strategy does not distort the data more than is needed to achieve K-anonymity. We also discuss in detail how big data techniques help to protect an enormous volume of information without changing the structure of the original data. Data stored in different forms are anonymized efficiently, without affecting data integrity, by using K-anonymity and the artificial bee colony (ABC) algorithm, respectively. We further focus on security and privacy strategies and on avoiding intrusion. The ABC algorithm is used to optimize over the large dataset. We present graphically how much the proposed system improves privacy over existing approaches. Here K-anonymity and the ABC algorithm are combined, a pairing that has not previously been handled in a parallel implementation.
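A minimal sketch of the K-anonymity check and one generalization step on a toy medical table follows; the attribute names and the decade-width age generalization are illustrative assumptions.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def generalize_age(record, width=10):
    """One generalization step: replace an exact age with a decade range."""
    lo = (record["age"] // width) * width
    return {**record, "age": f"{lo}-{lo + width - 1}"}

records = [
    {"age": 31, "zip": "46001", "disease": "flu"},
    {"age": 34, "zip": "46001", "disease": "cold"},
    {"age": 38, "zip": "46001", "disease": "flu"},
]
generalized = [generalize_age(r) for r in records]  # all ages become "30-39"
```

Minimal generalization means applying as few such steps as possible until `is_k_anonymous` holds; a metaheuristic like ABC can search over which attributes to generalize and how far.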


Author(s):  
Kauser Ahmed P. ◽  
Debi Prasanna Acharjya

Vast volumes of raw data are generated from the digital world each day. Acquiring useful information and chief features from these data is challenging and has become a prime area of current research. Another crucial area is knowledge inferencing. Much research has been carried out in both directions. Swarm intelligence is used for feature selection, whereas for knowledge inferencing either fuzzy or rough computing is widely used. Hybridization of swarm intelligence with other intelligent techniques has been booming recently. In this research work, the authors hybridize the artificial bee colony and rough sets. In the initial phase, they employ an artificial bee colony to find the chief features. These main features are then analyzed using rough-set-generated rules. The proposed model indeed helps to diagnose a disease carefully. An empirical analysis is carried out on the hepatitis dataset. In addition, a comparative study is also presented. The analysis shows the viability of the proposed model.
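A hedged sketch of ABC-style binary feature selection, with a toy separable fitness standing in for the rough-set-based evaluation used in the paper:

```python
import random

def abc_feature_select(n_features, fitness, n_bees=10, iters=50):
    """ABC-style binary feature selection (illustrative sketch, not the paper's
    exact variant). Each food source is a bit mask; 1 keeps the feature."""
    sources = [[random.randint(0, 1) for _ in range(n_features)]
               for _ in range(n_bees)]
    for _ in range(iters):
        for i in range(n_bees):
            cand = sources[i][:]
            cand[random.randrange(n_features)] ^= 1  # flip one bit: a discrete "neighbour"
            if fitness(cand) > fitness(sources[i]):  # greedy acceptance
                sources[i] = cand
    return max(sources, key=fitness)

# Toy fitness: features 0 and 3 are "informative"; every kept feature costs 1.
# In the paper's setting this role would be played by a rough-set dependency measure.
def toy_fitness(mask):
    return 5 * mask[0] + 5 * mask[3] - sum(mask)

best = abc_feature_select(6, toy_fitness)
```

The selected subset would then feed a rough-set rule generator for diagnosis.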


Author(s):  
Alexandre Evfimievski ◽  
Tyrone Grandison

Privacy-preserving data mining (PPDM) refers to the area of data mining that seeks to safeguard sensitive information from unsolicited or unsanctioned disclosure. Most traditional data mining techniques analyze and model the data set statistically, in aggregated form, while privacy preservation is primarily concerned with protecting against the disclosure of individual data records. This domain separation points to the technical feasibility of PPDM. Historically, issues related to PPDM were first studied by national statistical agencies interested in collecting private social and economic data, such as census and tax records, and making them available for analysis by public servants, companies, and researchers. Building accurate socioeconomic models is vital for business planning and public policy. Yet there is no way of knowing in advance what models may be needed, nor is it feasible for the statistical agency to perform all data processing for everyone, playing the role of a trusted third party. Instead, the agency provides the data in a sanitized form that allows statistical processing while protecting the privacy of individual records, solving a problem known as privacy-preserving data publishing. For a survey of work in statistical databases, see Adam and Wortmann (1989) and Willenborg and de Waal (2001).

