Becoming a Digital Dictatorship

2020 ◽  
pp. 163-178
Author(s):  
Jennifer Pan

The conclusion considers how China's pursuit of political order through preemptive control changes in a digital context of rapidly growing data, computing power, and advances in machine learning (e.g., deep learning) and artificial intelligence ("AI"). Digital advances help the Chinese government collect more information about the entire population and to do so in ways that are less detectable. However, new digital technologies do not alter China's goal of preemptive control or the predictive surveillance that underpins this goal. Digital technologies will likely enable the government to identify more potential threats, but because they will not eliminate error altogether, and because there is always a tradeoff between precision and recall in machine classification systems, the dramatic expansion of available information may increase the number of people trapped in programs of preemptive control.
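
The precision-recall tradeoff the abstract invokes can be made concrete with a small sketch; the scores, labels, and thresholds below are hypothetical, not drawn from the book:

```python
# Illustrative sketch of the precision-recall tradeoff in a
# threat-flagging classifier. Scores and labels are invented.
from sklearn.metrics import precision_score, recall_score

# Model scores for 10 individuals and ground-truth labels
# (1 = actual threat, 0 = not a threat).
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    1,    0,    0,    0]

for threshold in (0.75, 0.35):
    flagged = [1 if s >= threshold else 0 for s in scores]
    p = precision_score(labels, flagged)
    r = recall_score(labels, flagged)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")

# Lowering the threshold flags more people: recall rises (fewer
# missed threats) but precision falls (more innocents swept in),
# which is the mechanism behind the abstract's closing claim.
```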

2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

Advances in deep learning algorithms, exponential growth in computing power, and the unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial portion dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.
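
As a rough illustration of the "neural network" step, here is a minimal sketch assuming PyTorch; the architecture, image size, and labels are illustrative toys, not a recommended clinical model:

```python
# Minimal sketch of step 5 ("neural network"): a toy convolutional
# classifier for grayscale scans. The architecture, 64x64 input size,
# and binary labels are assumptions for illustration only.
import torch
import torch.nn as nn

class TinyRadiologyNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One training step on a dummy batch; step 6 ("validation") would
# evaluate on held-out data instead.
model = TinyRadiologyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 1, 64, 64)   # batch of 8 fake 64x64 scans
labels = torch.randint(0, 2, (8,))   # fake binary labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```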


2018 ◽  
Vol 15 (1) ◽  
pp. 6-28 ◽  
Author(s):  
Javier Pérez-Sianes ◽  
Horacio Pérez-Sánchez ◽  
Fernando Díaz

Background: Automated compound testing is currently the de facto standard method for drug screening, but it has not brought the great increase in the number of new drugs that was expected. Computer-aided compound search, known as Virtual Screening, has shown its benefits to this field as a complement or even an alternative to robotic drug discovery. There are different methods and approaches to address this problem, and most of them fall under one of the main screening strategies. Machine learning, however, has established itself as a virtual screening methodology in its own right, and it may grow in popularity with the new trends in artificial intelligence. Objective: This paper attempts to provide a comprehensive and structured review that collects the most important proposals made so far in this area of research. Particular attention is given to recent developments in the machine learning field: the deep learning approach, which is singled out as a future key player in the virtual screening landscape.
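
A minimal sketch of what a machine learning virtual screen looks like in practice, assuming scikit-learn; the fingerprint vectors and activity labels are randomly generated stand-ins for real molecular descriptors (e.g., Morgan fingerprints from a cheminformatics toolkit):

```python
# Hedged sketch of ML-based virtual screening: a random forest scores
# compounds as active/inactive from binary fingerprint vectors, then
# ranks held-out compounds by predicted activity. All data synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2048))  # 1000 compounds, 2048-bit fingerprints
y = rng.integers(0, 2, size=1000)          # 1 = active, 0 = inactive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Ranking candidates by predicted probability of activity is the core
# output of a virtual screen: the top of the list goes to the bench.
scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
print("top 5 candidates:", np.argsort(scores)[::-1][:5])
```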


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term artificial intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements of 1931, is now commonly equated with deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work has been developed in traditional machine learning and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigated different real-world applications that have shown biases in various ways, and we listed different sources of bias that can affect AI applications. We then created a taxonomy of fairness definitions that machine learning researchers have devised to avoid the existing bias in AI systems. In addition, we examined different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
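
As a concrete instance of one fairness definition such a taxonomy covers, the following sketch computes demographic (statistical) parity on hypothetical decisions:

```python
# Sketch of demographic (statistical) parity: the positive-prediction
# rate should be equal across protected groups. Decisions and group
# memberships below are hypothetical.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # model decisions
group  = np.array(["a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b"])        # protected attribute

rate_a = y_pred[group == "a"].mean()  # P(Y_hat = 1 | A = a)
rate_b = y_pred[group == "b"].mean()  # P(Y_hat = 1 | A = b)
print(f"positive rate, group a: {rate_a:.2f}")
print(f"positive rate, group b: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")

# A nonzero gap signals unfairness under this definition; other
# definitions (e.g., equalized odds) also condition on the true label
# and can conflict with parity, which is why a taxonomy is needed.
```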


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background: The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose: To review the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods: Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results: The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. For the direct comparison between radiology professionals and deep learning algorithms, the pooled specificity and sensitivity were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.001), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion: Radiomic information extracted through machine learning programs from images may not be discernible through visual examination and thus may improve the prognostic and diagnostic value of data sets.
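
The sensitivity and specificity figures above derive from 2x2 contingency tables; a minimal sketch of those quantities and a naive pooling follows (the counts are invented, and a real meta-analysis would typically use formal random-effects models rather than simple summation):

```python
# Sketch of the meta-analytic quantities in the abstract: sensitivity
# and specificity from 2x2 contingency tables, naively pooled across
# studies. All counts below are hypothetical.
def sens_spec(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# One (TP, FN, FP, TN) table per study, as in the abstract's methods.
studies = [(90, 10, 15, 85), (45, 5, 8, 42), (70, 12, 9, 80)]

tp, fn, fp, tn = (sum(col) for col in zip(*studies))
sens, spec = sens_spec(tp, fn, fp, tn)
print(f"pooled sensitivity: {sens:.2%}, pooled specificity: {spec:.2%}")
```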


2021 ◽  
Vol 52 (1) ◽  
pp. 159-181
Author(s):  
Arne Pilniok

The digital transformation is permanently changing government, administration, and society. This process is being intensified by the much-discussed technologies of artificial intelligence and poses a variety of challenges for parliaments and, indirectly, for parliamentary studies. Their different dimensions have not been discussed comprehensively so far, although the technological developments affect all parliamentary functions and their premises. This article systematizes and structures the various effects of the age of artificial intelligence on parliamentary democracy. Namely, the conditions of democratic representation change, the innovation-friendly regulation of digital technologies becomes a parliamentary task, parliamentary control has to be adjusted to the use of algorithms and artificial intelligence in government and administration, and possibly the epistemological and organizational structures of parliamentary work might have to be adapted. This provides starting points for future detailed analyses to adequately capture these processes of change and to accompany them from different disciplinary perspectives.


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi139-vi139
Author(s):  
Jan Lost ◽  
Tej Verma ◽  
Niklas Tillmanns ◽  
W R Brim ◽  
Harry Subramanian ◽  
...  

Abstract PURPOSE: Identifying molecular subtypes in gliomas has prognostic and therapeutic value, traditionally determined after invasive neurosurgical tumor resection or biopsy. Recent advances in artificial intelligence (AI) show promise for predicting molecular subtype from pre-therapy imaging. We performed a systematic review of recent literature on AI methods used to predict molecular subtypes of gliomas. METHODS: A literature review conforming to PRISMA guidelines was performed for publications prior to February 2021 using 4 databases: Ovid Embase, Ovid MEDLINE, Cochrane trials (CENTRAL), and Web of Science core collection. Keywords included: artificial intelligence, machine learning, deep learning, radiomics, magnetic resonance imaging, glioma, and glioblastoma. Non-machine-learning and non-human studies were excluded. Screening was performed using Covidence software. Bias analysis was done using TRIPOD guidelines. RESULTS: 11,727 abstracts were retrieved. After applying the initial screening exclusion criteria, 1,135 full-text reviews were performed, with 82 papers remaining for data extraction. 57% used retrospective single-center hospital data, 31.6% used TCIA and BRATS, and 11.4% analyzed multicenter hospital data. An average of 146 patients (range 34-462) were included. Algorithms predicting IDH status comprised 51.8% of studies, MGMT 18.1%, and 1p19q 6.0%. Machine learning methods were used in 71.4%, deep learning in 27.4%, and 1.2% directly compared both methods. The most common machine learning algorithm was the support vector machine (43.3%), and the most common deep learning algorithm was the convolutional neural network (68.4%). Mean prediction accuracy was 76.6%. CONCLUSION: Machine learning is the predominant method for image-based prediction of glioma molecular subtypes. Major limitations include small datasets (60.2% with under 150 patients) and thus limited generalizability of findings. We recommend using larger annotated datasets for AI network training and testing in order to create more robust AI algorithms, which will provide better prediction accuracy on real-world clinical datasets and yield tools that can be translated to clinical practice.
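
A minimal sketch of the pipeline the review finds most common, a support vector machine over radiomic features, assuming scikit-learn; the features and labels are synthetic stand-ins for values extracted from MRI:

```python
# Hedged sketch of an SVM predicting a binary molecular subtype
# (e.g., IDH mutation status) from radiomic features. The feature
# matrix and labels are synthetic; cohort size echoes the review's
# reported mean of 146 patients.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(146, 100))   # 146 patients, 100 radiomic features
y = rng.integers(0, 2, size=146)  # 1 = IDH-mutant, 0 = IDH-wildtype

# Feature scaling matters for SVMs; cross-validation guards against
# the small-dataset overfitting the review flags as a key limitation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```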


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Qian Huang ◽  
Xue Wen Li

Big data is a massive and diverse form of unstructured data, which requires proper analysis and management. It is another great technological revolution after the Internet, the Internet of Things, and cloud computing. This paper first studies the related concepts and basic theories as the starting point of the research. Second, it analyzes in depth the problems and challenges faced by Chinese government management under the impact of big data. Third, we explore the opportunities that big data brings to government management in terms of management efficiency, administrative capacity, and public services, and argue that governments should seize these opportunities to make changes. Brain-like computing attempts to simulate the structure and information-processing process of biological neural networks. This paper analyzes the development status of e-government at home and abroad, studies service-oriented architecture (SOA) and web services technology, examines e-government and SOA theory in depth, and discusses these in light of the development status of e-government in a certain region. A deep learning algorithm is then used to construct a platform that monitors government behavior in real time, and deep learning is used for in-depth mining to analyze the government's behavioral intent.


2021 ◽  
Vol 5 (1) ◽  
pp. 1-15
Author(s):  
Rubina Shaheen ◽  
Mir Kasi

The report presents the use of artificial intelligence in a few administrative agencies. An in-depth thematic analysis of selected institutions was conducted to review current trends. In the thematic analysis, 12 institutions were selected, and the details of their use of artificial intelligence across different departments are described. These analyses yielded five major findings. First, the government has a wide application of the artificial intelligence toolkit traversing federal and state administration. Almost half of the federal agencies evaluated (45%) have used AI and associated machine learning (ML) tools. Also, AI tools are already enhancing agency strategies across the full span of governance responsibilities, such as carrying out regulatory assignments bearing on market efficiency, workplace safety, health care, and environmental protection; protecting government privileges and benefits ranging from intellectual property to disability; accessing, verifying, and analyzing risks to public safety and health; extracting essential data from the government's data streams, including consumer complaints; and communicating with citizens about their rights, welfare, asylum claims, and business ownership. The government's AI toolkit spans the complete scope of artificial intelligence techniques, ranging from conventional machine learning to deep learning on natural language and image data. Irrespective of the broad acceptance of AI, much still has to be done in this area by the government. Recommendations are also discussed at the end.

