the cost - Recently Published Documents


TOTAL DOCUMENTS

57344
(FIVE YEARS 26016)

H-INDEX

193
(FIVE YEARS 44)

2022 ◽  
Vol 34 (5) ◽  
pp. 1-19
Author(s):  
Xiaohui Wu

In this paper, an Artificial Intelligence assisted rule-based confidence metric (AI-CRBM) framework is introduced for analyzing environmental governance expense prediction reform. A metric method assesses the level of collective environmental governance, representing general, government, and corporate aspects. An equilibrium approach is used to calculate improvements in the sources of environmental management based on cost, and it is tailored to test public sector-corporation shared environmental governance. The overall cost prediction, or estimation, for environmental governance is achieved by the rule-based confidence method. The framework compares the expected cost of environmental governance to determine the efficiency of the cost prediction process.


2022 ◽  
Vol 22 (3) ◽  
pp. 1-20
Author(s):  
Zhihan Lv ◽  
Ranran Lou ◽  
Haibin Lv

Nowadays, with the rapid development of intelligent technology, it is urgent to prevent infectious diseases effectively while protecting people's privacy. The present work constructs an intelligent infectious-disease prevention system based on edge computing, deploys and optimizes a security defense strategy for users' private information in the system, controls its cost, establishes the optimal conditions for the system's security defense, and finally analyzes the model's performance. The results show that system delay decreases as downlink power increases. In the analysis of the security of personal privacy information, six different nodes are found to maintain the optimal strategy at minimal cost in both the finite and infinite time domains. Compared with other classical algorithms in the communication field, the intelligent infectious-disease prevention system, when it adopts the best defense strategy, effectively reduces the computing-resource consumption of edge network equipment, and its prediction accuracy, at 83%, is clearly better than that of the other algorithms. Hence, the results demonstrate that the model ensures both security and forecast accuracy and achieves the best defense strategy at low cost, providing an experimental reference for the later prevention and detection of infectious diseases.


2022 ◽  
Vol 16 (2) ◽  
pp. 1-34
Author(s):  
Arpita Biswas ◽  
Gourab K. Patro ◽  
Niloy Ganguly ◽  
Krishna P. Gummadi ◽  
Abhijnan Chakraborty

Many online platforms today (such as Amazon, Netflix, Spotify, LinkedIn, and AirBnB) can be thought of as two-sided markets with producers and customers of goods and services. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results to the personalized preferences of individual customers. However, our investigation reinforces the fact that such customer-centric design of these services may lead to an unfair distribution of exposure among producers, which may adversely impact their well-being. A pure producer-centric design, on the other hand, would be unfair to the customers. As more and more people depend on such platforms to earn a living, it is important to ensure fairness to both producers and customers. In this work, by mapping a fair personalized recommendation problem to a constrained version of the problem of fairly allocating indivisible goods, we propose to provide fairness guarantees for both sides. Formally, our proposed FairRec algorithm guarantees Maxi-Min Share of exposure for the producers and Envy-Free up to One Item fairness for the customers. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring only a marginal loss in overall recommendation quality. Finally, we present a modification of FairRec (named FairRecPlus) that, at the cost of additional computation time, improves recommendation performance for the customers while maintaining the same fairness guarantees.
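The two-sided idea above can be illustrated with a minimal round-robin sketch: customers take turns receiving their best-scoring producer, subject to a per-producer exposure cap. This is only an illustration of the exposure-capping principle, not the authors' FairRec algorithm, and the relevance data is hypothetical.

```python
import math

def fair_rec(relevance, k, producers):
    """Round-robin sketch of two-sided fair recommendation.

    relevance: dict customer -> dict producer -> score (hypothetical data)
    k: number of recommendation slots per customer
    Caps each producer's exposure at ceil(total_slots / n_producers),
    a stand-in for the paper's maxi-min share guarantee.
    """
    customers = list(relevance)
    total_slots = k * len(customers)
    cap = math.ceil(total_slots / len(producers))
    exposure = {p: 0 for p in producers}
    rec = {c: [] for c in customers}
    for _ in range(k):                      # fill one slot per round
        for c in customers:                 # round-robin over customers
            # best still-available producer for this customer
            choices = sorted(producers,
                             key=lambda p: relevance[c].get(p, 0.0),
                             reverse=True)
            for p in choices:
                if exposure[p] < cap and p not in rec[c]:
                    rec[c].append(p)
                    exposure[p] += 1
                    break
    return rec
```

With two customers who both prefer the same producer, the cap forces the second customer onto the other producer, spreading exposure across both sides of the market.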


2022 ◽  
Vol 18 (2) ◽  
pp. 1-25
Author(s):  
Jing Li ◽  
Weifa Liang ◽  
Zichuan Xu ◽  
Xiaohua Jia ◽  
Wanlei Zhou

We are embracing an era of the Internet of Things (IoT). The latency caused by unstable wireless networks, a consequence of the limited resources of IoT devices, seriously impacts users' quality of service, particularly the service delay they experience. Mobile Edge Computing (MEC) technology provides promising solutions for delay-sensitive IoT applications, where cloudlets (edge servers) are co-located with wireless access points in the proximity of IoT devices. The service response latency for IoT applications can be significantly shortened because their data processing can be performed in a local MEC network. Meanwhile, most IoT applications impose Service Function Chain (SFC) enforcement on their data transmission, where each data packet from the source gateway of an IoT device to the destination (a cloudlet) of the IoT application must pass through each Virtual Network Function (VNF) in the SFC in an MEC network. However, little attention has been paid to such service provisioning for multi-source IoT applications in an MEC network with SFC enforcement. In this article, we study service provisioning in an MEC network for multi-source IoT applications with SFC requirements, aiming to minimize the cost of such provisioning, where each IoT application has multiple data streams from different sources to be uploaded to a location (cloudlet) in the MEC network for aggregation, processing, and storage. To this end, we first formulate two novel optimization problems: the cost minimization problem of service provisioning for a single multi-source IoT application, and the service provisioning problem for a set of multi-source IoT applications, respectively, and show that both problems are NP-hard.
Second, we propose a service provisioning framework in the MEC network for multi-source IoT applications that consists of uploading stream data from the multiple sources of an IoT application to the MEC network, aggregating and routing data streams through VNF instance placement and sharing, and balancing workload among cloudlets. Third, we devise an efficient algorithm for the cost minimization problem built upon the proposed service provisioning framework, and extend the solution to the service provisioning problem for a set of multi-source IoT applications. We finally evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising.
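The single-application cost structure can be sketched greedily: for one multi-source application, pick the unsaturated cloudlet that minimizes processing cost plus the routing cost of all its streams. This is an illustrative baseline under assumed cost tables, not the authors' algorithm, and all names (route_cost, proc_cost) are hypothetical.

```python
def choose_cloudlet(sources, cloudlets, route_cost, proc_cost, capacity, load):
    """Greedy destination choice for one multi-source IoT application.

    route_cost[(s, c)]: cost of uploading stream s to cloudlet c
    proc_cost[c]: per-application processing cost at cloudlet c
    capacity/load: simple saturation check per cloudlet
    Returns the cheapest feasible cloudlet and its total cost.
    """
    best, best_cost = None, float('inf')
    for c in cloudlets:
        if load[c] >= capacity[c]:          # skip saturated cloudlets
            continue
        cost = proc_cost[c] + sum(route_cost[(s, c)] for s in sources)
        if cost < best_cost:
            best, best_cost = c, cost
    return best, best_cost
```

The real problem is NP-hard because VNF placement, instance sharing, and workload balancing couple the choices across applications; this sketch ignores those interactions.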


2022 ◽  
Vol 43 (2) ◽  
Author(s):  
Arjan Trinks ◽  
Gbenga Ibikunle ◽  
Machiel Mulder ◽  
Bert Scholtens

2022 ◽  
Vol 27 (3) ◽  
pp. 1-26
Author(s):  
Skandha Deepsita S ◽  
Dhayala Kumar M ◽  
Noor Mahammad SK

Approximate hardware design can save substantial energy at the cost of errors incurred in the design. This article proposes an approximate algorithm for low-power compressors, used to build approximate multipliers with low energy and acceptable error profiles. It presents two design approaches (DA1 and DA2) for larger bit-width approximate multipliers. The proposed DA1 multiplier has no carry-signal propagation from LSB to MSB, resulting in a very high-speed design. Delay, power, and energy do not grow exponentially with multiplier size (n) for the DA1 multiplier. Most input combinations lie within an Error Distance threshold of 5% of the maximum value possible for a multiplier of size n. The proposed 4-bit DA1 multiplier consumes only 1.3 fJ of energy, which is 87.9%, 78%, 94%, 67.5%, and 58.9% less than the M1, M2, LxA, MxA, and accurate designs, respectively. The DA2 approach is recursive: an n-bit multiplier is built from n/2-bit sub-multipliers. The proposed 8-bit multiplication achieves 92% energy savings with a Mean Relative Error Distance (MRED) of 0.3 for the DA1 approach, and at least 11% to 40% energy savings with an MRED of 0.08 for the DA2 approach. The proposed multipliers are employed in the DCT image-processing algorithm and the output quality is evaluated: the standard PSNR metric is 55 dB for light approximation and 35 dB for maximum approximation.
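The MRED metric used above can be made concrete with a toy approximate multiplier. The sketch below drops the low columns of the partial-product matrix, a simple truncation-style approximation standing in for the paper's DA1/DA2 compressor circuits (which it does not reproduce), and then measures MRED exhaustively over all 4-bit operands.

```python
def approx_mul(a, b, n=4, trunc=2):
    """Truncation-style approximate n-bit multiplier sketch.

    Drops the 'trunc' least-significant columns of every partial
    product before accumulation (illustrative only; not the DA1/DA2
    designs from the article).
    """
    mask = ~((1 << trunc) - 1)                 # zero the low columns
    total = 0
    for i in range(n):                         # one partial product per bit of b
        if (b >> i) & 1:
            total += (a << i) & mask
    return total

def mred(n=4, trunc=2):
    """Mean Relative Error Distance over all nonzero operand pairs."""
    errs = []
    for a in range(1, 2 ** n):
        for b in range(1, 2 ** n):
            exact = a * b
            errs.append(abs(exact - approx_mul(a, b, n, trunc)) / exact)
    return sum(errs) / len(errs)
```

For example, 3 x 3 loses both low-column partial-product bits and returns 4 instead of 9, while 4 x 4 is exact because all its partial-product bits sit above the truncated columns.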


2022 ◽  
Vol 30 (7) ◽  
pp. 1-13
Author(s):  
Jin Qiu

BACKGROUND: With the gradual improvement of the market economy, people's consumption level is constantly rising and quality requirements are becoming higher. OBJECTIVES: To study a management accounting information analysis platform based on Artificial Intelligence (AI) and realize the goal of accounting computerization, AI expert-system techniques are applied to the field of accounting information analysis. METHODS: A combination of subsystems is applied to the construction of an AI accounting information Web system, and a feasibility analysis of its theory and technology is carried out. RESULTS: The effect is evident: the flow of all information is accelerated and the enterprise management mode changes. Moreover, compared with the traditional system algorithm, the accuracy of the system model is improved by 6% and the time delay is reduced by 9 ms, further improving the overall management level of the enterprise, expanding the scope of enterprise competition, and reducing enterprise costs.


2022 ◽  
Vol 41 (1) ◽  
pp. 1-10
Author(s):  
Jonas Zehnder ◽  
Stelian Coros ◽  
Bernhard Thomaszewski

We present a sparse Gauss-Newton solver for accelerated sensitivity analysis with applications to a wide range of equilibrium-constrained optimization problems. Dense Gauss-Newton solvers have shown promising convergence rates for inverse problems, but the cost of assembling and factorizing the associated matrices has so far been a major stumbling block. In this work, we show how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently. This leads to drastically reduced computation times for many inverse problems, which we demonstrate on a diverse set of examples. We furthermore show links between sensitivity analysis and nonlinear programming approaches based on Lagrange multipliers and prove equivalence under specific assumptions that apply for our problem setting.
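For reference, the classic Gauss-Newton iteration that the paper accelerates solves the normal equations (J^T J) d = J^T r at each step. The minimal dense sketch below fits y = a*exp(b*x) with a hand-solved 2x2 system; it illustrates the baseline iteration only and does not include the authors' sparse transformation of the Gauss-Newton Hessian.

```python
import math

def gauss_newton_exp(xs, ys, a, b, iters=50):
    """Plain dense Gauss-Newton for fitting y = a * exp(b * x).

    Accumulates J^T J and J^T r over the residuals, then solves the
    2x2 normal equations by Cramer's rule.
    """
    for _ in range(iters):
        g11 = g12 = g22 = rhs1 = rhs2 = 0.0
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            r = y - a * e                  # residual
            ja, jb = e, a * x * e          # d(model)/da, d(model)/db
            g11 += ja * ja; g12 += ja * jb; g22 += jb * jb
            rhs1 += ja * r; rhs2 += jb * r
        det = g11 * g22 - g12 * g12        # solve (J^T J) d = J^T r
        if abs(det) < 1e-12:
            break
        a += (g22 * rhs1 - g12 * rhs2) / det
        b += (g11 * rhs2 - g12 * rhs1) / det
    return a, b
```

Assembling and factorizing J^T J is exactly the step that becomes the bottleneck when J is large and dense, which motivates the sparse reformulation the paper contributes.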


2022 ◽  
Vol 22 (1) ◽  
pp. 1-25
Author(s):  
Ryan Dailey ◽  
Aniesh Chawla ◽  
Andrew Liu ◽  
Sripath Mishra ◽  
Ling Zhang ◽  
...  

Reduction in the cost of Network Cameras along with a rise in connectivity enables entities all around the world to deploy vast arrays of camera networks. Network cameras offer real-time visual data that can be used for studying traffic patterns, emergency response, security, and other applications. Although many sources of Network Camera data are available, collecting the data remains difficult due to variations in programming interface and website structures. Previous solutions rely on manually parsing the target website, taking many hours to complete. We create a general and automated solution for aggregating Network Camera data spread across thousands of uniquely structured web pages. We analyze heterogeneous web page structures and identify common characteristics among 73 sample Network Camera websites (each website has multiple web pages). These characteristics are then used to build an automated camera discovery module that crawls and aggregates Network Camera data. Our system successfully extracts 57,364 Network Cameras from 237,257 unique web pages.
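One common characteristic such a discovery module can exploit is that camera pages embed stills or streams via tags whose URLs contain recognizable tokens. The stdlib sketch below illustrates that single pattern; the token list is hypothetical and is not the characteristic set derived from the study's 73 sample websites.

```python
from html.parser import HTMLParser

class CameraLinkFinder(HTMLParser):
    """Collects URLs from media tags whose src looks camera-like.

    TOKENS is an illustrative guess at camera-URL substrings, not the
    paper's learned characteristics.
    """

    TOKENS = ('cam', 'snapshot', 'mjpg', 'stream')

    def __init__(self):
        super().__init__()
        self.camera_urls = []

    def handle_starttag(self, tag, attrs):
        if tag in ('img', 'source', 'iframe'):
            src = dict(attrs).get('src') or ''
            if any(t in src.lower() for t in self.TOKENS):
                self.camera_urls.append(src)
```

Feeding a page's HTML through the parser and reading camera_urls gives the candidate camera endpoints; a real system would then verify that each URL actually serves refreshing image data.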


2022 ◽  
Vol 54 (9) ◽  
pp. 1-40
Author(s):  
Pengzhen Ren ◽  
Yun Xiao ◽  
Xiaojun Chang ◽  
Po-Yao Huang ◽  
Zhihui Li ◽  
...  

Active learning (AL) attempts to maximize a model's performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large data supply to optimize a massive number of parameters if the model is to learn how to extract high-quality features. In recent years, due to the rapid development of internet technology, we have entered an era of information abundance characterized by massive amounts of available data. As a result, DL has attracted significant attention from researchers and has developed rapidly. Compared with DL, however, researchers have shown relatively little interest in AL. This is mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, meaning that early AL was rarely accorded the value it deserves. Although DL has made breakthroughs in various fields, most of this success is due to a large number of publicly available annotated datasets. However, the acquisition of large, high-quality annotated datasets consumes considerable manpower, making it unfeasible in fields that require high levels of expertise (such as speech recognition, information extraction, and medical imaging). Therefore, AL is gradually receiving the attention it is due, and it is natural to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result of such investigations, deep active learning (DeepAL) has emerged. Although research on this topic is quite abundant, there has not yet been a comprehensive survey of DeepAL-related works; accordingly, this article aims to fill this gap. We provide a formal classification method for the existing work, along with a comprehensive and systematic overview. In addition, we also analyze and summarize the development of DeepAL from an application perspective.
Finally, we discuss the confusion and problems associated with DeepAL and provide some possible development directions.
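The core DeepAL query step can be sketched with one classic strategy, max-entropy uncertainty sampling: spend the annotation budget on the pool items the model is least certain about. This is just one of the many strategies such a survey covers, shown here for concreteness.

```python
import math

def entropy(probs):
    """Shannon entropy of one predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(unlabeled_probs, k):
    """Pick the k pool items with the most uncertain predictions.

    unlabeled_probs: dict item_id -> predicted class distribution
    Returns the k item ids to send for annotation.
    """
    ranked = sorted(unlabeled_probs,
                    key=lambda i: entropy(unlabeled_probs[i]),
                    reverse=True)
    return ranked[:k]
```

In a full loop, the model is retrained on the newly labeled items and the selection repeats until the annotation budget is exhausted; deep-specific variants add batch diversity so the k chosen items are not near-duplicates.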

