privacy invasion
Recently Published Documents

TOTAL DOCUMENTS: 68 (FIVE YEARS: 24)
H-INDEX: 9 (FIVE YEARS: 2)

2021 ◽  
Vol 24 (4) ◽  
pp. 1-34
Author(s):  
Simon Birnbach ◽  
Richard Baker ◽  
Simon Eberz ◽  
Ivan Martinovic

Drones are becoming increasingly popular for hobbyists and recreational use. But with this surge in popularity comes increased risk to privacy as the technology makes it easy to spy on people in otherwise-private environments, such as an individual’s home. An attacker can fly a drone over fences and walls to observe the inside of a house, without having physical access. Existing drone detection systems require specialist hardware and expensive deployment efforts, making them inaccessible to the general public. In this work, we present a drone detection system that requires minimal prior configuration and uses inexpensive commercial off-the-shelf hardware to detect drones that are carrying out privacy invasion attacks. We use a model of the attack structure to derive statistical metrics for movement and proximity that are then applied to received communications between a drone and its controller. We test our system in real-world experiments with two popular consumer drone models mounting privacy invasion attacks using a range of flight patterns. We are able both to detect the presence of a drone and to identify which phase of the privacy attack was in progress while being resistant to false positives from other mobile transmitters. For line-of-sight approaches using our kurtosis-based method, we are able to detect all drones at a distance of 6 m, with the majority of approaches detected at 25 m or farther from the target window without suffering false positives for stationary or mobile non-drone transmitters.
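The kurtosis-based metric at the heart of this detection approach can be illustrated with a short sketch. This is not the authors' implementation: the RSSI trace and the decision threshold below are hypothetical, and a real deployment would operate on live captures of drone-to-controller traffic.

```python
# Sketch of a kurtosis-based movement metric over RSSI samples.
# Illustrative only: the trace and threshold are hypothetical, not from the paper.
from statistics import mean

def kurtosis(samples):
    """Excess kurtosis (fourth standardized moment minus 3) of a sample list."""
    n = len(samples)
    mu = mean(samples)
    m2 = sum((x - mu) ** 2 for x in samples) / n
    m4 = sum((x - mu) ** 4 for x in samples) / n
    return m4 / (m2 ** 2) - 3.0

# Hypothetical RSSI trace (dBm) of packets between a transmitter and its
# controller; the two strong readings mimic a close pass by the receiver.
rssi = [-62, -61, -63, -60, -40, -41, -62, -61, -63, -62]

k = kurtosis(rssi)
print(f"excess kurtosis of RSSI trace: {k:.2f}")
if k > 0.0:  # hypothetical threshold for flagging an approach
    print("heavy-tailed trace: movement toward the receiver is plausible")
```

A heavy-tailed (high-kurtosis) RSSI distribution is one plausible signature of a transmitter that spends most of its time far away but briefly comes close, as in an approach toward a target window.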


Author(s):  
Omid Setayeshfar ◽  
Karthika Subramani ◽  
Xingzi Yuan ◽  
Raunak Dey ◽  
Dezhi Hong ◽  
...  

2021 ◽  
Author(s):  
Md. Abdul Malek

<p><i>Notwithstanding the apparent hyperbole about AI's promise for judicial modernization, deep concerns have arisen, ranging from unfairness, privacy invasion, bias, and discrimination to a lack of transparency and legitimacy. Critics have branded the application of these systems in judicial settings as ethically, legally, and technically distressing. Accordingly, while a transparency debate is already under way, this paper attempts to revisit, extend, and contribute to that debate from a judicial perspective. Since preserving and promoting trust and confidence in the judiciary as a whole appears imperative, it explores how and why justice algorithms ought to be transparent as to their training data, methods, and outcomes. The paper ends by delineating tentative paths for doing away with black-box effects and suggesting a way forward for the use of algorithms in high-stakes areas such as judicial settings.</i></p>


2021 ◽  
Author(s):  
Md. Abdul Malek

<p><i>Although the apparent hyperbole about the promise of AI algorithms has carried them into judicial settings, it has also given rise to robust concerns, ranging from unfairness, privacy invasion, bias, discrimination, and a lack of legitimacy to a lack of transparency and explainability. Notably, critics have already denounced the current use of predictive algorithms in judicial decision-making on many grounds, branding them as ethically, legally, and technically distressing. While a transparency debate is already under way, this paper attempts to revisit, extend, and contribute to that debate from a judicial perspective. Since there is good cause to preserve and promote trust and confidence in the judiciary as a whole, it explores how and why justice algorithms ought to be transparent as to their outcomes, with a sufficient level of explainability, interpretability, intelligibility, and contestability. The paper ends by delineating tentative paths for doing away with black-box effects and suggesting a way forward for the use of algorithms in high-stakes areas such as judicial settings.</i></p>



Author(s):  
Isaac Wiafe ◽  
Felix Nti Koranteng ◽  
Ebenezer Owusu ◽  
Samuel Alimo

Over the years, conventional monitoring devices such as video cameras and tape recorders have been redesigned into smarter and smaller forms that can be integrated seamlessly into an environment. The purpose of these ubiquitous monitoring devices is to enable innovative applications and services that support user wellbeing. Despite improving operations in essential areas such as health, there are still concerns associated with ubiquitous monitoring. For its benefits to be fully realized, the role of user perceptions must be understood. This study investigates the factors that influence user perceptions of ubiquitous monitoring devices, drawing its sample from a developing country. Users' responses to seven recurring ubiquitous monitoring perceptions were collected using a survey questionnaire, and the relationships among these factors were analysed using Partial Least Squares Structural Equation Modelling. The results suggest a significant relationship between Perceived Natural Border Crossing and Perceived Privacy Invasion; in addition, Perceived Affordance, Perceived Coverage, and Perceived Privacy Invasion predicted Perceived Trust. The findings imply that more emphasis must be placed on educating and familiarizing users with ubiquitous monitoring devices. Future studies are expected to replicate this work in other developing societies to validate these claims.


2021 ◽  
Author(s):  
Marcel Hunecke ◽  
Nadine Richter ◽  
Holger Heppner

The present study aimed to identify psychological barriers that potentially prevent people from integrating collaborative car use into their everyday mobility behaviour. We proposed a model consisting of four psychological barriers: Autonomy Loss, Privacy Invasion, Interpersonal Distrust, and Data Misuse. Perceived Financial Benefit was included as a main incentive for collaborative car use. Using two samples, a community sample (N = 176) and a student sample (N = 265), three forms of peer-to-peer collaborative car use were examined: lending one's own car to another private person (Lending To), renting a car from another private person (Renting From), and sharing rides with others (Ridesharing). For each form, a standardised questionnaire was developed that included the psychological barriers, self-reported collaborative car use intention and behaviour, and scenario evaluations. The results showed that specific barriers predicted specific forms of collaborative car use: Autonomy Loss was negatively associated with Ridesharing, and Privacy Invasion negatively predicted Lending To. Data Misuse was negatively associated with Renting From when the rental was arranged via the internet. Interpersonal Distrust showed no predictive value for collaborative car use. Perceived Financial Benefit was a consistent incentive across all forms of collaborative car use. Overall, the results confirm the relevance of psychological barriers to collaborative car use. Practical implications for overcoming these barriers are discussed.


2021 ◽  
pp. 72-92
Author(s):  
Calvin Brierley ◽  
Budi Arief ◽  
David Barnes ◽  
Julio Hernandez-Castro
