Tag Archives: Security

Humans and Tech Risk


New technologies are constantly being released onto the market, bringing new and exciting functions and reshaping the way we live our lives. Psychological research can help us understand how people engage with technology and how they manage the risks and dangers of storing information and data.

Williams (2002) looked at government efforts to educate the public about ‘new security’ risks: threats to public health, national security, the environment and climate, and other aspects of well-being.

Williams suggested a framework for understanding how humans process information, with a focus on evolutionary brain science and the human species’ ability to perceive risk.

Central to Williams’ theoretical framework is what he calls “brain lag,” or the notion that the human brain has not evolved as rapidly as the pace of modernisation and, therefore, is incapable of perceiving many risks and threats in a modern world.

As a result, these shortcomings in perception and intellect leave humans ill-equipped to comprehend certain technology-related risks and without an innate “common sense” response to many modern threats (p. 227).

Williams (2002) makes the point that the trait of adaptation also brings with it an element of denial within behaviour because humans begin to accept false normalities in an urbanised world (e.g., living in cities with polluted air or adapting to the noise of a nearby airport). If denial by adaptation occurs, Williams (2002) maintains that humans will rely on sensory information to determine risk; however, many modern hazards tend to be unnoticeable to the human senses.

Conveying risk in IT is a complicated task.

Furthermore, he characterised conveying the hazards and risks of information security as a highly complex task for governments as there is little sensory information available for humans to assess information risk properly and to characterise such risk as threatening.

Elaborating on his theoretical framework and drawing on evolutionary brain science, Williams put forward the core concept of “enhanced difference” and outlined rules for creating communication materials on modern risks (p. 244).

His “enhanced difference” concept relies on basic evolutionary skills: the capacity to experience fear or disgust, to estimate size and impact through number-scale perception, and to judge which entities are reliable through trust-versus-cheating assessments.

Ultimately, Williams’ “enhanced difference” guidelines aim to make the unseen or unobserved risks of the modern world more visible to humans by appealing to those fundamental perceptual skills.

Williams, C. (2002). ‘New security’ risks and public educating: the significance of recent evolutionary brain science. Journal of Risk Research, 5(3), 225–248.

Andrea M Lewis

Andrea Lewis is a psychologist and Managing Director of Ad Hoc Global Ltd. With a foundation in equity research, she has been leading product development in technology and digital media for over 12 years and leads UX due diligence assessments, research, and strategy.

Constructs of Privacy Online

Online privacy is generally surrounded by doubt and scepticism; putting trust in a machine and allowing details of your private life to be absorbed is a daunting thought. Who can see it? How safe is it? Which sites can be trusted? These are questions most of us ask. When we are online we want to feel secure, yet there are many threats out there that can affect our online behaviour, making us cautious.

Dourish and Anderson (2006) suggest that privacy online is shaped by social and cultural circumstances, including group formation, group identity, group membership, and the behaviours deemed acceptable within the group. They defined privacy as a social practice encompassing aspects such as solitude, confidentiality, and autonomy.

The researchers challenged existing models of privacy and security in Human-Computer Interaction, citing areas where prior research has shown the failings of technological and design solutions in addressing usability needs and anticipating the real-world practices, preferences, and behaviours of end users.

Dourish and Anderson outlined three theoretical approaches to understanding privacy and security, each of which has failed to account for users’ actual information habits and practices.

  • First, they considered the existing model of privacy as economic rationality, where the risks and rewards of exchanging information take on values that can be weighed against each other (e.g., the benefit of online purchasing versus the cost of disclosing financial information) and the user is positioned as an entity able to calculate the impact of such a risk; a minimal sketch of this calculative framing follows the list.
  • Second, they presented the model of privacy as a practical action whereby security measures are considered in the context of everyday life and how people achieve privacy in real settings. Viewing privacy and security as a practice in action makes them ongoing measures rather than any static or fixed ideals. It also forces considerations of privacy and security beyond technical and computer systems and toward considerations of the context, so the people, organisations, entities or even the physical space involved.
  • Third, and lastly, the researchers presented the model of privacy as a discursive practice, where the use of language constitutes how privacy and security are characterised. Depending on the social or cultural circumstances, the language used to describe a risk conveys a clear perspective on whether an action is acceptable and secure, or unacceptable and insecure (e.g., co-workers choosing to share passwords as a display of teamwork, contradicting company policy).
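
To make the first model concrete, here is a minimal, hypothetical sketch of its calculative framing, the view Dourish and Anderson go on to critique. The Disclosure class, the numeric weights, and the decision rule are all illustrative assumptions rather than anything specified in the paper.

```python
# Hypothetical sketch of "privacy as economic rationality": the user is
# modelled as a rational agent weighing the expected benefit of disclosing
# information against its probability-weighted cost. All names and numbers
# are illustrative assumptions, not from Dourish & Anderson (2006).

from dataclasses import dataclass


@dataclass
class Disclosure:
    description: str
    benefit: float             # perceived value of the transaction (0-10)
    cost_if_breached: float    # perceived harm if the data is misused (0-10)
    breach_probability: float  # user's estimate of misuse (0-1)

    def expected_utility(self) -> float:
        """Benefit minus probability-weighted cost, per the economic framing."""
        return self.benefit - self.breach_probability * self.cost_if_breached


purchase = Disclosure(
    description="Share card details to buy a book online",
    benefit=8.0,
    cost_if_breached=9.0,
    breach_probability=0.05,
)

# Under this model, the user discloses whenever expected utility is positive.
if purchase.expected_utility() > 0:
    print(f"Disclose: {purchase.description}")
```

The critique is precisely that people rarely reason this way: the second and third models treat privacy as situated practice and language rather than as a calculation.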

For the most part, the first model, privacy as economic rationality, has dominated information system design. Dourish and Anderson (2006), however, reframe privacy and security in line with the latter two models, which are more inclusive of social aspects, and position privacy and security as a collective rather than an individual practice. In terms of how groups or cultures interpret risk, the researchers focus on risk as a matter of danger assessment, a moral valuation shared by the collective.

This perspective reinforces the requirement for information technology systems to acknowledge that individuals may be part of a collective and, consequently, to support them where necessary both as individuals and as a collective.

Another aspect of collective practice is the dynamic relationships within the collective itself: how secrecy and trust are expressed and how group identity is formed. Users’ social practices (how they manage, share, or withhold information) are positioned as markers of group membership which dictate trust and information sharing.

The researchers then argue that information system design must recognise the need to share information selectively, negotiating a continuum of public versus private boundaries rather than assigning information an inherent value of one over the other; one possible reading of this in implementation terms is sketched below.
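
Purely as an assumption on my part, that continuum might be rendered in code as a graded audience scope negotiated per item, rather than a binary public/private flag. The Audience scale and the visible_to rule below are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch: visibility as a negotiated continuum of audiences
# rather than a public/private bit. The scopes are illustrative assumptions.

from enum import IntEnum


class Audience(IntEnum):
    ONLY_ME = 0
    CLOSE_FRIENDS = 1
    GROUP = 2
    CONTACTS = 3
    PUBLIC = 4


def visible_to(item_scope: Audience, viewer_scope: Audience) -> bool:
    """A viewer sees an item if their relationship to the owner is at least
    as close as the scope the owner negotiated for that item."""
    return viewer_scope <= item_scope


# A photo shared with the group is visible to close friends, not the public.
photo_scope = Audience.GROUP
print(visible_to(photo_scope, Audience.CLOSE_FRIENDS))  # True
print(visible_to(photo_scope, Audience.PUBLIC))         # False
```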

Finally, Dourish and Anderson (2006) presented their system design solutions. These include visual feedback on system performance, so that users are aware of the potential consequences of their actions, and the integration of a system’s configuration panels (i.e., the visual control panels that manage privacy settings) with the relevant user actions, so that users can see the constraints their security preferences place on the system’s behaviour. A rough sketch of this idea follows.
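
Here is a minimal, hypothetical sketch of that suggestion: a settings panel that surfaces the behavioural consequences of each privacy choice at the moment the user makes it. The setting names, consequence messages, and PrivacyPanel class are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the design suggestion: couple privacy configuration
# with visible feedback, so every setting change reports its consequences for
# system behaviour. Settings and messages are illustrative, not from the paper.

CONSEQUENCES = {
    ("share_location", True): "Nearby-friends enabled; your city is visible to contacts.",
    ("share_location", False): "Nearby-friends disabled; no location is sent.",
    ("allow_cookies", True): "Sites can remember logins; browsing may be tracked.",
    ("allow_cookies", False): "Logins won't persist; some sites may not work fully.",
}


class PrivacyPanel:
    def __init__(self) -> None:
        self.settings = {"share_location": False, "allow_cookies": True}

    def set(self, name: str, value: bool) -> None:
        self.settings[name] = value
        # Visual feedback: surface the consequence at the moment of action,
        # rather than hiding it in a separate configuration screen.
        print(CONSEQUENCES[(name, value)])


panel = PrivacyPanel()
panel.set("share_location", True)
panel.set("allow_cookies", False)
```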

Dourish, P. & Anderson, K. (2006). Collective Information Practice: Exploring Privacy and Security as Social and Cultural Phenomena. Human-Computer Interaction, 21(3), 319–342.

http://www.dourish.com/publications/2006/DourishAnderson-InfoPractices-HCIJ.pdf



Andrea M Lewis

Andrea Lewis is a psychologist and Managing Director of Ad Hoc Global Ltd. With a foundation in equity research, she has been leading product development in technology and digital media for over 12 years and leads UX due diligence assessments, research, and strategy.