All posts by Andrea M Lewis

Andrea Lewis is a psychologist and Managing Director of Ad Hoc Global Ltd. With a foundation in equity research, she has been leading product development in technology and digital media for over 12 years and leads UX due diligence assessments, research, and strategy.

Google at 20: how a search engine became a literal extension of our mind




Benjamin Curtis, Nottingham Trent University

We are losing our minds to Google. After 20 years, Google’s products have become integrated into our everyday lives, altering the very structure of our cognitive architecture, and our minds have expanded out into cyberspace as a consequence. This is not science fiction, but an implication of what’s known as the “extended mind thesis”, a widely accepted view in philosophy, psychology and neuroscience.

Make no mistake about it, this is a seismic shift in human psychology, probably the biggest we have ever had to cope with, and one that is occurring with breathtaking rapidity – Google, after all, is just 20 years old this month. But although this shift has some good consequences, there are some deeply troubling issues we urgently need to address.

Much of my research spans issues to do with personal identity, mind, neuroscience, and ethics. And in my view, as we gobble up Google’s AI-driven “personalised” features, we cede ever more of our personal cognitive space to Google, and so both mental privacy and the ability to think freely are eroded. What’s more, evidence is starting to emerge that there may be a link between technology use and mental health problems. In other words, it is not clear that our minds can take the strain of the virtual stretch. Perhaps we are even close to the snapping point.

Where does the mind stop and the rest of the world begin?

This was the question posed in 1998 (coincidentally the same year Google was launched) by two philosophers and cognitive scientists, Andy Clark and David Chalmers, in a now famous journal article, The Extended Mind. Before their work, the standard answer among scientists was to say that the mind stopped at the boundaries of skin and skull (roughly, the boundaries of the brain and nervous system).

But Clark and Chalmers proposed a more radical answer. They argued that when we integrate things from the external environment into our thinking processes, those external things play the same cognitive role as our brains do. As a result, they are just as much a part of our minds as neurons and synapses. Clark and Chalmers’ argument produced debate, but many other experts on the mind have since agreed.

Our minds are linked with Google

Clark and Chalmers were writing before the advent of smartphones and 4G internet, and their illustrative examples were somewhat fanciful. They involved, for instance, a man who integrated a notebook into his everyday life that served as an external memory. But as recent work has made clear, the extended mind thesis bears directly on our obsession with smartphones and other devices connected to the web.

Growing numbers of us are now locked into our smartphones from morning until night. Using Google’s services (search engine, calendar, maps, documents, photo assistant and so on) has become second nature. Our cognitive integration with Google is a reality. Our minds literally lie partly on Google’s servers.



But does this matter? It does, for two major reasons.

First, Google is not a mere passive cognitive tool. Google’s latest upgrades, powered by AI and machine learning, are all about suggestions. Google Maps not only tells us how to get where we want to go (on foot, by car or by public transport), but now gives us personalised location suggestions that it thinks will interest us.

Google Assistant, always just two words away (“Hey Google”), now not only provides us with quick information, but can even book appointments for us and make restaurant reservations.

Gmail now makes suggestions about what we want to type. And Google News now pushes stories that it thinks are relevant to us, personally. But all of this removes the very need to think and make decisions for ourselves. Google – again I stress, literally – fills gaps in our cognitive processes, and so fills gaps in our minds. And so mental privacy and the ability to think freely are both eroded.

Addiction or integration?

Second, it doesn’t seem to be good for our minds to be spread across the internet. A growing cause for concern is so-called “smartphone addiction”, no longer an uncommon problem. According to recent reports, the average UK smartphone user checks their phone every 12 minutes. There are a whole host of bad psychological effects this could have that we are only just beginning to appreciate, depression and anxiety being the two most prominent.

But the word “addiction” here, in my view, is just another word for the integration I mentioned above. The reason why so many of us find it so hard to put our smartphones down, it seems to me, is that we have integrated their use into our everyday cognitive processes. We literally think by using them, and so it is no wonder it is hard to stop using them. To have one’s smartphone suddenly taken away is akin to having a lobotomy. Instead, to break the addiction/integration and regain our mental health, we must learn to think differently, and to reclaim our minds.

Benjamin Curtis, Lecturer in Philosophy and Ethics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Humans and Tech Risk

highlighting dangerous tech

New technologies are constantly being released onto the market with new, exciting functions, reshaping the way we live our lives as human beings. Psychological research studies may be helpful in understanding how people engage with technology and how they manage the risks and dangers relevant to information and data storage.

Williams (2002) looked at the government’s efforts to educate the public about security risks that threaten public health, security, the environment and climate, and other well-being factors.

Williams suggested a framework for understanding how humans process information, with a focus on brain science and the human species’ ability to perceive risk.

Central to Williams’ theoretical framework is what he calls “brain lag,” or the notion that the human brain has not evolved as rapidly as the pace of modernisation and, therefore, is incapable of perceiving many risks and threats in a modern world.

As a result, these shortcomings in perception and intellect leave humans ill-equipped to comprehend certain technology-related risks, and lacking an innate “common sense” response to many modern threats (p. 227).

Williams (2002) makes the point that the trait of adaptation also brings with it an element of denial within behaviour because humans begin to accept false normalities in an urbanised world (e.g., living in cities with polluted air or adapting to the noise of a nearby airport). If denial by adaptation occurs, Williams (2002) maintains that humans will rely on sensory information to determine risk; however, many modern hazards tend to be unnoticeable to the human senses.

Conveying risk in IT is a complicated task.

Furthermore, he characterised conveying the hazards and risks of information security as a highly complex task for governments as there is little sensory information available for humans to assess information risk properly and to characterise such risk as threatening.

Elaborating on his theoretical framework and relying on evolutionary brain science, Williams puts forth the core concept of “enhanced difference” and outlines rules for creating communication materials on modern risks (p. 244).

His “enhanced difference” concept relies on the basic evolutionary skills humans have to experience fear or disgust, estimate size and impact through number-scale perception, and to determine reliable entities through a trust versus cheating assessment.

Ultimately, Williams’ “enhanced difference” guidelines aim to make any unseen or unobserved risks of the modern world more visible to humans by appealing to those fundamental, perceptive skills.

Williams, C. (2002). ‘New security’ risks and public educating: the significance of recent evolutionary brain science. Journal of Risk Research, 5(3), 225–248.


Constructs of Privacy Online

Online privacy is generally surrounded by doubt and scepticism; putting trust in a machine and allowing details of your private life to be absorbed is a daunting thought. Who can see it? How safe is it? Which sites can be trusted? These are questions most of us ask. When we are online we want to feel secure, and there are many threats out there that can affect our online behaviour, making us cautious.

Dourish and Anderson suggest that privacy online can be determined by social and cultural circumstances, including group formation, group identity, group membership, and the behaviours characterised as acceptable within the group. They defined privacy as a social practice encompassing aspects such as solitude, confidentiality and autonomy.

The researchers challenged existing models of privacy and security in regards to Human Computer Interaction concerns, citing areas where prior research has shown the failings of technological and design solutions in addressing usability needs and anticipating the real-world practices, preferences, and behaviours of end-users.

Dourish and Anderson outlined three models of theoretical approaches to understanding privacy and security that have failed to account for actual information habits and practices of users.

  • First, they considered the existing model of privacy as an economic rationality, where the risks and rewards of exchanging information take on a value that can be prioritised (e.g., the benefit of online purchasing versus the costs of disclosing financial information) and the user is positioned as an entity able to calculate the impact of such a risk.
  • Second, they presented the model of privacy as a practical action whereby security measures are considered in the context of everyday life and how people achieve privacy in real settings. Viewing privacy and security as a practice in action makes them ongoing measures rather than any static or fixed ideals. It also forces considerations of privacy and security beyond technical and computer systems and toward considerations of the context, so the people, organisations, entities or even the physical space involved.
  • Third, the researchers presented the model of privacy as a discursive practice, where the use of language constitutes how privacy and security are characterised. Depending on the social or cultural circumstances, the language used to describe a risk will take on a clear perspective of whether the action is acceptable and secure, or unacceptable and insecure (e.g., co-workers choosing to share passwords as a display of teamwork, contradicting company policy).

For the most part, the first model of privacy as an economic rationality has dominated information system design. However, Dourish and Anderson (2006) then reframed privacy and security in line with the latter two models, which are more inclusive of social aspects, positioning privacy and security as a collective rather than an individual practice. In terms of how groups or cultures interpret risk, the researchers focused on risk as an aspect of danger assessment, or the moral value shared by the collective.

This perspective reinforces the requirement for information technology systems to acknowledge that individuals may be part of a collective and, consequently, to support them where necessary both as individuals and as a collective.

Another aspect of collective practice is the dynamic relationships within the collective itself and how secrecy and trust are expressed and group identity formed. Social practices of the users (how they manage, share, or withhold information) are positioned as markers of group membership which dictate trust and information sharing.

The researchers then argue that information system design must recognise the need to selectively share information and should negotiate a continuum of public versus private boundaries rather than giving information an inherent value of one over the other.

Finally, Dourish and Anderson (2006) presented their system design solutions, which include visual feedback of system performance so that users are aware of the potential consequences of their actions. They also suggest integrating a system’s configuration panels (i.e., the visual control panels that manage privacy settings) with the relevant user actions, so that users are aware of the constraints their security preferences have placed on their system’s performance.

Dourish, P. & Anderson, K. (2006). Collective Information Practice: Exploring Privacy and Security as Social and Cultural Phenomena. Human-Computer Interaction, 21(3), 319-342.

http://www.dourish.com/publications/2006/DourishAnderson-InfoPractices-HCIJ.pdf



Women and Video Games

 

When we think of ‘gamers’, a certain stereotype appears in most people’s minds, and more often than not that stereotype is male. Video games have become increasingly popular worldwide, and with a diverse range of games available, the demographic profile of the typical player or ‘gamer’ is changing. As more games are released and become readily available, the average age of players has increased and the gender distribution has begun to equalise (1).

However, the literature consistently finds that males play video games more frequently than females and play for longer intervals (2). It also states that both genders are equally likely to view video game playing as a masculine pursuit (3).

The gendering of video game play has been linked to low female motivation to play video games because of gender-role stereotyping (4). In particular, a connection has been made to reduced female participation in areas like science, mathematics, and technology, where there is a historical perception of women as ‘inferior’ (5).

Bryce and Rutter (2002) have argued that video game research must challenge the dominant gender stereotypes in gaming and focus on game-play as a “domestic” or leisure practice “in the context of everyday life” (p. 248), especially given the many genres of games, range of places in which to game and the popularity of domestic and online gaming among females.

Thus, context and personal experience become crucial factors in generating an explanatory model of female motivation in gaming. To date, there is no research on female gamers in circumstances where females are the perceived dominant gamers.

Female players are most pronounced in the ‘casual games’ industry (6), where they account for 51% of all players and 74% of the buyers (7).

Casual games have simple rules, allowing players to “get into” game-play quickly, are highly accessible to novice players, and can belong to any game genre (8). Researchers focusing on gender and computer games have suggested that casual games are often overlooked as “real” games because of an “unarticulated aesthetic” in the gaming community that considers mastery of so-called hardcore games a rite of passage to being a true gamer (9). Carr (2005) argues that simply because hard-core gamers appear more committed to their gaming, it does not mean that they are “more representative or more credible” than casual-gamers (p. 468).

As gender stereotypes persist regarding who is an ‘avid’ gamer, actual figures suggest that although males appear to play more than females, such findings are only true for certain countries, gaming platforms, and game genres (10).

Arguably, research into gaming may have overlooked genres and platforms where female players are more common than they initially appear. The deeper the research delves, the more evident it becomes that certain studies may have overlooked these factors.

(1) Entertainment Software Association, 2009

(2) Williams, Yee, & Caplan, 2008; Ogletree & Drake, 2007; Griffiths, Davies, & Chappell, 2004; Phillips, Rolls, Rouse, & Griffiths, 1995

(3) Selwyn, 2007

(4) Lucas & Sherry, 2004

(5) see Cassell & Jenkins, 1998

(6) Krotoski, 2004

(7) Casual Games Association, 2007

(8) Juul, 2009

(9) Sweedyk & de Laet, 2005, p. 26

(10) Krotoski, 2004

An excerpt from Lewis, A.M. and Griffiths, M.D. (2011). A qualitative study of the experiences and motivations of female casual-gamers. Aloma: Revista de Psicologia, Ciències de l’Educació i de l’Esport, 28, 245–272. (Spanish and English text).

The complete text is available as a part of the “open journals” system:
http://www.revistaaloma.net/index.php/aloma/article/view/37


UX and Due Diligence

UX is the key to standing out in a congested tech landscape.

Day-to-day living has changed dramatically due to the ability to connect via a diverse range of devices. Society has entered a hyper-connected era in which the social, educational, and business spheres are no longer separate. The power of software and wired devices encourages users to savour the privacy, anonymity, invisibility and convenience of the Internet as a platform.

Users quickly become dependent on being connected online and are swift to evaluate how efficiently and effectively a product meets these needs.

Understanding how and why people accept or reject a product is the framework for improving product user experience, and consequently product success in an ever-crowding marketplace.

As investors assess opportunities within the congested “tech” landscape, it becomes crucial to include a consideration of the product’s user experience as the unique differentiator. Likewise, as companies examine digital media products for acquisition or partnership, it is absolutely paramount that a user-centred evaluation of the product’s potential and features is outlined in the forward-looking growth strategy.

Leaders ought to consider the following two issues when evaluating a product’s potential:

1) The product’s ability to evolve and adapt to user needs

While product evolution and adaptation may be assumed elements of success, the ability to change quickly in response to market needs should not be taken for granted. Costly early builds inevitably become legacy products in the rapidly changing digital media environment. Furthermore, if any element of a service relies on such a legacy system, the service must not be so narrowly defined by, and tied to, that system that the service itself cannot evolve. It is nearly impossible for software and system developers to anticipate all user needs and context-of-use scenarios. The very genius of an initial product concept should not be its own demise. A product or concept must have an inherent adaptability, and this adaptability relies on a user-centred positioning. Were the concept and usage scenarios fully tested with potential target audiences? How was user research and feedback incorporated into the development process? And, finally, is there a development framework that allows user feedback to fuel future innovation?
2) The development team’s approach to incorporating user requirements

Brilliant minds are frequently forthright when it comes to clarity of strategy, but pride that never considers a fall is a risk in itself. There are times when leaders, product teams, designers and engineers are so thoroughly embedded within their own product experience that they are unable to strategically execute on user experience and customer insight. A simple resolution for any such reluctance and resistance is to ensure that customer insight and UX are the driving values of the product development cycle. Quite simply, in order to embrace a technology, a user must be able to use the technology and, more than that, achieve mastery of the product for any sense of self-efficacy. When a team recognises that product adoption is a core aspect of innovation, a user-centred philosophy is the natural approach. “But customers don’t know what they want!” This second point is not to suggest that development teams should be led solely by user requirements. Rather, leadership teams must be amenable to input from users and implement structured changes based on feedback, finding the innovation in the solution that leads to rich product differentiation.

User experience analysis provides an accurate valuation of digital media products in the marketplace through quantifiable and impartial insights. The UX field has expanded to include research, design, development, and project management. Many agencies and companies have adopted a user-centred development process as a key competitive edge. While challenges remain as to how UX insight successfully merges with business strategy, one adage remains clear: the customer may not know what they want, but the customer is always right.

 
