Category Archives: UX Research

Explainer: what is experience design?

Anyone who wishes to innovate can design.

Faye Miller, Queensland University of Technology

“It’s not just a _____, it’s an experience.”

Substitute the blank space above with just about anything these days (car, meal, city, website, course, concert, charity, therapy), and you get the unofficial catch cry of the early 21st century.

Whatever you have to promote to the world – among the endless options in category X competing for attention – is simply not desirable unless it is an “experience”.

But what exactly does that mean? And what does it mean for the two groups of people who potentially collaborate to provide it – the creative types and the business types?

Experience and design

Experiences are ultimately about human perceptions, memories and impressions. Psychologically speaking, how a person experiences an event or phenomenon is an emotional and rational response to an outside stimulus.

Once lived, an experience can be stored as a memory within a person’s mind – and we all know we like to keep pleasant memories that “stick” for the right reasons.

Design usually falls into the domain of the creative types; but “design thinking” is becoming an acceptable and popular practice for just about anyone. As Tim Brown, CEO of global design firm IDEO put it:

Design thinking is a human-centred approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.

That means anyone who wishes to innovate can design – that is, visualise, map, conceptualise, sketch – solutions based on gathering knowledge of how people behave in terms of technological use or non-use, and how this knowledge can advance the aims of an endeavour.

When we talk about designing experiences, it is important to first understand how certain types of people experience something in context, and then design or facilitate experiences that make a positive difference for people.


In the US and Europe, the so-called “experience economy” (also known as exponomy) is on the rise as a potentially transformative concept for businesses, consumers and society in general. The idea can be traced back to 1998, when B. Joseph Pine II and James H. Gilmore, writing in the Harvard Business Review, introduced a new way of thinking about commodities as more than just goods and services.

Commodities were, the pair argued, more about human experiences that are highly memorable and emotionally engaging enough to sustain long-term value and relationships. Such experiences were powerful enough to change the ways in which people lived and behaved. In short, Pine and Gilmore believed people were willing to pay more for the commodity with the X-Factor.

This suggests companies need to pay much closer attention to the design of experiences co-created by their customers. Businesses need to provide opportunities for customers to participate in experience design through user research. Similarly, a collective mindset needs to be cultivated that allows businesses to realise the interrelatedness of different companies and industries.

This would help them design for experiences that are collaborative across different sectors. For example, a major fashion event would collaborate with the entertainment, media and tourism/hospitality industries to provide an audience with a lasting impression through a multi-sensory experience that is both enjoyable and prosperous.

In recent years, “experience”-related positions such as User Experience (UX) Designer/Researcher and Chief Experience Officer (CXO) have become increasingly visible in organisations of all types.

While some creative positions have a narrow focus on designing digital experiences for website users, others at the senior executive level, such as CXO, aim to plan and maintain a more holistic user-business-technology experience, including “blended” experiences online and offline.

Some experience designers work as freelance consultants, either independently or as part of a design firm, for clients in a range of sectors.

Experience design globally and in Australia

In practice, experience design has grown to include the personalisation of experiences through better understanding of different types of human beings combined with unique, innovative ideas developed by company leaders. Perhaps the most well known example of a globally influential and transformative experience-based commodity is Apple.

Apple changed the way people experienced technology, with simple interfaces, interactive gestures and memorable branding permeating the products through to their digital and in-store service. This design combined the user needs and behaviour that Apple’s designers perceived with their own creativity. As Steve Jobs himself put it:

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesise new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people.

A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

While the tech industry in the US seems to have embraced the experience economy (with US-based innovation firm frog this year declaring its “coming of age”), the concept has impacted upon many types of businesses and sectors across the globe.

Australian businesses are now starting to acknowledge the emergence of the experience economy with sectors such as (but not limited to) arts and entertainment, tourism, and higher education re-thinking their roles as key players.

Recently, the Australian independent music industry was explored conceptually for the first time using an experience-economy lens, acknowledging the complex relationships and interactions between music business entrepreneurs, musicians, music fans, and the digital and live music experiences.

While these elements usually work in isolation, the exponomy (and the experience designers/CXOs who implement the concept) is able to unite them on common ground. Furthermore, the exponomy highlights the fusing of industries towards increasing value for all stakeholders involved in a given venture.

A good example of this is the recent collaboration between Australian musicians and wine tourism campaigns, featuring a classic Nick Cave soundtrack for the Be Consumed at Barossa Valley cinematic multi-sensory advert.

The ad won international acclaim as best tourism ad at Cannes and has succeeded in its goal of attracting more tourists to the Barossa Valley. This shows that real-life experiences can begin with audio-visual tempters designed to engage imaginations on a personal level.

In the same way, higher education in Australia as a major service provider is currently reframing its understanding of how to design for diverse experiences for students, teachers, researchers and research users.

Designing experiences that acknowledge, enthuse, inspire and potentially positively transform the whole person – not just the customer, employee, student or statistic – appears vital to sustaining long-term partnerships.

Faye Miller, PhD Candidate, Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Google at 20: how a search engine became a literal extension of our mind

Benjamin Curtis, Nottingham Trent University

We are losing our minds to Google. After 20 years, Google’s products have become integrated into our everyday lives, altering the very structure of our cognitive architecture, and our minds have expanded out into cyberspace as a consequence. This is not science fiction, but an implication of what’s known as the “extended mind thesis”, a widely accepted view in philosophy, psychology and neuroscience.

Make no mistake about it, this is a seismic shift in human psychology, probably the biggest we have ever had to cope with, and one that is occurring with breathtaking rapidity – Google, after all, is just 20 years old this month. But although this shift has some good consequences, there are some deeply troubling issues we urgently need to address.

Much of my research spans issues to do with personal identity, mind, neuroscience, and ethics. And in my view, as we gobble up Google’s AI-driven “personalised” features, we cede ever more of our personal cognitive space to Google, and so both mental privacy and the ability to think freely are eroded. What’s more, evidence is starting to emerge that there may be a link between technology use and mental health problems. In other words, it is not clear that our minds can take the strain of the virtual stretch. Perhaps we are even close to the snapping point.

Where does the mind stop and the rest of the world begin?

This was the question posed in 1998 (coincidentally the same year Google was launched) by two philosophers and cognitive scientists, Andy Clark and David Chalmers, in a now famous journal article, The Extended Mind. Before their work, the standard answer among scientists was to say that the mind stopped at the boundaries of skin and skull (roughly, the boundaries of the brain and nervous system).

But Clark and Chalmers proposed a more radical answer. They argued that when we integrate things from the external environment into our thinking processes, those external things play the same cognitive role as our brains do. As a result, they are just as much a part of our minds as neurons and synapses. Clark and Chalmers’ argument produced debate, but many other experts on the mind have since agreed.

Our minds are linked with Google

Clark and Chalmers were writing before the advent of smartphones and 4G internet, and their illustrative examples were somewhat fanciful. They involved, for instance, a man who integrated a notebook into his everyday life that served as an external memory. But as recent work has made clear, the extended mind thesis bears directly on our obsession with smartphones and other devices connected to the web.

Growing numbers of us are now locked into our smartphones from morning until night. Using Google’s services (search engine, calendar, maps, documents, photo assistant and so on) has become second nature. Our cognitive integration with Google is a reality. Our minds literally lie partly on Google’s servers.

But does this matter? It does, for two major reasons.

First, Google is not a mere passive cognitive tool. Google’s latest upgrades, powered by AI and machine learning, are all about suggestions. Google Maps not only tells us how to get where we want to go (on foot, by car or by public transport), but now gives us personalised location suggestions that it thinks will interest us.

Google Assistant, always just two words away (“Hey Google”), now not only provides us with quick information, but can even book appointments for us and make restaurant reservations.

Gmail now makes suggestions about what we want to type. And Google News now pushes stories that it thinks are relevant to us, personally. But all of this removes the very need to think and make decisions for ourselves. Google – again I stress, literally – fills gaps in our cognitive processes, and so fills gaps in our minds. And so mental privacy and the ability to think freely are both eroded.

Addiction or integration?

Second, it doesn’t seem to be good for our minds to be spread across the internet. A growing cause for concern is so-called “smartphone addiction”, no longer an uncommon problem. According to recent reports, the average UK smartphone user checks their phone every 12 minutes. There are a whole host of bad psychological effects this could have that we are only just beginning to appreciate, depression and anxiety being the two most prominent.

But the word “addiction” here, in my view, is just another word for the integration I mentioned above. The reason why so many of us find it so hard to put our smartphones down, it seems to me, is that we have integrated their use into our everyday cognitive processes. We literally think by using them, and so it is no wonder it is hard to stop using them. Suddenly having one’s smartphone taken away is akin to having a lobotomy. To break the addiction/integration and regain our mental health, we must learn to think differently, and to reclaim our minds.

Benjamin Curtis, Lecturer in Philosophy and Ethics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

4 ways your Google searches and social media affect your opportunities in life

Lorna McGregor, University of Essex; Daragh Murray, University of Essex, and Vivian Ng, University of Essex

Whether or not you realise or consent to it, big data can affect you and how you live your life. The data we create when using social media, browsing the internet and wearing fitness trackers are all collected, categorised and used by businesses and the state to create profiles of us. These profiles are then used to target advertisements for products and services to those most likely to buy them, or to inform government decisions.

Big data enable states and companies to access, combine and analyse our information and build revealing – but incomplete and potentially inaccurate – profiles of our lives. They do so by identifying correlations and patterns in data about us, and people with similar profiles to us, to make predictions about what we might do.

But just because big data analytics are based on algorithms and statistics, does not mean that they are accurate, neutral or inherently objective. And while big data may provide insights about group behaviour, these are not necessarily a reliable way to determine individual behaviour. In fact, these methods can open the door to discrimination and threaten people’s human rights – they could even be working against you. Here are four examples where big data analytics can lead to injustice.

1. Calculating credit scores

Big data can be used to make decisions about credit eligibility, affecting whether you are granted a mortgage, or how high your car insurance premiums should be. These decisions may be informed by your social media posts and data from other apps, which are taken to indicate your level of risk or reliability.

But data such as your education background or where you live may not be relevant or reliable for such assessments. This kind of data can act as a proxy for race or socioeconomic status, and using it to make decisions about credit risk could result in discrimination.
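The proxy problem can be made concrete with a toy sketch. Nothing below is any real lender’s model; the features, postcodes and weights are all invented for illustration:

```python
# Illustrative sketch only: not any real lender's model. The features,
# postcodes and weights are invented. The point is that a score which never
# looks at a protected attribute can still disadvantage a group, because
# postcode often correlates with race or socioeconomic status.

HIGH_INCOME_POSTCODES = {"2000", "3000"}  # hypothetical "affluent" areas

def credit_score(applicant):
    """Toy credit score: higher is better. Uses only 'neutral' features."""
    score = 500
    score += applicant["years_employed"] * 20
    if applicant["postcode"] in HIGH_INCOME_POSTCODES:
        score += 100  # the proxy feature: looks neutral on paper
    return score

a = {"years_employed": 5, "postcode": "3000"}  # lives in an affluent area
b = {"years_employed": 5, "postcode": "4000"}  # identical employment history

print(credit_score(a), credit_score(b))  # 700 600
```

Two applicants with identical employment histories receive different scores purely because of where they live – the model never sees a protected attribute, yet it can still reproduce the disparity that the attribute correlates with.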

2. Job searches

Big data can be used to determine who sees a job advertisement or gets shortlisted for an interview. Job advertisements can be targeted at particular age groups, such as 25 to 36-year-olds, which excludes younger and older workers from even seeing certain job postings and presents a risk of age discrimination.

Seek, but ye shall not always find.

Automation is also used to make filtering, sorting and ranking candidates more efficient. But this screening process may exclude people on the basis of indicators such as the distance of their commute. Employers might suppose that those with a longer commute are less likely to remain in a job long-term, but this can actually discriminate against people living further from the city centre due to the location of affordable housing.
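A minimal sketch of this kind of automated screening follows; the cutoff, field names and candidates are hypothetical:

```python
# A minimal sketch of automated candidate screening. The cutoff, field
# names and candidates are hypothetical; the filtering step mirrors the
# commute-distance example described above.

MAX_COMMUTE_KM = 15  # an arbitrary threshold an employer might set

def shortlist(candidates):
    """Drop candidates beyond the commute cutoff, then rank by experience."""
    kept = [c for c in candidates if c["commute_km"] <= MAX_COMMUTE_KM]
    return sorted(kept, key=lambda c: c["experience_years"], reverse=True)

pool = [
    {"name": "A", "experience_years": 8, "commute_km": 25},  # filtered out
    {"name": "B", "experience_years": 3, "commute_km": 10},
    {"name": "C", "experience_years": 6, "commute_km": 12},
]

print([c["name"] for c in shortlist(pool)])  # ['C', 'B']
```

The most experienced candidate is removed before any human sees the application, purely on the basis of commute distance – and therefore, indirectly, on where affordable housing happens to be.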

3. Parole and bail decisions

In the US and the UK, big data risk assessment models are used to help officials decide whether people are granted parole or bail, or referred to rehabilitation programmes. They can also be used to assess how much of a risk an offender presents to society, which is one factor a judge might consider when deciding the length of a sentence.

It’s not clear exactly what data is used to help make these assessments, but as the move toward digital policing gathers pace, it’s increasingly likely that these programmes will incorporate open source information such as social media activity – if they don’t already.

These assessments may not just look at a person’s profile, but also how theirs compares to others’. Some police forces have historically over-policed certain minority communities, leading to a disproportionate number of reported criminal incidents. If this data is fed into an algorithm, it will distort the risk assessment models and result in discrimination which directly affects a person’s right to liberty.
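The feedback loop this creates can be illustrated with a toy simulation (all numbers invented): two areas with identical true crime rates, one of which has historically received twice the patrols. Recorded incidents scale with patrol presence, and the next round of patrols is allocated in proportion to what was recorded:

```python
# Toy feedback loop, all numbers invented: two areas with the same true
# crime rate, but area 0 has historically received twice the patrols.
# More patrols means more incidents *recorded*, not more committed, and
# each round reallocates a fixed patrol budget by recorded incidents.

def simulate(patrols, true_rate=10, rounds=5):
    """Return the patrol allocation after several allocate-record cycles."""
    for _ in range(rounds):
        recorded = [true_rate * p for p in patrols]       # observation bias
        budget = sum(patrols)                             # fixed patrol budget
        total = sum(recorded)
        patrols = [budget * r / total for r in recorded]  # reallocate
    return patrols

print(simulate([2, 1]))  # [2.0, 1.0] – the disparity never corrects
```

Because the model only ever sees recorded incidents, never the identical underlying rates, the historical 2:1 disparity is reproduced in every cycle rather than corrected – the bias is locked in.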

4. Vetting visa applications

Last year, the United States’ Immigration and Customs Enforcement Agency (ICE) announced that it wanted to introduce an automated “extreme visa vetting” programme. It would automatically and continuously scan social media accounts, to assess whether applicants will make a “positive contribution” to the United States, and whether any national security issues may arise.

As well as presenting risks to freedom of thought, opinion, expression and association, there were significant risks that this programme would discriminate against people of certain nationalities or religions. Commentators characterised it as a “Muslim ban by algorithm”.

The programme was recently withdrawn, reportedly on the basis that “there was no ‘out-of-the-box’ software that could deliver the quality of monitoring the agency wanted”. But including such goals in procurement documents can create bad incentives for the tech industry to develop programmes that are discriminatory-by-design.

There’s no question that big data analytics works in ways that can affect individuals’ opportunities in life. But the lack of transparency about how big data are collected, used and shared makes it difficult for people to know what information is used, how, and when. Big data analytics are simply too complicated for individuals to be able to protect their data from inappropriate use. Instead, states and companies must make – and follow – regulations to ensure that their use of big data doesn’t lead to discrimination.

Lorna McGregor, Director, Human Rights Centre, PI and Co-Director, ESRC Human Rights, Big Data and Technology Large Grant, University of Essex; Daragh Murray, Lecturer in International Human Rights Law at Essex Law School, University of Essex, and Vivian Ng, Senior Researcher in Human Rights, University of Essex

This article was originally published on The Conversation. Read the original article.

Job: UX Researcher Trainee in London

We’re hiring a UX Researcher Trainee to join our London team.

This UX Researcher Trainee position offers excellent opportunities for career progression and growth. The trainee role will last approximately 3 to 6 months and will function like an apprenticeship – where you learn the rules of the trade and assist senior researchers with projects.

As a successful UX Researcher Trainee your next step is Junior Researcher, where you will:
– Lead research projects and support team members on other projects
– Meet and liaise with clients to negotiate and agree research projects
– Assist in formulating a plan/proposal and presenting it to the client or senior management
– Write and manage the distribution of surveys and questionnaires
– Assist the senior management on various tasks.
– Manage data and input data into databases.

As a UX Researcher Trainee, you will be:
– An enthusiastic, hard-working and diligent individual
– An excellent verbal and written communicator
– A business extrovert, comfortable dealing with individuals at all corporate levels, including board level
– Comfortable working in a high-pressure, fast-paced environment where multiple projects and competing demands are the norm
– A team player, detail-oriented and a quick learner

Any experience in the Tech, Marketing or Multimedia industry would be an added bonus.

Recent graduates are welcome but must demonstrate relevant coursework (e.g. thesis work); those with more experience who are attempting a career change are also encouraged to apply.

Psychology and social science degree holders – particularly those with Research Methods training – are strongly encouraged to apply. More technical or design experience is also welcome, but please mention your interest in research and your skills for research fieldwork.

Salary is 60 per day and training lasts 3 to 6 months.
The weekly schedule may vary from 30 to 40 hours per week.

Send CVs with introduction letter to

The psychology of believing in free will

Peter Gooding, University of Essex

From coffee table books and social media to popular science lectures, it seems it has become increasingly fashionable for neuroscientists, philosophers and other commentators to tell anyone who will listen that free will is a myth.

But why is this debate relevant to anyone but a philosophy student keen to impress a potential date? Actually, a growing body of evidence from psychology suggests belief in free will matters enormously for our behaviour. It is also becoming clear that how we talk about free will affects whether we believe in it.

In the lab, using deterministic arguments to undermine people’s belief in free will has led to a number of negative outcomes including increased cheating and aggression. It has also been linked to a reduction in helping behaviours and lowered feelings of gratitude.

A recent study showed that it is possible to diminish people’s belief in free will simply by making them read a science article suggesting that everything is predetermined. This made the participants less willing to donate to charitable causes (compared to a control group). This was only observed in non-religious participants, however.

Scientists argue that these outcomes may be the result of a diminished sense of agency and control that comes with believing that we are free to make choices. Similarly, we may also feel less moral responsibility for the outcomes of our actions.

It may therefore be unsurprising that some studies have shown that people who believe in free will are more likely to have positive life outcomes – such as happiness, academic success and better work performance. However, the relationship between free will belief and life outcomes may be complex, so this association is still debated.

Disturbing dualism

Language and definitions seem linked to whether we believe in free will. Those who refute the existence of free will typically refer to a philosophical definition of free will as an ability of our consciousness (or soul) to make any decision it chooses – regardless of brain processes or preceding causal events. To undermine it, they often couple it with the “determinism” of classical physics. Newton’s laws of physics simply don’t allow for free will to exist – once a physical system is set in motion, it follows a completely predictable path.

According to fundamental physics, everything that happens in the universe is encoded in its initial conditions. From the Big Bang onward, mechanical cause-and-effect interactions of atoms formed stars, planets, life and eventually your DNA and your brain. It was inevitable. Your physical brain was therefore always destined to process information exactly as it does, so every decision you are ever going to make is predetermined. You (your consciousness) are a mere bystander – your brain is in charge of you. Therefore you have no free will. This argument is known as determinism.

Descartes’ view of mind and body: inputs are passed on by the sensory organs to the epiphysis in the brain, and from there to the immaterial spirit.

But this approach is absurdly dualistic, requiring people to see their consciousness as their true self and their brain as something separate. Despite being an accurate description of the philosophical definition of free will, this flies in the face of what ordinary people – and many scientists – actually believe.

In reality it seems that the functioning of our brain actually affects our consciousness. Most of us can recognise, without existential angst, that drinking alcohol, which impacts our physical brain, subsequently diminishes our capacity to make rational choices in a manner that our consciousness is powerless to simply override. In fact, we tend to be able to accept that our consciousness is the product of our physical brain, which removes dualism. It is not that our brains make decisions for us, rather we make our decisions with our brains.

Most people define free will as simply their capacity to make choices that fulfil their desires – free from constraints. This lay understanding of free will doesn’t really involve arguments about deterministic causation stretching back to the Big Bang.

But how could we learn about the arguments for and against the existence of free will without feeling threatened and having our moral judgement undermined? One way could be to re-express valid deterministic arguments in language that people actually use.

For example, when the determinist argues that “cause-and-effect interactions since the Big Bang fashioned the universe and your brain in a way that has made your every decision inevitable”, we could replace it with more familiar language. For example, “your family inheritance and life experience made you the person you are by forming your brain and mind”.

In my view, both arguments are equally deterministic – “family inheritance” is another way of saying DNA while “life experiences” is a less challenging way of saying prior causal events. But, importantly, the latter allows for a greater feeling of freedom, potentially reducing any possible negative impacts on behaviour.

Quantum weirdness

Some even argue that the notion of scientific determinism is being challenged by the rise of quantum mechanics, which governs the micro world of atoms and particles. According to quantum mechanics, you cannot predict with certainty what route a particle will take to reach a target – even if you know all its initial conditions. All you can do is to calculate a probability, which implies that nature is a lot less predictable than we thought. In fact, it is only when you actually measure a particle’s path that it “picks” a specific trajectory – until then it can take several routes at once.

While quantum effects such as these tend to disappear on the scale of people and everyday objects, it has recently been shown that they may play a role in some biological processes, ranging from photosynthesis to bird navigation. So far we have no evidence that they play any role in the human brain – but, of course, that’s not to say they don’t.

People using a philosophical definition and classical physics may argue convincingly against the existence of free will. However, they may want to note that modern physics does not necessarily agree that free will is impossible.

Ultimately, whether free will exists or not may depend on your definition. If you wish to deny its existence, you should do so responsibly by first defining the concepts clearly. And be aware that this may affect your life a lot more than you think.

Peter Gooding, PhD Candidate of Psychology, University of Essex

This article was originally published on The Conversation. Read the original article.

Why technology puts human rights at risk

Birgit Schippers, Queen’s University Belfast

Movies such as 2001: A Space Odyssey, Blade Runner and Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don’t seem so far removed from reality.

Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.

Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?

AI and human rights

First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I’m not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.

This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life and death decisions, with limited or no human control.

AI also revolutionises the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google’s participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, which can be used for extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it will not be renewing its contract.

In 2013, the extent of surveillance practices was highlighted by the Edward Snowden revelations. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica’s harvesting of personal data via the use of social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference into democratic elections that damage the right to freedom of expression.

Meanwhile, critical data analysts challenge discriminatory practices associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.

AI can replicate and entrench stereotypes.

Ambiguous bots

The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?

There are ongoing efforts to update existing human rights principles for this era. These include the UN Framework and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age, and the Future of Life Institute’s Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.

These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.

Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.

But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.

And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.

Birgit Schippers, Visiting Research Fellow, Senator George J Mitchell Institute for Global Peace, Security and Justice, Queen’s University Belfast

This article was originally published on The Conversation. Read the original article.

Geeky Girl Reality, 2016, 3rd series

The purpose of our longitudinal study is to develop ongoing insights into girls studying STEM and women pursuing STEM careers, in response to the continuing evidence of women’s underrepresentation in STEM, stereotyped environments and double standards.


Our 2016 survey of 163 women aged 15–46, representing 16 countries worldwide, focused on developing insights into the current experiences of girls studying STEM at college and university, using a mixed-methods approach. Previous series found links between early childhood interests and the later pursuit of STEM careers (please see our previous blog), and between higher education and women’s interest and confidence in STEM (see our previous blog).


Following on from our previous 2016 findings, this series analyses the relationship between the different preparation activities girls undertake for their STEM careers, their 10-year plans and their confidence in ‘getting a job’.

Preparation and 10 year plan

Graph 1 shows the relationship between different preparation activities (horizontal axis) and 10-year career plans (vertical axis); the pink bar indicates the percentage of girls who predict they will be in a STEM career in 10 years, and the green bar indicates the percentage who predict they will be in a non-STEM career.

[Graph 1: preparation activities vs 10-year career plans]
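The pink/green percentage breakdown in Graph 1 can be reproduced from raw survey responses with a simple cross-tabulation. Here is a minimal sketch in Python; the sample records and field names are invented for illustration, since the raw survey data isn’t published:

```python
from collections import Counter, defaultdict

# Hypothetical (preparation activity, 10-year plan) survey responses --
# the real dataset and its field names are not public.
responses = [
    ("research", "STEM"), ("research", "STEM"),
    ("programmes", "STEM"), ("programmes", "non-STEM"),
    ("interview practice", "non-STEM"), ("interview practice", "non-STEM"),
]

# Count 10-year plans within each preparation activity.
counts = defaultdict(Counter)
for preparation, plan in responses:
    counts[preparation][plan] += 1

# Convert counts to within-activity percentages -- the pink/green bars.
breakdown = {
    prep: {plan: 100 * n / sum(plans.values()) for plan, n in plans.items()}
    for prep, plans in counts.items()
}
print(breakdown["research"])  # {'STEM': 100.0}
```

With the full dataset, each activity’s percentages would correspond to one pair of bars in the graph.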

Results from Graph 1 indicate that girls who prepare by carrying out research or enrolling on programmes are around 10% and 12% more likely, respectively, to pursue a STEM career than girls whose preparation takes the form of interview practice, attending seminars and conferences, studying for STEM, or taking part in volunteering and internship opportunities. Participants called for more programmes focused on young girls (“Have more programs aimed at the youth”), suggesting that schools and colleges could provide more opportunities for young girls to get involved with STEM: introducing coding clubs, inviting women ‘role model’ guest speakers, and promoting general awareness of and exposure to different STEM subjects. In the long run, these early influences could foster stronger STEM identities in women, helping to retain them in STEM careers.

Participant #148

Furthermore, the results indicate that preparation for STEM is overall a good protective factor against later attrition from STEM, with more than 60% of girls who take part in preparatory activities planning to stay in STEM careers. The findings may suggest that girls who invest more time in preparation, such as carrying out research activities, are less likely to deviate from STEM careers in the future.


These initial insights suggest that girls should be encouraged to take part in a range of preparatory activities for STEM.


Preparations and Confidence

Graph 2 shows the relationship between the different preparatory activities and girls’ perceived confidence in ‘getting a job’ in STEM. The horizontal axis shows the confidence scores and the vertical axis shows the preparatory activities, using colour-keyed circles.

[Graph 2: preparation activities vs confidence in ‘getting a job’]

Research suggests that low ‘professional’ confidence is a contributing factor in attrition from STEM. Interestingly, the results in Graph 2 indicate a significant association between the different preparatory activities and confidence in ‘getting a job’. ‘Interview practice’ is associated with the least confidence, while ‘programs’ are 25% more likely to be associated with confidence in getting a job, with an average score of 4.2 out of 5.

Participant #18

Moreover, women emphasised that more programmes need to be made available to encourage young girls: “There should be more accessible programs for girls at younger ages and more well-rounded visibility and representation of women in STEM fields in media”. This adds substance to the argument that society needs to target STEM interest in girls at a young age, which may help build their confidence over time, and suggests that media representation may bear some responsibility for women’s confidence levels. Although more companies are starting to realise the benefit of employing more women in the field (see how Microsoft’s #MakeWhatsNext and Google’s madewithcode initiatives are helping to nurture young female talent), there is still a long way to go.


‘Volunteering/internships’ were also significantly positively correlated with confidence, with an average score of 4.1 out of 5. One participant emphasised the importance of internships in creating a more structured career focus: “Internships. Internships. I can’t stress that enough. Getting hands-on experience can be the make-or-break when deciding what field one wants to pursue”. Research was also expressed as one of the most significant preparation methods for increasing confidence, scoring around 4.1 out of 5, which suggests that more funding and flexibility for women pursuing research in STEM would help improve confidence and lower attrition. Participants proposed “in STEM fields, increased grants and scholarships will entice more females” and “Scholarships/funding for women to take postgraduate courses” as key areas that could be improved to encourage future generations of women to pursue STEM careers.
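The average confidence scores quoted in this section (4.2 for ‘programs’, around 4.1 for ‘volunteering/internships’ and ‘research’) are per-activity means of the 1–5 confidence ratings. A minimal sketch of that calculation, again with invented sample scores since the raw responses aren’t published:

```python
from collections import defaultdict

# Hypothetical (preparation activity, confidence score out of 5) responses.
scores = [
    ("programs", 4), ("programs", 5), ("programs", 4),
    ("interview practice", 3), ("interview practice", 2),
    ("volunteering/internships", 4), ("volunteering/internships", 4),
]

# Group scores by activity.
by_activity = defaultdict(list)
for activity, score in scores:
    by_activity[activity].append(score)

# Mean confidence per preparation activity, as plotted in Graph 2.
means = {activity: sum(s) / len(s) for activity, s in by_activity.items()}
print(round(means["programs"], 2))  # 4.33
```

Each mean would correspond to one colour-keyed circle on the graph’s horizontal confidence axis.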


This would suggest that ‘programs’ and ‘research’ play an important role in both attrition and confidence.


These findings may be explained by ‘investment theory’: preparatory activities that involve a large amount of sacrifice and time investment make it less likely that a person will deviate from that path, even in adverse circumstances, and may thus act as a protective factor against the damage that stereotypes and ‘masculine’ environments can do to women’s confidence.


Encouraging more women to continue studying STEM


  1. Colleges and universities can encourage girls to take part in preparatory activities by holding open evenings and information talks about the different programmes they can get involved with.

  2. Increase the awareness and accessibility of internship and volunteering opportunities for girls, for example through social media and student unions at colleges and universities, where students can access different opportunities.

  3. Offer more research opportunities for girls at college and university. Extracurricular activities could focus on research skills and on helping students develop their own interests and small independent projects.


We can change the future if we work together.

This has been the third in a series of explorations into the experiences of women in science, technology, engineering and maths. Keep an eye out for more posts as we look at other influences affecting women’s careers.



Andrea Lewis, Raiya Al-Ansari, Molly Goodman

Geeky Girl Reality, 2016

There are still comparatively few women working in science and technology. Recent studies show that only 23% of science, technology, engineering and mathematics (STEM) professionals are women, and 27% of these are likely to leave their job within the first year.

So, why aren’t more women entering and remaining in science and technology? What’s causing this gender gap?

Geeky Girl Reality is a longitudinal, independent research project looking at how women’s experiences influence their interests in science and technology.

We’re drawing on data from a spring 2016 survey of 163 women between the ages of 15-46 from 16 countries around the world.

From their stories, we learn about the effects women’s experiences have on their pursuit of higher education in science, technology, engineering, and maths. We have discovered some interesting insights.

Having a plan

To start, we’ll take a look at our participants’ early life experiences and how their plans are affected by their childhood interests or mentors.

Our data indicates that career paths are influenced very early on by childhood interests. One participant said that, “One of the main reasons why I am so involved in math and CS [Computer Science] now is because I was exposed to both subjects at a very young age.”

This trend can be seen from the bar graph below, which compares our survey participants’ childhood interests to their 10-year plans.


On the horizontal axis, each childhood interest is listed with a bar representing the corresponding 10-year-plan responses. The pink bars show the percentage of women planning to pursue a STEM career; the green bars show the percentage planning to pursue a non-STEM career or giving no indication of a career plan.

Those who had technology or science-based childhood interests were more likely to plan for a science or tech career



At least 52% of respondents with an interest in technology or science as a child had a 10-year plan involving a STEM career. This rose to 76% for those with an affinity for tech.

The 33% of young women who lacked exposure to science or technology said they were more likely to go into other areas instead.

Having a mentor

Childhood interests were not the only early life factors affecting their career choices. Mentors also played an important role in their plans for the future. According to one of our participants, “[My mentor] has taught me a lot about being a woman out in the real world and has helped me choose what I want to do.”

We can see this by comparing their mentors (on the horizontal axis) to their 10-year plans.


More than half of women with no mentor or with an unrelated male mentor did not plan to pursue a STEM career. By contrast, women with an unrelated female mentor were the most likely to pursue STEM, with 68% of them indicating a STEM-related career plan.

It appears that women are most encouraged when they have another successful woman as an inspiration. It’s possible that male mentors are not as easy to relate to, and may have made them feel like they didn’t belong in the relevant fields.

Getting more women interested in STEM careers

There are a number of steps we can take to get more women in science and tech:

  1. Talk to young girls about science and tech to give them the opportunity to explore those subjects from a younger age.
  2. Encourage the women you know to become mentors for other women and girls who are just starting out on their career paths. If you’re a woman in science or tech, consider becoming a mentor yourself.
  3. Establish a mentorship program within your organization to empower female employees in science and tech.
  4. Implement more science and tech courses in early education to increase young girls’ exposure to these fields.

We can change the future if we work together.

This has just been the start of our exploration into the experiences of women in science, technology, engineering, or maths. Keep an eye out for more posts as we look at other influences affecting women’s careers.


Andrea Lewis, Sabah Rahman, Raiya Al-Ansari



An interview with Dr Daria Kuss

Dr Daria Kuss is a Chartered Psychologist, a Senior Lecturer in Psychology, and a member of the Psychology Division and International Gaming Research Unit at Nottingham Trent University. She earned Master’s degrees in Cognitive and Clinical Neuroscience and in Media Culture, and a PhD in Psychology. She has published prolifically in peer-reviewed journals and books; her publications include 30 peer-reviewed journal articles, numerous book chapters, two authored books and over 30 international conference presentations. In 2015, Daria was found to be among the top 10 publishing academics at Nottingham Trent University, and she won the International Journal of Environmental Research and Public Health Best Paper Award 2015 for her research on online social networking. Her previous experience working with clients suffering from behavioural addictions and other mental health problems in Germany allowed her to foster her interest and skills in psychotherapy and clinical psychology.

We were lucky enough to have the opportunity to ask Daria a few questions and gather an insight into her research:


What first drew you to researching the relationship between psychology and the internet?

“I wanted to understand what motivates gamers to spend so many hours gaming “

DK: I started off researching gaming. Initially, I wanted to understand what motivates gamers to spend so many hours gaming. I had never fully understood how people can be so involved, spending many hours every day playing games online. It was about understanding their motivation, talking to gamers to really see the drive behind it. In one of my early studies, one player expressed that it was to numb themselves and forget about their real-life problems. That really fascinated me. I wanted to understand the psychology behind their motives. Though gamers use it as an escape, it can actually lead to future problems.

From there I began researching different forms of internet use and the ways they can become excessive. I looked into cultural differences in internet and gaming use and how social networking has evolved over the years, and in more recent research I’ve been looking into mobile phone use. It’s quite interesting to see how some vulnerable people use mobile phones excessively, as technology is now used in so many ways. If you go out onto the street you will see many people so engrossed in their phones that they don’t look up at what’s around them. I think technology has changed society tremendously, the ways in which we relate to one another have changed, and it really interests me to see where this change is going to take us in the future.


Gaming addiction is a relatively new phenomenon. How does online gaming addiction differ from offline addictions?

" Recent studies have actually shown that people who game excessively have similar problems as those who are experiencing substance-related addictions such as alcohol or cannabis addiction."

DK: It’s a very good question, one that we as researchers have been asking ourselves. I think the major difference between them is that when an individual is gaming excessively, those gaming behaviours don’t directly impact their neurological systems. The impact is indirect; it might be quite similar to substance-related addictions, but it is not direct.

Research shows that people who are gaming excessively have activations in brain regions traditionally associated with substance-related addictions; there seems to be a crossover. Recent studies have shown that people who game excessively have similar problems to those experiencing substance-related addictions, such as alcohol or cannabis addiction. It is important to note that the American Psychiatric Association, which publishes the Diagnostic and Statistical Manual, has now included behavioural addictions in the addiction classification, whereas previously, when we talked about addiction, we referred only to substance-related addictions.

I also recall the first time I submitted a paper on gaming addiction, a long time ago, to one of the behavioural addiction journals. I was told by the editor that gaming addiction was not a behavioural addiction because it was a behaviour, not a substance that people can be addicted to. This is paradoxical given it was a behavioural addiction journal. Ten years later, the research landscape has dramatically changed. We now have a lot of research on gaming as well as on other behavioural addictions, such as work addiction, sex addiction and shopping addiction, suggesting people have become aware of the potential problems there.


Having knowledge of the links between addictive gaming, substance abuse and psychosocial behaviour, do you feel enough is being done by the government to combat this problem?

Dr Daria Kuss delivering the plenary talk at the International Congress of Technology Addiction

DK: A lot of research has been conducted across Europe, but when I compare the UK to Germany with regard to treatment, what I find is that things in the UK are moving very slowly, both in developing therapy approaches for those who are gaming excessively and in research in those areas. I used to work in an outpatient clinic for gaming addiction in Germany, the first in Europe to specialise in that type of addiction, so I saw first-hand how rapidly things have progressed there.

In the UK we are moving at a slower pace. We do, however, have a number of people working in this area, such as psychotherapists. For example, a colleague of mine in London is keen to make sure awareness is raised and that those who need help are provided with it. So, slowly but surely, I think we are going in the right direction, but a lot of work still needs to be done, particularly in diagnosing the problem and ensuring funding is available for relevant treatment.

A study you recently carried out looked at excessive mobile phone use and its association with potentially harmful and/or disturbing behaviours, known as problematic mobile phone use (PMPU). On the whole, it argued that the evidence supporting PMPU as an addictive behaviour is scarce. Do you think that, as time and technology advance, this will become a recognised and established addiction in society?

“whilst lecturing Psychology classes I see that my students are constantly engaged with their phones and laptops.”

DK: We should really look at our everyday lives, as these show us our reliance on our mobile phones. I see it on a regular basis: whilst lecturing psychology classes, I see that my students are constantly engaged with their phones and laptops. Regularly using mobile phones may not necessarily lead to addiction, but it can certainly lead to changes in the way we interact and communicate with each other. It also affects the way in which young people’s minds develop, particularly their social development. I think that if we continue using technology to such an extent, immense changes will occur, especially in terms of human interaction. How we engage with our communities and environment will change. It’s going to be interesting to see the direction mobile phone use will take us.

As the internet and social media evolve, so do the channels for harming others, such as trolling and phishing. What are your thoughts on these channels? And do you feel they are simply another part of society?

DK: To a certain extent I would probably say yes. In the case of social media, what we now find is that people are engaging through the medium of technology rather than face to face. This takes away some real-life cues, such as facial expressions, and is a likely cause of certain online behaviours, because of the ability to distance oneself online (more than is possible offline). This distance creates a technological boundary that makes our communication less personal. It is one of the reasons why we see more trolling and bullying online: people are able to act out and behave differently on the internet. This is very problematic, and therefore I advocate going back to our real-life communities and real-life relationships to counteract this current state.

Over the last 10 years technology has developed rapidly, what are your thoughts on the effects this may have on the relationships between generations?

“You will see children as young as 3 playing and really knowing the technology. This can lead to potential rewiring of the brain; we have never had such developments in the past.”

DK: What you will find nowadays is that young parents have been raised with technology themselves, and now they are raising their children with it too. Essentially, technology has become part of their lives. You will see children as young as three playing on their iPads and gadgets and really knowing how to use modern technology. This can potentially lead to rewiring of the brain. We have never had such developments in the past.

However, if we look at the older generations who were not raised with technology – there might be a technological divide between them and their children, in such a way that their children are technologically savvy, whereas the parents may struggle and learning to use technology is a slower and less natural process. But what we are also finding is that many middle-aged people are now starting to use social networking sites such as Facebook. So the age group of 50+ years have started exploring the use of social media more. The older generation is beginning to catch up with those earlier age groups who were exposed to technology from a young age.

Another interesting area of development is the so-called ‘silver surfers’: individuals of an older age discovering the internet and technology for themselves. In certain respects I believe this has a beneficial effect on their lives, as they can connect (decreasing isolation) and engage with their communities in a mediated way. It is very important to see both the advantages and disadvantages of technology use across generations. I am a strong advocate of technology use; however, I’m aware that for susceptible and vulnerable individuals, excessive technology use may lead to problems.

Can you tell us more about your future area of research?

Dr Daria Kuss discussing her research

DK: I’m currently working on a number of research projects. One, in particular, is about mobile phone use and the notifications we receive on a daily basis on our phones, and how these notifications impact on our mood and possibly excessive use. This is a very interesting study we are currently conducting with the University of Kent and Auckland University of Technology in New Zealand. I am also working on a couple of cross-cultural projects on internet use across Europe, which we are currently collecting data on. I am also planning another project on cyberstalking and interpersonal violence, to understand the potential concerns when we look at the new governmental policies and development in regards to coercive control. I am also collaborating with Jo’s Cervical Cancer Trust to assess the benefits of using online forums for individuals affected by cervical cancer and cervical abnormalities. As you can see, I would like to know more about how the Internet and mediated technologies can facilitate both positive as well as potentially detrimental behaviours.

Bearing all this in mind I think it is extremely important to recognise how important and beneficial technology can be, so not only focusing on the negative aspects but really understanding and acknowledging the positive uses of technology.