Why designers have arrived in corporate boardrooms

Design is now being lauded as a much-needed mindset for business leaders – those seeking a customer-centred approach to innovation, reimagining operations and rethinking supply chains and financial models.
(Shutterstock)

Angele Beausoleil, University of Toronto

Design is heading to a corporate boardroom near you.

Its form is not a chair, handbag or technology. It is human. This new type of designer is equally comfortable in a navy suit or black turtleneck. Fuelled by top-selling business books and management consultant reports, this latest design movement is all about customer-tailored companies thriving in today’s uncertain economic and political climate.

Over the past 15 years we have seen exponential growth in new design-related jobs — from user-interface (UI) and user-experience (UX) designers to service designers, customer-experience designers, business designers and chief design officers. These and other design roles were highlighted in Fast Company’s 2016 article “The Most Important Design Jobs Of The Future.” More recently, design jobs have been popping up in unexpected places: designers are now inside banks, accounting firms, telecommunications companies and manufacturers.

What’s driving this design renaissance?

It is a combination of influence, timing and proof of success.

Business, design and management books, articles and reports.
Angele Beausoleil

Early influences can be attributed to a series of published works over the past decade, particularly those authored by big thinkers like Roger Martin, design consultancy leaders like Tim Brown and design tech executives such as John Maeda. They, along with small academic and industry communities, have long connected design to business processes, operations and strategies.

The proof was collected over many years and finally published in 2013 by the Design Management Institute (DMI). Its Value of Design report aimed to nudge capital markets to invest in design-infused companies, which were surpassing traditional firms with an average 220 per cent return on share price value.

SAP is one example of a design-infused company. The German computer software company has integrated design across their global enterprise — from research and development studios to product management and strategy. Their chief design officers receive extensive investment in growing their teams and offering design education for their employees.

The Design Management Institute report was the first to offer proof that a well-designed product, service or experience sells itself.

DMI Design Value Index.

Top business magazines followed. Forbes supported DMI’s findings in its 2014 article “What Is Behind the Rise of the Chief Design Officer,” explaining why design is moving into corporate boardrooms.

In 2017, the Harvard Business Review provided more reasons for the need for design leadership, with an article on how CEOs were admitting to costly over-engineered processes, products and business models, resulting in loss of customers, jobs and brand loyalty.

This October, global management consultancy McKinsey published its report “The Business Value of Design,” making the case that integrating design across an entire company will have a positive impact on employees, customers and the bottom line. The report, authored by trusted management consultants, is creating real design buzz in boardrooms.

Design acquisitions by top management consultancies.
Angele Beausoleil

The world’s top management consulting firms have also been actively acquiring design agencies and creating their own design leadership practices, while major corporations such as 3M, PepsiCo, Philips and Ford hire chief design officers; consultancies even offer design-thinking training for their multinational clients.

Design has officially emerged beyond products and services of the kind offered by companies like Apple and Starbucks, to experiences offered by tech giants like Amazon and Uber, and to strategies like those on offer from Designed in China.

Design and its cousin, design thinking, are now being lauded as a much-needed mindset for leaders – those seeking a customer-centred approach to business innovation, reimagining operations and rethinking supply chains and financial models. Why?

It’s because design is proving to be extremely effective as a creative problem-solving approach for business and an antidote to the over-engineering mistakes of the past.

Packaged goods corporations are seeking to understand how Spanish clothing brand Zara is able to get street fashion trends into the hands of retail customers in record time. Manufacturers are watching Amazon’s bold and encroaching actions in redefining supply chains. Financial institutions are following Apple and Google as they compete with tech companies for mobile payment transactions.

Scotiabank Digital Factory.
Courtesy of Scotiabank

In Canada, designers are finding their way to corner offices. IBM is expanding its design leadership studios, Scotiabank is growing the design teams in its “Digital Factory” and Deloitte is establishing its Greenhouse design advisory group as a customer-insight department.

Make no mistake — these are not typical designers; they are armed with graduate degrees in business, strategy and design.

In early 2018, the University of Toronto’s Rotman School of Management created a new professorship in business design (the first of its kind in the world), to teach and research the next generation of design-leading MBAs. These graduates are uniquely positioned to make a business case for design’s return on investment while also integrating customer needs.

Rotman MBAs with Business Design Major.
Courtesy of Canadian Business and Rotman School of Management

To better understand customers, companies are starting to rethink their processes and management teams. Designers, not traditional executives, are now heralded as the ones who will guide global corporations and local governments in offering services, experiences and strategies that delight both customers and shareholders.

Interestingly, Canadian design educator Robert Peters once stated:

“Design creates culture. Culture shapes values. Values determine the future.”

It appears companies are finally responding.

Angele Beausoleil, Assistant Professor Teaching Stream, Business Design and Innovation, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Explainer: what is experience design?

Anyone who wishes to innovate can design.

Faye Miller, Queensland University of Technology

“It’s not just a _____, it’s an experience.”

Substitute the blank space above with just about anything these days (car, meal, city, website, course, concert, charity, therapy), and you get the unofficial catch cry of the early 21st century.

Whatever you have to promote to the world – among the endless options in category X competing for attention – is not desirable unless it is an “experience”.

But what exactly does that mean? And what does it mean for the two groups of people who potentially collaborate to provide it – the creative types and the business types?

Experience and design

Experiences are ultimately about human perceptions, memories and impressions. Psychologically speaking, how a person experiences an event or phenomenon is an emotional and rational response to an outside stimulus.

Once lived, an experience can be stored as a memory within a person’s mind – and we all know we like to keep pleasant memories that “stick” for the right reasons.

Design usually falls within the domain of the creative types, but “design thinking” is becoming an accepted and popular practice for just about anyone. As Tim Brown, CEO of global design firm IDEO, put it:

Design thinking is a human-centred approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.

That means anyone who wishes to innovate can design – that is, visualise, map, conceptualise, sketch – solutions based on gathering knowledge of how people behave in terms of technological use or non-use, and how this knowledge can advance the aims of an endeavour.

When we talk about designing experiences, it is important to first understand how certain types of people experience something in context, and then design or facilitate experiences that make a positive difference for people.

Exponomy

In the US and Europe, the so-called “experience economy” (also known as the exponomy) is on the rise as a potentially transformative concept for businesses, consumers and society in general. The idea can be traced back to 1998, when B. Joseph Pine II and James H. Gilmore of Harvard Business School introduced a new way of thinking about commodities as more than goods and services.

Commodities were, the pair argued, more about human experiences that are highly memorable and emotionally engaging enough to sustain long-term value and relationships. Such experiences were powerful enough to change the ways in which people lived and behaved. In short, Pine and Gilmore believed people were willing to pay more for the commodity with the X-Factor.



This suggests companies need to pay much closer attention to the design of experiences co-created by their customers. Businesses need to provide opportunities for customers to participate in experience design through user research. Similarly, a collective mindset needs to be cultivated that allows businesses to realise the interrelatedness of different companies and industries.

This would help them design for experiences that are collaborative across different sectors. For example, a major fashion event would collaborate with the entertainment, media and tourism/hospitality industries to provide an audience with a lasting impression through a multi-sensory experience that is both enjoyable and profitable.

In recent years, “experience”-related positions such as User Experience (UX) Designer/Researcher and Chief Experience Officer (CXO) have become increasingly visible in organisations of all types.

While some creative positions have a narrow focus on designing digital experiences for website users, others at the senior executive level, such as CXO, aim to plan and maintain a more holistic user-business-technology experience, including “blended” experiences online and offline.

Some experience designers work as freelance consultants, either independently or as part of a design firm, for clients in a range of sectors.

Experience design globally and in Australia



In practice, experience design has grown to include the personalisation of experiences through a better understanding of different types of human beings, combined with unique, innovative ideas developed by company leaders. Perhaps the best-known example of a globally influential and transformative experience-based commodity is Apple.

Apple changed the way people experienced technology, with simple interfaces, interactive gestures and memorable branding permeating the products through to their digital and in-store service. This design combined the user needs and behaviour that Apple designers perceived with their own creativity. As Steve Jobs himself put it:

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesise new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people.

A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

While the tech industry in the US seems to have embraced the experience economy (with US-based innovation firm frog this year declaring its “coming of age”), the concept has impacted upon many types of businesses and sectors across the globe.

Australian businesses are now starting to acknowledge the emergence of the experience economy with sectors such as (but not limited to) arts and entertainment, tourism, and higher education re-thinking their roles as key players.

Recently, the Australian independent music industry was explored conceptually for the first time using an experience-economy lens, acknowledging the complex relationships and interactions between music business entrepreneurs, musicians, music fans, and the digital and live music experiences.

While these elements usually work in isolation, the exponomy – and the experience designers and CXOs who implement the concept – can unite them on common ground. Furthermore, the exponomy highlights the fusing of industries towards increasing value for all stakeholders involved in a given venture.

A good example of this is the recent collaboration between Australian musicians and wine tourism campaigns, featuring a classic Nick Cave soundtrack for the Be Consumed at Barossa Valley cinematic multi-sensory advert.

The ad won international acclaim as best tourism ad at Cannes and has succeeded in its goal of attracting more tourists to Barossa Valley. This shows that real-life experiences can begin with audio-visual tempters designed to engage imaginations on a personal level.

In the same way, higher education in Australia as a major service provider is currently reframing its understanding of how to design for diverse experiences for students, teachers, researchers and research users.

Designing experiences that acknowledge, enthuse, inspire and potentially positively transform the whole person – not just the customer, employee, student or statistic – appears vital to sustaining long-term partnerships.

Faye Miller, PhD Candidate, Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Google at 20: how a search engine became a literal extension of our mind




Benjamin Curtis, Nottingham Trent University

We are losing our minds to Google. After 20 years, Google’s products have become integrated into our everyday lives, altering the very structure of our cognitive architecture, and our minds have expanded out into cyberspace as a consequence. This is not science fiction, but an implication of what’s known as the “extended mind thesis”, a widely accepted view in philosophy, psychology and neuroscience.

Make no mistake about it, this is a seismic shift in human psychology, probably the biggest we have ever had to cope with, and one that is occurring with breathtaking rapidity – Google, after all, is just 20 years old this month. But although this shift has some good consequences, there are some deeply troubling issues we urgently need to address.

Much of my research spans issues to do with personal identity, mind, neuroscience and ethics. And in my view, as we gobble up Google’s AI-driven “personalised” features, we cede ever more of our personal cognitive space to Google, eroding both mental privacy and the ability to think freely. What’s more, evidence is starting to emerge that there may be a link between technology use and mental health problems. In other words, it is not clear that our minds can take the strain of the virtual stretch. Perhaps we are even close to the snapping point.

Where does the mind stop and the rest of the world begin?

This was the question posed in 1998 (coincidentally the same year Google was launched) by two philosophers and cognitive scientists, Andy Clark and David Chalmers, in a now famous journal article, The Extended Mind. Before their work, the standard answer among scientists was to say that the mind stopped at the boundaries of skin and skull (roughly, the boundaries of the brain and nervous system).

But Clark and Chalmers proposed a more radical answer. They argued that when we integrate things from the external environment into our thinking processes, those external things play the same cognitive role as our brains do. As a result, they are just as much a part of our minds as neurons and synapses. Clark and Chalmers’ argument produced debate, but many other experts on the mind have since agreed.

Our minds are linked with Google

Clark and Chalmers were writing before the advent of smartphones and 4G internet, and their illustrative examples were somewhat fanciful. They involved, for instance, a man who integrated a notebook into his everyday life that served as an external memory. But as recent work has made clear, the extended mind thesis bears directly on our obsession with smartphones and other devices connected to the web.

Growing numbers of us are now locked into our smartphones from morning until night. Using Google’s services (search engine, calendar, maps, documents, photo assistant and so on) has become second nature. Our cognitive integration with Google is a reality. Our minds literally lie partly on Google’s servers.



But does this matter? It does, for two major reasons.

First, Google is not a mere passive cognitive tool. Google’s latest upgrades, powered by AI and machine learning, are all about suggestions. Google Maps not only tells us how to get where we want to go (on foot, by car or by public transport), but now gives us personalised location suggestions that it thinks will interest us.

Google Assistant, always just two words away (“Hey Google”), now not only provides us with quick information, but can even book appointments for us and make restaurant reservations.

Gmail now makes suggestions about what we want to type. And Google News now pushes stories that it thinks are relevant to us, personally. But all of this removes the very need to think and make decisions for ourselves. Google – again I stress, literally – fills gaps in our cognitive processes, and so fills gaps in our minds. And so mental privacy and the ability to think freely are both eroded.
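To make that “gap-filling” concrete, here is a minimal sketch of how a suggestion feature can pre-empt a decision: it proposes your next action as the most frequent follow-up in your own history. This is an illustrative toy, not Google’s actual method, and the action names are invented.

```python
# A toy "suggestion engine": predict the next action as the most frequent
# follower of the current one in the user's history. Purely illustrative.
from collections import Counter, defaultdict

history = ["wake", "check_mail", "map_commute", "wake", "check_mail",
           "map_commute", "wake", "check_news"]

followers = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    followers[current][nxt] += 1

def suggest(action):
    # The system, not the user, proposes the next step.
    return followers[action].most_common(1)[0][0]

print(suggest("wake"))  # -> 'check_mail': the decision is made before you think
```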

Addiction or integration?

Second, it doesn’t seem to be good for our minds to be spread across the internet. A growing cause for concern is so-called “smartphone addiction”, no longer an uncommon problem. According to recent reports, the average UK smartphone user checks their phone every 12 minutes. There is a whole host of bad psychological effects this could have that we are only just beginning to appreciate, depression and anxiety being the two most prominent.

But the word “addiction” here, in my view, is just another word for the integration I mentioned above. The reason why so many of us find it so hard to put our smartphones down, it seems to me, is that we have integrated their use into our everyday cognitive processes. We literally think by using them, and so it is no wonder it is hard to stop using them. To have one’s smartphone suddenly taken away is akin to having a lobotomy. Instead, to break the addiction/integration and regain our mental health, we must learn to think differently, and to reclaim our minds.

Benjamin Curtis, Lecturer in Philosophy and Ethics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

4 ways your Google searches and social media affect your opportunities in life

Lorna McGregor, University of Essex; Daragh Murray, University of Essex, and Vivian Ng, University of Essex

Whether or not you realise or consent to it, big data can affect you and how you live your life. The data we create when using social media, browsing the internet and wearing fitness trackers are all collected, categorised and used by businesses and the state to create profiles of us. These profiles are then used to target advertisements for products and services to those most likely to buy them, or to inform government decisions.

Big data enable states and companies to access, combine and analyse our information and build revealing – but incomplete and potentially inaccurate – profiles of our lives. They do so by identifying correlations and patterns in data about us, and people with similar profiles to us, to make predictions about what we might do.

But just because big data analytics are based on algorithms and statistics does not mean that they are accurate, neutral or inherently objective. And while big data may provide insights about group behaviour, these are not necessarily a reliable way to determine individual behaviour. In fact, these methods can open the door to discrimination and threaten people’s human rights – they could even be working against you. Here are four examples where big data analytics can lead to injustice.
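As an illustration of that “people with similar profiles” logic, here is a minimal sketch using a k-nearest-neighbours classifier. The features, values and purchase labels are all invented; real profiling systems are far more elaborate, but the principle is the same: you are scored by what your nearest data neighbours did.

```python
# Profile-based prediction on toy data: a new user inherits the behaviour
# of the k most similar known profiles. All numbers are hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [hours online per day, daily steps (thousands), age]
profiles = [[6, 3, 22], [7, 2, 25], [1, 10, 47], [2, 9, 51], [5, 4, 30]]
bought_product = [1, 1, 0, 0, 1]  # what "similar" people did

model = KNeighborsClassifier(n_neighbors=3).fit(profiles, bought_product)

# The prediction is a correlation over a group, not a fact about you.
print(model.predict([[6, 4, 24]]))  # -> [1]: targeted with the ad
```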

1. Calculating credit scores

Big data can be used to make decisions about credit eligibility, affecting whether you are granted a mortgage, or how high your car insurance premiums should be. These decisions may be informed by your social media posts and data from other apps, which are taken to indicate your level of risk or reliability.

But data such as your education background or where you live may not be relevant or reliable for such assessments. This kind of data can act as a proxy for race or socioeconomic status, and using it to make decisions about credit risk could result in discrimination.
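A small sketch can make the proxy problem concrete. In the toy model below, the protected attribute is never given to the model, yet a correlated feature – an invented postcode group – reproduces the historical disparity anyway.

```python
# Proxy discrimination on invented data: the model never sees the protected
# attribute, but learns to penalise a postcode that correlates with it.
from sklearn.linear_model import LogisticRegression

# Rows: [income (thousands), postcode_group]; group 1 stands in for a
# historically disadvantaged area in this toy example.
applicants = [[40, 0], [45, 0], [50, 0], [38, 1], [44, 1], [52, 1]]
approved   = [1,       1,       1,       0,       0,       1]  # biased history

model = LogisticRegression().fit(applicants, approved)

# Two applicants with identical income, differing only by postcode:
print(model.predict_proba([[45, 0]])[0][1])  # higher approval probability
print(model.predict_proba([[45, 1]])[0][1])  # lower, purely via the proxy
```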

2. Job searches

Big data can be used to determine who sees a job advertisement or gets shortlisted for an interview. Job advertisements can be targeted at particular age groups, such as 25- to 36-year-olds, which excludes younger and older workers from even seeing certain job postings and presents a risk of age discrimination.

Seek, but ye shall not always find.
Shutterstock

Automation is also used to make filtering, sorting and ranking candidates more efficient. But this screening process may exclude people on the basis of indicators such as the distance of their commute. Employers might suppose that those with a longer commute are less likely to remain in a job long-term, but this can actually discriminate against people living further from the city centre due to the location of affordable housing.
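As a sketch of how such a screening rule plays out, the hypothetical filter below drops every candidate beyond an arbitrary commute threshold before a human ever sees them; the names, scores and threshold are invented.

```python
# A seemingly neutral screening rule that systematically excludes
# candidates from more distant (often more affordable) areas.
candidates = [
    {"name": "A", "skill_score": 92, "commute_km": 35},
    {"name": "B", "skill_score": 88, "commute_km": 5},
    {"name": "C", "skill_score": 95, "commute_km": 40},
]

MAX_COMMUTE_KM = 20  # hypothetical cut-off

shortlist = [c for c in candidates if c["commute_km"] <= MAX_COMMUTE_KM]
print([c["name"] for c in shortlist])  # -> ['B']: the strongest candidates
# never reach a human reviewer, despite higher skill scores.
```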

3. Parole and bail decisions

In the US and the UK, big data risk assessment models are used to help officials decide whether people are granted parole or bail, or referred to rehabilitation programmes. They can also be used to assess how much of a risk an offender presents to society, which is one factor a judge might consider when deciding the length of a sentence.

It’s not clear exactly what data is used to help make these assessments, but as the move toward digital policing gathers pace, it’s increasingly likely that these programmes will incorporate open source information such as social media activity – if they don’t already.

These assessments may not just look at a person’s profile, but also how their profile compares to others’. Some police forces have historically over-policed certain minority communities, leading to a disproportionate number of reported criminal incidents. If this data is fed into an algorithm, it will distort the risk assessment models and result in discrimination which directly affects a person’s right to liberty.
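A toy simulation, with invented numbers, shows how this feedback loop locks in: patrols are allocated by recorded incidents, but recording depends on patrol presence, so an initial disparity in patrols persists in the data even though the two areas are identical underneath.

```python
# Feedback loop on invented figures: recorded crime reflects patrol
# intensity, and next year's patrols follow the recorded (not true) rates.
true_crime_rate = {"area_1": 10, "area_2": 10}   # identical underlying rates
patrols         = {"area_1": 2,  "area_2": 1}    # historical over-policing

for year in range(5):
    # More patrols -> more incidents observed and recorded.
    recorded = {a: true_crime_rate[a] * patrols[a] for a in patrols}
    # Reallocate three patrol units in proportion to recorded figures.
    total = sum(recorded.values())
    patrols = {a: max(1, round(3 * recorded[a] / total)) for a in recorded}
    print(year, recorded)  # area_1 stays "twice as criminal" every year
```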

4. Vetting visa applications

Last year, the United States’ Immigration and Customs Enforcement (ICE) agency announced that it wanted to introduce an automated “extreme visa vetting” programme. It would automatically and continuously scan social media accounts, to assess whether applicants will make a “positive contribution” to the United States, and whether any national security issues may arise.

As well as presenting risks to freedom of thought, opinion, expression and association, there were significant risks that this programme would discriminate against people of certain nationalities or religions. Commentators characterised it as a “Muslim ban by algorithm”.

The programme was recently withdrawn, reportedly on the basis that “there was no ‘out-of-the-box’ software that could deliver the quality of monitoring the agency wanted”. But including such goals in procurement documents can create bad incentives for the tech industry to develop programmes that are discriminatory-by-design.

There’s no question that big data analytics works in ways that can affect individuals’ opportunities in life. But the lack of transparency about how big data are collected, used and shared makes it difficult for people to know what information is used, how, and when. Big data analytics are simply too complicated for individuals to be able to protect their data from inappropriate use. Instead, states and companies must make – and follow – regulations to ensure that their use of big data doesn’t lead to discrimination.

Lorna McGregor, Director, Human Rights Centre, PI and Co-Director, ESRC Human Rights, Big Data and Technology Large Grant, University of Essex; Daragh Murray, Lecturer in International Human Rights Law at Essex Law School, University of Essex, and Vivian Ng, Senior Researcher in Human Rights, University of Essex

This article was originally published on The Conversation. Read the original article.

Job: UX Researcher Trainee in London

We’re hiring a UX Researcher Trainee to join our London team.

This UX Researcher Trainee position offers excellent opportunities for career progression and growth. The trainee role will last approximately three to six months and will function like an apprenticeship – you will learn the rules of the trade and assist senior researchers with projects.

As a successful UX Researcher Trainee your next step is Junior Researcher, where you will:
– Lead research projects and support team members on other projects
– Meet and liaise with clients to negotiate and agree research projects
– Assist in formulating a plan/proposal and presenting it to the client or senior management
– Write and manage the distribution of surveys and questionnaires
– Assist senior management on various tasks
– Manage data and input data into databases

As a UX Researcher Trainee, you will be:
– An enthusiastic, hard-working and diligent individual
– An excellent verbal and written communicator
– A business extrovert, comfortable dealing with individuals at all corporate levels, including board level
– Comfortable working in a high-pressure, fast-paced environment where multiple projects and competing demands are the norm
– A team player, detail-oriented and a quick learner

Any experience in the tech, marketing or multimedia industries is an added bonus.

Recent graduates are welcome but must demonstrate relevant coursework (e.g., thesis work); those with more experience who are attempting a career change are also encouraged to apply.

Psychology and social science degree holders are strongly encouraged to apply – those with Research Methods backgrounds in particular. More technical or design experience is also welcome, but please mention your interest in research and your skills for research fieldwork.

Salary is 60 per day, and the training lasts for a period of three to six months.
The schedule may vary from 30 to 40 hours per week.

Send CVs with introduction letter to info@adhocglobal.com

Digital Transformation is levelling the playing field

It’s a buzzword that’s been around for some time now. Digital transformation is now considered a necessity and could well be the key to reshaping the way corporations work. Transformation is challenging, but it has to be tackled now to avoid being left behind. And like any challenge, it brings opportunity – a chance to reshuffle the deck and come out with a stronger hand for the future.

What are the three areas leaders need to consider to have a winning hand?

Any industry leaving analogue: “It’s not you, it’s me”

Let’s face it: technology is now inescapably present in our daily lives. Even the most “offline” professionals can do things better, faster and cheaper with the help of innovation – from the local pub learning it can attract more customers through social media, to your doctor relying on 3D imagery to help diagnose your knee injury. It is hard to think of an industry that is totally oblivious to new advancements in technology.

Just think about all the technological devices you rely on to do your day-to-day work.


Your customers won’t wait for you to improve

The power is shifting to customers. They have more options. There are few or no switching costs or penalties these days, so your customer is never really only yours. Customers also research products and look for digital social cues to establish emotional connections with a brand long before purchasing.

With so many channels (especially digital ones), a profound qualitative understanding of your customer base is no longer a nice-to-have but a necessity. At the touch of a finger, customers can compare your product to one from the start-up around the block or a multinational across the world.

Your customers are embracing a global playing field, so why not accompany them on the journey?  

Your employees are embracing new ways of working

For many companies, managing how technology influences employee productivity is the scariest challenge.

How can companies align internal operations to save costs and be efficient? And how can they do so in a landscape that is ever shifting and evolving? Companies strive to incorporate frugal principles into their day-to-day operations.

If the aim is to improve the user experience and customer experience for your audience, then internal experiences must improve in step.

Guess what? There’s good news if you are ready to redesign your internal experience. The key is knowing that all change begins internally. If your team lacks the skills or momentum, external help from trusted experts can provide a flexible approach, letting you begin a UX change programme when you need it most. Your team will learn by doing alongside experts, rather than being stuck in planning mode for the next decade.


Digital transformation is coming your way, and it is here to stay. Any transformation will have internal and external impacts. See it as a magnificent opportunity to refine or reshape your services and ensure you stay ahead of the curve.

Putting customers first is the right step forward and having flexible internal processes is key for such a future.

Think of your employee experience as the hand you’re dealt initially; you can change the cards and improve your hand by striving to deliver the best user experience possible.

The psychology of believing in free will

Peter Gooding, University of Essex

From coffee table books and social media to popular science lectures, it seems it has become increasingly fashionable for neuroscientists, philosophers and other commentators to tell anyone who will listen that free will is a myth.

But why is this debate relevant to anyone but a philosophy student keen to impress a potential date? Actually, a growing body of evidence from psychology suggests belief in free will matters enormously for our behaviour. It is also becoming clear that how we talk about free will affects whether we believe in it.

In the lab, using deterministic arguments to undermine people’s belief in free will has led to a number of negative outcomes including increased cheating and aggression. It has also been linked to a reduction in helping behaviours and lowered feelings of gratitude.

A recent study showed that it is possible to diminish people’s belief in free will simply by making them read a science article suggesting that everything is predetermined. This made the participants less willing to donate to charitable causes (compared to a control group). This was only observed in non-religious participants, however.
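For readers curious how such an experiment is analysed, here is a minimal sketch of a between-groups comparison. The donation-willingness scores below are invented placeholders, not the study’s data.

```python
# A standard independent-samples t-test comparing an experimental group
# (read an anti-free-will article) with a control group. Toy numbers only.
from scipy import stats

anti_free_will_group = [2, 3, 2, 4, 3, 2, 3, 2]  # willingness to donate, 1-7
control_group        = [4, 5, 4, 3, 5, 4, 4, 5]

t, p = stats.ttest_ind(anti_free_will_group, control_group)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value suggests the manipulation, not chance, shifted donations.
```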

Scientists argue that these outcomes may be the result of losing the sense of agency and control that comes with believing that we are free to make choices. Similarly, we may also feel less moral responsibility for the outcomes of our actions.

It may therefore be unsurprising that some studies have shown that people who believe in free will are more likely to have positive life outcomes – such as happiness, academic success and better work performance. However, the relationship between belief in free will and life outcomes may be complex, so this association is still debated.

Disturbing dualism

Language and definitions seem linked to whether we believe in free will. Those who deny the existence of free will typically refer to a philosophical definition of it: the ability of our consciousness (or soul) to make any decision it chooses, regardless of brain processes or preceding causal events. To undermine it, they often couple this definition with the “determinism” of classical physics. Newton’s laws simply don’t allow for free will to exist – once a physical system is set in motion, it follows a completely predictable path.

According to fundamental physics, everything that happens in the universe is encoded in its initial conditions. From the Big Bang onward, mechanical cause-and-effect interactions of atoms formed stars, planets, life and eventually your DNA and your brain. It was inevitable. Your physical brain was therefore always destined to process information exactly as it does, so every decision you are ever going to make is predetermined. You (your consciousness) are a mere bystander – your brain is in charge of you. Therefore you have no free will. This argument is known as determinism.
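The determinist picture can be expressed in a few lines of code: a minimal Newtonian simulation, re-run from identical initial conditions, always retraces exactly the same path.

```python
# Classical determinism in miniature: same initial conditions, same history.
def simulate(position, velocity, steps, dt=0.1, g=-9.81):
    path = []
    for _ in range(steps):
        velocity += g * dt          # cause ...
        position += velocity * dt   # ... and effect, nothing else
        path.append(round(position, 6))
    return path

run_1 = simulate(position=100.0, velocity=0.0, steps=50)
run_2 = simulate(position=100.0, velocity=0.0, steps=50)
print(run_1 == run_2)  # True: no room for a different "choice" of path
```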

Descartes’ view of mind and body: inputs are passed by the sensory organs to the epiphysis in the brain and from there to the immaterial spirit.
wikipedia

But this approach is absurdly dualistic, requiring people to see their consciousness as their true self and their brain as something separate. Despite being an accurate description of the philosophical definition of free will, this flies in the face of what ordinary people – and many scientists – actually believe.

In reality it seems that the functioning of our brain actually affects our consciousness. Most of us can recognise, without existential angst, that drinking alcohol, which impacts our physical brain, subsequently diminishes our capacity to make rational choices in a manner that our consciousness is powerless to simply override. In fact, we tend to be able to accept that our consciousness is the product of our physical brain, which removes dualism. It is not that our brains make decisions for us, rather we make our decisions with our brains.

Most people define free will simply as their capacity to make choices that fulfil their desires, free from constraints. This lay understanding of free will doesn’t really involve arguments about deterministic causation stretching back to the Big Bang.

But how could we learn about the arguments for and against the existence of free will without feeling threatened and having our moral judgement undermined? One way could be to re-express valid deterministic arguments in language that people actually use.

For example, when the determinist argues that “cause-and-effect interactions since the Big Bang fashioned the universe and your brain in a way that has made your every decision inevitable”, we could replace it with more familiar language: “your family inheritance and life experience made you the person you are by forming your brain and mind”.

In my view, both arguments are equally deterministic – “family inheritance” is another way of saying DNA while “life experiences” is a less challenging way of saying prior causal events. But, importantly, the latter allows for a greater feeling of freedom, potentially reducing any possible negative impacts on behaviour.

Quantum weirdness

Some even argue that the notion of scientific determinism is being challenged by the rise of quantum mechanics, which governs the micro world of atoms and particles. According to quantum mechanics, you cannot predict with certainty what route a particle will take to reach a target – even if you know all its initial conditions. All you can do is to calculate a probability, which implies that nature is a lot less predictable than we thought. In fact, it is only when you actually measure a particle’s path that it “picks” a specific trajectory – until then it can take several routes at once.
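By way of a loose analogy – not a real quantum simulation – the sketch below contrasts with the deterministic example earlier: identical set-ups, yet individual outcomes vary, and only the probabilities (invented here as 70/30) are stable.

```python
# Indeterminism in miniature: each "measurement" picks a route at random;
# only the long-run distribution is predictable. Probabilities are invented.
import random

def measure_path():
    return "route_A" if random.random() < 0.7 else "route_B"

outcomes = [measure_path() for _ in range(10_000)]
print(outcomes[:5])                               # varies run to run
print(outcomes.count("route_A") / len(outcomes))  # ... but stays near 0.7
```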

While quantum effects such as these tend to disappear on the scale of people and everyday objects, it has recently been shown that they may play a role in some biological processes, ranging from photosynthesis to bird navigation. So far we have no evidence that they play any role in the human brain – but, of course, that’s not to say they don’t.

People using a philosophical definition and classical physics may argue convincingly against the existence of free will. However, they may want to note that modern physics does not necessarily agree that free will is impossible.

Ultimately, whether free will exists or not may depend on your definition. If you wish to deny its existence, you should do so responsibly by first defining the concepts clearly. And be aware that this may affect your life a lot more than you think.

Peter Gooding, PhD Candidate of Psychology, University of Essex

This article was originally published on The Conversation. Read the original article.

Why technology puts human rights at risk

Birgit Schippers, Queen’s University Belfast

Movies such as 2001: A Space Odyssey, Blade Runner and Terminator brought rogue robots and computer systems to our cinema screens. But these days, such classic science fiction spectacles don’t seem so far removed from reality.

Increasingly, we live, work and play with computational technologies that are autonomous and intelligent. These systems include software and hardware with the capacity for independent reasoning and decision making. They work for us on the factory floor; they decide whether we can get a mortgage; they track and measure our activity and fitness levels; they clean our living room floors and cut our lawns.

Autonomous and intelligent systems have the potential to affect almost every aspect of our social, economic, political and private lives, including mundane everyday aspects. Much of this seems innocent, but there is reason for concern. Computational technologies impact on every human right, from the right to life to the right to privacy, freedom of expression to social and economic rights. So how can we defend human rights in a technological landscape increasingly shaped by robotics and artificial intelligence (AI)?

AI and human rights

First, there is a real fear that increased machine autonomy will undermine the status of humans. This fear is compounded by a lack of clarity over who will be held to account, whether in a legal or a moral sense, when intelligent machines do harm. But I’m not sure that the focus of our concern for human rights should really lie with rogue robots, as it seems to at present. Rather, we should worry about the human use of robots and artificial intelligence and their deployment in unjust and unequal political, military, economic and social contexts.

This worry is particularly pertinent with respect to lethal autonomous weapons systems (LAWS), often described as killer robots. As we move towards an AI arms race, human rights scholars and campaigners such as Christof Heyns, the former UN special rapporteur on extrajudicial, summary or arbitrary executions, fear that the use of LAWS will put autonomous robotic systems in charge of life and death decisions, with limited or no human control.




Read more:
Super-intelligence and eternal life: transhumanism’s faithful follow it blindly into a future for the elite


AI also revolutionises the link between warfare and surveillance practices. Groups such as the International Committee for Robot Arms Control (ICRAC) recently expressed their opposition to Google’s participation in Project Maven, a military program that uses machine learning to analyse drone surveillance footage, which can be used for extrajudicial killings. ICRAC appealed to Google to ensure that the data it collects on its users is never used for military purposes, joining protests by Google employees over the company’s involvement in the project. Google recently announced that it will not be renewing its contract.

In 2013, the extent of surveillance practices was highlighted by the Edward Snowden revelations. These taught us much about the threat to the right to privacy and the sharing of data between intelligence services, government agencies and private corporations. The recent controversy surrounding Cambridge Analytica’s harvesting of personal data via the use of social media platforms such as Facebook continues to cause serious apprehension, this time over manipulation and interference into democratic elections that damage the right to freedom of expression.




Read more:
Should we fear the rise of drone assassins? Two experts debate


Meanwhile, critical data analysts challenge discriminatory practices associated with what they call AI’s “white guy problem”. This is the concern that AI systems trained on existing data replicate existing racial and gender stereotypes that perpetuate discriminatory practices in areas such as policing, judicial decisions or employment.

AI can replicate and entrench stereotypes.
Ollyy/Shutterstock.com

Ambiguous bots

The potential threat of computational technologies to human rights and to physical, political and digital security was highlighted in a recently published study on The Malicious Use of Artificial Intelligence. The concerns expressed in this University of Cambridge report must be taken seriously. But how should we deal with these threats? Are human rights ready for the era of robotics and AI?

There are ongoing efforts to update existing human rights principles for this era. These include the UN Framing and Guiding Principles on Business and Human Rights, attempts to write a Magna Carta for the digital age and the Future of Life Institute’s Asilomar AI Principles, which identify guidelines for ethical research, adherence to values and a commitment to the longer-term beneficent development of AI.

These efforts are commendable but not sufficient. Governments and government agencies, political parties and private corporations, especially the leading tech companies, must commit to the ethical uses of AI. We also need effective and enforceable legislative control.

Whatever new measures we introduce, it is important to acknowledge that our lives are increasingly entangled with autonomous machines and intelligent systems. This entanglement enhances human well-being in areas such as medical research and treatment, in our transport system, in social care settings and in efforts to protect the environment.

But in other areas this entanglement throws up worrying prospects. Computational technologies are used to watch and track our actions and behaviours, trace our steps, our location, our health, our tastes and our friendships. These systems shape human behaviour and nudge us towards practices of self-surveillance that curtail our freedom and undermine the ideas and ideals of human rights.

And herein lies the crux: the capacity for dual use of computational technologies blurs the line between beneficent and malicious practices. What’s more, computational technologies are deeply implicated in the unequal power relationships between individual citizens, the state and its agencies, and private corporations. If unhinged from effective national and international systems of checks and balances, they pose a real and worrying threat to our human rights.

Birgit Schippers, Visiting Research Fellow, Senator George J Mitchell Institute for Global Peace, Security and Justice, Queen’s University Belfast

This article was originally published on The Conversation. Read the original article.

“More outreach, more movements, more education”

Background

The purpose of our longitudinal study is to develop ongoing insights into girls studying STEM and women pursuing STEM careers, in response to continuing statistics evidencing women’s underrepresentation in STEM, stereotypical environments and double standards.

Our 2016 survey of 163 women aged 15 to 46, representing 16 countries worldwide, focused on developing insights into the current experiences of girls studying STEM at college and university, using a mixed-methods approach. So far, data from the 2016 survey have revealed: significant links between early-childhood interests and future STEM career plans; the significant role of unrelated female role models in reducing attrition from STEM later in life; a drop in confidence levels in the second year of college; one of the highest attrition rates in computer science; and the impact of different subjects and professional confidence in relation to future plans.

This series analyses the impact of age and country of residence on confidence in getting a job in STEM.

The bar graph below shows participants’ ages on the horizontal axis and their average confidence scores for ‘getting a job’ on the vertical axis, on a scale of 1 to 5, with 5 the highest.

[Bar graph: average confidence in getting a STEM job, by participant age]

Results indicate that professional confidence peaks at a score of 5 in the first year of college, at age 16, and slowly declines from there, continuing into the first year of university at age 19, where it reaches a much lower 3.3. As our previous research identified, this lack of confidence may be correlated with the lack of female role models for young girls to ‘relate’ to, making the STEM environment a more ‘masculine’ place where girls struggle to ‘identify’ and ‘find a place’; professional confidence, in turn, ‘takes a hit’.
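For transparency about the mechanics, the averages behind a chart like this can be computed with a simple group-by; the responses below are invented stand-ins rather than our survey data.

```python
# Per-age mean confidence, as plotted on the chart's vertical axis.
import pandas as pd

responses = pd.DataFrame({
    "age":        [16, 16, 19, 19, 28, 32, 46],
    "confidence": [5,  5,  3,  4,  3,  5,  2],  # 1-5 scale, 5 highest
})

print(responses.groupby("age")["confidence"].mean())
```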

 

“Actively show stories of women leading successful tech projects, Give them as much attention as male scientists” (participant 111).

 

After the age of 20, confidence levels ‘pick up’ and stay at an average score of 4 throughout the 20s. However, there is a 20 per cent dip in this confidence at the age of 28. This may be partly due to limited work-life-family balance options in the STEM professions, which other research has acknowledged plays a role in attrition from STEM, particularly among women. Furthermore, other research suggests girls in college feel they will have to ‘give up’ having children for their STEM careers: women see a trade-off between a STEM career and having a family, and feel the options to do both are lacking.

There is another sharp peak in confidence, to a score of 5, at the age of 32. A number of interrelated factors could be contributing here: women in their 30s may have transitioned careers and already established their work-life balance ‘norms’, giving them more control. It may also be that these women chose not to start a family and as a result have more time to focus on their careers. This isn’t to say women can’t do both; with supportive companies and family dynamics, women do have children while maintaining and progressing in their careers. Women in their 30s may also have worked exceptionally hard to establish their place within the workforce. Future research should explore whether age and confidence levels correlate with length of time in the current job.

Results from the graph also indicate a sharp decline from a score of 5 among participants aged 41 to 2.5 at age 46. Again, a number of possible factors need to be explored before drawing conclusions. One is covert sexism: in a masculine workplace, gender disparity may widen further when women seek to move up the ranks into senior positions. Research suggests that when the opportunity for promotion arises, women are more likely to be overlooked in favour of their male counterparts; more women on boards and in leadership positions may help address this.

 

“Equal pay and seeing more female role models in higher ranked positions” (participant 86).

Overall, these findings indicate that girls start off with higher confidence in their younger college years and that this confidence slowly declines until they reach their 30s. A number of possible interrelated factors could account for this: university environments more attuned to men, a lack of female role models in senior and leadership positions, negative media coverage of STEM professions, and gender pay inequality may all send women the message that they are ‘not as good’, leading them to question their own self-worth when job hunting compared with their male peers.

This series also examined the role of geographical location in confidence about getting a job.

The graph below indicates the geographical locations of our participants. The largest representation is in North America, followed by Europe, Asia, Oceania and Africa.

[Infographic: participants by continent]

 

The graph below indicates the average confidence levels of participants by their country of residence, with lighter shades of blue indicating the lowest confidence levels and the darkest shade the highest. Findings suggest that the UK, North America, Mexico, Vietnam and New Zealand have the highest confidence levels, followed by Canada, Australia, South Africa, Kenya, India and Indonesia, with the lowest levels reported in Spain.

[Map: average confidence in getting a STEM job, by country of residence]

Does this suggest that Western societies are progressing and combating these stereotypes more than their non-Western counterparts? This may be the ‘popular’ belief, but our results do not indicate such a pattern, nor does other research, which has found that whether a society is Western or non-Western does not dictate the level of representation and confidence of women in STEM. One way to compare gender equality and opportunities for women in STEM is to look statistically at their representation across different STEM sectors and their graduation from STEM degrees in different countries. Research has found that some leading Western countries have a much lower representation of women in STEM than Muslim countries and places like Indonesia (see this article). This would suggest that the countries with the highest confidence levels are those where STEM education is most accessible to women. Reports have indicated that the higher-education sectors in big economies such as the US, the UK and other parts of Europe have developed a vast array of educational routes, with many programs built around a ‘human-centred approach’, such as the social sciences, nursing and child development. In comparison, in developing and transitional economies, acute shortages of educated workers have prompted efforts by governments and development agencies to increase the supply of STEM workers (see article).

This suggests that greater encouragement from governments and policymakers for the uptake of STEM in these economies would help the representation of women in STEM and increase confidence in the sector as a viable option for women to pursue.

 

“More outreach, more movements, more education” (participant 139).

 

Encouraging confidence and uptake of STEM

  • Colleges and universities can encourage girls to take part in preparatory activities by holding open evenings and information talks about the programs they can get involved with.
  • More opportunities for international movement between universities and job markets in STEM – more scholarships, grants and internships – may help encourage participation by women whose countries lack incentives and opportunities.
  • Governments and policymakers need to make an effort to eliminate barriers for women in STEM and increase incentives for participation in higher-education STEM subjects.

We can change the future if we work together.

This has been the fourth in a series of explorations into the experiences of women in science, technology, engineering and maths. Keep an eye out for more posts as we look at other influences affecting women’s careers.

Contributors

Andrea Lewis, Molly Goodman, Raiya Al-Ansari

 

Human factors, User Research, Design Research