Julian Stodd – a great thinker, author and instigator – has a bead on how organizations must reflect the context of the ecosystem in which they operate. I see this as a logical extension of the invention, dissemination and adoption of internet-based technologies. Some time ago, for example, we saw the shift from consumption of online content to democratized production and contribution of the same. Readers became authors.
Have a read – I think you, like me, will be excited by Julian’s ideas.
I subscribe to a number of education-related publications, and many of them (like the popular media generally) currently rely on sensationalism and hyperbole to drive heart rates up, hoping to garner interest (and, of course, the related retweets, likes and shares) around whatever the latest amazing technology is.
Lately, the frenzy has been around artificial intelligence (AI) and mixed realities (VR, AR, etc.), with various Paul Reveres heralding a revolution in everything we know. This almost entirely positive spin rarely addresses the needs of ALL people, however. National Public Radio reviewed an article by Pearson Education and posited a more circumspect view when it comes to AI in education:
So one great fear when it comes to the Pearson vision of AIEd is that we reproduce existing inequalities. Some students get individualized attention from highly skilled human teachers who use the best learning software available to inform their practice. Other students get less face time with lower-skilled teachers plus TutorBots that imperfectly simulate human interaction.
This is of critical importance, and I will be addressing this notion of equity in access in other postings. But for now, let’s just try to calm down, and remember that some good folk have gone before us, and that some principles endure.
At this point in my life and career, I’ve seen a number of ed tech trends and have watched as some have grandstanded about the imminent revolution these technologies would bring. Remember educational television (beamed into classrooms)? Remember e-learning? Remember the learning management system? Remember the MOOC?
I think it’s useful to learn from those that have gone before us – occasionally debunking some sketchy ideas while also building on the good ones. Here are just a few good starting points that give us some comfort in knowledge from the past, and some guidance for how we avoid hyperbolizing:
Einstein, 1930: “Imagination is more important than knowledge.”
Sir Ken Robinson has also expounded on the importance of imagination, and as we look to a new wave of technological innovation, we will need this more than ever – especially as we are faced with entire populations of people who will need to learn new skills.
Dewey, 1930: “Failure is instructive. The person who really thinks learns quite as much from his failures as from his successes.”
John Dewey, a bright light very early in the development of learning theories, talked about the opportunity to learn from failure. Today we are hearing quite a bit about this, typically framed in a conversation about innovation, and it has always been a feature of good simulations. It is also worth noting Dewey’s phrase “…person who really thinks…”, which reinforces the need to provide space for reflection and honest assessment.
Freire, 1970: “Education [is]… the means by which men and women deal critically and creatively with reality and discover how to participate in the transformation of their world.”
Transformation is everywhere, according to some. This too, is nothing particularly new, but one could argue that it is accelerating as the global connective tissue of the internet, culture and economies strengthens and expands. Relevant question for today: What are we doing to provide education that emphasizes critical thinking and provides individual agency as digital transformation occurs?
Knowles, 1980: “… learning activities will be based on the real needs and interests of the participants…”
I’ve seen a number of recent posts and webinar advertisements for discussions of ‘adult learning’. I’ve always questioned this over-emphasis on differentiating adult from child learning. Having taught everyone from 6-year-olds to 60-year-olds, I can tell you that there is just not that much difference! However, this principle shared by Knowles will always be central to good education design.
Jonassen, 2000: “Mindtools are knowledge construction tools that learners learn with, not from.”
Jonassen, a constructivist, pointed out the proper view we should take on using technology to facilitate learning – that ‘mindtools’ as he called them (computers, and digital resources) should be used to help learners creatively explore an area of study or interest. We’ve seen, I think, an over-emphasis on using digital tools to create and push content, rather than providing tools that help learners capture, edit, create and share their own.
Warning: Anytime you hear someone talking about ‘consuming content’, that should raise a flag in your mind about the underlying role being assumed of the learners in question, and whether the ancient idea of dumping knowledge into brains is being recreated.
So: Let’s relax just a bit, and maybe hesitate before we ‘consume’ that latest super-tasty hyper-urgent, hyperbolic declaration of imminent radical change. With a bit of reflection, we may realize that some time-tested truths will remain, and that our critical review will help us build more, and possibly even better, future applications of technology in learning.
Last year Google’s DeepMind created a simple game called ‘Gathering’. The object is to gather more apples than your opponent. The AI agents were also given the ability, however, to stun their opponent with a laser. Watch what happens:
It is also eerily predictable that the greater the scarcity, and the more “intelligent” the AI, the more often lasers were used in gameplay. There is logically a competitive standard in this game – the ‘win state’ is determined, and that is the goal. Therefore the machine will take whatever logical steps it can to achieve that state. This is something I’ve often thought about when working with leaders in organizations: To what extent are leaders ‘playing a game of competition’ – in all aspects of their work?
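The logic of this can be sketched in a toy model. The code below is entirely my own construction – a deliberately simplified greedy policy with made-up payoff numbers, not DeepMind’s actual reinforcement-learning setup – but it shows why the behaviour is unsurprising: once a win state is fixed, an agent reaches for the laser as soon as its expected payoff beats gathering, and scarcity is exactly what tips that balance.

```python
def choose_action(apples_left, opponent_active, zap_payoff=2.0, gather_payoff=1.0):
    """Greedy policy: pick whichever action yields more expected apples.

    Zapping temporarily removes the opponent, so its value grows as apples
    become scarce and every remaining apple is contested. The payoff numbers
    and the scarcity threshold are illustrative assumptions only.
    """
    if not opponent_active:
        # No one to compete with: gathering is the only useful move.
        return "gather"
    # When few apples remain, denying the opponent is worth more than one pick.
    expected_zap = zap_payoff if apples_left <= 3 else 0.5
    return "zap" if expected_zap > gather_payoff else "gather"
```

Run with plentiful apples, the policy gathers peacefully; shrink the supply and it starts firing – no malice required, just a determined win state.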
In a world of constant competition for (perceived) scarce resources, what really happens to collaboration and mutual benefit?
I do wonder if artificially intelligent companions (as they are coming into play, now) and systems will learn this mindset of individualized win-states from their human counterparts, and what that may end up doing to human-machine (and human/human) relations and the possibility of abundant, shared prosperity.
We are all familiar with the concept of explicit biases. These include attitudes and behaviours regarding certain groups with the intent to harm or exclude. Explicit biases can be obvious, such as racism or believing one ethnic group is superior to another. They can also be subtler, like favouring someone we know.
These explicit biases are conscious, intentional and deliberate.
In contrast, implicit biases are stereotypes that form through our experiences and that work outside of our awareness. Even though we are not aware of them, implicit biases lead to discriminatory behaviours and biased decisions.
Implicit biases can also include non-verbal behaviours or avoidance. By their very nature, implicit biases are automatic beliefs or associated behaviours that influence us without our knowledge and despite our best intentions.
Implicit bias is harmful
Starbucks’ baristas are not the only workers who demonstrate implicit bias.
When individuals with “Black-sounding names” applied for jobs compared to individuals with “white-sounding names,” the people with white names received 50 per cent more callbacks. In another study, psychologists who were applying for jobs found that out of two identical CVs, one would be rated more positively if it was attached to the name Brian compared to the name Karen.
Research on implicit bias in health care has demonstrated how health professionals can make biased clinical decisions, even when their intentions are to treat all groups fairly.
For example, an important 2007 study by Dr. Alexander Green and his colleagues found that, despite explicitly denying a preference for white versus Black patients, doctors implicitly saw Black patients as less co-operative regarding medical procedures. Doctors who demonstrated higher levels of implicit bias were more likely to treat white patients than Black patients for their heart attacks.
We also know that implicit biases lead to behaviour that undermines trust. Groups that experience discrimination suffer a profound negative effect, which leads to self-reinforcing cycles of distancing and disconnection.
Individuals who encounter implicit biases can gradually internalize them and this leads members of certain marginalized groups to begin to conform to negative biases about themselves.
Bias training for all?
So should we all follow Starbucks’ lead and implement implicit bias training in our organizations?
While implicit bias is a problem that erodes equity and perpetuates discrimination, research on implicit bias training highlights mixed results and suggests that implicit bias training alone will not solve the problem.
My research on implicit bias in health professions sought to understand how this training works. Early in our journey, we learned that simply making individuals aware of their implicit biases was not enough.
When our participants became aware of their biases through an online metric of implicit bias called the implicit association test (IAT), developed by researchers at Harvard, it led to significant emotional distress and a defensive reaction.
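For readers unfamiliar with the IAT, it infers bias from response-time differences between stereotype-consistent (“congruent”) and stereotype-inconsistent (“incongruent”) category pairings. The sketch below is a much-simplified version of the scoring idea only; the full Greenwald et al. algorithm adds error penalties, trial trimming and block-wise computation that are omitted here.

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT D-score.

    Difference of mean response times (incongruent minus congruent),
    divided by the standard deviation of all trials pooled together.
    A positive score suggests faster responses to the stereotype-
    consistent pairing, i.e. a measurable implicit association.
    """
    pooled = list(congruent_rts) + list(incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / stdev(pooled)
```

The point of the standardization is that a raw 200 ms gap means something different for a fast, consistent responder than for a slow, variable one; dividing by the pooled spread puts everyone on a comparable scale.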
A hard look in the mirror can hurt
We were surprised to find that when we provided people with feedback about their implicit biases, this information was inconsistent with an idealized version of themselves that was simply impossible to achieve.
Societal pressures and stigma against being prejudiced led to individuals feeling like they are not allowed to have any bias, despite the fact that we all have biases, and not all biases can be eliminated. In fact, some biases may be helpful to keep us safe.
Implicit bias training is therefore unique from other forms of diversity training because a conversation on implicit bias must start with a hard look in the mirror. The conversation can only begin once we humble ourselves by recognizing that we are all deeply flawed and imperfect human beings.
Training can be most effective when there is a balance between psychological safety and motivation to change behaviour.
Knowing and reflecting
Simply knowing about our biases is not enough. Once we become aware of our own biases, we must reflect on how these biases impact ourselves and others.
Discussion and dialogue are both important to reflect on how certain biases may be negative or positive and useful or counterproductive, depending on context. Then, we must begin to set and practise tangible changes in our explicit behaviours.
For example, our research found that physicians and nurses often have implicit biases towards individuals with mental illness who come into emergency departments because these health professionals label such patients as “unfixable,” and implicitly avoid them because they do not feel like they can offer their patients any assistance.
The patients, however, perceived this implicit avoidance as prejudice and discrimination. Our initial training highlighted these biases for doctors and nurses but also promoted explicitly and intentionally engaging with such patients to counter the tendency to avoid them.
We also learned that accomplishing change requires dialogue to reconcile our biases and open conversations with our peers can help motivate us to change behaviour.
Interventions to reduce the adverse impact of bias are most effective when people who work together learn together, and when teams feel comfortable being open about their biases with one another.
Our training was most effective when it was accompanied with constant discussion and dialogue among people who work together. Individuals who participated in the training began questioning biased practices and demonstrating new behaviours which provided a model for others in the workplace to emulate.
Another challenge with implementing bias training is that biases and inequities often become embedded in workplace structures and policies over time. In our most recently published paper, we followed participants for 12 months after they participated in implicit bias training.
Initially, these participants told us that they enjoyed learning about their biases and wanted to change, but any change they promoted went up against a workplace culture that was a barrier to change.
As we followed them over time, participants began reflecting on their biases and engaging in explicit behavioural changes that influenced the perception of structural changes within the learning environment itself. Together, our participants began co-constructing social change.
This finding is important because addressing implicit bias cannot be achieved by individuals alone. Explicit structural and organizational changes are also required to promote change.
If we encourage individuals to question biased norms within their workplace but they speak up and face retribution for doing so, we are creating more problems than we are solving. If any company wants implicit bias training to be successful, the company itself must survey its policies and processes and be prepared to change them.
If your company decides to implement implicit bias training, make sure you ask them what else they plan on doing to promote equity and reduce discrimination. Shutting stores or implementing mandatory training will simply not be enough.
Whether or not diversity is a good thing is still a topic of much debate. Though many businesses tout the benefits of diversity, American political scientist Robert Putnam holds that diversity causes people to hunker down, creating mistrust in communities.
Empirical investigations into how diversity affects communities are too few and far between to provide any definitive answer to the question. So, together with colleagues in Singapore and the US, we set out to examine this very question in a series of studies – the results of which were recently published in the Journal of Personality and Social Psychology.
There is indeed evidence that diversity creates mistrust in communities. But diverse communities also provide an opportunity for people from different racial and ethnic backgrounds to come into contact with each other, and we thought that these experiences would create a positive effect on people’s identities: specifically, the extent to which they identify with humanity, as a whole.
A human connection
This is one of the biggest and broadest forms of identity that a human being can comprehend. A number of spiritual and philosophical traditions have upheld that believing you share a fundamental connection with other human beings – regardless of race, religion, sexuality or gender – is the sign of a mature mind.
My colleagues and I thought that living in diverse neighbourhoods might create opportunities to come into contact with different people again and again, thereby expanding a person’s sense of identity. As a result, people living in diverse neighbourhoods should be more helpful towards others. We examined this possibility in five empirical studies.
In the first study, we took to Twitter to analyse the sentiments of tweets across the 200 largest metropolitan areas in the US. This was a somewhat basic, exploratory test of our hypothesis, using a large sample of data. In this study, we found that the likelihood that a tweet mentions words which suggest positivity, friendliness, helpfulness, or social acceptance was higher in a more diverse city.
Encouraged by our findings, we then sought to examine how diversity of a zip code where people lived might affect people’s likelihood to offer help in the aftermath of a disaster, such as a terrorist attack. We used data from a website that the Boston Globe set up, where people could offer help to those stranded after the 2013 Boston Marathon bombings.
After accounting for factors such as distance from the bombings, political diversity, religious diversity and the mean household income of these zip codes, we found that people who lived in more racially diverse zip codes were more likely to offer help to those in need after the bombings.
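The article doesn’t specify how diversity was operationalized, but a standard way to quantify the racial diversity of an area such as a zip code is a Blau (Herfindahl-type) index over group population shares – shown here as an illustrative sketch, not necessarily the study’s exact measure.

```python
def blau_diversity(counts):
    """Blau/Simpson diversity index: 1 minus the sum of squared group shares.

    Returns 0 when everyone belongs to a single group, and approaches 1 as
    the population spreads evenly across many groups.
    """
    total = sum(counts)
    shares = [c / total for c in counts]
    return 1 - sum(s * s for s in shares)
```

For example, an area split evenly between two groups scores 0.5, while an even four-way split scores 0.75 – capturing the intuition that more, and more evenly sized, groups mean more day-to-day intergroup contact.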
To take our investigation even further, we examined whether people living in more diverse countries would report that they had helped someone in the recent past. We used data from the Gallup World Poll in 2012, which asked more than 155,000 individuals in 146 countries to report whether they had helped a stranger in the recent past. Again, we found that people in more diverse countries were more likely to report that they had helped a stranger in the past month.
These three studies seemed to provide converging evidence for our ideas, but we needed to understand whether this was because diversity expands people’s identities. From a scientific standpoint, this presented a big challenge. It would almost be impossible to conduct a real experiment where we allocate people to live in different neighbourhoods and then check whether this had an effect on their level of helpfulness.
So instead we borrowed a technique routinely used by social psychologists, called priming. Priming is a psychological method used to activate a state of mind for people in an experiment. We primed people to think about neighbourhoods that were either diverse or not, made this allocation randomly, and then examined how it affected their willingness to help.
We also measured whether this simple procedure of priming also altered their identities. We used a survey measure developed by other psychologists, which measures how much someone identifies with all of humanity. In two studies, we found that imagining living in a diverse neighbourhood expanded people’s identities, which in turn made them more willing to help a stranger.
These results don’t prove definitively that diversity is always a good thing. But they do offer an encouraging view of some of the benefits which diversity might bring to communities, given the way that people’s identities shift when they often encounter those who are different to them.
Some governments are already putting policies in place to make the most of these potential benefits. For example, in Singapore, each public housing apartment block maintains the same ratio of Chinese, Malay and Indian residents as exists in the wider population. This has prevented segregation and created diversity in neighbourhoods, which has led to a better society for everyone.
In ancient Indian texts, sages exhort people to view the whole world as one family. Our studies show that this isn’t a pipe dream – it’s a real possibility.
There is quite a lot out there at the moment about virtual reality. News just today in the NY Times assesses the current position of this technology in Gartner’s hype cycle – apparently we are now in the “trough of disillusionment.” Indeed, some have even claimed that this new tech may be the “ultimate empathy machine”. Okay. As I said back in 2007, let’s get real about the virtual.
Defining it: Virtual reality, to my mind, falls into three different categories.
Augmented – Digital content applied as a visible overlay onto one’s current physical environment.
Immersive Video – 360° video as experienced through a head-mounted viewer.
Synthetic – Completely computer-generated environments, experienced on a flat screen or through a head-mounted viewer.
I know that there is a growing number of haptic systems that allow for additional input / feedback systems (Oculus and HTC Vive, for example), but I’m only talking at the moment about broad categories of virtual reality. To that end, I’d like to share a few examples and offer some possible applications of these virtual reality technologies to learning.
The following was written hastily and with NO edits in 1998. I had just started an office-based job after having taught elementary school for 9 years. I had the great pleasure of working with Chuck in the summers of 1994-96.