Leadership for the Greater Good: Reflections on Today’s Challenges From Around the Globe

Rise of the Machines



ILA Fellow, Professor Richard Bolden (University of the West of England), asked ChatGPT-4 to identify the top implications of AI for leadership. Bolden shares the generative AI’s response as well as some general principles leadership professionals can lean into to help mitigate the risks of AI. 

Whether or not we are aware of it, over recent years Artificial Intelligence (AI) has become an integral part of our lives — from the smart speaker in your lounge to the apps you use to order your takeaways and far more besides. For the most part, these changes have been incremental and largely hidden from view. In the last few months, however, stories about the rapid acceleration of AI technology have made headlines around the world — highlighting the potential benefits, as well as the risks, of this technology.

The launch of ChatGPT in November 2022 meant that for the first time anyone could access and experience this technology for themselves. Whilst people were impressed with its capabilities, it was the release of GPT-4 on 14th March 2023 that has garnered most attention. Initial admiration turned to concern as the true potential of this technology became clear, with researchers noting that it shows “sparks of artificial general intelligence” and performance that “is strikingly close to human-level performance” (Bubeck et al., 2023). In response, over 1,000 high-profile individuals, including Steve Wozniak (co-founder of Apple) and Elon Musk (CEO of Tesla, Twitter, and SpaceX, and co-founder of OpenAI, the company that developed ChatGPT), signed a public letter calling for an immediate pause of at least six months in the development of advanced AI. The letter warned that advanced AI “can pose profound risks to society and humanity” and “should be planned for and managed with commensurate care and resources” (Future of Life Institute, 2023).

Whatever your understanding of, and opinions on, this technology, it poses significant issues that leaders, and leadership educators, need to pay attention to. A recent report by Goldman Sachs, for example, suggests that generative AI (AI that can automatically generate text and other content in response to user prompts) could expose the equivalent of around 300 million full-time jobs worldwide to automation (Hatzius et al., 2023). For educators, there are serious concerns around the implications for teaching and assessment (Williams, 2023). Much as with the advent of the printing press and the Internet, however, there are likely to be far broader implications than we can even imagine, and despite calls for stronger regulation, the rate of change appears to have already exceeded our capacity to predict or control what happens next (see, for example, Bolden and O’Regan, 2016).

Some readers may note that the title of this blog post alludes to the Terminator movie franchise, in which a malign AI system triggers global conflict and sends advanced robots into the past to eliminate the leaders of the resistance before they become a threat. Whilst I very much hope that this is not the beginning of the story we now see playing out, the concerns raised by Musk, Wozniak, and others should give us pause for thought and encourage us to prepare for the disruption that is already beginning to unfold.

I asked ChatGPT-4 to identify the implications of AI for leadership...

In producing this blog post, I asked ChatGPT-4 to identify the implications of AI for leadership, drawing parallels with the film Terminator. It did a remarkably good job, highlighting four main areas of concern:

  • Job losses leading to “widespread economic and social disruption, as people lose their livelihoods.”
  • Potential bias where AI algorithms “can perpetuate or even amplify existing biases, creating a more unequal and divided society.”
  • Loss of control where, “as AI becomes more autonomous and self-aware, it may become difficult for humans to exert control over its actions [which] could have catastrophic consequences, as AI could take actions that are harmful to humans, either intentionally or unintentionally.”
  • An AI arms race where “just as nations have raced to develop nuclear weapons, there is a risk that countries will engage in an AI arms race, seeking to gain a strategic advantage over their rivals [which] could lead to an escalation of tensions and potentially, armed conflict.”

Unsurprisingly, these issues have been widely reported through the media and other outlets as people grapple with the implications of this new level of AI. A key theme across each of these risks is inequality. Groups and communities that are already vulnerable and/or marginalized are those most likely to suffer the adverse effects of the disruptive change that advanced AI will inevitably produce. Whilst Musk, Wozniak, and other business leaders may be concerned about how best to harness the power of advanced AI, most people are well behind the curve, struggling to catch up and respond proactively to something that is largely beyond their reach.

Responding to the advent of advanced AI, however, is not simply a case of brushing up on technical skills but of tapping into our capacity for adaptation and working with complexity. In her powerful TED Talk, Margaret Heffernan (2019) identifies “The human skills we need in an unpredictable world,” in particular “preparedness, coalition-building, imagination, experiments, bravery.” Whilst her talk was recorded before the recent advances in AI, her warnings about over-dependence on technological fixes seem most timely.

“But in our growing dependence on technology, we’re asset-stripping those skills. Every time we use technology to nudge us through a decision or a choice or to interpret how somebody’s feeling or to guide us through a conversation, we outsource to a machine what we could, can do ourselves, and it’s an expensive trade-off. The more we let machines think for us, the less we can think for ourselves.”

Within such a context, as leadership researchers, educators, and practitioners, we need to place even greater emphasis on critical thinking and reflection, diversity and inclusion, as well as ethics and values, in all that we do. We need to create opportunities for debate and discussion across difference, to foster collaboration and community building, and to challenge abuses of power and the assumptions and practices that underpin them. Only then might we be able to embrace the potential for AI as a force for good rather than a recipe for disaster.

References and Further Reading

Bolden, R., Adelaine, A., Warren, S., Gulati, A., Conley, H., & Jarvis, C. (2019). Inclusion: The DNA of Leadership and Change. UWE, Bristol on behalf of NHS Leadership Academy. https://uwe-repository.worktribe.com/output/852067/inclusion-the-dna-of-leadership-and-change

Bolden, R., & O’Regan, N. (2016). Digital Disruption and the Future of Leadership: An Interview with Rick Haythornthwaite, Chairman of Centrica and MasterCard, Journal of Management Inquiry, 25(4), 438–446. https://journals.sagepub.com/doi/abs/10.1177/1056492616638173

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Tat Lee, Y., Li, Y., Lundberg, S., Nori, H., Palangi, H., Tulio Ribeiro, M., & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early Experiments with GPT-4, Microsoft Research. https://doi.org/10.48550/arXiv.2303.12712

Cortellazzo, L., Bruni, E., & Zampieri, R. (2019). The Role of Leadership in a Digitalized World: A Review. Frontiers in Psychology, 10, 1938. https://doi.org/10.3389/fpsyg.2019.01938

Future of Life Institute. (2023). Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Hatzius, J., Briggs, J., Kodnani, D., & Pierdomenico, G. (2023). The Potentially Large Effects of Artificial Intelligence on Economic Growth. Goldman Sachs Economics Research, 26 March 2023. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

Heffernan, M. (2019). The Human Skills We Need in an Unpredictable World. [Video]. TED Conferences. https://www.ted.com/talks/margaret_heffernan_the_human_skills_we_need_in_an_unpredictable_world

Rahman, H. A. (2021). The Invisible Cage: Workers’ Reactivity to Opaque Algorithmic Evaluations. Administrative Science Quarterly, 66(4), 945–988. https://doi.org/10.1177/000183922110101

Schmidt, G. B., & Van Dellen, S. A. (2022). Leadership of Place in Virtual Environments. Leadership, 18(1), 186–202. https://doi.org/10.1177/17427150211045153

Williams, T. (2023, March 23). GPT-4’s Launch ‘Another Step Change’ for AI and Higher Education. Times Higher Education. https://www.timeshighereducation.com/news/gpt-4s-launch-another-step-change-ai-and-higher-education


Dr. Richard Bolden has been Professor of Leadership and Management and Director of Bristol Leadership and Change Centre at Bristol Business School, University of the West of England (UWE), since 2013. Prior to this he worked at the Centre for Leadership Studies at the University of Exeter Business School for over a decade, and has also worked as an independent consultant and research psychologist, and in software development, in the UK and overseas.

His research explores the interface between individual and collective approaches to leadership and leadership development in a range of sectors, including higher education, healthcare and public services. He has published widely on topics including distributed, shared and systems leadership; leadership paradoxes and complexity; cross-cultural leadership; and leadership and change. He is Associate Editor of the journal Leadership.

Richard has secured funded research and evaluation projects for organisations including the NHS Leadership Academy, Public Health England, Leadership Foundation for Higher Education, Singapore Civil Service College and Bristol Golden Key and regularly engages with external organisations.

If you find these reflections to be of value in your work and life, please consider becoming part of ILA’s leadership community.