I'm excited to share some news that I'm really proud of: I recently finished a book about augmented leadership at the intersection of human leadership and generative AI, with co-authors Bob Johansen and Gabe Cervantes. It was a lot of fun to write, and it's my first published book, so it's a big deal for me. I'm excited to share it with you! It officially goes on sale March 4th through Berrett-Koehler Publishers, but you can pre-order it before then on Amazon at this link.
In the book, we explore how the most forward-thinking organizations and leaders are rejecting the false choice between human leadership and AI assistance. Instead, they are creating a new path of augmented leadership: one that enhances human capabilities while preserving what makes leadership fundamentally human, and that keeps organizations supportive, healthy places where the humans in them can grow.
As we enter what we call the BANI world—brittle, anxious, nonlinear, and incomprehensible—this leadership augmentation will be critical. Mediocre leaders will use AI to amplify their limitations, creating persistent confusion and distraction, while the best leaders will navigate their organizations with augmented clarity, turning dilemmas into opportunities.
We go into a lot more detail in the book, so we're attaching an excerpt below in case you want to check it out. Thanks to everyone who has supported me so far!
From "Leaders Make the Future: 10 New Skills to Humanize Leadership with Generative AI"
Augmenting Leadership with Generative AI
The best future leaders will be extended and enhanced in digitally savvy but humane ways. The already-simmering question is how leaders will want to be augmented to thrive in the future. This third edition of Leaders Make the Future tells a provocative futureback story from ten years ahead to help leaders develop their own clarity and commitment in the present. Foresight ignites curiosity. What leadership mindset, disciplines, and skills will need to be augmented in which ways? How can human leaders evolve into cyborgs with soul?

The goal of augmented leadership is to develop and refine your clarity—while moderating your certainty. For leaders, augmented intelligence can be a "thought partner," a "foresight generator," or a "boundaries pusher." But it is better for exploring options than providing answers. This is the essence of strategic foresight: exploring questions that we have yet to ask instead of always assuming that there is a single correct answer.

This book will equip leaders with actionable knowledge and skills for implementing AI in their own leadership. It will foster a paradigm shift toward accepting and utilizing AI as an essential leadership tool. It will promote ethical, empathetic, and sustainable leadership in the emerging AI-augmented digital age.

Narrow forms of machine learning can be thought of in many ways as variations on data analysis and statistical methods. Their job is to analyze data and predict something based on that data. Generative AI is very different. Although we may leverage some similar methods to narrow machine learning when creating generative models, the goals, function, and purpose are different. With generative AI, the goal is to create new things. New text, images, videos, audio—even new actions. Its goal is not to predict (just as the goal of foresight is not to predict) but to expand, expound, elaborate, and explore—all critical tasks for senior leadership.
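The prediction-versus-generation distinction can be made concrete with a toy sketch. Everything here is purely illustrative: the threshold "model" and the bigram generator are our own stand-ins, nothing like the scale or sophistication of production machine learning or large language models.

```python
import random

def predict_label(monthly_logins: int) -> str:
    """Narrow ML in miniature: collapse data into one expected answer.
    (A hypothetical churn predictor; real models are trained, not hand-coded.)"""
    return "likely to churn" if monthly_logins < 3 else "likely to stay"

def train_bigrams(corpus: str) -> dict:
    """Generative modeling in miniature: learn which word tends to follow which."""
    words = corpus.split()
    model: dict = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int, rng: random.Random) -> str:
    """Produce *new* text by sampling, rather than returning a single prediction."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

The contrast is the point: the predictive function always collapses its input to one fixed answer, while the generator samples fresh combinations every run—expanding and exploring rather than deciding.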
GenAI and Top Leadership
Because of these capabilities, generative AI will disrupt senior leadership in profound ways. The job of a senior leader is difficult to define. If a problem or dilemma is not messy and intractable, it will rarely filter up to a senior leader. Being an effective senior leader requires a unique blend of skills, literacies, and mindset. GenAI will disrupt all of these.

This is not to say that senior leaders will be replaced—far from it. But with GenAI, how leaders lead will change dramatically. This comes with enormous opportunities and frightening risks. GenAI could amplify or degrade a leader, depending on how it is used. For future-focused and effective leaders, GenAI will be able to scale their clarity across their organization in yet-to-be-told ways. But it will be harder than ever for organizations to absorb or prop up mediocre or bad leaders. If you are a mediocre leader, congratulations! You will have the power to spread your mediocrity across your organization with unprecedented efficiency. This raises the already-high bar for senior leaders even higher.

The ten future leadership skills already outline how to be future-ready, and we have added new thoughts on how augmentation will accompany, extend, and sometimes disrupt these skills. We know GenAI will be a giant disruptor for leaders, even though we don't yet have an accurate language to talk about it.

At some point, everyone will realize the significance of these technological capabilities, provided they are paying attention. For co-author Jeremy Kirshbaum, it happened in 2019. He was leading a small team of innovation consultants and encountered GPT-2 while doing a custom forecast with IFTF. (GPT-2 was the language model from OpenAI that preceded ChatGPT.) At that time, language models were less sophisticated but already capable of creating ideas, simple frameworks, or poems. This was startling to Jeremy, because for the first time, the computer was doing what he did.
Coming up with ideas, not just aggregating information, is the core value of innovation consulting. Jeremy saw glimmers of augmentation potential in GPT-2, BERT, and the other language models emerging in 2019. Since his job was to write and come up with ideas, he had the distinct feeling that he was an old farmer digging with a shovel until someone showed him the first tractor. Jeremy was overwhelmed with this thought: "I need to start learning how to use tractors right now." He began a headlong dive into learning how to augment himself with GenAI.

Jeremy started working more with GPT-2 and then its descendant, GPT-3, when it came out in private beta in early 2020. He realized, though, that the language model was only as good as the person using it. If someone was good at poetry, they could create beautiful poetry with it. If someone was good at expository writing, they could coax even better writing from it. It was not, however, able to turn a bad poet into a good poet or a bad writer into a good writer.

Extending the farming metaphor—if you don't know how to farm, you can sit in a tractor all day and the crops are not going to grow. If an expert farmer is driving the tractor, then it is a powerful augmentation. This will be the case for generative AI for the foreseeable future. The limiting factor of GenAI for leaders will be their own ability to understand and know what "good" looks like.

It will be a long time before any digital system can lead for us, far more than a decade from now—if then. Right now, AI cannot be trusted any more than the person using it. Leaders will be augmented, but only rarely will they be automated. Today, some people use AI-powered grammar checkers and generally trust them more than their own judgment to check grammar. Generative AI in the next decade will become reliably better than humans at tasks far more complex than grammar. Not so with the job of a senior leader.
If senior leaders find that parts of their job can be automated with AI, then they probably shouldn't have been in top leadership positions in the first place.
Present-Forward versus Futureback
Leaders must make choices over the next decade to bring about the positive change potential of GenAI, and they must act with intention, discernment, and self-control. The shifts summarized below describe how leaders will perceive and use GenAI over the next decade. Many of the present-forward views may seem attractive to leaders at first, but each has hidden dangers and unintended consequences. Some of them are important to focus on in the near term, but it is a trap to believe they are the whole story. The futureback view offers a way to think about the same things in a more holistic way that goes beyond the dilemmas of the present. Here are our definitions for each of the elements of the present-forward and futureback views of generative AI for senior leadership.
From efficiency and speed toward effectiveness and calm
While efficiency (doing things right, according to management guru Peter Drucker) is attractive, the long-term value of GenAI for leaders will be effectiveness (doing the right things). Meaningful outcomes will be more important than streamlined processes—although it may be possible to do both. Leaders will need to discern which efficiency tools contribute to effectiveness as well. Leaders will need to resist the temptation of taking shortcuts that sacrifice long-term effectiveness for short-term gains. The early applications of GenAI have focused on efficiency and getting work done more quickly: make things work faster with fewer mistakes. Offload the busywork that humans don't want to do. Replace humans when possible. Doing the same things faster is just the beginning. Over the next decade, machine/human teams will figure out ways to do better things better.
From prompts and answer-finding toward mind-stretching conversations
GenAI will be so much more than a question-and-answer machine for information retrieval. The exchange between a human and GenAI should be more like a conversation—not just a series of human prompts followed by computer answers. As in human conversations, leaders will get the most value from deep, dynamic interactions with GenAI over time. GenAI's response to a single prompt is rarely very interesting. Discerning leaders will prompt deeper conversations and engagements that are much more likely to yield value, though these conversations could go on for a long time. By "conversation," we don't just mean words—far from it. Even in a conversation between two humans, much more than words is exchanged. Over the next decade, conversations between humans and GenAI systems might involve iteratively creating applications, systems, videos, or virtual worlds. By conversation we mean interactions that are iterative, durational, and evolving over time.
From automation toward your augmentation
The urge to automate is tempting, and some processes do lend themselves to automation. On the other hand, overly relying on automation is a danger that will increase with GenAI. Automating processes that should not have been done in the first place will be counterproductive, annoying, and perhaps even dangerous. Automation is the use of technology to perform tasks with little or no human intervention. Augmentation, on the other hand, begins with human abilities and asks how they could be amplified or extended with technology. The goal of automation is to reduce human involvement. The goal of augmentation is to enhance human involvement. Our conclusion in this book is that thinking futureback, all leaders will be augmented—but few leadership roles will be automated.
From certainty seeking toward your clarity story
The best leaders will develop their clarity but moderate their certainty. In a BANI future that is brittle, anxious, nonlinear, and incomprehensible, certainty will be both impossible and dangerous if attempted. For some people, the chaotic uncertainties of the BANI future will be too much to handle. There will always be extreme politicians, religious leaders, and other simplistic thinkers who say they can offer certainty in an increasingly uncertain world. Unfortunately, there will also be people who believe them.
In a BANI future, leaders cannot be certain, but they must have clarity. In fact, leaders will need their own personal clarity story in order to thrive.
From personal agents toward human/agent swarms
GenAI agents bridge ideas to action. Over the next decade, individual agents will be organized into swarms for collective action. These human-computer partnerships will create new kinds of collaborative intelligence, leveraging both human and machine capabilities. As leaders learn how to use their own agents, it will profoundly affect how decisions are made, how teams collaborate, and how strategies are developed. As individual agents become part of orchestrated swarms of humans and agents, things will get quite complicated. When it all works well, expect enhanced collaboration and increased resilience. Complex scenario simulations will become possible, along with innovative solutions. The complexity and distribution of power will also increase the risk of governance issues and ethical challenges. The power of individual contributors in an organization will increase dramatically as they will have complex agent systems at their disposal to aid in their work. But when things go wrong, liability will be complicated.
From guardrails toward bounce ropes
Bounce ropes encircle wrestling rings to keep the wrestlers safe in a strong yet flexible way. It is even possible for a wrestler to bounce off the rope as it flexes back. For a technology as emergent as GenAI, bounce ropes are a much more appropriate metaphor than guardrails. Some constraints will be desirable, but they will have to be flexible as the reality of GenAI comes to life. Calls for public policymakers to create guardrails for GenAI are understandable but naive. Not even GenAI developers understand exactly what is going on within these large language models—let alone the implications of large-scale use. Policymakers and elected officials are likely to be uninformed or misinformed about these emerging tools and media, even if they are well-intentioned. We expect many unintended consequences from even the most prescient policies. Unexpected and often inappropriate actors are likely to step into any vacuum.
From hallucination toward meaning-making
A GenAI hallucination may contribute to a human's creativity. In the early days of GenAI, people were disturbed if an AI "made up" an answer—especially if the system stated it confidently without any hint that it might be incorrect. As conversations between GenAI and humans go deeper, however, such out-of-the-box thinking will become part of a human-driven but GenAI-fueled process of creative meaning-making. What we call "hallucinations" are a feature of GenAI systems, not a bug. At Institute for the Future, we hire people with very strong academic backgrounds, but we want people who fail gracefully at the edge of their expertise. As futurists, we are often at the edge of our expertise. How do you fail gracefully? First, you acknowledge that you are at the edge of your knowledge. Then, you do things like asking questions, drawing analogies that might help explain where you are, or using models to orient yourself to the unknowns around you. The danger is pretending you know something when you do not. Strong opinions, strongly held, will be dangerous. Similar danger will arise with GenAI. Fabrications can be very useful if they are labeled as such and used as part of an exploratory exercise like scenario planning. Most of today's GenAI systems do not yet fail gracefully.
From increasing secular control toward re-enchanting our world
The urge to control is tempting, but the ability to control is limited. GenAI has come of age in an increasingly secular world. On the surface, GenAI seems to be a secular technology. But what if GenAI can help us explore the mysteries of life—as well as the certainties? What if GenAI can help us explore the invisible—as well as the visible? Most of today's world seems disenchanted and increasingly secular. How might GenAI help to re-enchant our world? MIT computer scientist David Rose coined the term "enchanted objects" to describe how AI will allow ordinary objects to do extraordinary things—like an umbrella that lights up when rain is forecast or a pill bottle that reminds you to take your pill. As Arthur C. Clarke said: "Any sufficiently advanced technology is indistinguishable from magic."