Last week marked the one-year anniversary of ChatGPT, the chatbot powered by a large language model (LLM) that introduced generative AI (GenAI) to the world. ChatGPT’s instant success took most companies by surprise. Even a year later, businesses are still playing catch-up as the pace of change continues uninterrupted.
Google now says its own GenAI model, Gemini, will soon have five times GPT-4’s computing power and potentially 20 times the power next year. For companies, GenAI is likely not a one-off technological leap, but the first of a series of rapid advancements that shows no signs of abating. In this new reality, by the time businesses do manage to integrate today’s LLMs, they will already be behind on the next wave of GenAI technologies—and the one after that.
This new state of constant change truly is a permanent revolution, to paraphrase Russian revolutionary Leon Trotsky. It is revolutionary because the change it brings about is often sudden and massive in scale. It is permanent because the rate of AI advancement will continue to exceed the pace of organizational learning, such that companies will fall further and further behind the state-of-the-art technology.
The PC revolution, by contrast, gave businesses enough time to eventually catch up; the past year shows that catching up is unlikely in the age of AI. This is, in part, because advancements in AI promise to be self-reinforcing: each breakthrough ripples across systems new and old, refining them and improving performance, scrambling how we live and work, and redefining what we consider possible. There is no end state to the permanent AI revolution—at least not one we can expect in the near future.
The idea that we live in a permanent AI revolution means that companies’ transformation efforts are most likely to succeed when designed with a dual intent: successful adoption of mature technologies and readiness for accelerated experimentation with inchoate ones. Since companies continue to learn at a slower rate than technology advances, success will largely hinge on a business’s relative rate of learning—which, in turn, depends on its ability to become an early adopter of the foreseeable technologies on the horizon. Today, for companies working on the adoption of stand-alone LLMs, this challenge takes the form of shaping those LLM-based transformation plans with an eye toward what’s coming next: autonomous agents.
The next disruption is autonomous agents
The great technological leaps forward of the past—from the advent of the steam engine to personal computers and the internet—each empowered humans by augmenting their physical and computational capabilities. The artificial intelligence technologies of today, however, expand the domain of technological augmentation to areas long thought to be uniquely human, like creativity. Generative AI’s mastery of what was considered distinctly human means it impacts the professional identity of knowledge workers in ways that we have not seen before, portending a future that looks very different from the world today.
To give a sense of what this new reality could look like, let’s take the example of autonomous agents, the most disruptive foreseeable next frontier of the permanent AI revolution. AI systems already require less and less human intervention as they evolve; autonomous agents promise to do away with it altogether.
Unlike today’s LLMs, autonomous agents need only an initial goal, not iterative prompting. They use a configuration of systems (including LLMs) to break complex objectives down into individual tasks and give instructions to other systems to execute those tasks. Once given an overarching objective, an autonomous agent can plan the work needed to achieve it: creating its own prompts, monitoring the output, making decisions, and managing the work until the job is done.
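The plan-execute-monitor loop described above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the `plan` and `execute` helpers are hypothetical stand-ins for calls to an LLM and to external enterprise systems, not any real product’s API.

```python
def plan(goal):
    """Break a high-level goal into ordered tasks.
    In a real agent, an LLM would generate this task list."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(task):
    """Carry out one task via an external system and report an outcome.
    In a real agent, this would call out to email, CRM, or other tools."""
    return {"task": task, "status": "done"}

def run_agent(goal, max_steps=10):
    """Given only an overarching goal, plan the tasks, execute them,
    monitor each outcome, and stop when the job is done (or stuck)."""
    results = []
    for task in plan(goal)[:max_steps]:
        outcome = execute(task)
        results.append(outcome)
        if outcome["status"] != "done":  # monitor and decide whether to continue
            break
    return results

results = run_agent("launch email campaign")
```

The key difference from today’s chat-style LLM use is visible in the signature of `run_agent`: the human supplies one goal up front, and the loop itself generates every subsequent instruction.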
An autonomous agent, for example, could survey previous customer interactions and the outcome of each and learn, on its own, how best to reply to customers. Using GenAI, the agent could then create an entire, tailored email marketing campaign based on data from prior campaigns. This includes, for example, making its own determinations on email design, scheduling, graphics, and subject line, and then executing those decisions by interfacing directly with external systems, like customer relationship management (CRM) software. It could even choose, on its own, who the campaign should target based on responses to prior campaigns and then decide whether the number of email opens, views, clicks, and responses is noteworthy enough to report back to management. In the near future, a single autonomous agent could perform the role of an entire digital marketing team.
Autonomous agents aren’t yet ready for widespread enterprise use, but they are clearly on the immediate horizon. Experts estimate autonomous agents will go mainstream within three to five years. In fact, just a few weeks ago, OpenAI launched custom chatbots capable of making external application programming interface (API) calls—requests in which one application asks another for data or services—enabling information retrieval and the execution of simple actions through external systems. These systems are not autonomous agents in a literal sense, as they cannot yet operate on the basis of higher-order goals. They are, however, strongly indicative of the trajectory of GenAI development: from stand-alone LLMs to autonomous agents capable of sensing and acting on their environments to achieve a given objective.
The arrival of autonomous agents will have sweeping implications for individuals, teams, and entire organizations. Individuals will need to upskill often, as requirements change rapidly along with greater automation of even complex tasks. Team structures and role configurations will need to be able to adapt quickly in the face of agents automating entire end-to-end workflows. Organizations will also be subject to constant change and necessary recalibration, as automation brings about an increasing commoditization of today’s sources of competitive advantage.
If there is no end state, how can you prepare?
In a world of permanent revolution, prudence demands that leaders permanently scout for what’s coming next—even if the exact business applications of an early-stage technology aren’t yet apparent. Once new technologies appear on the horizon, it is imperative that companies develop robust transformation plans to facilitate the adoption of the next wave of technologies. Companies that decide to postpone planning around autonomous agents until after they become mainstream risk falling behind, a deficit that will be compounded by new, currently unforeseeable developments in AI technologies.
There is a real opportunity here for businesses to start learning how to learn: that is, to treat autonomous agents as a test of whether the company is genuinely future-ready, and to build the organizational skills to adopt whatever comes after agents even faster. So how do companies already in the process of transforming to adopt today’s stand-alone LLMs also prepare for autonomous agents? Their GenAI transformation plans need to be robust along four key dimensions: technology architecture, workforce, operating model, and policies.
Architecture. Companies working on GenAI adoption today are investing in setting up their data architecture to ensure that LLMs are able to retrieve data from across enterprise systems. In order to prepare simultaneously for autonomous agents, it’s critical to set up two-way flows of information that allow agents to take action using those systems, not just extract information from them. To do this, when building out new tools and tech, companies should ensure that they are creating bidirectional APIs that can convey comprehensible instructions to an agent, as well as execute an agent’s instructions.
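A bidirectional integration surface like the one described above can be made concrete with a small sketch. This is an assumption-laden illustration, not a real product API: the `CrmAdapter` class, its method names, and the in-memory contact store are all hypothetical, standing in for a genuine CRM integration.

```python
class CrmAdapter:
    """Hypothetical adapter exposing two-way flows to an agent:
    reads (outbound data) and writes (inbound, guarded actions)."""

    def __init__(self):
        # Stand-in for a real CRM database.
        self._contacts = {"c1": {"email": "a@example.com", "opted_in": True}}

    def describe(self):
        """Machine-readable capability list an agent can parse to
        discover what instructions this system will accept."""
        return {"read": ["get_contact"], "write": ["send_email"]}

    def get_contact(self, contact_id):
        """Outbound flow: let the agent retrieve information."""
        return self._contacts.get(contact_id)

    def send_email(self, contact_id, subject):
        """Inbound flow: execute an agent's instruction, with a
        guardrail (no email without a known, opted-in contact)."""
        contact = self._contacts.get(contact_id)
        if not contact or not contact["opted_in"]:
            return {"sent": False, "reason": "no opt-in"}
        return {"sent": True, "to": contact["email"], "subject": subject}

crm = CrmAdapter()
```

The design point is the pairing: the same adapter that answers an agent’s queries also accepts its instructions, with validation on the write path so autonomy doesn’t bypass business rules.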
Workforce. As GenAI masters an expanding set of tasks, companies will need to calibrate their strategic workforce planning to hire and train for skills that promote effective LLM use and supervision. Autonomous agents raise the prospect of automation of entire workflows, rather than discrete tasks. This means that entire functional and business unit teams will need to be reconfigured, with the potential to significantly reduce labor requirements and costs in the future. To gauge their preparedness, companies should stress test their workforce planning using scenarios with increasing end-to-end workflow automation.
Operating model. The arrival of autonomous agents is likely to commoditize existing sources of competitive advantage, making it important for companies to build their capacity to conduct rapid prototyping for autonomous agent use cases. More importantly, companies’ operating models will need to be set up in anticipation of agents progressively automating planning functions. The initial deployment of agents will likely require close human supervision, but the balance of roles and responsibilities should shift towards agents as they become more reliable and sophisticated. The more a company’s operating model is set up to enable this progressive shift from the start, the more likely the company is to reap the full benefits of agent-powered automation.
Policies. Securing a social license for the use of any new, particularly disruptive technology is crucial to its success. This need will be especially acute with autonomous agents making and implementing decisions with minimal-to-no human oversight. Formal regulation may take time, but to generate and maintain widespread societal buy-in, companies should be hypervigilant in enforcing guardrails to ensure safe and appropriate use. Until regulation is in place, self-regulation, as demonstrated by numerous industries in the past, is also a responsible step that will help in the pursuit of a social license. Self-regulation on its own, however, is not a sustainable long-term solution. Organizations should be actively engaged with regulators to help craft the right approach for governing and monitoring the use of these emerging technologies.
Conclusion
The speed of generative AI’s advance since hitting mainstream use shows that taking a wait-and-see approach to emerging technologies is no longer an option. The constant state of disruption caused by fast-advancing tech amounts to a permanent revolution in which companies must prepare, now, for the foreseeable. This doesn’t mean simply chasing the latest thing, but recalibrating to maximize adaptability in an organization’s data infrastructure, workforce planning, and operations. For companies, what might seem like a series of discrete sprints of AI adoption is, in fact, a high-speed marathon.
Read other Fortune columns by François Candelon.
François Candelon is a managing director and senior partner in the Paris office of Boston Consulting Group and the global director of the BCG Henderson Institute (BHI).
Mikhail Burtsev is an AI Fellow at the London Institute of Mathematical Sciences.
Gaurav Jha is a consultant at BCG, and an ambassador at the BCG Henderson Institute.
Dan Sack is a managing director and partner at BCG X, Boston Consulting Group’s tech build and design division.
Leonid Zhukov is the director of the BCG Global A.I. Institute and is based in BCG’s New York office.
David Zuluaga Martínez is a partner at BCG and an ambassador at the BCG Henderson Institute.