AI Innovation in the Shadows

Why leaders should learn how their employees are using ChatGPT

10-Second Summary

  • Research indicates that the use of Generative AI is widespread in the workplace, that it has the potential to deliver substantial productivity boosts, and that individuals vary significantly in how effectively they use AI

  • Given this, organisations should create an environment of bottom-up experimentation that incentivises sharing insights about the application of AI across teams and the entire organisation. 

  • To do this effectively, leaders must build trust by ensuring that the benefits of AI productivity gains are equitably distributed among employees, managers and leaders. 


The sudden appearance of Large Language Models (LLMs) and Generative AI (GenAI) over the past year has raised many questions for business leaders. Some of these, such as how this technology will change the shape of organisations, the tasks knowledge-workers perform, and our dominant models of management, cannot yet be answered. The answers will come to light over the coming years. 

But there are many things we already know about the effects of AI on work and productivity that have important implications for business leaders today. 

Here are a few things we can safely assume:

  • Your employees are already using these tools – whatever you might think or they tell you

  • GenAI has the potential to immensely accelerate productivity 

  • Your employees will be wildly uneven in their discoveries and effectiveness at using GenAI

  • Your employees currently have little incentive to share what they discover through experimenting with GenAI

These points suggest a distinct leadership strategy for integrating Generative AI into the workplace and unlocking its productivity benefits more broadly – curating an open innovation environment so that the discoveries currently happening in the shadows can be brought into the light.

  1. Whatever they tell you, your employees are already using these tools

    It’s now old news to state that ChatGPT is the fastest-adopted product in history. A recent survey found that nearly a third of the workforce is using AI at work, more than half without approval, and that 64% have passed off AI-generated work as their own. 

    GenAI is a new General Purpose Technology (ironically also abbreviated to GPT). The simple definition is familiar – technologies like steam power, steel or electricity that can be put to a myriad of different purposes. But GenAI is also an unusual general purpose technology for firms. In the past, management decided which technologies would be employed inside the company. These took considerable investment and coordination to deploy; an individual worker couldn’t surreptitiously electrify an industrial process, or smuggle a computer inside a company and immediately put it to work. 

    But LLMs differ from past general purpose technologies in that they are employee or user driven. If you have employees that use computers, GenAI has already found its way inside your organisation, and there’s no hope of quarantining your organisation from this new technology. This also means much of the organic experimentation and innovation taking place with these technologies is driven by users, happening in the shadows, and hidden from management. 


    2. GenAI has the potential to immensely accelerate productivity

    The studies have been growing all year. Programmers using AI as co-pilots become 56% more productive. Mid-level professionals improve the quality of their outputs and become 37% more productive. Business consultants using AI finished 12% more tasks on average, completed them 25% more quickly, and produced 40% higher quality results than those not using AI. If this doesn’t sound like much, consider that adding the single technology of steam power to factories in the 19th century generated only an estimated 25% in productivity gains. The fact that this technology appears to deliver immediate gains is impressive.

    However, over the longer term, the more radical productivity gains of the digital economy won’t come from GenAI as a single silver bullet, but from integrating this technology as a core component of an emerging stack of digital technologies that enable us to automate and augment large segments of existing labour. This is what happened in manufacturing – as we learned to combine steam power with other industrial technologies, such as electricity, railroads and steel, the transition from hand to machine labour generated an estimated 86% in productivity gains. Similarly, the future the digital revolution augurs could be very bright.


    3. Your employees will be wildly uneven in their discoveries and effectiveness at using GenAI

    Another way of understanding a general purpose technology is that we don’t know, and can’t reliably predict, what it will end up being used for. The various ways it can be deployed to add value or improve productivity can only be discovered through tinkering and experimentation. But the nature of experimentation and discovery is that it’s uncertain, and your employees will likely be wildly uneven in their successful applications here.

    Not all your employees will have played around with GenAI tools. Some will have used them, even spent hours tinkering, but not found a way to translate them into improvements in the quantity or quality of their work. But a small number of your employees will likely be lead or power users of GenAI – through combinations of curiosity, skill and luck, they will have figured out ways of using these technologies to automate some work tasks and/or augment their performance in others.

    To unlock the value of GenAI for the firm, it will be essential to find a way to learn what the lead users have discovered, and fold these insights into the knowledge base of the organisation. But this isn’t easy, for the reasons outlined in the next section.

    4. Your employees currently have little incentive to share what they’ve discovered

    Before getting to the reasons employees are unlikely to share what they’ve discovered, let me offer a few reflections on what organisational knowledge actually is. At the heart of organisational knowledge is the relationship between tacit and explicit knowledge – both at an individual and a collective level. At the individual level, skilled employees develop specialised knowledge: part technical (know-what) that can be codified, part a tacit feel for how things need to be done (know-how) that is harder to share, and part an understanding of who you need to talk to to actually get things done (know-who). This combination of different kinds of knowledge makes individual employees difficult to substitute – one of the many reasons staff turnover can be so costly. 

    This distinction between tacit and explicit knowledge also applies to collectives, both small teams and large organisations. High performing teams and organisations are great not only because of the aggregation of individual tacit and explicit knowledge, or their codified collective knowledge, but also because they tacitly develop ways of working effectively together: learning how to constructively iterate an idea as a group, how to hand off tasks smoothly between participants, or productive ways of interacting that sustain motivation during difficult times. 

    Successful organisational knowledge management involves becoming skilled at supporting the conversion between tacit and explicit knowledge – both socialising and externalising tacit knowledge into explicitly codified forms, and creating new combinations of explicit knowledge and internalising them into tacit skills.

    Why are notions of tacit and explicit knowledge relevant to GenAI? Well, LLMs present something of a paradox: while large scale models like OpenAI’s GPT are highly centralised in their underlying architecture, they offer remarkable flexibility and decentralisation in the way users can deploy them on the surface. Learning to use LLMs effectively, through structured prompting and connecting AIs with private data, is akin to converting the tacit knowledge required to complete tasks into explicit, codified knowledge that can be performed by AI. Successfully doing this can come in two varieties. In some cases, full automation of tasks that were previously carried out manually will be possible. In other cases, augmenting manual performance, where AI acts as a co-pilot or another mind, is preferable. Either way, productivity or quality of outputs should improve, and often both. 

    However, currently there are many reasons why employees will be reluctant to share any insights they’ve discovered in this domain.

    First, policies. Many organisations have explicit policies banning or limiting the use of these tools, or implicit limitations on their use due to data and cyber security, or privacy and confidentiality policies. Employees that have discovered useful applications through experimentation have likely disregarded these policies, and sharing their insights would involve admitting as much.

    Second, culture. There is still cultural stigma, and potential legal liability, around AI’s role in authoring knowledge outputs that needs to be clarified. It’s not clear that attributing improvements in the quantity or quality of one’s work to GenAI will be received positively in many contexts.

    Third, and most vexing, misaligned incentives. It’s currently unclear how employees will benefit from sharing what they’ve discovered in terms of productivity gains. For example, if an employee discovers how to improve their productivity by 40% without sacrificing quality, they might fear that sharing how they do this will result in being given (40%) more work, or a reduction in staffing requirements for the same output, or perhaps both. In many organisations this fear is not unwarranted. So, currently, employees that discover benefits are largely incentivised to keep their knowledge private. They can choose to translate their discoveries into producing more output, improving the quality of their work, or simply working fewer hours. Unless organisations think carefully about how to approach this, many employees will continue to keep their discoveries to themselves. 

    5. Leadership Strategies for integrating Generative AI in the workplace

    The solution here is simple: leaders need to encourage employees to experiment with the new technologies and share what they discover. But simple solutions are not always easy. Effectively encouraging experimentation, discovery and dissemination will require a thoughtful realignment of incentives between workers, managers and leaders.

    5.1 Reform organisational policy to encourage experimentation with GenAI

    Organisations that don’t encourage experimentation with these technologies are at serious risk of falling behind those that do. So the first thing to do is to reform corporate policies so that they encourage employees to experiment. In some cases there might need to be an amnesty to clear up fears that admitting to past use will cause problems. In other cases, creating ‘sandboxed’ environments, such as providing datasets that are not private, confidential or otherwise personally identifiable so they can be used for experimentation might be necessary. But the goal should be clear: employees must feel that they will not be punished for experimenting with the technology.    

    5.2 Leaders and managers should model co-creation with AI as part of the culture 

    The quality of outputs from LLMs is partly a function of the quality of the input. For the present at least, they don’t act without direction. Generic inputs and vanilla prompts receive generic, predictable outputs. This is often fine when we want ChatGPT to function like Wikipedia, but usually inadequate when we need it to function like a skilled colleague or assistant at work. Sometimes GenAI tools do exactly what we want the first time and save an astonishing amount of time. But more often, high quality outputs are the product of skilful iteration, like working with a very bright but new employee who requires several rounds of feedback on their work. In other words, high quality LLM outputs are the product of human-machine collaboration. Leaders and managers should not only acknowledge this point but, where possible, model and share how they are exploring it in their own work. Actions here will establish permission much louder than words and written policy. 

    5.3 Design organisational incentives that reward experimentation, discovery and sharing what employees learn

    The previous two recommendations are about removing the stigma associated with using GenAI. This one is about designing incentives so employees see clear benefits in both experimenting and sharing what they discover. As we’ve seen above, the experimentation is likely already happening, it’s the sharing that will need thoughtful encouragement. 

    The strategic challenge here is similar to fostering an open innovation ecosystem, so leaders should adopt a bottom-up mindset and approach, avoiding the temptation to set the ‘AI agenda’ too prescriptively from above. The first problem is that no one yet knows who the lead/power users are, what they’ve discovered, and how it might benefit the organisation at large. 

    A natural place to begin is with people that regularly work together and are accustomed to sharing information, otherwise known as small teams. Incentives should be structured towards rewarding the sharing and socialisation among a team, rather than only benefiting individuals that have discovered insights. Managers will need to figure out how to encourage lead users in their teams to share what they’ve discovered in applying GenAI in their work, and see if they can ‘teach’ their colleagues how to derive similar benefits in their work. Another, complementary approach would be to enable lead/power users to teach and learn from each other – wherever they sit in the organisation. 

    The basic organisational learning cycle here is relatively straightforward and widely known. Organisational incentives – usually expressed through the way people get promoted – need to encourage proactive experimentation, explicitly identifying what works, applying these lessons in new contexts, and sharing these insights in a way that can be emulated across the organisation: a spiralling conversion of tacit, to explicit, back to tacit knowledge, from individuals, through teams, to the wider organisation. The challenge is in cleanly aligning these incentives across teams and organisational hierarchies such that employees can see the organisation is serious. 

Finally, but perhaps most importantly, this strategy does depend on a larger question of trust in organisational leadership. Trust that when employees are asked to share what they have discovered through experimenting with AI, they will also share in the potential productivity gains that derive from these discoveries. Ultimately employees must trust that they won’t be exploited in the process. Ensuring the benefits of this technology are fairly shared will be an important test of leadership over the coming years. 
