I’m writing this from an office, despite having the option to work from home today. Being in person with colleagues is just one way in which informal interactions become more productive. I promise I didn’t do it just to make a point, but there is one.
Creativity often lives in the grey space that disappears when you try to specify in advance what will happen. Writing code, or system guardrails, forces you to do exactly that. So does writing policies, though if we are honest, they often get ignored. Creativity, then, is under threat from AI. Not because we will replace great novelists with ChatGPT, but because our messy world risks becoming more specified and ordered.
With that in mind, I’d like to draw your attention to the article titles linked below under If You Only Read One Thing, which fizz with the type of wit that AI still struggles with (Alex Danco: Have you ever seen a goth downtown?). Businesses (and increasingly the Civil Service) are under pressure to show productivity gains and efficiency benefits. As they should be. But from what I hope is one of the more human newsletters about business and AI, a plea: let’s not pretend that optimising and learning are the same thing. There’s space for both.
James
If You Only Read One Thing
Can AI actually be creative? For me, there are two intertwined questions people usually mean when they ask this.
The first is about livelihoods. Will it still be possible for people to make a living designing and creating things as they did before? Or will AI, somehow, do the creating for us? And if not the creating, will it do whatever we were paying creators for, that wasn’t the artistic endeavour itself?
Copyright is central to the question of who shares the gains. But so too are quality thresholds: for many tasks (such as product shots, or the cover photo for this newsletter) the output needs to be “good enough”, not “as good as possible”. As automated tools reach this level, less and less “routine” work will go to creators, and often this was the funding stream that allowed them to do the really creative projects. See what has happened to voiceover work for actors, for example.

The UK Government has been consulting on Copyright and AI recently, spurring the creation of the MakeItFair campaign, spearheaded by UKAI (which we recently joined as members). Among other excellent points about fairness, they make a highly nuanced claim about who benefits: a more permissive copyright regime, meaning less protection for creators, might indeed be the best outcome for companies training the large foundation models. But, as it stands, none of these companies are British, so prioritising their needs over the needs of British creators is a self-inflicted wound, harming our creative industries so that we can buy better general-purpose AI from abroad.

British AI companies tend to focus much more on specific segments and tooling, and for that they need public trust. So moves which undermine public perception of AI, broadly construed, will harm British AI companies, not help them. Turning the entire creative establishment against AI as a concept - regardless of whether it’s a useful grouping or not - is exactly this sort of PR loss. Check out the rest of the campaign here: MakeItFair. UKAI - Creative Industries Report. UKG Consultation.
The second is about society as a whole: will AI tools allow us to be more creative? Often the assumption here is that the AI tools will do the creating directly. The AI companies certainly like to push this as part of the story. More likely, however, is that a suite of tools will emerge that allows everyone to attempt simple creative endeavours, whilst allowing the real experts to do much, much more than they could before (much as Excel did for financial analysis - a comparison my more artistic friends won’t thank me for).

Through this lens, the key question becomes: how do I use this to get the best results? Is it by talking to it, like the software engineer’s method of describing your idea to a rubber duck (or teddy bear: see below)? Is it by outsourcing part of the problem? Working this out is a real skill, still in the early stages of its evolution, though Alex Danco’s fantastically named article gives some good suggestions. This is also an area where lots of startups are working, designing the bundle of services that makes it easy for creators (of whatever level of expertise) to do their best work. Be incredibly sceptical of people who tell you AI can’t be creative, because they’ve already shown a personal lack of imagination: the ways that these tools can be used for art are only just being created - or dare I say, imagined. Sam Altman: Creative Writing with ChatGPT. Dan Davies: Building a better teddy bear. Alex Danco: Have you ever seen a goth downtown? Andrew Chen: The AI Horde.
Contents
If You Only Read One Thing
Can AI be creative?
What Is GenAI Good For?
Removing Friction
That Depends on the Model!
How To Successfully Integrate GenAI With Existing Organisations
Dynamic Approaches beat Target Fixation
Not Following Instructions
Don’t Confuse R&D with ROI
Hard vs Soft Skills
Our Recent Work
The Rise of Humanoid Robots
Zooming Out
Energy and Data Centres
US Industrial Strategy
Scepticism Rising
Learning More
Bottlenecks
Survey of US Workers
Prompting Advice
Co-Pilot Case Studies
The Lighter Side
What Is GenAI Good For?
Removing friction - but what if that’s the point? One key challenge facing the makers of creative tools is maintaining the right amount of friction. This goes far beyond creators, however. Often the journey or the challenge of an activity is the point, not something to be avoided. Apple used to talk about joy, rather than convenience. Speak to the companies that are most loved by their customers and you’ll hear something similar. We have talked before about friction being a necessary defensive element of some processes, such as job applications. But friction of the right kind can also have wonderful results. As you start on AI strategies, be sure to make time to ask: what would I never want to automate? Apple on Joy (video at the end). AI Digest - June 24: Friction. Levels of Friction. What would you never use tech for? Friction is no longer a strategy.
Buy this card here: Link.
An aside: Last week I saw the fantastic production of The Little Prince at the London Coliseum, sadly now finished. They translated this quote as “It is the time you wasted on your rose…” rather than “spent on your rose”. Cue a brilliant conversation with my girlfriend about the difference between the two translations, which is perhaps the entire point of the quote!
What is GenAI good for? That depends on the model. A helpful graphic showing just how many similarly named AI models OpenAI alone has, and which are available at each subscription level. Source.
How To Successfully Integrate GenAI With Existing Organisations
Dynamic approaches beat target fixation. The latest generation of models is once again markedly cheaper, and scores better on benchmarks than the state of the art from just a few months ago. In six months’ time they’ll be uninteresting, because they’ll have been surpassed again. What does this mean for you, given you aren’t building an AI company? It means that technology which gets better and cheaper is the new normal, however counterintuitive that is to think. Any AI strategy that doesn’t account for this change will quickly become outdated - the modern-day equivalent of “fighting the last war”. We see lots of organisations build an AI strategy premised on what AI can do now - or, more commonly, what it could do a few months ago when the planning work started. This is understandable, but will leave you stuck by the time the effort comes to fruition. The best strategies out there plan for continuous evolution. They’re building systems that can continually be updated, and in most cases that means training people, not buying tech. Responding to change is a capability, one at which you can be more or less mature. It’s not as eye-catching as telling the board you are “doing AI”, but, if you can include this as part of the work, it’ll pay dividends quickly in terms of results. Chart source. Target Fixation.
A key mantra for your AI strategy: People don’t always follow instructions, machines usually do. The Chinese have a phrase, translating roughly as “heaven is high, and the emperor is far away”, which is used to describe situations where regional governments don’t strictly adhere to the directives from the centre. For leaders of large organisations the sentiment is likely to be familiar. As human labour is gradually replaced with AI agents with less leeway to do their own thing, the ability to control activity from the centre is much strengthened. However, this may not always be a good (or bad!) thing:
On the one hand, economists have long noticed that middle managers are often misaligned with company goals, choosing options that appear safer to those evaluating their performance, rather than maximising expected returns for the company. This is a key blocker to many an AI adoption strategy. AI agents don’t respond to promotions and performance reviews in quite the same way. Link.
By contrast, sometimes the result of following the rules to the letter is gridlock, not harmony. Humans are regularly willing to cut corners to get things done, and so can briefly operate within a productive grey zone. If you have ever asked Finance or HR for an exception “just this once”, you’ll know what I mean. Agents will resist this, and may inadvertently constrain the space for action.
Similar conversations occur when outsourcing to vendors, particularly offshore. Organisations must be somewhat defensive (once we give them our data we don’t know what they’ll do with it), but at the same time, staying away from the messiness required to get the job done (cheaply) is much of the point. As more and more suppliers offer outsourced options, such as AI-enabled Customer Service functions, this creative tension is worth keeping in mind!
Don’t confuse R&D work with projects seeking a return on investment. Mixing these two up leads to work that's too rushed to innovate, and ROI projects saddled with unrealistic transformation goals. Link.
Writing code is a decreasingly large part of building software (that works!). Companies tend to have about six engineers for every product manager. This ratio will decrease. As writing code becomes quicker (with AI support), working out exactly what to build, so that it works as intended, will be a larger share of the work. In the early days of the web, building something that worked was everything, and unsurprisingly the most successful companies were founded by Stanford Computer Science PhDs. By the 2010s, however, building websites had become much easier, as the suite of tools matured. What mattered more was user acquisition, advertising and product design. Expect to see something similar within your organisation, and plan headcount accordingly. Writing code is getting easier, so you’ll get more of it. Those employees with so-called softer skills will consequently be more important, and scarcer! The “it works” feature. Engineering ratios.
Our Recent Work
AI advances accelerate robotics. We’ve seen this thesis popping up frequently in recent months, with a steady stream of news. We took a look behind the headlines to separate reality from product announcements. Link.
Zooming Out
Data centres will be powered by solar. Ben James explains the claim. Link.
I continue to think the shift to solar (and all the economic consequences of extremely low cost intermittent electricity) is one of the most under-discussed technology shifts at present. More on this to come shortly.
Related: the US is seeking leadership ‘across the stack’ where ‘the stack’ includes power and capital, as well as models and compute. Thanks to Matt Inness for sending this my way. Link.
But scepticism is growing too. YouGov via Exponential View.
Learning More
The Bottleneck to Progress is Humans. Tyler Cowen speaks to Dwarkesh Patel - two of the thinkers most followed by researchers in the field. Podcast.
Survey: US workers and GenAI. 30% use it at work, and a third of those use it daily. They report that tasks are done three times quicker when they use GenAI. Paper.
Prompting advice. From Greg Brockman (President of OpenAI) who really should know. Link.
How are businesses using Microsoft Copilot? From Microsoft, so take with a pinch of salt. But useful as idea prompts, or references for a business case. Link.
The Lighter Side
Robots falling over. Link.
As easy as 1,2.0,3(o). Link.