“ChatGPT is not particularly innovative.” Meta’s Turing Award-winning chief AI scientist, Yann LeCun, turned heads this week with his statements about the generative AI service from OpenAI. LeCun was making the point that the underlying technologies were developed over many years by several AI labs and are well-known to the research community.
“It’s nothing revolutionary, although that’s the way it’s perceived in the public,” LeCun said, adding, “It’s just that, you know, it’s well put together, it’s nicely done.” He went on to suggest that ChatGPT’s capabilities are available at a handful of other tech companies and research labs. “It’s not only just Google and Meta, but there are half a dozen startups that basically have very similar technology to it,” added LeCun. “I don’t want to say it’s not rocket science, but it’s really shared, there’s no secret behind it, if you will.”
LLMs and the Commoditization of Artificial Intelligence
Setting aside the inter-AI-lab drama, LeCun’s statement highlights a remarkable tectonic shift that is underway: AI is becoming a commodity. (Or at least, a certain kind of AI is.)
Just a year or two ago, this was almost unimaginable. Building and training AI systems was a highly specialized effort requiring deep technical knowledge and custom tailoring for every use case, often at great expense. And the quality of results could vary considerably across providers, depending on their expertise. In other words, AI has historically resisted commoditization.
Contrast that with where we are today with foundation models like ChatGPT. These large, pre-trained models have dramatically reduced the friction of using AI as a generic raw material in the production of other goods and services. Problems that were intractable just last year are suddenly not just within reach; their solutions are right in front of us.
But what about fungibility? Commodities must be interchangeable. A bushel of wheat is a bushel of wheat, whether you buy it in Nebraska or Cairo. While today there are slight variations in both the quality and the “flavor” of different foundation models, as these tools advance, we’ll likely no longer be able to perceive the differences among future generations. Driving this race towards exchangeability is a common user interface binding all of these models together—natural language. Natural language is becoming the universal API, or as Andrej Karpathy tweeted, “the hottest new programming language is English.” The point: a marketplace of models all designed to respond to any natural language prompt will converge towards homogeneity. Further, the universality of natural language as the interface means models can be swapped in and out with ease, without having to rebuild entire systems.
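To make that swappability concrete, here is a minimal Python sketch of what treating natural language as the interface can look like. The provider classes and the summarization prompt are hypothetical stand-ins (no real SDK calls are made); the point is that application code only ever deals in prompts and text, so one model can be exchanged for another without rebuilding the system.

```python
from typing import Protocol


class LanguageModel(Protocol):
    """Anything that can turn a natural-language prompt into text."""

    def complete(self, prompt: str) -> str: ...


class HostedModelA:
    """Stand-in for one provider's hosted model (stubbed for illustration)."""

    def complete(self, prompt: str) -> str:
        return "[provider A's response to: " + prompt[:40] + "...]"


class HostedModelB:
    """Stand-in for a competing provider with 'very similar technology'."""

    def complete(self, prompt: str) -> str:
        return "[provider B's response to: " + prompt[:40] + "...]"


def summarize_notes(model: LanguageModel, notes: str) -> str:
    # The "API" is just English; only the backing model changes.
    prompt = f"Summarize the following notes in three bullet points:\n\n{notes}"
    return model.complete(prompt)


notes = "Discussed Q3 roadmap; agreed to ship the beta by June; Dana owns QA."
print(summarize_notes(HostedModelA(), notes))  # swap in HostedModelB() freely
```

In a market of commoditized models, that one-line swap is essentially the whole migration.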
There are downstream effects of this. When AI models are themselves commodities, we will see the rapid commoditization of AI features and capabilities. Just take a look at the startups created in the last two quarters, and it’s clear this is already happening. Compared to a year ago, it’s now orders of magnitude easier to build AI products that fill in the “wouldn’t it be cool if ____?” blank. We are about to see a flood of new AI-infused products hit the market in search of demand.
This will be both exciting and exhausting at the same time. It will be exciting because we’re going to see an explosion of product innovation and experimentation in the coming months. It will be exhausting because, for every real product innovation, there will be a hundred other products that over-hype, over-promise, or completely miss the mark. Just about every application you use is about to get bloated with a smattering of LLM-driven features, for better or worse. And things that we found exciting about AI just a few months ago may seem commonplace or even annoying a few months from now.
Differentiation through Human-centered Design
How can product designers in the coming AI boom stand out from the crowd and build products that matter? Way back in 2019, at the height of the last AI hype cycle, Jeff Bigham offered relevant insights. At the time, the initial deep learning hype was starting to wane as the high-flying “human-parity” marketing claims started crashing back to reality. However, instead of the hype-bust leading to an AI Winter, Bigham predicted an AI Autumn was on the horizon—a bountiful time when people would sort out the true innovations from the hype by solving real problems for real people.
As Bigham wrote at the time: “We’re already shifting away from hype’y AI replicating human performance, and moving toward more practical human-centered applications of machine learning. If hype is at the rapidly melting tip of the iceberg, then the great human-centered applied work is the super large mass floating underneath supporting everything.”
Sorting out the promise from the hype requires a human-centered lens and a set of cross-disciplinary skills and methods that are best typified by training in human-computer interaction (HCI), or as Jeff puts it, “HCI reaps the AI harvest.”
In his words: “HCI (and the incorporation of HCI methods by people trained in AI) is why I think AI will have not a winter but an AI Autumn this time around. People who can apply ML to solve real human problems will become the most important tech people out there. Powerful ML is increasingly captured in easy-to-use libraries; if you want to stay ahead of the curve, you need the skills we teach in our HCI curriculum.”
Jeff’s advice is equally important now in a still-hyped market where LLMs are rapidly becoming a commodity. The number of AI-infused apps is about to explode, and there are fewer technical tools with which to dig moats, making it all the more important for product designers to find other ways to differentiate themselves.
Designing AI Products that Matter
Highly differentiated products can be built with generic, commoditized parts. There was a sea of undifferentiated MP3 players flooding the market before the iPod upended everything. On the surface, Pinterest had all the raw ingredients of so many other social image-sharing websites, and yet it was crafted into something entirely its own. At the heart of such product innovations is a huge amount of empathy for the end user and, often, good human-centered design. From a product design perspective, the best path to differentiation is to (1) design the right thing and (2) design the thing right—the two hallmarks of human-centered design (HCD).
Designing AI products is different from designing traditional interactive systems. Researchers have identified challenges stemming from both the uncertainty around AI’s capabilities and the complexity of working with AI outputs, which are often non-deterministic. Methodologies and best practices are still in their infancy, where they exist at all, but a few approaches have started to percolate out of the research. Researchers at Microsoft, for example, developed a set of 18 Guidelines for Human-AI Interaction that designers can use to help position their systems for success. These guidelines nudge designers to think about ways of taming the unpredictable behaviors of AI systems that can frustrate or confuse users.
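As a purely illustrative sketch (not an excerpt from the guidelines themselves), here is one way that kind of taming might show up in code: gate low-confidence output so it is never shown, and frame what is shown as a dismissible suggestion rather than an action taken on the user’s behalf. The Suggestion type, the confidence score, and the threshold are all hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    text: str
    confidence: float  # hypothetical score from the model or a calibration layer


def present(suggestion: Suggestion, threshold: float = 0.7) -> str:
    """Decide how (or whether) to surface an AI suggestion to the user."""
    if suggestion.confidence < threshold:
        # When the model is unsure, staying quiet beats confusing the user.
        return ""
    # Frame the output as a dismissible suggestion, never an automatic action.
    return f"Suggested reply (Esc to dismiss): {suggestion.text}"


print(present(Suggestion("Sounds good, see you at 3pm.", confidence=0.91)))
print(present(Suggestion("Cancel the customer's subscription.", confidence=0.42)))
```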
In 2019 I co-authored a research paper led by Qian Yang that explored the unique challenges of following an HCD practice when designing natural language processing systems. One insight we found was that being able to quickly “sketch” low-fidelity NLP concepts and iterate on them alongside users unlocked a ton of value in the design process. Quickly moving through different high-level concepts is crucial to figuring out the right thing to design. Today, this kind of language “sketching” can be done with users practically off the shelf, using the GPT-3 Playground or the ChatGPT web interface, but product designers still need to make the effort to build those iterative feedback loops into their process.
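Here is a minimal sketch, in Python, of what such a sketching loop might look like. The prompt variants and sample inputs are invented for illustration, and call_model is a placeholder for wherever the text actually gets generated (an API call, the GPT-3 Playground, or pasting into ChatGPT by hand); the value comes from reviewing the grid of outputs with users, not from any particular model.

```python
from itertools import product

# Several phrasings of the same low-fidelity concept ("summarize a support ticket").
PROMPT_SKETCHES = [
    "Summarize this support ticket in one sentence: {ticket}",
    "You are a support lead. What is the customer really asking for? {ticket}",
    "Rewrite this ticket as a single action item for an engineer: {ticket}",
]

# A handful of representative inputs gathered from (or with) real users.
SAMPLE_TICKETS = [
    "The export button does nothing when I click it on Safari.",
    "I was charged twice this month and can't find an invoice.",
]


def call_model(prompt: str) -> str:
    # Placeholder: swap in a real completion call, or paste the prompt
    # into the GPT-3 Playground / ChatGPT for a quick manual test.
    return f"[model output for: {prompt[:45]}...]"


# Review every concept/input pairing alongside users before committing to one design.
for sketch, ticket in product(PROMPT_SKETCHES, SAMPLE_TICKETS):
    prompt = sketch.format(ticket=ticket)
    print(prompt, "->", call_model(prompt), "\n")
```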
Qian has done some groundbreaking research that conceptualizes how designers might treat AI as a new kind of material with which they can design, evoking the idea that it can be combined and configured with other materials to create novel design objects. This kind of framing strikes me as particularly apt for a world where AI is an abundant resource that can be easily tapped into.
Ultimately, we’re all still learning how to do good work and build products that matter with these exciting new tools. The winners in a world where AI is a commodity will be the artisans who can understand real human problems and craft exceptional experiences with these newly bountiful raw materials.
Justin Cranshaw is an HCI researcher designing and building technologies that synthesize and summarize conversations. He’s co-founder and head of product at Maestro AI, a digital chief of staff that keeps your team in sync with summaries of everything going on. Connect with him at @justincranshaw on Twitter or justin@hci.social on Mastodon.