AI has become a buzzword in tech, but simply adding it to a product does not guarantee value. Designers need to understand the underlying technology and develop their approach while building on the principles of human-centered design.

A giant hand made of code reaching out toward a person with abstract representations floating, cute surrealist style, digital art, generated with DiffusionBee

☕ Designing meaningful products with AI

I keep getting asked how to design for AI. Since the hype just keeps growing, seemingly everyone is either already working on an AI product (or, at minimum, wondering how to add ChatGPT to all the input fields) or expects to work on one very shortly. Hence the interest in how to actually design with these systems, or at least how to get started.

I wrote an article a couple of years ago (Designing AI products) when machine learning started to take off. It more or less covered the then state of the art and our understanding of the challenges. Working with AI tech more recently, plus all the advances in the field (generative AI systems especially), allows me to add detail, in particular about what this all means for designers.

Three statements to start with - none of these should be highly controversial:

  1. AI is a technology; just having it in a system doesn’t make the software valuable.
  2. If you are not using AI tools today, you are missing out.
  3. Our design approach stays the same the more it changes.

On the first point: AI is not magic. While newer systems allow for capabilities that seemed impossible a couple of years or even months ago, the same can be said of technological progress in general. With AI tech we can create features, or indeed whole new classes of products, that were previously deemed infeasible.

I find there are a lot of analogies to when mobile apps exploded. There was a gold rush for new apps, and a lot of companies made panicked choices. But the new tech didn’t lead to better experiences on its own. Just because a piece of software now had a mobile app version didn’t mean it was more valuable to users. The same applies to AI tech: just because an app has AI features, it doesn’t necessarily mean users will find it useful.

And, as with any new technology, the trade-offs are only now being explored. Until there is more clarity from all this discovery and from building new applications, no one can be certain whether a given product needs AI tech or not.

To be able to perform this discovery and building, designers clearly need a good understanding of the underlying technology. Ultimately, the best way to get into AI design is to play with it: start using the new tools already available. The boost in personal effectiveness is just a bonus. Explore generative tools, and try the APIs - this helps in understanding the material of design. And try to build something on your own. It’s the same idea as in mobile design: those who didn’t have a touchscreen phone (ahem, an iPhone) but were trying to build something were missing out on what these devices were capable of.

To design great experiences, designers need to have an intimate knowledge of the material they are working with - be it web, mobile - or in this case AI tech.

Working with and using these tools also helps cut through some of the hype, without relying on other people’s sometimes biased opinions. As of this writing, prompt engineering is all the rage: the craft of directing generative systems (ChatGPT, Midjourney, etc.) to get the right results. To understand why this matters, designers should have at least a user’s understanding of how prompts work, and of how they can be embedded into an experience.
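
As a minimal sketch of what embedding a prompt into an experience can look like, here’s a hypothetical “Summarize” feature where the prompt template lives inside the product rather than in a chat box. It assumes the OpenAI chat completions REST endpoint and Node 18+ for the global fetch; the function name and the template itself are made up for illustration.

```typescript
// Hypothetical "Summarize" button handler: the prompt engineering lives
// here, invisible to the person using the product.
async function summarizeForUser(note: string): Promise<string> {
  // Invented prompt template - in a real product this would be iterated
  // on and tested like any other piece of the experience.
  const prompt = `Summarize the following note in one friendly sentence:\n\n${note}`;

  // OpenAI chat completions REST endpoint (requires OPENAI_API_KEY).
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  });

  const data = await response.json();
  return data.choices[0].message.content;
}
```

The person clicking “Summarize” never sees the prompt - which is exactly why designers need a feel for how prompts behave before hiding them behind a button.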

But will all of this matter? Current AI tech needs to be shaped. With generative AI, most of the UX is centered around text boxes - will this remain so? To cut through the tech, designers need to remember that we are still working on human experiences, so most of the same principles and guidelines apply. Building on these ground principles, the design process will need to adapt.

Some questions to think about when designing a system with AI tech:

  • How are people’s mental models affected if they realize it’s AI? Is “automagical” weird? Is it too weird to be trusted?
  • What is the interaction model? What’s the relationship between the person using the system and the system itself? Does it act more like a tool, an awesome butler (like Alfred Pennyworth), or a buddy?
  • What solutions are hard with AI tech? What solutions are easy with AI tech?
  • How can experiences be more dynamic with AI tech?
  • How to design experiences with uncertain outcomes? Outputs are often good enough, but do we accept an 80% “good enough” rate? (See the sketch after this list.)
  • How to prototype AI systems? AI models might only reveal behavior once they are deployed, so showing Figma prototypes won’t cut it - tighter collaboration with engineering and data science teams is desirable.
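
On the uncertain-outcomes question above, one common pattern is to gate the AI output on the model’s confidence and degrade gracefully to a manual flow. A minimal sketch, where the threshold, the Suggestion shape, and the copy are all invented for illustration:

```typescript
// Invented types and names - a sketch of confidence gating, not a real API.
interface Suggestion {
  text: string;
  confidence: number; // model-reported score between 0 and 1
}

const CONFIDENCE_THRESHOLD = 0.8; // tune per product; 80% may not be enough

function present(suggestion: Suggestion): string {
  if (suggestion.confidence >= CONFIDENCE_THRESHOLD) {
    // Confident enough: offer the AI result, but keep it editable.
    return `Suggested: ${suggestion.text} (tap to edit)`;
  }
  // Not confident enough: fall back to the manual flow instead of
  // showing a probably-wrong answer.
  return "We couldn’t generate a good suggestion - please fill this in manually.";
}

console.log(present({ text: "Thanks, see you at 9am!", confidence: 0.92 }));
console.log(present({ text: "???", confidence: 0.41 }));
```

Where the threshold sits is a product decision as much as a technical one: lowering it means more automation, but also more wrong answers shown to people.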

All of the above doesn’t even touch on the ethical (and legal) questions. Those look like an unstable Jenga tower about to collapse, and design teams will need to deal with them once the current fever clears a bit.

All of this seems to point to a few emerging high-level principles:

  • Principle of Transparency: AI-powered products should provide clear explanations about how they work and what data they use, to help users understand and trust their outputs. For example, a credit scoring algorithm could show users the factors it considers in making its decisions, like payment history or income level (see the sketch after this list).
  • Principle of User-Centered Design: Designers should focus on creating AI products that solve real problems for users, rather than just using AI for its own sake. This means understanding users’ needs and integrating AI in a way that’s intuitive and helpful. For instance, a chatbot for a customer service app might be designed to answer frequently asked questions quickly and accurately.
  • Principle of Collaboration: Product teams working on AI products should work closely with stakeholders from various disciplines to create products that reflect diverse perspectives and expertise. This can help ensure that the final product is effective, ethical, and accessible to a wide range of users.
  • Principle of Diversity and Inclusion: AI products should be designed to serve diverse groups of users, including those with different abilities, backgrounds, and cultures. This means accounting for things like language barriers, accessibility needs, and bias in training data. For example, a language translation tool could be designed to recognize and respect regional dialects or slang.
  • Principle of Ethical Considerations: AI designers should consider the ethical implications of their products, such as potential biases, unintended consequences, and impacts on privacy and security. They should also design products that align with ethical values like fairness, accountability, and transparency. For instance, facial recognition software could be designed to minimize the risk of false positives and protect user privacy.
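
As a minimal sketch of the transparency principle in the credit scoring example above, a product might surface the top factors behind a score next to the number itself. The factor names, weights, and helper function are all invented for illustration:

```typescript
// Invented shape: a signed contribution per factor, as a model might expose.
interface Factor {
  label: string;
  contribution: number; // positive helps the score, negative hurts it
}

// Pick the few factors that mattered most and phrase them for people,
// not for data scientists.
function explainScore(score: number, factors: Factor[], topN = 3): string {
  const top = [...factors]
    .sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution))
    .slice(0, topN);
  const lines = top.map((f) => `${f.contribution >= 0 ? "↑" : "↓"} ${f.label}`);
  return `Your score is ${score}. Biggest factors:\n${lines.join("\n")}`;
}

console.log(
  explainScore(710, [
    { label: "On-time payment history", contribution: 42 },
    { label: "High credit utilization", contribution: -18 },
    { label: "Length of credit history", contribution: 12 },
    { label: "Recent hard inquiries", contribution: -5 },
  ])
);
```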

Working with AI tech will pose challenges to design teams. There needs to be a good balance between not jumping on the AI bandwagon and building expertise in AI through experimentation. Human qualities will again need stronger advocacy as the excitement around the new tech rises. A lot of the ethical considerations might shift to legal ground (as with GDPR or accessibility), but not all. Junior headcounts will get slashed as their tasks are taken over by AI tools.

In the end, for design leaders, AI doesn’t change that much. They need to maintain spaces for their teams to experiment, collaborate, and fail. Teams need to improve their resilience. Allies will be needed for collaboration. Plus maybe a design studio to create better input fields.

🥤 To recap

  • As with any other tech, merely having AI features in software doesn’t guarantee that it is valuable.
  • Designers need to have an intimate knowledge of AI technology to design great experiences with it, and the best way to gain this knowledge is to use the new AI tools available and build something on their own.
  • Since designers are still creating human experiences, most of the same principles and guidelines apply.
  • The design process needs to adapt to AI tech: when designing AI products, designers need to think about the interaction model, mental models, feasible solutions, dynamic experiences, uncertain outcomes, and how to prototype AI systems.
  • To create successful AI products, designers should focus on the principles of transparency, user-centered design, and collaboration, and keep ethical and legal concerns in mind.

This is a post from my newsletter, 9am26 - subscribe here:

🍪 Things to snack on

There is a lot of hype going on around AI, and it’s sometimes tough to understand what matters and why. I found The Algorithmic Bridge newsletter by Alberto Romero quite helpful for following trends, at a level of detail slightly above the technical and a bit farther away from the hype arena.

To understand how to design with AI, designers should explore by creating new things with this tech, argues Gus Baggermans in Embracing AI as a material for design. The article also gives some general guidelines: Play responsibly, Design for discoverability, Help people navigate the maze of possibilities, and Consider legal & ethical implications.

Design for AI: What should people who design AI know? by Hal Wuertz is a tool and an attempt to define the design skills needed for creating systems with AI. It provides useful behaviors for the five skill categories it presents (Technical, Ethics, Collaboration, Strategy, and Interactions). The list of behaviors might also give ideas for what to learn or improve.

In UX of AI, Lennart Zibruski collects 16 excellent principles, with added context to learn more. Some of the principles show how designing for AI is not that different from designing any software (“Start with the user”), while others are quite specific (“Explain the results”).

Joël van Bodegraven focuses on anticipatory design (how systems will think ahead of their users) in Design principles for AI-driven UX, describing systems designed with AI simply as “smart”. The article also gives five principles to get started: Smart design has a purpose, Smart design is an extension of human capabilities, Smart design anticipates, Smart design should humanise experiences, and Smart design is proactive.

Google has published a bunch of great and insightful articles, starting with The UX of AI by Josh Lovejoy, a case study about the Clips camera. It makes three main points on how AI needs human-centered design: solutions need to address a real human need, the intelligence needs guidance, and building trust needs to be core.

One problem with current machine learning-based systems is that it’s sometimes unclear why they do things the way they do. This is the explainability problem, as Meg Kurdziolek writes in Explaining the Unexplainable: Explainable AI (XAI) for UX. The article provides an overview of the techniques data scientists use and, more importantly, of how this problem affects users, as most users won’t have a good mental model of what’s happening inside such a system.
