
The past, present, and future of design and usability with Jakob Nielsen

Last updated
11 June 2025
Published
10 June 2025
Creative
Sherline Maseimilian

Dive into Jakob Nielsen's Insight Out 2025 talk on the past, present, and future of design and usability.

Interviewed by Kristine Yuen, Senior Design Manager at LinkedIn, Dr Nielsen explores the enduring relevance of usability heuristics in the age of AI, critical ethical considerations as AI advances, the evolving landscape of accessibility, and the ROI of design.

Kristine Yuen (interviewer): Jakob, it’s the 31st anniversary of your 10 Usability Heuristics. With the rise of generative AI tools that often feature incredibly simple interfaces, do you feel your usability heuristics still apply, or do you feel like they should evolve?

Jakob Nielsen: I think they still apply; they are still solid. The reason the heuristics are solid is precisely because they are heuristics. They are very broad, general rules or rules of thumb. They're not specific design standards that change with new technologies. Heuristics relate to how humans use technology and the main barriers to accomplishing things with technology, and that doesn't change because humans don't change.

I think that [the heuristics] are not used well in current AI tools. They've been getting better over the last two years, but not at a high pace. I believe the reason for that is that AI came out of AI research, from AI scientists, who are experts in machine learning and other disciplines. I have enormous respect for their work. That said, they are all technologists and mathematicians. They don't necessarily understand human factors.

Furthermore, I think they suffer from the curse of human factors: developers and the people who create systems know too much about their own system, their own technology, and their own design. For them, it's natural to use these interfaces, but it's not for the average person.

There are many examples of today's AI tools failing to comply with well-established usability principles. I believe the main reason is that we have not had enough UX people, design people, and usability research people working on those tools.

Jakob Nielsen on stage at Insight Out 2025. Photo: Clara Rice

Kristine Yuen (interviewer): You generally talk about AI in a very positive and optimistic light in a lot of your articles, but is there anything you fear about AI? And if so, are there things that we as designers should be very mindful of as we're working with these systems?

Jakob Nielsen: What I fear most is that AI development will slow down too much due to negative press coverage. The media has an enormous negativity bias—"if it bleeds, it leads"—which makes people fear too much. This is inherent in media; Hollywood and fictional shows also emphasize conflict for a good story, often overemphasizing negative parts.

I believe there are such strong upsides to AI that delaying it is actually the biggest ethical problem we face. You won't see the millions of people who get a worse education because they lack AI in, say, Africa, or patients who receive worse treatment because they could have been diagnosed better with AI. These benefits would accrue to millions or billions if we had better AI sooner.

The beauty of AI is that every time something goes wrong, you can and should fix it so it doesn't happen again. This is why I'm optimistic about things like self-driving cars. In the US alone, 40,000 people are killed annually by human-driven cars. If all cars were AI-driven (with current best statistics, like Waymo, experiencing only about 20% of the accidents that human-driven cars do), that number would drop to 8,000. So, AI would save 32,000 lives annually, but the newspaper headline would be "AI Kills 8,000 People."

Kristine Yuen (interviewer): Last year and this year you've written two different articles claiming that accessibility is "dead" in the age of AI. This has drawn some criticism from accessibility advocates who don't believe AI is a silver bullet. Why do you hold this perspective, and why do you think it's more important to accelerate accessibility in AI versus fixing existing product issues?

Jakob Nielsen: I think with AI we can achieve accessibility once, rather than fixing millions of individual websites and digital products. We know that fixing every existing product isn't happening. If you look at my book Designing Web Usability, I had an entire chapter on accessibility. I've been pushing for it for more than 25 years and many other people have done it as well, obviously, and yet we have a situation where it's in a poor state. With AI, it's a smaller number—perhaps 10 major AI apps may dominate in the future. This centralized approach has a much greater chance of success.

I'm very optimistic about AI's potential to help people with disabilities because it can adapt information on the fly to people's needs, creating an individualized user interface. Whatever a person's problem, AI can address it. For example, for someone with low literacy, AI can simplify information to their reading level. This is already being used in healthcare to create easy-to-understand post-visit summaries.

Similarly, for blind users, current approaches involve taking a graphical interface and making it readable aloud, which is always suboptimal. AI, however, can take the underlying information and directly translate it into an auditory user interface that works well with their abilities. So, while I'm not saying accessibility is dead yet, I think the current approach is the wrong one for the future. In five years or so, AI's ability to transform and individualize information for users' needs and capabilities holds vastly more potential than trying to retrofit accessibility onto hundreds of millions of websites, which simply isn't being done.

Kristine Yuen (interviewer): Let’s talk about the future of AI and the design industry. There's been a decline in tech jobs, and you've even suggested that design agencies larger than 10 people might not exist much longer. However, you've also indicated that a decline in ROI on design is actually a signal of success. Can you explain this, and what proof points can design leaders use to prove design's value?

Jakob Nielsen: First, I want to clarify that when I say UX has had a declining return on investment, it doesn't mean no return, just smaller than it used to be. It used to be stupendously big for two reasons:

  1. Early design was terrible: 25 years ago, early websites, DOS, Unix, and mainframes were atrociously difficult to use. When something is incredibly bad, there's immense opportunity for improvement. Today, design is much better, so the improvement potential is lower.

  2. Low investment, high impact: In the early dot-com days, I could review a client website for two hours and identify 10 fixes that would double their revenues. Usability tests with a few users would reveal glaring issues that were relatively easy to fix, often by simply removing "stupid things." The ROI was incredibly high.

Today, design is relatively good, so a redesign or project will only move the needle a little. Secondly, the investment is higher because more thorough research and work are required. So, the return is smaller, the investment is higher, leading to a lower ROI. However, this doesn't mean UX work is zero, negative, or a waste of money. It just means you don't have those several thousand percent gains we used to have.

In business terms, this means UX is becoming a commodity. While "commodity" often sounds negative, it's actually good: it means it's ubiquitous, universal, and essential, like water or electricity. Companies that provide water or electricity are vital; it's honest, good work. It's just not enormously special or rare; it's something we expect. I think the same is true for design and UX. We expect to have it. Most executives, at a minimum, pay lip service to wanting good UX.

We also need to remember that customer quality demands are dramatically increasing. Expectations are set by every other experience. Bad design tolerated 10 years ago won't be accepted today. AI will further contribute to this, as our standard of living is expected to approximately double in the next 20-30 years. The richer people get, the more they demand quality, and low-quality products will not sell.

Insight Out 2025, Fort Mason, San Francisco. Photo: Clara Rice

Kristine Yuen (interviewer): Given the rise of generative AI tools that can go from prompt to design to prototype to code, and the layoffs we're seeing, what makes you believe that design jobs will continue to grow in-house, and specifically, what evidence do you have?

Jakob Nielsen: I think most of the growth will be in-house, but at the same time, I absolutely also believe that a lot of the work that's currently done by human designers will be done by AI. And the way you can resolve that contradiction is that the total amount of work will be so much larger because so much more is being done by software and the requirements for the quality are going up. For example, in the old days, there was no usability, but then we got to this point where it represented a few percent of a development project. In the future it might easily be 80% of a development project because the coding will be vibe coding. The bigger component of the project becomes figuring out what should be done. And the smaller part becomes actually making it happen because making it happen will be the job of AI, which includes making the design happen. 

The human role will be to orchestrate or synchronize all of the different things that are done by AI. So we're going to uplift the contribution of the humans to be almost like a manager of the AI as opposed to the ones doing the handcrafted design. 

I like to speculate here because there's one thing I'm not totally sure of: is there a scaling law for usability insights, the way we have AI scaling laws for so many other things? Right now we're scaling up the amount of work that can be done based on what's already known. What may or may not happen is that if we scale further and have AI analyze hundreds of thousands of videos of user testing sessions and millions of designs, for example, will its insights scale to the point where AI can know what a good design is?

The human role will come down to three things. First, agency: AI can do what you tell it to do, but you've got to tell it what to do. Second, judgment. And third, interacting or dealing with other humans. We've got to communicate and collaborate, and those are human skills. If you want to be crude about it, it is manipulation. You've got to be able to manipulate the other people in the organization, so you have that agency, you have that judgment to say, "this is the design we want," but you've got to convince—I say convince is a nicer word, manipulate is a harder word—you've got to make the rest of the people buy in.

Those types of activities are going to be the human ones, and then making the icon, making the screen design, you know, the AI does that.

*Interview questions and answers have been edited for clarity and length. Watch the original session on YouTube.
