What organizations can learn from real experiences with AI and accessibility

Explore lived experiences and real-world guidance on using AI to improve digital accessibility without leaving users behind.
Lorna Hegarty 23 Jul 2025

Recently, Squiz hosted an insightful webinar exploring the intersection of AI and accessibility, featuring a conversation between Assaph Mehr (Senior Product Manager for AI at Squiz) and Neil Jarvis, a freelance digital accessibility consultant based in Wellington, New Zealand.

As someone who is completely blind and uses assistive technology daily, Neil brought invaluable lived experience to the discussion. Assaph complemented this with practical insights from Squiz's global work with public and private sector organizations.

The conversation covered everything from personal experiences with AI to organizational implementation strategies, offering both optimistic possibilities and important cautionary advice.

Here are the key insights from their discussion:

How has AI personally changed your experience as someone who uses assistive technology?

Neil: AI has been genuinely transformative in very specific ways. Tasks that once took me 3–4 hours can now be completed in about 30 minutes. That’s the kind of labor-saving benefit technology has delivered for over a century.

But AI has also enabled access to visual information I could never reach before, like photographs. I have thousands of family photos I was present for but never "saw". One night I spent hours going through them, with AI describing each image, reliving those moments. The value of that experience is hard to explain. It let me reflect on family events in a way most people take for granted.

There are practical uses too, like choosing matching clothes, planning walking routes with detailed landmark descriptions, and understanding room layouts. GPS might tell me where to go, but AI adds context like “watch for the pavement narrowing here.” And in my experience, it's often accurate.

What about the challenges? When does AI fall short?

Neil: AI can also lie, and you need strategies to deal with that. Early on, I tried to test ChatGPT and other systems. I asked about a famous football match I remember well. It got the winner right but fabricated the score and goal scorers. These details are easily verified online, but they could be misleading if you didn’t know better.

I've also had AI describe room layouts with 90 to 95% accuracy, which is comparable to what many sighted people would provide. But then it would stubbornly insist an object was to the right when I knew it was to the left. Normally, AI concedes when corrected. But that time it doubled down and we had a full argument about it. I knew the room well, so it was fine. But in unfamiliar environments, that kind of inaccuracy could be dangerous.

How do people in the accessibility community navigate this gap between AI's promise and reality?

Neil: It comes down to familiarity and strategy. You have to know how to ask the right questions, how to prompt effectively, and, crucially, how to verify, challenge, and confirm information. This is why there are big conversations about how far you can trust AI to improve accessibility.

You can’t just assume AI-generated code is accessible. It may claim it is, but often it's not. Sometimes it even introduces new accessibility issues. Like any tool, the key is knowing how to prompt, check, and confirm. There’s no shortcut for that.
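
To make that concrete, here is the kind of issue Neil describes: AI assistants often generate a clickable `div` where a native `button` belongs. A minimal hypothetical sketch (the `submitForm` handler is a stand-in for whatever the control actually does):

```typescript
// A pattern AI code assistants commonly produce: a clickable <div>.
// It works with a mouse but is invisible to keyboard and screen reader users.
const badButton = document.createElement("div");
badButton.className = "btn";
badButton.textContent = "Submit";
badButton.onclick = () => submitForm();

// The accessible version: a native <button> gets keyboard focus,
// Enter/Space activation, and the correct role with no extra work.
const goodButton = document.createElement("button");
goodButton.type = "button";
goodButton.textContent = "Submit";
goodButton.addEventListener("click", () => submitForm());

// Stand-in for the control's real behavior.
function submitForm(): void {
  console.log("form submitted");
}
```

Both versions look identical on screen, which is exactly why "it may claim it is, but often it's not" holds: you only find the difference by checking.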

How does the digital divide intersect with accessibility?

Neil: Accessibility and the digital divide are deeply connected. The divide exists between people who can afford expensive equipment versus those who can’t, or between those with educational opportunities and those without. Disabled people tend to have lower incomes and face greater barriers to education and employment. Just look at disability unemployment rates.

As more essential services move online, this becomes critical. Yes, you can call your utility company, but be ready to wait two hours and navigate a complex verification process. They’ll tell you over and over to “go online” or “use the chatbot.”

But what if you don’t have the confidence, equipment, or ability to go online? That two-hour wait becomes unavoidable. And many of the most powerful AI tools are locked behind paywalls, such as $40-per-month subscriptions for real-time interaction. If you can’t afford that, you’re on the wrong side of the divide again.

What's your take on automated accessibility tools, including AI-enabled ones?

Neil: Automated tools are an important part of the toolkit, but they probably catch no more than 30 to 40% of issues. That means 60 to 70% of problems aren't picked up. That's not a quality assurance level you'd accept in any other area of work.

These tools can also mislead. For instance, they might detect that alt text exists. But if that text just says “this is a piece of alt text,” it technically passes but is functionally useless. Many tools won’t catch that.
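
A simple way to see the gap: an existence check passes placeholder alt text that a human would reject instantly. Here is a hypothetical heuristic sketch (the placeholder patterns are illustrative, not taken from any real tool):

```typescript
// What a basic automated check verifies: alt text is present and non-empty.
function altTextExists(img: HTMLImageElement): boolean {
  return (img.getAttribute("alt") ?? "").trim().length > 0;
}

// A crude heuristic for "functionally useless" alt text.
// The pattern list below is illustrative only.
const PLACEHOLDER_PATTERNS = [/alt text/i, /^image$/i, /^photo$/i, /^picture/i, /^untitled/i];

function altTextLooksUseless(img: HTMLImageElement): boolean {
  const alt = (img.getAttribute("alt") ?? "").trim();
  return PLACEHOLDER_PATTERNS.some((pattern) => pattern.test(alt));
}

// An image with alt="this is a piece of alt text" passes the first check
// but trips the second, and still needs a human to write a real description.
```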

The same goes for AI tools. Use them for heavy lifting by all means, but remember that's all they're doing. They won't catch more than half of what needs to be caught, so you still need human oversight and context. It’s about using the right tool in the right way at the right stage.

How can organizations implement AI without losing accessibility progress they've made?

Neil: Build accessibility into your end-to-end workflow, from ideation through design, development, testing, and maintenance. Make accessibility part of the process just like security is. Don't think you'll do all the work and then take a quick trip back to check if it's still accessible.

Assaph: We're developing tools that help identify issues and suggest fixes, but we always keep humans in the loop. If you’ve got 10,000 assets missing alt text, reviewing them manually is impractical. But surfacing them and making automated suggestions is useful. Our goal is to boost productivity without sacrificing quality.
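
As a hypothetical sketch of that human-in-the-loop pattern (the `suggestAltText` function stands in for whatever image-description model you use; none of this is a Squiz API):

```typescript
interface Asset {
  id: string;
  url: string;
  alt?: string;
}

interface ReviewItem {
  assetId: string;
  suggestedAlt: string;
  status: "pending" | "approved" | "rejected";
}

// Stand-in for a call to an image-description model.
async function suggestAltText(imageUrl: string): Promise<string> {
  // e.g. POST the image to a vision model and return its one-line description
  return `Description of ${imageUrl}`;
}

// Surface assets missing alt text and queue AI suggestions for human review,
// rather than writing them straight back to the CMS.
async function buildReviewQueue(assets: Asset[]): Promise<ReviewItem[]> {
  const missing = assets.filter((a) => !a.alt || a.alt.trim() === "");
  const queue: ReviewItem[] = [];
  for (const asset of missing) {
    queue.push({
      assetId: asset.id,
      suggestedAlt: await suggestAltText(asset.url),
      status: "pending", // a human approves or rejects each suggestion
    });
  }
  return queue;
}
```

The key design choice is that the AI output lands in a review queue with a `pending` status, not in production content: the machine does the heavy lifting across thousands of assets, and a person makes the final call.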

What do you see coming in the future for AI and accessibility?

Neil: I’m cautious about predictions, but this feels like a punctuation moment, similar to the rise of online information in the 1990s. We won’t understand the full impact of large language models until we look back in 20 years. But we can already see people working and accessing information differently than they did just three years ago.

Some people argue we should “let AI sort it all out” because manual accessibility work is too slow. That thinking is being challenged, and rightly so. Accessibility only happens when people take responsibility for their products.

AI can certainly help you create and maintain more accessible products, but it won't deliver that product by itself. Even AI that hasn't been invented yet will still make serious errors about accessible code and design, because it learns what it thinks accessibility is from other people.

What practical advice do you have for organizations wanting to start with AI?

Assaph: Just start, but don’t expect a silver bullet. In practice, some tasks that look equivalent to humans might have vastly different difficulty levels for AI. Finding those edges takes experimentation.

These are general-purpose tools, so you need to ground them in your organizational knowledge. But first, ask yourself: is your content good enough? What are the gaps? How do you need to prepare it for AI to consume effectively?

So yes, dream big. But start small. Build internal capability slowly and feed the tools more data as your confidence grows.

Neil: AI is coming whether you’re ready or not. So the real question is how you’ll deal with it. Start with a plan and take baby steps in the right direction. When you've done that, take bigger steps and move on to the next challenge. It's a journey that takes time, planning, and recognizing that AI should be part of your daily workflow, not something you bolt on.

Any final thoughts on user education and AI tools?

Assaph: Having a tool doesn't mean you've mastered its application. If I give you a pencil, you can scribble, but could you create art like da Vinci? If I give you an angle grinder, could you build a shed? Powerful tools don't eliminate the need to understand what you're doing with them, or the need for domain expertise.

You don’t need to understand the math behind an AI model, just like you don’t need to know how a motor works. But you do need to understand what the tool is good for, its capabilities, and how to get the most value out of it. That’s a human responsibility, and it’s why we still need to be in charge of these technologies.


This blog post is based on a webinar hosted by Squiz. The full recording is at the top of this post, while additional resources are available to attendees and can be shared within teams for further learning and discussion.