
"Just add AI": Balancing AI Implementation with Digital Accessibility

Watch this webinar to learn how you can implement AI with accessibility at the center, not as an afterthought.

Webinar Q&A

If there were a quick win you could suggest for us to get started on our site, where would you recommend we begin?

The most practical quick win is implementing alt text for images using AI assistance. This is a common problem: teams either forget to add alt text or aren't sure what to write. The solution is straightforward: pass an image through an AI tool, ask for recommended alt text, and copy the result into your page so screen reader users get a meaningful description.
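
If you would rather script this step than paste images into a chat window, a minimal sketch is below. It assumes the OpenAI chat completions REST API and a vision-capable model; any comparable endpoint would work the same way.

```typescript
// Minimal sketch, assuming the OpenAI chat completions REST API; any
// vision-capable model with a comparable endpoint works the same way.
// OPENAI_API_KEY is read from the environment.
async function suggestAltText(imageUrl: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text:
                "Suggest concise alt text (under 125 characters) for this " +
                "image, written for screen reader users. Return only the alt text.",
            },
            { type: "image_url", image_url: { url: imageUrl } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim();
}
```

An editor should still review the suggestion before publishing, since the model describes what it sees, not what the image means in your page's context.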

Another immediate opportunity is using AI to optimize content for appropriate reading levels and plain language. You can configure AI tools to generate or check content at specific grade levels that match your organization's standards, whether that's grade 5, 8, or 12. This helps make content more accessible while maintaining consistency across your site.
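
You can also make the grade-level target measurable rather than aspirational by scoring drafts with a standard readability formula such as Flesch-Kincaid alongside the AI rewrite. A rough sketch follows; the syllable counter is a heuristic, so treat the score as indicative rather than exact.

```typescript
// Rough Flesch-Kincaid grade-level check (sketch only: the syllable
// count is a heuristic, so treat the score as indicative, not exact).
function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (w.length <= 3) return 1;
  const groups = w.replace(/e$/, "").match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function fleschKincaidGrade(text: string): number {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim()).length || 1;
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
}

// Flag copy that exceeds your organization's target, e.g. grade 8:
const grade = fleschKincaidGrade("The committee will subsequently promulgate revised guidance.");
if (grade > 8) console.warn(`Reading level ${grade.toFixed(1)} exceeds target of 8`);
```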

I find using AI quite scary because I don’t always trust it to do/say the right thing. How can I go about picking AI tools that are trustworthy?

To address concerns about AI reliability, focus on selecting tools that allow you to control the data sources being used. Look for AI systems that can be configured to only index and reference your own content rather than pulling from broader internet sources.

This approach provides two key benefits: you'll get more reliable responses that are less prone to hallucination, and you'll avoid potential issues with leveraging other people's copyrighted information. The ability to lock down AI responses to specific content areas or repositories within your own organization significantly reduces trust concerns while improving output quality.
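
In practice, "locking down" an AI this way usually means a retrieval-augmented setup: index only approved content, retrieve the passages relevant to each question, and instruct the model to answer from those passages alone. A minimal sketch, where retrieveFromOwnIndex and callModel are placeholders for your own search index and model client:

```typescript
// Sketch of answering only from your own approved content.
// retrieveFromOwnIndex and callModel are placeholders for your own
// search index and LLM client; the constrained prompt is the key idea.
async function answerFromOwnContent(question: string): Promise<string> {
  const passages = await retrieveFromOwnIndex(question, { topK: 5 });
  const prompt = [
    "Answer using ONLY the approved content below.",
    "If the answer is not in the content, say you don't know.",
    "",
    ...passages.map((p, i) => `[${i + 1}] ${p.text} (source: ${p.url})`),
    "",
    `Question: ${question}`,
  ].join("\n");
  return callModel(prompt);
}

declare function retrieveFromOwnIndex(
  query: string,
  options: { topK: number }
): Promise<Array<{ text: string; url: string }>>;
declare function callModel(prompt: string): Promise<string>;
```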

It can be quite hard to monitor ongoing accessibility. Any AI tools or processes you recommend?

AI-powered monitoring tools that track accessibility performance across websites are now available, including options within platforms like the Squiz DXP. These tools are evolving beyond simply identifying problems to synthesizing issues and providing actionable guidance on how to fix them.

The key challenge many organizations face isn't identifying accessibility problems - most already have tools that reveal where issues exist. The real difficulty lies in making sense of those problems, prioritizing them effectively, and determining the best actions to take. AI is emerging as a solution to help with this synthesis and prioritization process.
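
As one concrete shape this synthesis can take: automated checkers such as axe-core already return structured violation reports, and those reports can be handed to a language model for grouping and prioritization. The scan below uses axe-core's browser API; summarizeWithAI is a placeholder for whichever model you call.

```typescript
import axe from "axe-core";

// Run an automated scan with axe-core, then hand the structured
// violations to a language model for synthesis and prioritization.
// summarizeWithAI is a placeholder for whatever model you call.
async function auditAndSummarize(): Promise<string> {
  const results = await axe.run(document);
  const violations = results.violations.map((v) => ({
    rule: v.id,
    impact: v.impact, // "minor" | "moderate" | "serious" | "critical"
    description: v.description,
    affectedNodes: v.nodes.length,
  }));
  const prompt =
    "Group these accessibility violations by likely root cause, order " +
    "them by user impact, and suggest a fix for each group:\n" +
    JSON.stringify(violations, null, 2);
  return summarizeWithAI(prompt);
}

declare function summarizeWithAI(prompt: string): Promise<string>;
```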

Academic integrity, copyright and fair use, etc. are really important to our organization. Can you recommend any resources to address concerns about the ethics of using generative AI, when e.g. Disney is suing Midjourney for plagiarism?

For addressing copyright and academic integrity concerns with generative AI, the key is using tools that allow you to control the data sources. Instead of relying on large language models that reference copyrighted content from across the internet (like the Disney imagery issues you mentioned), look for AI tools that let you limit responses to only your own institutional content.

This approach involves using AI systems that can be configured to index and reference only your organization's own content repositories, specific sections of your website, or approved materials. By restricting the AI to work solely with your own data, you avoid the ethical issues of leveraging other people's copyrighted information while also getting more reliable, accurate responses that align with your institution's standards and values.

When implementing AI assistants across multiple enterprise products, how do you establish clear decision-making processes? What approaches help ensure teams can ship WCAG-compliant features without needing rework?

When implementing AI assistants across multiple enterprise products, three key approaches ensure WCAG compliance without costly rework:

  1. Create a single AI product charter. Establish one foundational document defining AI's role, scope, and accessibility principles, including mandatory WCAG compliance. Make it a living document with formal sign-off from product, design, engineering, accessibility, and legal teams to ensure alignment across the organization.
  2. Implement governance with authority. Form a cross-functional AI Steering Group that meets regularly and owns final calls. Include an accessibility owner with veto rights over non-compliant design patterns. This eliminates decision paralysis and prevents last-minute accessibility rewrites.
  3. Make accessibility non-negotiable. Treat accessible patterns as mandatory acceptance criteria, not optional fixes. Use a shared design system or pattern library for AI interactions so teams can ship quickly while maintaining compliance standards from day one.

This framework creates accountability, reduces decision fatigue, and embeds accessibility into your development workflow.

Working on using AI for internal search, how far off is a purely 'vocal' AI interface we could use on a website?

Purely vocal AI interfaces for websites are very close to reality - essentially already here. The technology is just a small step beyond current text-based AI search capabilities, requiring only the addition of voice-to-text functionality to existing AI search systems.

This development is particularly exciting from an accessibility perspective, as it removes significant barriers for users who currently rely on screen readers to navigate websites. Instead of having to click through various elements and navigate complex interfaces, users could simply ask questions aloud and receive direct answers, making websites much more accessible and user-friendly.
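
The browser-side building blocks already exist. A minimal sketch using the Web Speech API follows; browser support varies, and askSearch stands in for whatever endpoint your text-based AI search already exposes.

```typescript
// Voice front end over an existing AI search endpoint, using the
// browser's Web Speech API. askSearch is a placeholder for the
// endpoint your text-based AI search already exposes.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

function askByVoice(): void {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.onresult = async (event: any) => {
    const transcript = event.results[0][0].transcript; // the spoken question
    const answer = await askSearch(transcript);        // existing text search
    speak(answer);                                     // read the answer aloud
  };
  recognition.start();
}

// Speech synthesis is the other half of the same browser API family.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

declare function askSearch(question: string): Promise<string>;
```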

How do we balance personalization with the right to privacy? With so many data leaks and so many backend systems talking to each other, the idea of every site knowing – for example – that I'm visually impaired or neurodivergent, seems dangerous.

The key to balancing personalization with privacy is being selective about what information you share and when. You don't need to opt into personalization on every site - instead, choose to share data only when it provides clear benefits to you. For example, you might opt into certain audience categories like "student" to receive relevant educational content, while still relying on assistive tools like screen readers without sharing detailed personal information about disabilities or neurodivergence.

Pay close attention to the permissions you're allowing on each website. The US is behind Europe and other regions when it comes to consent frameworks, so it's especially important to be deliberate about your choices.

For organizations implementing these technologies, compliance is crucial. Know what data you're legally allowed to collect and what you're not. Avoid personalizing and segmenting audiences based on sensitive personal information like disability status. Instead, use tools that can effectively communicate relevant information to users without storing cookies tied to specific individuals or maintaining detailed personal profiles.

There's also a generational shift happening - younger users are becoming more willing to share their data when they see significant benefits, provided it's done ethically and with appropriate controls. However, regardless of age, the principle remains: only share what's necessary and beneficial to you, and ensure organizations are handling that data responsibly.

My biggest accessibility challenge when implementing AI is that the results aren’t the same each time – so how do I know and trust that the results are accessible?

While AI-generated results may vary each time, the key to ensuring accessibility and trust is focusing on the quality and accessibility of the underlying content that the AI draws from. If your source content is well-structured, accessible, and compliant – with proper metadata, alt text, and accessible formats – then even though the AI's responses may differ in wording, each answer should still meet accessibility standards.

To build trust, it's important to thoroughly test the AI by running multiple versions of key questions and reviewing the outputs for accessibility compliance. If any response fails to meet requirements, this indicates a need to improve the source content or how the AI is configured. This process of testing and refining helps ensure that, despite variability, all AI-generated answers remain accessible and compliant.
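
One way to operationalize that testing is to re-run each key question several times and apply the same automated checks to every response. In the sketch below, askAI is a placeholder for your AI search endpoint, and a missing-alt-text check stands in for whatever rules you enforce.

```typescript
// Re-run key questions and apply the same accessibility checks to each
// response. askAI is a placeholder for your AI search endpoint; the
// missing-alt-text regex is one example rule, not a full audit.
async function testQuestionStability(question: string, runs = 5): Promise<void> {
  for (let i = 0; i < runs; i++) {
    const html = await askAI(question);
    const hasImageWithoutAlt = /<img(?![^>]*\balt=)[^>]*>/i.test(html);
    if (hasImageWithoutAlt) {
      console.warn(`Run ${i + 1}: response contains an image without alt text`);
      // Repeated failures point back at the source content or prompt config.
    }
  }
}

declare function askAI(question: string): Promise<string>;
```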

As AI evolves and becomes more agentic, do you see our roles becoming more rooted in strategy and governance?

Yes, as AI becomes more capable, team members' roles will increasingly focus on strategy and governance. While AI can automate and handle more complex tasks, human oversight remains and will always be essential for setting direction, providing clear instructions, and ensuring alignment with business goals. Skills like prompt engineering will become more important, as they help translate strategic objectives into actionable guidance for AI systems. Additionally, the human element – such as creating original content and maintaining expertise – will remain a key differentiator, ensuring quality and relevance in an environment where AI-generated content is widespread. Team members will shift from operational tasks to strategic leadership and oversight of AI-driven processes.

As AI features evolve within Squiz DXP, how do you recommend organizations embed accessibility governance into the AI development lifecycle so accessibility isn’t a retrofit?

To avoid retrofitting accessibility into Squiz DXP's evolving AI features, we recommend weaving governance throughout your development process:

  1. Start with inclusive design principles. Make accessibility a core acceptance criterion in AI projects from day one – not something you check off later in QA. Build WCAG and regional standards (like EN 301 549 or Section 508) right into your project briefs and user stories so compliance becomes automatic, not optional.
  2. Bring in accessibility expertise early. Get accessibility specialists, assistive-technology users, and content editors involved during design, training, and testing – not just at the end. Run accessibility workshops for your developers, content authors, and product owners to help everyone understand who they're building for.
  3. Create checkpoints at every stage:
    1. Discovery: Ask the tough questions upfront – how will your conversational AI work with screen readers? Will it maintain proper semantic structure?
    2. Development: Implement automated accessibility linting and content-fragment validation before merging changes (a sketch of one such gate appears after this list).
    3. Testing: Combine automated testing tools with manual testing by diverse user groups.

This approach bakes accessibility thinking into each phase instead of tacking it on later.
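
For the automated-linting checkpoint in particular, one concrete shape is a script your CI runs against a preview deployment, failing the build when errors are found. This sketch uses the pa11y Node API; the preview URL and the errors-only threshold are assumptions you would tune per project.

```typescript
import pa11y from "pa11y";

// CI gate sketch using the pa11y Node API: scan a preview deployment
// and fail the build if any errors are reported. PREVIEW_URL is an
// assumed environment variable your pipeline would provide.
async function accessibilityGate(): Promise<void> {
  const results = await pa11y(process.env.PREVIEW_URL ?? "http://localhost:3000", {
    standard: "WCAG2AA", // pa11y's WCAG 2 AA ruleset
  });
  const errors = results.issues.filter((issue) => issue.type === "error");
  for (const issue of errors) {
    console.error(`${issue.code}: ${issue.message} (${issue.selector})`);
  }
  if (errors.length > 0) {
    process.exitCode = 1; // non-zero exit blocks the merge
  }
}

accessibilityGate();
```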

Can AI content generation/review be managed at the site level to automatically enforce style guides, WCAG compliance, and reading level requirements? Rather than relying on editors to remember specific prompt parameters, is it possible to implement these standards by default across an entire website?

The AI content tools (prompts) are configured and managed at the DXP Console level. Here’s how it works:

  1. Available AI prompts at site level: You can create and manage a library of AI prompts that are available to editors through the dropdown in the Visual Page Builder (such as “Make this more casual” or “Apply Squiz tone of voice”). Editors must select content and choose to run the AI tool.
  2. Standards applied through AI prompts: Each AI prompt operates independently, but style guides, WCAG compliance, and reading level requirements can be built into individual prompts. This means you can have a “Rewrite to age 16” prompt that also includes instructions to conform to WCAG and the brand voice (a sketch appears after this list).
  3. Editor-driven process: The current workflow requires editors to actively select content and apply the appropriate AI prompts. As part of our roadmap, we are actively developing two key enhancements: combined prompt tools to increase editor efficiency, and a validation layer that will check content against accessibility and style standards automatically before publishing.
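
To make point 2 concrete, here is a hypothetical sketch of what such a prompt definition could look like. The shape is illustrative only, not the actual Squiz DXP prompt configuration.

```typescript
// Hypothetical shape of a prompt-library entry (illustrative only, not
// the actual Squiz DXP configuration): the editor-facing label stays
// short, while the standards travel inside the instruction itself.
interface AIPrompt {
  label: string;       // what editors see in the Visual Page Builder dropdown
  instruction: string; // what the model is actually told to do
}

const rewriteToAge16: AIPrompt = {
  label: "Rewrite to age 16",
  instruction: [
    "Rewrite the selected content for a reading age of 16.",
    "Conform to WCAG 2.2 AA: use descriptive link text, do not convey",
    "meaning through color or formatting alone, and keep sentences plain.",
    "Apply the organization's brand voice: friendly, direct, active voice.",
  ].join(" "),
};
```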

Squiz is continually innovating and improving, looking to release new and updated tools to support accessibility.