
"Just add AI": Understanding AI's impact on digital experiences

AI is everywhere – but that doesn't mean it belongs everywhere. In this first webinar of our 'Just add AI' series, we show you how to cut through the hype and panic to implement AI thoughtfully.
Assaph Mehr 02 Sep 2025

Webinar Q&A

How do we get started? What are the practical first steps to start using AI in website development?

Start with how an intern can help you. Your approach should be:

  • Think of AI as a useful but untrustworthy intern
  • Ask it to do tasks like creating 5 variations of text for different audiences (Gen Z, elderly, etc.)
  • Review and select or tweak the results
  • Use it for tasks like answering FAQ questions, but don't put it in charge of the call center

Key mindset: "How do I integrate this useful but untrustworthy intern into my workflow to help me get my job done faster?"

You mentioned picking a use case that’s low risk. Can you share an example of what a low-risk implementation might look like?

Here's a university case study example:

  • The situation: University had registration information and policies
  • The low-risk approach: Instead of deploying AI directly to students, they used it to help customer support staff
  • How it worked: Support staff could quickly search for policies they remembered but couldn't find or recall exactly
  • The benefit: AI provided the answer and the relevant page, helping staff formulate responses faster
  • The progression: Once comfortable with AI's accuracy, they could then expose it directly to students

This was low-risk because it helped people who already knew the answers find information faster, rather than making AI the primary source of answers.

Can you talk about GEO for websites - how do we ensure a website is optimized for GEO?

GEO stands for Generative Engine Optimization. It's about making sure your information appears in AI search solutions like ChatGPT, Google AI, etc.

Key principles (similar to SEO foundations):

  • Content quality first: Just like SEO was the "cherry on the cake" for already-good websites, GEO requires really good content
  • Audit your content: Check that pages have clear, descriptive titles that relate to their content
  • Avoid jargon: Content shouldn't be full of jargon or acronyms - write for regular people, not just content that's been approved by lawyers or academics
  • Clear structure: Ensure headlines relate to the content below them
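Parts of this audit can be automated as a first pass. The sketch below uses only Python's standard library to flag pages with a missing title or no top-level heading; checking that titles and headings actually relate to the content still needs human review, and the thresholds here are illustrative, not a real audit tool:

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Collect the <title> text and heading tags from one HTML page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.headings = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag in ("h1", "h2", "h3"):
            self.headings.append(tag)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit_page(html: str) -> list[str]:
    """Return a list of basic structural content issues found in one page."""
    parser = AuditParser()
    parser.feed(html)
    issues = []
    if not parser.title.strip():
        issues.append("missing or empty <title>")
    if "h1" not in parser.headings:
        issues.append("no <h1> heading")
    return issues

page = "<html><head><title></title></head><body><h2>Policies</h2></body></html>"
print(audit_page(page))  # flags both checks for this page
```

Running this over the pages you actually care about (rather than the whole site) gives you a triage list to review by hand.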

The bottom line: Having really good content will help you - AI systems will naturally find it easier to work with clear, accessible content. This also helps regular visitors, not just AI systems.

Related to the GEO question - in your experience, what percentage of organizations actually have content that's truly "AI-ready" without significant work? What does that content audit reality check usually reveal?

Content audits usually reveal what you suspected already - that there are content issues all over the place. All organizations have issues to some degree; the trick is identifying the specific set of information that is in high demand / needed as input for AI, and the extent of the specific issues within that.

It doesn't matter if a page that was last viewed by a visitor in 2017 has bad content. Once you identify the content you care about, you can scan for potential issues or try running our testing tool to see how AI would use it and triage the issues.

You mention the importance of cross-functional collaboration. What's the biggest political or organizational hurdle you've seen when trying to get legal, IT, content, and business teams aligned on AI initiatives?

The challenges are the same as with other tech implementations.

Common organizational hurdles you'll face:

  • Ego-driven initiatives: Projects that start with "We need to do AI so our CEO can brag to his CEO friends"
  • Conflicting C-level directives: Different executives with different priorities
  • Resource constraints: IT organizations saying "We're swamped and don't have time to deal with this"
  • Risk aversion: Legal teams saying "Nobody can touch our content"

Navigation strategies you should use:

  • Understand organizational dynamics: For your particular organization, where do you see the pull? And where do you see the resistance?
  • Work with both sides: Collaborate with supporters while addressing resisters' concerns
  • Address specific concerns: For cybersecurity fears, explain the vendor supply chain, hosting security, data tenancy, and information sanitization
  • Build comprehensive solutions: Address concerns across the whole supply chain and organization
  • Change management: Change management is a practice in its own right - this requires dedicated expertise and process

The key is getting everyone in the room, talking to them, understanding all their concerns, understanding the business and then navigating how to address all those issues with appropriate solutions.

Do we need to test the AI’s responses regularly? For example, test they are answering the FAQs well, and often. Can this be automated?

Yes, absolutely. This is critical.

The problem: There are thousands of vendors of AI chatbots out there, and it can be as easy as signing with one in the morning and deploying it on your website in the afternoon. The risk is that you could then see your name in the newspaper for all the wrong reasons by tomorrow lunchtime.

Your solution process should include:

  • Identify target questions you want the AI to handle
  • Test repeatedly - run the same questions multiple times to check consistency
  • Test in bulk - run 20, 50, 100 questions at once
  • Iterate configurations until you're happy with answers
  • Monitor ongoing conversations - see what unexpected questions users ask
  • Continuous evaluation - check if answers are appropriate and identify areas needing adjustment
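The repeat-and-bulk testing loop above can be sketched as a small script. Here `ask_chatbot` is a hypothetical placeholder for whatever API your chatbot vendor exposes, and the expected-phrase check is simple keyword matching, not a full evaluation of answer quality:

```python
# Sketch of a bulk consistency check for a chatbot's answers.
# ask_chatbot() is a stand-in for your vendor's API; the canned
# answers below exist only so the example is self-contained.

def ask_chatbot(question: str) -> str:
    # Placeholder: replace with a real call to your chatbot.
    canned = {
        "What are the enrolment deadlines?": "Enrolment closes on 31 January.",
    }
    return canned.get(question, "I'm not sure.")

def run_bulk_test(cases: dict[str, str], repeats: int = 3) -> list[str]:
    """Ask each question several times; flag runs missing the expected phrase."""
    failures = []
    for question, expected_phrase in cases.items():
        for attempt in range(repeats):
            answer = ask_chatbot(question)
            if expected_phrase.lower() not in answer.lower():
                failures.append(f"{question!r} (run {attempt + 1}): {answer!r}")
    return failures

cases = {
    "What are the enrolment deadlines?": "31 January",
    "Can I defer my offer?": "defer",
}
failures = run_bulk_test(cases)
print(f"{len(failures)} failing runs out of {len(cases) * 3}")
```

Running the same questions multiple times is what surfaces inconsistency; running many questions at once is what makes re-testing after every configuration change practical.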

Timeline: It's not a 24-hours-and-it's-live thing. It's a 3-month process to build confidence before public deployment.

Where can we get analytics data about AI bots and agents - from Google Analytics?

Depends on exactly what you mean. Squiz provides analytics data on the chatbots and conversational search that we provide for customer websites. If you're looking for information about general chatbots (e.g. ChatGPT), that's available in industry reports, though not always accurate, as vendors aren't always as transparent as we'd like. If you mean referral traffic to your site from those services, that will be in Google Analytics.

How can we encourage our users to still exercise their critical thinking, while using AI services?

The core issue: People have a cognitive bias - because it's a computer system, they assume it's objective and unbiased, and therefore accept its answers without thinking.

The reality: LLMs were trained on things found on the Internet... and then post-trained by specific groups of people. So instead of being an objective computer system, we just managed to codify all of humanity's biases into this black box that we can't see.

Practical approaches you can use:

  • Use concrete examples: Show them examples to demonstrate that technology isn't always trustworthy
  • Set clear expectations: This is a new technology. It has been known to recommend people eat rocks for breakfast. Take everything it says with a grain of salt.
  • Use the intern analogy: It's like that intern... a high school student who's really keen and can work fast, but they occasionally go rogue and hallucinate, so check everything they do before you roll with it.
  • Resource recommendation: Read "AI Snake Oil" - a book about AI hype that explains what the technology is good at and what it isn't

You mentioned using AI responsibly - do people care about being warned when they are seeing AI content or interacting with AI agents, etc.?

No, people generally don't pay attention to warnings.

However:

  • For internal use: Brief your staff and build awareness gradually
  • For external/consumer use: Include warnings but also use an "opt-in" approach rather than forcing it
  • People care when it goes wrong: You need to be confident it won't go wrong, stay on topic, and have safety switches that cut off inappropriate responses

Your strategy should be deliberate: plan for it in stages and over time.

Do you have a feel for how far off is voice input for AI queries, specifically on mobile?

Not far at all. Many vendors have a voice mode already in their apps (e.g. you can converse with ChatGPT, Gemini, and Claude via voice rather than typing). The technology is progressing together with the rest of the industry - not perfect, but constantly improving.

Is it likely that Squiz will incorporate more AI tools in the future that will potentially save organizations from building it themselves? What I am getting at is the balance between being an early adopter vs waiting on Squiz.

During the webinar we've touched on the Build / Buy / Partner options. When it comes specifically to Squiz, we have a big program of work in flight. However, while we're testing and experimenting with the latest and greatest, we only release features that we believe are mature enough to keep our customers safe.

So watch this space for future announcements, or reach out directly to Assaph to have a chat about your use case and the Squiz roadmap.

Looking ahead, is there a key AI implementation trend you're seeing that we should be preparing for?

There is a lot of buzz about "agentic AI" and about agent-to-agent interactions. I think this is nascent, and while some technologies are promising (e.g. MCP) the full capabilities are still TBD.

Considering we're still in the midst of a hype-cycle, it won't hurt to take new trends with a grain of salt, wait for the dust to settle, and push on real business problems while we wait.

What are your thoughts on organizations that want to gate content from ChatGPT so it does not impact their traffic and stats on the readers of their content? Is it even possible for a website to gate from the likes of ChatGPT?

It's possible to a degree. You can block known crawlers (OpenAI, Perplexity) from accessing your site. That won't block Google and their AI Overviews, or services that crawl in ways that can't easily be attributed to them.
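Blocking the known crawlers is typically done in robots.txt. The user-agent names below are ones OpenAI and Perplexity publish for their crawlers; this relies on crawlers honouring robots.txt and won't cover new or undisclosed user agents:

```text
# robots.txt - block AI crawlers that identify themselves.
# Does not stop services crawling under generic user agents.
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```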

This is also a bit like closing the door after the horse has bolted - all those services have likely already crawled your site and kept a snapshot. So while blocking may reduce costs if they repeatedly hit your website, in the long run - especially if people rely more on answers from these services - it may lead to inaccurate answers about you, or to your content being excluded from those answers altogether.

What is the role of a website in this AI LLM world? Is the goal still to get people to our websites OR should we think about this differently?  What is the role of rich media assets, etc.?

The role hasn't changed - you are still responsible for putting out great content that's accurate, accessible, and useful. AI can enable you to modify the experience: test that people are engaging with rich media, adjust layout and language for users who want a quick answer vs those who want to explore the site, etc. The role hasn't changed, even if the tools available have.

AI depends on high-quality, well-structured, and accessible data to train, operate, and deliver accurate outputs. Is it still worth trying if that's not your situation?

Without knowing your unique situation I can't say exactly, but potentially yes. When you're trying to do this kind of project you have the option to build, buy or partner:

  • Build: You can try to build everything yourself and you will be competing in a market where everybody is after talent. You will need to hire the right expertise, but if you do, you will have all the data and everything within the organization to do it - given enough time and resources.
  • Buy: If you buy something off the shelf, you need to see that what they expect you to deliver in terms of data is what you have, what it'll cost you to get it to the right shape, and whether the features which they offer you are right for you.
  • Partner: You need to partner with somebody who actually understands what you're doing, that they understand your vision and can work with you back and forth and can guide you as well as build things for you.

When it comes to the specifics of the data, if your data is already organized, that's awesome. However, it's quite likely you will find that it still needs to be cleaned up and made ready for the AI.

How do you balance the excitement around AI capabilities with the reality of what your organization actually needs? What's the difference between AI for AI's sake versus strategic implementation?

Strategy answers three key questions: Where we are, where we're going and how we're going to get there.
Strategic implementation means having a clear vision of where you're going. AI is just one possible step to how you get there. Starting with a real business problem that actually matters to your business is key.

For example: If you're a university, and you want to grow among international students from Western Europe and Eastern Europe - that is a strategy. You'll know to focus on what will help you grow in that particular direction. You might expand your STEM faculty or appeal for Arts students in a particular way. From there you can consider: How can AI help me to get that right?

On the other hand, you'll know you're looking at AI for AI's sake if you're just adding it without a clear purpose. Assuming "All AI is like ChatGPT - so let's just add a chatbot to it" may not be the right solution - especially if you're not considering if AI can actually help solve the specific business problem.

Key advice: Make sure AI can actually support your use cases and that it's the right technology for your specific goals. Sometimes a chatbot is a good interface, but sometimes it's not. Consider other AI tools and interfaces that can help you navigate toward your actual vision and business goals.