
Intentional AI: What the rise of conversational search means for your website

Season 3, Episode 13
When you put an LLM on top of years of unmanaged content, the AI does not skip the bad parts. Virgil and Cole bring in Will Noble from Squiz to talk through what conversational search actually requires - and why your content foundation matters more than your technology choices.
April 7, 2026
27:03 min
Intentional AI
Special Guest:
Will Noble
LISTEN ON
Apple Podcasts | Spotify | Podcast Addict | RSS feed

Show Notes

Conversational search changes the fundamental contract of how users interact with a website. Instead of returning links, it returns answers. That sounds like a clean upgrade. What it actually does is make the quality of everything sitting underneath the AI impossible to ignore. In this episode, Virgil and Cole bring in their first-ever guest for the series, Will Noble from Squiz, who has spent over a decade working in the information discovery space for large enterprise organizations.

Will explains the shift with a clean analogy early in the episode. Traditional search is like asking a librarian for help and getting handed a stack of encyclopedias. Conversational search is that same librarian reading every book in the library and handing you a direct answer. The user experience improvement is real, but so is what it depends on, because the AI reads everything - including the outdated policy documents buried in a subdomain that nobody has touched in a decade. When dormant content gets surfaced as a confident answer, the gap between what was published and what is actually true becomes a reputational and legal problem.

The practical guidance that emerges from the conversation is to start with a defined slice of content you know is solid. Will walks through a real example of a university with 250,000 pieces of content that scoped its initial conversational search implementation to 50 pages focused on student life. Questions related to that area got clean, accurate answers - everything else defaulted to traditional keyword search. That controlled scope is what allowed the project to prove value before expanding, and it is what kept stakeholders from pulling the plug the moment a bad result surfaced.
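The scoping pattern Will describes - generate an answer only when a question falls inside the curated slice, and fall back to keyword search for everything else - can be sketched as a simple router. This is an illustrative sketch, not Squiz's implementation: the curated documents, the word-overlap relevance score, and the 0.5 threshold are all stand-in assumptions (a production system would use embeddings and an LLM).

```python
import re

# Hypothetical curated slice -- stands in for the ~50 vetted student-life pages.
CURATED_SLICE = {
    "campus-life": "What is it like to live on campus? Accommodation, dining, and housing options.",
    "student-clubs": "Clubs and societies open to all students, from sports to music and volunteering.",
}

def tokens(text: str) -> set[str]:
    """Lowercased word set; a crude stand-in for real semantic embeddings."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query: str, doc: str) -> float:
    """Fraction of query words that appear in the document."""
    q = tokens(query)
    return len(q & tokens(doc)) / len(q) if q else 0.0

def route(query: str, threshold: float = 0.5) -> str:
    """Answer only questions the curated slice can cover; everything
    else falls back to traditional keyword search."""
    doc_id, best = max(
        ((d, score(query, text)) for d, text in CURATED_SLICE.items()),
        key=lambda pair: pair[1],
    )
    if best >= threshold:
        return f"generative-answer:{doc_id}"  # a real system would call the LLM here
    return "keyword-search"
```

In-scope questions ("what is accommodation like on campus") route to the answer engine; out-of-scope ones ("how do I apply for financial aid") stay on keyword search. That fallback is the safety valve that let the project prove value without risking a bad generated answer outside the vetted slice.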

The garbage-in, garbage-out principle has always been true in search, but what this episode makes clear is that it has never carried higher stakes. LLMs do not skip the bad content - they find it, surface it, and present it with confidence. The first step toward getting conversational search right is the same step the rest of this series keeps coming back to: know what you are working with before you deploy.

Previously in the Intentional AI series:

  • Episode 1: Intentional AI and the Content Lifecycle
  • Episode 2: Maximizing AI for Research and Analysis
  • Episode 3: Smarter Content Creation with AI
  • Episode 4: The role of AI in content management
  • Episode 5: How much can you trust AI for accessibility
  • Episode 6: You’re asking AI to solve the wrong problems for SEO, GEO, and AEO
  • Episode 7: Why AI can make your content personalization worse
  • Episode 8: The real value of AI wireframes is NOT the wireframes
  • Episode 9: Just because AI can create images doesn't mean you should use them
  • Episode 10: The Super Bowl didn't sell AI, it exposed it
  • Episode 11: AI video rewards planning, not your ideas
  • Episode 12: AI might struggle with creativity, but coding isn't creative

New episodes every other Tuesday.

For more conversations about AI, design, and digital strategy, visit https://www.highmonkey.com/podcast and subscribe on your favorite podcast platform.

(0:00) - Intro

(0:51) - Meet Will Noble from Squiz

(2:14) - Today's topic: Conversational search

(4:14) - Welcome to the new era of information seeking

(7:10) - The dormant content problem

(9:48) - You can ignore the problem, but it won't ignore you

(11:45) - Where do you start with thousands of pages?

(14:36) - Start small, don't go big

(17:14) - AI's opportunity as a content auditing tool

(20:39) - Search is the foundation of everything AI does

(22:13) - How do you keep up when AI moves this fast?

(25:43) - Outro

Transcript

VIRGIL 0:00

So today we decided to do something a little different and bring an actual guest from the outside world onto our podcast. We're going to be joined today by Will Noble, who is with Squiz. And we're going to be talking about conversational search. If that's a pain point you've had around search in your organization, whether it's your website or your internal systems, this is definitely going to be the podcast episode for you. So if this is something that interests you, go ahead and join us as we start Discussing Stupid.

VIRGIL 0:48

Hi everybody. Welcome back to the podcast. As you can see, we actually have a special guest with us today to talk about something - I'm not sure what, but anyway, welcome to the podcast, Will. Will is from Squiz, a partner of High Monkey's and somebody I've known for a long time. But I'm going to let him introduce himself a little bit and talk about who he is.

WILL 1:11

Oh, thank you, Virgil. Thank you, Cole. Great to be on here. Great to be the first guest. I feel honored. Yes, I'm looking forward to getting a gift in the mail, you know, like a commencement. Oh yes, or something like that.

VIRGIL 1:23

Guest gift is amazing.

WILL 1:27

A thank you letter. Yes. So my name is Will. I live on the south coast of England in a seaside town called Brighton - and as you can tell from my accent, I'm from here. I've spent the last, well, coming up to 12 years of my life working for a technology company called Squiz, and for most of that time, I've been working in the information discovery space. We have an enterprise search technology that large enterprise organizations use to improve how their visitors, potential customers, or staff find stuff. And yeah, it's been really fun. I managed to travel and live in the US for many years, and that's how you and I got to know one another, Virgil. And now I'm getting to live through this huge shift in how we interface with the web with the advent of LLMs. So I'm pleased that you've invited me on.

VIRGIL 2:26

Well, I'm glad you said that because now I think I must know the topic of today's conversation or I'm just way off. Are we by chance going to talk about AI and search, Cole?

COLE 2:38

Something like that. Something Will's very familiar with. But yeah, conversational search. So it's kind of the shift from - typically you go on a website, you find links - to pretty much inserting AI into a website to find the answers. A more answer-centered sort of experience on a website. So yeah, that's what we're talking about.

WILL 3:03

Oh, you've got to rein me in. I just ruined your entire script.

VIRGIL 3:06

Yeah. Wow, you just brought it there. But no, I mean, you know, in all fairness, AI has been part of search for a really long time. I mean, that's nothing new. The big dogs - Google and Bing and all those - have been using it for a good 10-plus years now to kind of enhance search, with natural language processing and different things like that where you could ask a question of it. But I think today we really want to talk about, and delve a little bit into your experience, Will, about this next evolutionary step, which is using AI to have a conversation - not just looking for a page or a website, but actually looking for an answer. And I think that's one of the most intriguing things about this for us and hopefully our audience.

WILL 4:10

Yeah, no, absolutely. I think we're right at the beginning of a fundamental shift in how we find information. One that I think is a really good thing. I think the web in many respects kind of mirrors the physical world. Like, we call stuff pages, and even things like the email you get for a receipt - it looks like a physical receipt from a till from 100 years ago. We've really struggled to reimagine how humans share information, because it's kind of mirrored printed text, and we would have to go and read through all of that printed text to find the information. Which kind of mirrors the experience you get in traditional search, right? Say you go to a library and ask a librarian, oh, I need to find this piece of information about how fast cheetahs run. And the librarian would be like, okay, well look, here's three encyclopedias - go and try and find that information. It's a very trite analogy I'm trying to weave together here. Whereas now we can go to that same library - hey, how fast does a cheetah run? - and that librarian will read every book in the library and then give you a very concise summary and show you the answer. And that is a fundamental, huge improvement in user experience, which I think is really fascinating, and one that's much more accessible to a larger cohort of people. You don't have to understand how to search large corpuses of content. You don't have to be an expert in search operators, or hit a dead end in some internal application because you don't actually know that the information lives elsewhere. We're now moving to a world where all of that kind of complexity just erodes away. And that's really quite interesting.

VIRGIL 6:02

Yeah, I mean, I agree. I think when you look at AI's potential, especially in search - I mean, that's kind of where AI started, and it's one of the areas where it has the most potential - it's the ability to look through vast amounts of information that no human could ever process, and come to some kind of conclusion, make answers, compare, contrast, all that kind of stuff. I think it's so fascinating. But one of the challenges we've really hit, and you and I are both seeing this a lot out there, is wrong information - how it can take something and think it's the right answer when it's wrong. And the one thing about traditional search was that if you wanted to be somebody advanced in search, you had to really understand how to create search statements, how to use operators, all of that - you had to put a lot of thought into trying to get to that right piece of information. AI opens it up to everybody by saying, hey, I'll just ask a question and it'll give me some kind of answer. But there's this whole other side that we see a lot, which is that the answer isn't always right.

WILL 7:23

Yeah. The big issue, right? How do you drive deterministic answers? One of the interesting things - so here at Squiz, we work with lots of organizations who are trying to include generative answering in the experiences they provide their customers or their employees. And what it's uncovered, like you said, Virgil, is that there was all of this dormant content - like a dormant volcano living under the seabed - that no one ever accessed. And when you put very powerful large language models on top of it, that information suddenly gets resurfaced right to the top and pops up in an answer. And that information could have been out of date for years, but because it was on some PDF or policy document buried in a subdomain that no one ever accessed, it was never really an issue. So for large organizations this is causing quite a lot of headaches. And of course, if you find it, or the LLM you're running for your own internal applications finds it, the large web answer engines are going to find it too, and then present it as factual statements. They are basically the invisible users of your content, aren't they? They've grabbed all of your content as the inputs that they train themselves on. And if they're being trained on incorrect information, it's a huge reputational risk, a big branding risk. And if that experience is on your own domain as well, then you're liable for any kind of lawsuits and things like that. So what it has reiterated is the age-old thing of garbage in, garbage out - never has it been more true than in what we're seeing now. And I'm sure many seasoned IT execs listening to this would be like, yeah, I've heard that one before. Some things just persist. And that's certainly what we're looking at.
And I think when you have outdated content or incorrect statements - or we've described our product or service in a way that we would never describe it, so why would it come up in AI? Oftentimes what you find out is that you did describe it that way. You just weren't aware of it.

VIRGIL 9:37

Weren't aware or you forgot it. Yeah, I was going to say that. That's one of my number one slides in search. When I speak is the garbage in, garbage out analogy.

COLE 9:48

So basically you can ignore the problem if you want, but it's not gonna ignore you, because it's just kind of inevitable at this point. You're either gonna get crawled by, like, GEO, or if you apply it to your own digital experience, it's just going to surface. So it's important to have that good foundation like you're talking about here.

WILL 10:09

Yeah. And if we think about why - remember when LLMs first came out and everyone was talking about it? They'll never go anywhere, the level of hallucination is just too high, it gaslights you. And that is still very present today, albeit less so with these new models. They're a lot more accurate. But really, part of the reason they hallucinate less is that their crawlers have got better at understanding websites and the data they're accessing. But if your content structure, or the way that you write about what you do, isn't clear - the what, the how, the why - then an LLM has to infer what you can and can't do, and that's why it hallucinates. So if you're not providing enough context to the machines, they will make it up in your absence. And I think that's quite interesting. We anthropomorphize them as these big sentient beings, but they are just really, really smart probabilistic engines. And if your own content is the input to that probabilistic engine, you want to make sure it's as watertight as possible, because that reduces the level of hallucinations in general answer engines. But it also matters if you're looking to build a conversational search experience on your own website, which firms are increasingly moving towards - so that search bar at the top right-hand corner of your website that used to only get maybe 9 to 12% of traffic is suddenly getting this new facelift.

VIRGIL 11:39

So I'm curious, where do you guys start with this? I mean, because you guys are much like us in that we tend to deal with very large websites with tons of information, I mean, hundreds, sometimes thousands of pages of content and nobody's curated it for a long time. So where do you start? Because in the traditional search world, the way I'd always say is, hey, your most powerful tool can be exclusion, excluding content out of there and kind of just narrowing on the content that you know is good. You don't necessarily have that same power with LLMs that you have in a traditional search where you can go through and sit there and tell, you know, Claude to ignore these 400 different pages. But it's a very tedious process versus saying, don't go here, don't go there, like you can do with a traditional search engine. So where do you start with people saying, okay, they really want this to work for their organization, on their public website, internally, for their employees, whatever it might be. Where do you guys start with saying, here's where you really need to begin the journey?

WILL 12:52

So the approach follows many other initiatives you would see in software development: let's pick a particular use case, run with it, test it, see that we like it, and then move forward. To give you an example, we work with a large university that has 250,000 pieces of content online under their own domain. A huge, vast site, authored by hundreds of different people. And it's really hard for people to navigate a site that large, right? You can't just rely on navigation; you have to use search to find what you're looking for. They wanted to move away from traditional keyword search and include answering, so if someone typed a long-form question, they could get an answer. And the way they approached it was: well, we don't have the ability to run a full content audit or inventory over 250,000 pieces of content. It's just unrealistic - we're a small team. How do we do this? We have a phrase for it internally: pick a slice. Pick a slice of content that you know people will be asking questions around, optimize the content around those 50 pieces of content, 50 pages, and run answers on that. So when someone types in a question related to that area - in this instance it was all about student life: what's it like to come on campus? What's the accommodation like? What clubs do you offer? What are the extracurricular things on offer? - they get a really nice summary that pops up on the website. But if they ask a question that's unrelated, they just get normal keyword search. And for the internal team, it meant they pick an area of the site to answer questions over, they go and fix the content, and then they move on to the next area. So if I was to distill that down: start small, don't go big.
Don't just run the LLM over all of your content, because then, from a political standpoint, you'll never get it off the ground - your superiors, your chancellors, your director of marketing, they will run questions over it, they won't like the results, and they'll can the project.

VIRGIL 15:09

Well, it's kind of like our entire series has been about intentionality - and a lot of intentionality is starting small. It's interesting that you can target it that way, because the first thing my mind went to is the most ubiquitous thing in search: best bets. In Squiz, in the search tool, you have the curator. It's basically saying, if it's these types of searches, do this - and everything else just defaults back to the standard search. And that's probably a really good way to approach it. I think that's the problem: a lot of the way these products are sold is, we'll just turn it on and you're going to be amazed. And that's not new to AI; that's been kind of universal for a long time. It was like, wait until you see how this search works once it sees your content - and then everybody's like, what the hell is it doing? Why did it bring back that? So I think that's a good strategy, to really look at it at a more granular level.

WILL 16:17

So put another way, what you're saying there is using generative answering in search is like, it's the next evolution of promoted results. Promoted results would come if you hit a keyword or if you're searching from a particular location, then it's that you're adding your business rules into the template links experience. But now if you go small with LLMs, if a question is related to what the LLM can answer, it's going to jump in and provide a quick answer and help with task completion. But if it doesn't know the answer, theoretically it shouldn't answer it, rather than just trying to bullshit you. So yeah, that's quite an interesting way of phrasing it.

VIRGIL 17:04

Well, that might be the first curse word in our podcast. Oh, man. Put this one up to the explicit adult level. But you bring up a good point, because one of the things a lot of people tackle - even in talking with search vendors that are bringing more AI into their products - is the quality of content: needing to change it, add things like schema tags, and so on. And it's all great, and it's a super awesome opportunity; everybody should have better content. But there's a reality: if you have 250,000 pieces of content out there and you say, what you really need to do is rewrite all this content to be better and add the correct tags, I mean, you get laughed right out of the building.

WILL 17:51

Yes. Yeah. It makes me think of top tasks. So is this an opportunity to run a bit of an audit? Content audits historically were so painful, weren't they? You get hired as director of comms at any kind of organization and it's like, right, run an audit of all of the sites and the microsites and the landing pages and campaigns. And you're like, okay, how hard could this be? Then you get a spreadsheet going, and before you know it, just the URL list is in the thousands. Then you're trying to review each of those pages against your own brand guidelines, and analyze: is this piece of text complete? Is it specific? Are there things missing? Should this paragraph be on a different page? And humans - we're not designed to have that level of diligence to carry that on over time. On this call, we could maybe do that for an hour, and then we would start to cut corners, because it's hard to keep up that level of discipline. One of the great things about the advent of LLMs is that you can actually automate that level of content auditing to a degree. The machine never gets tired. It will be able to understand the semantics of your text, understand your brand guidelines, understand areas that are not on brand or not following best practice. And with the right tooling, you can surface those insights and present them to your marketing executives to actually go and make those changes. Historically that would have been very challenging - you'd probably only do an audit once every 10 years, and you add hundreds of pages a year. Very few of us have a very good archiving policy, do we? There was a talk I saw recently that described it as: websites don't poop. They basically just accrue and accrue and grow in size.

COLE 20:17

I will say, Will, I think you just made a lot of people happy with what you just said. You made me pretty happy to think about because, you know, I've gone through a lot of content inventories myself and yeah, at a certain point you just start to see URLs appearing in all corners of the room. So pretty exciting piece of technology there, I would say.

VIRGIL 20:38

Yeah, it's a tough prospect. And you know, it's ironic because, as you were talking about that Will, one of the things is I've been amazed how many people, you know, kind of technologists that I know, talk about AI and search and they're like, well, you know, search might not be the best use. And I said, well, you do realize for AI to be able to do anything, it has to search, right? I mean, search is basically the fundamentals of it all, and a lot of people don't think of it that way. But I think that's a good supposition to say that it can bring the discipline that we tend to lack, and that's such a good point. But at the same time, you've got the risks that go along with that, that it can draw conclusions, it can come up with things, it can give you suggestions. And so we're a long ways away from where I think people can always trust all the answers that AI gives them. But it's getting better. I mean, even honestly, even from the time we started this series back in September to now, AI has improved exponentially. And I think that's one of the big things. So I think as kind of a last question, with AI, we've definitely seen over the last 15, 20 years, technology just continues to accelerate at a faster pace and change all the time. So like Squiz and you guys bringing this into your platform and the search technology, how do you work with customers to constantly stay up on the new? Because they see somebody else doing something or doing it a little bit better. How are you guys keeping up with your models? Because I think that's also a concern, is that as soon as you implement, if you take six, nine months to implement AI based search, are you already behind because the technology has already gone up exponentially?

WILL 22:47

I think we're in a really good place compared to historical developments in IT initiatives. We had the advent of the cloud, right, which made compute immensely scalable. Before then, you'd have to go and provision your own servers, buy proprietary software, get it installed - you had to make all of this capital expenditure before you'd realize any value from the initiative. And many IT projects just never even got off the ground, and millions were spent; you know all those amazing stories of failed IT projects. Then, 15, 20 years ago, the advent of the cloud and moving our compute into data centers distributed away from your own premises took a lot of the risk away from projects and from making sure you were keeping up. Then the move to SaaS especially, and the idea of automatic provisioning and upgrades - your risk of being left behind was lower, because if you invested in the right partners, they'd be able to update your instance to the latest version of the software. And if you had good customer success, they were there to help train you to make sure you were getting the most out of it. And I think with AI projects it's the same: if you're looking at initiatives, perhaps you're partnering with an Anthropic or an OpenAI, or a partner like Squiz who works with these models underneath. As new models get released, they instantly become accessible to the solution you're running. So you're actually building experiences that are somewhat versionless - if you've built the front end, it can hook up to the latest model, and you benefit from the increased reasoning, the increased level of tokenization. So it's an interesting point that you asked there, but that's my viewpoint on it, and I think you see that as a big benefit.

VIRGIL 24:56

Yeah, no, it's interesting. I mean, obviously there is a little bit of the risk of some of the versions of the LLMs that have kind of missed the mark and had to be rolled back or had to be updated because of some of the things they've done. But no, I agree. I think that, you know, there's a lot to this future, but it's so much for organizations to wrap their heads around and that's why AI in a lot of organizations I think is failing to a certain extent, because it's just so much. And I think the big part is they try to tackle too much at once. There's been so much sold on this is going to automate your processes and replace all your employees and everything, when the reality is it can be very beneficial, but it's beneficial to a point. So, Will, we really appreciate you joining us here and exciting to have our first guest and really kind of talk on this from the standpoint of what your experience is. And we appreciate your time and thank you for joining us.

WILL 26:00

Pleasure, Virgil. Thanks very much.

COLE 26:02

Thanks Will.

VIRGIL 26:03

Yeah, so thanks everybody. That's it for the podcast. We'll talk to you later.

VIRGIL 26:12

Just a reminder, we'll be dropping new episodes every two weeks. If you enjoyed the discussion today, we would appreciate it if you hit the like button and leave us a review or comment below. And to listen to past episodes or be notified when future episodes are released, visit our website at www.discussingstupid.com and sign up for our email updates. Not only will we share when each new episode drops, but also we'll be including a ton of good content to help you in discussing stupid in your own organization. Of course, you can also follow us on YouTube, Apple Podcasts, Spotify or SoundCloud or really any of the other favorite podcast platforms we might use. Thanks again for joining and we'll see you next time.
