
Intentional AI: You’re asking AI to solve the wrong problems for SEO/GEO/AEO

Season 3, Episode 6
SEO, GEO, and AEO are often treated as optimization problems. In this episode, we explain why that mindset often fails, how AI exposes weak structure and unclear intent, and what actually drives search visibility today.
December 16, 2025
25:58 min
Intentional AI
Special Guest: Seth Moline

Show Notes

In Episode 6 of the Intentional AI series, Cole, Virgil, and Seth move into the visibility stage of the content lifecycle and tackle a common mistake they see everywhere. Teams keep treating SEO, GEO, and AEO as optimization problems, when in reality they are content quality, structure, and clarity problems.

Search engines and generative models have both gotten smarter. Keyword tricks, shortcuts, and “secret sauce” tactics no longer work the way they once did. Instead, visibility now depends on clear intent, strong structure, accessible language, and content that actually helps people. The group looks at how SEO history is repeating itself, why organizations keep chasing hacks, and how that mindset actively works against long-term discoverability.

They also dig into how SEO, GEO, and AEO overlap, where they differ, and why writing exclusively for AI can backfire by alienating human readers. The conversation covers content modeling, headless-style structures, and why these approaches help machines understand relationships without sacrificing usability.

A major focus of the episode is schema. The team explains why schema is becoming increasingly important for generative engines, why it is difficult and error-prone to manage at scale, and where AI can help draft complex schema structures even though it does not fully understand context. This leads to a broader point: AI can accelerate specific tasks, but it cannot replace judgment, prioritization, or review.

In the second half of the episode, they continue their ongoing experiment using the same AI-written accessibility article from earlier episodes. They test how three tools approach GEO-focused improvements. Each tool surfaces different insights, none of them are complete on their own, and all of them require human decision-making to be useful. The takeaway is consistent with the theme of the series. AI is powerful when you ask it to solve the right problems, and dangerous when you expect it to fix foundational issues for you.

In this episode, they explore:

  • Why SEO, GEO, and AEO fail when treated as optimization tricks
  • How search has shifted from keywords to clarity, structure, and intent
  • Where SEO and GEO overlap and where they meaningfully diverge
  • The risk of writing for AI instead of for people
  • Why content modeling supports both search engines and generative engines
  • How AI can assist with schema creation and where humans must intervene
  • Why repeating the same schema everywhere weakens its value
  • A GEO-focused comparison of Writesonic, Grammarly, and Claude
  • Why broad prompts underperform and targeted prompts lead to better outcomes

A downloadable Episode Companion Guide is available below. It includes tool notes, schema examples, prompt guidance, and practical takeaways for applying AI to search without losing clarity or control.

DS-S3-E6-CompanionDoc.pdf

Previously in the Intentional AI series:

  • Episode 1: Applying AI to the content lifecycle
  • Episode 2: Maximizing AI for research and analysis
  • Episode 3: Smarter content creation with AI
  • Episode 4: The role of AI in content management
  • Episode 5: How much can you trust AI for accessibility?

Upcoming episodes in the Intentional AI series:

  • Jan 6, 2026 – Content Personalization
  • Jan 20, 2026 – Wireframing and Layout
  • Feb 3, 2026 – Design and Media
  • Feb 17, 2026 – Back End Development
  • Mar 3, 2026 – Conversational Search (with special guest)
  • Mar 17, 2026 – Chatbots and Agentic AI
  • Mar 31, 2026 – Series Finale and Tool Review

Holiday break notice ❄️

Discussing Stupid will be taking a short break for the holidays. The next new episode will be released on January 6th.

Whether you work on websites, structured content, or digital strategy, this episode is about recognizing when AI is being asked to solve the wrong problems. The goal is not more optimization. It is clearer intent, better structure, and content that actually deserves to be found.

New episodes every other Tuesday.


Chapters

(0:00) - Intro

(0:37) - Boosting your SEO, GEO & AEO with AI

(1:10) - Virgil on how SEO history is repeating itself

(4:08) - Defining SEO & GEO overlaps

(7:04) - Is a headless CMS better for GEO?

(8:27) - Schema generation is awesome with AI

(13:54) - If you tag everything, you’ve tagged nothing

(15:18) - We tested 3 AI tools for SEO/GEO/AEO

(16:39) - Testing Writesonic

(18:16) - Testing Grammarly

(19:33) - Testing Claude

(20:54) - Every AI tool has gaps & you’re the filler

(23:49) - Next episode preview…

(24:55) - Outro

Transcript

VIRGIL 0:00
You know honestly, everybody wants their content to be found, whether it's by Google or by AI or someone searching on a site. But the rules of visibility are changing fast. Now it's not just about search engines, but it's also about how AI interprets your content. So how do you optimize content for both search engines, but also for something that thinks instead of crawls? If you're interested, let's find out as we start Discussing Stupid. Hi everybody, welcome back to the podcast. Today we're going to be talking about, really, I'm going to say a continuation from the last episode because so many of the things are similar, but we're going to be talking about GEO and SEO and how to use AI for those areas. And those are definitely the hot topics right now and probably one of the better areas to look at using AI in. So Cole, why don't you go ahead and as always, kick us off.

COLE 1:10
Well, I'm going to actually do the old Uno reverse card on you here, Virgil, because you've been in the digital strategy world for like, I don't know, 26, 27 years now, and you've seen search evolve just so much. And it just, it must be crazy to see the way we've gone from SEO to AEO to GEO. And I just want to see kind of like your reaction first off on, what's going on with it.

VIRGIL 1:39
I'm not sure we have enough time for that. But yeah, I mean, the irony of it and kind of what is happening. I know you're bringing this up because of something we were talking about the other day. Back when search engine optimization started being a thing, there were all these organizations that had software products or strategies or things you could do, the secret sauce around SEO. It was about clustering up a bunch of keywords, sometimes hiding them from the screen, you know, doing all kinds of tricks to try and fool Google and the different search engines back then. The search engines got smart, and that became much less effective. And so SEO today is really about good content, good structure, good accessibility, and really just having quality information out there. That's a lot of what gets scored. I mean, it's not that keywords and that kind of stuff aren't important, but they're definitely not the main priority. Now you look at GEO, which is generative engine optimization, which is really around AI, or answer engine optimization, whatever you want to call it. And there are a lot of companies popping up that are like, oh, we've got the secret sauce. This is what you've got to do. You've got to create a top 10 list. You've got to do this. You've got to do that. And it's kind of the same thing. If Google itself got smarter and search engines got smarter, and now we're using AI, which is supposed to be smarter in general, they're getting smarter. So the funny thing is, it really goes back to the same thing. You need good content, you need good structure, you need good accessibility. There are some definite differences when you start looking at SEO versus GEO optimization, but overall, in general, so much of that is the same.
And I think that's really kind of the crux there. The reason that we bring this up is because we want people not to jump on these bandwagons of, like, here's the trick. Because the reality is that there is really no trick. And for AI even more so: you ask it the same question twice and it gives you different answers. How can you ever have a trick for that?

COLE 4:04
Yeah, for sure. Yeah, I think it's just so interesting. You take a look at GEO vs SEO. I don't know. It seems like in a lot of ways you're kind of aiming for the same goal, with well-structured content, like modular content. But how do GEO and SEO work together versus separate, when we're talking about these two different topics?

VIRGIL 4:36
Well, I thought you were going to talk, Seth.

SETH 4:38
Well, I was going to say, it took me a while to really realize the differences between the two. Like I still kind of feel like there's a lot of overlap and I still don't 100 percent understand it. I mean, I have a lot of ideas on what separates those two, but it's definitely, it feels like moving goalposts, especially as AI is learning more and more.

VIRGIL 5:03
Yeah. I mean, I think, you know, if you wanted to look at similarities, we've kind of already said that: well-structured content, which also is going to mean accessibility. Clear, plain, simple language. Now for SEO, this probably isn't as important, because SEO is really about words. It's about links. It's about how words are associated, how close together they are. Where AI is trying to get more meaning from it. So having things in plain, simple language is such a big thing from an AI standpoint, because it needs to understand what you're doing there. But overall, there's so much overlap in this area. But there's kind of that other side. One of the things that I've found very interesting that I'm starting to see is, I don't want to say backlash, but organizations are trying to change their content to be better for AI. Which, okay, great, except is that actually right for the human users that are on your page? You know, if you start writing things to get picked up by AI, are you starting to alienate the people that are actually visiting your site trying to get to stuff? And so it's kind of this interesting balance. SEO kind of had this issue many years ago, when it used to be like, use the keyword 500 times in a paragraph, and you'd do that and it was just a bunch of gobbledygook that you had in there. But it's kind of the same with AI, where you have to find that balance of something being human-readable and consumable, actually providing value to the humans, but at the same time providing value to AI.

COLE 7:05
Okay, real quick, beyond your actual content, it seems like a kind of semi no nonsense, no, gosh, today I'm just all over the place. It seems like there's kind of a no nonsense path to these like GEO goals with like headless CMSs. Like there's just a lot of opportunities and a lot of fields to like put your content in. And does that help with AI discoverability, having like a headless CMS?

VIRGIL 7:35
Is headless CMS helping out? Sure. But I don't want to go too far down the path of headless versus traditional. In traditional, we tend to use WYSIWYGs to solve a lot of our problems. With headless, you tend to look at it more as content, and segmented content, and how it all has relationships. So it's actually the content modeling itself that helps in this manner, in that you're really breaking down your content into more structural components and singular components and how those are related together. So absolutely. Is it a function of headless itself? No, not really. It's more a byproduct of the overall content modeling process, which can have a big impact on that.

COLE 8:27
Right. One thing you mentioned the other day, Virgil, in terms of like how AI can really help with this process of SEO, GEO, and AEO is generating schemas. And you want to talk a little bit about that?

VIRGIL 8:44
Yeah, I mean, you know, this is one of those areas where I think AI has a lot of ability. So take schema tags. For people that aren't familiar with them, schema tags are from schema.org, and they're basically tags that you can put inside your page content. It's usually done in the form of JavaScript, or JSON, as we like to say. And it basically identifies information about the organization, information about the page, information about the content, maybe products you're selling and all that kind of stuff. I mean, there are thousands and thousands of different schemas. For SEO, we've been using schemas for years in what we like to call technical SEO. Instead of trying to do something with the content itself, you're adding technical information behind the scenes to better identify what's on the page. A good example is when results come up and Google shows rich results, where it has, like, your navigation items sitting there on the page, or information about your open and close hours and all that kind of stuff. That's all rich content that you could call out, or like how a recipe comes out from a baking site or something like that. So with SEO, there's about 10, maybe 15 different ones that are commonly used. Now with AI, the generative engines actually are very excited about this, and they want you to use a lot more. So now you're talking about using hundreds of different types. And where we talk about using AI for it is: this is not a simple process. This is not something where, you know, some marketer gets on one day and is just like, oh, let me make this real quick. The schemas are not only done in code, but they're also very complex. There are lots of layers. You can layer schemas inside of other schemas and all this stuff. And so you use AI to help you build that.
So we did an example, which we'll be including in our document, where we actually went to the expertise page on our site and asked it to create some schema tags. And it created like 15 different schema tags, including an organizational one and a page one and a services one and a products-offered one, and then there were FAQ ones and all these different ones. Could you imagine being a person going through and making just one of these by hand? You really have to go look at the document to understand just how complex these are, just making one of them. But now you're talking about making 15 of them and adding them to your page. And a lot of them have one of the other tags also nested inside as part of it. So this really helps AI and search engines understand what you're trying to do and convey, versus always having to change your content to make that happen. So there's a lot of opportunity, but there's a ton of pitfalls with this one that can cause a lot of challenges. I know, Seth, we see errors on these all the time.
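As an illustration of the layering Virgil describes, here is a minimal sketch of nested schema.org JSON-LD built in Python. The organization name, URL, and page title are placeholders, not the actual tags from the episode's companion document:

```python
import json

# Placeholder Organization tag -- name and URL are illustrative only.
organization = {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
}

web_page = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Our Expertise",  # placeholder page title
    # Schemas can be layered: the Organization tag is nested inside
    # the WebPage tag instead of repeated as a separate top-level tag.
    "publisher": organization,
}

# This JSON string is what would go inside a
# <script type="application/ld+json"> tag on the page.
json_ld = json.dumps(web_page, indent=2)
print(json_ld)
```

The Organization object appears once, nested under the WebPage's publisher field, rather than being duplicated on every page, which connects to the "if you tag everything, you've tagged nothing" point later in the episode.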

SETH 12:04
Oh, all the time. And like you said, it's, there's so much of it, especially if you want to get ranked well, not only with SEO, but also with GEO now. And for someone who's not as technically savvy as I could be, you know, having AI to help kind of set the structure around how schema tags can be implemented on pages and just kind of working with someone who is more technical to hone that in and then just reusing that and editing it to make all the content better as we're pumping out pages and making sure that it stays consistent across the site.

VIRGIL 12:45
Right, and the biggest part of it is it's literally giving you the breakdown of how the code needs to be laid out, so you don't have to do that. It's not missing a comma, like I've done many times, causing an error with a tag. It's not missing those things. But inside that, it doesn't always get your content or the gist of your stuff 100 percent. That's where the humans come in.
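The missing-comma errors mentioned here are easy to catch mechanically, since a schema tag must at least parse as valid JSON. A quick sketch in Python, using hypothetical values:

```python
import json

# Hand-written JSON-LD with a missing comma after "Organization" --
# the kind of small error that silently breaks a schema tag.
broken = '{"@type": "Organization" "name": "Example Co"}'
fixed = '{"@type": "Organization", "name": "Example Co"}'

def is_valid_json_ld(text: str) -> bool:
    """Cheap syntax check: the tag must at least be parseable JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json_ld(broken))  # False -- the parser flags the missing comma
print(is_valid_json_ld(fixed))   # True
```

A syntax check like this only catches malformed tags; whether the schema accurately describes the page content is exactly the judgment call the hosts say still belongs to humans.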

SETH 13:12
Exactly. You know, like with anything, accessibility, SEO, you really can leverage AI to help you, but you do need a second set of eyes, maybe even a third set, because as you're, you know, on your 50th page of the afternoon doing schemas, you know, you kind of just, eyes glaze over and it's nice to have a second set of eyes to make sure that, okay, yes, this is accurate. And then also setting up processes to go back and review pages that you built in the past and give them a second look, especially as AI is evolving and learning, making sure that you're staying with those updates as they happen, which, you know, happen pretty quick with AI as it learns more and more.

VIRGIL 13:54
Yep. And it's ironic that you bring that up because one of the other points I wanted to make with this, I was actually just thinking about this on my drive home from dropping off my kid. And that is, Cole knows this very well because I've preached this so much. If you tag everything with a word.

COLE 14:15
You've tagged nothing.

VIRGIL 14:18
Well, it's kind of the same thing with these. You know, I bet if we went and ran AI on a bunch of our other pages, it'd come up with very similar schema tags. So if every page has the exact same schema on it, what are you trying to call out as important? So there is going to be no AI-only solution to this. It is going to have to be humans. And like you said, Seth, you're on your 50th page. Now, how many times have you just regurgitated the same information over and over on pages?

SETH 14:52
Right. And if you rely on AI to, you know, spin it up for you and you just copy and paste it on every page, you're going to miss errors, and that's going to knock you down.

VIRGIL 15:03
Right. I mean, and then another good point is, those schema tags are long. They'll add more weight to your page for load time. It's not going to be significant, but those are all things you want to make sure you're optimizing. Which kind of segues us into our test, right, Cole?

COLE 15:23
Yeah, for sure. For this episode, we tested three tools again to take a look at which ones give us some pretty good SEO and GEO recommendations. And for this one, we looked at Writesonic, and we looked at Grammarly and Claude. And Virgil, do you want to take us through the prompt that you put into each of these tools?

VIRGIL 15:47
Sure, yeah. So again, I want to kind of reiterate to people, you know, we're doing very simple prompts. And the reason we're doing very simple prompts is because that's what most people are going to do. I mean, I was telling you guys on our call this morning that I ran across something about doing an ideal customer profile, and the prompt was like two pages long. That's awesome, and that's probably going to give you great results, but the reality is most people aren't ever going to figure that out. So if anybody questions why we're doing such simple prompts, that's the reason: that's going to be a typical use case. So the prompt for this one was: suggest five ways to improve the article for better generative engine optimization on a website. So again, it was looking at the Writesonic article from before that we've used as a basis for a lot of these tests. And it was kind of ironic. Writesonic, when I typed this in there, its answer was nothing short of amazing. Not only was it very in-depth, but it gave some great examples. If I was to name a negative, it was that I wasn't really sure what the tool was actually doing at any given time. I didn't really understand what was happening. But one thing I really liked is it gave very brief findings on the screen, and then when I exported to a document, it gave much more detail, including examples and that kind of stuff. It did take me a little bit of time to get it to load my document. It was not a very intuitive process, because I wasn't using the wizard that I had used before with Writesonic. One thing I liked about it is it hit a lot of the main points, things we've talked about today, like accessibility, header structure, having good structured content, authoritative content.
This is a big thing about referencing things from an authoritative source: having things like who the author is, why they know what they're talking about, why this topic is important, how much of the audience it covers out there in the world. It brought up schema tags, it brought up focusing on some secondary keywords. So it was very interesting.

COLE 18:10
Yeah. So I will, sorry.

VIRGIL 18:21
No, go ahead.

COLE 18:21
I was gonna say, how about Grammarly? How did the test go with that one?

VIRGIL 18:21
So Grammarly was actually the exact opposite. We were very impressed with it on the accessibility and readability test and what it did there, but here it did not do well at all. I mean, it gave examples, but it wasn't very detailed. And so I had to ask it additional questions to get more details around what it meant and good examples. Overall, it hit a lot of the same themes as Writesonic. But one thing that I did really like about it is it actually recommended different ways to word your content for better readability, which I thought was very interesting. It also helped you incorporate the sources more in the content versus just listing statistics, that kind of stuff. And I thought that was a big strength of it: it really does seem to do very well in the readability area. And it gave specific examples, like here's the content in the sentence right now, here's what we'd rework it to. So that was really good.

COLE 19:34
Right. Cool. Cool. How about Claude?

VIRGIL 19:37
So Claude is obviously the one from Anthropic, that kind of stuff. We'd gone into the Perplexities and the ChatGPTs and the Copilot, but we hadn't gone the Claude direction, though a lot of tools are using Claude or GPT as their underlying large language model. But I wanted to use Claude itself and kind of see that. It was very much the same as Grammarly when I first did the prompt. It didn't give very detailed information. It kind of hit some of the better points, you know, that the other tools did. So again, I had to ask it for more detailed information. But again, like Grammarly, and unlike Writesonic, it really encouraged expanding on your references and really giving more information there. But the one thing that it picked up that neither of the other two did: if you've actually read the Writesonic article, there's a lot of redundancy, where it kind of says the same thing over and over with very subtle differences. And Claude actually picked up on that and encouraged removing some of those redundancies, reworking the article to get rid of saying the same things over and over. So, you know, it's funny, none of them did it perfectly. I would honestly say Writesonic did it best, but none of them were exactly perfect. And it almost feels like they each brought something to the table that could make this better. So it's one of those interesting areas where, is there a tool? Well, yeah, I think any of these would get you down the direction. But honestly, if I was to do this, I would have probably used all three and taken the recommendations from each. So it's one of those things. And again, it kind of goes back to our whole thing, which is intentional AI and realizing that humans have to play a role.

SETH 21:39
You took the words out of my mouth, Virgil. Thinking about going through these processes, it almost feels like it's more work using, you know, different AIs to get the good pieces of information out of each one. And that's really what it is. It's leveraging AI to be better, not using it as the end-all, be-all, because each one is different and you can prompt it more and more to really hone in on what you need.

VIRGIL 22:09
Man, Seth, you took the words right out of my mouth.

COLE 22:11
I was about to say, no.

VIRGIL 22:12
Yeah. Well, I think it's like you brought up in the last episode, Cole, around accessibility, and that is: pick a piece. And I think schemas are a huge piece. If you're going to tackle schemas on your site, you don't want to do that by yourself. I mean, sure, there are some tools out there, but even those tools are not very user-friendly. But using AI for that can be very good and can bring a lot of power. So I think there are specific pieces, but the one thing we're learning more and more throughout this process is that asking broad-based questions just is not a good path to getting great examples and great answers. You kind of have to narrow that down. And it does go back to that prompting.

COLE 23:04
Precisely, yeah. You know, asking AI open-ended questions is not going to get you very far with any of these types of goals.

VIRGIL 23:13
Or a little bit of hallucinating.

COLE 23:15
Yeah. No, we should just break out all the feedback we got from the three tools, copy it all, and put it into ChatGPT, and...

VIRGIL 23:24
Yeah, and see what it says. You know, it is interesting when you start talking about building prompts and what that involves. I understand the complexities and I understand what you can do, but the question is how many people are actually ever going to do that. So I think you're going to get more bang for your buck if you spend time using it for very specific tasks versus kind of a general "make my stuff better."

SETH 23:48
Right.

COLE 23:49
Sure.

SETH 23:49
One thing I thought about too is who your users are, making sure you're writing content, or asking the AI to help you with content, for those users, whether it's, you know, someone who has a very low level of English or whether it's engineers. You know, if you're building content you want to be consumed by people who've gone through eight years of school in very high-level fields, that's the kind of stuff you want to account for.

VIRGIL 24:21
Wait, Seth, are you talking about like personalization?

SETH 24:24
Yeah.

VIRGIL 24:25
So that's so weird in that. So I don't think that was even a setup, was it, Cole? That was just like there.

COLE 24:32
I think Seth is very intentional with this intentional AI.

VIRGIL 24:37
It's intentional AI.

SETH 24:38
Yep, Well, I've just got ChatGPT telling me what to say over here.

VIRGIL 24:42
Yeah, right, yep, in that. So, Cole, I guess that does lead into what our next episode is going to be, which is on what?

COLE 24:50
It's on content personalization.

VIRGIL 24:52
Content personalization.

SETH 24:53
There you go.

COLE 24:54
Well, look forward to that one, everyone.

VIRGIL 24:56
Yeah, in that. Thank you all for joining.

COLE 25:02
Thanks, everyone.

VIRGIL 25:08
Just a reminder, we'll be dropping new episodes every two weeks. If you enjoyed the discussion today, we would appreciate it if you hit the like button and leave us a review or comment below. And to listen to past episodes or be notified when future episodes are released, visit our website at www.discussingstupid.com
and sign up for our e-mail updates. Not only will we share when each new episode drops, but also we'll be including a ton of good content to help you in discussing stupid in your own organization. Of course, you can also follow us on YouTube, Apple Podcasts, Spotify, or SoundCloud, or really any of the other favorite podcast platforms we might use. Thanks again for joining, and we'll see you next time.
