Intent: an experiment in GEO
GEO (generative engine optimisation) is a new discipline. There are still a lot of unknowns, and for every deep expert there are a thousand of us in real marketing roles thinking some variation of the following:
What the heck, I already log into 10 tracking tools a day.
How the hell am I supposed to prove business value on this thing?
Alright, but how do I actually see results so I can change them?
I’m not yet a GEO expert. In fact, this case study is really about noodling around with one new tool, which you can try out for free, on one campaign. I learned a lot from it and got some very promising results. If you’re already deep in GEO, this may feel pretty elemental - this is for those of you wondering where to start.
Bonus: you don’t have to run any of the tools I’m going to talk about on company servers or share sensitive data. Even if your company is still working through governance questions around AI, this is completely doable.
The business problem was intent
This all started with a classic business problem: how to get more high-intent traffic?
For spatial data supplier Informed Decisions (.id), I didn’t need to find more traffic in general. .id already had an exceptionally high-volume website - it was barely worth counting how many visitors arrived each day, because the volume was already outpacing competitors by the thousands.
The real issue was conversion. How could we identify, and then increase, the number of high-intent searches that surfaced us as the experts rather than a competitor?
I set up a separate program of work for SEO, which is a related case study for another day. Then I turned to GEO.
Business value moment: LLM searches, whether they happen in ChatGPT, Claude, Google AI Overviews or somewhere else, tend to be high intent. People instinctively turn to LLMs when their queries are complex. Given that .id’s core business is deep technical insight and complex data, those were exactly the searches I wanted to capture and increase.
We were exceptionally lucky to be among the first users of Obsero, an AI insights platform. This isn’t sponsored, by the way, but they are awesome, and as early users we picked up some pointers that I’m passing on here.
We also had a major campaign about to launch: a deep dive into the state of aged care in Australia. There was one major gated asset, the report itself, which meant LLMs wouldn’t be able to access it directly.
It was the perfect time to take Obsero for a spin - to try to measure, and then increase, the number of high-intent aged care planning searches for which we were cited.
Defining and prepping the experiment
Building my to-do list was fairly straightforward.
Define the queries I wanted to track
This meant identifying the queries most likely to reflect genuine aged care planning intent. I won’t say I did this perfectly: I had some top of mind, I asked the marketing and sales teams, I grabbed some context from our SEO tools and a variety of strategy docs. .id wasn’t rich in tracked customer insights - if I’d had those I would have used them. But there was enough to start with.
Set a baseline
I ran those queries in Obsero over seven days. At the start of the campaign, .id was sitting at an average position of eight on the leaderboard for my “Aged care planners” GEO set.
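If you want to track a baseline like this yourself, the arithmetic is simple enough to do in a few lines. This is a minimal sketch, assuming you can get daily leaderboard positions per query out of your tracking tool (Obsero’s actual export format will differ, and the queries and numbers below are made up for illustration):

```python
from statistics import mean

# Hypothetical daily leaderboard positions over seven days, one list
# per tracked query. These are placeholder values, not real Obsero data.
daily_positions = {
    "best aged care planning data": [9, 8, 8, 10, 7, 8, 9],
    "aged care demand forecasts":   [7, 8, 9, 8, 8, 7, 8],
}

def baseline(positions_by_query):
    """Average position per query, plus an overall average for the set."""
    per_query = {q: mean(p) for q, p in positions_by_query.items()}
    overall = mean(per_query.values())
    return per_query, overall

per_query, overall = baseline(daily_positions)
print(f"overall baseline position: {overall:.1f}")
```

Averaging over a full week smooths out the day-to-day jitter that, as noted later, makes any single day’s numbers unreliable.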
Identify high-performing content
I pulled the top 10 performing URLs for everyone who outranked me in positions 1-7. Bonus: when I explained this use case, the Obsero team kindly built in a bulk export function, and now it’s available for everyone. You can just pull out all the content that works best in one go.
Analyse what was working
I ran analysis to see what I could copy - or better - from the best-performing stuff. I built a custom agent for this, but you could absolutely just ask ChatGPT or Copilot to pull out the patterns you want to understand. To be honest, if you’ve worked in content and SEO, your own eye can do a lot of the heavy lifting here. Not surprisingly, the best performers were well structured, with clear headings and timely data. They answered very specific queries for specific audiences - perfect for agentic fan-out.
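For anyone who’d rather not build an agent, even a crude script can surface the structural signals mentioned above (headings, tables, paragraph length). A minimal sketch using only the Python standard library - the thresholds and the sample page are illustrative, not anything Obsero produces:

```python
from html.parser import HTMLParser

class StructureProfile(HTMLParser):
    """Rough structural profile of a page: heading count, table and
    list presence, and the longest paragraph in words. These mirror
    the kinds of patterns seen in the best-performing sources."""
    def __init__(self):
        super().__init__()
        self.headings = 0
        self.tables = 0
        self.lists = 0
        self.longest_paragraph = 0
        self._in_p = False
        self._p_words = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4"):
            self.headings += 1
        elif tag == "table":
            self.tables += 1
        elif tag in ("ul", "ol"):
            self.lists += 1
        elif tag == "p":
            self._in_p, self._p_words = True, 0

    def handle_data(self, data):
        if self._in_p:
            self._p_words += len(data.split())

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False
            self.longest_paragraph = max(self.longest_paragraph, self._p_words)

# Placeholder competitor page, not a real URL's content.
page = "<h2>Aged care demand</h2><p>Short, direct answer first.</p><table><tr><td>2024</td></tr></table>"
profile = StructureProfile()
profile.feed(page)
print(profile.headings, profile.tables, profile.longest_paragraph)
```

Run something like this over the bulk-exported top URLs and you can compare your own pages against the field at a glance.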
To-do list for the GEO experiment
The aim was simple: make it easier for LLMs to understand that we were a high-quality source of information on aged care. I put together a practical set of actions for the campaign period:
Publish fresh case studies
These were deliberately structured for LLM readability, with the main point up front, clear headings, a specific use case well articulated, FAQs, standalone paragraphs and rich, well-structured data. And of course, they provided rich content for our ‘regular’ newsletter and socials.
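One concrete, machine-readable way to express the FAQs mentioned above is schema.org FAQPage structured data, which both search engines and LLM crawlers can parse. A minimal sketch that builds the JSON-LD from Q&A pairs - the questions here are placeholders, not .id’s actual copy, and I’m not claiming this is exactly how the campaign pages were marked up:

```python
import json

# Placeholder Q&A pairs standing in for the campaign's real FAQs.
faqs = [
    ("How is aged care demand changing in Australia?",
     "Demand is shifting with the ageing profile of each local area."),
    ("Where can planners find small-area aged care data?",
     "Small-area forecasts are available for individual local government areas."),
]

# schema.org FAQPage: one Question entity per pair, each with an Answer.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

# Embed the output on the page inside <script type="application/ld+json">.
print(json.dumps(faq_jsonld, indent=2))
```

The same “specific question, standalone answer” shape that works in the visible copy works here too.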
There’s a delicate balance, I think, between planning for agents and writing for humans. My approach for SEO has always been to start with the target keywords in mind, write for humans, then update and optimise. GEO adds a new structural challenge, but the same principles apply.
Rework high-potential existing articles
We had a lot of older content. Obsero showed me which older pages were already showing up as sources, and I updated them - improving clarity and structure, updating product references, and adding campaign CTAs.
De-gate parts of the gated content
This was a tricky one. Our campaign model needed the main asset to stay behind an email gate, but LLMs couldn’t access it and therefore had no way of understanding our expertise. In a separate project, I’ve developed a suite of content ops agents, including a specialist in article creation. I gave this agent the task of pulling three to four standalone articles from the report. (You could do this manually; it would just be a very long task.)
No one can lift the whole story from our site, but this de-gated content helps signal our authority to humans, search engines and LLMs.
Secure trusted PR mentions
We targeted coverage tied to aged care, ageing, retirement, development and planning.
One surprising takeaway was that the value of tiered coverage has shifted a bit in this environment. We absolutely wanted tier one media, but that content is often gated. The big movers were specialist publications - which makes sense, because LLMs interpret “Publication for aged care planners + specific and new data on aged care planning = useful to someone asking about aged care planning”.
Outlets that ran our release more or less wholesale, including the data tables, also noticeably increased our LLM visibility. This is the end, I think, of Tier One Or Die.
Build LinkedIn and newsletter content
This helped reinforce expertise and strengthen the wider content ecosystem around the campaign.
Improve related YouTube content
This sounds like a much bigger job than it was. In reality, some very simple changes made a difference. I had no capacity to create new video content for this campaign, but Google uses YouTube metadata heavily for AI Overviews. 14-year-old videos were appearing in Google’s AI Overviews - great! But also eek! - so I replaced them with fresher pieces like our Living in Australia webinar, which we already had tucked into our HubSpot files. The impact was immediate.
A note - we had some advantages and limitations that shaped this list. On the tricky side, we had a very small team and no capacity to create further big assets beyond the campaign report. But we also had license to play with AI at will (I’d already had the green light to build out my content ops program), PR support from the legends at Dentsu, and subject matter experts in-house who created fresh, high-value data for the campaign.
In a more typical organisation with less fresh data, I’d probably have gone harder on things like Instagram, a YouTube series featuring invited experts, third-party endorsements and Google reviews, and/or influencers.
Results
Please note - these results should be treated as directional, not absolute. Performance moves around day to day. Even so, the movement was strong enough, and it’s now lasted long enough, to suggest the changes we made were positive.
After two weeks in market, .id had moved up three positions on the leaderboard, and had overtaken all our commercial competitors for the relevant terms.
Visibility share over those two weeks rose from 2% to 9%. And citation frequency also increased sharply. Early on, we were behind several highly trusted government platforms. At the time of writing this, we were the second most highly cited source in the category - behind only the Australian Bureau of Statistics.
Most importantly, the pages that were now surfacing weren’t random. The campaign report page began appearing for relevant searches, suggesting LLMs were learning that it contained useful category information. Even better, our product pages have also started surfacing - a clear sign that the work is now influencing commercially relevant discovery.
Thoughts - and what I’d do differently
You’ll notice that although the to-do list looks a little different to a classic SEO play, the golden rules of content have stayed the same more than they’ve changed. LLMs still reward content that is genuinely useful, well structured and clearly relevant. AI search highlights the value of authority, formatting and topical depth. And it makes it obvious when commercially important pages aren’t doing enough to explain themselves.
If I were extending this experiment:
I’d sharpen the query set using richer inputs like sales calls, calls for tender and email conversations. Since this experiment ran, we’ve been working to pair HubSpot’s AI tools with call recordings - surfacing better data on what our customers ask and how they ask it.
I’d also work third-party references in more deliberately, especially on platforms like Reddit.
I’d put a lot more into YouTube and other social platforms. For us this was a capacity question, but if I had it I’d use it. Even our little LinkedIn newsletter had an impact.
And I’d ask my most aligned customers to review us somewhere other than our site, speaking specifically to their issues and how we helped solve them.
But the core lesson will stay the same.
The conversation around GEO is full of noise right now. New acronyms, new claims, new shortcuts. This experiment was fun and new, but also a reminder that the useful work in content is still the same work. Understand intent, publish something genuinely valuable, and then make it easy to find, trust and act on.
People who made this possible
Andrew Hedge was the main man on campaign development, and a hugely intelligent sounding board to make sure the test supported the core business goals.
Kassandra Humphreys saw the value of this work early, and oversaw the test in play with intelligent questions.
Madeleine Page, Sarah Male, Ebony Santin and Liam Jensen at Dentsu PR put boots on the ground to gain serious media coverage over all tiers - learning with us and changing their strategy as the campaign evolved.
Mike Logue got us started in Obsero and provided tips and technical support - even upgrading the tool as we worked.
And Natalie Field gave us not only license to play - but a mandate - and a budget!