Intent: an Experiment in GEO
GEO is a new discipline. There are still a lot of unknowns, and for every deep expert there are a thousand of us in real marketing roles thinking some variation of the following:
What the heck, I already log into 10 tracking tools a day.
How the hell am I supposed to prove business value on this thing?
Alright, but how do I actually see results so I can change them?
I’m not a GEO expert. In fact, this case study is really about noodling around with one new tool, which you can try out for free, on one campaign. I learned a lot from it and got some very promising results. If you’re already deep in GEO, this may feel pretty elemental - this is for those of you wondering where to start.
Bonus: you don’t have to run any of the tools I’m going to talk about on company servers or share sensitive data. If your company is still working through governance questions around AI, this is still completely doable.
The business problem was intent
This all started with a classic business problem: how do you get more high-intent traffic?
I didn’t need more traffic in general: .id already had an exceptionally high-volume website. It barely mattered how many visitors arrived each day, because volume was already outpacing competitors by the thousands. The real issue was conversion. How could we identify, and then increase, the number of people whose high-intent searches would lead them to us rather than a competitor?
I set up a separate program of work for SEO, which is a related case study for another day. Then I turned to GEO.
LLM searches, whether they happen in ChatGPT, Claude, Google AI Overviews or somewhere else, tend to be high intent. People instinctively turn to LLMs when their queries are complex. Given that .id’s core business is deep technical insight and complex data, those were exactly the searches I wanted to capture.
Defining and prepping the experiment in Obsero
We were exceptionally lucky to be among the first users of Obsero, an AI insights platform. This isn’t sponsored, by the way, but they are awesome, and as early users we were lucky enough to get some pointers that I’m passing on here.
We also had a major campaign about to launch: a deep dive into the state of aged care in Australia. There was one major gated asset, the report itself, which meant LLMs wouldn’t be able to access it directly.
It was the perfect time to take Obsero for a spin.
Building my to-do list was fairly straightforward:
Define the terms I cared about
This meant identifying the queries most likely to reflect genuine aged care planning intent. I won’t say I did this perfectly: I had some top of mind, I asked the marketing and sales teams, and I grabbed some context from our SEO tool and Google Search Console. But I had enough to start with, and I plugged those into Obsero to start tracking how often people searched for them and whether LLMs were using our resources to answer.
Set a baseline
I ran those queries in Obsero over seven days. At the start of the campaign, .id was sitting at an average position of eight on the leaderboard for this aged care GEO set.
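Obsero handles this tracking properly, but if you want a feel for the mechanics before committing to a platform, a few lines of Python can approximate a single day’s check. This is a rough, hypothetical sketch - not how Obsero works - assuming the OpenAI Python client and its Responses API web search tool; the queries and domain are placeholders, and a simple substring match will miss plenty:

```python
# Rough DIY visibility probe - an illustration, not how Obsero works.
# Assumes the OpenAI Python client with OPENAI_API_KEY set; the queries
# and domain below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

QUERIES = [
    "What is the state of aged care in Australia?",
    "Which Australian regions have the fastest-ageing populations?",
]
OUR_DOMAIN = "example.com.au"  # swap in your own site

for query in QUERIES:
    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],  # let the model search the web
        input=query,
    )
    answer = response.output_text
    status = "cited" if OUR_DOMAIN in answer else "absent"
    print(f"{status:>6} | {query}")
```

Run something like this daily over a week and you have a crude, free baseline. The real value of a platform is doing this at scale, consistently, with a leaderboard.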
Identify high-performing content
Using Obsero, I pulled the top 10 performing URLs for everyone who outranked me in positions 1-7. Bonus: the Obsero team kindly built in a bulk export function when I explained this use case, and now it’s available for everyone. You can just pull out the content that works in one go.
Analyse what was working
I ran analysis to see what I could copy - or better - from the best-performing stuff. The content that performed best was usually clearly structured, rich in data, easy to scan and well referenced.
I built a custom agent for this, but you could absolutely just ask ChatGPT or Copilot to pull out the patterns you want to understand. To be honest, if you’ve worked in content and SEO, your own eye can do a lot of the heavy lifting here.
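To make the “just ask ChatGPT” route concrete, here’s a minimal sketch assuming the OpenAI Python client and a folder of pages pulled from that bulk export - the folder name and prompt wording are placeholders:

```python
# Minimal sketch: ask an LLM to surface the structural patterns shared by
# top-performing pages. Assumes the bulk-exported pages sit as plain-text
# files in ./exports/ - a hypothetical path.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

pages = [p.read_text() for p in Path("exports").glob("*.txt")]

prompt = (
    "These pages are frequently cited by LLMs for aged care queries. "
    "Identify the patterns they share - heading structure, data density, "
    "referencing style, scannability - and return a checklist I can "
    "apply to my own content.\n\n" + "\n\n---\n\n".join(pages)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```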
Running the GEO experiment
The aim was simple: make it easier for LLMs to understand that we were a high-quality source of information on aged care.
I put together a practical set of actions for the campaign period.
Here’s what we did:
Publish fresh case studies
These were deliberately written and structured for agent readability, with clear headings, an FAQ and rich, well-structured data.
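One concrete, widely understood way to deliver that structure is schema.org FAQPage markup. Here’s a short sketch of generating the JSON-LD in Python; the questions and answers are placeholders, not our actual copy:

```python
# Sketch: build schema.org FAQPage JSON-LD for a case study page.
# The Q&A pairs are placeholders - use your page's real FAQ copy.
import json

faqs = [
    ("How is demand for aged care changing in Australia?",
     "A short, self-contained answer with the key figure up front."),
    ("What data sources does the analysis use?",
     "Name the sources plainly so readers and LLMs can verify them."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(schema, indent=2))
```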
Rework high-potential existing articles
We had a lot of older content. Obsero showed me which older pages were already showing up as sources, and I updated them - improving clarity and structure, updating product references, and adding campaign CTAs.
De-gate parts of the gated content
This was a tricky one. Our campaign model needed the main asset to stay behind an email gate, but LLMs couldn’t access it and therefore had no way of understanding our expertise. I created an agent whose role is to pull three to four standalone articles from reports. No one can lift the whole story from our site, but this de-gated content helps signal our authority.
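The agent itself is nothing exotic. Here’s a rough, hypothetical sketch of the core step, assuming pypdf for extraction and the OpenAI client for drafting - the filename and page range are placeholders, and the output still needs a human edit pass:

```python
# Rough sketch of the de-gating step: lift one self-contained section from
# the gated report and have an LLM draft a standalone article from it.
# "report.pdf" and the page range are hypothetical placeholders.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()

reader = PdfReader("report.pdf")
# Pull a section that stands on its own, e.g. pages 12-15 of the report.
section = "\n".join(reader.pages[i].extract_text() for i in range(11, 15))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Turn this report section into a standalone article: clear "
            "headings, key data preserved, and a closing CTA pointing to "
            "the full gated report. Do not summarise the whole report.\n\n"
            + section
        ),
    }],
)
print(response.choices[0].message.content)
```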
Secure trusted PR mentions
We targeted coverage tied to aged care, ageing, retirement, development and planning. One surprising takeaway was that the value of tiered coverage has shifted a bit in this environment. We absolutely wanted tier one media, but that content is often gated. Outlets that ran our release more or less wholesale, including the data tables, noticeably increased our LLM visibility.
Build LinkedIn and newsletter content
This helped reinforce expertise and strengthen the wider content ecosystem around the campaign.
Improve related YouTube content
This sounds like a much bigger job than it was. In reality, some very simple changes made a difference. I had no capacity to create new video content for this campaign, but Google uses YouTube metadata heavily as a trust signal. I found 14-year-old videos appearing in search, so I replaced them with fresher pieces like our Living in Australia webinar, which we already had - they were just tucked away in our HubSpot files. The impact was immediate.
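If you have more than a handful of videos, the metadata refresh itself can be scripted. A hedged sketch using the YouTube Data API v3 - the video ID, title and tags are placeholders, and you’d need OAuth credentials with the YouTube scope:

```python
# Sketch: refresh metadata on an existing video via the YouTube Data API v3.
# The video ID and snippet text are placeholders; `creds` must be OAuth
# credentials authorised for https://www.googleapis.com/auth/youtube.
from googleapiclient.discovery import build

def refresh_metadata(creds, video_id: str) -> dict:
    youtube = build("youtube", "v3", credentials=creds)
    return youtube.videos().update(
        part="snippet",
        body={
            "id": video_id,
            "snippet": {
                "title": "State of aged care in Australia | webinar",
                "description": "Updated description linking to the campaign page.",
                "tags": ["aged care", "Australia", "demographics"],
                # Snippet updates must include a categoryId; 27 = Education.
                "categoryId": "27",
            },
        },
    ).execute()
```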
We had some advantages and limitations that impacted what I could choose for this list. On the tricky side, we had a very small team, and no capacity to create further big assets beyond the campaign report. But we also had license to play with AI at will, PR support from the legends at Dentsu, and subject matter experts in-house who created fresh, high value data for the campaign.
Results
Please note - these results should be treated as directional, not absolute. Performance moves around day to day. Even so, the movement was strong enough, and it’s now lasted long enough, to suggest the changes we made were positive.
After two weeks in market, .id had moved up three positions on the leaderboard, and had overtaken all our commercial competitors for the relevant terms.
Visibility share rose from 2% to 9%, and citation frequency also increased sharply. Early on, we were behind several highly trusted government platforms. Within two weeks, .id had become the second most highly cited source in the category - at the time of writing it’s behind only the Australian Bureau of Statistics.
Most importantly, the pages that were now surfacing weren’t random. The campaign report page began appearing for relevant searches, suggesting LLMs were learning that it contained useful category information. Even better, our ‘how we help’ page also started surfacing - a clear sign that the work was influencing commercially relevant discovery.
Thoughts - and what I’d do differently
You’ll notice that although the to-do list looks a little different to a classic SEO play, the golden rules of content have stayed the same more than they’ve changed. LLMs aren’t perfect, but they still reward content that is genuinely useful, well structured and clearly relevant. AI search highlights the value of authority, formatting and topical depth. And it makes it obvious when commercially important pages aren’t doing enough to explain themselves.
If I were extending this experiment:
I’d sharpen the query set using richer inputs like sales calls, calls for tender and email conversations. Since this experiment ran, we’ve been working to pair HubSpot’s AI tools with call recordings - surfacing better data on what our customers ask.
I’d also work third-party references in more deliberately, especially on platforms like Reddit.
I’d put a lot more into YouTube and other social platforms - for us this was a capacity question, but if I had it I’d use it. Even our little LinkedIn newsletter had an impact.
And I’d see if I could gather a wider range of third-party endorsements on pages we don’t host.
But the core lesson will stay the same.
The conversation around GEO is full of noise right now. New acronyms, new claims, new shortcuts. This experiment was fun and new, but also a reminder that the useful work in content is still the same work. Understand the intent, publish something genuinely valuable, and then make it easy to find, trust and act on.
People who made this possible
Andrew Hedge was the main man on campaign development, and a hugely intelligent sounding board to make sure the test supported the core business goals.
Kassandra Humphreys saw the value of this work early, and oversaw the test in play with intelligent questions.
Madeleine Page, Sarah Male, Ebony Santin and Liam Jensen at Dentsu PR put boots on the ground to gain serious media coverage across all tiers.
Mike Logue got us started in Obsero and provided tips and technical support - even upgrading the tool as we worked.
And Natalie Field gave us not only license to play - but a mandate - and a budget!