Case study: improving AEO score from 30 to 85
The starting point: an invisible website
When this B2B software company from Utrecht first had its website scanned for AI readiness, the score was disappointing: 30 out of 100. The website had a modern design and attracted reasonable organic traffic via Google, but it was virtually invisible to AI models. ChatGPT did not mention the company in response to relevant questions about its niche. Perplexity referred users to competitors. Google AI Overviews drew content from other sources.
The causes were diverse but not unusual. The website completely lacked Schema.org markup. There was no llms.txt file present. The robots.txt blocked several AI crawlers. The content was primarily written for human visitors, with no attention to machine readability. The heading hierarchy was inconsistent and FAQ sections were absent.
This profile matches what we see with the majority of Dutch business websites. As we describe in our article about AEO and why it matters, the gap between SEO readiness and AI readiness is still large for most organizations. The good news: that gap can be closed in a relatively short time with a structured approach.
The company in this case study is a SaaS provider with approximately 50 employees, active in the supply chain sector. For confidentiality reasons, we do not use their real name. All figures and results are authentic.
Weeks 1 to 3: laying the technical foundation
The first three weeks were entirely dedicated to technical optimization. The team started with the most accessible improvements that would have the greatest impact on the AI readability of the website.
Opening robots.txt for AI crawlers
The existing robots.txt blocked GPTBot, ClaudeBot and PerplexityBot. This was likely a default setting from their CMS bot protection plugin. The team replaced the blocking rules with explicit permission for the most important AI crawlers, with rate limiting to prevent overload.
```
# Old robots.txt (blocked AI)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

```
# New robots.txt (welcomes AI)
User-agent: GPTBot
Allow: /
Crawl-delay: 2

User-agent: ClaudeBot
Allow: /
Crawl-delay: 2

User-agent: PerplexityBot
Allow: /
Crawl-delay: 2
```

For a complete overview of how to configure robots.txt for AI, we refer to our article about robots.txt for AI. Correctly setting up this file was the first and most direct step toward AI visibility.
Implementing Schema.org markup
The team implemented Organization schema on the homepage, Article schema on all blog posts and FAQPage schema on the five most visited product pages. The markup was validated with the Google Rich Results Test and the Schema Markup Validator.
The impact of structured data on AI citations is often underestimated. In our article about Schema.org markup, we explain how this structured data functions as a universal language between your website and AI models.
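As an illustration, the Organization schema the team placed on the homepage might look like the sketch below, embedded in a script tag of type application/ld+json in the page head. All names and URLs are placeholders, not the company's actual data:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example SaaS B.V.",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "description": "SaaS provider for supply chain optimization.",
  "sameAs": [
    "https://www.linkedin.com/company/example-saas"
  ]
}
```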
Creating an llms.txt file
The team created an llms.txt file in the root of the website with a concise description of the company, its core expertise and the most important pages. This file functions as a business card for AI models and helps them quickly understand what the website is about.
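An llms.txt file is a plain Markdown file placed in the site root. A minimal sketch of the kind of file the team created, with purely illustrative contents rather than the company's actual file:

```markdown
# Example SaaS B.V.

> SaaS provider for supply chain optimization, serving B2B customers
> in the Netherlands and beyond.

## Key pages

- [Product overview](https://www.example.com/product): what the platform does
- [Blog](https://www.example.com/blog): articles on supply chain optimization
- [About us](https://www.example.com/about): team, expertise and company history
```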
Weeks 4 to 6: improving content structure
With the technical foundation in place, the focus shifted to the content itself. The team identified three priority areas: heading hierarchy, FAQ sections and internal link structure.
- All pages received a consistent heading hierarchy with exactly one H1, logical H2 sections and H3 subsections where needed.
- The ten most important product pages and blog posts each received a FAQ section with five to eight frequently asked questions, complete with FAQPage schema.
- The internal link structure was strengthened with context-rich anchor texts that help AI models understand the relationships between pages.
- Each blog post received a structured summary at the end with the five key takeaways.
- The readability score of all core pages was improved to a Flesch reading ease score above 50.
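The FAQ sections described in the list above pair visible questions and answers with FAQPage markup. A hedged sketch of what such markup can look like, with question and answer text invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Which systems does the platform integrate with?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The platform offers integrations with common ERP and WMS systems."
      }
    }
  ]
}
```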
Improving the heading hierarchy had a surprisingly large effect. As we explain in our article about heading hierarchy for humans and machines, AI models use headings as primary navigation through your content. An inconsistent hierarchy can cause entire sections to be skipped.
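The "exactly one H1, no skipped levels" rule can be checked automatically. A minimal sketch using only the Python standard library; the function name check_headings is our own invention, not a tool the team used:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collects h1-h6 heading levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Matches h1 through h6, but not e.g. hr
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

def check_headings(html):
    """Return a list of problems: wrong H1 count or skipped heading levels."""
    parser = HeadingCollector()
    parser.feed(html)
    problems = []
    h1_count = parser.levels.count(1)
    if h1_count != 1:
        problems.append(f"expected exactly one h1, found {h1_count}")
    for prev, cur in zip(parser.levels, parser.levels[1:]):
        if cur > prev + 1:  # e.g. an h2 followed directly by an h4
            problems.append(f"heading level jumps from h{prev} to h{cur}")
    return problems
```

Running this across all pages in a sitemap quickly surfaces the inconsistent hierarchies described above.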
Start by improving your best-performing pages. These have already proven traffic and relevance. Making them AI-ready increases the chance of quick results that strengthen support for further optimization.
Weeks 7 to 9: strengthening E-E-A-T signals
The third phase focused on strengthening the expertise and authority signals that AI models use to evaluate sources.
- Each blog post received an author page with bio, photo, LinkedIn profile and relevant certifications.
- The company published three in-depth whitepapers on supply chain optimization, each with extensive source citations.
- Existing customer cases were rewritten with concrete figures and measurable results instead of vague testimonials.
- The "About us" page was expanded with company history, team expertise and industry associations.
- Each product page received a "Research and methodology" section describing the scientific basis of the software.
E-E-A-T optimization is not a one-time action but an ongoing process. In our article about E-E-A-T optimization for AI, we describe how you can systematically strengthen each of the four pillars.
Weeks 10 to 12: measuring, adjusting and results
The final three weeks were dedicated to monitoring and fine-tuning. The team used weekly AEO scans to measure progress and adjust priorities.
The score progression
Week 0: AEO score 30 | Agent Readiness 22 | Total 27
Week 3: AEO score 48 | Agent Readiness 55 | Total 51
Week 6: AEO score 62 | Agent Readiness 68 | Total 64
Week 9: AEO score 74 | Agent Readiness 78 | Total 76
Week 12: AEO score 85 | Agent Readiness 82 | Total 84
Biggest jumps:
- Robots.txt + llms.txt (weeks 1-2): +18 points Agent Readiness
- Schema.org markup (weeks 2-3): +12 points AEO score
- FAQ sections + heading fix (week 5): +10 points AEO score
- E-E-A-T strengthening (weeks 8-9): +8 points AEO score

Concrete results after twelve weeks
- ChatGPT cited the company for 4 out of 10 tested niche questions, compared to 0 out of 10 before optimization.
- Perplexity referenced the website for 6 out of 15 relevant queries.
- Google AI Overviews showed content from the website for 3 supply chain related searches.
- Organic traffic to the blog increased by 34% due to the improved content structure.
- Average time on page increased by 28%, partly due to the FAQ sections and improved readability.
The mistakes they made (and what you can avoid)
The process was not flawless. The team made several missteps you can learn from.
In the first week, the team implemented Schema.org markup with a WordPress plugin without validating the output. The plugin generated invalid JSON-LD with missing required fields. It took two weeks before someone noticed this via the Google Rich Results Test. Lesson: always validate markup immediately after implementation.
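Part of that validation can be scripted: before relying on an external tool, a check can at least confirm that emitted JSON-LD parses and carries the fields a type needs. A minimal sketch, where the required-field lists are simplified illustrations rather than the full Schema.org or Google requirements:

```python
import json

# Simplified per-type required fields, for illustration only.
REQUIRED_FIELDS = {
    "Organization": ["name", "url"],
    "Article": ["headline", "author", "datePublished"],
    "FAQPage": ["mainEntity"],
}

def validate_jsonld(raw):
    """Return a list of problems with a JSON-LD string, or [] if it passes."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    schema_type = data.get("@type")
    if not schema_type:
        problems.append("missing @type")
    for field in REQUIRED_FIELDS.get(schema_type, []):
        if field not in data:
            problems.append(f"{schema_type} is missing required field '{field}'")
    return problems
```

A check like this in the publishing workflow would have caught the plugin's missing fields on day one instead of week two.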
Additionally, the team initially focused on adding FAQ sections to all pages simultaneously. This led to superficial questions and answers that added little value. After course correction, they focused on five core pages with in-depth, relevant FAQs. Lesson: quality over quantity, especially with FAQ content.
Finally, the team forgot to update internal links after rewriting pages. Old anchor texts referenced outdated sections that no longer existed. Lesson: treat internal links as part of every content update.
The biggest gains came not from a single measure, but from the combination of technical optimization, content improvement and E-E-A-T strengthening. Each element reinforces the others.
Dive deeper: Schema.org markup: the language AI understands | E-E-A-T optimization for AI | Google AI Overviews and search results
Key takeaways
- An AEO score improvement from 30 to 85 is achievable in twelve weeks with a structured approach combining technology, content and authority.
- The quickest wins come from technical improvements: opening robots.txt, implementing Schema.org and creating an llms.txt file.
- Content improvements such as heading hierarchy, FAQ sections and summaries form the second wave of score improvement.
- E-E-A-T signals such as author pages, source citations and concrete cases strengthen the trust AI models have in your website.
- Validate every technical change immediately and focus on quality over quantity in content optimization.
Frequently asked questions
How long does it take for AEO improvements to take effect?
Technical improvements such as robots.txt and Schema.org markup take effect within days to weeks, as AI crawlers regularly revisit your website. Content improvements have a longer lead time of two to six weeks, depending on how quickly AI models re-index your changed content. E-E-A-T signals build gradually and typically show their full effect after eight to twelve weeks.
Is a score of 85 the maximum achievable?
No, a score of 100 is theoretically possible but exceptional in practice. Most well-optimized websites score between 75 and 90. A score above 85 requires virtually perfect technical implementation, excellent content and strong E-E-A-T signals. The difference between 85 and 95 typically requires disproportionately more effort than the difference between 30 and 85.
Can we replicate these results without technical knowledge?
Most technical steps are feasible with basic CMS knowledge and the right plugins or tools. Schema.org markup can be added via plugins. Adjusting robots.txt is a simple file change. The content-related improvements require no technical knowledge but do require editorial discipline. For more complex implementations such as custom JSON-LD or automated validation, technical support is recommended.
What if my website runs on a different CMS than WordPress?
The principles are CMS-independent. Whether you use WordPress, Shopify, Drupal, a headless CMS or a custom solution, the fundamental steps remain the same: configure robots.txt, implement Schema.org markup, structure content and strengthen E-E-A-T signals. The specific implementation differs per platform, but the strategy is universally applicable.
How do we measure whether AI models actually cite our website?
There are several methods to monitor AI citations. The simplest is to regularly ask relevant questions to ChatGPT, Perplexity and Gemini and check whether your website is mentioned as a source. More advanced monitoring is possible through tools that automatically track AI citations. Additionally, you can search your server logs for visits from GPTBot, ClaudeBot and PerplexityBot to verify that your website is being crawled.
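The server-log check from the last method can be scripted. A minimal sketch that counts hits per AI crawler by matching user-agent substrings in access-log lines; the bot names are those discussed in this case study, and you may need to adjust the matching to your own log layout:

```python
from collections import Counter

# User-agent substrings of the AI crawlers discussed in this case study.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def count_ai_crawler_hits(log_lines):
    """Count log lines whose user-agent string mentions a known AI crawler."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_CRAWLERS:
            if bot in line:
                hits[bot] += 1
    return hits
```

Feed it an open access-log file line by line; a nonzero count per bot confirms that crawler is actually visiting your site.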
The journey from an AEO score of 30 to 85 is not a sprint but a structured march. Each step builds on the previous one and the cumulative effect exceeds the sum of its parts.
How does your website score on AI readiness?
Get your AEO score within 30 seconds and discover what you can improve.