Experts = AI Search juju


Last week, we talked about why the subject matter expert (SME) interview is an important step in the AI-optimized content production process.

This week, I want to go deeper into that and tie it into the larger idea/strategy of reputation engineering (as it relates to AI search visibility).


This newsletter is brought to you by the lovely folks at AirOps.

Your buyers are getting answers from ChatGPT. Are you in them?

AirOps analyzed 15M+ AI queries and turned the findings into a 4-play framework used by teams at Carta, Ramp, and Webflow to earn AI citations, visibility, and pipeline. It includes insights on structural patterns (FAQs, heading hierarchy, schema) that increase citation odds by up to 2.8x, where Reddit, LinkedIn, and YouTube serve as citation sources (and how to show up in them), and a 90-day action plan to help you get started right now.

Sounds nice, huh? Get the Complete AI Search Playbook for Marketers.


Ok. Let’s start by zooming out for a second and looking at how AI search results work and how LLMs actually approach information retrieval.

When an LLM decides what to cite (or surface in an AI summary), it's building a picture of your brand across every signal it can find, including your owned content, third-party mentions, author bios, quotes in other publications, and the topics you consistently show up on.

Every citable piece of content you produce is a step toward constructing that picture. That's reputation engineering.

Done consistently over time, this is how a brand (or individual) starts being the source AI systems default to when a question lands in your category.

Most brands are thinking about AI search one piece of content at a time, but really, they should be thinking about it as a long-term, compounding body of evidence that tells AI systems: here’s who owns this topic.

How to start reputation engineering: Integrate SME interviews strategically

Ok! Now that we’re on the same page about the big-picture theory behind all this, let’s talk about how interviews are part of putting it into practice.

When you’re thinking about reputation engineering as it relates to the content you publish on channels you own (like your blog, Substack, etc.), interviews are a critical piece of building E-E-A-T, which stands for Experience, Expertise, Authoritativeness, and Trustworthiness.

It's a framework Google's quality raters use to evaluate whether content is worth surfacing…and it's become increasingly relevant to AI search because LLMs are trained heavily on content that already ranks well.

Every SME interview conducted with E-E-A-T in mind builds entity signals, aka what AI systems use to understand what your brand knows and who it’s for.

Data backs this up: Digital Bloom's 2025 AI Visibility Report, which analyzed 680 million+ citations, found that adding original quotes from experts can boost AI visibility by 37%.

That means your SME interview isn't just a way to gather quotes from third-party experts to improve the quality and trustworthiness of the content; it’s also a source of entity signals, which are an important piece of AI search.

What's an entity signal?

Entity signals are what AI systems use to decide whether your content is worth citing. They help answer: is this a credible, distinct source of knowledge on this topic?

I think the term “entity signal” is jargony and technical, so I’m going to call them “source signals,” because that’s how journalists think about the third-party experts they tie into their news stories.

Now come join me down in the weeds of LLM information retrieval, will you?

I spent my last semester taking a course that went very in-depth into the systems thinking that drives LLMs (like ChatGPT and Claude) and the technicalities of how they surface information/answers to queries.

Here’s the TL;DR.

When an LLM processes your content, it's not just reading words. It maps relationships and evaluates many factors before arriving at an answer, asking questions like:

  • Who is this?
  • What do they know?
  • Are they expert enough to be trusted as a go-to person in their field?
  • What specific, verifiable claims are they making that I can't find anywhere else?

Building content that includes source signals from high-quality third-party experts helps LLMs quickly understand that you’re applying journalistic rigor to your work by integrating the appropriate expertise (and not just churning out AI slop).

What "source signals" actually look like

When I'm conducting an SME interview, I'm listening for four specific things beyond the obvious quote that will help me build E-E-A-T:

  1. Proprietary data. Numbers, percentages, findings that exist nowhere else. "We analyzed 500 customer accounts and found that…" is citation gold. AI systems love a specific, ownable data point. And so do I.
  2. Named frameworks. Has your subject matter expert developed a way of thinking about a problem that has a name? Even an informal one? “We call this the 'trust gap’” is more citable than a generic explanation of the same concept. (See what I did there with “source signals”? You’re getting it.)
  3. Contrarian positions. Where does your expert disagree with the conventional wisdom in your space? AI systems tend to cite content that takes a clear, defensible stance, backed by experience, data, and actual results that defy best practices.
  4. Lived specificity. Details that could only come from someone who has actually done the thing. Not "content marketing takes time" but "we published 3x a week for 14 months before we saw compounding returns, and here's what month 6 looked like."

Interviews are where you get the good stuff

Most content marketers go into SME interviews with a list of questions designed to address a brief. That's fine for producing pretty standard content.

It's not enough to produce citable content, though.

To get there, instead of asking "what's your approach to X?" you start asking "what do you know about X that most people get wrong?"

Instead of "Can you explain Y?" you ask, "What would you call the framework you use for Y?"

You're not just collecting information. You're excavating the signals that make the content distinct enough for an AI system to treat it as a primary source rather than one of 40 similar articles on the same topic.

The journalism parallel

This is how good journalists have always operated.

The best interviews aren't just information-gathering sessions. They're opportunities to mine for gold.

You go in knowing the general shape of the story, but you're listening for the detail that makes it interesting, unexpected, or surprising. The specific number. The unexpected admission. The reframe that changes how you understand the whole topic.

That instinct, the one that makes a journalist keep pushing until they get the thing that makes the story worth telling, is exactly the instinct that produces the content that LLMs love to cite.

These tools know what authoritative sourcing looks like, and they reward it.

Four questions for building source signals via SME interviews

Before your next SME interview, add these four questions to your prep:

  1. What do you know about this topic that most people in your industry get wrong?
  2. Is there a framework or process you use that has a name, even an internal one?
  3. Do you have any data or findings from your own work that I won't find anywhere else?
  4. What's a specific moment or example that shows this tactic in action and the results it produced?

You don't need all four to land in the final piece, but asking them gives you the raw material to build content that AI search engines favor as a go-to source.


That's all for today, but if you haven't yet, hit reply or find me on LinkedIn and feel free to ask questions about this!

'Til next time,