Google’s John Mueller pointed to AI-generated images used to illustrate articles as an example of low-effort content that looks good but lacks true expertise. His comments pushed back against the idea that low-effort content is acceptable just because it has the appearance of competence.
One signal that tipped him off to low-quality articles was the use of dodgy AI-generated featured images. He didn’t suggest that AI-generated images are a direct signal of low quality. Instead, he described his own “you know it when you see it” perception.
Comparison With Actual Expertise
Mueller’s comment cited the content practices of actual experts.
He wrote:
“How common is it in non-SEO circles that “technical” / “expert” articles use AI-generated images? I totally love seeing them [*].
“[*] Because I know I can ignore the article that they ignored while writing. And, why not, should block them on social too.”
Low Effort Content
Mueller next called out low-effort work that results in content that “looks good.”
He followed up with:
“I struggle with the “but our low-effort work actually looks good” comments. Realistically, cheap & fast will reign when it comes to mass content production, so none of this is going away anytime soon, probably never. “Low-effort, but good” is still low-effort.”
This Is Not About AI Images
Mueller’s post is not about AI images; it’s about low-effort content that “looks good” but really isn’t. Here’s an anecdote to illustrate what I mean. I saw an SEO on Facebook bragging about how great their AI-generated content was. So I asked if they trusted it for generating Local SEO content. They answered, “No, no, no, no,” and remarked on how poor and untrustworthy the content on that topic was.
They didn’t explain why they trusted the other AI-generated content. I just assumed they either didn’t make the connection or had the content checked by an actual subject matter expert and didn’t mention it. I left it there. No judgment.
Should The Standard For Good Be Raised?
ChatGPT has a disclaimer warning against trusting it. So, if AI can’t be trusted for a topic one is knowledgeable in and it advises caution itself, should the standard for judging the quality of AI-generated content be higher than simply looking good?
Screenshot: AI Doesn’t Vouch for Its Trustworthiness – Should You?
ChatGPT Recommends Checking The Output
The point, though, is that it may be difficult for a non-expert to discern the difference between expert content and content designed to resemble expertise. AI-generated content is expert at the appearance of expertise, by design. Given that even ChatGPT recommends checking what it generates, it might be useful to have an actual expert review that content kraken before releasing it into the world.
Read Mueller’s comments here:
I struggle with the “but our low-effort work actually looks good” comments.
Featured Image by Shutterstock/ShotPrime Studio