Ben Young
October 11, 2023

Google is saying it doesn’t matter whether content is written by AI or not; what matters is whether it is helpful.

To dive into that, I asked: what is an objective measure of whether a piece of content is helpful? In this post, I explore some measures, draw some conclusions on measuring helpfulness, look at how Google may tackle it, and cover what you can do about it.

To get started, emojis

For our content at Nudge, we have emoji reactions on each piece: hail, strong and thumbs down. That seemed worth exploring.

Example of Emoji as a feedback mechanism.

I took all the content that had received at least one response and dived in.

There were three groups.

  • Top performers: these had the most positive emoji responses, but they also had negatives.
  • Positive only: these had slightly lower engagement.
  • Negative only: the lowest engagement of the three.

Engagement here is calculated as ‘emoji responses / people who read the content’.
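To make that concrete, here is a minimal sketch of how you might compute that engagement rate and split pieces into the three groups above. The data shape and field names are illustrative assumptions, not Nudge’s actual schema.

```python
# Illustrative sketch: engagement rate and grouping by emoji feedback.
# Field names (positive, negative, readers) are hypothetical, not Nudge's schema.
pieces = [
    {"title": "Post A", "positive": 12, "negative": 3, "readers": 400},
    {"title": "Post B", "positive": 5,  "negative": 0, "readers": 300},
    {"title": "Post C", "positive": 0,  "negative": 4, "readers": 250},
]

for p in pieces:
    responses = p["positive"] + p["negative"]
    p["engagement"] = responses / p["readers"]  # emoji responses / readers
    if p["positive"] and p["negative"]:
        p["group"] = "top performer"            # most responses, mixed signal
    elif p["positive"]:
        p["group"] = "positive only"
    else:
        p["group"] = "negative only"

for p in pieces:
    print(f'{p["title"]}: {p["group"]}, engagement {p["engagement"]:.1%}')
```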

The data

Of the pieces that got a response, the engagement rate was about 5%, which is not bad. The emojis sit at the bottom of our content, so you can assume the responses are a fair representation of how someone felt.

The engagement rate of the top performers was 28% higher than that of the negatives. The bounce rate of the negative-only pieces (as measured by Nudge’s proprietary measure) was 22% higher than that of the positives.
You may recall that Nudge counts a bounce if someone hits the back button, leaves within 5 seconds, and/or isn’t active on the page (no scroll/attention). This provides a much more precise measure of bounce, and helps you optimize for cases where you want the audience to read a page but further engagement isn’t necessary.
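As a rough sketch of that bounce definition (the signals are as described above; the function shape and field names are my assumptions, not Nudge’s implementation):

```python
# Rough sketch of the bounce heuristic described above.
# Function shape and parameter names are assumptions, not Nudge's implementation.
def is_bounce(hit_back_button: bool, seconds_on_page: float,
              scrolled: bool, had_attention: bool) -> bool:
    if hit_back_button:
        return True                      # went straight back to search results
    if seconds_on_page < 5:
        return True                      # left within 5 seconds
    if not (scrolled or had_attention):
        return True                      # stayed, but never engaged with the page
    return False

print(is_bounce(False, 3.0, False, False))  # True: left within 5 seconds
print(is_bounce(False, 40.0, True, True))   # False: stayed and engaged
```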

How about attention? How did the content perform?

The least helpful content had 2 seconds less attention than the highest performers.

And we know from Nudge data that you lose 2.44% of your audience each second, so every second is critical.
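Taken at face value, that compounds quickly. A minimal sketch, assuming the 2.44% loss applies per second and compounds on the remaining audience:

```python
# Assuming a 2.44% audience loss per second, compounded on whoever remains,
# estimate what share of readers are still there after a given number of seconds.
loss_per_second = 0.0244

for seconds in (2, 10, 30, 60):
    remaining = (1 - loss_per_second) ** seconds
    print(f"After {seconds:>2}s: {remaining:.0%} of the audience remains")
# Roughly 95% after 2s, 78% after 10s, 48% after 30s, 23% after 60s.
```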

What does all this mean?

The bounce and engagement data show that the line between unhelpful and helpful content is a thin one. And I think this speaks to a wider issue: everyone has a few bangers, great pieces which are clearly the absolute best.

These few separate themselves from everything else. But the average piece, which makes up the bulk of our content, is much, much harder. And it’s here that AI content, at least today, is contributing: to the ‘bulk’ or middle of your content performance.

The risk here is creating a lot of content that looks good, even passes your own sniff test, but sits just below that line of helpfulness.

It also speaks to the value of optimizing & improving content: if you can lift a piece just across that helpfulness line, it makes a world of difference.

Does this mean there is a universal measure of helpfulness?

I don’t think there is. Attention & scroll are certainly indicators; content has to have attention to be helpful! But short content can also deliver, if that is what is needed for the query. As always, I think Google will have to fine-tune the dials between attention, clicking back to search results and conducting another search.

For companies, it means getting closer to the data to find those lines of helpfulness. In trying to meet Google’s helpfulness test, even with AI, companies will have to work harder to meet people’s expectations and deliver on satisfaction. Of course, you just have to do it better than the competition.

..

If you want to get a better view of how your content performs, try Nudge. It’s like your advertising analytics meets your content, with heat map & site metrics.

Further reading: Google’s own piece on creating helpful content.

