For Google, automatically generated content violates its guidelines and is therefore subject to penalties

Google has said it once again: content automatically generated by GPT-3-type algorithms is considered spam and can therefore be penalized by the search engine. Everything is, of course, a matter of nuance…

John Mueller (Google) pointed this out (again, since this is nothing new) in a recent hangout with webmasters: content auto-generated with artificial intelligence tools such as GPT-3 and the like is considered spam and goes against Google's webmaster guidelines. It is therefore potentially liable to penalties from the search engine.

A page dedicated to automatically generated content is also available in these guidelines. Here is what it says:

Automatically generated content is content produced programmatically. When this content is intended to manipulate search rankings rather than help users, Google may take action. Here are some examples (non-exhaustive list):

  • Text that may contain some search keywords but does not make sense to the reader;
  • Text translated by an automated tool without human intervention or correction before publication;
  • Text generated by automated processes, such as Markov chains (see the sketch after this quoted passage);
  • Text generated using automated techniques of synonymy or obfuscation;
  • Text generated by hijacking search results or Atom/RSS feeds;
  • Assembly or combination of content from different web pages without adding value.

If you host such content on your site, prevent it from appearing in Google Search.
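
To illustrate the third item in the list above, here is a minimal sketch, in Python, of the kind of Markov-chain text generation the guidelines refer to: a word-level chain built from a small corpus that strings words together based only on which word followed which. The corpus and function names are illustrative, not taken from any Google documentation.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain: repeatedly pick a random successor of the current word."""
    word = start
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: no recorded successor
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Toy corpus: the generated output mimics the style but carries no real meaning.
corpus = (
    "search engines rank pages that help users "
    "search engines penalize pages that manipulate rankings"
)
chain = build_chain(corpus)
print(generate(chain, "search"))
```

The point is not the code itself but its output: it reads like language yet says nothing useful, which is exactly the kind of content Google describes as spam.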

Since Google now says it can tell the difference between text written by a human and text produced by an algorithm, it is easy to understand that publishing text created by artificial intelligence carries significant risks.

This situation can, however, also be compared to content spinning: if it is done with brute-force methods (massively and without human intervention), it will obviously be sanctioned because it is detectable (and, what is more, ethically questionable). But if the algorithms are only secondary tools and a human pass is added on top of a "first draft" produced by a machine, chances are it will be completely undetectable and not penalized.

It therefore remains to be seen where each of us sits on the brute-force scale… 🙂

The question put to John Mueller during the webmaster hangout. Source: YouTube
