Most people, website managers and even SEO agencies are still somewhat confused about how Google handles duplicate content. In fact, as an SEO agency in London, we often hear clients say they are more afraid of duplicate content than of a spammy backlink profile.

Yes, the duplicate content issue is surrounded by so many myths that it is clear most people have no real idea of how Google actually deals with it.

Defining Duplicate Content

According to Google, duplicate content generally refers to substantive blocks of content within or across domains that completely match each other or are appreciably similar. Mostly, this is not deceptive in origin.

People mistake duplicate content for a penalty because of how Google handles it: the duplicate versions are simply filtered out of the search results. If you add &filter=0 to the end of the Google search results URL, the filtering is removed and you can see the pages that were held back (see the sketch below).
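As a minimal sketch, this is all the parameter change amounts to; the search query and the use of TypeScript are illustrative assumptions on our part, not anything Google prescribes:

    // Hypothetical example: take a Google results URL and switch off the
    // duplicate-content filtering by appending filter=0.
    const resultsUrl = new URL("https://www.google.com/search?q=duplicate+content");
    resultsUrl.searchParams.set("filter", "0"); // show the clustered duplicates too
    console.log(resultsUrl.toString());
    // => https://www.google.com/search?q=duplicate+content&filter=0

In practice you can simply type &filter=0 onto the end of the URL in the address bar; the snippet just makes the before/after explicit.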

What Does Google Think of Duplicate Content?

  • Duplicate content won’t cause your site to be penalized
  • Google has designed algorithms to prevent duplicate content from affecting webmasters by grouping the various versions into a cluster and displaying only the best URL of the cluster.
  • Duplicate content won’t prompt Google to take action unless its intent is to manipulate search results.
  • Google tries to determine the original source of the content and display that one.

What Are the Most Common Causes of Duplicate Content?

  • WWW and non-WWW versions
  • Session IDs
  • HTTP and HTTPS
  • Dev or hosting environments
  • Pagination
  • Country/language versions
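To make these causes concrete, here is a purely illustrative set of URL variants (the example.com domain and the paths are invented) that can all serve the same page and therefore look like duplicates to Google:

    // Hypothetical URL variants that can all return the same content.
    const duplicateVariants: string[] = [
      "http://example.com/shoes",                    // HTTP vs HTTPS
      "https://example.com/shoes",
      "https://www.example.com/shoes",               // www vs non-www
      "https://example.com/shoes?sessionid=abc123",  // session ID appended to the URL
      "https://dev.example.com/shoes",               // dev/staging copy left crawlable
      "https://example.com/shoes?page=2",            // pagination
      "https://example.com/en-gb/shoes",             // country/language version
    ];

Each of these URLs would be crawled as a separate document unless you consolidate them with one of the solutions below.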

Duplicate Content – Solutions

There are different solutions you can implement depending on your particular situation; a short sketch of how some of them fit together follows the list.

  1. Use canonical tags
  2. Tell Google how to handle URL parameters
  3. Use 301 redirects
  4. Follow syndication best practices
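As a rough sketch of how solutions 1 and 3 fit together, the snippet below assumes a Node/Express server in TypeScript (the example.com host, the port and the paths are invented); it is one common pattern, not the only way to do it:

    import express from "express";

    const app = express();

    // Solution 3 (301 redirects): assume https://example.com is the preferred
    // canonical host, and permanently redirect www and plain-HTTP requests to it
    // so Google only ever sees one URL per page.
    app.use((req, res, next) => {
      const host = req.headers.host ?? "";
      const isHttps = req.secure || req.headers["x-forwarded-proto"] === "https";

      if (host.startsWith("www.") || !isHttps) {
        const canonicalHost = host.replace(/^www\./, "");
        return res.redirect(301, `https://${canonicalHost}${req.originalUrl}`);
      }
      next();
    });

    // Solution 1 (canonical tags) belongs in the page template rather than the
    // server: each variant of a page declares the one URL you want indexed, e.g.
    //   <link rel="canonical" href="https://example.com/shoes" />
    app.listen(3000);

Parameter handling (solution 2) is configured through Google's webmaster tools rather than in your own code, and for syndication (solution 4) the usual advice is to have the republishing site link back to, or canonicalise to, your original article.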
