Tracking URL for Duplicate Content

One problem we see on almost every website is duplicate content, and large sites with many pages are especially prone to it. Duplicate content is exactly what it sounds like: two or more pieces of content that are identical, the only difference being the URL. This confuses search engines. Session IDs, sorting options, affiliate codes, and domain variants appended to the original URL all make the issue bigger. Duplicate pages also use up your crawl allowance: although there is no published limit on how many pages the Google bots crawl in each session, there are clear patterns that point to a crawl budget. Worse, any links you earn will be split between the different versions of the page, so each of your duplicate pages is likely to rank badly. This is why webmasters worry about the duplicate URLs generated for the same page and the negative consequences they can bring in Google and other search engines.
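
To see how quickly these variants multiply, here is a minimal Python sketch (not from the article; the parameter names and URLs are illustrative assumptions) that strips common tracking and sorting parameters so all the variants collapse back to a single URL:

```python
# A minimal sketch (not from the article): strip common tracking and
# sorting parameters so duplicate URL variants collapse to one URL.
# The parameter names below are illustrative assumptions.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

TRACKING_PARAMS = {"sessionid", "sort", "affid", "utm_source", "utm_medium"}

def canonicalize(url: str) -> str:
    """Return the URL with known tracking/sorting parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

duplicates = [
    "https://example.com/css-transparency/?sessionid=abc123",
    "https://example.com/css-transparency/?sort=newest&affid=42",
    "https://example.com/css-transparency/",
]
# All three variants collapse to the same URL.
assert len({canonicalize(u) for u in duplicates}) == 1
```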

Issues this creates for search engines:

  • Search engines don’t know which version(s) to include or exclude from their indices.
  • Search engines don’t know whether to direct the link metrics to one page or keep them separate across multiple versions.
  • Search engines don’t know which version(s) to rank for query results.

Let’s say your article about CSS Transparency appears at https://theegeek.com/css-transparency/ and the exact same content also appears at https://theegeek.com/category/web-design/css-transparency/. This kind of link splitting happens in many modern content management systems. Your article gets picked up by several bloggers; some link to the first URL, others link to the second, and that is where the problem starts: when the same document lives at different URLs, only one of those URLs can appear in the results pages for that content. The URL the engines select may not be the optimal one from a ranking perspective or, in some cases, may not even be the original owner’s URL.

Every piece of content on your website should exist at exactly one permanent, unique URL, and any duplicate pages should be consolidated through 301 redirects. A 301 redirect tells search engines that a page has permanently moved from location A to location B; think of it as mail forwarding from your old home to your new one. This is the most reliable way to avoid duplicate-content issues. Paid search tracking links usually don’t create this problem, because those links are typically JavaScript-based or nofollowed, and so are ignored by search engines.
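
As a concrete illustration, here is a hedged sketch assuming a Flask application; the routes reuse the article's example URLs, and a real site would more likely configure the redirect in its web server:

```python
# A hedged sketch, assuming a Flask app: permanently redirect a
# duplicate category path to the single canonical URL with HTTP 301.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/category/web-design/css-transparency/")
def duplicate_path():
    # A 301 tells crawlers the page has moved permanently, so link
    # equity consolidates onto the single canonical URL.
    return redirect("https://theegeek.com/css-transparency/", code=301)
```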

Some Solutions:

  • Block URLs
  • Redirect URLs
  • Use robots.txt, robots noindex, and nofollow (see the sketch after this list).
  • Configure your analytics solution
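
For the robots noindex option, one way to apply it to tracking URLs without touching every template is an X-Robots-Tag response header. The sketch below assumes the same Flask setup as above; the parameter names it checks are illustrative assumptions:

```python
# A hedged sketch assuming a Flask app: mark URLs that carry session or
# affiliate parameters as noindex via the X-Robots-Tag header, so only
# the clean URL stays indexable. Parameter names are assumptions.
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def noindex_tracking_urls(response):
    # Any response to a URL with a tracking parameter gets the header;
    # crawlers that honor X-Robots-Tag will drop it from the index.
    if {"sessionid", "affid"} & set(request.args):
        response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response
```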

The canonical tag (rel=canonical) must be added to every duplicate version of a page, pointing at the preferred URL, not just to the preferred page itself, for it to solve the problem. Duplicate content comes in many forms, and while every website is likely to have a bit of it here and there, if your site has any of the issues mentioned above, you should set aside some time to clean them up.
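
Here is a minimal sketch of what that looks like in practice, again assuming Flask (render_template_string is used only to keep the example self-contained): both duplicate routes declare the same canonical URL, so search engines consolidate their signals onto it.

```python
# A minimal sketch, again assuming Flask: both duplicate routes render
# the same page, and both declare the same rel=canonical URL.
from flask import Flask, render_template_string

app = Flask(__name__)

PAGE = """<html><head>
  <link rel="canonical" href="{{ canonical }}">
</head><body>CSS Transparency article</body></html>"""

@app.route("/css-transparency/")
@app.route("/category/web-design/css-transparency/")
def article():
    # Search engines consolidate ranking signals onto the declared URL.
    return render_template_string(
        PAGE, canonical="https://theegeek.com/css-transparency/"
    )
```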

For more on technology news in Singapore, SEO, and online marketing, take a look at our other pages.

Sourabh Bhunje

Sourabh Bhunje, B.E. IT from Pune University, currently working at Techliebe. Professional skills: programming (software & mobile), web & graphic design, localization, content writing, subtitling, etc. http://techliebe.com/about-us
