
Google Removes Parked Domains with New Algorithm

Written by Ann Stanley on 7th December 2011

As part of their new monthly search quality blog series, designed to detail recent algorithm updates, Google have outlined a number of intriguing changes, the most pressing of which target parked domains and the identification of the true sources of original content.

One of ten updates announced, the new “parked domain classifier” will identify “placeholder sites that are seldom useful and often filled with ads.” The announcement comes as no surprise to SEOs, given that parked domains have been progressively phased out of SERPs in recent years. The fact that Google has already largely tackled this issue has, however, fuelled speculation as to the finer points of the update.
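Google has not disclosed how the classifier actually works. Purely as an illustration, a toy heuristic version might combine a few crude signals — parking boilerplate in the copy, a high ratio of ad links, and thin content. All of the signals and thresholds below are invented for the sketch:

```python
# Illustrative only: Google's real parked-domain classifier is not public.
# This toy heuristic flags pages that resemble parking placeholders.

PARKING_PHRASES = (
    "this domain is for sale",
    "buy this domain",
    "related searches",
)

def looks_parked(text: str, ad_link_count: int, total_link_count: int) -> bool:
    """Return True if the page resembles a parked-domain placeholder."""
    text_lower = text.lower()
    word_count = len(text_lower.split())

    # Signal 1: tell-tale parking boilerplate in the body copy.
    has_parking_phrase = any(p in text_lower for p in PARKING_PHRASES)

    # Signal 2: almost all outbound links are ads.
    ad_ratio = ad_link_count / total_link_count if total_link_count else 0.0

    # Signal 3: very little real content on the page.
    is_thin = word_count < 50

    return has_parking_phrase or (ad_ratio > 0.8 and is_thin)
```

A real classifier would of course be trained on many more signals, but the sketch shows why such pages are easy to detect: parking services render near-identical templates dominated by ads.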

One theory posits that while the new update will primarily devalue the few remaining parked domains that have not yet been penalized, it will also target the grey area of mini-sites: poorly expanded placeholders that hold very little value for the user.

The response to the update has been generally positive: parked domains tend to have monetization value only through direct navigation, so their demotion from search results should have little effect on domain owners. Search Engine Land did, however, note the irony of Google further targeting parked domains, as it is the company’s own AdSense for Domains scheme that helped popularise the monetization of parking in the first place.

Of the other updates, the most important focuses on original content, a change that Google have added “to help [them] make better predictions about which of two similar web pages is the original one.”

Essentially this means that Google can now better identify the original source of content when confronted with duplicates, a positive implementation that seems to be an elaboration of the Google Panda update, which specifically targeted duplicate content on scraper sites. A scraper site is a website that ‘scrapes’, or copies, content from other websites, often with a view to poaching visitors and monetizing traffic.
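Again, Google's actual method is undisclosed. A common textbook approach to the same problem is word shingling: two pages whose shingle sets overlap heavily are treated as near-duplicates, and among duplicates the page first seen by the crawler can be treated as the original. The sketch below is a simplified illustration of that idea, not Google's implementation:

```python
# Illustrative sketch: Google's duplicate-detection method is not public.
# Shingling is a standard technique for finding near-duplicate documents.

def shingles(text: str, k: int = 4) -> set:
    """Return the set of k-word shingles (overlapping word windows)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def pick_original(pages, threshold: float = 0.8):
    """pages: list of (url, first_seen_timestamp, text).

    Returns the URL of the earliest-seen page among near-duplicates
    of the first page, or None if nothing clears the threshold."""
    base = shingles(pages[0][2])
    dupes = [p for p in pages if jaccard(base, shingles(p[2])) >= threshold]
    return min(dupes, key=lambda p: p[1])[0] if dupes else None
```

Under this heuristic, a scraper republishing an article verbatim would match the original almost perfectly, and the earlier crawl date would decide which page ranks as the source.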

The new update, while not officially targeted at scraper sites, should help to combat their recycling of content and penalize them in the SERPs, promoting the original source above duplicate articles. The changes could also further curb article spinning, a practice some SEOs use to build targeted links and one that Panda had already severely devalued.

Both updates are fairly minor but valuable revisions nonetheless, and should ultimately improve the quality of search results without affecting honest sites. Google’s dedication to reducing duplicated spam results has been clear since Panda, and now they seem to be shoring up the issues they missed first time around.

Also, with the new monthly post on the Google Inside Search blog and a fresh, more transparent approach to small updates, it would seem that Google are becoming more open to enlightening those who “care about how search works”, and will hopefully disclose more interesting morsels as the series progresses.

By David Gerrard
