Google Released Panda 4.0 on 21 May
I have written a lot about Panda since its introduction in February 2011 because it goes to the very heart of your online business.
For years, black-hat webmasters have tried to fool Google into ranking sites higher than they deserve. What makes the Panda algorithm special, in all its various manifestations, is its ability to “learn”. Panda has hitherto relied on human input from Google’s quality rating team to help it distinguish good sites from bad, so it makes sense for each new iteration of Panda to be more finely nuanced.
Panda was never going to be a one-time deal, so to all those who think they have “Panda-proofed” their websites, I say: think again.
Perhaps a quick recap is in order. Panda’s aim continues to be to stop poor quality sites from reaching the top of the search results, and even to hide the worst of them altogether. The term “quality”, strictly speaking, refers to content. As Panda has evolved, it has refined what it regards as poor quality content. Each iteration of the algorithm has produced winners and losers, and this time round is no different. Big brands are not immune. In fact, it is precisely because very large sites, monitored by researchers, get caught in the updates that we can be sure an update such as Panda 4.0 is impacting search results. Panda 4.0 is said to affect different languages to different degrees, with the percentage of English queries impacted reported to be in the region of 7.5%.
In previous versions of Panda, it’s believed there were too many false positives, that is, sites caught by the algorithm that probably shouldn’t have been. It’s been reported that this time round Panda is more “gentle” and will correct some past mistakes.
Sites believed to have been caught by Panda 4.0 include, to some extent, Ask.com, eBay.com and Yellowpages.com. A loss of traffic from Google caused by Panda 4.0 can plainly be regarded as a comment on the quality of the content being published. On the other hand, sites gaining visibility in the SERPs are believed to include Glassdoor.com and Buzzfeed.com. Sites like Buzzfeed may not be to everyone’s taste, but most observers would agree it has a natty way of creating original content that’s structured for high virality.
Your site may be a world away from Buzzfeed.com, but that’s not the point. If you are lucky enough to offer or make something truly unique, Panda is less of a threat. However, if you operate in a sector where many businesses are doing the same thing, creating truly original content may be more challenging.
To put it simply: Google returns more than 19 million results for the query “how to boil an egg”. You might think that fewer than 20 would suffice. Trying to win for queries like this is never going to result in success; you have to find points of differentiation. There are a number of ways to achieve this. Branding is one. Finding new angles on customer problems, which can help you create unique, original content, is of course another.
This brings me back to customers. If you don’t truly have a grasp of your customer persona and what your customers are searching for, original, sparkling content may elude you. Chasing individual keywords is no longer a viable strategy; semantic search is here to stay, and it is a golden opportunity for small businesses.
It has always been the case that success with Google isn’t about getting lots of traffic; it’s about getting the right traffic. If you give your customers and prospects what they are searching for, everybody is a winner.
Panda 4.0 may turn out to be more subtle than previous Panda updates. This cuts both ways: there should be fewer false positives, and the algorithm should show more finesse in recognising good sites. But you definitely don’t want to be on the wrong side of a Panda penalty, because it will apply to your whole site, or a whole section of it, even if only a couple of pages fall foul of the algorithm.