Webmasters everywhere are asking the same question these days: when will the latest update to Google’s search algorithm, known as Google Penguin, hit the web? Part of the point of Google’s algorithms, of course, is that we aren’t supposed to know when they’ll be updated, nor which aspects of the formulas will be changed. If this type of information were known in advance, it would defeat the purpose of how Penguin is meant to work.
With that said, predictions abound on when, exactly, these updates can be expected, and how websites can potentially be prepared to survive them. Accurately predicting and preparing for the next Google Penguin roll-out can best be done with some background knowledge. Read on for more insider facts and information about how these algorithms work and what to expect from them.
If you’re curious about how to predict the next Penguin update, chances are you’ve already got a good idea of what Penguin is and how it functions. But a quick refresher course never hurt anyone.
Even casual users know that Google searches operate on a series of algorithms, or formulas, that use specific data from websites to respond to queries, ranking the sites in order of relevancy (and kicking out sites that the algorithms read as spam). For several years now, the process of search engine optimization (SEO) has been a lucrative field in web development, with webmasters working to design their sites, and optimize their copy, in order to place their site higher up in Google’s results for a particular search.
As you can imagine, this type of optimization is ripe for abuse, and that’s where the specific workings of Google’s algorithms come into play. Penguin, in particular, is Google’s search formula designed to target and remove from its rankings sites that use what are known as “black hat” SEO techniques. Black hat techniques are those that load websites illegitimately with keywords and other “tricks” of SEO, often sacrificing content for placement and becoming little more than spam sites. Because it wants to keep its search results relevant and informative, Google works hard to keep these sites from gaining top spots in its results, and part of how it does that is via the workings of Penguin.
Google first began publicly introducing (and giving code names to) its algorithms in February 2011, with the initial release of Panda. Designed to strip ranking authority from websites that held little useful content, Panda followed a series of formulas put in place by Google designers that determined a site’s usability and quality of content. Panda is still in operation (and still going through periodic updates); its main goal is to filter out sites that provide little useful content to users, particularly those sites that have too little content.
Penguin was introduced to the public in April of 2012. The specific goal of Penguin is to target and remove the aforementioned black hat sites, i.e. those that go against the guidelines set up by Google for webmasters to follow. These guidelines include controlling the number of links pointing to the page in question (incoming links improve Google ranking but, like keywords, are often abused by spammers), avoiding keywords designed to attract traffic without providing content, and steering clear of other sneaky techniques.
Since its original release in 2012, Penguin has gone through a series of updates, each tweaking the algorithm used and affecting search results and websites. The first update rolled out just a month after the original Penguin; another came in October of 2012, and the most recent update, known as Penguin 4, hit the net in May of 2013.
A third algorithm, Google Hummingbird, joined its fellow cute critters in September of 2013, and is designed to organize search results into better, more accurate rankings and to improve the timeliness of what a user receives with his or her query. Hummingbird can be considered an all-encompassing algorithm that holds, under its umbrella of “quality results,” the specifics built into Penguin and Panda.
A great deal of energy and research goes into predicting the next update to Google Penguin. It’s not, as you might think, a matter that concerns only the spammers. Because the Google search process is so carefully balanced, even those with good intentions who are attempting to use strategies like keywords and link placements to improve their Google rankings can be affected by how Penguin operates. And in many cases, the good sites have to adjust themselves just as much as the bad ones do.
It’s understandable, then, that webmasters and others in the field work hard to attempt predictions regarding when the next Penguin will hit, and what changes it might contain. Of course, while Google has thus far been transparent and cooperative in releasing information about how the current Penguin operates, they’re not about to give that kind of warning beforehand, since doing so would allow the spammers and black hatters to avoid the filtering process, too. Thus, all webmasters really have is the process of educated guessing.
And there are plenty of educated guesses to be had. A quick search on (what else?) Google provides plenty of data on Penguin predictions about when the next update will be and what it will contain. While a large part of the game involves little more than personal opinion, there are some solid facts that may prove useful for those concerned:
1. All past Penguin updates have targeted areas of SEO that had, at the time, recently received heavy attention from webmasters attempting to improve their standings in searches. For example, the first two versions of Penguin in 2012 were aimed largely at anchor text – the clickable text of a link that takes a user to a new page. During 2011, as SEO techniques developed, many webmasters had discovered that anchor text was an avenue for improving their Google rankings, and anchors had thus become an over-exploited area for spammers and black hat SEO users.
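To make the anchor-text point concrete, here is a hypothetical illustration (the URLs and keywords are invented, not taken from any real case) of the kind of keyword-stuffed anchor text Penguin targeted, next to the natural sort of link a human would actually write:

```html
<!-- Black hat style: anchor text stuffed with search keywords
     rather than describing the link for a reader -->
<a href="https://example.com/shoes">cheap running shoes best price buy running shoes online</a>

<!-- Natural style: descriptive anchor text written for people -->
<a href="https://example.com/shoes">our guide to choosing running shoes</a>
```

The difference is exactly what an algorithm can pick up on at scale: when thousands of incoming links to one page all repeat the same commercial phrase, that pattern looks manufactured rather than earned.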
After all, the point of updating the algorithm of Penguin is for Google to fight back against newly-developing issues in their searches. If the strategies of spammers never developed, no updates would be needed. Therefore, when attempting to predict what will change next in Penguin, users would do well to look at current trends in black hat techniques.
2. Be aware that, while they’ve thus far been open about algorithm updates, Google may very well stop announcing changes. The formulas that drive Google searches are actually constantly in flux – changes to Penguin and its fellow famous algorithms are simply the largest, most publicized adjustments. Google has already stated, however, that updates to Penguin and Panda may or may not be made public in the future; they may simply be rolled out as part of the general update process. In other words, webmasters who rely on Google news feeds or webmaster forums for word of when updates will occur may very well be blindsided.
Webmasters trying to predict when the next Google Penguin updates will occur would do well to be aware of the fact that they may be fighting a losing battle. While guesses can be made, solid information isn’t available – and that’s a deliberate choice by the powers that be. The best you can do to be ready for the changes is to pay attention to Google’s webmaster guidelines and do your best to fit your site within those rules, without drawing attention to yourself by attempting to bend them.
To put it in basic terms, if your site has a great Google ranking due to something other than quality content and lots of satisfied users, odds are you’re violating some terms somewhere, and you should probably be concerned about potential repercussions next time Penguin changes. If you’re following the rules, and you ensure that you’re updated on those rules as they are adjusted in the future, your site shouldn’t suffer – and you’ll be better equipped to use Google’s guidelines to your advantage.