
    A Comprehensive History of Google Updates (and a Look to the Future)

In the world of search engine optimization (SEO), few topics have generated as much collective attention as Google algorithm updates. These changes to Google’s search ranking algorithm, varying in size and scope, have the capacity to fundamentally change the way the algorithm works—and they have, periodically, over the years.

    Whenever a new update is released, SEO professionals go crazy, excited to dig deep and learn what changed. And between updates, we tend to wait and speculate about what changes could be coming down the pipeline in the future.

    If you’re new to the SEO world, this can all seem very intimidating. Google has been releasing updates for the better part of two decades, which is a lot to catch up on—and there are no signs of its momentum slowing down in the near future. Google is a company committed to ongoing change and development, and its algorithm is a reflection of that; as search marketers, we need to be willing to change and develop alongside it.

    To help educate newcomers to the SEO world, provide a refresher for those of us in the thick of it, and lay a foundation with which we can predict the future of Google developments, I’ve put together this comprehensive history of Google updates for your reading or perusing pleasure.

    Table of Contents

    + Basics About Google Updates
    + The Early Era
    + Panda and Penguin
    + Smaller Updates
    + The Modern Era

    Basics About Google Updates

First, I want to cover some basics about Google updates. Because of the increased attention they get from SEO professionals, as well as webmasters, marketers, and entrepreneurs in general, a few misconceptions have developed over time.

    How updates work

You probably already understand the idea behind a Google update from a conceptual perspective. You’re certainly familiar with Google search in general; its primary function is to give users a list of the best, most relevant results for their search queries—and, unless you’re a hardcore Bing user, you’d probably agree it’s doing a pretty good job.

But it wasn’t always as impressive a system as it is today. Search results are generally calculated based on two broad categories of factors: relevance and authority.

The relevance of a result is determined by evaluating how well a site’s content matches a user’s intention; back in the day, this relied on a strict one-to-one keyword matching system that hunted for web pages that used a specific keyword term more than their contemporaries did. The authority of a site is determined by PageRank, a system that looks to sites’ backlink profiles to determine how they relate to other websites and authorities.

    These two systems are updated frequently, as Google discovers new, more sophisticated, and less exploitable ways to evaluate relevance and authority. It also finds new ways of presenting information in its search engine results pages (SERPs), and adds new features to make searchers’ lives easier.

    When Google decides to create an update, depending on the size, it may be pushed directly to the main algorithm or be published as a kind of test branch, to be evaluated for effectiveness and functionality.

    Either way, the update may be improved upon with subsequent iterations over time. Sometimes, Google announces these changes in advance, letting webmasters know what they can expect from the update, but most of the time, they roll out silently, and we only know of their existence because of the changes we see and measure in SERPs.

    Updates matter because they affect how search works in general, including how your site is displayed and how users interact with it.

    How updates affect SEO

    The common perception is that updates are bad news for SEO. They’re seen to be bad for the SEO business, throwing a wrench in optimizers’ best-laid plans to earn higher ranks for their respective sites and causing mass panic when they come out and crush everyone’s hard-earned rankings.

(Image Source: SearchEngineLand)

    There’s some truth to this; people do tend to panic when a new Google update causes a significant changeup in how rankings are displayed. However, updates aren’t simply a way for Google to step in and harsh everyone’s SEO buzz; they’re complicated pieces of algorithmic machinery designed to do a better job of providing quality information to Google’s users.

    Accordingly, they do have a massive effect on SEO, but it isn’t all bad.

• Ranking volatility. First, there’s almost always some ranking volatility when a new update is released, but for some reason, this has become associated only with ranking drops and penalties. Yes, “penalties” can emerge from Google updates as Google decreases the authority of sites that partake in actions it deems low-quality (like illegal activities), but if sites only ever dropped in ranking, what would happen to the SERPs? The reality is, some sites go down in ranking and some go up as they’re rewarded for employing higher-quality strategies. Overall, this volatility tends to be modest; it’s not as severe as most people make it out to be.
    • Influence on best practices. Google updates also help search optimizers reevaluate and redefine their best practices for online visibility success. When Google pushes a new update, it’s usually to improve some element of the online user experience, such as when it improves its ability to evaluate “good” content. When this happens, search marketers can learn from the update and improve their own provision of content to users. It’s true that some optimizers are perpetually ahead of the curve, and not all algorithm updates directly affect webmasters, but as a general rule, updates are a means of progression in the industry.
    • New features to play with. Some Google updates don’t interfere with evaluations of authority or relevance; instead, they add entirely new functions to Google Search. For example, the Google Knowledge Graph emerged as an entirely new way to provide users with information on their given topics. Now, we’ve all gotten used to the idea of getting instant information on whatever questions we ask or whatever topics we choose. These new additions also represent new opportunities to gain search visibility; for example, it’s possible for webmasters to work on getting their brands featured in more Knowledge Graph entries as opposed to purely organic search results.
    • Reinforcing old ideas. Oftentimes, Google updates won’t release anything particularly new, but instead exist as a reinforcement of an old idea that was pushed out previously. For example, Google will occasionally roll out new features that evaluate the quality of content on the web; this doesn’t change the fact that content quality is a massive factor for determining authority, but it refines how these figures are calculated. Accordingly, many Google updates don’t provide new information to search optimizers, but instead, give them greater positive reinforcement that the strategies they’ve adopted are on the right track.

    Why updates are often ambiguous

    One of the more frustrating elements of Google updates is their stunning level of ambiguity, and it manifests in a number of different forms:

    • Announcement ambiguity. Google will occasionally announce its updates in advance. For example, it released information on its so-called “Mobilegeddon” update well in advance of the actual rollout to give webmasters time to update their sites for mobile devices. However, it’s more common that Google rolls out its updates in silence, committing the change to its algorithm without telling anyone about it.
    • Effects on search ranks. Even when search optimizers notice the volatility in ranking and start to investigate, Google is notoriously tight-lipped about the true nature and extent of the update. When the company deviates from the norm and announces updates in advance, it usually describes the nature of the update generally, such as stating that the update is meant to improve the evaluation of content quality. It generally does not go into specific details about the coding or the real functionality of the update, even when asked by professionals seeking further information.
    • Beginnings and ends. Updates don’t always roll out as quickly as a light switch being flipped. Instead, they tend to occur gradually; for example, some updates “roll out” over the course of a weekend before their full effects on search rankings are seen. In other cases, Google may push one core update, then add onto it with new iterations, modifications, and versions later on.

    If Google releases these updates to improve the web, why does the company intentionally withhold details like these from webmasters? The answer is pretty simple. The biggest reason for Google releasing updates in the first place is to improve overall user experience.

    Imagine Google releases the exact details for how it ranks sites; webmasters would strive to exploit this information for the sole purpose of gaining rank, compromising that user experience. Cloaking this information in ambiguity is a defensive measure to prevent this level of manipulation from taking place.

    Patterns and commonalities

    Because Google updates are so ambiguous, one of the best tools we have as search optimizers is our ability to detect patterns and commonalities in these updates. Not only does this give us a better understanding of Google’s motivation and a clearer knowledge of the full scope of an update’s effects, but it also allows us to make more meaningful predictions about what updates may be coming in the future.

    • Annual patterns. Generally, Google will release several updates in the span of a year, but there’s typically been at least one “big” update per year; this wasn’t the case in 2015, and before 2011, most updates were smaller, monthly changes, so it’s difficult to nail this pattern down definitively. Bigger updates, like Panda and Penguin, were the subject of annual revisits for a time, with 2.0 versions coming out almost exactly one year later, but Google has recently changed this format of release to something more gradual. Oftentimes, search marketers will predict the timing for Google’s upcoming releases for a given year based on previous information.
    • Batch updates. Google also has a tendency to “batch” updates together, especially when they’re small. If it’s making a major change to its core algorithm, this isn’t possible, but if it’s making a number of smaller tweaks, it typically releases them all at once as a pack. These rarely come with a warning or overall description.
• Testing specific improvements. Specific improvements, like changes to the layout of a SERP, tend to be released as a test before rolling out to the broader community. Users of a specific demographic, or in a specific area, might report seeing the change before it’s committed on a full national scale. Then, some months later, Google will likely commit the change with various improvements as the finalized version.
• Iterations and reiterations. Most of the minor updates speak for themselves, but for bigger updates, Google needs to use building blocks to complete its construction. Giant, game-changing updates like Panda and Penguin are the subject of revisions, adjustments, data refreshes, and even combinations, such as when Panda was incorporated into Google’s core ranking algorithm.

    Naming conventions

    I’ve already made reference to a handful of different named Google updates, so I wanted to take a minute to talk about these update naming conventions. Some updates are formally named by Google; for example, the major algorithms Panda and Penguin were given formal, official names so they could be easily understood and discussed by members of the community. However, since Google is quiet about most of its rollouts, only a handful of updates get official names.

Instead, it’s the collective power of a community that usually lands on a name. In the early days of Google updates, the WebmasterWorld community took charge of naming updates that were otherwise released informally and silently. Sometimes, these names were based on basic descriptors, such as the name of the city where the update was announced, and other times, they took on human names, much like hurricanes.

    Today, most update names emerge from the SEO community as leaders attempt to either describe what’s going on (such as with the suggestively titled Mobilegeddon update) or adhere to Google’s habit of arbitrarily naming updates after animals that start with “P” (such as with Pigeon, Panda, and Penguin). Sequential updates are typically kept in numerical order, as you might imagine.

    As we enter the main section of this article, where I detail each of Google’s significant updates, I want to warn you that these updates are grouped contextually. For the most part, they’ll be kept in chronological order, but there may be deviations based on each update’s respective category.

    The Early Era

    Now it’s time to dig into the actual updates that have shaped Google into the powerhouse universal search algorithm it is today.

    Pre-2003 updates

    Even though Google officially launched in 1998, it wasn’t until 2003 that it started making significant updates to its search process (or at least, held enough attention in the online marketing community for people to care about them). Before 2003, there were a handful of changes, both to the ranking process and to the visual layout of the search engine, but things really kicked off with the Boston update in February of 2003.

    • Boston and the Google dance. The Boston update is the first named Google update, so called because it was announced at SES Boston. It’s unclear exactly what this update changed, but it did make some significant tweaks to Google’s ranking algorithm in addition to refreshing its index data. More importantly, Boston set a tone for the series of updates to come from Google, known as the “Google dance.” These Google dance updates were small yet significant monthly updates that changed any number of things in Google’s core algorithm, from how authority is calculated to major index refreshes.
    • Cassandra. Cassandra was one of the early monthly updates, rolling out in April of 2003. Its main purpose was to start fighting back against link spammers, who were exploiting Google’s system of ranking sites based on the number of backlinks pointing back to their domain. It also penalized sites with hidden text or hidden links, which were common occurrences back in 2003.
    • Dominic. Dominic is still a bit of a mystery. When it was released in May of 2003 as the latest addition to the Google dance, search optimizers seemed to notice its prominent effects immediately. Though the full extent of the changes was never clear, backlink counting and reporting seemed to undergo a massive overhaul.
    • Esmeralda. Esmeralda was one of the last of the Google dance updates, coming out in June of 2003. It was less significant than Cassandra or Dominic, but seemed to represent some major infrastructural changes at Google, as monthly updates began giving way to a more continuous flow.
    • Fritz. As Matt Cutts explained it, Fritz was officially the last of the Google dance updates; instead of monthly shake-ups, Google began to strive for a more incremental, gradual updating approach.
    • The supplemental index. In September of 2003, Google introduced a “supplemental” index, a peripheral branch that served the main index’s purpose and increased overall efficiency by indexing a portion of the web.

    Florida and tidying up

    The 2003 Google dance era was the first major block of updates, but starting with the Florida update later that year, updates began to take on new roles and new significances for the online marketing community.

    • The Florida update. The Florida update was one of the first updates to really make an impact on the online marketing community. Even though updates up to this point had addressed common spam and low-quality problems, such as bad backlinks and keyword stuffing, Florida was the first major overhaul that put an end to these tactics for good. Thousands of websites plummeted in ranks for using these black-hat techniques, and webmasters were furious. Any investments in old-school SEO were now worthless, and suddenly people were beginning to fear and respect Google updates.
    • Austin. Austin was designed to be the clean-up hitter for Florida, coming out a short while later in January of 2004. It targeted spammy meta description practices and hidden on-page text, essentially catching the few things Florida had missed in its clean sweep.
    • Brandy. Brandy was the name given to a cluster of updates that rolled out shortly thereafter in February of 2004. Rather than focusing on cleaning up spam, Brandy was mostly focused on improving the Google algorithm inherently. It added new semantic capabilities—including synonym detection and analysis—and started a transformation of Google’s old keyword practices, not to mention greatly expanding the primary index.
    • Allegra. It was a year later before another algorithm update rolled out (though there were some changes at Google in the meantime, including the addition of Nofollow links, which I’ll touch on shortly). In February of 2005, Allegra emerged to penalize new types of suspicious links.
    • Bourbon. The strangely named Bourbon update came out in May of 2005 to improve Google’s ability to evaluate content quality. There were a handful of changes here, but none that webmasters were able to specifically identify.
    • Gilligan. I questioned whether to include Gilligan here. In September of 2005, webmasters saw a spike in ranking volatility, which is usually an indication of an update emerging, but Google insisted that no update had occurred.
    • Jagger. Jagger was an iterative update launched between October and November of 2005. It targeted practitioners of link schemes like reciprocal links and paid backlink programs.

    Nofollow and XML sitemaps

    Aside from Google’s content evaluation and general improvement updates, this era also saw the development of several new features that search optimizers could utilize for greater content visibility.

    • Nofollow links. This wasn’t technically a Google update, but it was one of the more significant changes to the search engine landscape in general. In January of 2005, a collaboration between Google, Microsoft, and Yahoo produced this new type of link—the nofollow link. Now, webmasters could add a rel="nofollow" attribute to a link to indicate that it shouldn’t be counted for passing authority or PageRank. It’s a way to include a link for practical purposes (such as redirecting traffic or offering a citation) without formally vouching for the destination in Google’s index. It helps brands protect themselves from accidentally building links that appear spammy, and it generally improves the overall quality of links on the web (a short link-auditing sketch follows this list).
    • Personalized search. It’s hard to imagine a time before personalized search features, but it wasn’t until June 2005 that they formally arrived. Previously, Google relied on specifically submitted preferences to help personalize search, but with this update, the company could tap into your search history to provide more relevant results. Of course, the early version of this personalization feature was sketchy at best, but later iterations would refine and perfect the feature we all now take for granted.
    • XML Sitemaps. Sitemaps are constructs that help webmasters formally chart out all the pages and interconnections of their websites. Up until June of 2005 (the same time personalized search was rolling out), webmasters relied on basic HTML sitemaps both to help users understand the layout of their sites and to help search engines index them properly. Though Google does a good job of eventually indexing every page and understanding the connections between them, it’s helpful and faster to upload an XML sitemap, a format that Google introduced in June 2005. Now, creating, submitting, and regularly updating your XML sitemap is a staple of onsite optimization (a minimal generation sketch also follows this list).

    XML Sitemaps

    (Image Source: Sitemaps.org)

    • Big Daddy. The hilariously named “Big Daddy” update was a bit more technical than the updates Google rolled out in the past. Rather than focusing on qualitative issues, like how to handle a user query or how to rank the value of a link, the Big Daddy update was focused on technical processes that happen behind the scenes. It improved Google’s canonicalization processes, the handling of 301 and 302 redirects, and a handful of other issues, rolling out between December of 2005 and March of 2006.
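    To make the nofollow mechanism mentioned above concrete, here is a minimal, illustrative sketch (standard-library Python only) of how a crawler-style script might separate followed links from nofollowed ones by reading the rel attribute. The sample markup and URLs are hypothetical, and real crawlers are far more involved; this only demonstrates the attribute check itself.

```python
# Minimal sketch, standard-library only: separating followed from nofollowed
# links the way a crawler might. The sample markup below is hypothetical.
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects (href, is_nofollow) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel may contain several space-separated tokens, e.g. "nofollow noopener".
        rel_tokens = (attrs.get("rel") or "").lower().split()
        self.links.append((attrs.get("href"), "nofollow" in rel_tokens))

sample = '''<p>
  <a href="https://example.com/partner">Editorial link</a>
  <a href="https://example.com/sponsored" rel="nofollow">Paid link</a>
</p>'''

auditor = LinkAuditor()
auditor.feed(sample)
for href, nofollow in auditor.links:
    print(("skips PageRank" if nofollow else "passes PageRank"), href)
```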
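    And since the XML sitemap protocol itself is quite small, here is a similarly minimal sketch of generating a bare-bones sitemap file with Python’s standard library. The URLs, dates, and output filename are placeholders; per the sitemaps.org protocol, the loc element is required for each URL, while fields like lastmod are optional.

```python
# Minimal sketch: generating a bare-bones XML sitemap with Python's standard
# library. The URLs and filename are hypothetical placeholders.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls, out_path="sitemap.xml"):
    """Write a minimal <urlset> document listing each URL once."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for page in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = page["loc"]
        if "lastmod" in page:  # optional field in the sitemap protocol
            ET.SubElement(url_el, "lastmod").text = page["lastmod"]
    ET.ElementTree(urlset).write(out_path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    build_sitemap([
        {"loc": "https://www.example.com/", "lastmod": "2018-01-15"},
        {"loc": "https://www.example.com/blog/"},
    ])
```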

    Maps, SERP changes, and canonical tags

    It was around this time that Google started stepping up its game with even more features and functions for its users, going above and beyond the call of duty for search engines to give users a broader experience. This series of updates was focused on bringing new concepts to users, rather than improving existing infrastructure:

    • The dawn of modern maps. Google Maps had been around for a little while already, but the system hadn’t been very relevant to SEO. In October of 2005, Google decided to merge all its local Maps data with the business data it held in its Local Business Center. The resulting combination led to an almost-modern format of finding businesses on local maps, and helped to shape local SEO into a different form. This merge greatly increased the relevance of listing your business in Google’s Local Business Center, and forced more businesses to start thinking about their local SEO strategies.
    • Universal search. The “universal search” update was Google’s way of reimagining its SERPs for a new era. The biggest and most noticeable change was the introduction of a format that became known as “vertical search,” though this has become so standardized it’s hard to remember when Google functioned without it. Rather than offering only a search function that distributed web-based results, users could now toggle through several different “types” of results, including images, news, and maps. This opened up new doors of potential visibility for businesses everywhere, and forced webmasters to reconsider what they qualified as indexable content. It rolled out in May 2007.
    • Suggest. Believe it or not, Google Suggest didn’t come out until August of 2008. In a bid to improve the average user’s experience, Google wanted to offer more search possibilities stemming from a given query. As users type a query into the search box, Google would suggest a list of possible extensions and endings to the query, giving users branches of possibilities to explore. It was—and continues to be—useful for users, but it also had two broad implications for search optimizers. First, it became an incredible tool for keyword research, powering software like Ubersuggest to help search optimizers find better and different keywords for their strategies. Second, it would come to power Google Instant, which I’ll describe in more detail later on.
    • The canonical tag. In February 2009, Google introduced the canonical tag (a link element with rel="canonical" placed in a page’s head), which was incredibly helpful in sorting out issues with duplicate content. It’s a relatively simple element to include in the back end of your site that tells Google which version of a page should be indexed and which should be ignored. Without it, if your site has two versions of a single page (even unintentionally, as with http and https distinctions), it could register as duplicate content and interfere with your search visibility (a short extraction sketch follows this list).
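    As a rough illustration of the canonical tag described above, the sketch below pulls a declared canonical URL out of a page’s markup. It uses a simple regular expression, which is only a heuristic for real-world HTML (it assumes rel appears before href, for instance); the sample markup is hypothetical.

```python
# Rough illustrative sketch: extracting the declared canonical URL from HTML.
# A production crawler would use a real HTML parser, not a regular expression.
import re

CANONICAL_RE = re.compile(
    r'<link\b[^>]*rel\s*=\s*["\']canonical["\'][^>]*href\s*=\s*["\']([^"\']+)',
    re.IGNORECASE,
)

def find_canonical(html):
    match = CANONICAL_RE.search(html)
    return match.group(1) if match else None

page = '<head><link rel="canonical" href="https://www.example.com/widgets/"></head>'
print(find_canonical(page))  # -> https://www.example.com/widgets/
```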

    Further tweaking

    Throughout this era, Google was also coming up with even more “tweaking” updates, all of which committed some minor changes to its search algorithm or improved user experience in some way. These are the most important ones to note:

    • Buffy. Google announced Buffy in June 2007, shortly after one of its engineers, Vanessa Fox, departed the company. It was clear the update had some significant impact due to high volatility, but Google never disclosed what the update actually included. It was implied, however, that this was an accumulation of several other, smaller updates.
    • Dewey. Dewey was the next minor update on the radar, coming in at the end of March 2008 and stemming into April. Some professionals suspected this was Google’s way of pushing some of its own branded properties, but there was no hard evidence for this.
    • Vince. Vince was one of Google’s more controversial updates, taking place in February of 2009, nearly a year after Dewey. After rolling out, rankings shifted (like they always do), but this time, the shifts seemed to favor big brands and corporations. It’s hard enough for small- to mid-sized businesses to stay competitive in the search visibility world (or at least it was at that point), so many entrepreneurs and marketers were outraged by the changes—even though they were minor.
    • Real-time search. Though there was always a News section on Google, it wasn’t until the real-time search update at the end of 2009 that Google was able to index news stories and items from social media platforms in real time. After this update, new stories and posts would populate instantly in Google’s index, getting content to users faster.
    • Google Places. Though it didn’t have much of an impact on SERPs or company rankings, Google did overhaul its Local Business Center in April of 2010. It took Google Places, previously incorporated as a section of Google Maps, and created a specifically designated Google Places center for local businesses to create their profiles and advertise their establishments.
    • May Day. The May Day update was a niche change that seemingly only affected traffic generated by long-tail keyword queries. As explained by Matt Cutts, the update was intended to decrease the ranks of sites that used long stretches of thin, low-quality content as a gimmick to rank higher for long-tail keywords.
    • Caffeine. Caffeine is one of the more substantial updates on this list, and it actually started rolling out as a preview in August 2009, but it didn’t roll out fully until the summer months of 2010. It was another infrastructural improvement, increasing the speed and efficiency of the algorithm rather than directly changing its ranking criteria. Evidently, the change increased indexation and crawling speeds by more than 50 percent, and increased search results speeds as well.
    • Instant. Google Suggest foreshadowed the release of Google Instant, which came out in September 2010. With Instant, users could see a preview of results as they typed in their queries, shortening the distance between a user’s query and full-fledged results.
    • Social signals. Corresponding with the rise in popularity of Facebook and other social media sites, Google added social signals as slight ranking factors for sites’ authority. Today, this signal is weak, especially when compared to the power of backlinks.
    • Negative reviews. This update was a simple change at the end of December 2010 that addressed concerns that businesses could actually gain rankings from the visibility generated by negative reviews.
    • Attribution fixes. As a kind of prequel to Panda (which I’ll cover in-depth in the next section), Google released an attribution update in January of 2011. The goal was to stop content scrapers and reduce the amount of spam content on the web.

    Panda and Penguin

    So far, we’ve seen some significant updates, revisions, changes, and additions to Google Search, but now we’re getting to the heavy hitters. Two of the most impactful, talked-about, and still-relevant updates to the Google ranking algorithm happened practically back-to-back in 2011 and 2012, and both of them forever changed how Google evaluates authority.

    The Panda update

    • The basics. The simplest way to describe the Panda update is as an algorithmic change that increased Google’s ability to analyze content quality. Obviously, Google has a vested interest in giving users access to the best possible content available online, so it wants sites with “high quality” content to rank higher—but what exactly does “high quality” mean? Previous updates had attempted to solve this dilemma, but it wasn’t until Panda that Google was able to take this concept to the major leagues.

    The update rolled out on February 23, 2011, and fundamentally changed the way search optimizers operated online. This update was announced formally and explained by Google; the company stated that the update impacted roughly 12 percent of all search queries, which is a huge number compared to previous algorithm changes.

    Digging into more specific details, the update targeted content farms, which previously existed to churn out meaningless “fluff” content to increase ranks. It also penalized sites with content that seemed to be unnaturally written, or stuffed with keywords, and instead rewarded sites that it evaluated to have detailed, well-written, valuable content.

    Over the course of its months-long rollout, thousands of sites saw their rankings—and their traffic—plummet, and the backlash was intense.

    Google Panda Update

    (Image Source: Neil Patel)

    • Panda 2.0 and more. Everything I described in the preceding paragraph relates only to what is retroactively referred to as Panda 1.0. As a follow-up to this update, Google released another algorithmic change, known as Panda 2.0 to the SEO community, in April 2011. The update polished some of the additions that Panda 1.0 introduced, and added a handful of new signals for English queries.

    Yet another follow-up, briefly known as Panda 3.0 until users realized it paled in significance next to 1.0 and 2.0, became known as 2.1—these changes were hardly noticeable. A round of further Panda updates, minor in scale but significant in nature, followed, including Panda 2.2 in June 2011, 2.3 in July, and so on each month until October.

    • Panda 3.0 and slowdown. There’s some manner of debate in what actually qualifies as Panda 3.0. For some professionals, 3.0 happened in October 2011, when Google announced a state of “flux” for the Panda update, which saw no fewer than three separate minor updates working together to affect about 2 percent of queries.

    For others, 3.0 happened in November 2011, when an update on November 18th shook up rankings further, seemingly based on the quality of their sites’ content.

    • The Panda dance. After the November 2011 update, new iterations of the Panda update seemed to settle into a groove. Each month, there would be some kind of new update, affecting a small percentage of queries, for more than a year. Some of these updates would be full-fledged algorithm changes, adding, subtracting, or modifying some component of the core Panda algorithm.

    Others would merely be data refreshes, “rebooting” Panda’s evaluation criteria. In any case, the number of Panda updates grew to be about 25 when Matt Cutts announced that in the future, monthly Panda updates would roll out over the course of 10 days or so, leading to a more seamless, rotating update experience for users and webmasters. This setup became known informally as the “Panda dance,” loosely referencing the Google dance of 2003.

    • Panda 4.0. After the monthly revolving door of the Panda dance, things stayed relatively quiet on the Panda front until May 2014, when an update that became known as the official Panda 4.0 was released.

    It rolled out gradually, much like a Panda dance update, but was massive in scale, affecting about 7.5 percent of all queries. Evidence suggests it was both a fundamental algorithmic change and a data refresh, which lent power to the overall release.

    Panda 4.1, a follow-up in September 2014, affected an estimated 3 to 5 percent of all search queries, but the typical slow rollout made it hard to tell. Panda 4.2 came in July 2015, but didn’t seem to have much of an impact.

    • Lasting impact. Today, the Panda update is still one of the most important and influential segments of Google’s ranking algorithm. Though it started as a separate branch algorithm, today it’s a core feature of Google search, and most search optimizers recognize its criteria as the basis for their on-site content strategies.

    Presumably, it’s still being updated on a rolling, gradual basis, but it’s hard to take a pulse of this because of the constant, slow updates and the fact that each successive update seems to be less significant.

    Schema.org and other small updates

    Between and surrounding the Panda and Penguin updates were a number of other small updates, including the introduction of Schema.org structured data markup, now a major institution in the search world.

    • Schema.org. Much in the same way they came together for nofollow links, Google, Microsoft, and Yahoo worked together to settle on a single standard for “structured data” on sites. This structured data is a format in the back end of a site that helps search engine bots decode and understand various types of data on the site. For example, there are specific types of Schema markup for various categories of information, such as people, places, and things. Today, it’s used primarily to draw information from sites to be used in the Knowledge Graph (a minimal markup sketch appears after this list).

    Schema

    (Image Source: Schema.org)

    • Google+. Though the social media platform now seems somewhat obsolete, when Google launched it back in June 2011, it seemed to be the future of SEO. Here was a social media platform and content distribution center connected directly into Google’s own ranking algorithm; as a result, the platform saw 10 million signups in less than two weeks.

    Unfortunately, it wasn’t able to sustain this momentum of growth, and today Google+ doesn’t bear any more impact on search ranks than any other social media platform.

    • Expanded sitelinks. In August 2011, Google changed the layout of certain page entries in SERPs to include what are known as “expanded sitelinks,” detailed links under the subheading of a master domain. These pages, often ones like About or Contact, could then be optimized for greater visibility.
    • Pagination fixes. Up until September 2011, pagination was a real problem for the Google ranking algorithm. Because there are multiple ways to paginate a series of pages on your site (such as when you have multiple pages of a blog), many duplicate content and canonicalization issues arose for webmasters. With this update, Google introduced new link attributes (rel="next" and rel="prev") specifically designed to better index multiple-page sections.
    • Freshness. The Freshness update came in November 2011, with Google proclaiming the update to affect 35 percent of all queries, though this number was never demonstrated. The goal of the update was to work in conjunction with Panda to prioritize “fresher,” newer content over old content.
    • The 10-packs. In a real departure from typical Google style, Matt Cutts announced the 10-pack of updates in November 2011 proactively and transparently. None of these updates were game-changers, but they did affect a number of different areas, including “official” pages getting a ranking boost and better use of structured data. Additional update “packs” followed in the coming months, but never revolutionized the search engine.
    • Ads above the fold. Though never named, a small update in January 2012 targeted sites that featured too many ads “above the fold”—what had become known in the industry as “top-heavy” sites. Instead, Google preferred sites with a more conservative ad strategy—or ones with no ads at all.
    • Venice. Venice, which arrived in February 2012, was one of the first significant local updates, greatly increasing the relevance of local sites for corresponding local queries. It was a major boon for small businesses taking advantage of their local niches.
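    As an illustration of the structured data standard described at the top of this list, here is a minimal sketch of a Schema.org Organization block expressed as JSON-LD, one of the syntaxes the standard supports (microdata and RDFa are others), built with Python’s standard library. The organization details are hypothetical placeholders.

```python
# Minimal sketch: emitting Schema.org structured data as JSON-LD with Python's
# standard library. The organization details are hypothetical placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Widgets Co.",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [
        "https://twitter.com/example",
        "https://plus.google.com/+example",
    ],
}

# The resulting block is normally embedded in a page inside a
# <script type="application/ld+json"> element.
print(json.dumps(organization, indent=2))
```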

    The Penguin update

    • The basics. What Panda did for content, Penguin did for link building. In the weeks leading up to the release of Penguin, Google warned webmasters that there would be serious action taken to fight back against “over optimization,” the general nomenclature for any series of strategies designed to optimize a site to rank higher at the expense of user experience.

    Subsequently, Penguin was formally released on April 24, 2012. The update targeted a number of different activities that could be qualified as black-hat practices. For example, it cracked down on keyword stuffing, the practice of artificially including unnatural keywords into links and content for the sole purpose of increasing rankings for those terms.

    It also seriously improved Google’s ability to detect “good” and “natural” backlinks versus unnatural or spammy ones. As a result, more than 3 percent of all search queries were affected, and webmasters went into the same outrage they did when Panda disrupted their ranking efforts.

    Like Panda, it set a new tone for search optimization—one focused on quality over manipulation—and helped solidify Google as the leader in search that it remains today.

    • Initial boosts. Like Panda, Penguin needed some initial boosts to correct some of its early issues and sustain its momentum as a search influencer. The update that became known as Penguin 1.1 was released in May 2012, and served as more of a data refresh than an algorithmic modifier. However, it did confirm that Penguin was operating as an algorithm separate from Google’s core search ranking algorithm, much in the way that Panda did in its early stages.

    A third Penguin update came in October 2012, though it didn’t seem to affect many queries (less than 1 percent).

    • Penguin 2.0. The next major Penguin update rolled out in May 2013, almost exactly one year after the original Penguin update was released. This one was officially numbered by Google, but it didn’t bear as much of an impact as some professionals were anticipating. Though details were limited about the scope and nature of the update, it was speculated that this update focused more on page-level factors than its 1.0 counterpart.
    • Penguin 3.0. Due to the pattern of releases in May, most search optimizers anticipated a Penguin 3.0 release in May 2014. However, it wasn’t until October 2014 that we saw another refresh of the Penguin algorithm. Because the update was spread out over the course of multiple weeks, it’s hard to say exactly how much of an impact it had, but it appears to have affected less than 1 percent of all queries, making it the weak link in the Penguin bunch.
    • Penguin 4.0. In September 2016, Google announced yet another Penguin update, suggesting that Penguin is now a part of Google’s “core” algorithm. Over the next few weeks, Penguin 4.0 rolled out over several phases, devaluing bad links instead of penalizing sites and reversing some previous Penguin penalties. Since then, Penguin updates have been small refreshes, in the same vein as the Panda update.

    Like Panda, Penguin still holds an esteemed reputation as one of the most significant branches of the Google search algorithm, and is responsible for the foundation of many SEO strategies.

    The Knowledge Graph

    The Knowledge Graph is a distinct section of indexed information within Google. Rather than indexing various sites and evaluating them for relevance to a given query, the Knowledge Graph exists to give users direct, succinct answers to their common questions. You’ve likely seen a box like this pop up when you Google something basic:

    The Knowledge Graph

    (Image Source: Google)

    The Knowledge Graph was rolled out as part of an update back in May 2012. When it was released, it only affected a small percentage of queries, as the Knowledge Graph’s scope and breadth were both limited. However, Google has since greatly prioritized the value and prominence of the Knowledge Graph, and thanks to more websites utilizing structured markup, it has access to more information than ever before.

    The next official Knowledge Graph expansion came in December 2012, when Google expanded the types of queries that could be answered with the KG and ported it to different major languages around the world. After that, an update in July 2013 radically increased the reach and prominence of the Knowledge Graph, raising the number of queries it showed up for by more than 50 percent. After this update, more than one-fourth of all searches featured some kind of Knowledge Graph display.

    Since then, the prominence of rich answers and KG entries has been slowly and gradually increasing, likely due to the gradual nature of incoming knowledge and increasing capabilities of the algorithm. The Hummingbird update and its subsequent partner RankBrain (both of which I’ll explain in coming sections) have also amplified the reach and power of the Knowledge Graph with their semantic analysis capabilities.

    Exact Match Domains update

    The exact-match domains update was a seemingly small update that hit a specific class of sites hard. Exact-match domains are domains that match the wording of a user’s query exactly. In some cases, this is highly valuable; the user is likely searching for that exact brand. In other cases, though, it can be used as a deceptive way to poach organic traffic.

    Google accordingly reevaluated the way it handled exact-match domains, affecting nearly 1 percent of queries and reducing the presence of exact-match domains in search engine results pages by more than 10 percent.

    Smaller Updates

    After discussing the powerhouses of Panda and Penguin at length, all other search engine updates seem smaller by comparison. However, there have been some significant additions and modifications in recent years, some of which have opened doors to entirely new search visibility opportunities.

    Payday Loan update

    The Payday Loan update came around June 2013, and its purpose was to penalize sites with dubious intentions or spammy natures. As the name suggests, the primary target for these were “payday loan” sites and other similar sites that deliberately attempt to take advantage of consumers.

    Porn sites were also targeted. The main detection mechanism was an evaluation of certain types of link schemes, in an effort to reduce spam overall. For most legitimate business owners, this update didn’t make much of a difference.

    The Payday Loan update also saw future iterations—2.0 and 3.0—in 2014, which targeted the same types of sites.

    Hummingbird

    Hummingbird was, and continues to be, a beautiful and particularly interesting update. Even though many users never noticed it, it fundamentally changed the way Google search works—and continues to influence it to this day. Released sometime around August 2013 and formally acknowledged later in September, the Hummingbird update was a core algorithm change that improved Google’s ability to evaluate queries, working on the “relevance” side of the equation rather than the “authority” side.

    Specifically, the Hummingbird update changed the way Google looks at keywords in user queries. Rather than dissecting queries based on what individual keywords and phrases it contains, the Hummingbird update allows Google to semantically decipher what a user is intending to search for, then find results that serve that intention throughout the web. This may seem like a small difference—and for most queries, these types of results are similar—but now that it exists, heavily keyword-focused strategies have been weakened, and the power of naturally-written content with contextual relevance signals has increased even further.

    Hummingbird has also been important in setting the stage for the future of Google developments, as we will soon see. For starters, it has fueled the rise in voice searches performed by users; voice searches tend to be more semantically complex than typed queries, and demand a greater semantic understanding for better results. Google has also modified Hummingbird with an important and game-changing update more recently with RankBrain.

    Authorship drops

    One of the biggest motivations for getting a Google+ account and using it to develop content was to take advantage of the Authorship concept that developed alongside it. Authorship was a way for you to include your name and face (in other words, your personal brand) in Google’s search results, alongside all the material you’d authored.

    For a time, it seemed like a huge SEO advantage; articles written with Authorship through Google+, no matter what site they were intended for, would display more prominently in search results and see higher click-through rates, and clearly Google was investing heavily in this feature.

    Author Rank

    (Image Source: Dan Stasiewski)

    But starting in December 2013, Google started downplaying the feature. Authorship displays went down by more than 15 percent, with no explained motivation for the pull-back.

    In June 2014, this update was followed up with an even more massive change—headshots and photos were removed entirely from search results based on Authorship. Then, on August 28, 2014, Authorship was completely removed as a concept from Google search.

    In many ways, this was the death knell for Google+; despite an early boost, the platform was seeing declining numbers and stagnant engagement. Though search optimizers loved it, it failed to catch on commercially in a significant way.

    Today, the platform still exists, but in a much different form, and it will never offer the same level of SEO benefits that it once promised.

    Pigeon

    Up until Pigeon rolled out in July 2014, Google hadn’t played much with local search. Local search operated on an algorithm separate from the national algorithm, and Google had toyed with various integrations and features for local businesses, such as Maps and Google My Business, but the fundamentals had remained more or less the same for years.

    Pigeon introduced a number of different changes to the local algorithm. For starters, it brought local and national search closer together; after Pigeon, national authority signals, like those coming from high-authority backlinks, became more important to rank in local results.

    The layout of local results changed (though they had previously, and have since, gone through many different layout changes), and the way Google handled location cues (mostly from mobile devices and other GPS-enabled signals) improved drastically. Perhaps most importantly, Pigeon increased the power of third-party review sites like Yelp, increasing the authority of local businesses with a large number of positive reviews and even increasing the rank of third-party review site pages.

    It was a rare move for Google, as it was seen as a partial response to complaints from Yelp and other providers that their local review pages weren’t getting enough visibility in search engines. In any case, Pigeon was a massive win for local business owners.

    Ongoing Panda and Penguin refreshes

    I discussed Panda and Penguin at length in their respective sections, but they’re worth calling attention to again. Panda and Penguin weren’t one-time updates that can be forgotten about; they’re ongoing features of Google’s core algorithm, so it’s important to keep them in the back of your mind.

    Google is constantly refining how it evaluates and handles both content and links, and these elements remain two of the most important features of any SEO campaign. Though the updates are happening so gradually they’re hard to measure, the search landscape is changing.

    Layout and functionality changes

    Google has also introduced some significant layout and functionality changes; these haven’t necessarily changed the way Google’s search algorithm functions, but they have changed the average user’s experience with the platform:

    • In-depth articles. In August 2013, Google released an “in-depth” articles update, and even formally announced that it was doing so. Basically, in-depth articles were a new type of news-based search result that could appear for articles that covered a topic, as you might guess, in-depth. Long-form, evergreen content was given a boost in visibility here, driving a trend of favoring long-form content that continues to this day.
    • SSL. In August 2014, Google announced that it would now be giving a ranking boost to sites that featured a layer of SSL encryption (denoted with the HTTPS designation), clearly with the intention of improving the average user’s web experience with increased security features. This announcement was blown up to be “critical” for sites to upgrade to having SSL protection, but the actual ranking boost here turned out to be quite small.
    • The quality update. The “official” quality update happened in May 2015, and originally it was described as a “phantom update” because people couldn’t tell exactly what was happening (or why). Eventually, Google admitted that this update improved the “quality signals” that it received from sites as an indication of authority, though it never delved into specifics on what these signals actually were.
    • AdWords changes. There have also been several changes to the layout and functionality of ads over the years, though I’ve avoided talking about these too much because they don’t affect organic search that much. Mostly, these are tweaks in visibility and prominence, such as positioning and formatting, but a major shakeup in February 2016 led to some big changes in click-through rates for certain types of queries.

    Mobilegeddon

    Mobilegeddon is perhaps the most entertainingly named update on this list. In February 2015, Google announced that it would be making a change to its ranking algorithm, favoring sites that were considered to be “mobile friendly” and penalizing those that weren’t mobile friendly on April 21 of that same year. Although Google had slowly been favoring sites with better mobile user experience, this was taken to be the formal, structured update to cement Google’s desire for better mobile sites in search results.

    The SEO community went crazy over this, exaggerating the reach and effects of the update as apocalyptic with their unofficial nickname (Mobilegeddon). In reality, most sites by this point were already mobile-friendly, and those that weren’t had plenty of tools at their disposal to get there, such as Google’s helpful and still-functional tool for testing and analyzing the mobile-friendliness of your site.

    Overall, “mobile-friendly” here mostly means that your site content is readable without zooming, all your content loads appropriately on all manner of devices, and all your buttons, links, and features are easily accessible with fingertips instead of a mouse. When the update rolled out, it didn’t seem very impressive; only a small percentage of queries were affected, but subsequent refreshes have given it a bigger impact.
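    One of the simplest technical signals of a mobile-friendly page is a responsive viewport meta tag, which tells browsers to scale content to the device’s width rather than forcing users to zoom. The sketch below is only a rough heuristic for checking that single signal; Google’s mobile-friendly test evaluates far more than this, and the sample markup is hypothetical.

```python
# Rough heuristic sketch: checking markup for a responsive viewport meta tag,
# one baseline signal of mobile-friendliness. The sample markup is hypothetical,
# and a real audit would rely on a proper parser and rendering checks.
import re

def has_responsive_viewport(html):
    """Return True if the markup declares a device-width viewport meta tag."""
    for tag in re.findall(r"<meta\b[^>]*>", html, flags=re.IGNORECASE):
        if re.search(r'name\s*=\s*["\']viewport["\']', tag, flags=re.IGNORECASE) \
                and "width=device-width" in tag.lower():
            return True
    return False

page = '<head><meta name="viewport" content="width=device-width, initial-scale=1"></head>'
print(has_responsive_viewport(page))  # -> True
```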

    Google also released a Mobile 2.0 update in May 2016, but since this basically reinforced concepts already introduced with Mobile 1.0, the effects of the update were barely noticeable to most business owners and search optimizers.

    The Modern Era

    Now, we enter the modern era of Google updates. I’ve already mentioned a handful of updates that have come out in the past couple of years, but now I want to take a look at the game-changers that will have the biggest impact on how the search engine is likely to develop in the future.

    RankBrain

    RankBrain made headlines when it first emerged—or rather, when it was formally announced by Google. Google announced the update in October 2015, but by that point, the process had already been rolling for months. What makes RankBrain special is the fact that it doesn’t necessarily deal with authority or relevance evaluations; instead, it’s an enhancement to the Hummingbird update, so it deals with better understanding the semantics of user queries.

    But RankBrain is even more interesting than that; rather than being a change to Google’s algorithm or even an addition, it’s a machine learning system that will learn to update itself over time. Hummingbird works by trying to analyze user intent of complex phrases and queries, but not all of these are straightforward for automatic algorithms to analyze; for example, the phrase “who is the current president of the United States?” and the phrase “who’s the guy who’s running the country right now?” are basically asking the same thing, but the latter is much more vague.

    RankBrain is designed to learn the complexities of language phrasing, eventually doing a better job at digging into what users are actually searching for. It’s highly significant because it’s the first time Google has used a machine learning update, and it could be an indication of where the company wants to take its search algorithm in the future.

    Quality updates

    Google has also released a number of what it calls “quality updates,” which make alterations to what factors and signals indicate what’s determined to be “quality content.” The quality update of May 2015 was one of these, but other updates have presumably rolled out, making less of an impact and going largely unnoticed by search optimizers.

    However, Google has recently opened up more about what actually qualifies as “quality content,” publishing and regularly revising a document called the search quality evaluator guidelines. If you haven’t read it yet, I highly recommend you check it out. A lot of it is common-sense stuff, or is familiar if you know the basics of SEO, but there are some hidden gems that are worth scoping out for yourself; I highlighted 10 of them in this infographic.

    With regard to spam, Google has released two new updates in the past two years to combat black-hat tactics and practices that negatively affect users’ experiences. In January 2017, Google released an update it had announced five months earlier, called the “intrusive interstitial” penalty. Basically, the update penalizes any site that uses aggressive interstitials or pop-ups that harm a user’s experience.

    In addition, Google launched a soft update in March 2017 called “Fred,” though it’s still officially unconfirmed. The specifics are cloudy, but Fred was designed to hit sites with low-value content, or those practicing black-hat link building tactics, penalizing them.

    Possum

    In September 2016, just before Penguin 4.0, the search community noticed significant ranking volatility for local searches. Though unconfirmed by Google, the informally named “Possum” update attracted significant attention. It appears the update increased the importance of location for the actual searcher, and updated local search entries to include establishments that were just outside city limits. Google also seems to have experimented with location filtering, separating individual locations that point to the same site.

    Snippet Changes

    Google has also been experimenting with its presentation of “featured snippets,” the selected, standalone search entries that pop up to answer your questions concisely, above organic search results, but below ads. Though the past few years have seen a steady increase in the number of featured snippets, there was a significant drop in October 2017; conversely, Knowledge Graph panels increased, suggesting Google may be rebalancing its use of featured snippets and the Knowledge Graph as a means of presenting information to searchers.

    Later, in November 2017, Google increased the length of search snippets for most results, upping the character limit to nearly twice its previous cap of 155.

    Micro-updates

    Google also seems to be favoring micro-updates, as opposed to the behemoths that made search optimizers fear for their jobs and domains for years. Instead of pushing a massive algorithm change over the course of a weekend, Google is making much smaller tweaks and feeding them out over the course of weeks, or even months. For example, as of mid-April 2017, about half of all page-one organic search results were HTTPS; by the end of 2017, that figure had grown to roughly 75 percent, heavily implying a slow-rollout update favoring secure sites.

    In fact, it’s hard to tell when Google is updating its algorithm at all anymore, unless it announces the change directly (which is still rare, since Google prefers not to hand spammers a roadmap). These updates are going unnamed, unnoticed, and, for the most part, un-worried about. November 2016, December 2016, February 2017, May 2017, September 2017, and November 2017 all saw significant ranking volatility associated with small updates.

    The core algorithm

    Part of the reason why Google is cutting back on the massive updates and favoring much smaller ones is because its core algorithm is already doing such a good job on its own. It’s a high-quality foundation for the search engine, and it’s clearly doing a great job of giving users exactly what they’re looking for—just think of all the times you use Google search on a daily basis.

    Especially now that Panda and Penguin are rolled into the core algorithm, Google has little need to make major infrastructural changes anymore, unless it someday decides to fundamentally change the way we think about search—and honestly, I wouldn’t put it past them.

    App indexing and further developments

    It is interesting to see how Google has increased its attention to app indexing and display. Thanks to the greatly increased popularity of mobile devices, apps have become far more relevant, and Google has responded.

    Its first step was allowing for the indexation of apps, much like the indexation of websites, to allow apps to show up for relevant user queries. After that, it rolled out a process called app deep linking, which allows developers to link to specific pages of content within apps for users who already have the apps installed.

    For example, if you have a travel app, a Google search on your mobile device could theoretically link you to a specific destination’s page within that app.

    But Google is pushing the envelope even further with a process called app streaming. Certain apps are now stored on Google’s servers, so you can access app-based content from a Google search without even having the app installed on your mobile device. This could be a portent of the future of Google’s algorithm development.

    Where does it go from here?

    With the knowledge of the modern era, and the pattern of behavior we’ve seen from Google in the past, we can look to the future and try to predict how Google will develop itself.

    • The end of big-game updates? The first major takeaway from Google’s slowdown and propensity for smaller, more gradual updates is the fact that game-changing updates may have come to an end. We may never see another Panda or Penguin shake up rankings as heavily as those twin updates did. We may see search evolve, in radical new forms no less, but even these changes will happen so gradually, they’ll be barely noticeable. However, it’s still important to pay attention to new updates as they roll out so you can adapt with the times.
    • Ongoing Knowledge Graph expansion. Google is heavily favoring its Knowledge Graph, as rich answers are rising in prominence steadily, especially over the past few years. They’re getting more detailed, they’re showing up for more queries, and they’re taking up more space. It’s clear that Google wants to be able to answer every question immediately and specifically for users, but the implications this might have for overall search visibility and organic traffic have yet to be determined.
    • Machine learning updates. RankBrain is only one indicator, but it is a big one, and it only makes sense that Google would want more machine learning algorithms to develop its core search function. Machine learning takes less time, costs less labor, and may ultimately produce a better product. However, more machine learning updates would make search optimization less predictable and more volatile in terms of changes, so it’s definitely a mixed bag.
    • The eventual death of the traditional website. Thanks to Google’s push for more app-based features and app prominence, not to mention the rise of voice search and smart speakers, the death of the “traditional website” may be accelerated. It’s unlikely that websites will die out within the next couple of years, but beyond that, consumers could be ready for an overhaul to the average online experience.
    • New features and functions. Of course, Google will probably develop new features, new functions, and accommodations for new technologies in addition to simply improving its core product. These developments are somewhat unpredictable, as Google keeps them tightly under wraps, and it’s hard to predict what technologies are on the horizon until they’re actually here.

    These predictions are speculative and ambiguous, but unfortunately that’s the nature of the beast. Historically, it’s been difficult to know what to prepare for, because not even Google engineers know exactly what technologies users will take to and which ones they won’t.

    Everything in the search world—from algorithm updates to SEO and user experience design—demands an ongoing process of feedback and adaptation. You have to pay attention, remain flexible, and work actively if you want to stay ahead of the curve.

    Google has come a long way in the past 19 years of its existence, and it likely has a long road of development ahead of it. If you attune yourself to its changes, you can take advantage of them, and ride those waves of visibility to benefit your own brand.


  2. Why Did Google Update Search Quality Raters Guidelines?

    Leave a Comment

    Google’s “Search Quality Raters” guidelines (henceforth shortened to SQR guidelines) are something of a holy document in the SEO community. We live in a world where Google is pretty much the dominant force in search, dictating the trends and tropes we optimize our sites for (and influencing any other competitors who peek their heads out for search space), yet it doesn’t tell us specifics about how its search algorithm works. Instead, we get hints and suggestions that indirectly tell us how to rank higher but mostly just prevent us from relying on black hat tactics to rank.

    The Original SQR Guidelines

    There have been a handful of “leaked” versions of the SQR document, and one official abridged version released by Google, but it wasn’t until last year that Google released the full SQR guidelines, in all their 160-page glory. Search marketers, once they got over being intimidated by the length, delved into the document to see what new insights they could uncover about how Google interprets the authoritative strength and relevance of websites.

    Google Search Quality Rating Program

    (Image Source: Google)

    The original document didn’t exactly revolutionize the search world, but it did bring up some important considerations and strategic takeaways we wouldn’t have gotten otherwise.  Much of the document did, admittedly, tread old ground by covering things like the importance of high-quality content and how Google views relevance to search queries from a semantic angle. However, there were some noticeable new insights:

    • Google scrutinizes pages that deal with your money or your life (“YMYL” pages) more heavily than other pages.
    • Expertise, authoritativeness, and trustworthiness (EAT) are the three factors that Google uses to determine a site’s domain strength.
    • Content positioning and design come into play. Google evaluates content differently based on how it’s placed.
    • Know queries and Know Simple queries. These are designations for different types of queries based on how they can be answered: succinctly (Know Simple) or with more elaboration necessary (Know).

    Now, it appears Google has made some major modifications to the SQR document.

    The Changes

    Only four months after the document was originally published, Google made significant changes. You might think Google added even more content to the 160-page behemoth, but actually, the document shrank, specifically to 146 pages.

    The most important changes include:

    • A decreased emphasis on supplementary content. Supplementary content refers to any on-page content other than the “main” source of information. For example, if your Contact page has a few paragraphs of text explaining who you are and what you do, you might have supplementary content in the form of notes in the footer, or testimonials. Supplementary content can help or harm you, and it was a major point of emphasis in the previous version. Now that Google has downplayed it, it might be a sign that it’s not as important to your rank as it used to be.
    • An increased attention to local search, now called “Visit-in-Person.” Google spends more time talking about the importance of local rankings and how to achieve them. It has also adopted new terminology, “visit-in-person,” which may explain how it perceives these types of user queries. Rather than simply relegating these entries, which function on an algorithm separate from the national results, to a geographic sub-category, Google now treats them as drivers of foot traffic. It makes sense, as most local searches happen on mobile and are related to some semi-immediate need.
    • Increased descriptions of YMYL and EAT concepts. I described both the YMYL and EAT concepts in the section above. The concepts themselves haven’t changed, but Google has increased its emphasis on them. This means the concepts may be becoming more important to your overall rank, or it may mean that there was some initial confusion surrounding them, and Google has worked to clarify those points.
    • More examples of mobile marketing in effect. It’s no surprise that Google is doing more to play up mobile, especially with another Mobilegeddon-style update in the works. Mobile is a topic that still confuses a lot of webmasters, but it’s becoming increasingly important as a way to reach modern audiences. Mobile isn’t going away anytime soon, so this is a vital area (and Google recognizes that).

    If you’re interested in a much, much more thorough analysis of the changes, there’s a great post about it here.

    Google’s Main Priorities

    By examining Google’s motivations, we can better understand where the search platform hopes to be in the next few years, and get a jumpstart on preparing our SEO strategies for the future. For starters, Google is extremely fixated on the mobile user experience. With an expanded section on mobile compliance and a new frame of reference for local searches, it’s clear that Google wants mobile searchers to have an integrated, interactive, and seamless experience finding websites. The YMYL and EAT systems of rating content quality and significance are standbys, but the fact that Google is actively expanding these concepts is evidence that they’ll be around for the long haul.

    It’s uncertain exactly how often Google will update their SQR guidelines document, or what other changes might be in store for our future as search marketers. Certainly, there may be major new additions for new technologies like apps and interactive content, but in the meantime, keep your focus on producing expert, authoritative, trustworthy content, and optimize your site for mobile user experiences.


  3. How Far Will Google’s “Right to Be Forgotten” Plan Go?

    Leave a Comment

    Google has been under a ton of pressure, both in the European Union and in the United States, to do more for user privacy. One of the biggest changes pushed by the EU back in 2014, the “Right to Be Forgotten” act, basically mandated that Google remove any old or irrelevant links that individual users request for removal (pending approval). Now, it appears that the Right to Be Forgotten rules are growing larger and more complex, and Google is taking an active role in their development.

    What does this mean for Google? What effects will it have for the future of search?

    The Right to Be Forgotten Act

    The specifics of the Right to Be Forgotten Act are more complicated than my initial, brief explanation, and it’s important to understand them before I dig any deeper into more recent developments. The original guidelines in the EU give all EU citizens the right to request that a link be removed from Google’s massive web index. This functions similarly to Google’s DMCA requests, which allow copyright holders to request the removal of infringing material.

    right to be forgotten

    (Image Source: SearchEngineLand)

    However, there is no guarantee that Google will honor this request if the link is deemed relevant to search user interests. The only links required to be removed are ones that are out of date or no longer relevant to the public interest, which is a painfully ambiguous term. As a quick example, let’s say when you were a kid, you started up a personal blog in which you complained about school and expressed your angst with reckless abandon. Somehow, you found yourself unable to take this site down, and now, 20 years later, that blog is still showing for anybody who searches your name. Since this information is old, and isn’t inherently tied to the public interest, it would seem fair for Google to remove the link from its index to reduce its visibility.

    If Google rejects the request, it can be appealed. If the link is removed, the content is still available online—it’s just significantly more difficult to find. It may also appear on another site, complicating the process. Right to Be Forgotten isn’t a perfect system, but it’s what we have.

    Various alternative forms of “right to be forgotten” are starting to emerge in the United States, as well. For example, California recently passed a law known as the “eraser button,” which requires similar functionality (and goes a step further by demanding the full removal of certain content from the web), and Illinois and New Jersey are working on similar laws. A federal version of the legislation is also underway.

    This falls in line with the general attitude of the times: a demand for greater responsibility from tech giants. Google is also under heavy scrutiny for alleged antitrust violations, originally in the EU exclusively, and now in the United States as well.

    The Latest Developments

    Back in 2014, Google was resistant to Right to Be Forgotten legislation, claiming it to be a form of censoring the Internet and a violation of user rights. Now, Google is inching closer to a more comprehensive application of those laws.

    Under the old policy, “forgotten” links would only be removed from the versions of the search engine for other countries—Google.co.uk, for example. A user with the right incentive could simply access the United States’ version of Google to find the links that had been removed. Now, Google has implemented functionality to prevent this workaround; link removals are applied based on the searcher’s location, so a user in the UK will not be able to find forgotten links, no matter which version of the search engine they use. Speculation suggests that Google made this change under pressure from European authorities.

    Where Does It Go From Here?

    Frankly, the speculation rings true to me. I suspect that Google won’t make any moves to remove content from users’ reach until it is forced (or pressured) to by international government bodies. This move, while small, is a concession the search giant is willing to make in order to remain in good standing on the international scene.

    The company isn’t known for buckling in response to requests; for example, when Spain passed a law that would require the company to pay a kind of tax to writers of articles that showed up in Google News, Google responded by pulling Google News from Spain entirely.

    Google News

    (Image Source: Ars Technica)

    With Google more or less tacitly accepting these new demands for indexation rules, does this mean that Google is liable to respond passively to such requests in the future? This remains to be seen. It depends on how much pressure is put on the company by international organizations, and how important the issue is to Google. For example, removing a link to a half-assed personal blog from 20 years ago doesn’t carry the same consequences as censoring information available to an entire country about that country’s government.

    My guess is that lawmakers in the United States and overseas will gradually introduce new, better regulations to encourage user privacy, and Google, as long as those demands are reasonable, will comply. Overall, this will have a minimal effect on the way we use search engines, but it shows that we’re entering an era of greater responsibility and accountability for tech giants.


  4. How Google’s Candidate Cards Turned Into a Travesty

    Leave a Comment

    Google is never short on ideas for improving its search system and breaking ground in new areas of user satisfaction. Sometimes, these ideas are large and revolutionary, like embedding maps into local searches. Its Knowledge Graph, a system of direct information provision (forgoing the need to peruse traditional search entries), is one of the most robust and useful additions in recent years, and it keeps evolving in new, unique ways.

    Rather than solely providing word definitions, or numerical unit conversions, or even filmography information, the Knowledge Graph can provide unique insights on news and upcoming events. Take its entry on the Super Bowl, for example (keep in mind this was written just before the actual Super Bowl):

    Super Bowl Keyword Search Results

    Presumably, this entry will self-update as teams score throughout the evening, and in the week that follows, it will instead offer retrospective analysis of what is currently a forthcoming event. As a user, I don’t find much left to be desired; I can even scroll down to find additional results.

    But a recent feature of the Google Knowledge Graph has made a much bigger impact, and reveals one of the biggest current flaws of the direct-information model. It’s all about Google’s portrayal of candidates in what has undoubtedly been one of the most eventful, peculiar election seasons of the past few decades.

    Candidate Cards

    Google’s politics-centric new feature, candidate cards, has begun the same way all its features begin: as an experiment. Accordingly, let’s refrain from judging this system too harshly.

    The idea was to give the American public more transparency on their leading presidential candidates—which sounds great in theory. Google’s idea was to give each significant candidate a designated position in their search results for certain queries. These “candidate” cards would appear in a carousel to potential voters, giving them a direct line of insight into the candidates’ actions and positions. This feature was rolled out as a trial for the recent “undercard” Republican debate, along with YouTube integration and live tracking via Google Trends.

    Google Candidate Cards Mobile View

    (Image Source: Google blog)

    Here’s the issue: if you followed along with this experiment during the actual debate, you wouldn’t have seen multiple candidates’ positions. You would have seen only one, at least for the bulk of the time and for most queries.

    As SearchEngineLand’s Danny Sullivan noted in a blog post on the issue, the carousel of cards that appeared, for practically any search, only showed posts and positions by one candidate: Carly Fiorina.

    gop debate serp

    A handful of general searches like “gop debate” or even just “debate” returned the same carousel. Likewise for any undercard candidate-specific searches, such as “Huckabee” or “Santorum.” At first glance, you would assume that this is some type of error with Google’s system, that somehow these posts were “stuck” as the top results for any query that tapped into the feature. Could this mean that Google was unfairly favoring one candidate over the others?

    Google would later confirm that nothing was wrong with the feature. Each candidate had the same ability to upload information to this system; Fiorina was the only candidate who made use of the system, and therefore had substantial ground to gain.

    Main Candidate Cards

    Candidate cards for the main GOP candidates appeared not long after the undercard debate ended, including Donald Trump, who was absent from the “main” debate. Take a quick look at these and take note of anything peculiar that stands out:

    GOP Debate

    (Image Source: SearchEngineLand)

    Look at the center post, which features a link to donate $5 to Marco Rubio’s campaign, and consider the nature of the query: 2016 Republican debate. If you’re like me, this raises some questions about the card system and whether it goes “too far” for search results.

    Three Major Concerns

    I don’t care who you support, which party you belong to, or what you think about this election. For the purpose of this article, I’m assuming every candidate on both ends of the political spectrum is equally unqualified to lead the country, and so should you. Remove your biases and consider the following dilemmas this system presents, for any candidate in the running:

    1. Free Advertising. There are strict rules about political advertising, which go into exhaustive detail that I won’t attempt to reproduce here. It seems that Google’s card system can be taken advantage of as a free, unrestricted place to advertise, whether it’s through the request for campaign donations or an attack on another candidate.
    2. SEO as a Political Skill. Take Fiorina’s case; should she be rewarded with extra publicity because of what basically comes down to SEO skills? This seems strange at first, until you realize this is mostly the case anyway—you can bet each candidate has a dedicated contact responsible for making sure they rank highly for public searches (not to mention the presence and effects of social media marketing in political elections).
    3. Biased Media Control. Last, and perhaps most importantly, should Google be allowed to control the parameters by which we view candidate information? The possibility of Google filtering out one candidate’s cards is concerning, yet again, it’s nothing entirely new—Google’s stranglehold on search results is currently being investigated as a violation of antitrust laws in Europe.

    What does the candidate card system say about Google? What does it mean for the political system? Is it a useful tool that needs refinement or a total travesty that should be scrapped? I’m not quite sure myself, but you can be sure this experiment didn’t quite go the way Google originally intended. Keep your eyes peeled for how this feature develops—it could have a massive impact on how this and even future elections pan out. In the meantime, you better hope your favorite candidate is as skilled at SEO as you are.


  5. Your Guide to Google’s New Stance on Unnatural Links

    2 Comments

    Recently, Google quietly released an update to its link schemes/unnatural links document in Webmaster Tools. For something that happened so quietly, it generated significant noise across industry media outlets. So, what changes were made and what do SEO professionals, business owners and webmasters need to do differently as a result?

    Building Links to Manipulate PageRank


    Here’s what Google’s document now says about manipulating PageRank:

    “Any links intended to manipulate PageRank or a site’s ranking in Google search results may be considered part of a link scheme and a violation of Google’s Webmaster Guidelines. This includes any behavior that manipulates links to your site or outgoing links from your site.”

    What does Google mean when it says, “any links intended to manipulate PageRank”? According to Google, any links you (or someone on your behalf) create with the sole intention of improving your PageRank or Google rankings are considered unnatural.

    The quantity and quality of inbound links have always been a crucial part of how Google’s algorithm determines PageRank. However, this fact gave rise to manipulative link building schemes that created nothing but spam across the web, which Google has been working feverishly to eliminate since it launched its original Penguin algorithm in April 2012.

    Now, Google is much better at differentiating true editorial (i.e., natural) links from manipulative (unnatural) ones. In fact, Google now penalizes websites in the search rankings that display an exceptionally manipulative link profile or history of links.
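
    To make the “quantity and quality” point concrete, here is a minimal sketch of the classic PageRank calculation run on a made-up four-page link graph. It’s purely illustrative; Google’s production ranking system layers hundreds of other signals on top of anything like this.

    ```python
    # Toy PageRank calculation on a hypothetical four-page link graph.
    # Illustrative only; Google's production algorithm is far more complex.
    links = {              # page -> pages it links out to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    damping = 0.85         # standard damping factor from the original PageRank paper
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):    # power iteration until the scores settle
        new_rank = {}
        for page in pages:
            inbound = sum(
                rank[src] / len(targets)   # each linking page splits its vote
                for src, targets in links.items()
                if page in targets
            )
            new_rank[page] = (1 - damping) / len(pages) + damping * inbound
        rank = new_rank

    print(rank)  # page "C" wins: it earns the most, and best-weighted, inbound links
    ```

    Notice that page “D,” with no inbound links, stays near the baseline score no matter how many pages it links out to, which is exactly why earning (or faking) inbound links became such a target for manipulation.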

    What about Buying or Selling Links?


    Google says, “Buying or selling links that pass PageRank. This includes exchanging money for links, or posts that contain links; exchanging goods or services for links; or sending someone a “free” product in exchange for them writing about it and including a link.”

    If people found out that their favorite politician had in some way purchased a majority of his or her votes, how would they feel about it? When we purchase (or sell) links for a website, we are essentially doing the same thing.

    Google has made it clear that purchasing links violates their quality guidelines. However, many companies continue to do so, and some companies have severely lost search rankings and visibility as a result.

    Google is getting better at understanding which links are purchased in a wide variety of ways. They also have a team devoted to investigating web spam, including purchased links.

    Excessive Link Exchanges


    Google says, “Excessive link exchanges (“Link to me and I’ll link to you”) or partner pages exclusively for the sake of cross-linking.”

    A few years ago, it was common for webmasters to exchange links. The method worked, so it started to be abused at large scale, and Google responded by discounting such links. Now, Google has officially added this to its examples of unnatural link building tactics.

    Large-scale Article Marketing or Guest Posting Campaigns

    Google says, “Large-scale article marketing or guest posting campaigns with keyword-rich anchor text links.”

    This, in particular, has a lot of people wondering, “can you still engage in guest posting as a way to get inbound links?” The answer depends on how you’re doing it.

    A few years ago, it was common for SEOs to engage in large-scale article marketing in an attempt to quickly get tons of inbound links. Many were using low-quality, often “spun” content (mixed and mashed, sometimes computer-generated nonsense) to reduce time and content production costs. The result was a surge in nonsensical articles being published around the web for the sole purpose of creating inbound links. It was a true “throw spaghetti at the wall and see what sticks” approach to online marketing; some publications rejected these submissions, and others approved them without any editorial review. It was all a numbers game with the hope that some of the content would get indexed, and thus count for the link.

    Google responded by launching its Penguin and Panda algorithms to penalize businesses that were creating this mess; Penguin targeted websites with many inbound links that were obviously unnatural, while Panda targeted the publishers that published the content without any editorial review. As a result, most of the large-scale article marketing links became worthless.

    After people started to realize that large-scale article marketing campaigns were no longer working, they turned to guest posting as an alternative. Unfortunately, what many considered “guest posting” was simply an ugly reincarnation of article marketing; the only difference was the publishers and the extra steps of finding websites open to publishing guest contributions. Many continue to use low-quality content in mass quantities, and wonder why they still get penalized by Penguin.

    Does guest posting still work for building inbound links? Yes, but only if you publish high-quality content on relevant, authoritative sites. High-quality guest posts are a popular and tremendously effective way to acquire editorial links for your site, and they have many other benefits as well. For more information on how to use guest posting as a safe, effective link building tactic, see my article “The Ultimate, Step-by-Step Guide to Building Your Business by Guest Blogging.”

    Automated Link Building Programs

    Google says, “Using automated programs or services to create links to your site.”

    A few years ago, during the same period that article marketing and spinning were all the rage, a market developed for programs and services that would automate the steps involved in these processes. These tools and services became popular because they were an easy way to get huge numbers of links to your site quickly. Most importantly, they worked. Unfortunately, they only accelerated the spread of the low-quality nonsense that pervaded the industry at the time.

    Google now hunts down sites that have these sorts of inbound links, denying them any benefit.

    Conclusion

    So, why is Google waging a war on unnatural links? For years, many SEOs effectively manipulated their rankings using the methods described above, along with others. However, the types of links and content that people created as a result provided no value to people, only clutter. They caused search results to display nonsensical, confusing content, which makes Google look bad to its users. Furthermore, they cost Google money as its bots spend time crawling and indexing nonsense rather than good, quality content.

    As of April 2012, with the release of the Penguin algorithm, Google has been trying to keep low quality content out of its index as well as the search results. Now, they’re becoming more transparent with their goals as they refine and clarify their webmaster guidelines.

    Although these changes created quite a stir across the industry, it’s really just the same message that Google has been trying to convey for years. Create quality content that people want to read and share; the inbound links will come as a result, and you won’t need to worry about unnatural ones bringing down your website in the rankings.

    Want more information on link building? Head over to our comprehensive guide on link building here: SEO Link Building: The Ultimate Step-by-Step Guide


  6. How to Find LSI (Long-Tail) Keywords Once You’ve Identified Your Primary Keywords

    4 Comments

    For many SEOs, keyword research is all about finding keywords with a high number of monthly searches and low competition. Some of the more advanced will move on to long tail keywords, or keyword phrases, or look to local keywords to help lower the competition and leap to the top of the search engine results page.

    These are all great strategies, but to truly show your skills as a keyword ninja, and find those untapped gold nuggets, you have to know how to identify long-tail, LSI keywords.

    What are LSI Keywords?

    If you were to search for a definition of LSI (latent semantic indexing) keywords, you would find answers all over the map. Most people will tell you that LSI keywords are simply synonyms for your keywords. The belief is that by finding similar terms for your primary keywords you can make your content look a bit more natural while adding more possible search terms into the mix.

    However, this rudimentary explanation of the term doesn’t do enough to serve our purposes. If we want to master the LSI keyword, we have to get elbow-deep in what it means. I wrote an article specifically for that purpose: “Latent Semantic Indexing (LSI): What is it, and Why Should I Care?”

    Wikipedia describes LSI as having the “ability to correlate semantically related terms that are latent in a collection of text,” a practice first put into use by Bell Labs in the latter part of the 1980s. So it looks for words (keywords or key phrases, in our case) that have similar meanings, and for words that have more than one meaning.

    Take the term “professional trainer,” for example. This could mean a professional fitness trainer, a professional dog trainer, or even a professional corporate trainer. Thanks to LSI, the search engines can actually use the rest of the text in the surrounding content to make an educated guess as to which type of professional trainer is actually being discussed.

    If the rest of your content discusses Labrador Retrievers, collars, and treats, then the search engine will understand that the “professional trainer” being referenced is likely a dog trainer. As a result, the content will be more likely to appear in search results for dog trainers.

    Another case would be where multiple synonymous terms exist in the same piece of content. Take the word “text” for example. If this were a keyword for which you were trying to optimize your page, words like “content,” “wording,” and “vocabulary,” would all likely appear within the content because they are synonyms and/or closely related terms.
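
    If you want to see the underlying idea in action, the short sketch below runs a toy latent semantic analysis over four invented page snippets using scikit-learn. The documents and the two-topic setup are my own assumptions for illustration, not anything Google publishes, but they show how co-occurring words pull the “dog” pages together and away from the “fitness” pages, even though every snippet mentions a trainer.

    ```python
    # Toy latent semantic analysis with scikit-learn.
    # The documents are invented; the point is that co-occurring words pull
    # semantically related pages together even when they share the same phrase.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "professional trainer for Labrador dogs: collars, treats and obedience",
        "dog obedience classes and dog treats with a certified professional trainer",
        "professional trainer offering gym workouts, weight loss and fitness plans",
        "gym workouts and fitness plans with a personal trainer",
    ]

    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

    # The two dog pages (0, 1) and the two fitness pages (2, 3) should score
    # as most similar to each other in the reduced semantic space.
    print(cosine_similarity(lsa).round(2))
    ```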

    The benefits of LSI keywords

    The most obvious benefit to LSI keywords is that your keyword reach becomes broader by using synonyms. As I wrote in my article “The Rise of the Longtail Keyword for SEO,” “they are playing an increasingly essential role in SEO.”

    In addition to the broader reach, your content will rank higher in search engines because of the supporting effect of the LSI keywords. Repeating your keywords over and over throughout the text in an attempt to achieve the perfect keyword density (which, by the way, is a completely outdated SEO term and tactic) makes the content read awfully funny; and the search engines are smart enough to detect this sort of manipulation, too. Using synonymous keywords helps make your content a richer experience for the reader, and more legitimate (and thus, higher ranking) to search engines.

    Finally, LSI keywords help keep you competitive for your primary keywords in the right context. If you are optimizing for the term “professional dog trainer,” you’re less likely to be competing against the other types of professional trainers in search results.

    Great, how do I find LSI keywords?

    The search for LSI keywords starts with your primary keywords. They are the foundation of your SEO efforts, so if you don’t have these identified yet, then go back and find these first. Once you have them you can get started with LSI keywords. How do you find primary keywords? See my article, “The Definitive Guide to Using Google’s Keyword Planner Tool for Keyword Research.”

    Contrary to what you learned in high school, the thesaurus is not your first stop to find synonyms for your LSI keywords.

    The easiest way to find out which terms the search engines consider related to your keyword is to use the search engines themselves. Go over to Google and start typing your primary keyword into the search box. Note all of the suggestions that appear, and you will have not only a list of related keywords, but a list of keywords that Google knows are related.

    Once you’ve made your list, hit enter to perform a search for your keyword. Scroll to the bottom of the results page and look at the “Searches related to <your keyword>” section. This will also give you some good ideas for your LSI search terms.
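
    If you’d rather script that autocomplete step, the sketch below hits Google’s unofficial suggest endpoint. Because the endpoint and its response format are undocumented and can change at any time, treat this as a convenience hack rather than a supported API; checking the suggestions by hand in the search box works just as well.

    ```python
    # Pull autocomplete suggestions from Google's unofficial suggest endpoint.
    # Undocumented and subject to change; for quick keyword research only.
    import json
    import urllib.parse
    import urllib.request

    def google_suggestions(keyword):
        url = (
            "https://suggestqueries.google.com/complete/search?client=firefox&q="
            + urllib.parse.quote(keyword)
        )
        with urllib.request.urlopen(url, timeout=10) as resp:
            charset = resp.headers.get_content_charset() or "utf-8"
            data = json.loads(resp.read().decode(charset))
        return data[1]  # the second element holds the list of suggested queries

    print(google_suggestions("professional dog trainer"))
    ```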

    Google’s Keyword Planner

    There have been a few changes, other than the name, when it comes to Google’s new Keyword Planner, but anyone familiar with the old Keyword Tool should be able to navigate through it with no problems.

    You can use it to find LSI keywords, and the process is simple. First, click on “Search for keyword and ad group ideas,” enter your primary keyword in the “Enter one or more of the following” box, and click the “Get ideas” button at the bottom. On the following page, click on the “Keyword ideas” tab to see not just a list of recommended LSI keywords, but their monthly searches, competition, and other metrics that can help you decide which ones to target.

    Paid keyword tools

    Like anything else in SEO, there are plenty of software packages and services you can buy that will help you in your search for LSI keywords. The downside to these is that you are paying for something that you can get for free. The upside is that the training and support that comes along with most of these purchases will help you learn how to find these keywords more easily.

    The secret operator

    Actually, this is no real secret, but if you place a tilde (the squiggly line ~) before your primary keyword in the Google search engine, it will provide you with the results for synonyms to your search term; for example, ~professional dog trainer.

    Reading over the titles and descriptions of the results, you’ll be able to find some good LSI keywords. If you want to leave a term out of the results, add that phrase to the query with a minus sign in front of it. For example: ~professional dog training -dog grooming.

    Like your primary keywords, you need to make sure that you don’t over-do it when it comes to LSI keywords. A few closely-related terms will be sufficient to help your SEO efforts. And like your primary keywords, don’t try to insert LSI keywords into the text where they don’t fit.

    Remember, latent semantic indexing will only help you if you are writing good content for your readers. LSI keywords will give the search engines the information and evidence they need to understand what your content is saying and reward you accordingly.

    Want more information on content marketing? Head over to our comprehensive guide on content marketing here: The All-in-One Guide to Planning and Launching a Content Marketing Strategy.


  7. Penguin 2.0: What Happened, and How to Recover

    1 Comment

    If you’ve spent any time recently in the world of SEO, you’ve probably heard about Penguin 2.0 — Google’s search engine algorithm change that launched on May 22nd, 2013. By the way some SEOs were talking, you’d think it was the Zombie Apocalypse. Whatever it is, you can be sure that it will have a dramatic effect on the web landscape. Here are four important questions and answers about Penguin 2.0.

    What is Penguin 2.0?

    To understand the 2.0 of anything, you need to understand the 1.0. The original Penguin is the moniker for Google’s algorithm update of April 24, 2012. When Google tweaked the algorithm in a big way, 3.1% of all English-language queries were affected by the update. Penguin was carefully designed to penalize certain types of webspam. Here are some of the main factors that Penguin targeted:

    1.  Lots of exact-match anchor text (30% or more of a link profile; see the sketch after this list)

    2.  Low quality site linkbacks, including directories and blogs

    3.  Keyword intensive anchors
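
    As a rough, back-of-the-envelope way to gauge that first factor, you can compute what share of your inbound anchors exactly match a money keyword. The keyword and anchors below are hypothetical, and the 30% line mirrors the figure cited above rather than any official Google threshold.

    ```python
    # Hypothetical check of the exact-match share in an inbound-link anchor profile.
    target_keyword = "cheap dog training"    # hypothetical money keyword

    anchors = [                              # hypothetical inbound-link anchor texts
        "cheap dog training", "cheap dog training", "Acme Dog School",
        "click here", "cheap dog training", "https://example.com",
        "dog training tips",
    ]

    exact = sum(1 for anchor in anchors if anchor.lower() == target_keyword)
    share = exact / len(anchors)

    print(f"Exact-match share: {share:.0%}")  # 43% for this sample
    if share >= 0.30:                         # the rough threshold cited above
        print("Warning: this anchor profile looks heavily over-optimized.")
    ```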

    The aftershocks of Penguin continued long after April 24. Several mini Penguins have been released since then, which is why some SEOs prefer to call the coming change “Penguin 4.” The new Penguin is predicted to do the following:

    • Penalize paid links, especially those without “nofollow”
    • Penalize spamdexing in a more effective way
    • Penalize advertorial spam
    • Tighten penalties on link spamming and directory listings
    • Remove hacked sites from search engine results
    • Boost rankings for sites that have proven authority within a niche

    How much different is it from Penguin 1.0?

    Calling this Penguin 2.0 is slightly misleading. We shouldn’t think of algorithm changes in the same way we think of software updates — better features, faster architecture, whatever. Penguin is not a software update. It’s a change in the way that a search engine delivers results to users.

    Here is a brief explanation of search engines, and how they change. Search engines are designed to give people the most accurate, trustworthy, and relevant results for a specific search query. So, if you type in “how to cook lima beans,” the search engine attempts to find the very best site on the Internet to help you cook your lima beans. Obviously, every lima bean recipe site wants to have the top dog spot on the search engine results page (SERP).

    Some webmasters will cook up clever tricks to do so. Thus, a site with hordes of banner ads, hordes of affiliate links, and barely a word about cooking lima beans could, with a few black hat techniques, climb in the rankings. The search engine doesn’t want that. They want people to have their lima bean recipe — great content — not just a bunch of ads.

    Thus, they change things deep within the algorithm to prevent those unscrupulous tricks from working. But the slithery lima bean site figures out a new way to slip by the algorithm. And the algorithm figures out another way to block them. And so on, and so forth.

    As all of this is happening, several key points emerge:

    1.  Search engine algorithms become more sophisticated and intelligent.

    2.  It becomes less likely for sites to game the system.

    At AudienceBloom, we follow white-hat SEO principles. We understand that there are a few tricks that we could use that might bump your site higher in the short term. However, we don’t engage in those practices. We want our clients to be successful for the long haul, which is why we engage in SEO techniques that are truly legitimate.

    What’s going to happen? 

    Now that Penguin 2.0 is rolling out, one of two things will happen to your site (as Google’s data centers propagate with the algorithm rollout and your rankings are adjusted accordingly):

    1. Nothing.

    2. Your rankings will drop, organic traffic will tank, and your site will begin to flounder.

    If, unfortunately, number 2 strikes, you may not realize it for a few days unless you run a big site with 10,000+ visits a day and 30%+ organic traffic. In order to answer “what’s going to happen” for your site, you need to understand whether or not your site is in violation of any Penguin 2.0 targets. That question is better answered with an entire article of its own, but here are a few warning signs that your site could be targeted by Penguin 2.0.

    • You’ve had unscrupulous link building efforts conducted on your site.
    • You purchased paid links from another site (e.g., advertorials)
    • You rely on spam-like search queries (for example “pay day loans,” “cheap computers,” “free gambling site,” etc.).
    • You have aggressively pursued link networks or listings on unreliable directories.

    Each of the above four points is a common SEO tactic. Some SEOs have become sneakier than the algorithm, which is why Google is making these important changes.

    What should I do to prepare or recover?

    The most important thing you can do right now is to follow Matt Cutts’ advice in his recent video:

    “If you’re doing high quality content whenever you’re doing SEO, this (the Penguin update) should not be a big surprise. You shouldn’t have to worry about a lot of changes. If you have been hanging out in a lot of blackhat forums, trading different types of spamming package tips and that sort of stuff, then this might be a more eventful summer for you.”

    Content is the most important thing, of course, but that’s more of a proactive preparation than an actual defense. Is there a way to actually defend yourself from the onslaught of Penguin 2.0? What if you’ve already been affected by it?

    One important thing you can do right now is to analyze your site’s link profile to ensure that your site is free of harmful links. Then, you should remove and perform disavow requests on the bad links to keep your site’s inbound link profile clean. This is the equivalent of a major surgery on your site, and it could take a long time to recover. Here’s what Matt Cutts said about it on May 13:

    Cutts on Penguin 2.0

    Here are the steps you need to take to recover from Penguin 2.0:

    Step 1. Identify which inbound links are “unclean” or could be hurting your rankings (i.e., causing you to be affected by Penguin 2.0). To do this, you’ll need to perform an inbound link profile audit (or have us do that for you).

    Step 2. Perform major surgery on your site’s link profile in order to make it as clean as possible. This includes removing the links identified in the link profile audit, and then disavowing them as well; a minimal example of building a disavow file follows these steps.

    Step 3. Build new inbound links using white-hat tactics like guest blogging, while abiding by proper anchor text rules with your new inbound links.

    Step 4. Establish a content calendar to keep pushing out high-quality content, engage in social media, and avoid spammy techniques of any kind.
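
    To illustrate the disavow half of Step 2, here’s a minimal sketch that turns an audited list of bad linking domains into a file for Google’s Disavow Links tool. The “domain:” prefix and “#” comment lines follow the documented disavow file format; the domains themselves are hypothetical placeholders, and the real list should come out of your own link profile audit.

    ```python
    # Generate a disavow.txt from a (hypothetical) list of bad linking domains.
    # File format: one "domain:" entry or URL per line; "#" lines are comments.
    bad_domains = [
        "spammy-directory.example",
        "paid-links.example",
        "article-spinner.example",
    ]

    lines = ["# Disavow file generated after a link profile audit"]
    lines += [f"domain:{domain}" for domain in sorted(bad_domains)]

    with open("disavow.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

    print("\n".join(lines))
    ```

    Remember that disavowing is a last resort: only list links you genuinely could not get removed, then upload the finished file through Google’s Disavow Links tool.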

    If you’re looking for SEO help, AudienceBloom is prepared to help. One of our major efforts in the wake of Penguin 1.0 was helping sites to recover their rankings and clean up from their past. If you’ve been hit by Penguin 2.0, now is the time to take action to recover your rankings and search traffic. Contact us for a complimentary assessment and action plan.


  8. How to Prepare for Penguin 2.0: Take Off that Black Hat!

    2 Comments

    What do Penguins, Pandas, and black hats have in common? Lots! Penguin is the most recent set of algorithm changes from Google designed to clean up abuses in the field of SEO, and a new version is due out soon, according to Google’s Web Spam Czar, Matt Cutts. The impending event has marketers, reputation managers, and webmasters scurrying for cover.

    SEO – A Concept Recap

    SEO (search engine optimization) is the relatively newborn public relations field that tries to increase the visibility of websites through the strategic use of keywords, content, and social media interaction, and the industry has grown rapidly in a little over a decade.

    Carried to extremes, as such things always are, black-hat SEO is a subdivision within the field that tries to achieve money-making results in an unsustainable way (i.e., against Google’s webmaster guidelines). It frustrates the very purpose of a search engine, which is to help users find the information they need. Instead, rampant SEO gone amok serves only the needs of online marketers wishing to increase sales for themselves or their clients.

    To readjust the proper balance, Mr. Cutts and his team of “penguin” police have attempted to establish guidelines that will rule out the most abusive practices of black hat SEO.

    BlackHat SEO – Are You Doing It?

    The predecessor to Penguin was Panda, with much the same purpose. Panda included a series of algorithm updates, begun in early 2011. These were aimed at downgrading websites that did not provide positive user experiences.

    Panda updates of the algorithm were largely directed at website quality. The term “above the fold” is sometimes used to refer to the section of a website that a user sees before one begins to scroll down. The term comes from newspapers, which are delivered folded in two. The section that is “above the fold” is the section one sees before opening the paper, or unfolding it.

    Typically, marketers wish to cram as much eye-catching, commercial material as possible into this section, while responsible journalists wish to pack it with the most relevant and useful information.

    Penguin, on the other hand, is targeted more specifically at keyword stuffing and manipulative link building techniques.

    One targeted abuse, keyword stuffing, is not a tasty Thanksgiving delicacy, but the practice of loading the meta tag section of a site, and the site itself, with useless repetition of certain words. Sites can lose their ranking altogether as a result of such stuffing.

    Abusive practitioners of keyword stuffing are not above using keywords that are rendered invisible because their font color is identical with the background color. The user doesn’t see them, but the search engine spider does. This practice was soon discovered, however, and dealt with by the search engines.

    Meta tags are sometimes placed behind images, or in “alternative text” fields, so that the spiders pick them up while they remain invisible to users. Popular or profitable search keywords are sometimes included invisible to humans, but visible to the search crawlers. Very clever, but also soon discovered and dealt with. With Penguin, Google now analyzes the relevance and subject matter of a page much more effectively, without being tricked by keyword-stuffing schemes.

    “Cloaking” is another tactic that was used for a while to present a different version of a site to the search engine’s crawler than to the user. While a legitimate tactic when it tells the crawler about content embedded in a video or Flash component, it became abused as a Black Hat SEO technique, and is now rendered obsolete by the technique of “progressive enhancement,” which tailors a site’s visibility to the capabilities of the user or crawler. Pornographic sites have often been “cloaked” in non-pornographic form as a way of avoiding being labeled as such.

    The first set of Penguin guidelines and algorithms went live in April 2012, and the second main wave is due out any day now (though Penguin has gone through several periodic updates since its initial release). It’s designed to combat an excessive use of exact-match anchor text. It will also be directed against links from sources of dubious quality and links that are seen as unnatural or manipulative.

    The trading or buying of links will be targeted as well. The value of links from directories and bookmarking sites will be further downgraded, as will links from content that’s thin or poor-quality. Basically, the revision in the algorithms will be designed to rule out content that serves the marketer’s ends rather than the users’.

    Advice For SEO Marketers To Stay Clean

    If you are a professional SEO, the questions to ask yourself are:

    • Is this keyword being added in order to serve the customer’s potential needs, or is it designed merely to increase the number of hits? If the latter, then the additional users that would be brought to the site by the keyword are probably not high-quality conversion potential.
    • Is the added SEO material being hidden from the user or the search engine crawler? If so, with what purpose? If that purpose amounts to dishonest marketing practices, the material runs the risk of getting you in trouble with Penguin.
    • What’s the overall purpose of your SEO strategy? If it’s anything other than increasing sales by enhancing user experience, then you may expect an unwelcome visit from Penguin.

    If you’re a user, you’ll very likely not be as conscious of these changes, except inasmuch as they will alter the look of your search results page when you perform a search in Google. Will the new Penguin algorithms cut down on those ubiquitous “sponsored links” or “featured links”? Probably not. But savvy users know how to ignore those links by now, except of course when they turn out to be useful.

    Will the new algorithms enhance the overall usefulness of the search engine experience? Probably, at least marginally, and perhaps even in a major way. The whole field of internet marketing and e-Commerce is changing so rapidly and radically that it’s hard to keep track of the terminology, especially the proliferation of acronyms. But the ultimate goal will be an enhanced user experience.


  9. 5 Ways to Outrank the Big Brands

    Leave a Comment

    One persistent rumor about Google is that the search giant favors big brands—“authority websites” and reputed sites of long standing—when it comes to search engine rankings. Prior to Google’s Penguin algorithm update, it was easy for small websites to rank ahead of big brands by using spammy SEO tactics that were specifically targeted by Penguin. This not only caused the drop of countless small websites battling for top rankings, but also fueled the flames of suspicion that Google favors big brands.

    Many webmasters now suspect—and probably with some justification—that authority is everything in the world of search. Opinions are floating around that Google has bestowed its favor solely upon big brands and websites that have been around a long time. I see that in my line of work, too. Brand websites (or authority websites) often get away with posting what almost anyone would agree is “thin” content.

    Does this mean Google’s algorithm is skewed to favor big brands? While it isn’t possible to prove this point of view, several new websites have arisen and ranked well for highly competitive keywords where brand/authority websites already were operating. So, it can be done. But what’s the secret?

    The solution is to move beyond the general perception of unfairness and work toward becoming a brand yourself. And I would argue that whether Google favors the big guys or not really doesn’t matter. Here’s why.

    1. Identify Opportunities with Long-tail Keywords

    One of the easiest things to observe is that big brands—the seemingly invincible market leaders—rank for only a particular set of keywords. Not every keyword that relates to a niche can be targeted by the top brands, so there’s a considerable array of keywords—mostly long-tail—that are lying about. While ranking for core keywords is probably not going to be possible, that’s ok; long-tail keywords are your opportunity to shine, and they often yield better conversion rates, bounce rates, and time-on-site metrics than core terms.

    Regardless of how big a website is (in terms of brand value) and how long it’s been established, it can’t possibly account for every possible keyword combination in its content strategy. The authority websites are strong because they built a solid content strategy around a few keywords and stuck to it, with plenty of dollars and hours invested in that strategy.

    Key Takeaway: Figure out the keywords for which your big-brand website doesn’t rank well. Then take on the competition with those keywords. There are many ways to find long-tail keywords, but the most basic is to use Google’s Adwords Keyword Tool to perform keyword research.

    2. Authority Isn’t Everything; Content is King

    A key feature of Google’s algorithm is the concept of authority; Author Rank seems to be the mantra of every SEO effort. But, in truth, how many websites with correct author information have you observed still failing to secure the top spot? Some of these, you’ll notice, are from websites that are in the big-brand league.

    For instance, you won’t see Mashable or BuzzFeed ranking right at the top for every “tech-related” keyword. But honestly, they’ve got some really awesome content.

    Key Takeaway: The ranking algorithm is a summation of factors that consist largely of:

    • Inbound links at the page level
    • Social shares at the page level
    • Comments at the page level

    So, while big brands easily get inbound links, social shares, and comments at the page level, this isn’t an algorithmic favor from Google; it’s simply the result of big brands investing time and money into developing and nurturing their reader base.

    Counter this advantage by creating better content around the keywords for which you want to rank, properly optimizing that content from an on-site perspective, and strategically marketing it. Big brands may get links and social shares more easily, but great content will always win out over time.

    3. Social Signals Don’t Play Favorites; Use Them to Your Advantage

    This is the age of social signals. Google and Bing are actively seeking out social signals to value the websites they rank, and this offers a huge benefit for new websites looking to compete against authority websites. What is interesting is that the notion of authority itself is often deciphered through the kind of social shares and signals your website/page sends (besides the other, usual factors of Author Rank and links, of course).

    When it comes to social signals, brand/authority doesn’t really matter. If you provide value, you win. If you provide the most useful and unique content, you win.

    Key Takeaway: Encourage social shares and maintain your social media presence. It’s the perfect intersection between SEO and user engagement that offers you an opening to beat the big guys.

    4. Focus on People, Not Search Engines

    Pause a moment and think about this: what actually makes an authority website? Most big brands have taken years to build a strong, credible following, a fan base, or active user engagement. That boils down to just two things: 1) value and 2) people. When you provide value through your content strategies and your products, you attract people. So your focus should be on people.

    Key Takeaway: Treat SEO as a tool, not as the end goal of your website or brand. The real work involves pushing value outward and enabling it to be shared across a wide platform (ideally, multiple platforms). This will draw in customers over the long run and establish you as a brand and an authority in your own right.

    5. If You Can’t Beat ‘em, Join ‘em

    If you're doing the things recommended by content strategists, user-engagement experts, and community managers, you're already on your way to building your own brand. Why, then, should you worry about whatever bias Google might harbor toward the big brands?

    If the bias isn't really as strong as we think, there's nothing to worry about. If it is a fact of life, it may eventually favor you as you build your own brand. Building a business requires long-term investments of time and money, and that's often what outranking the big brands requires as well. Grow your business strategically, stay patient, and eventually you'll be playing in the same ballpark as the big guys.

    In the age of “authority,” the challenge for new websites is that, by the time you get halfway toward becoming a recognized authority on Google’s radar, many of your competitors may have established a firmer brand, better positioning, and a stronger level of authority.

    That's why you can't waste time competing with them on every keyword they rank for. Instead, build your brand gradually by targeting the areas the bigger brands and authority websites haven't included in their net. From there, it's a matter of repeating the effort, using social channels and strong content strategies, while expanding your territory.


  10. Google vs. Facebook: Who’s Got the Upper Hand?

    Leave a Comment

    We're lining up Google and Facebook against each other. It looks like the match of a lifetime.

    Facebook recently joined the ranks of high-profile tech companies that have gone public. While Facebook's IPO failed to impress, the company is still a huge force to be reckoned with. Facebook is valued at nearly $105 billion and has a solid 900 million-plus members.

    With its membership nearing the 1 billion mark, Facebook appears poised to take on the whole world. Or is it? Is Facebook really invincible? Can it topple the current king of the tech industry — Google?

    Not likely . . . at least not in the foreseeable future.

    Some web experts believe that Facebook lacks the appeal of Twitter, which has been instrumental in breaking some of the biggest news stories in recent history.

    Some even argue that Facebook is going to end up like MySpace, which has entirely lost its appeal. You may recall that MySpace was once the king of the social media hill, after it stole the thunder from rival Friendster. Now MySpace has been reduced to nothing more than a virtual ghost town.

    Lately, we have seen a growing number of people who complain about Facebook selling its members out. Some have sought refuge in Google+, now viewed by many as a worthy successor to Facebook.

    Google is hell-bent on making social interactivity work on Google+. Although at one point the company denied that Google+ is a social network, for users it works, looks, and feels like one.

    Regardless, Google’s command of the online advertising space is staggering, and Facebook wants a piece of that. Who will be the last man standing in this battle?

    The cutthroat rivalry of social networks
    Facebook has maintained a solid reputation as the largest social network in the world. Google, meanwhile, has amassed a huge follower base that is increasingly drawn to Google+ for its fresh approach to social networking. Quite simply, Google+ offers features its users aren't willing to give up.

    Google+ is now home to nearly half a billion members, of whom 100 million were active users as of September 2012. It took Google+ just over a year to grow its user base to that substantial number. By comparison, it took Facebook many years to grow its user base to 900 million.

    At this rate, Google+ could snatch the crown from Facebook in a matter of months.

    While users can do a lot on Facebook (sharing and recommending content, uploading and watching videos, posting and sharing photos), Google+ has some huge advantages over it.

    One advantage Google+ has over Facebook is its seamless integration of products and services that many of us had already been using fairly heavily, such as YouTube, Google Calendar, and Google Drive (formerly Google Docs).

    Since Google+ integrates many of the tools that members already use, it’s much easier to aggregate content and collaborate with other people.

    What I especially like about Google+ is Hangouts, a cool video-conferencing feature that lets as many as 10 people chat via video at once. Hangouts also lets participants watch a YouTube video together. How cool is that? Very cool indeed.

    Right now, for those who want a break from the noise plaguing Facebook, Google+ is an excellent alternative. Google+ is full of interesting images, videos, and content you won't find on Facebook or other social networking sites.

    The advantages of Google’s product lineup
    Google did run into some trouble when it announced that user data would be shared across its products, but the company has done a great job of providing a seamless user experience.

    Whether you're using an Android tablet or an Android phone, you can tap seamlessly into Google's wide range of products, from YouTube, GTalk, Calendar, Google Play, and Maps to a host of other product lines. You can also sync all your favorite Google apps to a single device.

    Personally, I don't think I could do my regular work as productively without the Google apps I've grown so accustomed to.

    But that doesn’t mean I’m not active on Facebook. I still visit the competition to this day, but only when I want to get in touch with friends and keep myself in the loop about what’s going on with them.

    The competitive world of online advertising
    Facebook is still enjoying steady growth in its number of users, so the odds are good it will pass the one-billion mark fairly soon. And after its recent IPO, the company has a pile of cash at its disposal to fund more innovation.

    Facebook has a great deal of resources that may enable it to beat Google in the mobile advertising arena, an area where Google is already playing well.

    With regard to advertising, Facebook could one-up Google in several key areas, such as display ads. However, Google has already gotten a good head start in mobile ads, leaving Facebook struggling to make a decent foray into that space.

    Google is also keen on flexing its muscles to extend its reach into social, an area where it has repeatedly failed in the past (Google Wave and Google Buzz, for example).

    But with Google+, Google seems to be doing things right this time around.

    So Facebook is still the undisputed king of the social media hill, but the Google+ threat is real and looming. With the vast cash reserves Google earns from its extremely profitable advertising model, it could be only a matter of time before Google+ takes the lead from Facebook.

    Conclusion
    What do you think? Who will be left standing, and receive the crown as the true social media giant in 2013 and beyond? Will we still see Facebook dominating the social space next year, or will Google+ finally stake its claim to unchallenged dominance of the Internet?

    We would love to know what you have to say about this. Post your comments below!

