AudienceBloom


Tag Archive: duplicate content

  1. How to Make Sure You’re Not Publishing Duplicate Content


    You’re probably already well aware of how important it is to publish fresh, relevant content on your site on a regular basis. Doing so drives website traffic and inbound links, builds brand awareness, and improves conversion rates.

    And if you’re doing it right, it’ll help your site rank higher in the search results. To put it simply, the more online presence your business has, the more successful it’ll be.

    There are many other benefits to updating your site often besides search rankings, though. It helps with user engagement and branding as well. For instance, if you’re fully connected to the most widely used social networking platforms (like Facebook and Twitter) and constantly putting your business in front of existing and potential clients, your company will benefit in the long run.

    A blog is the most practical format for adding new content to your site on a frequent basis, and one reason blogging can be so crucial to your company’s success is its SEO benefit.

    Google favors sites that have fresh, new content — as long as it’s original and helps readers find what they’re looking for. That means you need to make sure that duplicate content isn’t being published on your site.

    Tools to use in order to avoid publishing duplicate content

    The most important thing to consider when uploading content to your site is that each and every article is original. If your site has duplicate content, you run the risk of having that content get de-indexed by Google. Too much duplicate content could result in harsher, site-wide measures, along with a negative impact on your traffic and sales.

    You may have duplicate content appearing on your site and not even be aware of it. If you have hired a writer to post content to your site, there’s a chance that the person may not be producing original content for you. This writer could well be posting identical copies of the articles you commissioned to a variety of other websites.

    But don’t worry. There’s a great tool you can use to make sure that all your content is 100% original: Copyscape. Copyscape uses innovative technology that can tell you in a matter of seconds whether your content is original, or if it has been stolen. Simply type in the URL of your website on the homepage, and it does all the work for you.

    They also offer an additional service called Copysentry, which sends you email notifications whenever duplicate content related to your site is found on the web. This is great for those who want to keep an eye on their content but don’t have the time or resources to monitor the web on their own.

    Even if someone in your company is responsible for managing blogging activities, it’s still wise to stay in the loop yourself and receive the warnings directly. Remember that people can easily make a mistake and fail to spot duplicate content; the speed and thoroughness of automated scanning makes identification of duplicates far more reliable.

    I recommend upgrading to Copyscape Premium, which gives you the option to check whether or not a piece of work is original before you even publish it on your site.
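
    If you’re comfortable with a little scripting, Copyscape’s Premium service also offers an API (documented at copyscape.com/api) that can automate these pre-publication checks. The sketch below is a rough illustration only: the endpoint is real, but the parameter names are my assumptions and should be verified against Copyscape’s current documentation before use.

    ```python
    # Hedged sketch: a pre-publication originality check against
    # Copyscape's Premium API. The parameter names below are
    # assumptions; confirm them against the official API docs.
    import requests

    API_URL = "https://www.copyscape.com/api/"

    def check_text(username, api_key, text):
        params = {
            "u": username,   # account username (assumed parameter name)
            "k": api_key,    # Premium API key (assumed parameter name)
            "o": "csearch",  # search operation (assumed operation name)
            "e": "UTF-8",    # text encoding (assumed parameter name)
        }
        # The API responds with XML listing any matching URLs it found.
        response = requests.post(API_URL, params=params, data={"t": text})
        response.raise_for_status()
        return response.text

    # Usage with hypothetical credentials:
    # print(check_text("myuser", "mykey", "Draft article text..."))
    ```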

    Conclusion

    To sum up, it’s important that you regularly upload awesome new content to your site in order to boost growth and exposure. All your content must be original, however, or you’ll risk a duplicate content penalty.

    Software programs such as Copyscape can assist you in making sure your site avoids the publication of duplicate content, as well as letting you know when others may be stealing your content.

    If you need help cleaning up your site and ensuring it has nothing that Google is likely to view as duplicate content, or you could use a little assistance in generating the kind of high-quality original content the search engines love, we can help.

    Simply contact us. We’ll get in touch with you ASAP to develop a unique strategy to market your business online and increase your rankings.

  2. SEO for Beginners and Article Marketing: Checking for Duplicate Content


    First there was Google Panda. Now there’s something fiercer: Google Penguin. Both provide good reasons to pay attention to the tips below on SEO for beginners.

    With Penguin, one of Google’s most recent updates, the emphasis shifted even further toward content uniqueness and authority. A site that hosts duplicate content can be penalized, or the duplicate content can be removed from Google’s index altogether.

    But with billions of pages in Google’s index, checking for duplicate content can seem a frightening prospect. Every conscientious content marketer needs to figure out how to do it efficiently, or risk having their sites badly penalized by Google.

    Another very important thing to keep in mind about the need to come up with high-quality and unique content is that copyright laws definitely govern materials that are published online. You’ve got to respect other authors who have sweated and strained to come up with their own original material. For that, they should at least receive attribution.

    So how do you make sure that each piece of content you publish online is not only of excellent quality, but equally important, is unique?

    There are several tools you can use for free in order to check for duplicate content. (Others offer more sophisticated services for a small fee.)

    Google

    Google isn’t just for finding information by searching for certain keywords. You can also use it to check an entire paragraph for duplicate content. Here’s how.

    Copy a line, a sentence, a snippet, or an entire paragraph from what you’ve written and paste it into Google’s search box. Any matching text shows up in bold in the search results.

    So if you’ve searched for a whole paragraph and the results come back with the entire pasted text in bold, your copy will probably be flagged as duplicate content if it gets published online.

    However, Google truncates very long search queries, so you can’t check a whole article in one pass. Go through your content paragraph by paragraph to be really thorough and ultimately safe.
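
    If you check drafts this way regularly, a small script can generate the quoted query for each paragraph. This Python sketch only builds the search URLs; scraping Google’s results programmatically is against its terms of service, so paste the URLs into a browser yourself:

    ```python
    # Sketch: emit one quoted Google query per paragraph of a draft,
    # for manual duplicate-content spot checks.
    import urllib.parse

    def paragraph_queries(text):
        for para in text.split("\n\n"):
            para = para.strip()
            if para:
                yield ("https://www.google.com/search?q="
                       + urllib.parse.quote(f'"{para}"'))

    draft = "First paragraph of your draft...\n\nSecond paragraph..."
    for url in paragraph_queries(draft):
        print(url)  # open each result page and look for bold matches
    ```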

    Copyscape

    For 5 cents per search you can have Copyscape vet an entire piece for you. But if your budget won’t allow that kind of expenditure, you can still use Copyscape for free. The catch with free Copyscape is that you’ll have to publish the content online first to retrieve its URL.

    Copy and paste the URL of your newly published content into Copyscape’s search box. Copyscape then scans the entire interwebs for any copies of the content you’ve just published.

    Copyscape is a reliable tool that many publishers depend on heavily to check for quality and originality. There are other tools very similar to Copyscape that you can use for the same purpose, such as Plagiarism Detect and InterNIC.

    Checking for duplicate content is fairly easy and simple. It’s an indispensable SEO task for beginners, but no one should take it for granted. With the right set of tools, you can comfortably ensure that your content is unique well before you publish it online.

    And by providing your readers with high-quality and unique content, you will have furnished great value.

    Conclusion

    Always strive to create original and high-value content, not just to please your audience, but also to get the search engines’ approval. Taking the time to check your content for duplication on the Internet will pay off in the long run because your readers’ trust in your integrity will rise.

    To find out more about how you can get your business to rank highly with top-notch search engine optimization services, contact us today. We have a wide range of options for your online marketing needs.

  3. Why Duplicate Content is Bad for SEO and What to Do About It


    With the rollout of Google Panda, we have heard sad stories of sites that have been either devalued or removed from Google’s index entirely.

    One of the reasons for the huge drop in some sites’ rankings has been duplicate content — one of the problems that Panda was released to control.

    Most of the sites that have experienced a drastic decrease in rankings were content farms and article directories; that is, sites loaded with thousands of duplicate articles.

    While it had been made clear that duplicate content was one of the primary things Panda frowns on, some content authors breathed a sigh of relief after Google appeared to say that “There’s no such thing as a ‘duplicate content penalty’ ” in a blog post several years ago.

    But duplicate content remains a controversial issue. It has kept bloggers and webmasters nervous about publishing content that could hurt their rankings. Like many other things, there are two sides to the issue. There’s duplicate content that Google allows and there’s the type that hurts your website’s rankings.

    Let’s try to clear up the difference.

    What type of duplicate content can hurt your rankings?
    To determine whether a sample of duplicate content is going to pull down your rankings, first you have to determine why you are going to publish such content in the first place.

    It all boils down to your purpose.

    If your goal is to game the system by republishing a piece of content that has already been published elsewhere, you’re bound to get penalized. The purpose is clearly deceptive and intended to manipulate search results.

    This is what Google has to say about this sort of behavior:

    “Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results.”

    If Google has clear evidence that you are trying to manipulate your search rankings, or that you are practicing spammy strategies to improve rankings and drive traffic to your website, it could result in your site being removed from Google’s index.

    The effects on users
    Publishing duplicate content could also hurt your reputation in the eyes of the users.

    The ultimate goal of the search engines is to provide users with the most valuable, most useful, and most relevant information. If you publish a piece of content that has previously been published elsewhere, your site may not show up for the relevant searches, because search engines tend to show results only from the main content sources.

    This explains why the search engines omit duplicate results to deliver only those that the users need.

    When users read content on your site that they have already seen previously on a site that they trust more, chances are their trust in your site will diminish.

    But is there ever a case where duplicate content is acceptable?

    When duplicate content can be acceptable
    There may be instances when duplicate content is accidental, and therefore should not lead to any penalties.

    One such instance is when a search engine identifies separate URLs within a domain that all point to a single piece of content. An example is the following trio of URLs: http://abc.com, http://www.abc.com, and http://www.abc.com/index.htm. There’s clearly no indication of manipulation or intent to spam in this case.
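
    As an illustration of how such variants collapse to one address, here’s a small Python sketch that normalizes the three URLs above to a single preferred form. The preferred form chosen here is arbitrary, and this is just the general idea, not how any search engine actually canonicalizes URLs:

    ```python
    # Sketch: map common host/path variants of a homepage to one
    # canonical URL so an audit treats them as a single page.
    from urllib.parse import urlparse, urlunparse

    def canonicalize(url):
        parts = urlparse(url)
        host = parts.netloc.lower()
        if not host.startswith("www."):
            host = "www." + host          # arbitrary preference for www
        path = parts.path
        if path in ("", "/index.htm", "/index.html"):
            path = "/"                    # treat index pages as the root
        return urlunparse(("http", host, path, "", "", ""))

    for u in ("http://abc.com", "http://www.abc.com",
              "http://www.abc.com/index.htm"):
        print(canonicalize(u))  # all three print http://www.abc.com/
    ```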

    Another case of legitimate duplication occurs when a piece of content is published in several different formats to cater to specific users. With the explosion of mobile web browsing, content is now published in versions suited to desktop, tablet, and mobile phone users. Publishing a single piece of content in several formats is not subject to any duplicate content penalty.

    Also, keep in mind that there are instances when publishing a copy of a piece of content, in part or in whole, is needed for reference, such as when citing a news source. If the purpose is to reference or to add value to users, such content duplication is not subject to penalties.

    Avoiding duplicate content that provokes the Panda’s wrath
    Simply put, avoiding duplicate content is your best defense against any duplicate content penalties administered by Google Panda. Remember that Google and other search engines strive to provide search results that are unique and of high quality.

    Your goal must therefore be to publish unique and original content at all times.

    However, if duplication cannot be avoided, below are recommended fixes that you can employ to avert penalties:

    Boilerplates. Long boilerplates or copyright notices should be removed from various pages and placed on a single page instead. In cases where you would have to call your readers’ attention to boilerplate or copyright at the bottom of each of your pages or posts, insert a link to the single special page instead.

    Similar pages. There are cases when similar pages must be published, such as separate pages describing your SEO services for small businesses and for big businesses. Avoid publishing the same or nearly identical information on both. Instead, expand on both services and make the information specific to each business segment.

    Noindex. People may syndicate your content. If there’s no way to avoid this, include a note at the bottom of each page of your content asking anyone who republishes it to add a “noindex” meta tag to their copy, to prevent the duplicate content from being indexed by the search engines.

    301 redirects. Let the search engine spiders know that a page has permanently moved by using a 301 redirect. This also tells the search engines to remove the old URL from their index and replace it with the new address. (A minimal sketch of this fix appears after this list.)

    Choosing only one URL. There might be several URLs you could use to point to your homepage, but you should choose only one. When choosing the best URL for your page, be sure to keep the users in mind. Make the URL user-friendly. This makes it easier not only for your users to find your page, but also for the search engines to index your site.

    Always create unique content. Affiliates almost always fall victim to the convenience of ready-made content provided by merchants. If you are an affiliate, be sure to create unique content for the merchant products you are promoting. Don’t just copy and paste.
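
    To make the 301-redirect and single-URL fixes concrete, here is a minimal sketch using Python’s Flask framework. The routes and URLs are hypothetical examples, not a prescription for your site:

    ```python
    # Sketch: a permanent (301) redirect from a retired URL, plus a
    # rel="canonical" hint declaring the one URL you want indexed.
    from flask import Flask, redirect

    app = Flask(__name__)

    @app.route("/old-services-page")
    def old_services():
        # 301 tells crawlers the page moved permanently, so they can
        # replace the old URL in their index with the new address.
        return redirect("/services", code=301)

    @app.route("/services")
    def services():
        # The canonical link element points search engines at the one
        # preferred URL; a syndication partner would use a noindex
        # meta tag instead, as described above.
        return (
            "<html><head>"
            '<link rel="canonical" href="https://www.example.com/services">'
            "</head><body>Our services</body></html>"
        )

    if __name__ == "__main__":
        app.run()
    ```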

    Conclusion
    Whatever your intent is, the best way to avoid getting penalized by Google Panda is to avoid creating duplicate content in the first place. Keep in mind that quality is now at the top of the search engines’ agenda.

    It should be yours too.

  4. Duplicate Content: All Evidence Considered, All Questions Answered


    Duplicate content. One of the most hotly contested and widely shrouded-in-mystery concepts of SEO. I’m going to tackle this concept right here, right now. I decided to write about this topic for two reasons:

    1) I am about to launch a massive Website that I hope to monetize quickly, and I need to know whether syndicating the site’s content is a viable method for obtaining direct referral traffic and backlinks without compromising its organic search traffic (more on this later).

    2) I searched for two hours last night and couldn’t find a definitive conclusion to this question.

    In this blog post, I’m going to address everything you want and/or need to know about duplicate content. This is partially for my own personal future reference as I will undoubtedly face this question again in the future, but also to share with you the fruits of hours upon hours of research, testing, and analysis that I’ve been working on. After all, sharing makes everything more fun, right? OK, take a deep breath. Here goes.

    Duplicate content: What is it?

    Here’s Google’s own definition of duplicate content:

    Duplicate content generally refers to substantive blocks of content within or across domains that either completely match other content or are appreciably similar.

    So basically, there are two types of duplicate content:

    • Duplicate content within the same domain
    • Duplicate content across different domains

    First, let’s cover duplicate content within the same domain.

    Q: Is there a duplicate content penalty?

    Ever since I started getting my feet wet in SEO, this question has swirled around forums and blogs. Somewhere, someone perpetuated the idea that having the same content on page A of your Website as on page B would cause your site to be penalized in the search engine rankings.

    The idea percolated through the internet marketing community because a bunch of spammers realized that when they had a piece of content (i.e., an article) that was getting a lot of search traffic, they could fill up every page of their Website with the same content in order to pull even more traffic from the search engines. Obviously, the same article blatantly duplicated across hundreds of pages within a single domain is a malicious attempt to gain search engine traffic without actually adding any value.

    Google caught on pretty quickly to this method and adjusted its algorithms to detect duplicate content and display only one version of it in the search rankings. Websites that engaged in this blatant activity were de-indexed and cried a river across forums and blogs throughout the internet marketing community. Thus was born the fear of the “duplicate content penalty.”

    However, in the vast majority of cases, duplicate content is non-malicious and simply a product of whichever CMS (content management system) the Website happens to run on. For example, WordPress (the industry-standard CMS) automatically creates “Category” and “Tag” pages that list all blog posts within certain categories or tags. This creates multiple URLs within the domain that contain the same content. For example, this particular post will appear on the root domain (www.jaysondemers.com) while it remains on the first page, on the “single post” version (which you can find by clicking the title of the post), and on the “Categories” and “Tags” pages. So this particular post will be duplicated four times on this domain. But am I doing that intentionally in order to get more search engine traffic? No! It’s simply a product of the automatic, behind-the-scenes work that my CMS (WordPress) is doing.

    Google knows this, and it’s not going to penalize me for it. Millions of Websites run on WordPress and have the exact same thing happening. But what if I were to take this particular post and re-post it 100 times in a row on my blog? That would definitely raise red flags when Google’s crawler sees it, and one of two things will happen at that point.

    1) Google may decide to let me off with a “warning” and simply choose not to index 99 of my 100 duplicate posts, but keep one of them indexed. NOTE: This doesn’t mean my Website’s search rankings would be affected in any way.

    2) Google may decide it’s such a blatant attempt at gaming the system that it completely de-indexes my entire Website from all search results. This means that even if you searched directly for “jaysondemers.com”, Google would find no results.

    So, one of those two scenarios is guaranteed to happen. Which one it is depends on how egregious Google determines your blunder to be. In Google’s own words:

    Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don’t follow the advice listed above, we do a good job of choosing a version of the content to show in our search results.

    This type of non-malicious duplication is fairly common, especially since many CMSs don’t handle this well by default. So when people say that having this type of duplicate content can affect your site, it’s not because you’re likely to be penalized; it’s simply due to the way that web sites and search engines work.

    Most search engines strive for a certain level of variety; they want to show you ten different results on a search results page, not ten different URLs that all have the same content. To this end, Google tries to filter out duplicate documents so that users experience less redundancy.

    So, what happens when a search engine crawler detects duplicate content? (from http://searchengineland.com/search-illustrated-how-a-search-engine-determines-duplicate-content-13980)

    [Diagram: how a search engine determines duplicate content]
    The final word: Duplicate content on the same domain

    The final word is that, unless you are really blatantly duplicating your content across tons of URLs within the same domain, there’s nothing to worry about. One of your URLs on which the duplicated content resides will be indexed and chosen as the “representative” of that URL cluster. When users perform search queries in the search engines, that particular piece of content will display as a result for relevant queries, and the other URLs in the dupe cluster will not. Simple as that.

    However, the other side of the coin is duplicate content across different domains. And that’s a whole different monster. Ready to tackle it? Here we go.

    Duplicate content across domains: What is it?

    Sometimes, the same piece of content can appear word-for-word across different URLs. Some examples of this include:

    • News articles (think Associated Press)
    • The same article from an article directory being picked up by different Webmasters
    • Webmasters submitting the same content to different article directories
    • Press releases being distributed across the Web
    • Product information from a manufacturer appearing across different e-commerce Websites

    All these examples result from content syndication. The Web is full of syndicated content. One press release can create duplicate content across thousands of unique domains. But search engines strive to deliver a good user experience to searchers, and delivering a results page consisting of the same pieces of content would not make very many people happy. So what is a search engine supposed to do? Somehow, it has to decide which location of the content is the most relevant to show the searcher. So how does it do that? Straight from the big G:

    When encountering such duplicate content on different sites, we look at various signals to determine which site is the original one, which usually works very well. This also means that you shouldn’t be very concerned about seeing negative effects on your site’s presence on Google if you notice someone scraping your content.

    Well, Google, I beg to differ. Unfortunately, I don’t think you’re very good at deciding which site is the originator of the content. Neither does Michael Gray, who laments in his blog post “When Google Gets Duplicate Content Wrong” that Google often attributes his original content to other sites to which he syndicates his content. According to Michael:

    However the problem is with Google, their ranking algo IMHO places too much of a bias on domain trust and authority.

    And I agree with Michael. For much of my internet marketing career I have syndicated full articles to various article directories in order to expand the reach of my content while also using it as “SEO fuel” to get backlinks to my Websites. According to Google, as long as your syndicated versions contain a backlink to your original, this will help your case when Google decides which piece is the original. Here’s proof:

    First, a video featuring Matt Cutts, a well-known blogger and the longtime head of Google’s webspam team:

    The discussion on syndication starts at about 2:25. At 2:54 he says you can tell people that you’re the “master of the content” by including a link from the syndicated piece back to your original piece.

    More evidence:

    In cases when you are syndicating your content but also want to make sure your site is identified as the original source, it’s useful to ask your syndication partners to include a link back to your original content.

    And finally:

    Syndicate carefully: If you syndicate your content on other sites, Google will always show the version we think is most appropriate for users in each given search, which may or may not be the version you’d prefer. However, it is helpful to ensure that each site on which your content is syndicated includes a link back to your original article. You can also ask those who use your syndicated material to use the noindex meta tag to prevent search engines from indexing their version of the content.

    Now, what I find interesting in this last quote from Google is that they actually admit that the piece of content they choose may not be the right one. In my experience, Google is very likely not to pick the right one if the site that originated the content is relatively young or has a low PageRank. So this raises the next big issue:

    How do I get ranked as the original source for the content I syndicate?

    I’ve syndicated tons of my articles to EzineArticles only to see Google credit EzineArticles with higher search rankings for my content, even when I made absolutely sure that Google had indexed my content at its original location prior to submitting it to Ezine. Vanessa Fox, who previously worked at Google and built Webmaster Central, attempts to tackle this question in her blog post, “Ranking as the Original Source for the Content you Syndicate.”

    Unfortunately, she concludes that, basically, there’s nothing you can do to guarantee it. She suggests:

    Create a different version of the content to syndicate than what you write for your own site. This method works best for things like product affiliate feeds. I don’t think it works as well for things like blog posts or other types of articles. Instead, you could do something like write a high level summary article for syndication and a blog post with details about that topic for your own site.

    Rewriting a piece of content is not my definition of syndication. That’s just rewriting an article in different words and distributing it. Almost all information circulating on the Web has already been posted elsewhere anyway; even this blog post is composed of a ton of information that I found elsewhere on the internet. So to me, writing a new article that says the same thing in different words and distributing that to syndication partners isn’t really syndication of the original article. It’s syndication of a different article. So we’re still left with the question: what are the effects of syndicating the exact same content that already appears on your Website? Can it harm your rankings in any way?

    To me, this is the most important question surrounding duplicate content. Before I jump into that analysis, let’s consider an important foundational question.

    Why would I want to syndicate the exact same content from my Website elsewhere?

    The internet really operates on a simple economy of give-and-take. The two commodities exchanged are unique content and backlinks. Unique content is defined as content that Google does not identify as duplicate. There are various theories about exactly where Google draws the line in deciding whether content is duplicate, but one figure I’ve heard tossed around a lot is 30%. According to this 30% theory, if Google identifies that more than 30% of a particular piece of content appears elsewhere on the internet, it will be categorized as duplicate. I can’t attest to the accuracy of this figure, so take it for what it’s worth. There are also various duplicate-content detection tools, such as Copyscape, designed to help Webmasters check whether their content has been stolen and duplicated across other domains. These are also good tools for determining whether your content is likely to be considered duplicate by Google. And that’s what really matters.
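
    For illustration only, here’s a crude Python sketch of how a percent-overlap figure like that 30% could be computed, using word “shingles” and Jaccard similarity. This is emphatically not Google’s algorithm; it just makes the idea of measuring overlap concrete:

    ```python
    # Toy overlap estimate: compare sets of word 5-gram "shingles"
    # from two texts and report their Jaccard similarity.
    def shingles(text, n=5):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(a, b, n=5):
        sa, sb = shingles(a, n), shingles(b, n)
        if not sa or not sb:
            return 0.0
        return len(sa & sb) / len(sa | sb)  # 0.0 (disjoint) to 1.0 (identical)

    original = "the quick brown fox jumps over the lazy dog every day"
    copy = "the quick brown fox jumps over the lazy dog every night"
    print(f"estimated overlap: {overlap(original, copy):.0%}")
    ```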

    But I’ve gotten a bit off track; let’s get back to the discussion of why you’d want to syndicate content. I mentioned the internet economy of backlinks and unique content. Unique content is desirable because it will be indexed by Google, giving that particular Website another instance of its “name in the hat,” so to speak. Basically, the more content a Website has indexed, the more chances it has of being returned in Google’s search results for relevant queries.

    But what about backlinks? Backlinks are simply links from any other Website to your own. Search engines count a link from one Website to another as a “vote,” and these votes help determine authority and relevance in Google’s search results. In fact, backlinks are thought to be the single most important factor in determining how your Website ranks for a given query. There are a ton of factors that play into backlinks and how much their “vote” counts for, but I’ll get into that in a future blog post. For now, what you need to know is that backlinks are valuable because they improve your rankings in the search engines, and that means more traffic to your Website.
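
    As a toy illustration of the “vote” idea, here’s a minimal power-iteration PageRank sketch. Real ranking uses many more signals; this only shows how links redistribute authority among pages:

    ```python
    # Toy PageRank: each page's rank is shared among the pages it
    # links to, so inbound links act like weighted votes.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
            rank = new
        return rank

    # Hypothetical three-site web: A and B both "vote" for C.
    web = {"A": ["C"], "B": ["C"], "C": ["A"]}
    print(pagerank(web))  # C ends up with the highest rank
    ```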

    OK, so now we’ve covered the basic commodities of the micro-economy of the Web. This is important because when you syndicate your content, assuming you have included a backlink in it linking back to your original source, you get a backlink from each and every Website to which your content was syndicated. Awesome, right?

    Maybe not. The first question is how highly Google values a backlink from a piece of content that is known to be duplicate content. Frankly, I don’t know. On the one hand, it’s easy to syndicate content to a bunch of auto-accept blogs if your sole goal is to get backlinks, and this says nothing about the quality of your content or how much the originator of the content should be rewarded. On the other hand, syndication can also be a great indicator of the quality of a particular piece of content. After all, why would it be syndicated so much if it weren’t really great?

    In the end, Google probably has signals for how it answers these two questions, but the real answers are probably only known by the software engineers that coded the algorithm. Many folks try to boost the value of their syndicated content by engaging in content “spinning” which is perfectly legitimate as long as it’s not the garbage that’s often spouted out by automated software. I’ll go into more depth about content spinning in a later post. For now, we’re still trying to answer the question of whether syndicating content exactly as it appears on your own Website is a good idea or a bad idea. After careful testing I’ve come to the following conclusion:

    .

    …….

    *drumroll*

    ……

    *more drumroll*

    …..

    Maybe.

    I know, I know. That’s not the answer you wanted. Allow me to explain.

    I own over 50 domains, and I like to do a lot of testing across them. I spent a couple hours last night performing searches for my content that I had syndicated to various other blogs and directories. And what I found was both disappointing and encouraging.

    The disappointing part was that, in many cases, my syndicated content outranked my own original content. Even when a site that ranked higher than mine for my own content had a backlink to my site, the originator of the content, it was as if Google completely ignored that backlink and still gave more credit to the other sites. In some cases, my own site’s version of the content was nowhere to be found, obviously falling into Google’s duplicate URL cluster and being filtered out of the search results. This means that by syndicating my content, I actually, in effect, got my own content de-indexed.

    This is pretty much the worst possible scenario, but it happened. Sometimes, at least. And that’s the weird part; sometimes, my content was recognized as the original content and received the highest ranking. With other sites and pieces of content, it ranked second behind a high-authority site, usually EzineArticles. So I have to conclude the following:

    When you syndicate your content, it might:

    • Cause your own, original content source (i.e., your Website) to be, in effect, de-indexed for that piece of content
    • Cause your site to rank highly for queries relevant to your content, but not highest
    • Cause your site to rank highest for your content

    Well, that pretty much covers all the bases, doesn’t it? These are all the results I observed when looking at my own sites and the results of syndicating articles that originated on those sites. Basically, I can conclude that Google just doesn’t always get it right. And, Google doesn’t like to do anything with any sort of consistency. The last thing they want is for us SEOs to completely figure out their algorithm, because once that happens, the integrity of their search results will be destroyed as folks manipulate them all to hell.

    The encouraging part was discovering that the backlinks from the syndicated content definitely helped my sites’ rankings for my target keywords. So there is definitely at least some value in backlinks originating from content that Google has labeled “duplicate.”

    So, the final question remains: Should I syndicate my content?

    Let’s look at the benefits of doing so:

    Benefits of syndicating your content:

    • Get backlinks from lots of sites
    • Expand your reach and brand awareness to highly-trafficked sites
    • Get direct traffic via referrals from backlinks in your syndicated content
    • Much cheaper way of getting backlinks than writing brand-new content (or re-writing existing content) for distribution/syndication

    Drawbacks of syndicating your content:

    • The sites to which you syndicate might actually outrank you for your own content if they have higher authority than your own site, even if you follow Google’s advice and include a backlink to the original source of the content
    • Google might group the URL on which your content resides with the rest of the duplicates, hiding it from search engine results pages (effectively de-indexing it)

    So, in the end, syndicating your content is risky. You can definitely get the best of both worlds if Google decides your site is the originator of the content, thereby rewarding your content with the top position in the search results and also getting all the juicy backlinks that play into your overall rankings for specific keywords. But if Google gets it wrong (and it does, quite often, contrary to what they might think), you risk having your content never rank for relevant search engine queries.

    And this really worries me, because I’ve always held the opinion that there’s nothing someone else can do to harm the rankings of a particular Website. After analyzing these results, I fear I’ve found a loophole in my own argument: if someone visits my Website, copies all my content, and syndicates it around the Web, it’s possible that the sites to which my content was syndicated will actually rank higher for it than my own site. Google tries to address this problem here as well as in the Matt Cutts video:

    In most cases a webmaster has no influence on third parties that scrape and redistribute content without the webmaster’s consent. We realize that this is not the fault of the affected webmaster, which in turn means that identical content showing up on several sites in itself is not inherently regarded as a violation of our webmaster guidelines. This simply leads to further processes with the intent of determining the original source of the content—something Google is quite good at, as in most cases the original content can be correctly identified, resulting in no negative effects for the site that originated the content.

    Again, unfortunately I have to point out that in my own experience, repeatedly, I’ve seen my own content rank worse than the sites to which it was syndicated. So even though Google thinks it’s good at identifying the original source of the content, my data suggest otherwise. In time, we can only hope that Google improves this aspect of its algorithm; there’s certainly nothing more we can do as Webmasters. Instead, you just have to understand the benefits and drawbacks of syndication and decide whether you’re comfortable with taking on the risks of having Google wrongly identify ownership of your content.

    Here are a couple tips to minimize the risk of Google getting it wrong (in theory):

    • Always post new content to your own Website first, and wait to syndicate it elsewhere until Google has crawled and indexed your content. You can check whether a particular page has been indexed by performing a search query for your exact URL, in quotes. If the search returns the correct result (i.e., not zero results), then it has been indexed. Another neat trick is to randomly select 10-12 words from your content and search for that string, again in quotes. You wouldn’t think it, but the likelihood that any 10-12 words in a specific sequence appear elsewhere on the Web is extremely small. Try it now: copy and paste a random sentence from this paragraph into Google, surround it in quotes, and see how many results you get. You will probably find only this URL as a result, unless this article has been syndicated (this is also a great way to check which sites have picked up your content when you syndicate it). A small script for both checks appears after this list.
    • Always include a backlink in your syndicated version to the original content source URL. Google says this is the way to do it right, but it’s still not a surefire thing. Nonetheless, it certainly can’t hurt.
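
    Here is a minimal Python sketch of both checks described above. It only builds the quoted query URLs; scraping Google’s results programmatically is against its terms of service, so open them by hand. The example URL is hypothetical.

    ```python
    # Sketch: build quoted Google queries to check whether a page is
    # indexed and whether a random phrase from it appears elsewhere.
    import random
    import urllib.parse

    def exact_url_query(url):
        # Searching for the exact URL in quotes shows whether it's indexed.
        return "https://www.google.com/search?q=" + urllib.parse.quote(f'"{url}"')

    def random_phrase_query(text, length=12):
        # A random 10-12 word run in quotes is near-unique on the Web.
        words = text.split()
        start = random.randint(0, max(0, len(words) - length))
        phrase = " ".join(words[start:start + length])
        return "https://www.google.com/search?q=" + urllib.parse.quote(f'"{phrase}"')

    article_text = ("Always post new content to your own Website first, "
                    "and wait to syndicate it elsewhere.")
    print(exact_url_query("https://www.example.com/my-post"))  # hypothetical URL
    print(random_phrase_query(article_text))
    ```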

    What about taking Vanessa’s suggestion and re-writing your content before syndicating it?

    This would definitely solve the problem of possibly getting your own content essentially de-indexed when Google wrongly attributes content ownership, but there are some major problems with it too:

    • It’s really expensive if you have a lot of content. Think about how much time it would take to rewrite every article you have. This post alone is over 4,000 words and took me more than three hours to type! You could outsource the rewriting to a service like Human Rewriter, but at around $4 per 500 words, that adds up quickly across a large content library.
    • You are still distributing content that is topically themed around the same keywords as your original content, so it’s not a stretch to think that the rewritten content would still outrank your original content for relevant search queries, especially on high-authority sites such as EzineArticles.

    In the end, it all comes down to testing on a massive scale, getting solid data and making decisions based on that data. So here’s what I’m going to do. I’m going to run a huge test and then update this post with my results. At the beginning of the post I mentioned that I am soon launching a massive Website with tons of unique content. I’m going to syndicate it all, completely unedited, as far and wide as I possibly can. As I do so, I’ll monitor traffic sources to see what keywords people are using to find my content. Then, I’ll replicate those keyword queries in Google and see where my site ranks in the search results. This should be the definitive test for the merits of syndication.

    Thanks for sticking with me through this post! Check back soon for updates.

    Further reading on duplicate content:

    Google Webmaster Central

    Demystifying the Duplicate Content Penalty

    Duplicate content due to scrapers

    Ranking as the original source for content you syndicate

    When Google gets duplicate content wrong

    How a search engine determines duplicate content
