SEO is not what we can call an exact science. SEO experts and webmasters often disagree on how to get a website ranked faster, or higher, in search results (SERPs). Site age, content, links, speed, quality, freshness and validation all come into play. One thing everyone agrees on, though, is that generally speaking, the more backlinks a website has, the better its positioning in Google and the other search engines. How to obtain these backlinks, what kind, from where, how many, and many other details is where we find a plethora of opinions, software utilities and techniques, ranging from traditional manual link building to the more sophisticated and controversial black hat and spamming techniques.
In this article I will explain how to use one of the most popular backlink-building tools on the market, ScrapeBox. At its core this utility is basically a spamming tool, but before you decide that for this reason alone you should avoid it, please read on: ScrapeBox is a serious tool that can be used for many different things, not just spamming.
Two things I want to say about this software: first, I am not in any way involved with the authors; second, ScrapeBox is very intelligent, very well made, constantly updated and well worth the little money it costs. It is actually a pleasure to use, unlike many SEO utilities on the market. Please do not try to obtain this software illegally; purchase it instead, because it is definitely worth the investment if you are serious about building your own arsenal of SEO tools.
The interface is slightly intimidating at first, but it is in fact quite easy to navigate. The design follows what the software does in a semi-hierarchical order, divided into panels. From the top left, these are: 1) Harvesting, where you find blogs of interest to your niche; 2) Harvested URLs management; 3) Further management. From the bottom left we have: 4) Search engines and proxies management; 5) The ‘action’ panel, i.e. comment posting, pinging and related management. So it is quite easy to understand what to do the first time you run the program. In the following paragraphs I will give a basic walkthrough, so please stay with me and read on.
First you want to find proxies. These are necessary so that search engines such as Google do not think they are receiving automated queries from the same IP, and also, since ScrapeBox has an internal browser, so that you can browse and post anonymously. Clicking on Manage Proxies opens the Proxy Harvester window, which can quickly find and verify multiple proxies. Of course, good-quality proxies are also offered for sale on the web, but the proxies that ScrapeBox finds are generally good enough, although they must be regenerated very often. Notice that we haven’t even started yet and we already have a proxy finder and anonymous browsing. See how individual parts of ScrapeBox are worth the price of the software alone, and what I meant when I said that you can use this program for many different things?

Once verified, the proxies are transferred to the main window, where you can also select the search engines you want to use and (very nice) the time span of the returned results (days, weeks, months etc.). After this first operation, you go to the first panel, where keywords and an (optional) footprint search can be entered. For example, imagine we want to post on WordPress blogs related to a particular product niche. We can right-click and paste our list of keywords into the panel (we can also scrape the keywords with a scraper or a wonder wheel; in fact, ScrapeBox is also a great keyword utility), then select WordPress and hit Start Harvesting. ScrapeBox will start looking for WordPress blogs related to this niche. ScrapeBox is fast, and getting huge lists of URLs does not take long. The list goes automatically into the second panel, ready for some trimming. But let’s stay in the first window for a moment. Obviously, you can look for other kinds of blogs (BlogEngine etc.), but more importantly, you can enter your own custom footprint (in combination with your keyword list).
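To make the proxy-verification step above concrete, here is a minimal Python sketch of what a proxy check amounts to. ScrapeBox’s internals are not public, so the function names and the test URL are my own illustration: parse the usual `ip:port` list format, then attempt a fetch through each candidate.

```python
import urllib.request

def parse_proxies(text):
    """Parse 'ip:port' lines (the usual proxy-list format), skipping blanks and junk."""
    proxies = []
    for line in text.splitlines():
        line = line.strip()
        if line and ":" in line:
            proxies.append(line)
    return proxies

def proxy_works(proxy, test_url="http://www.example.com", timeout=10):
    """Return True if a request routed through the proxy succeeds."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": f"http://{proxy}"})
    )
    try:
        with opener.open(test_url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

raw = """
12.34.56.78:8080
98.76.54.32:3128

not a proxy line
"""
candidates = parse_proxies(raw)
# working = [p for p in candidates if proxy_works(p)]  # network call; slow
```

A real harvester would run the checks concurrently and re-verify often, since public proxies die quickly, which is exactly why ScrapeBox makes regenerating the list a one-click operation.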
Clicking on the tiny down arrow reveals a selection of pre-built footprints, but you can also enter entirely new footprints in the empty field. These footprints follow the same advanced syntax Google uses, so if you enter, for example: intext:”powered by wordpress”+”leave a comment”-”comments are closed” you will find WordPress blogs open to comments. Do not forget the keywords, which you can also type on the same line. For example, a footprint like this one: inurl:blog “post a comment” +”leave a comment” +”add a comment” -”comments closed” -”you must be logged in” +“iphone” is perfectly acceptable and will find sites with the term blog in the URL, where comments are not closed, for a keyword such as iPhone. One last thing before we move on to the commenting part: you can also get very good quality backlinks if you register on forums rather than posting/commenting; in fact even better ones, because you can have a profile with a dofollow link to your website. For example, typing “I have read, understood and agree to these rules and conditions” + “Powered By IP.Board” will find all the Invision Power Board forums open for registration! Building profiles requires some manual work of course, but using macro utilities such as RoboForm greatly reduces the time. FYI, the biggest forum and community platforms are:
vBulletin –> “Powered by vBulletin” 7,780,000,000 results
(keywords: register or “In order to proceed, you must agree with the following rules:”)
PhpBB –> “Powered by phpBB” 2,390,000,000 results
Invision Power Board (IP Board) –> “Powered By IP.Board” 70,000,000 results
Simple Machines Forum (SMF) –> “Powered by SMF” 600,000 results
ExpressionEngine –> “Powered By ExpressionEngine” 608,000 results
Telligent –> “powered by Telligent” 1,620,000 results
Please notice the number of results you can get: literally billions of pages waiting for you to add your links! You can easily see how things can get really interesting with ScrapeBox, and how powerful this software is.
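The footprint-plus-keyword mechanics described above boil down to generating one search query per keyword. Here is a hedged Python sketch of that idea; the function name and the exact footprint are my own illustration, not ScrapeBox’s actual code.

```python
# A WordPress footprint like the ones discussed in the article.
FOOTPRINT = 'intext:"powered by wordpress" "leave a comment" -"comments are closed"'

def build_queries(footprint, keywords):
    """Append each keyword to the footprint, producing one search query per keyword."""
    return [f'{footprint} "{kw.strip()}"' for kw in keywords if kw.strip()]

queries = build_queries(FOOTPRINT, ["iphone", "iphone cases", ""])
# Each query would then be sent to the selected search engines;
# ScrapeBox rotates engines and proxies while doing this.
```

Empty keywords are skipped, so a pasted keyword list with blank lines does not produce junk queries.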
It is clear that the harvesting panel is where most of the magic happens; you should spend some time playing with it and, above all, be creative and intelligent. For example, you could check your own site(s) to see the number of backlinks (or indexed pages, with the site:yourdomain operator). Also, what about spying on your competitors’ backlinks? You could enter link:competitorsite.com to find the sites that link to it, then get the same backlinks yourself from the same sites to give you an edge. Sadly, Google’s link: operator does not return all the links (Matt Cutts of Google explains why on YouTube), but it is still very useful. (ScrapeBox, however, helps us once again with a useful add-on called Backlink Checker, which finds all the links to a site from Yahoo Site Explorer. You can export and add these to the links from the link: operator, then use the Blog Analyzer to post on your competitors’ links and get their same rank!) As I said, be as creative as you can.
We are now looking at the second panel (URL’s Harvested), where ScrapeBox automatically saves our results. Also automatically (if you want), duplicate URLs are deleted. After spending much time and attention harvesting and testing different footprints, these URLs are obviously precious to us, and ScrapeBox offers a large number of functions to manage them. We can save and export the list (txt, Excel etc.), compare it with previous lists (to delete already used sites, for example), and, most importantly, check the quality of the sites, i.e. whether they are indexed in Google/Bing/Yahoo and their PageRank. We can, for example, keep only sites within a certain PageRank range (the PageRank checker is incredibly fast). Notice that in the footprint we can also use the site: operator, for example to find .edu and .org sites only. This and the PageRank checker allow us to harvest links of really excellent quality. There is also a function to grab email addresses from the sites. We can also right-click and visit a URL via our default browser or the internal (proxied) one. For example, imagine that you have found some high-PageRank .edu or .org sites open for comments; you definitely do not want to automatically post generic content on these, so you may decide to post manually using the internal browser. In fact, for many users, ScrapeBox ends here, i.e. most people do not use the automatic commenter at all. I do agree with this approach, for in my mind a single PR7 backlink with a good anchor text is better than hundreds of generic links. Then again, as said in the beginning, there are many opinions on this. ScrapeBox does offer the option to build thousands of automatic backlinks overnight. Is this effective? To me, not much. Is ScrapeBox bad because of this? No, because it also gives you the capability for much more creative backlinking (and SEO and research in general). I would like to open a parenthesis on this.
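The duplicate-removal step above is simple enough to sketch. ScrapeBox offers both “remove duplicate URLs” and “remove duplicate domains” style trimming; here is a minimal Python illustration of the two (function names are mine):

```python
from urllib.parse import urlparse

def dedupe_urls(urls):
    """Drop exact duplicate URLs, keeping first occurrence and original order."""
    seen, out = set(), []
    for u in urls:
        if u not in seen:
            seen.add(u)
            out.append(u)
    return out

def dedupe_domains(urls):
    """Keep only the first harvested URL per domain."""
    seen, out = set(), []
    for u in urls:
        host = urlparse(u).netloc.lower()
        if host not in seen:
            seen.add(host)
            out.append(u)
    return out

harvested = [
    "http://blog-a.com/post-1",
    "http://blog-a.com/post-2",
    "http://blog-b.org/hello",
    "http://blog-a.com/post-1",   # exact duplicate
]
```

Deduping by domain is the stricter option: you keep at most one backlink opportunity per site, which matters because search engines tend to discount many links coming from the same domain.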
First, the much-debated Google “sandbox”: the rumour that if you build 3,000 links to a site overnight, Google will drop the website from search results on suspicion of “spamming”. In my opinion this is obviously not true, for one could do the same to a competitor and ruin them. Second, programs like ScrapeBox keep selling thousands of copies, while the number of blogs open for un-moderated commenting is limited and heavily targeted, especially in competitive niches. This means that blind commenting is basically useless. You can see that yourself just by browsing: there are thousands of worthless blogs with pages and pages of fake comments such as “thank you for this”, “this has been helpful” and so on. Having said that, the commenting panel is an important function in ScrapeBox, useful for other things too, so let’s see how it works.
On the right side of the lower panel you can see a number of buttons; these allow you to insert the details necessary for commenting. These are basically text files containing (from the top) fake names, fake email addresses, your own (real!) website URL(s), fake (spinnable) comments, and, in the last one, the harvested URLs (clicking the List button above will pass the list here). ScrapeBox comes with a small number of fake names, email addresses and even comments. Of course, it is up to you to create more (they are chosen randomly), and also to write some meaningful comments which, in theory, should make the comment look real. This is important if the blog is moderated, for the moderator should believe that the comment is pertinent. On my own blogs I can personally tell if a comment is real or fake, even if it is half a page long. Many do not even bother, hence the Internet is full of the aforementioned “Thank you for this!” stupid comments. What to do here is entirely up to you. If you have the inclination, write a good number of meaningful comments. If you don’t, go ahead with “Thank you for this!” and “Great pictures!”. Of course, there is no guarantee that these comments will stick. (By the way, you could even increase your own blog’s popularity by posting fake comments to your site.) After filling in these text tabs, the last operation left is the actual commenting. This is easily done by selecting the blog type chosen earlier during harvesting and then Start Posting. Depending on the blog type and the number of sites, this can take a while, especially if you use the Slow Poster. A window will open with the results in real time. Unfortunately you will see many failures, for ScrapeBox diligently tries them all but there are many reasons for a failure (comments closed, site down, bad proxy, syntax and others). You can, however, leave the program running overnight and see the results the day after.
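“Spinnable” comments conventionally use spintax, the `{option one|option two}` syntax, so that each posted comment comes out slightly different. As a sketch of how a spinner resolves that syntax (this is the general technique, not ScrapeBox’s own code):

```python
import random
import re

# Matches the innermost {a|b|c} group (no nested braces inside).
_SPIN = re.compile(r"\{([^{}]*)\}")

def spin(text, rng=random):
    """Resolve spintax by repeatedly replacing the innermost {a|b|c}
    group with one randomly chosen option, so nested groups also work."""
    while True:
        m = _SPIN.search(text)
        if m is None:
            return text
        choice = rng.choice(m.group(1).split("|"))
        text = text[:m.start()] + choice + text[m.end():]

comment = spin("{Great|Nice|Interesting} post, {thanks for sharing|keep it up}!")
```

Each call produces one of the possible variants, which is exactly why a well-written spintax comment can yield hundreds of non-identical postings from a single template.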
At the end of the “blast”, you will have several options, including exporting the successful site URLs (and pinging them), checking if the links stick, and a few others. Speaking of pinging, this is another great feature possibly worth the price by itself, for you can artificially increase your traffic (using proxies, of course) for affiliate programs, referrals, articles etc. There is also an RSS function which allows you to send pings to multiple RSS services, useful if you have a number of blogs with RSS feeds that you want to keep updated.
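For context, blog pinging is usually the standard weblogUpdates XML-RPC call. A minimal Python sketch of pinging several services follows; the endpoint URLs are historical examples and may no longer respond, and the function name is mine:

```python
import xmlrpc.client

# A couple of historical weblogUpdates endpoints; many such services
# have come and gone over the years, so treat these as placeholders.
PING_SERVICES = [
    "http://rpc.pingomatic.com/",
    "http://rpc.weblogs.com/RPC2",
]

def ping_all(blog_name, blog_url, services=PING_SERVICES):
    """Send a standard weblogUpdates.ping to each service; collect the replies."""
    results = {}
    for endpoint in services:
        try:
            server = xmlrpc.client.ServerProxy(endpoint)
            results[endpoint] = server.weblogUpdates.ping(blog_name, blog_url)
        except Exception as exc:
            results[endpoint] = f"failed: {exc}"
    return results

# ping_all("My Blog", "http://example.com/")  # network call; run deliberately
```

ScrapeBox wraps the same idea in a batch interface and routes the calls through your proxies, so a long list of URLs can be pinged unattended.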
This covers the basic functions of the main interface. What’s left is the top row of menus. From here, you can adjust many of the program defaults and features, such as saving/loading projects (so you don’t have to load comments, names, emails, website lists etc. separately one by one), adjusting timeouts, delays and connections, Slow Poster details, using/updating a blacklist and more. There is even a cool email and name generator, a text editor, and a captcha solver (you have to subscribe to a paid service separately, though; notice that captchas show up only when/if you browse, i.e. there is no annoying captcha solving during normal use and automatic posting). But an even more useful option is the add-ons manager, where (as if it weren’t enough!) you can download quite a number of really useful extensions (all free, and growing in number). Among them: the Backlink Checker (already mentioned) and the Blog Analyzer, which checks whether a particular blog is postable from ScrapeBox (maybe one of your competitors’, so you can get the same backlinks). There is also a Rapid Indexer with a list of indexer services already provided, plus some minor add-ons such as a DoFollow checker, a link extractor, a WhoIs scraper and many others, even including chess!
Backlinking is the most important part of search engine optimization, and ScrapeBox can consistently help with this difficult task, as well as many others. It is obvious that the author knows a great deal about backlinking and SEO, and about how to make (and maintain) great software. ScrapeBox is a highly recommended purchase for anyone serious about search engine optimization. Despite being known as a semi-automated solution to “build thousands of backlinks overnight”, it actually requires knowledge, planning and research, and it will perform better in the hands of creative and intelligent users.