6 Actionable Web Scraping Hacks for White Hat Marketers

I'm an "SEO" with 7+ years experience; founder of The SEO Project; "link building" enthusiast; regular Ahrefs contributor; avid drinker of red wine; self-proclaimed steak expert; and all-round cool guy. I'm also shorter than you (probably).

Article stats

  • Referring domains 16
Data from Content Explorer tool.
    Have you ever used a program like Screaming Frog to extract metadata (e.g. title/description/etc.) from a bunch of web pages in bulk?

    If so, you’re already familiar with web scraping.

    But, while this can certainly be useful, there’s much more to web scraping than grabbing a few title tags—it can actually be used to extract any data from any web page in seconds.

    The question is: what data would you need to extract and why?

    In this post, I’ll aim to answer these questions by showing you 6 web scraping hacks:

    1. How to find content “evangelists” in website comments
    2. How to collect prospects’ data from “expert roundups”
    3. How to remove junk “guest post” prospects
    4. How to analyze performance of your blog categories
    5. How to choose the right content for Reddit
    6. How to build relationships with those who love your content

    I’ve also automated as much of the process as possible to make things less daunting for those new to web scraping.

    But first, let’s talk a bit more about web scraping and how it works.

    A basic introduction to web scraping

    Let’s assume that you want to extract the titles from your competitors’ 50 most recent blog posts.

    You could visit each website individually, check the HTML, locate the title tag, then copy/paste that data to wherever you needed it (e.g. a spreadsheet).


    But, this would be very time-consuming and boring.

    That’s why it’s much easier to scrape the data we want using a computer application (i.e. web scraper).

    In general, there are two ways to “scrape” the data you’re looking for:

    1. Using a path-based system (e.g. XPath/CSS selectors);
    2. Using a search pattern (e.g. Regex)

    XPath/CSS (i.e. path-based system) is the best way to scrape most types of data.

    For example, let’s assume that we wanted to scrape the h1 tag from this document:

[Image: a sample HTML document with an h1, a ul of fruit, and a second ul of list items]

    We can see that the h1 is nested in the body tag, which is nested under the html tag—here’s how to write this as XPath/CSS:

    • XPath: /html/body/h1
    • CSS selector: html > body > h1
    Sidenote.
Because there is only one h1 tag in the document, we don’t actually need to give the full path. Instead, we can just tell the scraper to find all instances of h1 throughout the document with “//h1” for XPath, and simply “h1” for CSS.

    But what if we wanted to scrape the list of fruit instead?

[Image: the sample document’s unordered list of fruit]

    You might guess something like: //ul/li (XPath), or ul > li (CSS), right?

    Sure, this would work. But because there are actually two unordered lists (ul) in the document, this would scrape both the list of fruit AND all list items in the second list.

    However, we can reference the class of the ul to grab only what we want:

• XPath: //ul[@class='fruit']/li
    • CSS selector: ul.fruit > li
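To see these queries in action, here’s a minimal sketch using only Python’s standard library. The sample document below is a reconstruction based on the screenshots in this section, so treat the exact markup as an assumption:

```python
import xml.etree.ElementTree as ET

# The sample document, reconstructed from the article's screenshots
# (the exact markup is an assumption).
html = """<html>
  <body>
    <h1>This is the page title</h1>
    <ul class="fruit">
      <li>Apple</li><li>Orange</li><li>Banana</li>
    </ul>
    <ul>
      <li>This is the first item in the list</li>
      <li>This is the second item in the list</li>
      <li>This is the third item in the list</li>
    </ul>
  </body>
</html>"""

root = ET.fromstring(html)

# "//h1" in XPath maps to ".//h1" in ElementTree's limited XPath dialect
title = root.find(".//h1").text

# "//ul[@class='fruit']/li" grabs only the fruit list, not the second ul
fruit = [li.text for li in root.findall(".//ul[@class='fruit']/li")]

print(title)  # This is the page title
print(fruit)  # ['Apple', 'Orange', 'Banana']
```

Note that real web pages are rarely valid XML, so in practice an HTML-tolerant parser (e.g. lxml or Beautiful Soup) is the safer choice; the stdlib version just keeps the example self-contained.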

    Regex, on the other hand, uses search patterns (rather than paths) to find every matching instance within a document.

    This is useful whenever path-based searches won’t cut the mustard.

For example, let’s assume that we wanted to scrape the words “first,” “second,” and “third” from the other unordered list in our document.

[Image: the sample document’s second unordered list]

    There’s no way to grab just these words using path-based queries, but we could use this regex pattern to match what we need:

<li>This is the (.*) item in the list</li>

    This would search the document for list items (li) containing “This is the [ANY WORD] item in the list” AND extract only [ANY WORD] from that phrase.

    Sidenote.
    Because regex doesn’t use the structured nature of HTML/XML files, results are often less accurate than they are with CSS/XPath. You should only use Regex when XPath/CSS isn’t a viable option.
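Here’s the same pattern applied in Python, using a reconstruction of the sample list (the markup is an assumption based on the screenshots above):

```python
import re

# The second list from the sample document (assumed markup)
html = """
<ul>
  <li>This is the first item in the list</li>
  <li>This is the second item in the list</li>
  <li>This is the third item in the list</li>
</ul>
"""

# (.*) captures whichever word sits between the two fixed phrases
pattern = r"<li>This is the (.*) item in the list</li>"
matches = re.findall(pattern, html)

print(matches)  # ['first', 'second', 'third']
```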

There are plenty of XPath/CSS/Regex cheat sheets and web scraping tools available online; the Scraper Chrome extension, Screaming Frog, and Import.io (all used below) are good places to start.

    OK, let’s get started with a few web scraping hacks!

    1. Find “evangelists” who may be interested in reading your new content by scraping existing website comments

    Most people who comment on WordPress blogs will do so using their name and website.

[Image: WordPress comment form with name and website fields]

    You can spot these in any comments section as they’re the hyperlinked comments.

[Image: a hyperlinked commenter name in a comments section]

    But what use is this?

    Well, let’s assume that you’ve just published a post about X and you’re looking for people who would be interested in reading it.

    Here’s a simple way to find them (that involves a bit of scraping):

    1. Find a similar post on your website (e.g. if your new post is about link building, find a previous post you wrote about SEO/link building—just make sure it has a decent amount of comments.);
    2. Scrape the names + websites of all commenters;
    3. Reach out and tell them about your new content.
    Sidenote.
    This works well because these people are (a) existing fans of your work, and (b) loved one of your previous posts on the topic so much that they left a comment. So, while this is still “cold” pitching, the likelihood of them being interested in your content is much higher in comparison to pitching directly to strangers.

    Here’s how to scrape them:

    Go to the comments section then right-click any top-level comment and select “Scrape similar…” (note: you will need to install the Scraper Chrome Extension for this).

[Image: the “Scrape similar…” right-click option on a comment]

This should bring up a neat scraped list of commenters’ names + websites.

[Image: scraped list of commenter names and websites]

    Make a copy of this Google Sheet, then hit “Copy to clipboard,” and paste them into the tab labeled “1. START HERE”.

    Sidenote.
    If you have multiple pages of comments, you’ll have to repeat this process for each.
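If you’d rather script this step than use the Scraper extension, here’s a rough sketch using only Python’s standard library. It assumes the common WordPress convention of marking a commenter’s website link with the “url” class; inspect your own theme’s markup first, as class names vary:

```python
from html.parser import HTMLParser

# WordPress themes typically mark a commenter's website link with the
# "url" class (e.g. <cite class="fn"><a class="url" href="...">Name</a></cite>).
# That class name is an assumption -- check your theme's markup first.
class CommentScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href = None
        self.commenters = []  # (name, website) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "url" in (attrs.get("class") or "").split():
            self.in_link = True
            self.href = attrs.get("href")

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.commenters.append((data.strip(), self.href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

sample = '<cite class="fn"><a class="url" href="https://example.com">Jane Doe</a></cite>'
scraper = CommentScraper()
scraper.feed(sample)
print(scraper.commenters)  # [('Jane Doe', 'https://example.com')]
```

Feed it the downloaded HTML of each comments page (e.g. fetched with urllib) and it will accumulate the name/website pairs for you.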

    Go to the tab labeled “2. NAMES + WEBSITES” and use the Google Sheets hunter.io add-on to find the email addresses for your prospects.

[Image: hunter.io add-on finding email addresses in Google Sheets]

    Sidenote.
Hunter.io won’t find an email address for every prospect, so here are more actionable ways to find email addresses.

    You can then reach out to these people and tell them about your new/updated post.

    IMPORTANT: We advise being very careful with this strategy. Remember, these people may have left a comment, but they didn’t opt into your email list. That could have been for a number of reasons, but chances are they were only really interested in this post. We, therefore, recommend using this strategy only to tell commenters about the updates to the post and/or other new posts that are similar. In other words, don’t email people about stuff they’re unlikely to care about!

    Here’s the spreadsheet with sample data.

    2. Find people willing to contribute to your posts by scraping existing “expert roundups”

“Expert” roundups are WAY overdone.

    But, this doesn’t mean that including advice/insights/quotes from knowledgeable industry figures within your content is a bad idea; it can add a lot of value.

    In fact, we did exactly this in our recent guide to learning SEO.

[Image: expert quotes in Ahrefs’ guide to learning SEO]

    But, while it’s easy to find “experts” you may want to reach out to, it’s important to remember that not everyone responds positively to such requests. Some people are too busy, while others simply despise all forms of “cold” outreach.

    So, rather than guessing who might be interested in providing a quote/opinion/etc for your upcoming post, let’s instead reach out to those with a track record of responding positively to such requests by:

    1. Finding existing “expert roundups” (or any post containing “expert” advice/opinions/etc) in your industry;
    2. Scraping the names + websites of all contributors;
    3. Building a list of people who are most likely to respond to your request.

    Let’s give it a shot with this expert roundup post from Nikolay Stoyanov.

    First, we need to understand the structure/format of the data we want to scrape. In this instance, it appears to be full name followed by a hyperlinked website.

[Image: a contributor’s name and hyperlinked website in the roundup]

    HTML-wise, this is all wrapped in a <strong> tag.

[Image: the contributor markup in Chrome’s inspector]

    Sidenote.
    You can inspect the HTML for any on-page element by right-clicking on it and hitting “Inspect” in Chrome.

    Because we want both the names (i.e. text) and website (i.e. link) from within this <strong> tag, we’re going to use the Scraper extension to scrape for the “text()” and “a/@href” using XPath, like this:

[Image: Scraper extension with XPath queries for “text()” and “a/@href”]

    Don’t worry if your data is a little messy (as it is above); this will get cleaned up automatically in a second.

    Sidenote.
    For those unfamiliar with XPath syntax, I recommend using this cheat sheet. Assuming you have basic HTML knowledge, this should be enough to help you understand how to extract the data you want from a web page
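If you prefer to script the extraction, here’s a sketch of the same text()/a/@href idea using Python’s standard library. The markup below is an assumed simplification of the roundup’s HTML:

```python
import xml.etree.ElementTree as ET

# An assumed simplification of the roundup's markup: each contributor is
# a <strong> tag containing their name plus a hyperlinked website.
html = """<div>
  <strong>Tim Soulo - <a href="https://ahrefs.com">Ahrefs</a></strong>
  <strong>Jane Doe - <a href="https://example.com">Example</a></strong>
</div>"""

root = ET.fromstring(html)
contributors = []
for strong in root.findall(".//strong"):
    name = (strong.text or "").strip(" -")  # the "text()" part of the query
    link = strong.find("a")                 # the "a/@href" part
    contributors.append((name, link.get("href")))

print(contributors)
```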

    Next, make a copy of this Google Sheet, hit “Copy to clipboard,” then paste the raw data into the first tab (i.e. “1. START HERE”).

[Image: raw scraped data pasted into the Google Sheet]

    Repeat this process for as many roundup posts as you like.

    Finally, navigate to the second tab in the Google Sheet (i.e. “2. NAMES + DOMAINS”) and you’ll see a neat list of all contributors ordered by # of occurrences.

[Image: contributors ordered by number of occurrences]

    Here are 9 ways to find the email addresses for everyone on your list.

    IMPORTANT: Always research any prospects before reaching out with questions/requests. And DON’T spam them!

    Here’s the spreadsheet with sample data.

    3. Remove junk “guest post” prospects by scraping RSS feeds

    Blogs that haven’t published anything for a while are unlikely to respond to guest post pitches.

    Why? Because the blogger has probably lost interest in their blog.

    That’s why I always check the publish dates on their few most recent posts before pitching them.

[Image: recent publish dates on a blog]

    (If they haven’t posted for more than a few weeks, I don’t bother contacting them)

    However, with a bit of scraping knowhow, this process can be automated. Here’s how:

    1. Find the RSS feed for the blog;
    2. Scrape the “pubDate” from the feed

Most blogs’ RSS feeds can be found at domain.com/feed/—this makes finding the RSS feed for a list of blogs as simple as appending “/feed/” to each URL.

    For example, the RSS feed for the Ahrefs blog can be found at https://ahrefs.com/blog/feed/

    Sidenote.
    This won’t work for every blog. Some bloggers use other services such as FeedBurner to create RSS feeds. It will, however, work for most.

    You can then use XPath within the IMPORTXML function in Google Sheets to scrape the pubDate element:

=IMPORTXML("https://ahrefs.com/blog/feed/", "//pubDate")

[Image: IMPORTXML pulling pubDate values into Google Sheets]

This will scrape every pubDate element in the RSS feed, giving you the publish dates of that blog’s 5–10 most recent posts.

    But how do you do this for an entire list of blogs?

    Well, I’ve made another Google Sheet that automates the process for you—just paste a list of blog URLs (e.g. https://ahrefs.com/blog) into the first tab (i.e. “1. ENTER BLOG URLs”) and you should see something like this appear in the “RESULTS” tab:

[Image: the “RESULTS” tab of the RSS spreadsheet]

    It tells you:

    • The date of the most recent post;
    • How many days/weeks/months ago that was;
    • Average # of days/weeks/months between posts (i.e. how often they post, on average)
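The spreadsheet’s calculations can also be sketched in a few lines of Python. The feed below is a made-up fragment for illustration; a real one would come from domain.com/feed/:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# A made-up RSS fragment; a real feed comes from e.g. domain.com/feed/
rss = """<rss><channel>
  <item><pubDate>Wed, 01 Mar 2017 10:00:00 +0000</pubDate></item>
  <item><pubDate>Sat, 18 Feb 2017 10:00:00 +0000</pubDate></item>
  <item><pubDate>Mon, 06 Feb 2017 10:00:00 +0000</pubDate></item>
</channel></rss>"""

# Scrape every pubDate (the "//pubDate" query), newest first
dates = sorted(
    (parsedate_to_datetime(el.text) for el in ET.fromstring(rss).findall(".//pubDate")),
    reverse=True,
)

most_recent = dates[0]
gaps = [(newer - older).days for newer, older in zip(dates, dates[1:])]
avg_gap = sum(gaps) / len(gaps)

print(most_recent.date())  # 2017-03-01
print(avg_gap)             # 11.5
```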

    This is super-useful information for choosing who to pitch guest posts to.

    For example, you can see that we publish a new post every 11 days on average, meaning that Ahrefs would definitely be a great blog to pitch to if you were in the SEO/marketing industry 🙂

    Here’s the spreadsheet with sample data.

    Recommended reading: An In-Depth Look at Guest Blogging in 2016 (Case Studies, Data & Tips)

    4. Find out what type of content performs best on your blog by scraping post categories

    Many bloggers will have a general sense of what resonates with their audience.

    But as an SEO/marketer, I prefer to rely on cold hard data.

    When it comes to blog content, data can help answer questions that aren’t instantly obvious, such as:

    • Do some topics get shared more than others?
    • Are there specific topics that attract more backlinks than others?
    • Are some authors more popular than others?

    In this section, I’ll show you exactly how to answer these questions for your blog by combining a single Ahrefs export with a simple scrape. You’ll even be able to auto-generate visual data representations like this:

[Image: auto-generated graph of blog category performance]

    Here’s the process:

    1. Export the “top content” report from Ahrefs Site Explorer;
    2. Scrape categories for all the blog posts;
    3. Analyse the data in Google Sheets (hint: I’ve included a template that does this automagically!)

    To begin, we need to grab the top pages report from Ahrefs—let’s use ahrefs.com/blog for our example.

    Site Explorer > Enter ahrefs.com/blog > Pages > Top Content > Export as .csv

[Image: the Top Content report in Ahrefs Site Explorer]

    Sidenote.
    Don’t export more than 1,000 rows for this. It won’t work with this spreadsheet.

    Next, make a copy of this Google Sheet then paste all data from the Top Content .csv export into cell A1 of the first tab (i.e. “1. Ahrefs Export”).

[Image: Ahrefs export pasted into the Google Sheet]

    Now comes the scraping…

    Open up one of the URLs from the “Content URL” column and locate the category under which the post was published.

[Image: the category link on a blog post]

    We now need to figure out the XPath for this HTML element, so right-click and hit “Inspect” to view the HTML.

[Image: the post category markup in Chrome’s inspector]

    In this instance, we can see that the post category is contained within a <div> with the class “post-category”, which is nested within the <header> tag. This means our XPath would be:

//header/div[@class='post-category']
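Before firing up a crawler, you can sanity-check an XPath like this against a single page. Here’s a quick sketch using Python’s standard library; the page snippet is an assumed simplification of the real markup:

```python
import xml.etree.ElementTree as ET

# An assumed simplification of the blog post's <header> markup
page = """<html><body>
  <header>
    <h1>Some post title</h1>
    <div class="post-category">Link building</div>
  </header>
</body></html>"""

# The same query as above, in ElementTree's ".//"-prefixed dialect
category = ET.fromstring(page).find(".//header/div[@class='post-category']").text
print(category)  # Link building
```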

    Now that we know this, we can use Screaming Frog to scrape the post category for each post; here’s how:

    1. Open Screaming Frog and go to “Mode” > “List”;
    2. Go to “Configuration” > “Spider” and uncheck all the boxes (like this);
3. Go to “Configuration” > “Custom” > “Extraction” > “Extractor 1” and paste in your XPath (e.g. //header/div[@class='post-category']). Make sure you choose “XPath” as the scraper mode and “Extract Text” as the extractor mode (like this);
    4. Copy/paste all URLs from the Content URL into Screaming Frog, and start the scrape;

    Once complete, head to the “Custom” tab, filter by “Extraction” and you’ll see the extracted data for each URL.

[Image: extracted categories in Screaming Frog]

    Hit “Export”, then copy all the data in the .csv into the next tab in the Google Sheet (i.e. “2. SF extraction”).

[Image: Screaming Frog export pasted into the Google Sheet]

    Go to the final tab in the Google Sheet (i.e. “RESULTS”) and you’ll see a bunch of data + accompanying graphs.

[Image: the “RESULTS” tab with data and graphs]

    Sidenote.
    In order for this process to give actionable insights, it’s important that your blog posts are well-categorized. I think it’s fair to say that our categorization at Ahrefs could do with some additional work, so take the results above with a pinch of salt.

    Here’s the spreadsheet with sample data.

    5. Promote only the RIGHT kind of content on Reddit (by looking at what has already performed well)

    Redditors despise self-promotion.

In fact, any lazy attempts to self-promote via the platform are usually met with a barrage of mockery and foul language.

    But here’s the thing:

    Redditors have nothing against you sharing something with them; you just need to make sure it’s something they actually care about.

    The best way to do this is to scrape (and analyze) what they liked in the past, then share more of that type of content with them.

    Here’s the process:

    1. Choose a subreddit (e.g. /r/Entrepreneur);
    2. Scrape the top 1000 posts of all time;
    3. Analyse the data and act accordingly (yep, I’ve included a Google Sheet that does this for you!)
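As an alternative to a visual scraping tool, Reddit also serves any listing page as JSON if you append “.json” to the URL (e.g. https://www.reddit.com/r/Entrepreneur/top/.json?t=all). Here’s a rough sketch of pulling the interesting fields out of that response; the sample below is made-up data in the shape Reddit returns:

```python
import json

def parse_top_posts(listing):
    """Pull title, score, and comment count out of a Reddit JSON listing."""
    return [
        (child["data"]["title"], child["data"]["score"], child["data"]["num_comments"])
        for child in listing["data"]["children"]
    ]

# Made-up sample of the JSON shape Reddit returns for a listing page
sample = json.loads("""
{"data": {"children": [
  {"data": {"title": "How I validated my idea", "score": 950, "num_comments": 120}}
]}}
""")

print(parse_top_posts(sample))  # [('How I validated my idea', 950, 120)]
```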

OK, first things first, make a copy of this Google Sheet + enter the subreddit you want to analyze. You should then see a formatted link to that subreddit’s top posts appear alongside it.

[Image: the formatted link in the Reddit analysis Google Sheet]

    This takes you to a page showing the top 25 posts of all time for that subreddit.

[Image: top posts of all time for a subreddit]

    However, this page only shows the top 25 posts. We’re going to analyze the top 1,000, so we need to use a scraping tool to scrape multiple pages of results.

Reddit actually makes this rather difficult, but Import.io (free for up to 500 queries per month, which is plenty) can do it with ease.

Here’s what we’re going to scrape from these pages (hint: click the links to see an example of each data point):

    OK, let’s stick with /r/Entrepreneur for our example…

    Go to Import.io > sign up > new extractor > paste in the link from the Google Sheet (shown above)

[Image: pasting the URL into a new Import.io extractor]

    Click “Go”.

    Import.io will now work its magic and extract a bunch of data from the page.

    Sidenote.
    It does sometimes extract pointless data so it’s worth deleting any columns that aren’t needed within the “edit” tab. Just remember to keep the data mentioned above in the right order.

    Hit “Save” (but don’t run it yet!)

    Right now, the extractor is only set up to scrape the top 25 posts. You need to add the other URLs (from the tab labeled “2. MORE LINKS” in the Google Sheet) to scrape the rest.

[Image: the “2. MORE LINKS” tab of the Google Sheet]

    Add these under the “Settings” tab for your extractor.

[Image: adding URLs under the extractor’s “Settings” tab]

    Hit “Save URLs” then run the extractor.

    Download the .csv once complete.

[Image: the finished Import.io extraction]

    Copy/paste all data from the .csv into the sheet labeled “3. IMPORT.IO EXPORT” in the spreadsheet.

    Finally, go to the “RESULTS” sheet and enter a keyword—it will then kick back some neat stats showing how interested that subreddit is likely to be in your topic.

[Image: keyword stats in the “RESULTS” sheet]

    Here’s the spreadsheet with sample data.

    6. Build relationships with people who are already fans of your content

    Most tweets will drive ZERO traffic to your website.

That’s why “begging for tweets” from anyone and everyone is a terrible idea (note: I proved this in my recent case study, where 4 out of 5 tweets sent no traffic whatsoever to my website).

    However, that’s not to say all tweets are worthless—it’s still worth reaching out to those who are likely to send real traffic to your website.

    Here’s a workflow for doing this (note: it includes a bit of Twitter scraping):

    1. Scrape and add all Twitter mentions to a spreadsheet (using IFTTT);
    2. Scrape the number of followers for the people who’ve shared a lot of your stuff;
    3. Find contact details, then reach out and build relationships with these people.

    OK, so first, make a copy of this Google Sheet.

IMPORTANT: You MUST save your copy at the root of your Google Drive (i.e. not in a subfolder). It MUST also be named exactly “My Twitter Mentions”.

[Image: the “My Twitter Mentions” sheet at the root of Google Drive]

    Next, turn this recipe on within your IFTTT account (you’ll need to connect your Twitter + Google Drive accounts to IFTTT in order to do this).

    What does this recipe do? Basically, every time someone mentions you on Twitter, it’ll scrape the following information and add it to a new row in the spreadsheet:

    • Twitter handle (of the person who mentioned you);
    • Their tweet;
    • Tweet link;
    • Time/date they tweeted

    And if you go to the second sheet in the spreadsheet (i.e. the one labeled “1.Tweets”), you’ll see the people who’ve mentioned you and tweeted a link of yours the highest number of times.

[Image: the “1.Tweets” sheet listing Twitter mentions]

    But, the fact that they’ve mentioned you a number of times doesn’t necessarily indicate that they’ll drive any real traffic to your website.

    So, you now want to scrape the number of followers each of these people has.

    You can do this with CSS selectors using Screaming Frog.

    Just set your search depth to “0” (see here), then use these settings under the custom extractor:

[Image: custom extractor settings in Screaming Frog]

    Here’s each CSS selector (for clarification):

    1. Twitter Name: h1
    2. Twitter Handle: h2 > a > span > b
3. Followers: li.ProfileNav-item.ProfileNav-item--followers > a > span.ProfileNav-value
4. Website: div.ProfileHeaderCard > div.ProfileHeaderCard-url > span.ProfileHeaderCard-urlText.u-dir > a
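One wrinkle: the scraped follower counts come back as display strings like “12.3K” rather than numbers, which makes sorting awkward. Here’s a small helper to normalize them (the abbreviation formats handled are an assumption about what Twitter displays):

```python
def parse_count(text):
    """Normalize a display count like '12.3K' or '1,024' to an integer.

    The abbreviation formats handled here are an assumption about what
    Twitter displays; extend as needed.
    """
    text = text.strip().replace(",", "")
    multipliers = {"K": 1_000, "M": 1_000_000}
    suffix = text[-1].upper() if text else ""
    if suffix in multipliers:
        return int(float(text[:-1]) * multipliers[suffix])
    return int(text)

print(parse_count("12.3K"))  # 12300
print(parse_count("1,024"))  # 1024
print(parse_count("2M"))     # 2000000
```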

    Copy/paste all the Twitter links from the spreadsheet into Screaming Frog and run it.

    Once finished, go to:

    Custom > Extraction > Export

[Image: the Custom > Extraction tab in Screaming Frog]

    Open the exported .csv, then copy/paste all the data into the next tab in the sheet (i.e. the one labeled “2. SF Export”).

    Lastly, go to the final tab (i.e. “3. RESULTS”) and you’ll see a list of everyone who’s mentioned you along with a bunch of other information including:

    • # of times they tweeted about you,
    • # of followers
    • Their website (where applicable)

[Image: the “3. RESULTS” tab]

    Because these people have already shared your content in the past, and also have a good number of followers, it’s worth reaching out and building relationships with them.

    Here’s the spreadsheet with sample data.

    Final thoughts

    Web scraping is crazily powerful.

    All you need is some basic XPath/CSS/Regex knowledge (along with a web scraping tool, of course) and it’s possible to scrape anything from any website in a matter of seconds.

I’m a firm believer that the best way to learn is by doing, so I highly recommend that you spend some time replicating the experiments above. This will also train you to spot tasks that could easily be automated with web scraping in the future.

    So, play around with the tools/ideas above and let me know what you come up with in the comments section below 🙂


    • Hi Joshua, wow that is a really helpful article. Thank you for that! I usually do lots of stuff manually, but I think I have to start using at least some of your scraping techniques 🙂

      I actually have a question. How would you proceed in case that you have few articles and you want to get author’s contact and social links? Is there any easy way to do that? Thanks.

      • Joshua Hardwick

        i would use something like URL profiler for that. that will automatically scrape a bunch of emails from the website (across multiple pages) and will also scrape the author of the post (most of the time). i think it also scrapes social profiles, i.e. facebook, twitter, etc, but not 100% sure. i think they have a free trial so definitely worth a go.

        alternatively, use buzzstream. that does pretty much the same thing…they scrape email addresses, contact information, social profiles, etc. it’s also the BEST outreach tool imho

    • Boris Nikator

      Thanks a lot for your post! Not everyone is sharing advanced tactics in SEO / digital marketing field.

      Scraping data is first thing first when you are planning your campaign, I used it for audience research (scraped job site) and have got a lot of insights about personas and people that are vital for our business (we started a new branch and didn’t have enough in-house data). I also used it for getting data about influencers and scraped forums in order to understand people needs.

      The second step after scraping data is applying statistic and machine learning algorithms so I am looking forward to see what tactics do you use=)

      PS: And about comments on your site — you have to make sentiment analysis in order not to try to reach people who were unsatisfied with your post (generally).

      • Joshua Hardwick

        totally agree, scraping is seriously powerful if you know how to use it 🙂

    • I think I need to reread this post a couple time, it’s not easy for me to understand all off this in the first time but again a great sharing post from Ahref team.
      Thank you! 😀

      • Joshua Hardwick

        no worries — let me know if you have any questions 🙂

    • Paolo Gironi

      I can’t open the link to Google Sheet in step 6, 404 error…
      Thank you, great post!

      • Joshua Hardwick

        sorry about that — should be fixed now 🙂

    • Bincy Aliyar

      Great Post!

      • Joshua Hardwick

        thanks Bincy! 🙂

    • I have to test it, after I have done the test I will give you information from here.

      • Joshua Hardwick

        cool — just give me a shout if you run into any issues 🙂

    • Good Article

    • Great !!!

    • Shivangi Shrivastava

      Very very informative post! Thanks.

    • JOe Glines

      I use web scraping on most days. Part of it is to fill in forms (I call this web pasting) and part is to extract content from a page I’m on and save, or use, in another location. It saves HOURS of time! I wish more would people would spend a little time learning how to works smarter, not harder!

    • Awesome post.. I didn’t knew that we can scrape some useful content so easily. Thank you Joshua 🙂

    • moar

      Good post Joshua! You could have mentioned Scrapebox…the master tool for scraping content 😛

      • Joshua Hardwick

        good point. i do love scrapebox but to be honest, i don’t use it so much anymore, although that’s probably partly because I use a mac these days.

        i also think the learning curve is quite steep, which is another reason i didn’t mention it in this article.

        but yeah, it’s still a great tool, especially for the price!

    • This is epic! Thanks for providing so many sheets — love web scraping.

      One cool thing I’ve been doing recently with the Screaming Frog extraction tool is actually extracting the full text contents of blog posts on a site. You can then pull down the Top Keyword report from Ahrefs and search within the cell containing the blog text to see if it’s being used. Or any keyword you wish. Great for reverse engineering why competitors are ranking for a query. This is the only way I know to truly look this up, as I haven’t seen any SEO software that offers this (maybe there is though?).

      You can also do this with your own keyword targeting if for example you had assigned 3 keywords to a page at some point in the past and you wanted to double check it was being used in the post body, title tag, h1, etc.

      That Screaming Frog extraction is pretty amazing if you use it right!

    • Arthur Maldonado

      I love scraping, nice post 🙂
      will test some of those methods.

    • I think I need to save it for reference, thank you for sharing this useful

    • Thanks for the detailed post about scraping. Will test the methods mentioned on the post.

      Thanks

    • Thanks for the detailed post