How to improve your SEO: web scraping SERPs with keywords

Most of what works today will eventually become outdated. That applies to just about anything in the software world, and in this case, relying on an out-of-date method of scraping SERP data will leave you stranded in the trenches.

Since the SERP structure of Google and other search engines is constantly changing, you need a future-proof solution.

In this article, I’ll present such a solution, namely web scraping. Additionally, we’ll go over the reasons why you’ll want to use it.

What are keywords?

People often use keywords or key phrases to find helpful information in search engines like Google. The results are shown on a page known as a SERP (Search Engine Results Page).

This terminology is rarely used outside of the SEO (Search Engine Optimization) industry. Most people simply call them Google searches or queries. Just keep in mind that "keyword" is synonymous with both of those terms.

For example, if you wanted to buy some coffee, you could search for “best coffee in town” on Google. Even though it contains more than one word, the phrase is still a keyword.

Advantages of using web scraping software

Keyword analysis

You’ll waste both time and resources if you target the wrong keywords.

Let’s say, for example, that you have an online shop with many good products. What’s the point of having the best products if nobody can find them?

Keyword research is invaluable because it helps you generate more leads for your business, letting you write focused, well-written content that performs well in search.

Proper keyword analysis will help you create quality content for your company and improve your marketing strategies.

Time saver

Just imagine spending half of your day manually hunting for related keywords instead of focusing on more critical aspects of your business. That would be inconvenient, to say the least.

Fortunately, using a web scraping API can save you a tremendous amount of time and cut down on the manual work. In just a matter of seconds, you can gather a large amount of usable data.

Scalability

As I mentioned earlier, manually reading thousands of pages of data to identify key terms is time-consuming. A full day of manual work can be done in seconds by automating this process, and a web scraping API makes it both easy and scalable.

If the project changes and you need to start gathering more data or on different subjects, the tools you’re already using can handle the new workload with little to no changes.

Accurate results

If you do this by hand, you’ll run into inconsistencies. Search engines generate related keyword suggestions according to a set of rules and parameters designed to surface the most relevant results, so guessing associated keywords yourself can be hit or miss. You need to take many factors into consideration to make sure you’re getting the right results.

The SERP scraping process

Let’s assume that we want to scrape Google for related keywords. To begin, we will need a scraping tool. I like to use WebScrapingAPI for these kinds of jobs because it has many reliable features, such as residential proxies that help avoid IP blocks. In addition, we will use NodeJS with the got and jsdom libraries.

In this example, we’ll imagine that we’re operating an online coffee store. We need to identify the best keywords to focus on, so that we can use them on our website and attract the right kind of traffic.

1. Create a WebScrapingAPI account

This step is an easy one, and there’s no need to worry about pricing because you get 1,000 free API calls to try it out. After you register and validate your account via email, we’ll move on to the next step.

2. API key

You can get your API access key from the dashboard, and you’ll need the key later to authenticate with the API.

It’s important not to share your key with anyone. If you think that your key has been compromised, you can reset it any time by pressing the “Reset API Key” button.
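One way to keep the key out of your source code is to load it from an environment variable. Here is a minimal sketch; the variable name WSA_API_KEY is just an example chosen for this article, not something the API requires:

```javascript
// Minimal sketch: read the API key from an environment variable instead of
// hardcoding it. "WSA_API_KEY" is an arbitrary name used for this example.
const apiKey = process.env.WSA_API_KEY || "";

if (!apiKey) {
    console.warn("WSA_API_KEY is not set; API requests will fail to authenticate.");
}
```

You can then pass `apiKey` as the `api_key` request parameter instead of pasting the key into the file.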

3. Integrate with your application

3.1 Required tools

To make an HTTP request, we’ll need to install the got package and, to parse the HTML returned from the request, we will use jsdom.

npm install got jsdom

3.2 Define the keywords

Use an array to store your keywords.

let keywords = ["coffee", "beans", "caffeine", "roasted"];

3.3 Set the API request parameters

For each keyword defined earlier, we should make an API request and search Google for related keywords. Thanks to WebScrapingAPI, we can use a residential proxy to avoid IP blocks.

const params = {
    api_key: "XXXXXXXXXXXXXX",
    url: "https://www.google.com/search?q=" + keyword,
    proxy_type: "residential",
    render_js: 1,
    wait_until: "domcontentloaded"
}

3.4 Make the request

const response = await got("https://api.webscrapingapi.com/v1", {searchParams: params})
console.log(response.body)

After making the request, we can see that the page is returned as HTML.

3.5 Inspect the SERP

We can explore the page and look for the elements we’re interested in using the browser’s Developer Tools. To inspect a component, right-click on it and select the Inspect option.

The portion that we need is at the bottom of the page.

We can find that the related keyword elements have the class s75C5d.

3.6 Parsing the HTML

Because the result is returned as HTML, we need to parse it before we can work with it. jsdom will simplify the job for us!

const {document} = new JSDOM(response.body).window

3.7 Processing the results

We make an API request using WebScrapingAPI to search Google for each of our keywords. After parsing the response body with jsdom, we look for elements with the class s75C5d. If any exist, we iterate through them and push each element’s text into an array called relatedKeywords.

Finally, we print the related keywords that we found for each of our initial keywords, and voila! The final code should look like this:

const { JSDOM } = require("jsdom");
const got = require("got");

const keywords = ["coffee", "beans", "caffeine", "roasted"];

keywords.forEach(async (keyword) => {
    const params = {
        api_key: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        url: "https://www.google.com/search?q=" + keyword,
        proxy_type: "residential",
        render_js: 1,
        wait_until: "domcontentloaded"
    }
    const response = await got("https://api.webscrapingapi.com/v1", { searchParams: params })
    const { document } = new JSDOM(response.body).window
    const relatedElements = document.querySelectorAll(".s75C5d")
    const relatedKeywords = []
    relatedElements.forEach(el => {
        relatedKeywords.push(el.textContent)
    })
    if (relatedKeywords.length > 0) {
        console.log("Keyword: " + keyword + "\n" + "Related keywords: " + relatedKeywords.join(", ") + "\n")
    }
})

4. Presto!

Here you can see some of the results that we’ve got from our tests.

Closing Thoughts

Hopefully, by now you’ve seen why a web scraping tool can save you a tremendous amount of time. It’s essential to find a tool that is good at avoiding IP blocks, because a search engine like Google can easily detect your scraper and block its IP.

We recommend WebScrapingAPI because it offers 1,000 free API calls to get you started. Give it a try, and don’t let all that valuable information slip through your fingers!
