Crawl Product Details in Decathlon Pages Using Scrapy-Splash

Photo by Bruno Nascimento on Unsplash

In this tutorial, we will scrape product details by following links using the Scrapy-Splash plugin.

Create a virtual environment to avoid package conflicts, then install the necessary packages and start a Scrapy project.

Install scrapy:

pip install Scrapy 

If you have trouble installing Scrapy through pip, you can use conda instead. See the installation docs.

conda install -c conda-forge scrapy

Start the project with:

scrapy startproject productscraper
cd productscraper

Also, install scrapy-splash, as we will use it later in the tutorial. I assume you already have Docker installed on your device; otherwise, go ahead and install it first. You will need it to run the Splash service, but you don't need to know how containers work for this project.

# install it inside your virtual env
pip install scrapy-splash

# this command will pull the splash image and run the container for you
docker run -p 8050:8050 scrapinghub/splash

Now you are ready to scrape data from the web. Let's try to get some data before using Scrapy-Splash. This is the link I am going to scrape in this tutorial. Feel free to try different links and websites as well. Take some time to view the page source and inspect the elements you want to extract.

Press CTRL+SHIFT+C or click the button on the top I circled to inspect elements

To learn more about Scrapy selectors, check out the documentation.

Use the shell to test the selectors you want before running the spider in a script. This way, you save time, avoid making the same requests many times, and reduce the risk of getting banned from the website.

Open your scrapy shell with:

scrapy shell

Now you can try to extract elements here and see if it works. First, fetch the link and check the response. If it does not return 200, check the link in the browser: it might be broken, or there might be a typo.

>>> fetch('')
2021-05-15 12:14:52 [scrapy.core.engine] INFO: Spider opened
2021-05-15 12:14:53 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)
>>> response

The plan is to get product URLs on this page, go into them one by one and scrape product details.

Try to get one of the product links by selecting the link element:

>>> response.css('a.js-de-ProductTile-link::attr(href)').get()

To grab all of the elements, use getall().

Since we got the URLs correctly, we can now fetch one of the product pages and check whether we also get the product details correctly.

>>> fetch('')
2021-05-15 12:32:38 [scrapy.core.engine] DEBUG: Crawled (200) <GET> (referer: None)

Try to get the name of the product:

>>> response.css('').get()
"\n Quechua NH100 Mid-Height Hiking Shoes, Women's\n "

Try to get the description, price, image URL:

>>> response.css('').get()
"\n Quechua NH100 Mid-Height Hiking Shoes, Women's is designed for Half-day hiking in dry weather conditions and on easy paths.\n "
>>> response.css('span.js-de-PriceAmount::text').get()
'\n $24.99\n '
>>> response.css('').get()
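Notice that the raw values come back wrapped in whitespace, e.g. '\n $24.99\n '. Before storing items you will usually want to normalize them; below is a small hypothetical helper (the name clean_price is mine, not part of Scrapy) that strips the whitespace and the currency sign:

```python
def clean_price(raw):
    """Strip surrounding whitespace and a leading '$', return the price as a float.

    Returns None when the selector found nothing (i.e. .get() returned None).
    """
    if raw is None:
        return None
    return float(raw.strip().lstrip('$'))

print(clean_price('\n    $24.99\n  '))  # 24.99
print(clean_price(None))               # None
```

Guarding against None matters because .get() quietly returns None when a selector matches nothing, and calling .strip() on it would crash the spider mid-run.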

So far so good. Let's now try to get the other images. You will notice the gallery is a slider: it requires clicking a button to see the other images.

>>> response.css('').getall()

Our Scrapy spider cannot select the other images because they are rendered by JavaScript. This is where the Scrapy-Splash plugin comes to the rescue.

I assume your container is still running from the docker command above. Check it at http://localhost:8050/. You should see the Splash welcome page, which means Splash is ready to receive requests from you.

Try rendering the same product page through your splash container:


You should be able to see the product page on your localhost. Go back to your shell and fetch the Splash URL this time.

>>> fetch('http://localhost:8050/render.html?')
2021-05-15 13:55:52 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://localhost:8050/render.html?> (referer: None)
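Splash's render.html endpoint takes the target page as a url query parameter (plus optional parameters like wait), so the full URL has to be properly percent-encoded. A small sketch for building it programmatically, assuming Splash runs on localhost:8050 (the helper name splash_url is mine):

```python
from urllib.parse import urlencode

SPLASH_BASE = 'http://localhost:8050/render.html'

def splash_url(page_url, wait=1):
    """Build a Splash render.html URL for the given page, percent-encoding it."""
    return SPLASH_BASE + '?' + urlencode({'url': page_url, 'wait': wait})

print(splash_url('https://www.example.com/product'))
# http://localhost:8050/render.html?url=https%3A%2F%2Fwww.example.com%2Fproduct&wait=1
```

Encoding matters: an unescaped target URL containing its own query string would be misread by Splash as extra render.html parameters.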

Now try again for the images:

>>> response.css('').getall()
[...]  # a list of ten protocol-relative image URLs

Boom! It is all there. We were able to get all the data we want, thanks to Splash.

To integrate Splash with your own Scrapy project, go to settings.py and add these lines:

# Splash Setup
SPLASH_URL = 'http://<YOUR-IP-ADDRESS>:8050'

DOWNLOADER_MIDDLEWARES = {
    'random_useragent.RandomUserAgentMiddleware': 400,
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

The only thing left is preparing our spider to extract data out of the page.

Normally, for following links, you would do:

yield response.follow(link, callback=self.parse_products)

With Splash, you just need to replace response.follow with SplashRequest.

You also need to override the start_requests method to start making requests through Splash.

See the example below:

import scrapy
from scrapy_splash import SplashRequest


class DecathlonSpider(scrapy.Spider):
    name = 'Decathlonspider'  # You will run the crawler with this name
    start_urls = ['']

    # When working with Splash, you need to override the start_requests
    # method to start making requests through Splash.
    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url=url, callback=self.parse, args={'wait': 1})

    # Extract the links we need and start another SplashRequest to follow them
    def parse(self, response):
        links = response.css('a.js-de-ProductTile-link::attr(href)').getall()
        for link in links:
            splash_link = '' + link
            yield SplashRequest(splash_link, callback=self.parse_product)

    # Extract product details
    def parse_product(self, response):
        datasets = response.css('').getall()
        images = []
        # get the biggest image inside each srcset
        for data in datasets:
            data_arr = data.split(',')
            images.append(data_arr[-1].strip())
        yield {
            'response': response.url,  # the page this item was scraped from
            'brand': response.css('').get().split(' ')[1],
            'name': response.css('').get(),
            'price': response.css('span.js-de-PriceAmount::text').get(),
            'images': images,
        }
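The image-extraction step in parse_product relies on the fact that a srcset attribute lists candidates from smallest to largest, separated by commas, each candidate being a URL followed by a width descriptor. Isolated as a standalone function (the name and the example URLs are mine, assuming that candidate ordering), it looks like this:

```python
def biggest_image(srcset):
    """Return the URL of the last (largest) candidate in a srcset string.

    Assumes candidates are listed smallest to largest, as on the scraped page.
    Unlike the spider above, this also drops the trailing width descriptor.
    """
    candidates = srcset.split(',')
    # each candidate looks like '<url> <width-descriptor>'; keep only the URL
    return candidates[-1].strip().split(' ')[0]

srcset = '//img.example.com/small.jpg 300w, //img.example.com/big.jpg 1200w'
print(biggest_image(srcset))  # //img.example.com/big.jpg
```

Keeping this logic in a small function makes it easy to unit-test the parsing separately from the network-bound spider.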

To better understand how Scrapy and spiders work, you can check out this article I wrote.

To run the spider and save the extracted data to a JSON file, run:

scrapy crawl Decathlonspider -o decathlon.json

It should create a file with the data; otherwise, check the command-line output to debug any mistakes.

Here we handled JavaScript-rendered content in a Scrapy project using the Scrapy-Splash plugin. Splash is a lightweight web browser capable of processing multiple pages and executing custom JavaScript in the page context. You can find more info on Splash itself in the docs.

If you have any questions regarding this, feel free to ask in the comments!

Software Engineer at 42 Silicon Valley | Technical Writer at