
Scrapy spider skips pages and stops before end


I'm developing a spider with Scrapy and Playwright to scrape a retail brand's website. It parses all images of all colors of all products on all pages (page -> product -> color -> image). My problem is that the spider skips many products and stops crawling well before the end. On top of that, it starts scraping new pages before it has finished scraping the previous ones, which I don't want. Here is my code:

import scrapy
from connect_spider_db import DbSpider


class MySpider(DbSpider):
    name = "website"
    first_page = "https://www.website.fr/men/tshirts/"
    start_dicts = [{"url": first_page}]
    for k in range(2, 150):
        start_dicts.append({"url": first_page + "?p=" + str(k)})

    def __init__(self, name=None, **kwargs):
        super().__init__(name, **kwargs)
        self.website = self.start_dicts[0]["url"].split("/")[2]  # extracting 'www.website.fr'
        self.init()

    def start_requests(self):
        for start_dict in self.start_dicts:
            yield scrapy.Request(
                url=start_dict["url"],
                callback=self.parse,
                meta=dict(
                    playwright=True,
                    playwright_include_page=False,
                ),
            )

    def parse(self, response):
        css_selector = response.css('...').get()
        main_links = []
        for product in response.css(css_selector):
            link = product.css('a').attrib["href"]
            main_links.append(link)
        for product_link in main_links:
            yield response.follow(
                product_link,
                meta={
                    "playwright": True,
                    "playwright_include_page": True,
                },
                callback=self.parse_product,
            )

    async def parse_product(self, response):
        page = response.meta["playwright_page"]
        links_list = []
        for product in response.css('...'):  # in all cases, the images of the current color must be selected
            image_url = product.xpath('@src').get()
            yield {'url': image_url}
        button = page.locator('button...')
        if await button.is_visible():
            await button.click()
        li_list = await page.locator('...Available colors...').all()
        for link in li_list:
            link_url = await link.get_attribute("href")
            links_list.append(link_url)
        for color_link in links_list:
            yield response.follow(
                color_link,
                callback=self.parse_color,
            )
        await page.close()

    def parse_color(self, response):
        for product in ...:
            yield {...}

The button click triggers the display of all colors (only five are shown by default when a product has more). The imported DbSpider class just initializes a few things that don't matter in this simplified spider.
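One thing I'm unsure about: the scrapy-playwright README says that when passing playwright_include_page=True, pages must always be closed, and it recommends setting a Request errback so the page gets closed even if the request fails. My spider only closes the page at the very end of parse_product, so any exception before that point would leak a page. A sketch of the README's pattern (errback_close_page is the README's example name, not something in my spider):

    yield response.follow(
        product_link,
        meta={"playwright": True, "playwright_include_page": True},
        callback=self.parse_product,
        # Close the Playwright page even when the request/callback fails;
        # otherwise unclosed pages accumulate in the browser.
        errback=self.errback_close_page,
    )

    async def errback_close_page(self, failure):
        page = failure.request.meta["playwright_page"]
        await page.close()

If pages are leaking like this, that could plausibly match my symptoms: products silently skipped and the crawl stalling early.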

I've set CONCURRENT_REQUESTS = 1, and I've checked CPU and memory usage: it's not a resource problem.
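For completeness, a sketch of the settings side; CONCURRENT_REQUESTS = 1 is what I actually have, while the two scrapy-playwright caps at the bottom are just values I've been experimenting with, not something I know to be required:

    # settings.py (sketch)
    CONCURRENT_REQUESTS = 1

    # Standard scrapy-playwright wiring.
    DOWNLOAD_HANDLERS = {
        "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    }
    TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

    # Experimental caps on open browser contexts/pages, meant to keep
    # unclosed pages from piling up.
    PLAYWRIGHT_MAX_CONTEXTS = 1
    PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 1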

This spider works when I keep only the first page. I would like it to start scraping the next page only once it has finished scraping the previous one. I think the asynchronous nature of Scrapy requests is the problem.
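One idea would be to stop generating the 148 extra listing URLs upfront and instead chain them, only requesting page N+1 from the parse of page N. A rough sketch (the page_number bookkeeping is illustrative, not from my current spider):

    def start_requests(self):
        # Request only page 1; each later page is requested from parse().
        yield scrapy.Request(
            url=self.first_page,
            callback=self.parse,
            meta={"playwright": True, "page_number": 1},
        )

    def parse(self, response):
        # ... yield the product requests for this page as before ...
        # Chain the next listing page. Note this only serialises the listing
        # pages themselves; the product and color requests are still
        # scheduled concurrently unless CONCURRENT_REQUESTS is 1.
        page_number = response.meta["page_number"]
        if page_number < 149:
            yield scrapy.Request(
                url=self.first_page + "?p=" + str(page_number + 1),
                callback=self.parse,
                meta={"playwright": True, "page_number": page_number + 1},
            )

Even with this chaining, though, Scrapy would still interleave the product and color requests of one page with the next listing page once they are in the scheduler.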

I also tried starting from the first page and looping by clicking the next-page button, but I ended up with the same result. What could be wrong?

