Question

Just picking Python back up after a four-year hiatus. I wanted to practice some web scraping with the Beautiful Soup library. It has been a bit of a pain, as I've had to learn basic HTML/CSS at the same time, but ultimately I am reminded of the thrill (and frustration) of coding.

Anyway, my soup.findAll() method seems to be fetching only 48 of the 50 desired listings. I am unsure whether this is due to some limit on list length, improper method overloading, or an error in the lxml parser.

I specifically set the webpage to sort the listings by price so I would be able to check the generated .csv file for missing items. It appears to be the final two listings that are omitted, which leads me to believe it's a list-length issue.

Any advice is appreciated and any additional tips outside of my question are welcome! Thanks!

# Import dependencies
from bs4 import BeautifulSoup
import requests

url = 'https://www.etsy.com/search?q=knitted+Toe&explicit=1&order=price_desc'
response = requests.get(url)  # request the search page HTML
soup = BeautifulSoup(response.content, 'lxml')  # parse the response with the lxml parser
# Find every listing card: each <div class="js-merch-stash-check-listing"> holds one item
containers = soup.findAll("div", {"class": "js-merch-stash-check-listing"})

print("-----------------------------------------------------------------------------------------------------------")
print("Search Term:\n"+'"'+soup.h1.text+'"\n') #print <h1> tag contents text
print("Items: ",len(containers)) #print length of container
print("-----------------------------------------------------------------------------------------------------------")

#print(containers[0].a)
container = containers[0]  # leftover debugging line; the loop below reassigns this

# CSV output setup
filename = "EtsyProducts.csv"
f = open(filename, "w", encoding="utf-8")  # utf-8 avoids crashes on non-ASCII shop names
headers = "Brand, Product Name, Cost, Product Page\n"
f.write(headers)

for container in containers:
    brand_container = container.findAll("div", {"class": "v2-listing-card__shop"})
    brand = brand_container[0].p.text  # the first <p> inside the shop div holds the shop name
    cost_container = container.findAll("span", {"class": "currency-value"})
    cost = cost_container[0].text
    product_name = container.a.h3.text.strip()

    urlContainer = container.find('a', href=True)
    productPage = urlContainer['href']

    print('===========================================================================================================')
    print("Brand: " + brand)
    print("Name: " + product_name)
    print("Price: " + cost + "\n")
    print("URL: " + productPage.strip())
    #sleep(randint(3,10))  # would need: from time import sleep, from random import randint

    f.write(brand + "," + product_name.replace(",", "|") + "," + cost + "," + productPage + "\n")  # note: only product_name has commas replaced

f.close()  # close the CSV file

Answers

This is not an error in parsing the HTML, but merely a side effect of the Etsy page being optimised for JavaScript-capable browsers.

from bs4 import BeautifulSoup
import requests

url = 'https://www.etsy.com/search?q=knitted+Toe&explicit=1&order=price_desc'
response = requests.get(url)  # request the search page
print(response.text.count("js-merch-stash-check-listing"))  # count listing cards in the raw HTML

# 48

The initial HTML response from Etsy does contain exactly 48 items. You can verify that by saving response.text to a file and opening that HTML file in a browser: you will see a grid of 12 rows and 4 columns.
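For example, a minimal way to dump the raw HTML for inspection (the filename here is arbitrary):

# Save the raw response so you can open it in a browser and count the cards yourself
with open("etsy_search.html", "w", encoding="utf-8") as fh:
    fh.write(response.text)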

That page contains JS instructions for browsers to load more information through AJAX (probably based on the display size), and that's how the extra entries show up.

That said, your code is all correct. If you wish to grab more results from scraping, you might need to reverse-engineer the Etsy APIs instead, since that is what your browser uses to render all 50 results.
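If reverse-engineering the API is more than you want to take on, another common option (not covered above) is to let a real browser engine execute the JavaScript and then parse the rendered DOM. A minimal sketch with Selenium, assuming Chrome and the selenium package are installed, and reusing the class name from your code (Etsy's markup may have changed since):

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
driver.get('https://www.etsy.com/search?q=knitted+Toe&explicit=1&order=price_desc')

# page_source reflects the DOM after the page's JS has run, so AJAX-loaded cards are included;
# in practice you may still need an explicit wait for the extra cards to finish loading
soup = BeautifulSoup(driver.page_source, 'lxml')
containers = soup.findAll("div", {"class": "js-merch-stash-check-listing"})
print(len(containers))  # should now report the full result count
driver.quit()

Driving a full browser is much slower than a plain requests call, so this is a fallback for pages that only render their content client-side.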
