Question

Given a soup, I need to get the first n elements with class="foo".

This can be done by:

soup.find_all(class_='foo', limit=n)

However, this is a slow process, as the elements I'm trying to find are located at the very bottom of the document.
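One way to cut the parsing cost itself is bs4's `SoupStrainer`: it tells the parser to build tag objects only for matching elements, so the rest of the document is skipped rather than turned into a tree. It still scans the whole document, but it is usually much faster when you only need a few tags. A minimal sketch with made-up HTML:

```python
from bs4 import BeautifulSoup, SoupStrainer

# Toy document standing in for a large page
html = '<div><a class="foo" href="/a">A</a><a class="bar" href="/b">B</a><a class="foo" href="/c">C</a></div>'

# Only build tags whose class is "foo"; everything else is discarded during parsing
only_foo = SoupStrainer(class_="foo")
soup = BeautifulSoup(html, "html.parser", parse_only=only_foo)

links = soup.find_all(class_="foo", limit=2)
print([a["href"] for a in links])  # ['/a', '/c']
```

Note that `parse_only` works with `html.parser` and `lxml`, but is ignored by `html5lib`.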

Here is my code:

    main_num = 1
    main_page = 'https://rawdevart.com/search/?page={p_num}&ctype_inc=0'
    # get_soup returns bs4 soup of a link
    main_soup = get_soup(main_page.format(p_num=main_num))
    
    # get_last_page returns the number of pages which is 64
    last_page_num = get_last_page(main_soup) 
    for sub_num in range(1, last_page_num+1):
        sub_soup = get_soup(main_page.format(p_num=sub_num))
        arr_links = sub_soup.find_all(class_='head')
        # process arr_links

Answers

The head class is set on the a tags on this page, so I assume you want to grab all of those links and keep moving through all the search pages.

Here's one way to get that done:

import requests
from bs4 import BeautifulSoup

base_url = "https://rawdevart.com"

# The total page count is displayed in a <small class="d-block text-muted"> element
first_page = requests.get(f"{base_url}/search/?page=1&ctype_inc=0").text
total_pages = (
    BeautifulSoup(first_page, "html.parser")
    .find("small", class_="d-block text-muted")
    .getText()
    .split()[2]
)

pages = [
    f"{base_url}/search/?page={n}&ctype_inc=0"
    for n in range(1, int(total_pages) + 1)
]
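The `total_pages` extraction relies on the page count being the third whitespace-separated token (index 2) of the `<small>` element's text. I don't have the site's exact wording, so the markup below is a hypothetical illustration of text in that shape; the real text on rawdevart.com may differ:

```python
from bs4 import BeautifulSoup

# Hypothetical markup mirroring the structure the extraction relies on
html = '<small class="d-block text-muted">1 of 64 pages</small>'

total = (
    BeautifulSoup(html, "html.parser")
    .find("small", class_="d-block text-muted")
    .getText()
    .split()[2]  # ['1', 'of', '64', 'pages'] -> '64'
)
print(total)  # 64
```

If the site ever rewords that element, the index will need adjusting, so it's worth a quick check in the browser's inspector.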

all_follow_links = []

for page in pages[:2]:
    r = requests.get(page).text
    all_follow_links.extend(
        [
            f'{base_url}{a["href"]}' for a in
            BeautifulSoup(r, "html.parser").find_all("a", class_="head")
        ]
    )

print(all_follow_links)

Output (one link per line for readability):

https://rawdevart.com/comic/my-death-flags-show-no-sign-ending/
https://rawdevart.com/comic/tsuki-ga-michibiku-isekai-douchuu/
https://rawdevart.com/comic/im-not-a-villainess-just-because-i-can-control-darkness-doesnt-mean-im-a-bad-person/
https://rawdevart.com/comic/tensei-kusushi-wa-isekai-wo-meguru/
https://rawdevart.com/comic/iceblade-magician-rules-over-world/
https://rawdevart.com/comic/isekai-demo-bunan-ni-ikitai-shoukougun/
https://rawdevart.com/comic/every-class-has-been-mass-summoned-i-strongest-under-disguise-weakest-merchant/
https://rawdevart.com/comic/isekai-onsen-ni-tensei-shita-ore-no-kounou-ga-tondemosugiru/
https://rawdevart.com/comic/kubo-san-wa-boku-mobu-wo-yurusanai/
https://rawdevart.com/comic/gabriel-dropout/
and more ...

Note: to crawl all the pages, just remove the slice from this line:

for page in pages[:2]:
    # the rest of the loop body

So it looks like this:

for page in pages:
    # the rest of the loop body
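One small robustness tweak, not in the original answer: the f-string `f'{base_url}{a["href"]}'` assumes every href is site-relative. `urllib.parse.urljoin` from the standard library handles both relative and already-absolute hrefs correctly:

```python
from urllib.parse import urljoin

base_url = "https://rawdevart.com"

# Relative hrefs are resolved against the base
print(urljoin(base_url, "/comic/gabriel-dropout/"))
# https://rawdevart.com/comic/gabriel-dropout/

# Already-absolute hrefs pass through unchanged
print(urljoin(base_url, "https://other.site/x"))
# https://other.site/x
```

In the loop, that would mean `urljoin(base_url, a["href"])` in place of the f-string.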