Question: Unable to extract the link attached to the See all button from the webpage

I've created a script to log in to LinkedIn using requests. The script works fine.

After logging in, I use this URL https://www.linkedin.com/groups/137920/ and scrape the name Marketing Intelligence Professionals from there, which you can see in this image.

The script can parse the name flawlessly. What I wish to do now, however, is scrape the link connected to the See all button located at the bottom of that page, shown in this image.

Group link (you need to log in to access the content)

What I've created so far (it is able to scrape the name shown in the first image):

import json
import requests
from bs4 import BeautifulSoup

link = 'https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin'
post_url = 'https://www.linkedin.com/checkpoint/lg/login-submit'
target_url = 'https://www.linkedin.com/groups/137920/'

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'
    # collect the hidden form fields from the login page, then add the credentials
    r = s.get(link)
    soup = BeautifulSoup(r.text,"lxml")
    payload = {i['name']:i.get('value','') for i in soup.select('input[name]')}
    payload['session_key'] = 'your email' #put your username here
    payload['session_password'] = 'your password' #put your password here
    r = s.post(post_url,data=payload)
    # fetch the group page and pull the group name out of the JSON embedded in a <code> tag
    r = s.get(target_url)
    soup = BeautifulSoup(r.text,"lxml")
    items = soup.select_one("code:contains('viewerGroupMembership')").get_text(strip=True)
    print(json.loads(items)['data']['name']['text'])

How can I scrape the link connected to the See all button from there?

Answer

When you click "See all", an internal REST API is called:

GET https://www.linkedin.com/voyager/api/search/blended

The keywords query parameter contains the title of the group you initially requested (the group title from the initial page).
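For illustration, the full request fired by the See all button looks roughly like this (the keywords value is the group title from the question; the other parameter values mirror the example further below, so treat the exact set as an approximation):

GET https://www.linkedin.com/voyager/api/search/blended
    ?count=10
    &keywords=Marketing%20Intelligence%20Professionals
    &origin=SWITCH_SEARCH_VERTICAL
    &q=all
    &start=0
    &filters=List(resultType-%3EGROUPS)
    &queryContext=List(spellCorrectionEnabled-%3Etrue)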

To get the group name, you could scrape the HTML of the initial page, but there is also an API that returns the group information when you supply the group ID:

GET https://www.linkedin.com/voyager/api/groups/groups/urn:li:group:GROUP_ID

Your group ID is 137920 and can be extracted directly from the URL.

An example:

import requests
from bs4 import BeautifulSoup
import re
from urllib.parse import urlencode

username = 'your username'
password = 'your password'

link = 'https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin'
post_url = 'https://www.linkedin.com/checkpoint/lg/login-submit'
target_url = 'https://www.linkedin.com/groups/137920/'

# extract the group ID (137920) from the group URL
group_res = re.search('.*/(.*)/$', target_url)
group_id = group_res.group(1)

with requests.Session() as s:
    # login
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'
    r = s.get(link)
    soup = BeautifulSoup(r.text,"lxml")
    payload = {i['name']:i.get('value','') for i in soup.select('input[name]')}
    payload['session_key'] = username
    payload['session_password'] = password
    r = s.post(post_url, data = payload)

    # API: the csrf-token header must carry the JSESSIONID cookie value (surrounding quotes stripped)
    csrf_token = s.cookies.get_dict()["JSESSIONID"].replace("\"","")
    r = s.get(f"https://www.linkedin.com/voyager/api/groups/groups/urn:li:group:{group_id}",
        headers= {
            "csrf-token": csrf_token
        })
    group_name = r.json()["name"]["text"]
    print(f"searching data for group {group_name}")
    params = {
        "count": 10,
        "keywords": group_name,
        "origin": "SWITCH_SEARCH_VERTICAL",
        "q": "all",
        "start": 0
    }
    r = s.get(f"https://www.linkedin.com/voyager/api/search/blended?{urlencode(params)}&filters=List(resultType-%3EGROUPS)&queryContext=List(spellCorrectionEnabled-%3Etrue)",
        headers= {
            "csrf-token": csrf_token,
            "Accept": "application/vnd.linkedin.normalized+json+2.1",
            "x-restli-protocol-version": "2.0.0"
        })
    # with the normalized+json media type, the matched group entities are returned in the "included" list
    result = r.json()["included"]
    print(result)
    print("list of groupName/link")
    print([
        (t["groupName"], f'https://www.linkedin.com/groups/{t["objectUrn"].split(":")[3]}') 
        for t in result
    ])

A few points to note:

  • Those API calls require the session cookies obtained by logging in

  • Those API calls require a specific header carrying the XSRF token, whose value is the same as the JSESSIONID cookie value

  • The search call requires a special media type in the Accept header: application/vnd.linkedin.normalized+json+2.1

  • The parentheses inside the queryContext and filters fields must not be url-encoded, otherwise these parameters will not be taken into account (see the sketch below)
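A minimal sketch of that last point (the keywords value is a placeholder) shows why the example above appends filters and queryContext by hand instead of passing them through urlencode:

from urllib.parse import urlencode

# urlencode escapes the parentheses of the Rest.li list syntax, and the parameter is then ignored:
print(urlencode({"filters": "List(resultType->GROUPS)"}))
# filters=List%28resultType-%3EGROUPS%29

# appending the parameter as a preformatted string keeps the parentheses literal
# (only the '>' is percent-encoded, as in the example above):
base = "https://www.linkedin.com/voyager/api/search/blended"
query = urlencode({"count": 10, "keywords": "placeholder group name", "q": "all", "start": 0})
url = f"{base}?{query}&filters=List(resultType-%3EGROUPS)&queryContext=List(spellCorrectionEnabled-%3Etrue)"
print(url)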
