How to log in to Audible.com, Amazon's subsidiary, using Python

Problem description (votes: 2, answers: 1)

I want to scrape the Audible website, a subsidiary of Amazon.com, using Python and Beautiful Soup. Some of the data is not accessible unless you are logged in to an Audible account. So far I have had no success. I just want to log in with Python and scrape the HTML.

I have tried various pieces of code, such as the one from How to login to Amazon using BeautifulSoup. You would think that simply dropping my credentials into that code would be enough.
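Roughly, the kind of thing I have been trying looks like the sketch below (the URL, form name, and credentials are placeholders); it never gets me a logged-in session:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

"""rough sketch of the "just post the credentials" approach; placeholder values only"""
session = requests.Session()
login_page = session.get("https://www.audible.com/signin")
soup = BeautifulSoup(login_page.text, "lxml")

form = soup.find("form", {"name": "signIn"})
if form is None:
    print("No sign-in form found - the page itself may already be a bot check")
else:
    """collect the hidden inputs, then overwrite the credential fields"""
    payload = {
        tag["name"]: tag.get("value", "")
        for tag in form.find_all("input")
        if tag.get("name")
    }
    payload["email"] = "you@example.com"   # placeholder credentials
    payload["password"] = "hunter2"
    action = urljoin(login_page.url, form.get("action", ""))
    resp = session.post(action, data=payload)
    print(resp.status_code, resp.url)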

python-3.x python-requests
1 Answer

1 vote

Unfortunately, this can no longer be automated in Python in a simple way. This is as far as I could get with Audible AU. The POST requires a bunch of headers, most of which can be extracted, except for metadata1 (more on that at the bottom):

"""load packages"""
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlsplit, parse_qs

"""define URL where login form is located"""
site = "https://www.audible.com.au/signin"

"""initiate session"""
session = requests.Session()

"""define session headers"""
session.headers = {
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "en-US,en;q=0.9,cs;q=0.8",
    "sec-fetch-dest": "document",
    "sec-fetch-mode": "navigate",
    "sec-fetch-site": "none",
    "upgrade-insecure-requests": "1",
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36",
    "metadata1": "",
}

"""get login page"""
resp = session.get(site)
html = resp.text

"""extract clientContext from the login page"""
query = urlsplit(resp.url).query
params = parse_qs(query)
clientContext = params["clientContext"]
new_login_url = "https://www.amazon.com.au/ap/signin/" + str(clientContext[0])

"""get BeautifulSoup object of the html of the login page"""
soup = BeautifulSoup(html, "lxml")

"""scrape login page to get all the needed inputs required for login"""
data = {}
form = soup.find("form", {"name": "signIn"})
for field in form.find_all("input"):
    try:
        data[field["name"]] = field["value"]
    except KeyError:
        # skip inputs that have no name or no value attribute
        pass

"""add username and password to the data for post request"""
data[u"email"] = "EMAIL"
data[u"password"] = "PASSWORD"

"""display: redirect URL, appActionToken, appAction, siteState, openid.return_to, prevRID, workflowState, create, email, password"""
print(new_login_url, data)

"""submit post request with username / password and other needed info"""
post_resp = session.post(new_login_url, data=data, allow_redirects=True)
post_soup = BeautifulSoup(post_resp.content, "lxml")

"""check the captcha"""
warning = post_soup.find("div", id="auth-warning-message-box")
if warning:
    print("Warning:", warning)
else:
    print(post_soup)

session.close()

Add your email address and password in the data["email"] and data["password"] lines. Also, log in with a browser and inspect the traffic to see what metadata1 is on your machine, then put that value into the "metadata1" entry of the session headers. If you are lucky and not detected as a bot, you will get in; otherwise you will get a captcha image.
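One way to capture that value: log in once in the browser with DevTools open, export the traffic as a HAR file, and fish metadata1 out of it. A minimal sketch, assuming the export is saved as audible_login.har (the filename is an assumption):

import json

"""pull metadata1 out of a HAR export of a real browser login; it may show up
as a request header or as a form field in the POST body, so both are checked"""
with open("audible_login.har", encoding="utf-8") as f:
    har = json.load(f)

for entry in har["log"]["entries"]:
    request = entry["request"]
    candidates = list(request.get("headers", []))
    candidates += request.get("postData", {}).get("params", [])
    for item in candidates:
        if item.get("name", "").lower() == "metadata1":
            print(item["value"][:80] + "...")

Paste the full value into the "metadata1" entry of session.headers in the script above.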

metadata1 is a huge base64 payload that contains data collected by the browser, which uniquely identifies you and tells you apart from a bot (mouse clicks, typing delays, page scripts, browser information, compatibility and extensions, Flash version, user agent, script performance, hardware such as the GPU, local storage, canvas size, and so on).
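Since it is base64, you can at least sanity-check whatever you capture; do not expect readable JSON, the decoded bytes are obfuscated fingerprint data. A throwaway sketch (the placeholder string is obviously not a real value):

import base64
import binascii

"""hypothetical placeholder - replace with the metadata1 string from your own browser"""
metadata1_value = "PASTE_CAPTURED_METADATA1_HERE"

try:
    decoded = base64.b64decode(metadata1_value)
    print(f"decoded payload: {len(decoded)} bytes")  # the answer above describes it as a huge payload
except (binascii.Error, ValueError):
    print("not plain base64 - the value may carry a prefix or extra encoding")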
