Background
WeChat Official Accounts strike me as one of the harder platforms to scrape, but after some tinkering I still came away with something. I didn't use Scrapy (presumably crawling too fast would trip anti-crawl limits anyway), but I'll start writing up more hands-on posts later. A quick overview of the development environment for this one:
- python3
- requests
- psycopg2 (for talking to the PostgreSQL database)
Packet Capture Analysis
This walkthrough isn't restricted to any particular official account, but every account needs its own analysis before each crawl. Open Fiddler and configure the phone to proxy through it. To keep the noise down, add a filter rule in Fiddler so that only the WeChat domain mp.weixin.qq.com is shown:
Configuring a Fiddler filter rule
I follow quite a few accounts; this walkthrough uses the "36氪" (36Kr) account as the example. Read on:
The "36氪" official account
Official account page, top-right corner -> All Messages (全部消息)
On the account's profile page, tap the three solid dots in the top-right corner to enter the message view, scroll down to and tap "全部消息" (All Messages), then pull down a few times to load more historical articles. Switch back to Fiddler and, barring surprises, you should see those requests. The responses are in JSON, and the article data itself is delivered as a JSON string inside the general_msg_list field:
Captured requests for the article list
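Because general_msg_list is itself a JSON string nested inside the outer JSON response, it takes a second json.loads pass to get at the article list. A minimal sketch, assuming resp is one of the captured responses already parsed into a dict; extract_articles is just an illustrative helper of my own, not part of the final crawler:

```python
import json

def extract_articles(resp):
    """Decode the nested general_msg_list string and yield (msg_id, timestamp, info) per article."""
    msg_list = json.loads(resp['general_msg_list'])['list']  # second decode pass
    for msg in msg_list:
        comm = msg['comm_msg_info']        # metadata shared by one push
        ext = msg.get('app_msg_ext_info')  # first article of the push, if any
        if ext:
            yield comm['id'], comm['datetime'], ext
            # a multi-article push carries the remaining articles here
            for item in ext.get('multi_app_msg_item_list', []):
                yield item.get('fileid'), comm['datetime'], item
```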
Analyzing the Article List API
Let's paste in the request URL and Cookie and analyze them:
```
https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz=MzI2NDk5NzA0Mw==&f=json&offset=10&count=10&is_ok=1&scene=124&uin=777&key=777&pass_ticket=NDndxxaZ7p6Z9PYulWpLqMbI0i3ULFeCPIHBFu1sf5pX2IhkGfyxZ6b9JieSYRUy&wxtoken=&appmsg_token=971_Z0lVNQBcGsWColSubRO9H13ZjrPhjuljyxLtiQ~~&x5=0
```

The parameters that matter are __biz (the account's identifier, fixed for a given account), offset and count (paging over the article list), and the two tickets pass_ticket and appmsg_token, which are session-bound and expire. The Cookie that has to accompany the request is essentially the wap_sid2 value, which is session-bound as well. Requesting this URL returns JSON along these lines (content_url is truncated here):

```json
{
    "ret": 0,
    "errmsg": "ok",
    "msg_count": 10,
    "can_msg_continue": 1,
    "general_msg_list": "{\"list\":[{\"comm_msg_info\":{\"id\":1000005700,\"type\":49,\"datetime\":1535100943,\"fakeid\":\"3264997043\",\"status\":2,\"content\":\"\"},\"app_msg_ext_info\":{\"title\":\"金融危机又十年:钱荒之下,二手基金迎来高光时刻\",\"digest\":\"退出永远是基金的主旋律。\",\"content\":\"\",\"fileid\":100034824,\"content_url\":\"http:\\/\\/mp.weixin.qq.com\\/s?...\",\"source_url\":\"\",\"cover\":\"http:\\/\\/mmbiz.qpic.cn\\/mmbiz_jpg\\/QicyPhNHD5vYgdpprkibtnWCAN7l4ZaqibKvopNyCWWLQAwX7QpzWicnQSVfcBZmPrR5YuHS45JIUzVjb0dZTiaLPyA\\/0\",\"subtype\":9,\"is_multi\":0,\"multi_app_msg_item_list\":[],\"author\":\"石亚琼\",\"copyright_stat\":11,\"duration\":0,\"del_flag\":1,\"item_show_type\":0,\"audio_fileid\":0,\"play_url\":\"\",\"malicious_title_reason_id\":0,\"malicious_content_type\":0}}]}",
    "next_offset": 20,
    "video_count": 1,
    "use_video_tab": 1,
    "real_type": 0
}
```

ret and errmsg signal whether the request succeeded, can_msg_continue says whether more history is left, and next_offset is the offset to use for the next page. The fields we want are easy to pick out of general_msg_list. I defined the article table as follows; here is the SQL to create it:
The article data table
```sql
-- ----------------------------
-- Table structure for tb_article
-- ----------------------------
DROP TABLE IF EXISTS "public"."tb_article";
CREATE TABLE "public"."tb_article" (
  "id" serial4 PRIMARY KEY,
  "msg_id" int8 NOT NULL,
  "title" varchar(200) COLLATE "pg_catalog"."default" NOT NULL,
  "author" varchar(20) COLLATE "pg_catalog"."default",
  "cover" varchar(500) COLLATE "pg_catalog"."default",
  "digest" varchar(200) COLLATE "pg_catalog"."default",
  "source_url" varchar(800) COLLATE "pg_catalog"."default",
  "content_url" varchar(600) COLLATE "pg_catalog"."default" NOT NULL,
  "post_time" timestamp(6),
  "create_time" timestamp(6) NOT NULL
);
COMMENT ON COLUMN "public"."tb_article"."id" IS 'auto-increment primary key';
COMMENT ON COLUMN "public"."tb_article"."msg_id" IS 'message id (unique)';
COMMENT ON COLUMN "public"."tb_article"."title" IS 'title';
COMMENT ON COLUMN "public"."tb_article"."author" IS 'author';
COMMENT ON COLUMN "public"."tb_article"."cover" IS 'cover image';
COMMENT ON COLUMN "public"."tb_article"."digest" IS 'digest/summary';
COMMENT ON COLUMN "public"."tb_article"."source_url" IS 'original source URL';
COMMENT ON COLUMN "public"."tb_article"."content_url" IS 'article URL';
COMMENT ON COLUMN "public"."tb_article"."post_time" IS 'publish time';
COMMENT ON COLUMN "public"."tb_article"."create_time" IS 'insert time';
COMMENT ON TABLE "public"."tb_article" IS 'official account article table';

-- ----------------------------
-- Indexes structure for table tb_article
-- ----------------------------
CREATE UNIQUE INDEX "unique_msg_id" ON "public"."tb_article" USING btree (
  "msg_id" "pg_catalog"."int8_ops" ASC NULLS LAST
);
```
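One consequence of the unique index on msg_id: re-crawling pages you've already saved will make the plain INSERT below hit a duplicate-key error. If you'd rather have idempotent inserts, PostgreSQL's ON CONFLICT clause is one option; the following is a hypothetical variant of the _save_article SQL used below, not what the original code does:

```python
def save_article_sql_ignore_dupes():
    """Hypothetical alternative to _save_article below: skip rows whose
    msg_id already exists instead of raising a unique-violation error."""
    return ('insert into tb_article(msg_id,title,author,cover,digest,source_url,'
            'content_url,post_time,create_time) '
            'values(%s,%s,%s,%s,%s,%s,%s,%s,%s) '
            'on conflict (msg_id) do nothing')
```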
Here is the code that requests the article API, parses the returned data, and saves it to the database:

```python
import json
import time
from datetime import datetime

import requests

from utils import pgs


class WxMps(object):
    """Crawler for WeChat official account articles"""

    def __init__(self, _biz, _pass_ticket, _app_msg_token, _cookie, _offset=0):
        self.offset = _offset
        self.biz = _biz  # official account id
        self.msg_token = _app_msg_token  # ticket (not fixed)
        self.pass_ticket = _pass_ticket  # ticket (not fixed)
        self.headers = {
            'Cookie': _cookie,  # Cookie (not fixed)
            'User-Agent': 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.132 '
        }
        wx_mps = 'wxmps'  # database name, user and password are all the same here (replace with your own)
        self.postgres = pgs.Pgs(host='localhost', port='5432', db_name=wx_mps, user=wx_mps, password=wx_mps)

    def start(self):
        """Request the article list API of the official account"""

        offset = self.offset
        while True:
            api = 'https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz={0}' \
                  '&f=json&offset={1}&count=10&is_ok=1&scene=124&uin=777&key=777' \
                  '&pass_ticket={2}&wxtoken=&appmsg_token={3}&x5=0' \
                  .format(self.biz, offset, self.pass_ticket, self.msg_token)

            resp = requests.get(api, headers=self.headers).json()
            ret, status = resp.get('ret'), resp.get('errmsg')  # status info
            if ret == 0 or status == 'ok':
                print('Crawl article: ' + api)
                offset = resp['next_offset']  # offset for the next request
                general_msg_list = resp['general_msg_list']
                msg_list = json.loads(general_msg_list)['list']  # the article list
                for msg in msg_list:
                    comm_msg_info = msg['comm_msg_info']  # metadata shared by all articles of one push
                    msg_id = comm_msg_info['id']  # message id
                    post_time = datetime.fromtimestamp(comm_msg_info['datetime'])  # publish time

                    app_msg_ext_info = msg.get('app_msg_ext_info')  # the article data itself
                    if app_msg_ext_info:
                        # first article of this push
                        self._parse_articles(app_msg_ext_info, msg_id, post_time)
                        # remaining articles of this push, if it carried several
                        for item in app_msg_ext_info.get('multi_app_msg_item_list', []):
                            self._parse_articles(item, item['fileid'], post_time)
                time.sleep(3)  # be gentle with the API
            else:
                break

    def _parse_articles(self, info, msg_id, post_time):
        """Parse the nested article data and save it to the database"""

        title = info.get('title')  # title
        cover = info.get('cover')  # cover image
        author = info.get('author')  # author
        digest = info.get('digest')  # digest/summary
        source_url = info.get('source_url')  # original source URL
        content_url = info.get('content_url')  # WeChat article URL
        # ext_data = json.dumps(info, ensure_ascii=False)  # raw data

        self.postgres.handler(self._save_article(), (msg_id, title, author, cover, digest,
                                                     source_url, content_url, post_time,
                                                     datetime.now()), fetch=True)

    @staticmethod
    def _save_article():
        sql = 'insert into tb_article(msg_id,title,author,cover,digest,source_url,content_url,' \
              'post_time,create_time) values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql


if __name__ == '__main__':
    biz = 'MzI2NDk5NzA0Mw=='  # "36氪"
    pass_ticket = 'NDndxxaZ7p6Z9PYulWpLqMbI0i3ULFeCPIHBFu1sf5pX2IhkGfyxZ6b9JieSYRUy'
    app_msg_token = '971_Z0lVNQBcGsWColSubRO9H13ZjrPhjuljyxLtiQ~~'
    cookie = 'wap_sid2=CO3YwOQHEogBQnN4VTNhNmxQWmc3UHI2U3kteWhUeVExZHFVMnN0QXlsbzVJRUJKc1pkdVFUU2Y5UzhSVEtOZmt1VVlYTkR4SEllQ2huejlTTThJWndMQzZfYUw2SldLVGVMQUthUjc3QWdVMUdoaGN0Nml2SU05cXR1dTN2RkhRUVd1V2Y3SFJ5d01BQUF+fjCB1pLcBTgNQJVO'
    # these values differ per account and per session; re-capture them with Fiddler before each crawl
    wxMps = WxMps(biz, pass_ticket, app_msg_token, cookie)
    wxMps.start()  # start crawling articles
```
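The code above imports a small pgs.Pgs helper from a utils package that this post never shows. Below is a minimal sketch of what such a wrapper might look like, assuming psycopg2; note that for fetch=True to hand back the new row's id, the insert statement would need something like a RETURNING id clause, so treat this as a guess at the wrapper's shape rather than the real implementation:

```python
import psycopg2


class Pgs:
    """Hypothetical sketch of the utils.pgs.Pgs wrapper used above."""

    def __init__(self, host, port, db_name, user, password):
        self.conn = psycopg2.connect(host=host, port=port, dbname=db_name,
                                     user=user, password=password)

    def handler(self, sql, params=None, fetch=False):
        """Execute one statement; optionally return the first row's first column."""
        try:
            with self.conn.cursor() as cur:
                cur.execute(sql, params)
                # cur.description is None when the statement returns no rows
                row = cur.fetchone() if fetch and cur.description else None
            self.conn.commit()
            return row[0] if row else None
        except psycopg2.Error as e:
            self.conn.rollback()  # e.g. a duplicate msg_id hitting the unique index
            print('db error: %s' % e)
            return None
```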
Analyzing the Article Comments API
The idea for fetching comments is much the same, just a bit more involved. First open an article that has comments on the phone, then look at the request Fiddler captured:
Comments under an official account article
Captured request for the article comments API
Extract the URL and Cookie from it and analyze again:
```
https://mp.weixin.qq.com/mp/appmsg_comment?action=getcomment&scene=0&__biz=MzI2NDk5NzA0Mw==&appmsgid=...&idx=1&comment_id=...&offset=0&limit=100&uin=777&key=777&pass_ticket=...&wxtoken=777&devicetype=android-26&clientversion=26060739&appmsg_token=...&x5=0&f=json
```

Besides the familiar __biz, pass_ticket and appmsg_token, this API needs two new parameters: appmsgid (a ticket id, not fixed) and comment_id (fixed per article). Both are embedded as JavaScript variables in the article page itself, as is a fresh appmsg_token, so we can fetch the article via its content_url and dig them out with regular expressions:

```python
def _parse_article_detail(self, content_url, article_id):
    """Extract from the article page the parameters needed for fetching comments;
    article_id is the id of the already-saved article"""

    try:
        api = content_url.replace('amp;', '').replace('#wechat_redirect', '').replace('http', 'https')
        html = requests.get(api, headers=self.headers).text
    except:
        print('Failed to fetch article page: ' + content_url)
    else:
        str_comment = re.search(r'var comment_id = "(.*)" \|\| "(.*)" \* 1;', html)
        str_msg = re.search(r"var appmsgid = '' \|\| '(.*)'\|\|", html)
        str_token = re.search(r'window.appmsg_token = "(.*)";', html)

        if str_comment and str_msg and str_token:
            comment_id = str_comment.group(1)  # comment id (fixed per article)
            app_msg_id = str_msg.group(1)  # ticket id (not fixed)
            appmsg_token = str_token.group(1)  # ticket token (not fixed)
```
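These regexes are brittle by nature, since they scrape inline JavaScript out of the page source, so it's worth sanity-checking them before a long crawl. A tiny self-contained test; the HTML fragment and all values in it are fabricated placeholders, not real captured data:

```python
import re

# fabricated sample mimicking the inline JS found in a WeChat article page
sample_html = '''
var comment_id = "1000000123" || "1000000123" * 1;
var appmsgid = '' || '2650123456'||"";
window.appmsg_token = "971_abcdefg";
'''

print(re.search(r'var comment_id = "(.*)" \|\| "(.*)" \* 1;', sample_html).group(1))  # 1000000123
print(re.search(r"var appmsgid = '' \|\| '(.*)'\|\|", sample_html).group(1))          # 2650123456
print(re.search(r'window.appmsg_token = "(.*)";', sample_html).group(1))             # 971_abcdefg
```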
Now back to the JSON this API returns; after analyzing its structure, we can define the comment table (SQL included):
The article comment data table
```sql
-- ----------------------------
-- Table structure for tb_article_comment
-- ----------------------------
DROP TABLE IF EXISTS "public"."tb_article_comment";
CREATE TABLE "public"."tb_article_comment" (
  "id" serial4 PRIMARY KEY,
  "article_id" int4 NOT NULL,
  "comment_id" varchar(50) COLLATE "pg_catalog"."default",
  "nick_name" varchar(50) COLLATE "pg_catalog"."default" NOT NULL,
  "logo_url" varchar(300) COLLATE "pg_catalog"."default",
  "content_id" varchar(50) COLLATE "pg_catalog"."default" NOT NULL,
  "content" varchar(3000) COLLATE "pg_catalog"."default" NOT NULL,
  "like_num" int2,
  "comment_time" timestamp(6),
  "create_time" timestamp(6) NOT NULL
);
COMMENT ON COLUMN "public"."tb_article_comment"."id" IS 'auto-increment primary key';
COMMENT ON COLUMN "public"."tb_article_comment"."article_id" IS 'article foreign key id';
COMMENT ON COLUMN "public"."tb_article_comment"."comment_id" IS 'comment API id';
COMMENT ON COLUMN "public"."tb_article_comment"."nick_name" IS 'user nickname';
COMMENT ON COLUMN "public"."tb_article_comment"."logo_url" IS 'avatar URL';
COMMENT ON COLUMN "public"."tb_article_comment"."content_id" IS 'comment id (unique)';
COMMENT ON COLUMN "public"."tb_article_comment"."content" IS 'comment text';
COMMENT ON COLUMN "public"."tb_article_comment"."like_num" IS 'number of likes';
COMMENT ON COLUMN "public"."tb_article_comment"."comment_time" IS 'comment time';
COMMENT ON COLUMN "public"."tb_article_comment"."create_time" IS 'insert time';
COMMENT ON TABLE "public"."tb_article_comment" IS 'official account article comment table';

-- ----------------------------
-- Indexes structure for table tb_article_comment
-- ----------------------------
CREATE UNIQUE INDEX "unique_content_id" ON "public"."tb_article_comment" USING btree (
  "content_id" COLLATE "pg_catalog"."default" "pg_catalog"."text_ops" ASC NULLS LAST
);
```

The long march is nearly over. Here, finally, is this part of the code; since the article URL has to be obtained first, it's bundled together with the article-crawling code above:
```python
import json
import re
import time
from datetime import datetime

import requests

from utils import pgs


class WxMps(object):
    """Crawler for WeChat official account articles and comments"""

    def __init__(self, _biz, _pass_ticket, _app_msg_token, _cookie, _offset=0):
        self.offset = _offset
        self.biz = _biz  # official account id
        self.msg_token = _app_msg_token  # ticket (not fixed)
        self.pass_ticket = _pass_ticket  # ticket (not fixed)
        self.headers = {
            'Cookie': _cookie,  # Cookie (not fixed)
            'User-Agent': 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.132 '
        }
        wx_mps = 'wxmps'  # database name, user and password are all the same here (replace with your own)
        self.postgres = pgs.Pgs(host='localhost', port='5432', db_name=wx_mps, user=wx_mps, password=wx_mps)

    def start(self):
        """Request the article list API of the official account"""

        offset = self.offset
        while True:
            api = 'https://mp.weixin.qq.com/mp/profile_ext?action=getmsg&__biz={0}' \
                  '&f=json&offset={1}&count=10&is_ok=1&scene=124&uin=777&key=777' \
                  '&pass_ticket={2}&wxtoken=&appmsg_token={3}&x5=0' \
                  .format(self.biz, offset, self.pass_ticket, self.msg_token)

            resp = requests.get(api, headers=self.headers).json()
            ret, status = resp.get('ret'), resp.get('errmsg')  # status info
            if ret == 0 or status == 'ok':
                print('Crawl article: ' + api)
                offset = resp['next_offset']  # offset for the next request
                general_msg_list = resp['general_msg_list']
                msg_list = json.loads(general_msg_list)['list']  # the article list
                for msg in msg_list:
                    comm_msg_info = msg['comm_msg_info']  # metadata shared by all articles of one push
                    msg_id = comm_msg_info['id']  # message id
                    post_time = datetime.fromtimestamp(comm_msg_info['datetime'])  # publish time

                    app_msg_ext_info = msg.get('app_msg_ext_info')  # the article data itself
                    if app_msg_ext_info:
                        # first article of this push
                        self._parse_articles(app_msg_ext_info, msg_id, post_time)
                        # remaining articles of this push, if it carried several
                        for item in app_msg_ext_info.get('multi_app_msg_item_list', []):
                            self._parse_articles(item, item['fileid'], post_time)
                time.sleep(3)  # be gentle with the API
            else:
                break

    def _parse_articles(self, info, msg_id, post_time):
        """Parse the nested article data and save it to the database"""

        title = info.get('title')  # title
        cover = info.get('cover')  # cover image
        author = info.get('author')  # author
        digest = info.get('digest')  # digest/summary
        source_url = info.get('source_url')  # original source URL
        content_url = info.get('content_url')  # WeChat article URL
        # ext_data = json.dumps(info, ensure_ascii=False)  # raw data

        content_url = content_url.replace('amp;', '').replace('#wechat_redirect', '').replace('http', 'https')
        article_id = self.postgres.handler(self._save_article(), (msg_id, title, author, cover, digest,
                                                                  source_url, content_url, post_time,
                                                                  datetime.now()), fetch=True)
        if article_id:
            self._parse_article_detail(content_url, article_id)

    def _parse_article_detail(self, content_url, article_id):
        """Extract from the article page the parameters needed for fetching comments;
        article_id is the id of the already-saved article"""

        try:
            html = requests.get(content_url, headers=self.headers).text
        except:
            print('Failed to fetch article page: ' + content_url)
        else:
            str_comment = re.search(r'var comment_id = "(.*)" \|\| "(.*)" \* 1;', html)
            str_msg = re.search(r"var appmsgid = '' \|\| '(.*)'\|\|", html)
            str_token = re.search(r'window.appmsg_token = "(.*)";', html)

            if str_comment and str_msg and str_token:
                comment_id = str_comment.group(1)  # comment id (fixed per article)
                app_msg_id = str_msg.group(1)  # ticket id (not fixed)
                appmsg_token = str_token.group(1)  # ticket token (not fixed)

                # all three are required
                if appmsg_token and app_msg_id and comment_id:
                    print('Crawl article comments: ' + content_url)
                    self._crawl_comments(app_msg_id, comment_id, appmsg_token, article_id)

    def _crawl_comments(self, app_msg_id, comment_id, appmsg_token, article_id):
        """Crawl an article's comments"""

        api = 'https://mp.weixin.qq.com/mp/appmsg_comment?action=getcomment&scene=0&__biz={0}' \
              '&appmsgid={1}&idx=1&comment_id={2}&offset=0&limit=100&uin=777&key=777' \
              '&pass_ticket={3}&wxtoken=777&devicetype=android-26&clientversion=26060739' \
              '&appmsg_token={4}&x5=0&f=json'.format(self.biz, app_msg_id, comment_id,
                                                     self.pass_ticket, appmsg_token)

        resp = requests.get(api, headers=self.headers).json()
        ret, status = resp['base_resp']['ret'], resp['base_resp']['errmsg']
        if ret == 0 or status == 'ok':
            for comment in resp.get('elected_comment', []):  # featured comments
                nick_name = comment.get('nick_name')  # nickname
                logo_url = comment.get('logo_url')  # avatar
                comment_time = datetime.fromtimestamp(comment.get('create_time'))  # comment time
                content = comment.get('content')  # comment text
                content_id = comment.get('content_id')  # comment id (unique)
                like_num = comment.get('like_num')  # number of likes
                # reply_list = comment.get('reply')['reply_list']  # replies to the comment

                self.postgres.handler(self._save_article_comment(), (article_id, comment_id, nick_name,
                                                                     logo_url, content_id, content, like_num,
                                                                     comment_time, datetime.now()))

    @staticmethod
    def _save_article():
        sql = 'insert into tb_article(msg_id,title,author,cover,digest,source_url,content_url,' \
              'post_time,create_time) values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql

    @staticmethod
    def _save_article_comment():
        sql = 'insert into tb_article_comment(article_id,comment_id,nick_name,logo_url,content_id,' \
              'content,like_num,comment_time,create_time) values(%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        return sql


if __name__ == '__main__':
    biz = 'MzI2NDk5NzA0Mw=='  # "36氪"
    pass_ticket = 'NDndxxaZ7p6Z9PYulWpLqMbI0i3ULFeCPIHBFu1sf5pX2IhkGfyxZ6b9JieSYRUy'
    app_msg_token = '971_Z0lVNQBcGsWColSubRO9H13ZjrPhjuljyxLtiQ~~'
    cookie = 'wap_sid2=CO3YwOQHEogBQnN4VTNhNmxQWmc3UHI2U3kteWhUeVExZHFVMnN0QXlsbzVJRUJKc1pkdVFUU2Y5UzhSVEtOZmt1VVlYTkR4SEllQ2huejlTTThJWndMQzZfYUw2SldLVGVMQUthUjc3QWdVMUdoaGN0Nml2SU05cXR1dTN2RkhRUVd1V2Y3SFJ5d01BQUF+fjCB1pLcBTgNQJVO'
    # these values differ per account and per session; re-capture them with Fiddler before each crawl
    wxMps = WxMps(biz, pass_ticket, app_msg_token, cookie)
    wxMps.start()  # start crawling articles and comments
```
Wrap-up
Finally, here's a peek at the data in the database. Single-threaded crawling is slow, and I don't actually need this data, so this was really just finger practice:
A sample of the crawled data
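To spot-check what actually landed in the tables, a quick count query is enough; a minimal sketch using psycopg2 directly, assuming the same wxmps database settings as the crawler above:

```python
import psycopg2

# connection settings matching the crawler above (replace with your own)
conn = psycopg2.connect(host='localhost', port='5432', dbname='wxmps',
                        user='wxmps', password='wxmps')
with conn.cursor() as cur:
    cur.execute('select count(*) from tb_article')
    print('articles: %d' % cur.fetchone()[0])
    cur.execute('select count(*) from tb_article_comment')
    print('comments: %d' % cur.fetchone()[0])
conn.close()
```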
Writing a crawler is sometimes painstaking work. If it all feels like too much trouble, the WechatSogou project is worth a look. Questions are welcome in the comments below.
Full code: GitHub