Here is a simple Python function for checking Baidu keyword rankings. A quick overview:
1. Random User-Agent: each request picks a UA string at random.
2. Simple to use: just call getRank(keyword, domain).
3. Encoding conversion: encoding issues should be handled.
4. Rich results: not only the rank, but also the result's title, URL and cache (snapshot) date, which is useful for SEO work (see the example output after this list).
5. Easy to build into a small tool or to use as-is.
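To illustrate point 4, here is a hypothetical call and the kind of string getRank returns once the code below is loaded; the keyword, domain, rank, title, date and URL are purely illustrative values, not real query results.

# Hypothetical example; all values below are illustrative.
print getRank('SEO', 'www.example.com')
# -> SEO, rank 3, Example Page Title, 2015-3-1, http://www.example.com/page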
The function is single-threaded and therefore slow; feel free to adapt it to your own needs. A rough multi-threaded sketch is given after the code listing.
The code is as follows:
#coding=utf-8
import requests
import BeautifulSoup
import re
import random
def decodeAnyWord(w): # decode a byte string, trying utf-8 first and falling back to gb2312
    try:
        w.decode('utf-8')
    except:
        w = w.decode('gb2312')
    else:
        w = w.decode('utf-8')
    return w

def createURL(checkWord): # create the Baidu SERP URL for the search words
    checkWord = checkWord.strip()
    checkWord = checkWord.replace(' ', '+').replace('\n', '')
    baiduURL = 'http://www.baidu.com/s?wd=%s&rn=100' % checkWord
    return baiduURL

def getContent(baiduURL): # fetch the content of the SERP with a random User-Agent
    uaList = ['Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+1.1.4322;+TencentTraveler)',
              'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729)',
              'Mozilla/5.0+(Windows+NT+5.1)+AppleWebKit/537.1+(KHTML,+like+Gecko)+Chrome/21.0.1180.89+Safari/537.1',
              'Mozilla/4.0+(compatible;+MSIE+6.0;+Windows+NT+5.1;+SV1)',
              'Mozilla/5.0+(Windows+NT+6.1;+rv:11.0)+Gecko/20100101+Firefox/11.0',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+SV1)',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+GTB7.1;+.NET+CLR+2.0.50727)',
              'Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0;+KB974489)']
    headers = {'User-Agent': random.choice(uaList)}
    r = requests.get(baiduURL, headers=headers)
    return r.content

def getLastURL(rawurl): # follow redirects and return the final URL
    r = requests.get(rawurl)
    return r.url

def getAtext(atext): # get the text between <a> and </a>
    pat = re.compile(r'<a .*?>(.*?)</a>')
    match = pat.findall(atext.replace('\n', ''))
    pureText = match[0].replace('<em>', '').replace('</em>', '')
    return pureText.replace('\n', '')

def getCacheDate(t): # get the date of the Baidu cache (snapshot)
    pat = re.compile(r'<span class="g">.*?(\d{4}-\d{1,2}-\d{1,2}) </span>')
    match = pat.findall(t)
    cacheDate = match[0]
    return cacheDate

def getRank(checkWord, domain): # main routine
    checkWord = checkWord.replace('\n', '')
    checkWord = decodeAnyWord(checkWord)
    baiduURL = createURL(checkWord)
    cont = getContent(baiduURL)
    soup = BeautifulSoup.BeautifulSoup(cont)
    results = soup.findAll('table', {'class': 'result'}) # find all results on this page
    for result in results:
        checkData = unicode(result.find('span', {'class': 'g'}))
        if re.compile(r'^[^/]*%s.*?' % domain).match(checkData.replace('<b>', '').replace('</b>', '')): # TODO: tighten this regex
            nowRank = result['id'] # the rank, taken from the result's id, when the domain matches
            resLink = result.find('h3').a
            resURL = resLink['href']
            domainURL = getLastURL(resURL) # get the target URL
            resTitle = getAtext(unicode(resLink)) # get the title of the target page
            rescache = result.find('span', {'class': 'g'})
            cacheDate = getCacheDate(unicode(rescache)) # get the cache date of the target page
            res = u'%s, rank %s, %s, %s, %s' % (checkWord, nowRank, resTitle, cacheDate, domainURL)
            return res.encode('gb2312')
    return '>100' # the domain did not appear in the top 100 results
domain = 'www.baidu.com' # set the domain whose ranking you want to check
print getRank('百度', domain)
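The script above checks one keyword at a time. As a possible starting point for the multi-threaded adaptation mentioned in the introduction, here is a minimal sketch (not part of the original script) that runs several keywords in parallel with a thread pool; it assumes the getRank function and domain variable defined above, and the keyword list is purely illustrative. Mind Baidu's rate limits when raising the worker count.

# Minimal parallel sketch, assuming getRank and domain from the script above.
from multiprocessing.dummy import Pool  # thread-backed Pool from the standard library

keywords = ['SEO', '百度']  # hypothetical keyword list

def check(word):
    try:
        return getRank(word, domain)
    except Exception as e:  # one bad request should not kill the whole batch
        return '%s, error: %s' % (word, e)

pool = Pool(4)  # four worker threads; adjust with care
for line in pool.map(check, keywords):
    print line
pool.close()
pool.join()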