Python Web Scraper (at a Friend's Request): Working Implementation


swinblacksea, 2018-11-03 18:01:47, 676 views

Environment: Windows 10, Python 3.7

Tools: PyCharm, Anaconda

Main packages: BeautifulSoup, re

Code:


#!/usr/bin/python
# -*- coding: UTF-8 -*-
import re
from urllib import request

from bs4 import BeautifulSoup

# Fetch the research-report page and parse it with the built-in HTML parser.
html = request.urlopen("http://data.eastmoney.com/report/20181101/APPISWTR4upPASearchReport.html")
bs = BeautifulSoup(html, "html.parser")
print("title")
print(bs.title)

# Dump the name / http-equiv / content attributes of every <meta> tag.
print("meta")
links = bs.find_all("meta")
for count, link in enumerate(links, start=1):
    print(count)
    attrs = link.attrs
    if "name" in attrs:
        print("name:", attrs["name"])
    if "http-equiv" in attrs:
        print("httpEquiv:", attrs["http-equiv"])
    if "content" in attrs:
        print("content:", attrs["content"])

# Locate the <p> whose first child contains "盈利预测" (earnings forecast);
# the paragraph two positions later holds the sentence we want.
print("p")
ps = bs.find_all("p")
index = -1
for i, p in enumerate(ps):
    contents = p.contents
    if contents and "盈利预测" in str(contents[0]):
        index = i
        break
needContent = ""
if index != -1 and index + 2 < len(ps):
    needContent = str(ps[index + 2])
print(needContent)

# Extract the forecast period, the EPS figures, and the quoted rating phrase.
match1 = re.search(r'[\u4e00-\u9fa5]{4}20[0-9]{2}[\u4e00-\u9fa5]-20[0-9]{2}[\u4e00-\u9fa5]', needContent)
match2 = re.search(r'EPS为.*元', needContent)
match3 = re.search(r'([\u4e00-\u9fa5]{4}“).*”[\u4e00-\u9fa5]{2}', needContent)
# re.search returns None when a pattern is absent, so guard before .group().
for m in (match1, match2, match3):
    print(m.group() if m else "no match")
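The three regexes only match the kind of forecast sentence the report page happens to contain, and that page may change or disappear. A quick offline check against an invented sample sentence (the sentence below is a made-up example in the same style, not text taken from the target page) shows what each pattern is meant to capture:

```python
import re

# Invented sample in the style of an eastmoney research-report sentence.
sample = "预计公司2018年-2020年EPS为0.50/0.62/0.75元，维持公司“增持”评级。"

# Forecast period: four Chinese characters, then "20xx年-20xx年".
period = re.search(r'[\u4e00-\u9fa5]{4}20[0-9]{2}[\u4e00-\u9fa5]-20[0-9]{2}[\u4e00-\u9fa5]', sample)
# EPS figures: everything between "EPS为" and the unit "元".
eps = re.search(r'EPS为.*元', sample)
# Rating phrase: four Chinese characters, a quoted rating, two trailing characters.
rating = re.search(r'([\u4e00-\u9fa5]{4}“).*”[\u4e00-\u9fa5]{2}', sample)

print(period.group())  # 预计公司2018年-2020年
print(eps.group())     # EPS为0.50/0.62/0.75元
print(rating.group())  # 维持公司“增持”评级
```

Note that `.*` in the second and third patterns is greedy; on a sentence with several "元" or closing quotes, the match would stretch to the last one.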

