
Python Crawler: Fetching, Parsing, and Storing Web Page Content, Using Boohee (薄荷网) as an Example (2)

urllib and urllib2 in Python 2
Using urllib2

urllib2 can accept an instance of the Request class to set the headers of a URL request, so you can attach login information such as cookies and disguise information such as a User-Agent string.
For example:

req = urllib2.Request(
        url=url,
        data=postdata,
        headers=headers
)
result = urllib2.urlopen(req)
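
For instance, a minimal sketch of such a disguised request, assuming a placeholder URL and cookie value (Python 2):

import urllib2

url = 'http://example.com/'  # placeholder URL
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',  # disguise the client
    'Cookie': 'sessionid=xxxxxx',                               # placeholder login cookie
}
req = urllib2.Request(url=url, headers=headers)
result = urllib2.urlopen(req)
print result.read()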


Using urllib

urllib only accepts a URL.
This means you cannot disguise your User-Agent string, for example.
However, urllib provides the urlencode method for building GET query strings, which urllib2 lacks. This is why urllib is often used together with urllib2, as follows:

postdata = urllib.urlencode(postdata)


This encodes the dict-form postdata into a URL query string; a fuller sketch of the two modules working together follows below.
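
A minimal sketch, assuming a placeholder URL and made-up query parameters (Python 2):

import urllib
import urllib2

# urlencode turns a dict into a query string, e.g. 'page=1&keyword=apple'
params = urllib.urlencode({'page': 1, 'keyword': 'apple'})
url = 'http://example.com/search?' + params  # placeholder URL
req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})  # urllib2 supplies the headers
print urllib2.urlopen(req).read()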
urllib in Python 3 (urllib2 has been merged into urllib.request and urllib.error)
Using urllib

1. The simplest request
import urllib.request
response = urllib.request.urlopen('http://python.org/')
html = response.read()

2. Using Request
import urllib.request
req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()


3. Sending data (POST)
import urllib.parse
import urllib.request

url = 'http://example.com/login'  # placeholder: the original URL was lost from the post
values = {
    'act': 'login',
    'login[email]': '',      # fill in your email
    'login[password]': ''    # fill in your password
}
# urlencode returns a str; in Python 3, POST data passed to Request must be bytes
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data)
req.add_header('Referer', 'http://www.python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode("utf8"))

4. Sending data and headers
import urllib.parse
import urllib.request

url = 'http://example.com/login'  # placeholder: the original URL was lost from the post
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {
    'act': 'login',
    'login[email]': '',      # fill in your email
    'login[password]': ''    # fill in your password
}
headers = {'User-Agent': user_agent}
# encode to bytes, as required for POST data in Python 3
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode("utf8"))



5. HTTP errors
import urllib.request
import urllib.error  # needed for urllib.error.HTTPError

req = urllib.request.Request('http://example.com/nonexistent')  # placeholder URL
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf8"))

6. Exception handling, version 1
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request("http://www.example.invalid/")  # placeholder: the original (unreachable) URL was garbled
try:
    response = urlopen(req)
except HTTPError as e:
    print("The server couldn't fulfill the request.")
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("good!")
    print(response.read().decode("utf8"))
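
Note that HTTPError must be caught before URLError: HTTPError is a subclass of URLError, so if the two except clauses were reversed, every HTTP error would be swallowed by the URLError branch.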


7. Exception handling, version 2
from urllib.request import Request, urlopen
from urllib.error import URLError

req = Request("http://www.python.org/")
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print("The server couldn't fulfill the request.")
        print('Error code: ', e.code)
else:
    print("good!")
    print(response.read().decode("utf8"))


8. HTTP authentication
import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://example.com/"  # placeholder: the original URL was lost from the post
password_mgr.add_password(None, top_level_url, 'rekfan', 'xxxxxx')
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
# create an "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)
# use the opener to fetch a URL
a_url = "http://example.com/protected"  # placeholder URL
x = opener.open(a_url)
print(x.read())
# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
a = urllib.request.urlopen(a_url).read().decode('utf8')
print(a)
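
The two calls differ in scope: opener.open() applies the authentication only to that single request, while install_opener() makes the opener the process-wide default, so every later urllib.request.urlopen() call goes through it as well.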



9. Using a proxy
import urllib.request

# the dict keys are URL schemes; 'sock5' in the original post is not a valid key
proxy_support = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
a = urllib.request.urlopen("http://example.com/").read().decode("utf8")  # placeholder URL
print(a)
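
Note that urllib.request has no built-in SOCKS support, so a SOCKS5 proxy (which the original 'sock5' key was presumably aiming at) cannot be configured this way; you would typically reach for a third-party package such as PySocks instead.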


10. Timeouts
import socket
import urllib.request

# timeout in seconds
timeout = 2
socket.setdefaulttimeout(timeout)
# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://example.com/')  # placeholder URL
a = urllib.request.urlopen(req).read()
print(a)
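
urlopen also accepts a per-call timeout argument, which avoids changing the process-wide socket default (the URL is a placeholder):

import urllib.request

# pass the timeout per call instead of via socket.setdefaulttimeout()
response = urllib.request.urlopen('http://example.com/', timeout=2)  # placeholder URL
print(response.read())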