Python + JS: Encrypting POST Form Data with RSA

HTTP transmits data in plaintext. If the user's password is not encrypted, it travels over the network in the clear and can easily be captured by sniffing the traffic, as shown below:
[Figure: packet capture showing the password transmitted in plaintext]
The usual fixes are either to replace HTTP with HTTPS, or to encrypt and decrypt the HTTP form data yourself. This post takes the second route, using the RSA asymmetric algorithm: the front-end JavaScript encrypts with the public key, and the Python back end decrypts with the private key.

RSA in a nutshell: the server generates a public/private key pair and hands the public key to the client; the client encrypts data with the public key, and the server decrypts it with the private key.
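For reference, here is a minimal round trip using Python's rsa package on both sides, purely to illustrate the idea (in this post the encryption half actually happens in JavaScript):

import rsa

(pub_key, priv_key) = rsa.newkeys(256)          # server: generate the key pair
ciphertext = rsa.encrypt(b'secret', pub_key)    # client: encrypt with the public key
plaintext = rsa.decrypt(ciphertext, priv_key)   # server: decrypt with the private key
assert plaintext == b'secret'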

In RSA, the pair (n, e) forms the public key and (n, d) the private key. In this B/S application the data exchanged looks like this:
[Figure: request/response flow between browser and server, carrying n, e and the encrypted password]
Implementation
Generating the key pair
To prevent an intercepted encrypted password from being reused, the back end generates a fresh public/private key pair for every request.

When the user requests the login page, generate the key pair and embed n and e in the HTML page returned to the front end. Note that e and n must be converted to hexadecimal before being passed to the front end:
import rsa
from django.shortcuts import render_to_response

def login(request):
    (pub_key, priv_key) = rsa.newkeys(256)   # generate the public/private key pair
    pubkey_e = hex(pub_key.e)
    pubkey_n = hex(pub_key.n)
    # keep the private key server-side for this session
    # (needs a session serializer that can pickle the key object)
    request.session['privkey'] = priv_key
    return render_to_response('login.html', {'pubkey_e': pubkey_e, 'pubkey_n': pubkey_n})

The front-end JavaScript reads the public key (n and e), uses it to encrypt the password the user typed, and submits the ciphertext for authentication. Note that the hex strings coming from Python need cleaning up on the JS side: drop the leading 0x and the trailing L.
[Figure: front-end JavaScript that encrypts the password with the public key before submitting the form]
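For reference, this is roughly what the hex strings handed to the template look like, and an equivalent clean-up done in Python rather than JS (the values shown are hypothetical):

pubkey_e = hex(pub_key.e)    # e.g. '0x10001'
pubkey_n = hex(pub_key.n)    # e.g. '0x8f2c...91L' on Python 2 (note the trailing 'L')
# The stripping could equally be done server-side before rendering the template:
pubkey_n_clean = hex(pub_key.n)[2:].rstrip('L')   # drop the leading '0x' and any trailing 'L'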
The back end decrypts and verifies with the private key
def dologin(request):
    username = request.POST.get('username', '')
    en_password = request.POST.get('password', '')
    priv_key = request.session.get('privkey')
    request.session['privkey'] = None        # discard the private key after one use
    try:
        # decrypt the hex-encoded ciphertext with the private key to recover the plaintext password
        password = rsa.decrypt(en_password.decode('hex'), priv_key)   # Python 2; use bytes.fromhex(en_password) on Python 3
    except Exception as error:
        password = None
    # password verification omitted here

To verify: in Firebug the POST data now shows the password encrypted, as below:
[Figure: Firebug view of the POST data with the encrypted password field]

Python urllib3 Library in Detail

User Guide

Making requests

First things first, import the urllib3 module:

>>> import urllib3

You’ll need a PoolManager instance to make requests. This object handles all of the details of connection pooling and thread safety so that you don’t have to:

>>> http = urllib3.PoolManager()

To make a request use request():

>>> r = http.request('GET', 'http://httpbin.org/robots.txt')
>>> r.data
b'User-agent: *\nDisallow: /deny\n'

request() returns a HTTPResponse object, the Response content section explains how to handle various responses.

You can use request() to make requests using any HTTP verb:

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={'hello': 'world'})

The Request data section covers sending other kinds of requests data, including JSON, files, and binary data.

Response content

The HTTPResponse object provides status, data, and headers attributes:

>>> r = http.request('GET', 'http://httpbin.org/ip')
>>> r.status
200
>>> r.data
b'{\n  "origin": "104.232.115.37"\n}\n'
>>> r.headers
HTTPHeaderDict({'Content-Length': '33', ...})

JSON content

JSON content can be loaded by decoding and deserializing the data attribute of the response:

>>> import json
>>> r = http.request('GET', 'http://httpbin.org/ip')
>>> json.loads(r.data.decode('utf-8'))
{'origin': '127.0.0.1'}

Binary content

The data attribute of the response is always set to a byte string representing the response content:

>>> r = http.request('GET', 'http://httpbin.org/bytes/8')
>>> r.data
b'\xaa\xa5H?\x95\xe9\x9b\x11'

Note

For larger responses, it’s sometimes better to stream the response.
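A sketch of streaming a response, mirroring the library's advanced-usage docs: preload_content=False defers reading the body, stream() yields it in chunks, and release_conn() returns the connection to the pool:

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/bytes/1024',
...     preload_content=False)
>>> for chunk in r.stream(32):
...     print(len(chunk))
>>> r.release_conn()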

Request data

Headers

You can specify headers as a dictionary in the headers argument in request():

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/headers',
...     headers={
...         'X-Something': 'value'
...     })
>>> json.loads(r.data.decode('utf-8'))['headers']
{'X-Something': 'value', ...}

Query parameters

For GET, HEAD, and DELETE requests, you can simply pass the arguments as a dictionary in the fields argument to request():

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/get',
...     fields={'arg': 'value'})
>>> json.loads(r.data.decode('utf-8'))['args']
{'arg': 'value'}

For POST and PUT requests, you need to manually encode query parameters in the URL:

>>> from urllib.parse import urlencode
>>> encoded_args = urlencode({'arg': 'value'})
>>> url = 'http://httpbin.org/post?' + encoded_args
>>> r = http.request('POST', url)
>>> json.loads(r.data.decode('utf-8'))['args']
{'arg': 'value'}

Form data

For PUT and POST requests, urllib3 will automatically form-encode the dictionary in the fields argument provided to request():

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={'field': 'value'})
>>> json.loads(r.data.decode('utf-8'))['form']
{'field': 'value'}

JSON

You can send a JSON request by specifying the encoded data as the body argument and setting the Content-Type header when calling request():

>>> import json
>>> data = {'attribute': 'value'}
>>> encoded_data = json.dumps(data).encode('utf-8')
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     body=encoded_data,
...     headers={'Content-Type': 'application/json'})
>>> json.loads(r.data.decode('utf-8'))['json']
{'attribute': 'value'}

Files & binary data

For uploading files using multipart/form-data encoding you can use the same approach as Form data and specify the file field as a tuple of (file_name, file_data):

>>> with open('example.txt') as fp:
...     file_data = fp.read()
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={
...         'filefield': ('example.txt', file_data),
...     })
>>> json.loads(r.data.decode('utf-8'))['files']
{'filefield': '...'}

While specifying the filename is not strictly required, it’s recommended in order to match browser behavior. You can also pass a third item in the tuple to specify the file’s MIME type explicitly:

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={
...         'filefield': ('example.txt', file_data, 'text/plain'),
...     })

For sending raw binary data simply specify the body argument. It’s also recommended to set the Content-Type header:

>>> with open('example.jpg', 'rb') as fp:
...     binary_data = fp.read()
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     body=binary_data,
...     headers={'Content-Type': 'image/jpeg'})
>>> json.loads(r.data.decode('utf-8'))['data']
b'...'

Certificate verification

Note

New in version 1.25

HTTPS connections are now verified by default (cert_reqs='CERT_REQUIRED').

While you can disable certificate verification, it is highly recommended to leave it on.

Unless otherwise specified urllib3 will try to load the default system certificate stores. The most reliable cross-platform method is to use the certifi package which provides Mozilla’s root certificate bundle:

pip install certifi

You can also install certifi along with urllib3 by using the secure extra:

pip install urllib3[secure]

Warning

If you’re using Python 2 you may need additional packages. See the section below for more details.

Once you have certificates, you can create a PoolManager that verifies certificates when making requests:

>>> import certifi
>>> import urllib3
>>> http = urllib3.PoolManager(
...     cert_reqs='CERT_REQUIRED',
...     ca_certs=certifi.where())

The PoolManager will automatically handle certificate verification and will raise SSLError if verification fails:

>>> http.request('GET', 'https://google.com')
(No exception)
>>> http.request('GET', 'https://expired.badssl.com')
urllib3.exceptions.SSLError ...

Note

You can use OS-provided certificates if desired. Just specify the full path to the certificate bundle as the ca_certs argument instead of certifi.where(). For example, most Linux systems store the certificates at /etc/ssl/certs/ca-certificates.crt. Other operating systems can be difficult.
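For instance, on a Debian-style Linux system that might look like the following (the exact bundle path is OS-specific):

>>> http = urllib3.PoolManager(
...     cert_reqs='CERT_REQUIRED',
...     ca_certs='/etc/ssl/certs/ca-certificates.crt')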

Certificate verification in Python 2

Older versions of Python 2 are built with an ssl module that lacks SNI support and can lag behind security updates. For these reasons it’s recommended to use pyOpenSSL.

If you install urllib3 with the secure extra, all required packages for certificate verification on Python 2 will be installed:

pip install urllib3[secure]

If you want to install the packages manually, you will need pyOpenSSL, cryptography, idna, and certifi.

Note

If you are not using macOS or Windows, note that cryptography requires additional system packages to compile. See building cryptography on Linux for the list of packages required.

Once installed, you can tell urllib3 to use pyOpenSSL by using urllib3.contrib.pyopenssl:

>>> import urllib3.contrib.pyopenssl
>>> urllib3.contrib.pyopenssl.inject_into_urllib3()

Finally, you can create a PoolManager that verifies certificates when performing requests:

>>> import certifi
>>> import urllib3
>>> http = urllib3.PoolManager(
...     cert_reqs='CERT_REQUIRED',
...     ca_certs=certifi.where())

If you do not wish to use pyOpenSSL, you can simply omit the call to urllib3.contrib.pyopenssl.inject_into_urllib3(). urllib3 will fall back to the standard-library ssl module. You may experience several warnings when doing this.

Warning

If you do not use pyOpenSSL, Python must be compiled with ssl support for certificate verification to work. It is uncommon, but it is possible to compile Python without SSL support. See this Stackoverflow thread for more details.

If you are on Google App Engine, you must explicitly enable SSL support in your app.yaml:

libraries:
- name: ssl
  version: latest

Using timeouts

Timeouts allow you to control how long requests are allowed to run before being aborted. In simple cases, you can specify a timeout as a float to request():

>>> http.request(
...     'GET', 'http://httpbin.org/delay/3', timeout=4.0)
<urllib3.response.HTTPResponse>
>>> http.request(
...     'GET', 'http://httpbin.org/delay/3', timeout=2.5)
MaxRetryError caused by ReadTimeoutError

For more granular control you can use a Timeout instance which lets you specify separate connect and read timeouts:

>>> http.request(
...     'GET',
...     'http://httpbin.org/delay/3',
...     timeout=urllib3.Timeout(connect=1.0))
<urllib3.response.HTTPResponse>
>>> http.request(
...     'GET',
...     'http://httpbin.org/delay/3',
...     timeout=urllib3.Timeout(connect=1.0, read=2.0))
MaxRetryError caused by ReadTimeoutError

If you want all requests to be subject to the same timeout, you can specify the timeout at the PoolManager level:

>>> http = urllib3.PoolManager(timeout=3.0)
>>> http = urllib3.PoolManager(
...     timeout=urllib3.Timeout(connect=1.0, read=2.0))

You can still override this pool-level timeout by specifying timeout to request().

Retrying requests

urllib3 can automatically retry idempotent requests. This same mechanism also handles redirects. You can control the retries using the retries parameter to request(). By default, urllib3 will retry requests 3 times and follow up to 3 redirects.

To change the number of retries just specify an integer:

>>> http.request('GET', 'http://httpbin.org/ip', retries=10)

To disable all retry and redirect logic specify retries=False:

>>> http.request(
...     'GET', 'http://nxdomain.example.com', retries=False)
NewConnectionError
>>> r = http.request(
...     'GET', 'http://httpbin.org/redirect/1', retries=False)
>>> r.status
302

To disable redirects but keep the retrying logic, specify redirect=False:

>>> r = http.request(
...     'GET', 'http://httpbin.org/redirect/1', redirect=False)
>>> r.status
302

For more granular control you can use a Retry instance. This class allows you far greater control of how requests are retried.

For example, to do a total of 3 retries, but limit to only 2 redirects:

>>> http.request(
...     'GET',
...     'http://httpbin.org/redirect/3',
...     retries=urllib3.Retry(3, redirect=2))
MaxRetryError

You can also disable exceptions for too many redirects and just return the 302 response:

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/redirect/3',
...     retries=urllib3.Retry(
...         redirect=2, raise_on_redirect=False))
>>> r.status
302

If you want all requests to be subject to the same retry policy, you can specify the retry at the PoolManager level:

>>> http = urllib3.PoolManager(retries=False)
>>> http = urllib3.PoolManager(
...     retries=urllib3.Retry(5, redirect=2))

You can still override this pool-level retry policy by specifying retries to request().

Errors & Exceptions

urllib3 wraps lower-level exceptions, for example:

>>> try:
...     http.request('GET', 'nx.example.com', retries=False)
... except urllib3.exceptions.NewConnectionError:
...     print('Connection failed.')

See exceptions for the full list of all exceptions.

Logging

If you are using the standard library logging module, urllib3 will emit several logs. In some cases this can be undesirable. You can use the standard logger interface to change the log level for urllib3’s logger:

>>> import logging
>>> logging.getLogger("urllib3").setLevel(logging.WARNING)

Python and Redis: Key Expiration

#!/usr/bin/python
# coding: utf-8

import time
import redis

if __name__ == "__main__":
    try:
        conn = redis.StrictRedis(host='192.168.80.41')
        conn.set('name', '大白兔')
        conn.expire('name', 10)   # set the key to expire after 10 seconds
        for item in range(12):
            value = conn.get('name')
            if value is not None:
                print(value.decode('utf8'))
            else:
                print('the key has been deleted...')
                break
            time.sleep(1)
    except Exception as err:
        print(err)
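Relatedly, redis-py can also inspect or remove an expiration; a small sketch assuming the same conn as above:

print(conn.ttl('name'))    # remaining time to live of the key, in seconds
conn.persist('name')       # remove the expiration so the key no longer expires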