Deploying a Flask site on CentOS 7 with Apache + mod_wsgi

1. Install and configure Apache
Note first that on CentOS, Apache is packaged as httpd.
Install the Apache service:

yum install httpd

Configure the Apache service:

vi /etc/httpd/conf/httpd.conf

Below the #Listen 12.34.56.78:80 line, add a Listen directive with your own IP address (or domain name) and port, following the commented example.
Start or stop the Apache service
Start:

systemctl start httpd.service
Stop:

systemctl stop httpd.service

Access
Visit the server address (the address given in the Listen directive); by default Apache's test page is shown.
/etc/httpd is httpd's root directory.
/var/www/html is the directory served for requests; place a static index.html there and the site can be reached at the server address.

2. Install and configure mod_wsgi
First install httpd-devel:

yum install -y httpd-devel

Install mod_wsgi:

yum install mod_wsgi

After installation, mod_wsgi.so is placed in Apache's modules directory.
It must be loaded in httpd.conf:

vi /etc/httpd/conf/httpd.conf

Add at the end of the file:

LoadModule wsgi_module modules/mod_wsgi.so

3. Deploy the Flask app
Upload
First upload the Flask project directory to /var/www/html.
WinSCP is a convenient tool for this.


Configure app.wsgi
Create an app.wsgi file under /var/www/html/app:

vi /var/www/html/app/app.wsgi

Write the following into the file:

import sys
sys.path.insert(0, '/var/www/html/app')
from app import app as application
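
For this import to work, the project's entry module must expose a Flask instance named app. A minimal sketch of such an app.py (a hypothetical example matching the paths used above, not the author's actual project):

# /var/www/html/app/app.py -- minimal Flask application (illustrative only)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    # Served when the /app route of the site is requested
    return 'Hello from Flask behind Apache + mod_wsgi'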

Configure wsgi.conf
Create a new wsgi.conf file under /etc/httpd/conf.d/:

vi /etc/httpd/conf.d/wsgi.conf

Put the following in the file:

# Path to the virtual environment's site-packages
WSGIDaemonProcess app python-path=/var/www/app/lib/python3.6/site-packages
WSGIProcessGroup app
# The route is /app, so the project is reached at server-address:port/app
# /var/www/html/app/app.wsgi is the WSGI entry file; /var/www/html/app is the project directory containing app.py
WSGIScriptAlias /app /var/www/html/app/app.wsgi

# Apache 2.4 only allows Require inside a directory context
<Directory /var/www/html/app>
    Require all granted
</Directory>

Note that the first line points at a Python virtual environment containing every package the project needs (including Flask). Without it, even if Flask is installed in the server's system Python, the app will fail with a module-not-found error; this was the biggest pitfall I hit.

Using a virtual environment
Here is how to obtain the virtual environment. One option is PyCharm: every time PyCharm creates a project, it lets you choose between a virtual environment and the local interpreter.
You can also create the environment with conda (from Anaconda).
Create a Python 3.6 virtual environment, where myenv is whatever name you choose:

conda create -n myenv python=3.6

Install packages into the virtual environment:

conda install -n myenv package

The created environment lives under anaconda3/envs; you can copy it into the project and point the python-path above at the corresponding location.
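
If you are unsure which directory to use for python-path, a quick check (a minimal sketch, meant to be run with the virtual environment's own interpreter) is to print its site-packages locations:

# Print the site-packages directories of the interpreter running this snippet;
# the printed path is what WSGIDaemonProcess's python-path should point to.
import site
print(site.getsitepackages())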

Access
Visiting server-address:port still shows the index page under /var/www/html/.
Visiting server-address:port/route (here 12.34.56.78:80/app) shows the Flask site.

Logs
View the site's error log with:

vi /var/log/httpd/error_log

Using a domain name
If you have a domain, point a DNS record at the server address and the site can then be reached by domain name.
(The DNS record settings were shown as screenshots in the original post.)

Visiting the domain root serves /var/www/html/index.html.
Visiting the /app route serves the Flask project under /var/www/html/app.
———————
Author: 子耶 (CSDN)
Original post: https://blog.csdn.net/qq_36962569/article/details/80885396
Copyright notice: this is the blogger's original article; please include a link to the source when reposting.

How to save and exit after editing a file on CentOS

Save commands
Press ESC to enter command mode, then:
:w    save the file without exiting vi

:w file  write the changes to a separate file named file, without exiting vi

:w!   force-save the file without exiting vi

:wq   save the file and exit vi

:wq!  force-save the file and exit vi

:q    exit vi without saving

:q!   force-exit vi without saving

:e!   discard all changes and resume editing from the last saved version of the file

Configuring an IP address on CentOS 7

There are two main ways to get an IP address on CentOS 7: 1. obtain an address dynamically; 2. configure a static IP address.

Before configuring the network we need to know the name of the network interface. CentOS 7 no longer ships ifconfig by default; use ip addr instead. In this example the interface is named ens32 and it has no IP address yet.

1. Obtaining an IP address dynamically (this assumes your router has DHCP enabled)

Edit the interface configuration file: vi /etc/sysconfig/network-scripts/ifcfg-ens32    (the last part is the interface name)

Dynamic IP only requires changing two settings:

(1) BOOTPROTO=dhcp

(2) ONBOOT=yes

After the change, restart the network service: systemctl restart network

[root@mini ~]# systemctl restart network
[root@mini ~]#

The dynamic IP configuration is now done. Run ip addr again and you can see that the interface has obtained an IP address and the machine can reach the Internet (e.g. ping baidu.com).

2. Configuring a static IP address

Setting a static IP is much like the dynamic case: edit the interface configuration file vi /etc/sysconfig/network-scripts/ifcfg-ens32    (the last part is the interface name)

(1) BOOTPROTO=static

(2) ONBOOT=yes

(3) Append the IP address, netmask, gateway and DNS servers at the end of the file:

IPADDR=192.168.1.160
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=119.29.29.29
DNS2=8.8.8.8

(4) Restart the network service

[root@mini ~]# systemctl restart network
[root@mini ~]#

You can configure just one DNS server; here I used two free public DNS servers. Now check the IP address and test connectivity:

[root@mini ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:d2:42:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.160/24 brd 192.168.1.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::f86e:939e:ff9b:9aec/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@mini ~]# ping www.baidu.com
PING www.a.shifen.com (163.177.151.109) 56(84) bytes of data.
64 bytes from 163.177.151.109 (163.177.151.109): icmp_seq=1 ttl=55 time=27.5 ms
64 bytes from 163.177.151.109 (163.177.151.109): icmp_seq=2 ttl=55 time=35.2 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1008ms
rtt min/avg/max/mdev = 27.570/31.425/35.281/3.859 ms

Installing Python 3 on CentOS 7 alongside Python 2

1. Check whether Python is already installed

CentOS 7.2 ships with Python 2.7.5 by default, because some tools such as yum depend on it.

Run python -V to check which Python version is installed.

Then run which python to find the location of the Python executable.

The executable is under /usr/bin/; switch to that directory and run ll python* to inspect it.

python points to python2.7.

Since we are installing Python 3 and want python to point to it, we first back up the existing symlink. Before doing that, install the packages needed to download and build Python 3:

yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel libffi-devel gcc make

Do not skip these packages. I once omitted readline-devel and the interactive Python shell could not use the arrow keys.

Then back up the symlink:

mv python python.bak

2. Build and install Python 3

Download the source tarball from the official site, or fetch it directly with the command below (choose whichever version you need):

wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0a4.tar.xz

Extract it:

tar -xvJf  Python-3.8.0a4.tar.xz

Change into the source directory:

cd Python-3.8.0a4

Configure and build:

./configure --prefix=/usr/local/python3

make && make install

After installation there is a python3 directory under /usr/local/.

Now add a symlink in /usr/bin so the new interpreter is on the PATH:

ln -s /usr/local/python3/bin/python3 /usr/bin/python

You can see that the symlink has been created.

To verify the installation, run:

python -V   and check that it prints a Python 3 version

python2 -V  prints the Python 2 version

Because yum requires Python 2, its scripts must be updated. Edit:

vi /usr/bin/yum

and change #! /usr/bin/python to #! /usr/bin/python2

Similarly, the #! /usr/bin/python shebang in /usr/libexec/urlgrabber-ext-down must also be changed to #! /usr/bin/python2 (edit it with vi).

With that, Python 3 is installed and Python 2 still coexists with it:

python -V    shows a Python 3 version

python2 -V   shows a Python 2 version
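
As an extra check from inside the interpreter itself (a minimal sketch that works on both versions), you can print the version and path of whichever interpreter runs it:

# Print the version and executable path of the interpreter running this snippet;
# run it with "python" to confirm 3.x and with "python2" to confirm 2.7.x.
import sys
print(sys.version)
print(sys.executable)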

The Python urllib3 library

User Guide

Making requests

First things first, import the urllib3 module:

>>> import urllib3

You'll need a PoolManager instance to make requests. This object handles all of the details of connection pooling and thread safety so that you don't have to:

>>> http = urllib3.PoolManager()

To make a request use request():

>>> r = http.request('GET', 'http://httpbin.org/robots.txt')
>>> r.data
b'User-agent: *\nDisallow: /deny\n'

request() returns an HTTPResponse object; the Response content section explains how to handle various responses.

You can use request() to make requests using any HTTP verb:

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={'hello': 'world'})

The Request data section covers sending other kinds of requests data, including JSON, files, and binary data.

Response content

The HTTPResponse object provides status, data, and headers attributes:

>>> r = http.request('GET', 'http://httpbin.org/ip')
>>> r.status
200
>>> r.data
b'{\n  "origin": "104.232.115.37"\n}\n'
>>> r.headers
HTTPHeaderDict({'Content-Length': '33', ...})

JSON content

JSON content can be loaded by decoding and deserializing the data attribute of the request:

>>> import json
>>> r = http.request('GET', 'http://httpbin.org/ip')
>>> json.loads(r.data.decode('utf-8'))
{'origin': '127.0.0.1'}

Binary content

The data attribute of the response is always set to a byte string representing the response content:

>>> r = http.request('GET', 'http://httpbin.org/bytes/8')
>>> r.data
b'\xaa\xa5H?\x95\xe9\x9b\x11'

Note

For larger responses, it’s sometimes better to stream the response.

Request data

Headers

You can specify headers as a dictionary in the headers argument in request():

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/headers',
...     headers={
...         'X-Something': 'value'
...     })
>>> json.loads(r.data.decode('utf-8'))['headers']
{'X-Something': 'value', ...}

Query parameters

For GET, HEAD, and DELETE requests, you can simply pass the arguments as a dictionary in the fields argument to request():

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/get',
...     fields={'arg': 'value'})
>>> json.loads(r.data.decode('utf-8'))['args']
{'arg': 'value'}

For POST and PUT requests, you need to manually encode query parameters in the URL:

>>> from urllib.parse import urlencode
>>> encoded_args = urlencode({'arg': 'value'})
>>> url = 'http://httpbin.org/post?' + encoded_args
>>> r = http.request('POST', url)
>>> json.loads(r.data.decode('utf-8'))['args']
{'arg': 'value'}

Form data

For PUT and POST requests, urllib3 will automatically form-encode the dictionary in the fields argument provided to request():

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={'field': 'value'})
>>> json.loads(r.data.decode('utf-8'))['form']
{'field': 'value'}

JSON

You can send a JSON request by specifying the encoded data as the body argument and setting the Content-Type header when calling request():

>>> import json
>>> data = {'attribute': 'value'}
>>> encoded_data = json.dumps(data).encode('utf-8')
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     body=encoded_data,
...     headers={'Content-Type': 'application/json'})
>>> json.loads(r.data.decode('utf-8'))['json']
{'attribute': 'value'}

Files & binary data

For uploading files using multipart/form-data encoding you can use the same approach as Form data and specify the file field as a tuple of (file_name, file_data):

>>> with open('example.txt') as fp:
...     file_data = fp.read()
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={
...         'filefield': ('example.txt', file_data),
...     })
>>> json.loads(r.data.decode('utf-8'))['files']
{'filefield': '...'}

While specifying the filename is not strictly required, it’s recommended in order to match browser behavior. You can also pass a third item in the tuple to specify the file’s MIME type explicitly:

>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     fields={
...         'filefield': ('example.txt', file_data, 'text/plain'),
...     })

For sending raw binary data simply specify the body argument. It's also recommended to set the Content-Type header:

>>> with open('example.jpg', 'rb') as fp:
...     binary_data = fp.read()
>>> r = http.request(
...     'POST',
...     'http://httpbin.org/post',
...     body=binary_data,
...     headers={'Content-Type': 'image/jpeg'})
>>> json.loads(r.data.decode('utf-8'))['data']
b'...'

Certificate verification

Note

New in version 1.25

HTTPS connections are now verified by default (cert_reqs = 'CERT_REQUIRED').

While you can disable certificate verification, it is highly recommended to leave it on.

Unless otherwise specified urllib3 will try to load the default system certificate stores. The most reliable cross-platform method is to use the certifi package which provides Mozilla’s root certificate bundle:

pip install certifi

You can also install certifi along with urllib3 by using the secure extra:

pip install urllib3[secure]

Warning

If you’re using Python 2 you may need additional packages. See the section below for more details.

Once you have certificates, you can create a PoolManager that verifies certificates when making requests:

>>> import certifi
>>> import urllib3
>>> http = urllib3.PoolManager(
...     cert_reqs='CERT_REQUIRED',
...     ca_certs=certifi.where())

The PoolManager will automatically handle certificate verification and will raise SSLError if verification fails:

>>> http.request('GET', 'https://google.com')
(No exception)
>>> http.request('GET', 'https://expired.badssl.com')
urllib3.exceptions.SSLError ...

Note

You can use OS-provided certificates if desired. Just specify the full path to the certificate bundle as the ca_certs argument instead of certifi.where(). For example, most Linux systems store the certificates at /etc/ssl/certs/ca-certificates.crt. Other operating systems can be difficult.

Certificate verification in Python 2

Older versions of Python 2 are built with an ssl module that lacks SNI support and can lag behind security updates. For these reasons it's recommended to use pyOpenSSL.

If you install urllib3 with the secure extra, all required packages for certificate verification on Python 2 will be installed:

pip install urllib3[secure]

If you want to install the packages manually, you will need pyOpenSSL, cryptography, idna, and certifi.

Note

If you are not using macOS or Windows, note that cryptography requires additional system packages to compile. See building cryptography on Linux for the list of packages required.

Once installed, you can tell urllib3 to use pyOpenSSL by using urllib3.contrib.pyopenssl:

>>> import urllib3.contrib.pyopenssl
>>> urllib3.contrib.pyopenssl.inject_into_urllib3()

Finally, you can create a PoolManager that verifies certificates when performing requests:

>>> import certifi
>>> import urllib3
>>> http = urllib3.PoolManager(
...     cert_reqs='CERT_REQUIRED',
...     ca_certs=certifi.where())

If you do not wish to use pyOpenSSL, you can simply omit the call to urllib3.contrib.pyopenssl.inject_into_urllib3(). urllib3 will fall back to the standard-library ssl module. You may experience several warnings when doing this.

Warning

If you do not use pyOpenSSL, Python must be compiled with ssl support for certificate verification to work. It is uncommon, but it is possible to compile Python without SSL support. See this Stackoverflow thread for more details.

If you are on Google App Engine, you must explicitly enable SSL support in your app.yaml:

libraries:
- name: ssl
  version: latest

Using timeouts

Timeouts allow you to control how long requests are allowed to run before being aborted. In simple cases, you can specify a timeout as a float to request():

>>> http.request(
...     'GET', 'http://httpbin.org/delay/3', timeout=4.0)
<urllib3.response.HTTPResponse>
>>> http.request(
...     'GET', 'http://httpbin.org/delay/3', timeout=2.5)
MaxRetryError caused by ReadTimeoutError

For more granular control you can use a Timeout instance which lets you specify separate connect and read timeouts:

>>> http.request(
...     'GET',
...     'http://httpbin.org/delay/3',
...     timeout=urllib3.Timeout(connect=1.0))
<urllib3.response.HTTPResponse>
>>> http.request(
...     'GET',
...     'http://httpbin.org/delay/3',
...     timeout=urllib3.Timeout(connect=1.0, read=2.0))
MaxRetryError caused by ReadTimeoutError

If you want all requests to be subject to the same timeout, you can specify the timeout at the PoolManager level:

>>> http = urllib3.PoolManager(timeout=3.0)
>>> http = urllib3.PoolManager(
...     timeout=urllib3.Timeout(connect=1.0, read=2.0))

You still override this pool-level timeout by specifying timeout to request().

Retrying requests

urllib3 can automatically retry idempotent requests. This same mechanism also handles redirects. You can control the retries using the retries parameter to request(). By default, urllib3 will retry requests 3 times and follow up to 3 redirects.

To change the number of retries just specify an integer:

>>> http.request('GET', 'http://httpbin.org/ip', retries=10)

To disable all retry and redirect logic specify retries=False:

>>> http.request(
...     'GET', 'http://nxdomain.example.com', retries=False)
NewConnectionError
>>> r = http.request(
...     'GET', 'http://httpbin.org/redirect/1', retries=False)
>>> r.status
302

To disable redirects but keep the retrying logic, specify redirect=False:

>>> r = http.request(
...     'GET', 'http://httpbin.org/redirect/1', redirect=False)
>>> r.status
302

For more granular control you can use a Retry instance. This class allows you far greater control of how requests are retried.

For example, to do a total of 3 retries, but limit to only 2 redirects:

>>> http.request(
...     'GET',
...     'http://httpbin.org/redirect/3',
...     retries=urllib3.Retry(3, redirect=2))
MaxRetryError

You can also disable exceptions for too many redirects and just return the 302 response:

>>> r = http.request(
...     'GET',
...     'http://httpbin.org/redirect/3',
...     retries=urllib3.Retry(
...         redirect=2, raise_on_redirect=False))
>>> r.status
302

If you want all requests to be subject to the same retry policy, you can specify the retry at the PoolManager level:

>>> http = urllib3.PoolManager(retries=False)
>>> http = urllib3.PoolManager(
...     retries=urllib3.Retry(5, redirect=2))

You still override this pool-level retry policy by specifying retries to request().

Errors & Exceptions

urllib3 wraps lower-level exceptions, for example:

>>> try:
...     http.request('GET', 'nx.example.com', retries=False)
... except urllib3.exceptions.NewConnectionError:
...     print('Connection failed.')

See exceptions for the full list of all exceptions.

Logging

If you are using the standard library logging module urllib3 will emit several logs. In some cases this can be undesirable. You can use the standard logger interface to change the log level for urllib3's logger:

>>> import logging
>>> logging.getLogger("urllib3").setLevel(logging.WARNING)

Using cURL in PHP for GET and POST requests

cURL is a tool for transferring files and data using URL syntax, and it supports many protocols such as HTTP, FTP and TELNET. Best of all, PHP supports the cURL library too. With PHP's cURL functions you can fetch web pages simply and effectively: run a script, analyze the pages you fetched, and you can get the data you want programmatically. Whether you want to extract part of the data from a link, fetch an XML file and import it into a database, or simply grab the contents of a web page, cURL is a powerful PHP library.

Basic steps for building a cURL request in PHP

1. Initialize

curl_init()

2. Set options

curl_setopt(). There is a long list of cURL options you can set; they control every detail of the URL request.

3. Execute and get the result

curl_exec()

4. Release the handle

curl_close()

Implementing GET and POST with cURL

1. GET request

// Initialize the cURL session
$curl = curl_init();
// Set the URL to fetch
curl_setopt($curl, CURLOPT_URL, 'http://www.baidu.com');
// Output the response headers as part of the returned data
curl_setopt($curl, CURLOPT_HEADER, 1);
// Return the response as a string instead of printing it directly
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
// Execute the request
$data = curl_exec($curl);
// Close the cURL session
curl_close($curl);
// Display the returned data
print_r($data);

2. POST request

// Initialize the cURL session
$curl = curl_init();
// Set the URL to fetch
curl_setopt($curl, CURLOPT_URL, 'http://www.baidu.com');
// Output the response headers as part of the returned data
curl_setopt($curl, CURLOPT_HEADER, 1);
// Return the response as a string instead of printing it directly
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
// Submit the request as a POST
curl_setopt($curl, CURLOPT_POST, 1);
// Set the POST data
$post_data = array(
    "username" => "coder",
    "password" => "12345"
);
curl_setopt($curl, CURLOPT_POSTFIELDS, $post_data);
// Execute the request
$data = curl_exec($curl);
// Close the cURL session
curl_close($curl);
// Display the returned data
print_r($data);

3. If the returned data is JSON, use the json_decode function to parse it into an array.

$output_array = json_decode($data, true); // with true as the second argument the result is an associative array; if omitted, you get an object

If you decode with json_decode($data) alone, you will get data of type object.

A wrapper function

// Parameter 1: URL to request; parameter 2: POST data (omit for GET); parameter 3: cookies to send; parameter 4: whether to return cookies
function curl_request($url, $post='', $cookie='', $returnCookie=0){
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)');
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($curl, CURLOPT_AUTOREFERER, 1);
    curl_setopt($curl, CURLOPT_REFERER, "http://XXX");
    if($post) {
        curl_setopt($curl, CURLOPT_POST, 1);
        curl_setopt($curl, CURLOPT_POSTFIELDS, http_build_query($post));
    }
    if($cookie) {
        curl_setopt($curl, CURLOPT_COOKIE, $cookie);
    }
    curl_setopt($curl, CURLOPT_HEADER, $returnCookie);
    curl_setopt($curl, CURLOPT_TIMEOUT, 10);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($curl);
    if (curl_errno($curl)) {
        return curl_error($curl);
    }
    curl_close($curl);
    if($returnCookie){
        list($header, $body) = explode("\r\n\r\n", $data, 2);
        preg_match_all("/Set\-Cookie:([^;]*);/", $header, $matches);
        $info['cookie']  = substr($matches[1][0], 1);
        $info['content'] = $body;
        return $info;
    }else{
        return $data;
    }
}

These two functions are not difficult, but they are well worth learning, because you are bound to use them whenever you build or consume an API.

Installing Redis 5.0 on Linux

Installation on Linux

Download page: http://redis.io/download; get the latest stable release.

This tutorial uses version 5.0. Download and build it:

$ wget http://download.redis.io/releases/redis-5.0.4.tar.gz
$ tar xzf redis-5.0.4.tar.gz
$ cd redis-5.0.4
$ make

After make finishes, the redis-5.0.4 directory contains the compiled Redis server program redis-server and the test client program redis-cli; both are located in the src subdirectory:

Start the Redis server:

$ cd src
$ ./redis-server

Note that starting Redis this way uses the default configuration. You can also point Redis at a specific configuration file by passing it as a startup argument:

$ cd src
$ ./redis-server ../redis.conf

redis.conf is the default configuration file; you can supply your own configuration file as needed.

Once the Redis server process is running, you can interact with it using the redis-cli test client. For example:

$ cd src
$ ./redis-cli
redis> set foo bar
OK
redis> get foo
"bar"

Installation on Ubuntu

On Ubuntu, Redis can be installed with the following commands:

$sudo apt-get update
$sudo apt-get install redis-server

Start Redis

$ redis-server

Check whether Redis is running:

$ redis-cli

The command above opens the following prompt:

redis 127.0.0.1:6379>

127.0.0.1 is the local IP and 6379 is the Redis service port. Now enter the PING command:

redis 127.0.0.1:6379> ping
PONG

This confirms that Redis has been installed successfully.

Working with Redis key expiration from Python

#!/usr/bin/python
# coding: utf-8

import time
import redis

if __name__ == "__main__":
    try:
        conn = redis.StrictRedis(host='192.168.80.41')
        conn.set('name', '大白兔')
        # Set the key to expire after 10 seconds
        conn.expire('name', 10)
        for item in range(12):
            value = conn.get('name')
            if value is not None:
                print(value.decode('utf8'))
            else:
                print('the key has been deleted...')
                break
            time.sleep(1)
    except Exception as err:
        print(err)

EXPIRE key seconds

Set a timeout on key. After the timeout has expired, the key will automatically be deleted. A key with an associated timeout is often said to be volatile in Redis terminology.

The timeout will only be cleared by commands that delete or overwrite the contents of the key, including DEL, SET, GETSET and all the *STORE commands. This means that all the operations that conceptually alter the value stored at the key without replacing it with a new one will leave the timeout untouched. For instance, incrementing the value of a key with INCR, pushing a new value into a list with LPUSH, or altering the field value of a hash with HSET are all operations that will leave the timeout untouched.
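
This behavior is easy to check from Python with the redis-py client (a minimal sketch; the connection parameters are placeholders):

# Demonstrates that INCR keeps an existing TTL while SET clears it.
import redis

r = redis.StrictRedis(host='localhost', port=6379)
r.set('counter', 1)
r.expire('counter', 100)   # the key now has a 100-second timeout
r.incr('counter')          # alters the value in place...
print(r.ttl('counter'))    # ...so the TTL is still counting down (about 100 seconds left)
r.set('counter', 5)        # overwriting the value entirely...
print(r.ttl('counter'))    # ...clears the timeout (TTL reports no expire)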

The timeout can also be cleared, turning the key back into a persistent key, using the PERSIST command.

If a key is renamed with RENAME, the associated time to live is transferred to the new key name.

If a key is overwritten by RENAME, like in the case of an existing key Key_A that is overwritten by a call like RENAME Key_B Key_A, it does not matter if the original Key_A had a timeout associated or not: the new key Key_A will inherit all the characteristics of Key_B.

Note that calling EXPIRE/PEXPIRE with a non-positive timeout or EXPIREAT/PEXPIREAT with a time in the past will result in the key being deleted rather than expired (accordingly, the emitted key event will be del, not expired).

Refreshing expires

It is possible to call EXPIRE using as argument a key that already has an existing expire set. In this case the time to live of a key is updated to the new value. There are many useful applications for this, an example is documented in the Navigation session pattern section below.

Differences in Redis prior 2.1.3

In Redis versions prior 2.1.3 altering a key with an expire set using a command altering its value had the effect of removing the key entirely. This semantics was needed because of limitations in the replication layer that are now fixed.

EXPIRE would return 0 and not alter the timeout for a key with a timeout set.

Return value

Integer reply, specifically:

  • 1 if the timeout was set.
  • 0 if key does not exist.

Examples

redis> SET mykey "Hello"
"OK"
redis> EXPIRE mykey 10
(integer) 1
redis> TTL mykey
(integer) 10
redis> SET mykey "Hello World"
"OK"
redis> TTL mykey
(integer) -1
redis>

Pattern: Navigation session

Imagine you have a web service and you are interested in the latest N pages recently visited by your users, such that each adjacent page view was not performed more than 60 seconds after the previous. Conceptually you may consider this set of page views as a Navigation session of your user, which may contain interesting information about what kind of products he or she is currently looking for, so that you can recommend related products.

You can easily model this pattern in Redis using the following strategy: every time the user does a page view you call the following commands:


MULTI
RPUSH pagewviews.user:<userid> http://.....
EXPIRE pagewviews.user:<userid> 60
EXEC

If the user will be idle more than 60 seconds, the key will be deleted and only subsequent page views that have less than 60 seconds of difference will be recorded.

This pattern is easily modified to use counters using INCR instead of lists using RPUSH.
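
In Python the same pattern can be written with redis-py's pipeline, which wraps the commands in MULTI/EXEC (a minimal sketch; the key name and URL handling are illustrative, not part of the Redis documentation):

import redis

r = redis.StrictRedis(host='localhost', port=6379)

def record_page_view(user_id, url):
    """Append a page view to the user's navigation session and refresh its 60 s TTL."""
    key = 'pageviews.user:%s' % user_id
    pipe = r.pipeline()          # transaction=True by default, so this runs as MULTI ... EXEC
    pipe.rpush(key, url)
    pipe.expire(key, 60)
    pipe.execute()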

Appendix: Redis expires

Keys with an expire

Normally Redis keys are created without an associated time to live. The key will simply live forever, unless it is removed by the user in an explicit way, for instance using the DEL command.

The EXPIRE family of commands is able to associate an expire to a given key, at the cost of some additional memory used by the key. When a key has an expire set, Redis will make sure to remove the key when the specified amount of time elapsed.

The key time to live can be updated or entirely removed using the EXPIRE and PERSIST commands (or other strictly related commands).

Expire accuracy

In Redis 2.4 the expire might not be pin-point accurate, and it could be between zero to one seconds out.

Since Redis 2.6 the expire error is from 0 to 1 milliseconds.

Expires and persistence

Keys expiring information is stored as absolute Unix timestamps (in milliseconds in case of Redis version 2.6 or greater). This means that the time is flowing even when the Redis instance is not active.

For expires to work well, the computer time must be taken stable. If you move an RDB file from two computers with a big desync in their clocks, funny things may happen (like all the keys loaded to be expired at loading time).

Even running instances will always check the computer clock, so for instance if you set a key with a time to live of 1000 seconds, and then set your computer time 2000 seconds in the future, the key will be expired immediately, instead of lasting for 1000 seconds.

How Redis expires keys

Redis keys are expired in two ways: a passive way, and an active way.

A key is passively expired simply when some client tries to access it, and the key is found to be timed out.

Of course this is not enough as there are expired keys that will never be accessed again. These keys should be expired anyway, so periodically Redis tests a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace.

Specifically this is what Redis does 10 times per second:

  1. Test 20 random keys from the set of keys with an associated expire.
  2. Delete all the keys found expired.
  3. If more than 25% of keys were expired, start again from step 1.

This is a trivial probabilistic algorithm: basically, the assumption is that our sample is representative of the whole key space, and we continue to expire keys until the percentage of keys that are likely to be expired drops below 25%.

This means that at any given moment the maximum amount of keys already expired that are using memory is at max equal to max amount of write operations per second divided by 4.
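
To make the sampling loop concrete, here is a small Python simulation of the idea (only an illustration of the sampling logic described above, not Redis's actual implementation):

import random
import time

def active_expire_cycle(expires, sample_size=20, threshold=0.25):
    """Simulate active expiration over a dict mapping key -> expiry timestamp."""
    while expires:
        keys = list(expires)
        sample = random.sample(keys, min(sample_size, len(keys)))  # step 1: test random keys with an expire
        now = time.time()
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]                                         # step 2: delete the expired keys found
        if len(expired) / len(sample) <= threshold:
            break                                                  # step 3: repeat only if more than 25% were expired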

How expires are handled in the replication link and AOF file

In order to obtain a correct behavior without sacrificing consistency, when a key expires, a DEL operation is synthesized in both the AOF file and sent to all the attached replica nodes. This way the expiration process is centralized in the master instance, and there is no chance of consistency errors.

However while the replicas connected to a master will not expire keys independently (but will wait for the DEL coming from the master), they’ll still take the full state of the expires existing in the dataset, so when a replica is elected to master it will be able to expire the keys independently, fully acting as a master.

Installing mod_wsgi on Ubuntu

mod_wsgi
1. pip install mod_wsgi
This fails with:
Collecting mod_wsgi
  Downloading mod_wsgi-4.5.18.tar.gz (2.5MB)
    100% |████████████████████████████████| 2.5MB 21kB/s 
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-lemsW_/mod-wsgi/setup.py", line 164, in <module>
        'missing Apache httpd server packages.' % APXS)
    RuntimeError: The 'apxs' command appears not to be installed or is not executable. Please check the list of prerequisites in the documentation for this package and install any missing Apache httpd server packages.
    
    ----------------------------------------
Command “python setup.py egg_info” failed with error code 1 in /tmp/pip-build-lemsW_/mod-wsgi/
Fix it by installing the Apache development headers:
sudo apt-get install apache2-dev
Then install mod_wsgi again:
well@well:/usr/local/lib$ sudo pip install mod_wsgi
The directory '/home/well/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/well/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting mod_wsgi
  Downloading mod_wsgi-4.5.18.tar.gz (2.5MB)
    100% |████████████████████████████████| 2.5MB 47kB/s 
Installing collected packages: mod-wsgi
  Running setup.py install for mod-wsgi … done
Successfully installed mod-wsgi-4.5.18
The installation succeeded.