MySQL:
Copy the sample configuration to my.cnf:
[root@node1 system]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
cp: overwrite '/etc/my.cnf'? y
[root@node1 system]#
Then add the following settings under the [mysqld] section:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Start the database and enable it at boot:
[root@node1 system]# systemctl enable mariadb.service
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@node1 system]# systemctl start mariadb.service
[root@node1 system]#
Initialize the database:
Run mysql_secure_installation and follow the prompts.
Create the databases:
Keystone database
[root@linux-node1 ~]# mysql -u root -p -e "CREATE DATABASE keystone;"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';"
Glance database
[root@linux-node1 ~]# mysql -u root -p -e "CREATE DATABASE glance;"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';"
Nova database
[root@linux-node1 ~]# mysql -u root -p -e "CREATE DATABASE nova;"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';"
Neutron database
[root@linux-node1 ~]# mysql -u root -p -e "CREATE DATABASE neutron;"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';"
Cinder database
[root@linux-node1 ~]# mysql -u root -p -e "CREATE DATABASE cinder;"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';"
[root@linux-node1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';"
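To verify that the grants work, you can connect as each service user (a quick check; the passwords match the GRANT statements above):
# Should list the service's database without an access error
mysql -u keystone -pkeystone -h 192.168.0.11 -e "SHOW DATABASES;"
mysql -u glance -pglance -h 192.168.0.11 -e "SHOW DATABASES;"
# ...and likewise for nova, neutron, and cinder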
RabbitMQ:
Enable the service at boot and start it:
[root@node1 system]# systemctl enable rabbitmq-server.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service to /usr/lib/systemd/system/rabbitmq-server.service.
[root@node1 system]# systemctl start rabbitmq-server.service
Create a user and grant permissions:
[root@node1 system]# rabbitmqctl add_user openstack openstack
Creating user "openstack" ...
[root@node1 system]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
[root@node1 system]#
Enable the management plugin:
rabbitmq-plugins enable rabbitmq_management
After enabling the management plugin and restarting RabbitMQ, a web interface is available on port 15672. The default web login is user guest with password guest. To let the openstack user created above use it, log in as guest and change the openstack user's tag to administrator. http://192.168.0.11:15672/
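If you prefer the command line over the web UI, the same change can be made with rabbitmqctl (administrator is the standard RabbitMQ management tag):
# Give the openstack user access to the management UI
rabbitmqctl set_user_tags openstack administrator
# Confirm the user and its tags
rabbitmqctl list_users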
2. Keystone identity service:
Users and authentication: user permissions and user action tracking.
Service catalog: provides a catalog of all services and their API endpoints. Every service must be registered in Keystone before other components can call it.
user: a user
tenant: a tenant / project
token: a token (issued once a username and password are verified)
role: a role
service: a service
endpoint: an endpoint
1. Install:
##Keystone
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
2. Configure /etc/keystone/keystone.conf
Set admin_token and the MySQL database connection:
admin_token = 863d35676a5632e846d9
connection = mysql://keystone:keystone@192.168.0.11/keystone  # this username and password are used in the next step when the table structure is created.
3. Create the table structure in the keystone database
Create the tables in the keystone database by running the following sync command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Why switch to the keystone user to run this? The keystone service runs as the keystone user and writes its log to /var/log/keystone/keystone.log; if the sync were run as root, the keystone user would later lack write permission on that log.
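To confirm that the sync actually created the tables, query the keystone database with the credentials from the connection string above (a quick check):
mysql -h 192.168.0.11 -u keystone -pkeystone keystone -e "SHOW TABLES;"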
4. In /etc/keystone/keystone.conf, configure the memcache options so that tokens are written to memcached. Earlier releases wrote tokens to the database, which made the token table huge; an expired token is useless, so the table also had to be cleaned out regularly. The token backend is now configurable and can be either the database or memcached.
Under the [memcache] section, set the memcached address:
servers = 192.168.0.11:11211
Under the [token] section, set:
provider = uuid
driver = memcache
Under the [revoke] section, set the revocation driver:
driver = memcache
The overall configuration looks like this:
[root@node1 config]# grep '^[a-Z]' /etc/keystone/keystone.conf
admin_token = 863d35676a5632e846d9
connection = mysql://keystone:keystone@192.168.0.11/keystone
servers = 192.168.0.11:11211
driver = sql
provider = uuid
driver = memcache
[root@node1 config]#
The config file also has a verbose = true option; uncommenting it (removing the #) produces more detailed output, which helps with troubleshooting.
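To confirm that memcached is reachable before issuing any tokens, you can query its stats (a quick check; memcached-tool ships with the memcached package, and the address is the one set in [memcache] above):
# the get/set counters start climbing once Keystone begins writing tokens
memcached-tool 192.168.0.11:11211 stats | grep -E 'cmd_get|cmd_set|curr_items'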
5. Create the Keystone Apache config file (for HTTP/web access)
[root@linux-node1 ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
6. Configure the Apache ServerName
[root@linux-node1 ~]# vim /etc/httpd/conf/httpd.conf
ServerName 192.168.0.11:80
If ServerName is not configured, you will run into all sorts of problems.
7. Start the services (httpd and memcached):
systemctl enable memcached
systemctl start memcached
systemctl enable httpd
systemctl start httpd
Keystone is served by httpd on two ports: 5000 and 35357.
8. Set environment variables
export OS_TOKEN=863d35676a5632e846d9
export OS_URL=http://192.168.0.11:35357/v3
export OS_IDENTITY_API_VERSION=3
These environment variables make steps 9 and 10 below possible: they are used while creating users, roles, and projects. Once those exist, we can authenticate with a username and password instead.
9. Create users:
Create the admin project, role, and user, and add the user to the project with that role:
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
Create the demo project, role, and user, and add the user to the project with that role:
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password=demo demo
openstack role create user
openstack role add --project demo --user demo user
Create a service project:
openstack project create --domain default --description "Service Project" service
After creating them, verify:
[root@node1 config]# openstack user list
+———————————-+——-+
| ID | Name |
+———————————-+——-+
| 79e52fa739b34ae0aa18b3c8e83ea938 | demo |
| fc7b1f01312f486e88f8e87786898d66 | admin |
+———————————-+——-+
[root@node1 config]# openstack role list
+———————————-+——-+
| ID | Name |
+———————————-+——-+
| 7dbd2712ffd941339d86d360923b7fed | user |
| f2763d46aeb24260b90d6f41cfcf2d75 | admin |
+———————————-+——-+
[root@node1 config]# openstack project list
+———————————-+———+
| ID | Name |
+———————————-+———+
| 03741e9d423a444b8c661cb6b58bcfa3 | admin |
| 3c7e2e7327c54ee687644fac035c1fef | service |
| 5b44a32ce16c4e83a2299b67591359c0 | demo |
+———————————-+———+
[root@node1 config]#
10. Service registration
The Keystone service itself also has to be registered.
Create the identity service:
openstack service create --name keystone --description "OpenStack Identity" identity
Then register the three endpoint types: public, internal, and admin.
openstack endpoint create --region RegionOne identity public http://192.168.0.11:5000/v2.0
openstack endpoint create --region RegionOne identity internal http://192.168.0.11:5000/v2.0
openstack endpoint create --region RegionOne identity admin http://192.168.0.11:35357/v2.0
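You can list what was just registered before moving on (a quick sanity check):
openstack service list
openstack endpoint list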
After registering, unset the environment variables: from now on we can authenticate with a username and password and no longer need admin_token.
Unset the variables:
unset OS_TOKEN
unset OS_URL
Then test the registration:
openstack --os-auth-url http://192.168.0.11:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password \
token issue
Output like the following indicates success:
[root@node1 config]# openstack --os-auth-url http://192.168.0.11:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password:
+————+———————————-+
| Field | Value |
+————+———————————-+
| expires | 2016-10-07T09:41:39.595596Z |
| id | 903e14a13ad44a549c8699c04b3f990f |
| project_id | 03741e9d423a444b8c661cb6b58bcfa3 |
| user_id | fc7b1f01312f486e88f8e87786898d66 |
+————+———————————-+
[root@node1 config]#
11. Create Keystone credential files so commands are easier to run
Make them executable; afterwards a simple source is enough to load them.
admin environment variables:
[root@linux-node1 ~]# vim admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.0.11:35357/v3
export OS_IDENTITY_API_VERSION=3
demo environment variables:
[root@linux-node1 ~]# vim demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.0.11:5000/v3
export OS_IDENTITY_API_VERSION=3
With these in place, getting a token no longer needs the long command above; a plain openstack token issue is enough.
[root@node1 config]# chmod +x admin-openrc.sh demo-openrc.sh
[root@node1 config]# source admin-openrc.sh demo-openrc.sh
[root@node1 config]# openstack token issue
+————+———————————-+
| Field | Value |
+————+———————————-+
| expires | 2016-10-07T09:47:32.413436Z |
| id | dd844b3b4f594c7491dc63582a1e3105 |
| project_id | 03741e9d423a444b8c661cb6b58bcfa3 |
| user_id | fc7b1f01312f486e88f8e87786898d66 |
+————+———————————-+
[root@node1 config]#
3. Glance image service
Glance consists of three main parts: glance-api, glance-registry, and the image store.
glance-api: accepts requests to create, delete, and read images.
glance-registry: the image registry service.
1. Install:
yum install -y openstack-glance python-glance python-glanceclient
2. Configure Glance
vi /etc/glance/glance-api.conf and set the database connection:
connection=mysql://glance:glance@192.168.0.11/glance
vi /etc/glance/glance-registry.conf and add the same database connection:
connection=mysql://glance:glance@192.168.0.11/glance
This connection is needed so that the table structure can be created in the database.
Create the table structure:
su -s /bin/sh -c "glance-manage db_sync" glance
[root@node1 config]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"
[root@node1 config]#
The warning above can be ignored.
3. Create a user:
openstack user create --domain default --password=glance glance
Add it to a project with a role:
openstack role add --project service --user glance admin
Creating the user failed at first with the following error:
[root@node1 ~]# openstack user create --domain default --password=glance glance
Missing parameter(s):
Set a username with --os-username, OS_USERNAME, or auth.username
Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name
[root@node1 ~]#
Fix: the environment variables were not loaded; sourcing the credentials file is enough. For example:
[root@node1 ~]# source ./admin-openrc.sh
[root@node1 ~]# openstack user create --domain default --password=glance glance
+———–+———————————-+
| Field | Value |
+———–+———————————-+
| domain_id | default |
| enabled | True |
| id | 167dd4c1f356448db811760a7b4121ab |
| name | glance |
+———–+———————————-+
[root@node1 ~]#
4. Configure the Keystone parameters in glance-api.conf
In /etc/glance/glance-api.conf, find the [keystone_authtoken] section and add the following:
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
Also set the deployment flavor (in the [paste_deploy] section):
flavor=keystone
There is also a notification setting:
notification_driver = noop
And the image store settings under the [glance_store] section:
default_store=file
filesystem_store_datadir=/var/lib/glance/images/  # where images are stored
Finally, turn on verbose output:
verbose=True
5. Configure Keystone in /etc/glance/glance-registry.conf the same way as in the api file; only the following two pieces are needed.
In the [keystone_authtoken] section, add:
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
And set the deployment flavor (in the [paste_deploy] section):
flavor=keystone
6. The final configuration looks like this:
[root@node1 ~]# grep '^[a-z]' /etc/glance/glance-api.conf
verbose=True
notification_driver = noop
connection=mysql://glance:glance@192.168.0.11/glance
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
flavor=keystone
[root@node1 ~]#
[root@node1 ~]# grep '^[a-z]' /etc/glance/glance-registry.conf
connection=mysql://glance:glance@192.168.0.11/glance
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = glance
flavor=keystone
[root@node1 ~]#
7. Start Glance
systemctl enable openstack-glance-api
systemctl enable openstack-glance-registry
systemctl start openstack-glance-api
systemctl start openstack-glance-registry
glance-registry listens on port 9191
glance-api listens on port 9292
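A quick way to confirm that both daemons are listening (assuming the iproute ss tool is installed):
ss -tnlp | grep -E ':9191|:9292'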
8. Register the service in Keystone. Only after registration can other services discover and call Glance.
1) Create the service:
source admin-openrc.sh
openstack service create --name glance --description "OpenStack Image service" image
2) Create the three endpoints:
openstack endpoint create --region RegionOne image public http://192.168.0.11:9292
openstack endpoint create --region RegionOne image internal http://192.168.0.11:9292
openstack endpoint create --region RegionOne image admin http://192.168.0.11:9292
9. Verify that it works:
Add an environment variable:
echo "export OS_IMAGE_API_VERSION=2" \
| tee -a admin-openrc.sh demo-openrc.sh
Then verify:
[root@node1 ~]# glance image-list
+—-+——+
| ID | Name |
+—-+——+
+—-+——+
[root@node1 ~]#
The output above means Glance is working; the list is empty because no image has been uploaded yet.
Download a small image to test with:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Then upload it:
glance image-create --name "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--visibility public --progress
Query again and the image now shows up:
[root@node1 ~]# glance image-list
+————————————–+——–+
| ID | Name |
+————————————–+——–+
| 91ea4e90-4b27-47bf-8483-dd4de7947193 | cirros |
+————————————–+——–+
[root@node1 ~]#
You can also check the image storage directory:
[root@node1 ~]# ls /var/lib/glance/images/
91ea4e90-4b27-47bf-8483-dd4de7947193
[root@node1 ~]# file /var/lib/glance/images/91ea4e90-4b27-47bf-8483-dd4de7947193
/var/lib/glance/images/91ea4e90-4b27-47bf-8483-dd4de7947193: QEMU QCOW Image (v2), 41126400 bytes
[root@node1 ~]#
4. Nova compute service (the earliest project, alongside Swift)
Its essential components:
API: receives and responds to external requests, supporting both the OpenStack API and the EC2 API. The nova-api component implements the RESTful API and is the only entry point for external access to Nova.
It receives external requests and forwards them to the other service components through the message queue; because it is also EC2 API compatible, EC2 management tools can be used for day-to-day Nova administration.
Cert: handles identity/certificate management.
Scheduler: schedules where instances run.
The Nova scheduler's job in OpenStack is to decide which host (compute node) a virtual machine is created on.
Scheduling an instance onto a physical node happens in two steps:
Filtering (Filter)
Weighting (Weight)
Conductor: middleware through which compute nodes access the database.
Consoleauth: authorizes console access.
Novncproxy: the VNC proxy.
1. Install:
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
2. Configure
Things that need configuring: the database, Keystone, RabbitMQ, and the networking options.
The configuration file is /etc/nova/nova.conf.
1) Configure the database:
vi /etc/nova/nova.conf
Under the [database] section set:
connection=mysql://nova:nova@192.168.0.11/nova
2) Sync the database (create the table structure in the nova database)
su -s /bin/sh -c "nova-manage db sync" nova
Then log in to the database and check that the tables exist.
3) Configure RabbitMQ
rpc_backend=rabbit is already the default value; just uncomment it.
Under the [oslo_messaging_rabbit] section configure:
rabbit_host=192.168.0.11
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
4) Configure Keystone
A user has to be created first:
source admin-openrc.sh
openstack user create --domain default --password=nova nova
Add it to the service project with the admin role:
openstack role add --project service --user nova admin   # this command produces no output.
Under the [keystone_authtoken] section configure:
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
Under the [DEFAULT] section configure:
auth_strategy=keystone   # just remove the leading #.
Configure the networking options; look for the following key (around line 838):
network_api_class=nova.network.neutronv2.api.API
Uncomment the security group setting and set it to:
security_group_api=neutron
Set the network interface driver class to:
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
And set the firewall driver to:
firewall_driver = nova.virt.firewall.NoopFirewallDriver
5) VNC configuration
Configure as follows:
# IP address of this host (string value)
my_ip=192.168.0.11
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
6) Configure Glance
Under the [glance] section set:
host=$my_ip
Configure the lock path:
lock_path=/var/lib/nova/tmp   # just uncomment it and keep the default value
One more option to set:
enabled_apis=osapi_compute,metadata
The final configuration looks like this:
[root@node1 ~]# grep '^[a-z]' /etc/nova/nova.conf
my_ip=192.168.0.11
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend=rabbit
connection=mysql://nova:nova@192.168.0.11/nova
host=$my_ip
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.0.11
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
[root@node1 ~]#
3. Start the services:
Enable them at boot:
systemctl enable openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Start them:
systemctl start openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
4. The service must be registered before it can be used
Register it in Keystone with the following commands:
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.0.11:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://192.168.0.11:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://192.168.0.11:8774/v2/%\(tenant_id\)s
If registration fails, source the credentials file again.
5. Verify
If the following four services show up, everything is working:
[root@node1 ~]# openstack host list
+———–+————-+———-+
| Host Name | Service | Zone |
+———–+————-+———-+
| node1 | consoleauth | internal |
| node1 | scheduler | internal |
| node1 | cert | internal |
| node1 | conductor | internal |
+———–+————-+———-+
[root@node1 ~]#
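An equivalent check that also shows each service's state and last update time (using the nova client installed above):
nova service-list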
5. nova-compute (compute node)
nova-compute usually runs on the compute nodes; it receives requests over the message queue and manages the VM lifecycle.
nova-compute manages KVM through libvirt, Xen through the XenAPI, and so on.
1. Install (on the node2 host)
yum install openstack-nova-compute sysfsutils -y
2. Configure
Copy the /etc/nova/nova.conf already configured on node1 to the same path on node2 with scp, then modify it.
Changes to make: my_ip must be updated
my_ip=192.168.0.12
vncserver_listen=0.0.0.0
novncproxy_base_url=http://192.168.0.11:6080/vnc_auto.html
vnc_enabled=true
vnc_keymap=en-us
Under the [glance] section the host address must become:
host=192.168.0.11
Configure the hypervisor type:
To check whether the machine supports KVM, run:
grep -E '(vmx|svm)' /proc/cpuinfo
If this returns anything, hardware virtualization is supported and virt_type=kvm can be used; if not, set virt_type=qemu instead.
My machine supports it, so it is set to kvm:
virt_type=kvm
The final configuration looks like this:
[root@node2 ~]# grep '^[a-z]' /etc/nova/nova.conf
my_ip=192.168.0.12
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
rpc_backend=rabbit
connection=mysql://nova:nova@192.168.0.11/nova
host=192.168.0.11
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova
virt_type=kvm
lock_path=/var/lib/nova/tmp
rabbit_host=192.168.0.11
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
novncproxy_base_url=http://192.168.0.11:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
vnc_enabled=true
vnc_keymap=en-us
[root@node2 ~]#
3. Configure time synchronization on the node2 compute node
yum install chrony -y
Configure the config file as follows:
[root@node2 ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.0.11 iburst
This makes node1 the time server that node2 synchronizes against.
Enable and start the time service:
systemctl enable chronyd.service
systemctl start chronyd.service
Verify the time synchronization:
chronyc sources   # this syncs the time; change the clock manually and sync again to test. If it does not work, restart chronyd and try again.
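For a more detailed view of the synchronization state, both of these chrony subcommands can be used (192.168.0.11 should appear as the selected source, marked with *):
chronyc sources -v   # lists the configured servers with an explanation of each column
chronyc tracking     # shows the current offset from the reference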
4. Start nova-compute
systemctl enable libvirtd openstack-nova-compute
systemctl start libvirtd openstack-nova-compute
5. Verify:
On the management node node1, run openstack host list; the compute service should now appear on the node2 host.
As follows:
[root@node1 ~]# openstack host list
+———–+————-+———-+
| Host Name | Service | Zone |
+———–+————-+———-+
| node1 | consoleauth | internal |
| node1 | scheduler | internal |
| node1 | cert | internal |
| node1 | conductor | internal |
| node2 | compute | nova |
+———–+————-+———-+
[root@node1 ~]#
On node1 you can check that the Glance service is working:
nova image-list
On node1 you can check that the Keystone service is working:
nova endpoints
6. Neutron (networking)
It must be installed on both the controller node and the compute nodes.
1. Install
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
First register the service:
source admin-openrc.sh
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://192.168.0.11:9696
openstack endpoint create --region RegionOne network internal http://192.168.0.11:9696
openstack endpoint create --region RegionOne network admin http://192.168.0.11:9696
Create the user: openstack user create --domain default --password=neutron neutron
openstack role add --project service --user neutron admin
Sync the database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start the services:
Since nova.conf was changed, nova-api needs to be restarted first:
systemctl restart openstack-nova-api
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-l3-agent.service
Verify: output like the following means everything is working:
[root@node1 neutron]# neutron agent-list
+————————————–+——————–+——-+——-+—————-+—————————+
| id | agent_type | host | alive | admin_state_up | binary |
+————————————–+——————–+——-+——-+—————-+—————————+
| 16dbc460-087f-4f3f-a18d-332f5ce1292d | Linux bridge agent | node2 | | True | neutron-linuxbridge-agent |
| 5ab10bb8-b9c6-472c-9748-917d9a2c1379 | Metadata agent | node1 | | True | neutron-metadata-agent |
| 9651587a-ac10-4ddd-9031-9115ef3f9c35 | Linux bridge agent | node1 | | True | neutron-linuxbridge-agent |
| c55ed77f-68c3-456c-b1f0-ffe5452f38ba | L3 agent | node1 | | True | neutron-l3-agent |
| e3f70954-b809-4072-ae84-816a1c0c2d96 | DHCP agent | node1 | | True | neutron-dhcp-agent |
+————————————–+——————–+——-+——-+—————-+—————————+
[root@node1 neutron]#
Error encountered:
[root@node1 neutron]# systemctl status neutron-l3-agent.service
● neutron-l3-agent.service – OpenStack Neutron Layer 3 Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-l3-agent.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2016-10-08 23:48:33 CST; 16min ago
Process: 3972 ExecStart=/usr/bin/neutron-l3-agent –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/l3_agent –config-file /etc/neutron/neutron.conf –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-l3-agent –log-file /var/log/neutron/l3-agent.log (code=exited, status=1/FAILURE)
Main PID: 3972 (code=exited, status=1/FAILURE)
Oct 08 23:48:27 node1 systemd[1]: Started OpenStack Neutron Layer 3 Agent.
Oct 08 23:48:27 node1 systemd[1]: Starting OpenStack Neutron Layer 3 Agent…
Oct 08 23:48:30 node1 neutron-l3-agent[3972]: No handlers could be found for logger “oslo_config.cfg”
Oct 08 23:48:33 node1 systemd[1]: neutron-l3-agent.service: main process exited, code=exited, status=1/FAILURE
Oct 08 23:48:33 node1 systemd[1]: Unit neutron-l3-agent.service entered failed state.
Oct 08 23:48:33 node1 systemd[1]: neutron-l3-agent.service failed.
Error in the log:
[root@node1 neutron]# tail l3-agent.log
2016-10-08 23:32:36.954 3536 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2016-10-08 23:48:15.942 3935 INFO neutron.common.config [-] Logging enabled!
2016-10-08 23:48:15.943 3935 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 7.1.1
2016-10-08 23:48:15.976 3935 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2016-10-08 23:48:26.672 3953 INFO neutron.common.config [-] Logging enabled!
2016-10-08 23:48:26.673 3953 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 7.1.1
2016-10-08 23:48:26.701 3953 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
2016-10-08 23:48:32.942 3972 INFO neutron.common.config [-] Logging enabled!
2016-10-08 23:48:32.943 3972 INFO neutron.common.config [-] /usr/bin/neutron-l3-agent version 7.1.1
2016-10-08 23:48:32.986 3972 ERROR neutron.agent.l3.agent [-] An interface driver must be specified
[root@node1 neutron]# tree ./
Fix: the log shows that the L3 agent's interface_driver must be specified. Uncommenting the default setting in l3_agent.ini is enough; the configuration becomes:
[root@node1 neutron]# grep -v "#" l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
[AGENT]
[root@node1 neutron]#
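After changing l3_agent.ini, restart the agent and confirm that it stays up (a quick check):
systemctl restart neutron-l3-agent.service
systemctl status neutron-l3-agent.service
neutron agent-list   # the L3 agent should now show up as alive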
Compute node:
scp /etc/neutron/neutron.conf 192.168.0.12:/etc/neutron/
scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.0.12:/etc/neutron/plugins/ml2/
scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.0.12:/etc/neutron/plugins/ml2/
vi /etc/nova/nova.conf and add the following in the [neutron] section:
url = http://192.168.0.11:9696
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = True
metadata_proxy_shared_secret = neutron
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl restart openstack-nova-compute
Start the network agent on the compute node:
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verify: on node1, neutron agent-list should now also show the node2 agent.
7. Creating a virtual machine
1. Create a network
neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
neutron subnet-create flat 192.168.0.0/24 --name flat-subnet --allocation-pool start=192.168.0.200,end=192.168.0.245 --dns-nameserver 192.168.0.1 --gateway 192.168.0.1
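You can verify the network and subnet before booting anything (a quick check with the neutron client):
neutron net-list
neutron subnet-list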
2. Create a key pair
ssh-keygen -q -N ""
nova keypair-add --pub-key .ssh/id_rsa.pub mykey
source demo-openrc.sh
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
3. List the available flavors, images, and networks:
nova flavor-list
nova image-list
nova net-list
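The boot command itself is not shown in these notes; a minimal sketch, assuming the m1.tiny flavor, the cirros image uploaded earlier, and the network ID returned by nova net-list, would be:
# Replace FLAT_NET_ID with the id column from "nova net-list"
nova boot --flavor m1.tiny --image cirros \
--nic net-id=FLAT_NET_ID \
--security-group default --key-name mykey \
hello-instance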
4. Check the instance:
[root@node1 ~]# nova list
+————————————–+—————-+——–+————+————-+——————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+—————-+——–+————+————-+——————–+
| ff679e4c-a46d-4535-8711-487e7bc1fa82 | hello-instance | ACTIVE | – | Running | flat=192.168.0.201 |
+————————————–+—————-+——–+————+————-+——————–+
The instance is up and running.
It can now be reached over SSH directly, since key-based login was set up above.
5. Managing an instance from the command line (start, stop, suspend):
nova stop/start/suspend vm-name
[root@node1 ~]# ssh cirros@192.168.0.201
The authenticity of host '192.168.0.201 (192.168.0.201)' can't be established.
RSA key fingerprint is 87:40:88:6f:3a:7c:bf:67:bb:35:3d:ec:b6:fb:19:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.201' (RSA) to the list of known hosts.
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:CA:5B:2A
inet addr:192.168.0.201 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:feca:5b2a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:640 errors:0 dropped:0 overruns:0 frame:0
TX packets:150 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:48025 (46.8 KiB) TX bytes:16188 (15.8 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$
6. [root@node1 ~]# nova get-vnc-console hello-instance novnc
+——-+———————————————————————————–+
| Type | Url |
+——-+———————————————————————————–+
| novnc | http://192.168.0.11:6080/vnc_auto.html?token=5abb0983-c570-4768-a9be-55942ab2a08d |
+——-+———————————————————————————–+
[root@node1 ~]#
Opening the URL above also gives direct console access.
7. Note that eth0 on node1 and node2 no longer carries the IP address; it has been moved onto a bridge.
[root@node1 ~]# ifconfig
brqf470e4d4-3d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.11 netmask 255.255.255.0 broadcast 192.168.0.255
ether 00:0c:29:93:54:16 txqueuelen 0 (Ethernet)
RX packets 12855 bytes 1955153 (1.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10668 bytes 15516280 (14.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::20c:29ff:fe93:5416 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:93:54:16 txqueuelen 1000 (Ethernet)
RX packets 28086 bytes 5301239 (5.0 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 30769 bytes 25996026 (24.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 147063 bytes 38034255 (36.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 147063 bytes 38034255 (36.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tapfa200e16-15: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 1a:a4:b7:0e:ca:1d txqueuelen 1000 (Ethernet)
RX packets 79 bytes 8603 (8.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1334 bytes 145352 (141.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@node1 ~]#
[root@node2 neutron]# ifconfig
brqf470e4d4-3d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.12 netmask 255.255.255.0 broadcast 192.168.0.255
inet6 fe80::6833:9eff:fe02:4b7b prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1a:33:00 txqueuelen 0 (Ethernet)
RX packets 4719 bytes 624286 (609.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4977 bytes 929447 (907.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::20c:29ff:fe1a:3300 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:1a:33:00 txqueuelen 1000 (Ethernet)
RX packets 40003 bytes 19789706 (18.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 26625 bytes 9379122 (8.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 37071 bytes 1946600 (1.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 37071 bytes 1946600 (1.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
tap88f37f89-41: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::fc16:3eff:feca:5b2a prefixlen 64 scopeid 0x20<link>
ether fe:16:3e:ca:5b:2a txqueuelen 500 (Ethernet)
RX packets 163 bytes 18454 (18.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 660 bytes 49669 (48.5 KiB)
TX errors 0 dropped 262 overruns 0 carrier 0 collisions 0
[root@node2 neutron]#
8. Where the instances live on disk:
/var/lib/nova/instances/
When writing scripts: the logs and lock files matter; they prevent the same operation from running twice.
libvirt.xml is generated dynamically, so editing it has no effect. Under plain KVM, editing the domain XML directly works, but under OpenStack the file is regenerated, so changes are lost.
8. Dashboard
1. Edit the dashboard configuration file (/etc/openstack-dashboard/local_settings)
1) Around line 138: OPENSTACK_HOST = "192.168.0.11"   # set your own IP (the Keystone IP)
2) OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"   # change the default role to user
3) ALLOWED_HOSTS = ['*', ]   # which hosts may access the dashboard
4) CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.0.11:11211',
}
}   # uncomment this block and change the memcached address
5) Time zone
TIME_ZONE = "Asia/Shanghai"
2. Restart Apache
systemctl restart httpd
3. Web access: http://192.168.0.11/dashboard
Regular user: demo, password demo
Administrator: admin, password admin
4. Instance creation workflow
5. Sometimes creating networks fails; the cause is that the host's IP address must be configured statically, not via DHCP.
9. Cinder (block/cloud storage)
1. cinder-api: accepts API requests and routes them to cinder-volume for execution. Usually installed on the controller node.
cinder-volume: handles the requests, reads from and writes to the block storage database to maintain state, interacts with other processes (such as cinder-scheduler) through the message queue, and talks directly to the underlying storage hardware or software. Through its driver architecture it can work with many different storage providers. Usually installed on the node that provides the disks.
cinder-scheduler: a daemon, similar to nova-scheduler, that picks the best block storage node for a new volume. Usually installed on the controller node.
Supported backends include NFS, local disks, and distributed file systems.
2. Install:
Controller node:
yum install openstack-cinder python-cinderclient
3. Configure
1) vim /etc/cinder/cinder.conf
Configure the database connection:
Under the [database] section:
connection = mysql://cinder:cinder@192.168.0.11/cinder
2) With the database connection configured, the database can be synced; it must be run as the cinder user (the following command does it):
su -s /bin/bash -c "cinder-manage db sync" cinder
Check in the database that the cinder database now has tables. The cinder database itself was created earlier; if it was not, create it first.
3) Create a cinder user:
openstack user create --domain default --password-prompt cinder
4) Add it to the service project with the admin role:
openstack role add --project service --user cinder admin
5. Configure Keystone in the config file
auth_strategy = keystone   # uncomment this, around line 536.
In the [keystone_authtoken] section add:
[keystone_authtoken]
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
6. Configure RabbitMQ in the cinder config file
Uncomment the line around 2294:
rpc_backend = rabbit
In the [oslo_messaging_rabbit] section configure:
rabbit_host = 192.168.0.11
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
7. Configure Glance:
glance_host = 192.168.0.11
lock_path = /var/lib/cinder/tmp
8. In /etc/nova/nova.conf, configure the [cinder] section as follows:
[cinder]
os_region_name = RegionOne
9. Restart the cinder and related nova services:
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
10. Register in Keystone
1) First create the services (both the v1 and v2 versions must be created):
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne volume public http://192.168.0.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://192.168.0.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://192.168.0.11:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://192.168.0.11:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.0.11:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.0.11:8776/v2/%\(tenant_id\)s
The configuration file now looks like this:
[root@node1 ~]# grep '^[a-z]' /etc/cinder/cinder.conf
glance_host = 192.168.0.11
auth_strategy = keystone
rpc_backend = rabbit
connection = mysql://cinder:cinder@192.168.0.11/cinder
auth_uri = http://192.168.0.11:5000
auth_url = http://192.168.0.11:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/cinder/tmp
rabbit_host = 192.168.0.11
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[root@node1 ~]#
11. Configure the storage node on node2:
Add a disk to node2 to use for Cinder storage, with LVM as the backend.
1) Create the LVM physical volume and volume group
[root@node2 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@node2 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
[root@node2 ~]#
2) In the devices section of /etc/lvm/lvm.conf, add the following filter:
filter = [ "a/sdb/", "r/.*/"]
3) Install on the storage node:
yum install openstack-cinder targetcli python-oslo-policy
4) For the storage node configuration, copy /etc/cinder/cinder.conf over from the controller node and then modify it.
Add the following at the bottom of the config file:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
Around line 540, configure:
enabled_backends = lvm   # refers to the [lvm] section above
5) Start the services (on the storage node)
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
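Back on the controller node you can confirm the new backend is up and create a test volume (a sketch; source admin-openrc.sh first, and note that newer cinder clients use --name instead of --display-name):
cinder service-list                       # cinder-volume for the lvm backend should show state "up"
cinder create --display-name demo-vol 1   # demo-vol is just an example name; the size is 1 GB
cinder list                               # the volume status should become "available"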
12. NFS can also be used as the backend storage, but the backend configuration has to be changed to the NFS driver.
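A rough sketch of what an NFS backend would look like (the share path is hypothetical; the driver and option names are the standard cinder NFS driver settings):
# /etc/cinder/cinder.conf on the storage node
[DEFAULT]
enabled_backends = nfs
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_point_base = $state_path/mnt
# /etc/cinder/nfs_shares then lists the exports, one per line, e.g.
# 192.168.0.100:/export/cinder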
13. To be able to set an administrator password from the web interface when launching an instance:
1) First configure the dashboard
vim /etc/openstack-dashboard/local_settings
Set the following options to True:
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': True,
'can_set_password': True,
'requires_keypair': True,
}
Then restart Apache: systemctl restart httpd
2) Configure the nova node
vi /etc/nova/nova.conf
Set the following two options to True:
inject_password=True
inject_key=True
Restart the services for the change to take effect.