OpenStack Queens Deployment 10 - Install and Configure the Cinder Node

Last updated: 2018-09-05 03:34:47

The Block Storage service (Cinder) provides block storage to instances. How storage is allocated and consumed is determined by the block storage driver, or by the drivers of a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on.

 

1.1. Install and configure the controller node

1.1.1. Prerequisites

  • Create the database

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'Dx123456';

  • As the admin user, create the cinder credentials

openstack user create --domain default --password-prompt cinder

  • Add the admin role to the cinder user in the service project

openstack role add --project service --user cinder admin

  • Create the cinder service entities

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

  • Create the endpoints

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
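As an optional sanity check (not part of the original notes), the endpoints just created can be listed while the admin credentials are still loaded:

openstack endpoint list --service volumev2

openstack endpoint list --service volumev3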

 

1.1.2. Install and configure components (install, configure, sync the database)

  • Install

yum install -y openstack-cinder

Edit

vi /etc/cinder/cinder.conf

[database] section:

connection = mysql+pymysql://cinder:Dx123456@controller/cinder

[DEFAULT] section:

rpc_backend = rabbit

transport_url = rabbit://openstack:Dx123456@controller

auth_strategy = keystone

my_ip = 192.168.10.211                                       (this node's management IP)

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = Dx123456

[oslo_concurrency] section:

lock_path = /var/lib/cinder/tmp

 

  • Final result of the edits

 

  • Sync the database

su -s /bin/sh -c "cinder-manage db sync" cinder

Deprecation warnings in the output can be ignored.
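To confirm the sync populated the database, one optional check (not part of the original notes; adjust credentials to your environment) is to list the tables that were created:

mysql -h controller -u cinder -pDx123456 cinder -e "SHOW TABLES;"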

 

1.2. Configure the compute node to use block storage

Edit /etc/nova/nova.conf and add the following:

[cinder] section:

os_region_name = RegionOne

 

1.3. Finalize the installation

  • Restart the Compute API service

systemctl restart openstack-nova-api.service

  • Start the Block Storage services and configure them to start at boot

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

 

1.4. Install and configure the storage node (LVM backend, non-Ceph)

1.4.1. Install the supporting packages (if LVM is already installed, skip to 1.4.2 Configure LVM storage)

  • Install the LVM packages

yum install lvm2

  • Start the LVM metadata service and configure it to start at boot

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service

 

1.4.2. Configure LVM storage

  • Create the LVM physical volume /dev/sdb

pvcreate /dev/sdb

Physical volume "/dev/sdb" successfully created

 

  • Create the LVM volume group cinder-volumes

vgcreate cinder-volumes /dev/sdb

Volume group "cinder-volumes" successfully created

 

  • Edit /etc/lvm/lvm.conf, locate the filter (or global_filter) line, and configure it as follows

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Note:

Each element in the filter array begins with a for accept or r for reject, followed by a regular expression matching a device name. The array must end with r/.*/ to reject all remaining devices.

An alternative form: global_filter = [ "a|.*/|", "a|sdb1|" ]
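After adjusting the filter, a quick generic check (not from the original notes) confirms that LVM still sees the intended devices:

pvs

vgs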

 

Edit /etc/cinder/cinder.conf on the storage node.

[DEFAULT] section:

transport_url = rabbit://openstack:Dx123456@controller

auth_strategy = keystone

my_ip = 192.168.10.213

enabled_backends = lvm

glance_api_servers = http://controller:9292

[database] section:

connection = mysql+pymysql://cinder:Dx123456@controller/cinder

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_id = default

user_domain_id = default

project_name = service

username = cinder

password = Dx123456

[lvm] section (this section is new; it does not exist in the default configuration file):

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

iscsi_helper = lioadm

[oslo_concurrency] section:

lock_path = /var/lib/cinder/tmp

 

  • Enable the storage services at boot and start them

systemctl enable openstack-cinder-volume.service target.service

systemctl start openstack-cinder-volume.service target.service

 

1.5. Verify operation

  • Source the admin credentials to gain access to admin-only CLI commands

. admin-openrc

  • List the service components to verify that each process started successfully (see the check below)
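The standard command for this check in the Queens guide is the one below; it should report cinder-scheduler on the controller and cinder-volume on the storage node, both in the up state:

openstack volume service list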

 

1.6. Use cinder (as the current user, e.g. demo)

  • Create a 1 GB volume named volume48; on the cinder host this automatically creates the corresponding iSCSI target:

openstack volume create --size 1 volume48

 

  • View the newly created volume

openstack volume list

  • Attach the volume to an instance

openstack server add volume provider-instance volume48

 

Note: this command produces no output.

  • View the volume again
  • Boot the instance and check the newly attached volume (a quick in-guest check is sketched below)
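A minimal way to confirm the attachment from inside the guest, assuming a CirrOS image where the new 1 GB volume typically appears as /dev/vdb:

sudo fdisk -l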

 

OpenStack Queens Deployment 9 - Install and Configure the Dashboard (Horizon)

Last updated: 2018-09-05 03:32:00

Install and configure on the controller node.

1.1. Install

yum install -y openstack-dashboard

 

1.2. Configure local_settings

vi /etc/openstack-dashboard/local_settings

 

  • Modify

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

 

  • Configure memcached session storage

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

'default': {

'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

'LOCATION': 'controller:11211',

}

}

 

  • Enable the Identity API version 3

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

  • Enable support for domains

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

  • Configure the API versions

OPENSTACK_API_VERSIONS = {

"identity": 3,

"image": 2,

"volume": 2,

}

 

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

If you chose networking option 1 (provider networks), disable support for layer-3 networking services:

 

OPENSTACK_NEUTRON_NETWORK = {

 

'enable_router': False,

'enable_quotas': False,

'enable_distributed_router': False,

'enable_ha_router': False,

'enable_lb': False,

'enable_firewall': False,

'enable_vpn': False,

'enable_fip_topology_check': False,

}

 

  • To complete the installation, restart the web server and the session storage service

systemctl restart httpd.service memcached.service

Open http://192.168.1.211/dashboard in a browser to access the OpenStack web UI.

Domain: default

User: admin

Password: Dx123456

OpenStack Queens Deployment 8 - Launch an Instance

Last updated: 2018-09-05 03:29:50

Note: to purge records of deleted instances from the OpenStack database:

nova-manage db archive_deleted_rows --verbose

 

1.1. Use the provider network

Instances use the provider network, which is connected to the physical network at layer 2 and provides IP addresses to instances via DHCP.

1.2. Create the virtual network (as the admin user)

. admin-openrc

openstack network create  --share --external \

--provider-physical-network provider \

--provider-network-type flat provider

 

Parameters:

--share: allow all projects to use the virtual network

--external: define the network as external (--internal would make it internal)

--provider-physical-network provider and --provider-network-type flat: connect the flat virtual network to the physical network on the interface specified in /etc/neutron/plugins/ml2/ml2_conf.ini and /etc/neutron/plugins/ml2/linuxbridge_agent.ini, which here is the ens33 interface.

 

  • Create a subnet named "provider"

openstack subnet create --network provider \

--allocation-pool start=192.168.1.40,end=192.168.1.80  \

--dns-nameserver 8.8.8.8 --gateway 192.168.1.1  \

--subnet-range 192.168.1.0/24 provider
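An optional confirmation (not in the original notes) that the network and subnet came up as intended:

openstack network show provider

openstack subnet show provider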

 

1.3. Create a flavor (as admin)

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

The m1.nano flavor has 1 vCPU, 64 MB of RAM, and a 1 GB disk.

 

Note: flavors for reference

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

1.4. Create a key pair (as the demo user)

Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, a public key must be added to the Compute service.

As the demo user, generate a key pair and add the public key:

. demo-openrc

ssh-keygen -q -N ""

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

 

  • Check

openstack keypair list

 

1.5. Add security group rules to allow ping/SSH (admin)

Run the following as the admin user.

  • Allow ICMP (ping)

openstack security group rule create --proto icmp default

  • Allow SSH

openstack security group rule create --proto tcp --dst-port 22 default

 

1.6. Check the environment (as the demo user)

  • List flavors

openstack flavor list

  • List images

openstack image list

  • List networks

openstack network list

★★ Record the ID of the provider network: 7680edfb-f3b6-4791-a4f1-a44d8b4d9f2a

  • List security groups

openstack security group list

 

1.7. Create the instance (as the demo user)

  • Create the instance

openstack server create --flavor m1.nano --image cirros \

--nic net-id=7680edfb-f3b6-4791-a4f1-a44d8b4d9f2a --security-group default \

--key-name mykey provider-instance

The command above creates an instance named "provider-instance" using the m1.nano flavor, the cirros image, and network ID 7680edfb-f3b6-4791-a4f1-a44d8b4d9f2a.
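In addition to the status listing in the next step, the full instance details (including any fault message if the build fails) can be inspected with:

openstack server show provider-instance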

 

  • Check the instance status

openstack server list

 

1.8. Access the instance with the virtual console (as the demo user)

  • Get the VNC access URL of the "provider-instance" instance

openstack console url show provider-instance

 

  • Open the URL in a browser (if DNS/hosts is not configured, use the IP address instead)

 

  • Common OpenStack operations

openstack flavor list                                  # list flavors

openstack image list                                   # list images

openstack server list                                  # list instances

openstack network list                                 # list networks

openstack security group list                          # list security groups

openstack console url show test-instance               # get the browser VNC console URL

openstack ip floating pool list                        # list floating IP pools

openstack ip floating create nova                      # allocate a floating IP from the pool

openstack ip floating list                             # list allocated floating IPs

openstack ip floating add 192.168.10.129 test-instance # bind the floating IP to the instance

openstack server create --flavor m1.medium --image win7_x64 --nic net-id=6e85dfe1-f976-4407-87ae-a217a46c9dff --security-group default test-instance   # create an instance

openstack server start dxw-vm1                         # start an instance

OpenStack Queens Deployment 7 - Install and Configure the Neutron Networking Service

Last updated: 2018-09-05 03:26:46

1.1. Operations on the controller node

1.1.1. Create the database

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'Dx123456';

1.1.2. Create the keystone user and service credentials

  • Create the neutron user

openstack user create --domain default --password-prompt neutron

  • Add the admin role to the neutron user

openstack role add --project service --user neutron admin

 

  • Create the neutron service entity

openstack service create --name neutron --description "OpenStack Networking" network

 

  • Create the endpoints

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

1.1.3. Choose a networking option and install/configure it (option 1, provider networks)

  • Install the packages

yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

  • Configure the service components
  • Configuration file 1: neutron.conf

vi /etc/neutron/neutron.conf

[DEFAULT] section:

core_plugin = ml2

service_plugins =

transport_url = rabbit://openstack:Dx123456@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

[database] section:

connection = mysql+pymysql://neutron:Dx123456@controller/neutron

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = Dx123456

[nova] section:

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = Dx123456

[oslo_concurrency] section:

lock_path = /var/lib/neutron/tmp

 

  • Configuration file 2: ml2_conf.ini

Configure the ML2 plug-in; edit:

vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2] section:

type_drivers = flat,vlan

tenant_network_types =

mechanism_drivers = linuxbridge

extension_drivers = port_security

[ml2_type_flat] section:

flat_networks = provider

[securitygroup] section:

enable_ipset = true

 

  • Configuration file 3: linuxbridge_agent.ini

Configure the Linux bridge agent:

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge] section:

physical_interface_mappings = provider:ens33                 (name of the provider/external network interface)

[vxlan] section:

enable_vxlan = false

[securitygroup] section:

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

  • Configuration file 4: dhcp_agent.ini

Configure the DHCP agent; edit:

vi /etc/neutron/dhcp_agent.ini

[DEFAULT] section:

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

  • Configuration file 5: metadata_agent.ini

Configure the metadata agent; edit:

vi /etc/neutron/metadata_agent.ini

[DEFAULT] section:

nova_metadata_ip = controller

metadata_proxy_shared_secret = Dx123456

  • Configuration file 6: nova.conf

Edit:

vi /etc/nova/nova.conf

[neutron] section:

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = Dx123456

service_metadata_proxy = true

metadata_proxy_shared_secret = Dx123456

 

1.1.4. Finalize

  • The Networking service requires a symbolic link /etc/neutron/plugin.ini pointing to /etc/neutron/plugins/ml2/ml2_conf.ini

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

  • Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  • Restart the Compute API service

systemctl restart openstack-nova-api.service

  • Start the networking services

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service  neutron-metadata-agent.service

systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Note: if you chose networking option 2, additional layer-3 services must also be started.

 

1.2. Operations on the compute node

1.2.1. Install the components

yum install -y openstack-neutron-linuxbridge ebtables ipset

 

1.2.2. Configure the components

  • Configuration file 1: neutron.conf

Configure authentication, the message queue, and the plug-in; edit:

vi /etc/neutron/neutron.conf

[DEFAULT] section:

transport_url = rabbit://openstack:Dx123456@controller

auth_strategy = keystone

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = Dx123456

[oslo_concurrency] section:

lock_path = /var/lib/neutron/tmp

 

  • Configuration file 2: linuxbridge_agent.ini

Configure the Linux bridge agent:

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge] section:

physical_interface_mappings = provider:ens33                 (name of the provider physical network interface)

[vxlan] section:

enable_vxlan = false

[securitygroup] section:

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

  • Configuration file 3: nova.conf

Edit:

vi /etc/nova/nova.conf

[neutron] section:

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = Dx123456

service_metadata_proxy = true

metadata_proxy_shared_secret = Dx123456

 

1.2.3. Start the services

  • Restart

systemctl restart openstack-nova-compute.service

  • Enable and start

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

 

1.2.4. Verify the installation and configuration

  • On the controller node

. admin-openrc

openstack extension list --network

  • Check the agents

openstack network agent list

At this point you can either create instances or install additional services.

OpenStack Queens Deployment 6 - Configure the Nova Compute Service

Last updated: 2018-09-05 03:23:27

1.1. Controller node installation and configuration

1.1.1. Create the databases

CREATE DATABASE nova_api;

CREATE DATABASE nova;

CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'Dx123456';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'Dx123456';

 

1.1.2. Create the nova keystone user

  • Create the nova user

openstack user create --domain default --password-prompt nova

  • Grant the admin role to the nova user

openstack role add --project service --user nova admin

 

  • Create a service entity named "nova" of type compute:

# openstack service create --name nova --description "OpenStack Compute" compute

 

  • Create the compute service endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

 

1.1.3. Create the "placement" user

openstack user create --domain default --password-prompt placement

 

  • Add the placement user to the service project and grant it the admin role

openstack role add --project service --user placement admin

 

  • Create a service named placement of type placement in the service catalog:

openstack service create --name placement --description "Placement API" placement

  • Create the placement service endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778

openstack endpoint create --region RegionOne placement internal http://controller:8778

openstack endpoint create --region RegionOne placement admin http://controller:8778

1.1.4. Install and configure the components

yum install -y openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

openstack-nova-scheduler openstack-nova-placement-api

 

  • Edit

vi /etc/nova/nova.conf

[DEFAULT] section:

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:Dx123456@controller

my_ip = 192.168.10.211              (this node's management IP)

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database] section:

connection = mysql+pymysql://nova:Dx123456@controller/nova_api

[database] section:

connection = mysql+pymysql://nova:Dx123456@controller/nova

[api] section:

auth_strategy=keystone

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = Dx123456

[vnc] section:

enabled = true

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

[glance] section:

api_servers = http://controller:9292

[oslo_concurrency] section:

lock_path = /var/lib/nova/tmp

 

[placement] section:

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = Dx123456

 

  • Edit

vi /etc/httpd/conf.d/00-nova-placement-api.conf and append the following at the end:

 

<Directory /usr/bin>

<IfVersion >= 2.4>

Require all granted

</IfVersion>

<IfVersion < 2.4>

Order allow,deny

Allow from all

</IfVersion>

</Directory>

 

  • Restart httpd

systemctl restart httpd

  • Populate the nova-api database

su -s /bin/sh -c "nova-manage api_db sync" nova

  • Register the cell0 database

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

 

  • Create cell1:

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Output: b3e6e982-fd42-466f-80c9-d4b7f77cb6ff

  • Populate the nova database

su -s /bin/sh -c "nova-manage db sync" nova

 

★ Note: when populating the nova-api, cell0, cell1, and nova databases, the following error may be reported: /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported

exception.NotSupportedWarning

This is a bug in the Queens release. To work around it, change

'db_max_retry_interval', 'backend'])

to: 'db_max_retry_interval', 'backend', 'use_tpool'])

References:

Bug: https://bugs.launchpad.net/nova/+bug/1746530

Patch: https://github.com/openstack/oslo.db/commit/c432d9e93884d6962592f6d19aaec3f8f66ac3a2

1.1.5. Verify that cell0 and cell1 are registered correctly

nova-manage cell_v2 list_cells

1.1.6. Start the services to complete the controller node installation

systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

 

1.2. Compute node installation and configuration

1.2.1. Install the components

yum install -y openstack-nova-compute

1.2.2. Configure

  • Edit

vi /etc/nova/nova.conf

[DEFAULT] section:

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:Dx123456@controller

my_ip = 191.168.10.211              (the compute node's management IP)

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api] section:

auth_strategy=keystone

[keystone_authtoken] section:

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = Dx123456

[vnc] section:

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

 

★★★ If the compute node and controller node are on the same host, use the following [vnc] parameters instead ★★★

enabled = true

novncproxy_port=6080

novncproxy_host=0.0.0.0

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = 192.168.1.201

novncproxy_base_url = http://192.168.1.201:6080/vnc_auto.html

[glance] section:

api_servers = http://controller:9292

[oslo_concurrency] section:

lock_path = /var/lib/nova/tmp

[placement] section:

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = Dx123456

 

  • Check whether the compute node supports hardware virtualization

egrep -c '(vmx|svm)' /proc/cpuinfo

If the command returns 0, hardware acceleration is not supported and /etc/nova/nova.conf must be adjusted (see the snippet below).

★ Note: QEMU is recommended when the compute node itself runs inside a virtual machine.
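Per the upstream Queens install guide, the setting to apply in /etc/nova/nova.conf in that case is:

[libvirt]

virt_type = qemu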

 

1.2.3. Start the services

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

 

1.2.4. On the controller node, add the compute node to the cell database

openstack hypervisor list

1.2.5. Discover the compute node

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
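Alternatively (per the upstream guide), new compute nodes can be discovered automatically by setting an interval in /etc/nova/nova.conf on the controller:

[scheduler]

discover_hosts_in_cells_interval = 300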

1.2.6. Verify the installation and configuration on the controller node

  • List the compute services

openstack compute service list

  • List the catalog endpoints

openstack catalog list

  • List the images

openstack image list

  • Check that the cells and the placement API are working correctly:

nova-status upgrade check