
OpenStack Mitaka (M Release) Deployment

System Solution

 


I. Environment Requirements

1. NICs


node          em1            em2            em3    em4
controller1   172.16.16.1    172.16.17.1    none   none
controller2   172.16.16.2    172.16.17.2    none   none
compute1      172.16.16.3    172.16.17.3    none   none
compute2      172.16.16.4    172.16.17.4    none   none
compute3      172.16.16.5    172.16.17.5    none   none
...





 

2. Message Queue

Use mirrored-queue mode; for detailed deployment steps, see the RabbitMQ cluster deployment document on ZenTao.

3. Database

Use MariaDB + InnoDB + Galera, version 10.0.18 or later; for detailed deployment steps, see the Galera cluster deployment document on ZenTao.

4. Middleware

Use memcached (not clustered). Edit /etc/sysconfig/memcached and change 127.0.0.1 to the local hostname (or IP).
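A minimal sketch of that edit, assuming the stock CentOS /etc/sysconfig/memcached layout:

# swap the loopback bind address for this host's name, then restart
sed -i "s/127.0.0.1/$HOSTNAME/" /etc/sysconfig/memcached
systemctl enable memcached.service && systemctl restart memcached.service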

 

II. Deployment Plan

This deployment uses controller1 as the identity (auth) host name.

All service passwords use $MODULE+manager, e.g. novamanager, glancemanager.

Database passwords use dftc+$MODULE (shown redacted below as DB_PASS).

Planned IP ranges: 172.16.16.0/24 is the management network; 172.16.17.0/24 is the storage network; 172.16.18.0/23 is the external network.

Before starting, assign the variable: MYIP=`ip add show em1|grep inet|head -1|awk '{print $2}'|awk -F'/' '{print $1}'`

This document uses the flat+vxlan network deployment model; if you need a different one, look it up yourself.

 

1. database

mysql -uroot -p****** -e "create database keystone;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "create database glance;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "create database nova;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "create database nova_api;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "create database neutron;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "create database cinder;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'DB_PASS';"

mysql -uroot -p****** -e "FLUSH PRIVILEGES;"

 

2. keystone

### Install dependency packages

yum install openstack-keystone httpd mod_wsgi

### Edit the configuration file

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

### Sync the database and generate fernet keys

su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
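To confirm the fernet key repository was created (a sketch):

ls -l /etc/keystone/fernet-keys/    # should contain keys 0 and 1, owned by keystone:keystone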

 

### /etc/httpd/conf.d/wsgi-keystone.conf

touch /etc/httpd/conf.d/wsgi-keystone.conf

cat > /etc/httpd/conf.d/wsgi-keystone.conf << EOF
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
EOF

 

 

####

systemctl enable httpd.service && systemctl start httpd.service

 

 

 

###

export OS_TOKEN=749d6ead6be998642461
export OS_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3

 

 

 

openstack service create --name keystone --description "DFTCIAAS Identity" identity

openstack endpoint create --region scxbxxzx identity public http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity internal http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity admin http://controller1:35357/v3

openstack domain create --description "Default Domain" default

openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin

######## create role, project, and user

openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo

openstack role create user
openstack role add --project demo --user demo user
sed -i "/^pipeline/ s#admin_token_auth##g" /etc/keystone/keystone-paste.ini
unset OS_TOKEN OS_URL
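From this point the CLI needs user credentials instead of the bootstrap token. A minimal admin-openrc sketch, following the standard Mitaka pattern (ADMIN_PASS is a placeholder for the password entered at the --password-prompt above):

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS    # placeholder: the password chosen at the prompt
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2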

 

openstack user create --domain default --password-prompt glance

echo ######## create image service and endpoint
openstack role add --project service --user glance admin
openstack service create --name glance --description "DFTCIAAS Image" image
openstack endpoint create --region scxbxxzx image public http://controller1:9292
openstack endpoint create --region scxbxxzx image internal http://controller1:9292
openstack endpoint create --region scxbxxzx image admin http://controller1:9292

openstack user create --domain default --password-prompt nova

echo ######## create compute service and endpoint
openstack role add --project service --user nova admin
openstack service create --name nova --description "DFTCIAAS Compute" compute
openstack endpoint create --region scxbxxzx compute public http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx compute internal http://controller1:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx compute admin http://controller1:8774/v2.1/%\(tenant_id\)s

openstack user create --domain default --password-prompt neutron

echo ######## create network service and endpoint
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "DFTCIAAS Networking" network
openstack endpoint create --region scxbxxzx network public http://controller1:9696
openstack endpoint create --region scxbxxzx network internal http://controller1:9696
openstack endpoint create --region scxbxxzx network admin http://controller1:9696

openstack user create --domain default --password-prompt cinder

echo ######## create volume service and endpoint
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "DFTCIAAS Block Storage" volume
openstack service create --name cinderv2 --description "DFTCIAAS Block Storage" volumev2
openstack endpoint create --region scxbxxzx volume public http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volume internal http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volume admin http://controller1:8776/v1/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
openstack endpoint create --region scxbxxzx volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s

 

 

 

3. glance

#### Install dependency packages

yum install openstack-glance

 

 

#### Edit the configuration files

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glancemanager

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glancemanager

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

 

 

### Sync the database

su -s /bin/sh -c "glance-manage db_sync" glance

### Start the services

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
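To verify glance end to end, upload a small test image (a sketch; the CirrOS URL is the usual public download location and may change):

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list    # the new image should show status "active"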

 

4. nova

4.1 Controller node

# Install dependency packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

# Edit the configuration file

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata

 

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:DB_PASS@controller1/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:DB_PASS@controller1/nova

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $MYIP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $MYIP

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

 

 

# Sync the databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

# Start the services

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

 

 

4.2 Compute nodes

# Install dependency packages

yum install openstack-nova-compute

# Edit the configuration file

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $MYIP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $MYIP
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller1:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
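The line above hardcodes software emulation. A quick check from the standard install guide: if the command below returns 1 or more, the host supports hardware acceleration and virt_type can stay at the default kvm for much better performance:

egrep -c '(vmx|svm)' /proc/cpuinfo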

 

# Start the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
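Back on the controller, confirm the compute node registered (a sketch):

openstack compute service list    # nova-compute should appear for each compute host with state "up"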

 

5. neutron

5.1 Controller node

# Install dependency packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

 

 

# Edit neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:DB_PASS@controller1/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name scxbxxzx
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password novamanager

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

 

## Edit ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

 

 

 

## Edit linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em3

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

 

 

## Edit l3_agent.ini

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge `echo ' '`

 

 

## Edit dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

 

 

## Edit metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller1
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret metadatamanager

 

 

## Edit nova.conf so that nova uses the networking service

openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metadatamanager

 

 

# Create the plugin symlink

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

 

# Start the services

systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-l3-agent.service
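A quick check that all agents came up (a sketch, using the Mitaka-era client):

neutron agent-list    # linuxbridge, dhcp, metadata, and l3 agents should all report alive ":-)"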

 

 

 

5.2 Compute nodes

## Install dependency packages

yum install openstack-neutron-linuxbridge ebtables ipset

 

 

## Edit neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

 

 

## Edit linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em4

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

 

openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager

 

 

# Start the services

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

 

 

6. dashboard

## Install dependency packages

yum install openstack-dashboard

## Edit /etc/openstack-dashboard/local_settings and change the following

 

OPENSTACK_HOST = "controller1"

ALLOWED_HOSTS = ['*', ]

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

    'default': {

        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

        'LOCATION': 'controller1:11211',

    }

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

    "identity": 3,

    "p_w_picpath": 2,

    "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

    ...

    'enable_router': False,

    'enable_quotas': False,

    'enable_distributed_router': False,

    'enable_ha_router': False,

    'enable_lb': False,

    'enable_firewall': False,

    'enable_vpn': False,

    'enable_fip_topology_check': False,

}

TIME_ZONE = "Asia/Chongqing"

 

7. cinder

## Edit the configuration file

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $MYIP

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:DB_PASS@controller1/cinder

openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cindermanager
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password dftcpass
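The section above stops at configuration. A sketch of the usual follow-up steps from the Mitaka guide (sync the cinder database, then start the services; openstack-cinder-volume runs on whichever node serves volumes):

su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service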

 


 

8. ceph

### clean all old ceph configuration files and packages

ceph-deploy purge controller1 compute1 compute2 compute3

ceph-deploy purgedata controller1 compute1 compute2 compute3

ceph-deploy forgetkeys

ssh compute1 sudo rm -rf /osd/osd0/*
ssh compute2 sudo rm -rf /osd/osd1/*
ssh compute3 sudo rm -rf /osd/osd2/*

 

 

### install a new ceph cluster

su - dftc
mkdir cluster
cd cluster

# initialize the mon node

ceph-deploy new controller1

## edit the configuration file

echo "osd pool default size = 2" >> ceph.conf
echo "public network = 172.16.16.0/24" >> ceph.conf
echo "cluster network = 172.16.17.0/24" >> ceph.conf

 

## Install ceph on the nodes

###  ceph.x86_64 1:10.2.5-0.el7            ceph-base.x86_64 1:10.2.5-0.el7
###  ceph-common.x86_64 1:10.2.5-0.el7     ceph-mds.x86_64 1:10.2.5-0.el7
###  ceph-mon.x86_64 1:10.2.5-0.el7        ceph-osd.x86_64 1:10.2.5-0.el7
###  ceph-radosgw.x86_64 1:10.2.5-0.el7    ceph-selinux.x86_64 1:10.2.5-0.el7

ceph-deploy install controller1 compute1 compute2 compute3

## Initialize ceph-mon

ceph-deploy mon create-initial

 

########### error message

[compute3][DEBUG ] detect platform information from remote host
[compute3][DEBUG ] detect machine type
[compute3][DEBUG ] find the location of an executable
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] compute1
[ceph_deploy.mon][ERROR ] compute3
[ceph_deploy.mon][ERROR ] compute2

 

######## resolution

Copied the remote configuration file to localhost and compared the two files; the contents were identical, so we proceeded to the next step.

 

 

 

## Initialize the OSDs

ceph-deploy osd prepare compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2

ceph-deploy osd activate compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2

ceph-deploy admin controller1 compute1 compute2 compute3

chmod +r /etc/ceph/ceph.client.admin.keyring
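The cephx rules below reference pools that must already exist. A minimal sketch of creating them (the PG count of 128 is an assumption; size it to your OSD count):

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128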

 

 

####

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

 

 

 

####

ceph auth get-or-create client.glance | ssh controller1 sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controller1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

ceph auth get-or-create client.cinder | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

ceph auth get-or-create client.cinder-backup | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

 

### Run the commands below on the controller node #########################

ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute2 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute3 tee client.cinder.key

 

### Run as the dftc user on each compute node ################

cat > secret.xml << EOF
<secret ephemeral='no' private='no'>
  <uuid>c2ad36f3-f184-48b3-81c3-49411cc6566f</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml

sudo virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg== && rm client.cinder.key secret.xml
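To confirm the secret is registered on each compute node (a sketch):

sudo virsh secret-list    # c2ad36f3-f184-48b3-81c3-49411cc6566f should be listed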

 

 

######

virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg==

 

 

##### OLD VERSION

openstack-config --set /etc/glance/glance-api.conf DEFAULT default_store rbd

##### NEW VERSION

openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd

openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
openstack-config --set /etc/glance/glance-api.conf glance_store stores rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

 

## Image properties

### Recommended image properties:

###  hw_scsi_model=virtio-scsi: add a virtio-scsi controller for better performance and discard support;
###  hw_disk_bus=scsi: attach all cinder block devices to this controller;
###  hw_qemu_guest_agent=yes: enable the QEMU guest agent;
###  os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent.
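A sketch of applying these properties to an existing image (assumes the cirros image uploaded earlier):

openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes cirros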

 

 

 

 

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2

openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f

 

 

openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_user cinder-backup
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_chunk_size 134217728
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_pool backups
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_unit 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_count 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT restore_discard_excess_bytes true

 

openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f    # must match the virsh secret created above
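After repointing glance, cinder, and nova at ceph, restart the affected services (a sketch; service placement as in the sections above):

systemctl restart openstack-glance-api.service openstack-glance-registry.service    # controller
systemctl restart openstack-cinder-api.service openstack-cinder-volume.service      # cinder nodes
systemctl restart openstack-nova-compute.service                                    # each compute node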

 

 

 

 

 

###############

 

[client]
        rbd cache = true
        rbd cache writethrough until flush = true
        admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/qemu/qemu-guest-$pid.log
        rbd concurrent management ops = 20
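The admin socket and log file paths above must exist and be writable by qemu; a sketch from the ceph-with-OpenStack docs (the group name varies by distro):

mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/    # group may be libvirtd on some systems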

 

 

 

 

 

 

 

 

 

 

 


 

Open and Closed Issues

Note: Add open issues that you identify while writing or reviewing this document to the open issues section. As you resolve issues, move them to the closed issues section and keep the issue ID the same. Include an explanation of the resolution.

When this deliverable is complete, any open issues should be transferred to the project- or process-level Risk and Issue Log (PJM.CR.040) and managed using a project-level Risk and Issue Form (PJM.CR.040). In addition, the open items should remain in the open issues section of this deliverable, but flagged in the resolution column as being transferred.

 

Open Issues

 

ID: 001

Issue: Implement DVR

Resolution: None

Tips: once openstack-openvswitch handles east-west traffic, DVR can be implemented;

 

ID: 002

Issue: Implement HA

Resolution:

Tips: use keepalived to provide a virtual IP, and haproxy for load balancing and port forwarding;

 

ID: 003

Issue: When the glance module uses the virtual IP, the port is unreachable and images cannot be uploaded; nova and neutron have the same problem

Resolution:

None yet.

 

...

Closed Issues

 

ID: 001

Issue: The keystone database needed to be reset

Resolution:

#### clear the old database and old data ########

mysql -uroot -p**** -e "drop database keystone;"
mysql -uroot -p**** -e "create database keystone;"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p**** -e "create database glance;"

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

### sync the database and set up the fernet keys #######

su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

ID: 002

Issue: Using the CLI for one module reports "auth failed"!

Resolution:

Reset all of the module's users, services, and endpoints: recreate the user and add it to admin, then rebuild the module's service and its endpoints.

 

 

ID: 003

Issue: VNC consoles cannot be opened

Resolution:

Run on the compute node:

MYIP=`ip add show em1|grep inet|head -1|awk '{print $2}'|awk -F'/' '{print $1}'`

openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://$MYIP:6080/
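Then restart the compute service so the new base URL takes effect (a follow-up assumption, using the service names from earlier sections):

systemctl restart openstack-nova-compute.service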

 

ID: 004

Issue: With local storage, the glance module cannot upload images

Resolution:

Check whether port 9292 is up and reachable via telnet. The port was not coming up; re-checking the configuration file showed that the "virt_type" option cannot be used when integrating with ceph, since ceph itself uses the rbd format to uniformly tag and manage all objects;

 

ID: 005

Issue: Creating a VM fails; the UI reports that the connection to http://controller:9696 failed

Resolution:

Port 9696 was up and working; the actual local hostname is controller1. Update /etc/nova/nova.conf:

[neutron]

url = http://controller1:9696 (a correct, resolvable hostname);

 

ID: 006

Issue: glance-api reports the service as running, but the port only appears once every 10 seconds and cannot be connected to; the api log shows no errors, and systemctl status throws a python exception ERROR: Store for schema file not found

Resolution:

During the ceph integration, default_store belongs under [DEFAULT] in older versions but under [glance_store] in newer versions; after moving it, everything works:

default_store = rbd

 

 

ID: 007

Issue: Starting openstack-nova-compute.service never completes; it hangs indefinitely

Resolution: Checked the configuration file: the service could not reach the message queue. An earlier file update had left the wrong rabbitmq port in place; after correcting it to port 5672, the start succeeded.

 

ID: 008

Issue: openstack-nova-api.service fails to start with the error: ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN

Resolution:

The rabbitmq configuration was wrong: the username was misspelled. After correcting it, the service started normally.

 

 

ID: 009

Issue: The packages that ceph-deploy depends on cannot be installed:

Processing Dependency: python-distribute for package: ceph-deploy-1.5.34-0.noarch
Package python-setuptools-0.9.8-4.el7.noarch is obsoleted by python2-setuptools-22.0.5-1.el7.noarch which is already installed
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.34-0.noarch (ceph-noarch) Requires: python-distribute Available: python-setuptools-0.9.8-4.el7.noarch (base) python-distribute = 0.9.8-4.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Package conflict:

$ rpm -qa|grep setuptools
python2-setuptools-22.0.5-1.el7.noarch

Uninstall it.

Resolution:

Install with pip instead:

yum install python-pip
pip install ceph-deploy

 

ID: 010

Issue: After configuring the dashboard, the UI cannot be accessed

Resolution:

memcached could not bind to the hostname's port!

 

ID: 011

Issue: The dashboard keeps throwing error messages.
Clicking around the openstack dashboard pops up error prompts in the upper-right corner, which disappear on the next refresh.

Resolution:

After MySQL is installed, the default maximum number of connections is 100, which is far from enough once there is even moderate traffic.

1. Edit the mariadb configuration file and raise the maximum connections to 1500:

echo "max_connections=1500" >> /etc/my.cnf.d/server.cnf

2. Restart the database:

service mariadb restart
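A quick check that the new limit took effect (a sketch):

mysql -uroot -p**** -e "show variables like 'max_connections';"    # should report 1500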

...

