CentOS 7.4 installation and deployment openstack [Liberty version]

Posted by edikasim81 on Mon, 27 Apr 2020 07:21:58 +0200

Following on from the previous post, CentOS 7.4 installation and deployment of OpenStack [Liberty version] (I), this part covers the following components:

1, Add block device storage service

1. Service Description:

The OpenStack Block Storage service provides block storage for instances. How storage is allocated and consumed is determined by the block storage driver, or by multiple drivers in a multi-backend configuration. Many drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and so on. The Block Storage API and scheduler services usually run on the controller node. Depending on the driver used, the volume service can run on the controller node, on compute nodes, or on a standalone storage node.
The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides the infrastructure for managing volumes and interacts with the OpenStack Compute service to provide volumes for instances. The service also enables management of volume snapshots and volume types.
Block storage services typically include the following components:
cinder-api
Accepts API requests and routes them to cinder-volume for execution.
cinder-volume
Interacts directly with the Block Storage service and with processes such as cinder-scheduler. It also interacts with these processes through a message queue. The cinder-volume service responds to read and write requests sent to the Block Storage service to maintain state. It can interact with a variety of storage providers through a driver architecture.
cinder-scheduler daemon
Selects the optimal storage provider node on which to create the volume. It is similar in concept to the nova-scheduler component.
cinder-backup daemon
The cinder-backup service provides backup of volumes of any type to a backup storage provider. Like the cinder-volume service, it can interact with a variety of storage providers through a driver architecture.
Message queue
Routes information between the Block Storage processes.

2. Deployment requirements: the database, service credentials, and API endpoints must be created before the Block Storage service is installed and configured.

[root@controller ~]#mysql -u root -p123456       #Create the database and grant access rights
MariaDB [(none)]>CREATE DATABASE cinder;
MariaDB [(none)]>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]>GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]>\q
[root@controller ~]#. admin-openrc
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:    #The password is:123456
Repeat User Password:
[root@controller ~]#openstack role add --project service --user cinder admin         #Add the admin role to the cinder user. There is no output after this command is executed.
[root@controller ~]#openstack service create --name cinder  --description "OpenStack Block Storage" volume       #Create cinder and cinderv2 service entities. Block device storage service requires two service entities.
[root@controller ~]#openstack service create --name cinderv2  --description "OpenStack Block Storage" volumev2
[root@controller ~]#openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s     #Create the API entry point of the block device storage service. Each service entity of the block device storage service needs an endpoint. 
[root@controller ~]#openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
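If you want to confirm that the service entities and endpoints were registered, a quick check (assuming admin-openrc is still sourced) might look like this:

[root@controller ~]# openstack service list | grep -E "volume|volumev2"   #Both cinder service entities should be listed
[root@controller ~]# openstack endpoint list | grep volume   #Six endpoints should be listed: public, internal and admin for v1 and v2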

3. Service installation

Control node:

[root@controller ~]#yum install -y openstack-cinder python-cinderclient
[root@controller ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf #Edit /etc/cinder/cinder.conf; the non-blank, non-comment lines should end up as follows
[DEFAULT] 
rpc_backend = rabbit #Configure RabbitMQ message queue access
auth_strategy = keystone #Configure authentication service access
my_ip = 192.168.1.101 #Configure my IP to use the IP address of the management interface of the control node
verbose = True #Enable detailed logging
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder #Configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken] #Configure Identity service access; comment out or delete any other options in [keystone_authtoken].
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp #Configure lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]  #Configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[root@controller ~]#su -s /bin/sh -c "cinder-manage db sync" cinder #Initializing the database for the block device service
[root@controller ~]# grep -A 1  "\[cinder\]" /etc/nova/nova.conf  #Configure Compute to use Block Storage: edit /etc/nova/nova.conf and add the following
[cinder]
os_region_name = RegionOne
[root@controller ~]# systemctl restart openstack-nova-api.service
[root@controller ~]#systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service 
[root@controller ~]#systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
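Optionally, you can check at this point that the API and scheduler came up on the controller, for example:

[root@controller ~]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service | grep Active   #Both services should be active (running)
[root@controller ~]# ss -tnlp | grep 8776   #cinder-api should be listening on port 8776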

Storage nodes:

[root@block1 ~]# yum install lvm2 -y
[root@block1 ~]# systemctl enable lvm2-lvmetad.service
[root@block1 ~]# systemctl start lvm2-lvmetad.service
[root@block1 ~]#pvcreate /dev/sdb #Create the LVM physical volume /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@block1 ~]#vgcreate cinder-volumes /dev/sdb #Create the LVM volume group cinder-volumes; the Block Storage service creates logical volumes in this volume group
Volume group "cinder-volumes" successfully created
[root@block1 ~]# vim /etc/lvm/lvm.conf  #Edit /etc/lvm/lvm.conf: in the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices
devices {
filter = [ "a/sda/", "a/sdb/", "r/.*/"] #If the storage node also uses LVM on the operating system disk (here /dev/sda), that device must also be accepted by the filter
}
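After editing the filter, you can confirm that LVM on the storage node still sees the intended devices and that the cinder-volumes volume group is intact, for example:

[root@block1 ~]# pvs   #/dev/sdb should appear as a physical volume belonging to the cinder-volumes volume group
[root@block1 ~]# vgs cinder-volumes   #The volume group created above should be listed with its free space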
[root@block1 ~]# yum install openstack-cinder targetcli python-oslo-policy -y
[root@block1 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf 
[DEFAULT]
rpc_backend = rabbit #Configure RabbitMQ message queue access
auth_strategy = keystone #Configure authentication service access
my_ip = 192.168.1.103 #IP address of the management network interface on the storage node
enabled_backends = lvm #Enable LVM backend
glance_host = controller #Configure the location of the mirror service
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[cors]
[cors.subdomain]
[database]
connection = mysql://cinder:123456@controller/cinder #Configure database access
[fc-zone-manager]
[keymgr]
[keystone_authtoken]  #Configure Identity service access; comment out or delete any other options in [keystone_authtoken].
auth_uri = http://controller:5000 
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp #Configure lock path
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit] #Configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[profiler]
[lvm]  #Configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
[root@block1 ~]# systemctl enable openstack-cinder-volume.service target.service
[root@block1 ~]# systemctl start openstack-cinder-volume.service target.service

verification:

[root@controller ~]#source admin-openrc.sh
[root@controller ~]#cinder service-list #List service components to verify that each process started successfully
+------------------+------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
| cinder-volume    | block1@lvm | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
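As a further smoke test, you can create a small volume with the demo credentials and confirm that cinder-volume on block1 brings it to the available state (the volume name test-vol below is just an example):

[root@controller ~]# source demo-openrc.sh
[root@controller ~]# cinder create --display-name test-vol 1   #Create a 1 GB test volume
[root@controller ~]# cinder list   #The volume should reach the "available" status after a few seconds
[root@controller ~]# cinder delete test-vol   #Remove the test volume again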

2, Add object storage service

1. Service Brief

The OpenStack Object Storage service (swift) provides object storage and retrieval through a REST API. Before deploying Object Storage, your environment must include at least the Identity service (keystone).
OpenStack Object Storage is a highly scalable, multi-tenant object storage system. It can manage large amounts of unstructured data at low cost through a RESTful HTTP API.

It contains the following components:
Proxy server (swift-proxy-server)
Accepts OpenStack Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It can also serve file and container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcached.
Account server (swift-account-server)
Manages accounts defined with Object Storage.
Container server (swift-container-server)
Manages the mapping of containers (folders) within Object Storage.
Object server (swift-object-server)
Manages the actual objects, such as files, on the storage nodes.
Various periodic processes
Perform housekeeping tasks on the large data store. The replication services ensure consistency and availability throughout the cluster. Other periodic processes include auditors, updaters, and reapers.
WSGI middleware
Handles authentication, usually with the OpenStack Identity service.
swift client
Enables users to submit commands to the REST API through a command-line client; authorized users can act as admin, reseller, or swift users.
swift-init
Script that initializes the building of the ring files, takes daemon names as parameters, and offers commands. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-services.
swift-recon
A command-line tool used to retrieve various metrics and telemetry information about a cluster that has been collected by the swift-recon middleware.
swift-ring-builder
Storage ring build and rebalance utility. Documented at http://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings.

2. Deployment requirements: before configuring the object storage service, service credentials and API endpoints must be created.

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt swift #Create swift user
User Password:   #The password is:123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user swift admin #Add admin role to swift user
[root@controller ~]#openstack service create --name swift --description "OpenStack Object Storage" object-store #Create swift service entity
[root@controller ~]#openstack endpoint create --region RegionOne  object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s #Create an object storage service API endpoint
[root@controller ~]#openstack endpoint create --region RegionOne  object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
[root@controller ~]#openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1

3. Service installation

Control node:

[root@controller ~]#yum install -y openstack-swift-proxy python-swiftclient  python-keystoneclient python-keystonemiddleware  memcached
[root@controller ~]# vim /etc/swift/proxy-server.conf         #The default configuration file varies by distribution; you may need to add these sections and options rather than modify existing ones!!!
[DEFAULT]             #In the [DEFAULT] section, configure the binding port, user and configuration directory
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]            #In the [pipeline:main] section, enable the appropriate module
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
[app:proxy-server]          #In the [app:proxy-server] section, enable automatic account creation
use = egg:swift#proxy
account_autocreate = true
[filter:keystoneauth]      #In the [filter:keystoneauth] section, configure the operator roles
use = egg:swift#keystoneauth
operator_roles = admin,user
[filter:authtoken]          #In the [filter:authtoken] section, configure authentication service access
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = true
[filter:cache]  #In the [filter:cache] section, configure the memcached location
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211
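Since the [filter:cache] section points at memcached on 127.0.0.1:11211, a quick sanity check of the saved settings can be done as follows (the proxy itself is only started later, after the rings have been built):

[root@controller ~]# grep -E "^pipeline|^memcache_servers" /etc/swift/proxy-server.conf   #Confirm the pipeline and cache settings were saved as intended
[root@controller ~]# ss -tnlp | grep 11211   #Once memcached has been started (see the proxy service startup below), it should be listening on 127.0.0.1:11211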

Storage nodes: (perform these steps on each storage node)

[root@object1 ~]#yum install xfsprogs rsync -y  #Install the supporting utility packages
[root@object1 ~]#mkfs.xfs /dev/sdb #Format the /dev/sdb and /dev/sdc devices as XFS
[root@object1 ~]#mkfs.xfs /dev/sdc
[root@object1 ~]#mkdir -p /srv/node/sdb #Create mount point directory structure
[root@object1 ~]#mkdir -p /srv/node/sdc
[root@object1 ~]#tail -2 /etc/fstab  #Edit the /etc/fstab file to include the following
/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
[root@object1 ~]#mount /srv/node/sdb #Mount device
[root@object1 ~]#mount /srv/node/sdc
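A quick way to confirm that both XFS file systems are mounted where the object services expect them:

[root@object1 ~]# df -hT /srv/node/sdb /srv/node/sdc   #Both devices should show up as xfs file systems mounted under /srv/node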
[root@object1 ~]#cat /etc/rsyncd.conf #Edit the /etc/rsyncd.conf file to contain the following
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.104  #Local network management interface

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
[root@object1 ~]#systemctl enable rsyncd.service
[root@object1 ~]# systemctl start rsyncd.service
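Because rsync runs here as an unauthenticated daemon, you can list its modules from the node itself to confirm the configuration was picked up:

[root@object1 ~]# rsync 192.168.1.104::   #Should list the account, container and object modules defined in /etc/rsyncd.conf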
[root@object1 ~]# yum install openstack-swift-account openstack-swift-container  openstack-swift-object -y 
[root@object1 ~]#vim /etc/swift/account-server.conf
[DEFAULT] #In the [DEFAULT] section, configure the binding IP address, binding port, user, configuration directory and mount directory
bind_ip = 192.168.1.104
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] #In the [pipeline:main] section, enable the appropriate module
pipeline = healthcheck recon account-server
[filter:recon] #In the [filter:recon] section, configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]# vim /etc/swift/container-server.conf
[DEFAULT] #In the [DEFAULT] section, configure the binding IP address, binding port, user, configuration directory and mount directory
bind_ip = 192.168.1.104
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] #In the [pipeline:main] section, enable the appropriate module
pipeline = healthcheck recon container-server
[filter:recon] #In the [filter:recon] section, configure the recon (meters) cache directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[root@object1 ~]#vim /etc/swift/object-server.conf
[DEFAULT] #In the [DEFAULT] section, configure the binding IP address, binding port, user, configuration directory and mount directory
bind_ip = 192.168.1.104
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main] #In the [pipeline:main] section, enable the appropriate module
pipeline = healthcheck recon object-server
[filter:recon] #In the [filter:recon] section, configure the recon (meters) cache directory and lock file directory
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[root@object1 ~]#chown -R swift:swift /srv/node
[root@object1 ~]#restorecon -R /srv/node 
[root@object1 ~]#mkdir -p /var/cache/swift
[root@object1 ~]#chown -R root:swift /var/cache/swift
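You can verify the ownership changes before moving on, since wrong permissions on /srv/node are a common cause of storage service errors later:

[root@object1 ~]# ls -ld /srv/node/sdb /srv/node/sdc /var/cache/swift   #The device directories should be owned by swift:swift and /var/cache/swift by root:swift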

Create and distribute initialization rings

Control node:

[root@controller ~]# cd /etc/swift/
[root@controller swift]# swift-ring-builder account.builder create 10 3 1 #Create account.builder file
[root@controller swift]# swift-ring-builder account.builder add  --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdb --weight 100 #Add each node to the ring
[root@controller swift]# swift-ring-builder account.builder add  --region 1 --zone 1 --ip 192.168.1.104 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder add  --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdb --weight 100
[root@controller swift]# swift-ring-builder account.builder add  --region 1 --zone 1 --ip 192.168.1.105 --port 6002 --device sdc --weight 100
[root@controller swift]# swift-ring-builder account.builder #Verify ring content
account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1   192.168.1.104  6002   192.168.1.104              6002       sdb 100.00        768    0.00 
             1       1     1   192.168.1.104  6002   192.168.1.104              6002       sdc 100.00        768    0.00 
             2       1     1   192.168.1.105  6002   192.168.1.105              6002       sdb 100.00        768    0.00 
             3       1     1   192.168.1.105  6002   192.168.1.105              6002       sdc 100.00        768    0.00 
[root@controller swift]# swift-ring-builder account.builder rebalance #Balancing ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]#  swift-ring-builder container.builder create 10 3 1 #Create container.builder file
[root@controller swift]#  swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdb --weight 100 #Add each node to ring
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdb --weight 100
[root@controller swift]# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6001 --device sdc --weight 100
[root@controller swift]# swift-ring-builder container.builder #Verify ring content
container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1   192.168.1.104  6001   192.168.1.104              6001       sdb 100.00        768    0.00 
             1       1     1   192.168.1.104  6001   192.168.1.104              6001       sdc 100.00        768    0.00 
             2       1     1   192.168.1.105  6001   192.168.1.105              6001       sdb 100.00        768    0.00 
             3       1     1   192.168.1.105  6001   192.168.1.105              6001       sdc 100.00        768    0.00 
[root@controller swift]# swift-ring-builder container.builder rebalance #Balancing ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]# swift-ring-builder object.builder create 10 3 1 #Create object.builder file
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdb --weight 100 #Add each node to ring
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.104 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdb --weight 100
[root@controller swift]# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.1.105 --port 6000 --device sdc --weight 100
[root@controller swift]# swift-ring-builder object.builder #Verify ring content
object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
             0       1     1   192.168.1.105  6000   192.168.1.105              6000       sdb 100.00        768    0.00 
             1       1     1   192.168.1.105  6000   192.168.1.105              6000       sdc 100.00        768    0.00 
             2       1     1   192.168.1.104  6000   192.168.1.104              6000       sdb 100.00        768    0.00 
             3       1     1   192.168.1.104  6000   192.168.1.104              6000       sdc 100.00        768    0.00 
[root@controller swift]# swift-ring-builder object.builder rebalance #Balancing ring
Reassigned 1024 (100.00%) partitions. Balance is now 0.00.  Dispersion is now 0.00
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.104:/etc/swift/ #Copy the account.ring.gz, container.ring.gz and object.ring.gz files to the /etc/swift directory on each storage node and on any additional nodes running the proxy service
[root@controller swift]# scp account.ring.gz container.ring.gz object.ring.gz 192.168.1.105:/etc/swift/
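If you want to confirm the ring files landed on each storage node, a quick remote listing (over the same SSH access used for scp) is enough:

[root@controller swift]# ssh 192.168.1.104 "ls -l /etc/swift/*.ring.gz"   #All three ring files should be present on every storage node
[root@controller swift]# ssh 192.168.1.105 "ls -l /etc/swift/*.ring.gz"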
[root@controller swift]# vim /etc/swift/swift.conf  #Edit the /etc/swift/swift.conf file and complete the following
[swift-hash] #In the [swift-hash] section, configure the hash path prefix and suffix for your environment. These values should be kept secret and must not be changed or lost.
swift_hash_path_suffix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
swift_hash_path_prefix = 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
[storage-policy:0] #In the [storage-policy:0] section, configure the default storage policy
name = Policy-0
default = yes
[root@controller swift]# chown -R root:swift /etc/swift
[root@controller swift]#  systemctl enable openstack-swift-proxy.service memcached.service #On the controller node and on any other nodes running the proxy service, start the Object Storage proxy service and its dependencies, and configure them to start when the system boots
[root@controller swift]# systemctl start openstack-swift-proxy.service memcached.service
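Because the healthcheck middleware is enabled in the proxy pipeline above, the simplest liveness test of the proxy is an HTTP request to its healthcheck endpoint:

[root@controller swift]# curl http://controller:8080/healthcheck   #Should return "OK" if the proxy service is up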

Storage nodes:

[root@object1 ~]#systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]#systemctl enable openstack-swift-container.service  openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
[root@object1 ~]# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service  openstack-swift-object-replicator.service openstack-swift-object-updater.service
[root@object1 ~]#systemctl start openstack-swift-account.service openstack-swift-account-auditor.service  openstack-swift-account-reaper.service openstack-swift-account-replicator.service
[root@object1 ~]#systemctl start openstack-swift-container.service  openstack-swift-container-auditor.service openstack-swift-container-replicator.service  openstack-swift-container-updater.service
[root@object1 ~]#systemctl start openstack-swift-object.service openstack-swift-object-auditor.service  openstack-swift-object-replicator.service openstack-swift-object-updater.service

Verify operation:

Control node:

[root@controller swift]#cd
[root@controller ~]# echo "export OS_AUTH_VERSION=3" | tee -a admin-openrc.sh demo-openrc.sh #Configure the object storage service client to use the version 3 authentication API
[root@controller ~]# swift stat #Show service status
        Account: AUTH_444fce5db34546a7907af45df36d6e99
     Containers: 0
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1518798659.41272
    X-Timestamp: 1518798659.41272
     X-Trans-Id: tx304f1ed71c194b1f90dd2-005a870740
   Content-Type: text/plain; charset=utf-8             
[root@controller ~]#  swift upload container1 demo-openrc.sh #Upload a test file
demo-openrc.sh
[root@controller ~]# swift list #List containers
container1
[root@controller ~]# swift download container1 demo-openrc.sh #Download a test file
demo-openrc.sh [auth 0.295s, headers 0.339s, total 0.339s, 0.005 MB/s]

3, Add orchestration service

1. Service Brief

The Orchestration service provides template-based orchestration for describing a cloud application by running OpenStack API calls to generate running cloud applications. The software integrates other core components of OpenStack into a single-file template system. The templates allow you to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users. It also provides advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. This enables OpenStack core projects to reach a larger user base.
The service allows deployers to integrate with the Orchestration service directly or through custom plug-ins.
The orchestration service consists of the following components:
The heat command-line client
A CLI that communicates with heat-api to run AWS CloudFormation APIs. End developers can also use the Orchestration REST API directly.
heat-api component
An OpenStack-native REST API that processes API requests by sending them to heat-engine over remote procedure call (RPC).
heat-api-cfn component
An AWS Query API that is compatible with AWS CloudFormation; it processes API requests by sending them to heat-engine over RPC.
heat-engine
Orchestrates the launching of templates and provides events back to the API consumer.

2. Deployment requirements: before installing and configuring the Orchestration service, you must create a database, service credentials, and API endpoints. Orchestration also requires additional information in the Identity service.

On the control node:

[root@controller ~]# mysql -u root -p123456  #Create database and set permissions
MariaDB [(none)]>CREATE DATABASE heat;
MariaDB [(none)]>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost'  IDENTIFIED BY '123456';
MariaDB [(none)]>GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%'  IDENTIFIED BY '123456';
MariaDB [(none)]>\q
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password-prompt heat #Create the heat user
User Password:            #The password is:123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user heat admin #Add admin role to the heat user
[root@controller ~]# openstack service create --name heat  --description "Orchestration" orchestration #Create the heat and heat-cfn service entities
[root@controller ~]# openstack service create --name heat-cfn --description "Orchestration"  cloudformation
[root@controller ~]# openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s #Create API endpoint for Orchestration service
[root@controller ~]# openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
[root@controller ~]# openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
[root@controller ~]# openstack domain create --description "Stack projects and users" heat #Create the heat domain that contains projects and users for stacks
[root@controller ~]# openstack user create --domain heat --password-prompt heat_domain_admin #Create the heat_domain_admin user to manage projects and users in the heat domain
User Password:   #The password is:123456
Repeat User Password:
[root@controller ~]# openstack role add --domain heat --user heat_domain_admin admin #Add the admin role to the heat_domain_admin user in the heat domain to enable administrative stack management by that user
[root@controller ~]# openstack role create heat_stack_owner #Create the heat_stack_owner role
[root@controller ~]# openstack role add --project demo --user demo heat_stack_owner #Add the heat_stack_owner role to the demo project and user to enable stack management by the demo user
[root@controller ~]# openstack role create heat_stack_user #Create the heat_stack_user role. The Orchestration service automatically assigns this role to users it creates during stack deployment. By default, the role restricts API operations. To avoid conflicts, do not add this role to users that already have the heat_stack_owner role.
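Optionally, verify that the heat domain and the stack-related roles were created as expected before moving on:

[root@controller ~]# openstack domain list | grep heat   #The heat domain should be listed
[root@controller ~]# openstack role list | grep heat_stack   #heat_stack_owner and heat_stack_user should both exist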

3. Service deployment

Control node:

[root@controller ~]# yum install -y openstack-heat-api openstack-heat-api-cfn  openstack-heat-engine python-heatclient
[root@controller ~]# vim /etc/heat/heat.conf #Edit the /etc/heat/heat.conf file and complete the following
[database]
connection = mysql://heat:123456@controller/heat #Configure database access
[DEFAULT]
rpc_backend = rabbit #Configure RabbitMQ message queue access
heat_metadata_server_url = http://controller:8000 #Configure metadata and wait condition URLs
heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
stack_domain_admin = heat_domain_admin #Configure stack domain and management credentials
stack_domain_admin_password = 123456
stack_user_domain_name = heat
verbose = True #(Optional) Enable verbose logging
[oslo_messaging_rabbit] #Configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] #Configure authentication service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = heat
password = 123456
[trustee] #Configure authentication service access
auth_plugin = password
auth_url = http://controller:35357
username = heat
password = 123456
user_domain_id = default
[clients_keystone] #Configure authentication service access
auth_uri = http://controller:5000
[ec2authtoken] #Configure authentication service access
auth_uri = http://controller:5000/v3
[root@controller ~]# su -s /bin/sh -c "heat-manage db_sync" heat  #Synchronize Orchestration database
[root@controller ~]# systemctl enable openstack-heat-api.service  openstack-heat-api-cfn.service openstack-heat-engine.service
[root@controller ~]#systemctl start openstack-heat-api.service  openstack-heat-api-cfn.service openstack-heat-engine.service 

Verify operation

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# heat service-list  #The output should show four heat-engine components on the controller node
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| hostname   | binary      | engine_id                            | host       | topic  | updated_at                 | status |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
| controller | heat-engine | 0d26b5d3-ec8a-44ad-9003-b2be72ccfaa7 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | 587b87e2-9e91-4cac-a8b2-53f51898a9c5 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | 8891e45b-beda-49b2-bfc7-29642f072eac | controller | engine | 2017-02-16T11:59:41.000000 | up     |
| controller | heat-engine | b0ef7bbb-cfb9-4000-a214-db9049b12a25 | controller | engine | 2017-02-16T11:59:41.000000 | up     |
+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+
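To go one step further than heat service-list, you can launch a tiny stack that needs no images or networks. The template below only creates an OS::Heat::RandomString resource, so it exercises the heat API and engine without touching Nova or Neutron (the file name /tmp/test-stack.yaml and the stack name teststack are just examples):

[root@controller ~]# source demo-openrc.sh   #The demo user was given the heat_stack_owner role above
[root@controller ~]# cat > /tmp/test-stack.yaml << 'EOF'
heat_template_version: 2015-10-15
description: Minimal smoke-test stack
resources:
  random_str:
    type: OS::Heat::RandomString
    properties:
      length: 8
outputs:
  random_value:
    value: { get_attr: [random_str, value] }
EOF
[root@controller ~]# heat stack-create -f /tmp/test-stack.yaml teststack   #Create the test stack
[root@controller ~]# heat stack-list   #The stack should reach CREATE_COMPLETE within a few seconds
[root@controller ~]# heat stack-delete teststack   #Clean up the test stack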

4, Add telemetry service

1. Service overview

The Telemetry service provides the following functions:
1. Efficiently polls metering data related to OpenStack services.
2. Collects event and metering data by monitoring notifications sent from the various services.
3. Publishes the collected data to a variety of targets, including data stores and message queues.
The Telemetry service includes the following components:
Compute agent (ceilometer-agent-compute)
Runs on each compute node and polls for resource utilization statistics. There may be other types of agents in the future, but for now the community focuses on the compute agent.
Central agent (ceilometer-agent-central)
Runs on a central management server to poll for resource utilization statistics for resources that are not tied to instances or compute nodes. Multiple agents can be started to scale the service horizontally.
Notification agent (ceilometer-agent-notification)
Runs on one or more central management servers and consumes messages from the message queue(s) to build event and metering data.
Collector (ceilometer-collector, responsible for persisting the received data)
Runs on one or more central management servers and dispatches collected telemetry data to a data store or to external consumers without modification.
API server (ceilometer-api)
Runs on one or more central management servers to provide data access from the data store.
Alarming service
The Telemetry alarming service triggers alarms when the collected metering or event data breaks defined rules.

The metering alarm service consists of the following components:
API server (aodh-api)
Runs on one or more central management servers to provide access to the alarm information stored in the data store.
Alarm evaluator (aodh-evaluator)
Runs on one or more central management servers to determine when alarms fire because the associated statistic trend crosses a threshold over a sliding time window.
Notification listener (aodh-listener)
Runs on a central management server and fires alarms based on predefined rules against events, which are captured by the notification agents of the Telemetry data collection service.
Alarm notifier (aodh-notifier)
Runs on one or more central management servers to allow alarms to be set based on the threshold evaluation for a collection of samples.

These services use the OpenStack message bus to communicate, and only collectors and API services can access the data store.

2. Deployment requirements: before installing and configuring the Telemetry service, you must create a database, service credentials, and API endpoints. However, unlike other services, the Telemetry service uses a NoSQL database (MongoDB).

Control node:

[root@controller ~]#  yum install -y mongodb-server mongodb
[root@controller ~]# vim /etc/mongod.conf #Edit the /etc/mongod.conf file and modify or add the following
bind_ip = 192.168.1.101
smallfiles = true #By default, MongoDB creates several 1 GB journal files in /var/lib/mongodb/journal. To reduce each journal file to 128 MB and limit the total journal space to 512 MB, set smallfiles to true
[root@controller ~]# systemctl enable mongod.service
[root@controller ~]# systemctl start mongod.service
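Before creating the ceilometer database, it is worth checking that mongod actually bound to the management address configured above:

[root@controller ~]# ss -tnlp | grep 27017   #mongod should be listening on 192.168.1.101:27017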
[root@controller ~]#  mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.createUser({user: "ceilometer",pwd: "123456",roles: [ "readWrite", "dbAdmin" ]})' #Create ceilometer database
MongoDB shell version: 2.6.12
connecting to: controller:27017/test
Successfully added user: { "user" : "ceilometer", "roles" : [ "readWrite", "dbAdmin" ] }
[root@controller ~]#  source admin-openrc.sh
[root@controller ~]#  openstack user create --domain default --password-prompt ceilometer #Create ceilometer user
User Password:    #The password is:123456
Repeat User Password:
[root@controller ~]# openstack role add --project service --user ceilometer admin #Add admin role to ceilometer user
[root@controller ~]#  openstack service create --name ceilometer --description "Telemetry" metering #Create ceilometer service entity
[root@controller ~]# openstack endpoint create --region RegionOne metering public http://controller:8777 #Create Telemetry service API endpoint
[root@controller ~]# openstack endpoint create --region RegionOne metering internal http://controller:8777
[root@controller ~]# openstack endpoint create --region RegionOne metering admin http://controller:8777

3. Service deployment

Control node:

[root@controller ~]# yum install openstack-ceilometer-api openstack-ceilometer-collector openstack-ceilometer-notification openstack-ceilometer-central openstack-ceilometer-alarm python-ceilometerclient -y
[root@controller ~]# vim /etc/ceilometer/ceilometer.conf    #Edit /etc/ceilometer/ceilometer.conf and modify or add the following
[DEFAULT]
rpc_backend = rabbit #Configure RabbitMQ message queue access
auth_strategy = keystone #Configure authentication service access
verbose = True
[oslo_messaging_rabbit] #Configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] #Configure authentication service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials] #Configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]# systemctl enable openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service
[root@controller ~]# systemctl start  openstack-ceilometer-api.service openstack-ceilometer-notification.service openstack-ceilometer-central.service openstack-ceilometer-collector.service openstack-ceilometer-alarm-evaluator.service openstack-ceilometer-alarm-notifier.service

4. Enable image service metering

[root@controller ~]# vim /etc/glance/glance-api.conf  #Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf, and modify or add the following in each file
[DEFAULT]   #Configure notifications and RabbitMQ message queue access
notification_driver = messagingv2
rpc_backend = rabbit
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[root@controller ~]# systemctl restart openstack-glance-api.service openstack-glance-registry.service #Restart the image service

5. Enable compute service metering

[root@controller ~]#  yum install -y openstack-ceilometer-compute python-ceilometerclient python-pecan
[root@controller ~]#  vim /etc/ceilometer/ceilometer.conf #Edit /etc/ceilometer/ceilometer.conf and add or modify the following
[DEFAULT]
rpc_backend = rabbit #Configure RabbitMQ message queue access
auth_strategy = keystone #Configure authentication service access
verbose = True
[oslo_messaging_rabbit] #Configure RabbitMQ message queue access
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = 123456
[keystone_authtoken] #Configure authentication service access
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = ceilometer
password = 123456
[service_credentials] #Configure service credentials
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = 123456
os_endpoint_type = internalURL
os_region_name = RegionOne
[root@controller ~]#vim /etc/nova/nova.conf #Edit the /etc/nova/nova.conf file and add or modify the following
[DEFAULT]
instance_usage_audit = True #Configure notifications
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = messagingv2
[root@controller ~]# systemctl enable openstack-ceilometer-compute.service #Start agent and configure boot
[root@controller ~]# systemctl start openstack-ceilometer-compute.service
[root@controller ~]# systemctl restart openstack-nova-compute.service #Restart computing service

6. Enable block storage metering

Perform these steps on the control node and the block storage node

[root@controller ~]# vim /etc/cinder/cinder.conf #Edit /etc/cinder/cinder.conf and complete the following
[DEFAULT]
notification_driver = messagingv2
[root@controller ~]# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service # Restart the Block Storage services on the control node!!!
On the storage node:
[root@block1 ~]#  systemctl restart openstack-cinder-volume.service #Restart the block device storage service on the storage node!!!
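Once the volume service has been restarted with notifications enabled, creating a throwaway volume should produce volume meters; a quick check (reusing the example volume name test-vol from earlier) might look like this:

[root@controller ~]# source demo-openrc.sh
[root@controller ~]# cinder create --display-name test-vol 1   #Create a throwaway 1 GB volume to generate notifications
[root@controller ~]# ceilometer meter-list | grep volume   #The volume and volume.size meters should appear after a short delay
[root@controller ~]# cinder delete test-vol   #Clean up the test volume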

7. Enable object storage metering

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack role create ResellerAdmin
[root@controller ~]# openstack role add --project service --user ceilometer ResellerAdmin
[root@controller ~]# yum install -y python-ceilometermiddleware
[root@controller ~]# vim /etc/swift/proxy-server.conf   #Edit the /etc/swift/proxy-server.conf file and add or modify the following
[filter:keystoneauth]
operator_roles = admin, user, ResellerAdmin #Add ResellerAdmin role
[pipeline:main] #Add ceilometer
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging ceilometer proxy-server
[filter:ceilometer] #Configure notifications
paste.filter_factory = ceilometermiddleware.swift:filter_factory
control_exchange = swift
url = rabbit://openstack:123456@controller:5672/
driver = messagingv2
topic = notifications
log_level = WARN
[root@controller ~]# systemctl restart openstack-swift-proxy.service  #Restart the agent service of the object store
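To confirm the middleware is emitting samples, upload something through the proxy and then look for the storage.objects meters (container1 was already created during the Object Storage verification above; any small local file will do):

[root@controller ~]# swift upload container1 admin-openrc.sh   #This request passes through the ceilometer middleware
[root@controller ~]# ceilometer meter-list | grep storage.objects   #Meters such as storage.objects.incoming.bytes should appear shortly afterwards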

8. Verification

Execute on control node

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# ceilometer meter-list |grep  image #List available meters, filter image services
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| Name                            | Type       | Unit      | Resource ID                                                           | User ID                          | Project ID                       |
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
| image                           | gauge      | image     | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | None                             | b1d045eb3d62421592616d56a69c4de3 |
| image.size                      | gauge      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | None                             | 
+---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
[root@controller ~]# glance image-list | grep 'cirros' | awk '{ print $2 }' #Get the ID of the CirrOS image from the Image service; the image is downloaded in the next command
68259f9f-c5c1-4975-9323-cef301cedb2b
[root@controller ~]# glance image-download 68259f9f-c5c1-4975-9323-cef301cedb2b > /tmp/cirros.img
[root@controller ~]# ceilometer meter-list|grep image #List the available meters again to validate detection of the image download
| image                           | gauge      | image     | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.download                  | delta      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.serve                     | delta      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
| image.size                      | gauge      | B         | 68259f9f-c5c1-4975-9323-cef301cedb2b                                  | 7bafc586c1f442c6b4c92f42ba90efd4 | b1d045eb3d62421592616d56a69c4de3 |
[root@controller ~]# ceilometer statistics -m image.download -p 60 #Retrieve usage statistics for the image.download meter
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| Period | Period Start               | Period End                 | Max        | Min        | Avg        | Sum        | Count | Duration | Duration Start             | Duration End               |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
| 60     | 2018-02-16T12:47:46.351000 | 2018-02-16T12:48:46.351000 | 13287936.0 | 13287936.0 | 13287936.0 | 13287936.0 | 1     | 0.0      | 2018-02-16T12:48:23.052000 | 2018-02-16T12:48:23.052000 |
+--------+----------------------------+----------------------------+------------+------------+------------+------------+-------+----------+----------------------------+----------------------------+
[root@controller ~]# ll  /tmp/cirros.img #Check that the size of the downloaded image file matches the meter value above
-rw-r--r-- 1 root root 13287936 Feb 16 20:48 /tmp/cirros.img

Topics: Swift OpenStack Database vim