This is an original article by Chen Gang, a Suning network architect.
01 prepare the test machine
My 16 GB laptop couldn't handle this, so I put together a game-studio-grade machine: dual E5-2680 v3 CPUs (24 cores / 48 threads), 128 GB of DDR4 ECC memory, and a 512 GB NVMe disk. On it I run five VMs that stand in for physical servers:
· 192.16.35.110 deployer
· 192.16.35.111 TF controller
· 192.16.35.112 OpenStack server, also a compute node
· 192.16.35.113 k8s master
· 192.16.35.114 k8s node k01, also an OpenStack compute node
Pulling the box image directly with Vagrant is very slow, so download it first:
https://cloud.centos.org/centos/7/vagrant/x86_64/images/
Download the corresponding VirtualBox .box file.
Then register the box with Vagrant:
vagrant box add centos/7 CentOS-7-x86_64-Vagrant-2004_01.VirtualBox.box
cat << EEOOFF > vagrantfile
### start
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 2.0.3"

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
ENV["LC_ALL"] = "en_US.UTF-8"
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "geerlingguy/centos7"
  # config.vbguest.auto_update = false
  # config.vbguest.no_remote = true

  config.vm.define "deployer" do |dp|
    dp.vm.provider "virtualbox" do |v|
      v.memory = "8000"
      v.cpus = 2
    end
    dp.vm.network "private_network", ip: "192.16.35.110", auto_config: true
    dp.vm.hostname = "deployer"
  end

  config.vm.define "tf" do |tf|
    tf.vm.provider "virtualbox" do |v|
      v.memory = "64000"
      v.cpus = 16
    end
    tf.vm.network "private_network", ip: "192.16.35.111", auto_config: true
    tf.vm.hostname = "tf"
  end

  config.vm.define "ops" do |os|
    os.vm.provider "virtualbox" do |v|
      v.memory = "16000"
      v.cpus = 4
    end
    os.vm.network "private_network", ip: "192.16.35.112", auto_config: true
    os.vm.hostname = "ops"
  end

  config.vm.define "k8s" do |k8|
    k8.vm.provider "virtualbox" do |v|
      v.memory = "8000"
      v.cpus = 2
    end
    k8.vm.network "private_network", ip: "192.16.35.113", auto_config: true
    k8.vm.hostname = "k8s"
  end

  config.vm.define "k01" do |k1|
    k1.vm.provider "virtualbox" do |v|
      v.memory = "4000"
      v.cpus = 2
    end
    k1.vm.network "private_network", ip: "192.16.35.114", auto_config: true
    k1.vm.hostname = "k01"
  end

  config.vm.provision "shell", privileged: true, path: "./setup.sh"
end
EEOOFF

cat << EEOOFF > setup.sh
#!/bin/bash
#
# Setup vagrant vms.
#
set -eu

# Copy hosts info
cat <<EOF > /etc/hosts
127.0.0.1 localhost
127.0.1.1 vagrant.vm vagrant
192.16.35.110 deployer
192.16.35.111 tf
192.16.35.112 ops
192.16.35.113 k8s
192.16.35.114 k01
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# swapoff -a && sysctl -w vm.swappiness=0

# setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

# modprobe ip_vs_rr
modprobe br_netfilter

yum -y update
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
# sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# yum install -y bridge-utils.x86_64
# modprobe bridge
# modprobe br_netfilter

# Setup system vars
yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim chrony python python-setuptools python-pip iproute lrzsz tree git
yum install -y libguestfs-tools libvirt-python virt-install libvirt ansible

pip install wheel --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install ansible netaddr --upgrade -i https://mirrors.aliyun.com/pypi/simple/
# python-urllib3 should be installed before "pip install requests"
# if install failed, pip uninstall urllib3, then reinstall python-urllib3
# pip uninstall -y urllib3 | true
# yum install -y python-urllib3
pip install requests -i https://mirrors.aliyun.com/pypi/simple/

systemctl disable libvirtd.service
systemctl disable dnsmasq
systemctl stop libvirtd.service
systemctl stop dnsmasq

if [ -d "/root/.ssh" ]; then
  rm -rf /root/.ssh
fi
ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
chmod go-rwx ~/.ssh/authorized_keys

# timedatectl set-timezone Asia/Shanghai
if [ -f "/etc/chrony.conf" ]; then
  mv /etc/chrony.conf /etc/chrony.conf.bak
fi
cat <<EOF > /etc/chrony.conf
allow 192.16.35.0/24
server ntp1.aliyun.com iburst
local stratum 10
logdir /var/log/chrony
rtcsync
makestep 1.0 3
driftfile /var/lib/chrony/drift
EOF
systemctl restart chronyd.service
systemctl enable chronyd.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

if [ ! -d "/var/log/journal" ]; then
  mkdir /var/log/journal
fi
if [ ! -d "/etc/systemd/journald.conf.d" ]; then
  mkdir /etc/systemd/journald.conf.d
fi
cat <<EOF > /etc/systemd/journald.conf.d/99-prophet.conf
[Journal]
Storage=persistent
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
SystemMaxUse=10G
SystemMaxFileSize=200M
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
EEOOFF
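With the vagrantfile and setup.sh in place, bring the five VMs up. A quick sanity check using the standard Vagrant CLI (nothing here is specific to this lab):

vagrant box list     # the box registered earlier should be listed
vagrant up           # boots all five VMs and runs setup.sh on each
vagrant status       # every machine should report "running"
vagrant ssh deployer # spot-check one node, then exit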
02 install docker on all nodes
On CentOS, if pip package installation is slow, consider using Aliyun's PyPI mirror to speed it up.
· Set the pip mirror on each node:
mkdir -p ~/.pip && tee ~/.pip/pip.conf <<-'EOF'
[global]
trusted-host = mirrors.aliyun.com
index-url = https://mirrors.aliyun.com/pypi/simple
EOF
Note that requests cannot be installed on top of an already-installed urllib3, or an error will occur; uninstall urllib3 and chardet first, then install requests:
pip uninstall urllib3
pip uninstall chardet
pip install requests
(These steps should already have been handled in setup.sh.)
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools iproute lrzsz tree git
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y docker-ce
yum -y install epel-release
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
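Before moving on, it's worth confirming Docker is healthy on each node. These are plain Docker commands, not specific to this deployment:

docker --version
systemctl status docker       # should be active (running)
docker run --rm hello-world   # verifies the daemon can pull and run containers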
03 pull and start the contrail-kolla-ansible-deployer container
Nightly builds of the container are available on Docker Hub:
https://hub.docker.com/r/opencontrailnightly/contrail-kolla-ansible-deployer/tags
For example:
vim /etc/docker/daemon.json
{
  "registry-mirrors" : [
    "https://hub-mirror.c.163.com",
    "https://registry.docker-cn.com"
  ]
}
systemctl restart docker

export CAD_IMAGE=opencontrailnightly/contrail-kolla-ansible-deployer:master-latest
docker run -td --net host --name contrail_kolla_ansible_deployer $CAD_IMAGE
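If the pull and run succeeded, the deployer container should be visible and running:

docker ps --filter name=contrail_kolla_ansible_deployer   # STATUS should be "Up"
docker logs contrail_kolla_ansible_deployer | tail        # no obvious errors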
04 copy the configuration file into the container
instances.yaml: the template file used to configure the Tungsten Fabric cluster.
https://github.com/Juniper/contrail-ansible-deployer/wiki/Contrail-with-Openstack-Kolla#13-configure-necessary-parameters-configinstancesyaml-under-appropriate-parameters
For information on how to configure all the parameters available in this file, read here:
https://github.com/Juniper/contrail-ansible-deployer/blob/master/README.md#configuration
cat << EOF > instances.yaml
provider_config:
  bms:
    ssh_pwd: vagrant
    ssh_user: root
    ntpserver: ntp1.aliyun.com
    domainsuffix: local
instances:
  tf:
    provider: bms
    ip: 192.16.35.111
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  ops:
    provider: bms
    ip: 192.16.35.112
    roles:
      openstack:
      openstack_compute:
      vrouter:
        PHYSICAL_INTERFACE: enp0s8
  k8s:
    provider: bms
    ip: 192.16.35.113
    roles:
      k8s_master:
      k8s_node:
      kubemanager:
      vrouter:
        PHYSICAL_INTERFACE: enp0s8
  k01:
    provider: bms
    ip: 192.16.35.114
    roles:
      openstack_compute:
      k8s_node:
      vrouter:
        PHYSICAL_INTERFACE: enp0s8
contrail_configuration:
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
  KEYSTONE_AUTH_ADMIN_PASSWORD: vagrant
  CLOUD_ORCHESTRATOR: openstack
  CONTRAIL_VERSION: latest
  UPGRADE_KERNEL: true
  ENCAP_PRIORITY: "VXLAN,MPLSoUDP,MPLSoGRE"
  PHYSICAL_INTERFACE: enp0s8
global_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
kolla_config:
  kolla_globals:
    enable_haproxy: no
    enable_ironic: "no"
    enable_swift: "no"
    network_interface: "enp0s8"
  kolla_passwords:
    keystone_admin_password: vagrant
EOF

export INSTANCES_FILE=instances.yaml
docker cp $INSTANCES_FILE contrail_kolla_ansible_deployer:/root/contrail-ansible-deployer/config/instances.yaml
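Before running any playbooks, verify the file actually landed in the container and parses as YAML. The parse check assumes the deployer image ships Python with PyYAML, which it should, since it runs Ansible:

docker exec contrail_kolla_ansible_deployer cat /root/contrail-ansible-deployer/config/instances.yaml
docker exec contrail_kolla_ansible_deployer python -c "import yaml; yaml.safe_load(open('/root/contrail-ansible-deployer/config/instances.yaml'))"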
05 prepare the environment for all nodes
I ran these steps on every node except the deployer.
The normal approach is to run your own registry to hold all the images. With only a few nodes in this lab environment, though, downloading directly from domestic mirrors is fast enough.
Note that the docker and docker-py pip packages conflict, and only one of them can be installed. It is best to uninstall both first and then install just one:
pip uninstall docker-py docker
pip install docker
yum -y install python-devel python-subprocess32 python-setuptools python-pip
pip install --upgrade pip
find / -name '*subpro*.egg-info'
find / -name '*subpro*.egg-info' | xargs rm -rf
pip install -I six
pip install -I docker-compose
Change the k8s yum repository to Alibaba's mirror; the default Google source is too slow or unreachable:

vi playbooks/roles/k8s/tasks/RedHat.yml

yum_repository:
  name: Kubernetes
  description: k8s repo
  baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  repo_gpgcheck: yes
  gpgcheck: yes
when: k8s_package_version is defined
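To sanity-check the Aliyun mirror before the playbook runs, you can drop an equivalent repo file on a node by hand and query it. This manual file is only a test of my own; the yum_repository task above creates the real one:

cat << EOF > /etc/yum.repos.d/kubernetes-test.repo
[kubernetes-test]
name=Kubernetes (Aliyun mirror, manual test)
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
gpgcheck=0
EOF
yum --disablerepo='*' --enablerepo=kubernetes-test list available 'kube*'
rm -f /etc/yum.repos.d/kubernetes-test.repo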
Installing these images through the playbook requires access to overseas sites. Pull them from a domestic mirror instead, then retag them:
k8s.gcr.io/kube-apiserver:v1.14.8
k8s.gcr.io/kube-controller-manager:v1.14.8
k8s.gcr.io/kube-scheduler:v1.14.8
k8s.gcr.io/kube-proxy:v1.14.8
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
Pull the equivalents from Aliyun instead:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
Then retag the downloaded images:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.8 k8s.gcr.io/kube-apiserver:v1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.8 k8s.gcr.io/kube-controller-manager:v1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.8 k8s.gcr.io/kube-scheduler:v1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.8 k8s.gcr.io/kube-proxy:v1.14.8
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag docker.io/coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
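The pull/tag pairs are mechanical, so a short loop can handle all of the google_containers images in one pass. A sketch, using the same image list as above:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.14.8 kube-controller-manager:v1.14.8 \
           kube-scheduler:v1.14.8 kube-proxy:v1.14.8 \
           pause:3.1 etcd:3.3.10 kubernetes-dashboard-amd64:v1.8.3; do
    docker pull ${MIRROR}/${img}
    docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
done
# coredns is published under its own Docker Hub namespace
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1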
06 start the deployer container and enter it for deployment
docker start contrail_kolla_ansible_deployer
Enter the deployer container:
docker exec -it contrail_kolla_ansible_deployer bash
cd /root/contrail-ansible-deployer
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/provision_instances.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/configure_instances.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_k8s.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
kubectl taint nodes k8s node-role.kubernetes.io/master-
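When the playbooks finish, a few checks show whether the control plane came up. I'm assuming the contrail-status tool was installed on the TF controller by the deployment, which is the usual result:

kubectl get nodes -o wide    # on the k8s master: nodes should be Ready
docker ps | grep contrail    # on the TF controller: the contrail containers should be Up
contrail-status              # on the TF controller: components should show "active"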
The last run upgraded kubelet to the latest version and hit a CSI bug; modify the configuration file and restart kubelet:
I hit the same issue; edit /var/lib/kubelet/config.yaml to add:

featureGates:
  CSIMigration: false
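Then restart the service and confirm it stays up:

systemctl restart kubelet
systemctl status kubelet   # should be active (running), with no CSI errors in the log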
07 after installation, create two VMs and containers to test
yum install -y gcc python-devel
pip install python-openstackclient
pip install python-ironicclient
source /etc/kolla/kolla-toolbox/admin-openrc.sh
If the openstack command fails with the following "queue" error, python3 is required:
File "/usr/lib/python2.7/site-packages/openstack/utils.py", line 13, in <module> import queue ImportError: No module named queue
rm -f /usr/bin/python
ln -s /usr/bin/python3 /usr/bin/python
pip install python-openstackclient
pip install python-ironicclient
yum install -y python3-pip
yum install -y gcc python-devel wget
pip install --upgrade setuptools
pip install --ignore-installed python-openstackclient
# I need python3 every time anyway, so I just installed these:
pip3 install python-openstackclient -i https://mirrors.aliyun.com/pypi/simple/
pip3 install python-ironicclient -i https://mirrors.aliyun.com/pypi/simple/
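A quick check that the interpreter switch took effect and the client imports cleanly:

python --version    # should now print Python 3.x
openstack --version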
Open the Tungsten Fabric UI in a browser: https://192.16.35.111:8143
Open the OpenStack dashboard in a browser: https://192.16.35.112
On k8s master (192.16.35.113):
scp root@192.16.35.114:/opt/cni/bin/contrail-k8s-cni /opt/cni/bin/
mkdir /etc/cni/net.d
scp root@192.16.35.114:/etc/cni/net.d/10-contrail.conf /etc/cni/net.d/10-contrail.conf
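With the CNI binary and config copied over, the master should register as Ready:

kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide   # system pods should be Running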
Download a CirrOS test image; any one of these sources works:

wget https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img

Official download address: https://download.cirros-cloud.net/

curl -O https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
wget http://download.cirros-cloud.net/daily/20161201/cirros-d161201-x86_64-disk.img
(I found no build that includes tcpdump.)
reboot
source /etc/kolla/kolla-toolbox/admin-openrc.sh
openstack image create cirros --disk-format qcow2 --public --container-format bare --file cirros-0.4.0-x86_64-disk.img
nova flavor-create m1.tiny auto 512 1 1
openstack network create net1
openstack subnet create --subnet-range 10.1.1.0/24 --network net1 mysubnet1
NET_ID=`openstack network list | grep net1 | awk -F '|' '{print $2}' | tr -d ' '`
nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM1
nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM2
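Both instances should reach ACTIVE and pick up an address from mysubnet1:

openstack server list                      # or: nova list
openstack server show VM1 | grep addresses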
Back on the k8s master, 192.16.35.113:
yum install -y git
git clone https://github.com/virtualhops/k8s-demo
kubectl create -f k8s-demo/po-ubuntuapp.yml
kubectl create -f k8s-demo/rc-frontend.yml
kubectl expose rc/frontend
kubectl exec -it ubuntuapp curl frontend # run this many times
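To confirm the demo pods landed on the Contrail-managed pod network:

kubectl get pods -o wide   # ubuntuapp and the frontend pods, each with a pod IP
kubectl get svc frontend   # the ClusterIP created by "kubectl expose"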
Reference:
https://github.com/Juniper/contrail-ansible-deployer/wiki/%5B-Container-Workflow%5D-Deploying-Contrail-with-OpenStack