TF in practice: installing Tungsten Fabric with Vagrant

Posted by prasanthmj on Fri, 19 Jun 2020 04:27:47 +0200

This is an original article by Chen Gang, a network architect at Suning.

01 prepare the test machine

A 16 GB laptop couldn't run it, so I simply put together a game-studio-grade machine: dual E5-2860v3 CPUs (24 cores / 48 threads), 128 GB DDR4 ECC memory, and a 512 GB NVMe disk. On it run five VMs pretending to be physical servers:

· deployer

· TF controller

· openstack server, which is also a compute node

· k8s master

· k8s node k01, which is also an openstack compute node

Pulling the box image directly with vagrant is very slow. Download it first:

Download the corresponding box file.

Then register the box with vagrant:

vagrant box add centos/7

cat << EEOOFF > vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.require_version ">=2.0.3"

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

ENV["LC_ALL"] = "en_US.UTF-8"


Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at

  # Every Vagrant development environment requires a box. You can search for
  # boxes at
  config.vm.box = "geerlingguy/centos7"
  # config.vbguest.auto_update = false
  # config.vbguest.no_remote = true

  config.vm.define "deployer" do |dp|
    dp.vm.provider "virtualbox" do |v|
      v.memory = "8000"
      v.cpus = 2
    end
    dp.vm.network "private_network", ip: "", auto_config: true
    dp.vm.hostname = "deployer"
  end

  config.vm.define "tf" do |tf|
    tf.vm.provider "virtualbox" do |v|
      v.memory = "64000"
      v.cpus = 16
    end
    tf.vm.network "private_network", ip: "", auto_config: true
    tf.vm.hostname = "tf"
  end

  config.vm.define "ops" do |os|
    os.vm.provider "virtualbox" do |v|
      v.memory = "16000"
      v.cpus = 4
    end
    os.vm.network "private_network", ip: "", auto_config: true
    os.vm.hostname = "ops"
  end

  config.vm.define "k8s" do |k8|
    k8.vm.provider "virtualbox" do |v|
      v.memory = "8000"
      v.cpus = 2
    end
    k8.vm.network "private_network", ip: "", auto_config: true
    k8.vm.hostname = "k8s"
  end

  config.vm.define "k01" do |k1|
    k1.vm.provider "virtualbox" do |v|
      v.memory = "4000"
      v.cpus = 2
    end
    k1.vm.network "private_network", ip: "", auto_config: true
    k1.vm.hostname = "k01"
  end

  config.vm.provision "shell", privileged: true, path: "./"
end
EEOOFF

cat << EEOOFF >
#Setup vagrant vms.

set -eu

#Copy hosts info
cat <<EOF > /etc/hosts
localhost vagrant.vm vagrant
deployer
tf
ops
k8s
k01

#The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab
#swapoff -a && sysctl -w vm.swappiness=0

#setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  

#modprobe ip_vs_rr
modprobe br_netfilter

yum -y update

#sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
#sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
#yum install -y bridge-utils.x86_64
#modprobe bridge
#modprobe br_netfilter
#Setup system vars

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools vim chrony python python-setuptools python-pip iproute lrzsz tree git

yum install -y libguestfs-tools libvirt-python virt-install libvirt ansible

pip install wheel --upgrade -i
pip install pip --upgrade -i
pip install ansible  netaddr --upgrade -i

#python-urllib3 should be installed before "pip install requests"
#if install failed, pip uninstall urllib3, then reinstall python-urllib3
#pip uninstall -y urllib3 | true
#yum install -y python-urllib3 
pip install requests -i

systemctl disable libvirtd.service
systemctl disable dnsmasq
systemctl stop libvirtd.service
systemctl stop dnsmasq

if [ -d "/root/.ssh" ]; then
      rm -rf /root/.ssh
fi

ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa

cat ~/.ssh/ > ~/.ssh/authorized_keys
chmod go-rwx ~/.ssh/authorized_keys

#timedatectl set-timezone Asia/Shanghai

if [ -f "/etc/chrony.conf" ]; then
   mv /etc/chrony.conf /etc/chrony.conf.bak
fi

cat <<EOF > /etc/chrony.conf
server iburst
local stratum 10
logdir /var/log/chrony
makestep 1.0 3
driftfile /var/lib/chrony/drift
EOF
systemctl restart chronyd.service
systemctl enable chronyd.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

if [ ! -d "/var/log/journal" ]; then
  mkdir /var/log/journal
fi

if [ ! -d "/etc/systemd/journald.conf.d" ]; then
  mkdir /etc/systemd/journald.conf.d
fi

cat <<EOF > /etc/systemd/journald.conf.d/99-prophet.conf

EOF

systemctl restart systemd-journald
EEOOFF
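The sed expression the script uses to comment out swap entries in /etc/fstab can be sanity-checked on a sample fstab line (the device name below is illustrative):

```shell
# run the same substitution the provisioning script uses; any line
# mentioning swap gets a '#' prefixed (sed's '&' is the whole match)
line='/dev/mapper/centos-swap swap swap defaults 0 0'
echo "$line" | sed 's/.*swap.*/#&/'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```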


02 install docker on all nodes


For example, if pip package installation is slow, consider using the Aliyun mirror to speed pip up.

· Set up pip acceleration on each node

mkdir ~/.pip && tee ~/.pip/pip.conf <<-'EOF'
[global]
trusted-host = mirrors.aliyun.com
index-url =
EOF

Note that the requests package cannot be installed after urllib3, or an error will occur; remove the conflicting packages first and reinstall:

pip uninstall urllib3
pip uninstall chardet
pip install requests

(These commands should have been implemented in the provisioning script.)

yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools iproute lrzsz tree git
yum-config-manager   --add-repo
yum makecache fast
yum install -y docker-ce
yum -y install epel-release
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd

03 pull and start the contrail-kolla-ansible-deployer container

Nightly builds of the container are available on Docker Hub.

For example:

vim /etc/docker/daemon.json
{ "registry-mirrors" : [ "",
    "" ] }
systemctl restart docker

export CAD_IMAGE=opencontrailnightly/contrail-kolla-ansible-deployer:master-latest
docker run -td --net host --name contrail_kolla_ansible_deployer $CAD_IMAGE

04 copy the configuration file into the container

instances.yaml: the template file used to configure the Tungsten Fabric cluster.

For information on how to configure all the parameters available in this file, read here:

cat << EOF > instances.yaml
provider_config:
  bms:
    ssh_pwd: vagrant
    ssh_user: root
    ntpserver:
    domainsuffix: local
instances:
  tf:
    provider: bms
    ip:
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  ops:
    provider: bms
    ip:
    roles:
        PHYSICAL_INTERFACE: enp0s8
  k8s:
    provider: bms
    ip:
    roles:
        PHYSICAL_INTERFACE: enp0s8
  k01:
    provider: bms
    ip:
    roles:
        PHYSICAL_INTERFACE: enp0s8
contrail_configuration:
  AUTH_MODE: keystone
  KEYSTONE_AUTH_URL_VERSION: /v3
  CONTAINER_REGISTRY: opencontrailnightly
kolla_config:
  kolla_globals:
    enable_haproxy: no
    enable_ironic: "no"
    enable_swift: "no"
    network_interface: "enp0s8"
  kolla_passwords:
    keystone_admin_password: vagrant
EOF

export INSTANCES_FILE=instances.yaml
docker cp $INSTANCES_FILE contrail_kolla_ansible_deployer:/root/contrail-ansible-deployer/config/instances.yaml

05 prepare the environment for all nodes

I did this on every node except the deployer.

The normal way is to build your own registry to store all of the images. With only a few nodes in this lab environment, pulling directly from domestic mirrors is also fast enough.

Note that the docker and docker-py Python packages conflict; only one of them can be installed. It is best to uninstall both first and then install the one you need:

pip uninstall docker-py docker
pip install docker

yum -y install python-devel python-subprocess32 python-setuptools python-pip

pip install --upgrade pip

find / -name '*subpro*.egg-info'
find / -name '*subpro*.egg-info' | xargs rm -rf

pip install -I six
pip install -I docker-compose
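One caveat about the find commands above: an unquoted glob like *subpro*.egg-info is expanded by the shell first if any matching file sits in the current directory, so quoting it is safer. A self-contained sketch, demonstrated on a scratch directory rather than /:

```shell
# demonstrate the quoted-glob cleanup on a scratch directory
tmp=$(mktemp -d)
touch "$tmp/subprocess32-3.2.7-py2.7.egg-info"     # sample leftover metadata file
find "$tmp" -name '*subpro*.egg-info'              # lists the file
find "$tmp" -name '*subpro*.egg-info' | xargs rm -rf
find "$tmp" -name '*subpro*.egg-info' | wc -l      # prints 0 after cleanup
rm -rf "$tmp"
```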

Change the k8s repository to Alibaba's mirror; the default Google source is too slow or unreachable: vi


name: Kubernetes
description: k8s repo
repo_gpgcheck: yes
gpgcheck: yes
when: k8s_package_version is defined

The images the playbook installs are pulled from sites that require overseas access. Take a different approach: download them from a mirror reachable at home, then change the tags.

docker pull
docker pull
docker pull
docker pull
docker pull
docker pull
docker pull coredns/coredns:1.3.1
docker pull

Re-tag the downloaded images:

docker tag
docker tag
docker tag
docker tag
docker tag
docker tag
docker tag
docker tag
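The pull-and-retag step can be scripted. A dry-run sketch, assuming the images come from a mirror prefix; the MIRROR value and image list here are illustrative, not the exact names from the playbook:

```shell
# print the docker tag commands that map mirror names back to the
# upstream names kubeadm expects; drop the 'echo' to run them for real
MIRROR="registry.example.com/mirror"    # hypothetical mirror prefix
for img in kube-apiserver:v1.14.8 kube-controller-manager:v1.14.8 coredns:1.3.1; do
  echo docker tag "${MIRROR}/${img}" "${img}"
done
```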

06 start the deployer container and enter it for deployment

docker start contrail_kolla_ansible_deployer

Enter the deployer container:

docker exec -it contrail_kolla_ansible_deployer bash
cd /root/contrail-ansible-deployer
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/provision_instances.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/configure_instances.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_openstack.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_k8s.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
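The five playbooks must run in exactly this order, and a failure in one makes the later ones pointless. A small wrapper makes that explicit (dry-run sketch: it only prints the commands; remove the echo to execute them inside the deployer container):

```shell
set -e  # abort on the first failing playbook
for pb in provision_instances configure_instances install_openstack \
          install_k8s install_contrail; do
  echo ansible-playbook -i inventory/ -e orchestrator=openstack "playbooks/${pb}.yml"
done
```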

kubectl taint nodes k8s

The last step upgraded kubelet to the latest version and hit a CSI bug; the fix is to modify the configuration file and restart kubelet:

After hitting the same issue, edit /var/lib/kubelet/config.yaml to add:

featureGates:
  CSIMigration: false

07 after installation, create two VMs and containers for testing

yum install -y gcc python-devel
pip install python-openstackclient
pip install python-ironicclient

source /etc/kolla/kolla-toolbox/

If the openstack command fails with the following "queue" error, python3 is required:

File "/usr/lib/python2.7/site-packages/openstack/", line 13, in <module>
    import queue
ImportError: No module named queue
rm -f /usr/bin/python
ln -s /usr/bin/python3 /usr/bin/python
pip install python-openstackclient
pip install python-ironicclient
yum install -y python3-pip
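The queue module only exists under that name in Python 3 (Python 2 calls it Queue), so a quick import test shows whether the interpreter behind /usr/bin/python needs re-pointing:

```shell
# succeeds only on Python 3, where the stdlib module is named 'queue'
if python3 -c 'import queue' 2>/dev/null; then
  echo "python3 provides the queue module"
fi
```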

yum install -y gcc python-devel wget
pip install --upgrade setuptools
pip install --ignore-installed python-openstackclient

# I need python3 every time, so I just installed this:
pip3 install python-openstackclient -i
pip3 install python-ironicclient -i

Open the Tungsten Fabric UI in a browser:

Open the openstack UI in a browser:

On the k8s master:

scp root@ /opt/cni/bin/
mkdir /etc/cni/net.d
scp root@ /etc/cni/net.d/10-contrail.conf
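A CNI config file must be valid JSON or kubelet will refuse to set up pod networking, so it is worth validating the copied file. A sketch, demonstrated on a minimal sample; the sample content is illustrative, not the real contrail CNI configuration, and on the node you would point the check at /etc/cni/net.d/10-contrail.conf:

```shell
# write a minimal sample CNI conf and check that it parses as JSON
conf=$(mktemp)
cat > "$conf" <<'EOF'
{ "cniVersion": "0.3.1", "name": "contrail-k8s-cni", "type": "contrail-k8s-cni" }
EOF
python3 -m json.tool "$conf" > /dev/null && echo "valid JSON"
rm -f "$conf"
```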


Official download address

curl -O



(I could not find a build that includes tcpdump.)


source /etc/kolla/kolla-toolbox/

openstack image create cirros --disk-format qcow2 --public --container-format bare --file cirros-0.4.0-x86_64-disk.img
nova flavor-create m1.tiny auto 512 1 1
openstack network create net1
openstack subnet create --subnet-range --network net1 mysubnet1
NET_ID=`openstack network list | grep net1 | awk -F '|' '{print $2}' | tr -d ' '`
nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM1
nova boot --image cirros --flavor m1.tiny --nic net-id=${NET_ID} VM2
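The NET_ID extraction above just cuts the ID column out of the openstack table output. It can be checked against a sample row (the UUIDs below are made up):

```shell
# a sample row as 'openstack network list' prints it (hypothetical IDs)
sample='| 0a1b2c3d-4e5f-6789-abcd-ef0123456789 | net1 | 9f8e7d6c-5b4a-3210-fedc-ba9876543210 |'
# same pipeline as above: field 2 between the '|' separators, spaces stripped
NET_ID=$(echo "$sample" | grep net1 | awk -F '|' '{print $2}' | tr -d ' ')
echo "$NET_ID"
# prints: 0a1b2c3d-4e5f-6789-abcd-ef0123456789
```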

Enter k8s_master:

yum install -y git
git clone
kubectl create -f k8s-demo/po-ubuntuapp.yml
kubectl create -f k8s-demo/rc-frontend.yml
kubectl expose rc/frontend
kubectl exec -it ubuntuapp curl frontend # run it several times

Reference articles:

Tungsten Fabric in practice: integrating with the vMX virtual routing platform
Tungsten Fabric in practice: deployment based on K8s
TF Q&A: handling the problems you don't understand
TF Q&A in practice: only in this network; deep in the cloud, who knows where

Topics: Docker pip Python yum