Ansible Notes - Batch Server Initialization

Posted by nigelbashford on Sat, 25 Jan 2020 07:29:46 +0100

The initial configuration goals to be achieved in this post are as follows:

  • ansible configures passwordless SSH login;
  • ansible configures hostnames on the remote hosts;
  • ansible has the remote hosts add name-resolution records for each other;
  • ansible configures a yum mirror source on the remote hosts and installs some software;
  • ansible configures time synchronization on the remote hosts;
  • ansible disables selinux on the remote hosts;
  • ansible configures the firewall on the remote hosts;
  • ansible remotely modifies the sshd configuration file and restarts sshd to make it more secure.

1. Hosts to Be Initialized by ansible

[root@nginx ansible]# tail -3 /etc/ansible/hosts   #The initial hosts are as follows
[node]
192.168.20.4
192.168.20.5

2. Configure Passwordless SSH Login

The playbook file is as follows:

[root@nginx ansible]# cat ssh.yaml 
---
- name: configure ssh connection
  hosts: node
  gather_facts: false
  connection: local
  tasks:
    - name: configure ssh connection
      shell: |
        ssh-keyscan {{inventory_hostname}} >>~/.ssh/known_hosts
        sshpass -p '123.com' ssh-copy-id root@{{inventory_hostname}}
...

Note:

  • gather_facts: when false, ansible does not collect facts (node information) from the target hosts. The default is true; gathering facts makes the play noticeably slower, so if no facts are needed in the following tasks, set it to false.
  • connection: local means the tasks are executed locally on the ansible control node. hosts: localhost and connection: local are easy to confuse. Both end up running tasks locally, but hosts: localhost selects the single target localhost from the inventory, whereas connection: local keeps the targets as the hosts in the node group and only forces the connection type to be local. So however many hosts the node group contains, that is how many times the play is executed locally on the ansible side.
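
The difference can be sketched with two minimal plays (a hypothetical illustration; the debug messages are placeholders):

```yaml
# Play 1: hosts: localhost -- runs exactly once, on the control node.
- name: hosts localhost
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "runs a single time on the control node"

# Play 2: connection: local -- still targets every host in the node
# group, but each run is executed on the control node itself.
- name: connection local
  hosts: node
  gather_facts: false
  connection: local
  tasks:
    - debug:
        msg: "runs locally, once for {{ inventory_hostname }}"
```

With two hosts in the node group, the second play's task runs twice, both times on the ansible side.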

3. Configure Hostnames

A hostname can be configured with the shell module, but ansible provides a module dedicated to this job: the hostname module.

Of course, setting the names of multiple hosts with ansible requires a mapping between each target host and its intended name; otherwise there is no way to match the hosts with the corresponding hostnames.

For example, to set the hostnames of the two nodes in the node group to node01 and node02 respectively, the playbook is as follows:

[root@ansible ansible]# cat test.yaml 
---
- name: set hostname
  hosts: node
  gather_facts: false
  vars:
    hostnames:
      - host: 192.168.20.4
        name: node01
      - host: 192.168.20.5
        name: node02
  tasks:
    - name: set hostname
      hostname:
        name: "{{item.name}}"
      when: item.host == inventory_hostname
      loop: "{{hostnames}}"

In the playbook above, the vars, when, and loop directives deserve a closer look.

1) vars: setting variables

The vars directive sets one or more variables. The following forms are all valid:

# Setting a single variable
vars:
  var1: value1

vars:
  - var1: value1

# Setting up multiple variables

vars:
  var1: value1
  var2: value2

vars:
  - var1: value1
  - var2: value2

vars can be set at the play level or at the task level. At the play level, every task in that play can access the variables, but tasks in other plays cannot; at the task level, only that task can access them, while other tasks and other plays cannot.

For example:

[root@ansible ansible]# cat test.yaml 
---
- name: play1
  hosts: localhost
  gather_facts: false
  vars:
    - var1: "value1"
  tasks:
    - name: access var1
      debug:
        msg: "var1's value: {{var1}}"

- name: play2
  hosts: localhost
  gather_facts: false
  tasks:
    - name: can't access var1 from play1
      debug:
        var: var1

    - name: set and access var2 in this task
      debug:
        var: var2
      vars:
        var2: "value2"

    - name: can't access var2
      debug:
        var: var2

The results are as follows:

[root@ansible ansible]# ansible-playbook test.yaml 

PLAY [play1] **************************************************************************

TASK [access var1] ********************************************************************
ok: [localhost] => {
    "msg": "var1's value: value1"
}

PLAY [play2] **************************************************************************

TASK [can't access var1 from play1] ***************************************************
ok: [localhost] => {
    "var1": "VARIABLE IS NOT DEFINED!"
}

TASK [set and access var2 in this task] ***********************************************
ok: [localhost] => {
    "var2": "value2"
}

TASK [can't access var2] **************************************************************
ok: [localhost] => {
    "var2": "VARIABLE IS NOT DEFINED!"
}

PLAY RECAP ****************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Back to the vars directive in the hostname playbook:

  vars:
    hostnames:
      - host: 192.168.20.4
        name: node01
      - host: 192.168.20.5
        name: node02

Only one variable, hostnames, is set above, but its value is an array, and both elements of the array are objects (dictionaries/hashes).

So to access the host name node01 and its IP address 192.168.20.4, you can:

  tasks:
    - debug:
        var: hostnames[0].name
    - debug:
        var: hostnames[0].host

2) when: conditional execution

In ansible, the only general-purpose conditional provided is the when directive: a task executes when the when expression evaluates to true, and is skipped otherwise.

For example:

[root@ansible ansible]# cat test.yaml 
---
- name: play1
  hosts: localhost
  gather_facts: false
  vars:
    - myname: "Ray"
  tasks:
    - name: task will skip
      debug:
        msg: "myname is : {{myname}}"
      when: myname == "lv"

    - name: task will execute
      debug:
        msg: "myname is : {{myname}}"
      when: myname == "Ray"

With myname set to Ray above, the first task is skipped because its condition myname == "lv" evaluates to false; likewise, the second task executes because its condition evaluates to true.

The result of the playbook execution:

PLAY [play1] **************************************************************************

TASK [task will skip] *****************************************************************
skipping: [localhost]

TASK [task will execute] **************************************************************
ok: [localhost] => {
    "msg": "myname is : Ray"
}

PLAY RECAP ****************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0  

4. Add Name-Resolution Records for Each Other

[root@ansible ansible]# cat add_dns.yaml 
---
- name: play1
  hosts: node
  gather_facts: true
  tasks:
    - name: add DNS
      lineinfile:
        path: "/etc/hosts"
        line: "{{item}} {{hostvars[item].ansible_hostname}}"
      when: item != inventory_hostname
      loop: "{{ play_hosts }}"

The results are as follows:

TASK [Gathering Facts] ****************************************************************
ok: [192.168.20.4]
ok: [192.168.20.5]

TASK [add DNS] ************************************************************************
skipping: [192.168.20.4] => (item=192.168.20.4) 
changed: [192.168.20.4] => (item=192.168.20.5)
changed: [192.168.20.5] => (item=192.168.20.4)
skipping: [192.168.20.5] => (item=192.168.20.5) 

5. Configure yum mirror source and install software

The requirements are as follows:

  • Back up the original yum repo files and configure the Tsinghua University mirror as the yum source: the os repo and the epel repo;
  • Install common software, including lrzsz, dos2unix, wget, curl, vim, etc.

The playbook is as follows:

[root@ansible ansible]# cat config_yum.yaml 
- name: config yum repo and install software
  hosts: node
  gather_facts: false
  tasks:
    - name: backup origin yum repos
      shell:
        cmd: "mkdir bak; mv *.repo bak"
        chdir: /etc/yum.repos.d
        creates: /etc/yum.repos.d/bak

    - name: add os repo and epel repo
      yum_repository:
        name: "{{item.name}}"
        description: "{{item.name}} repo"
        baseurl: "{{item.baseurl}}"
        file: "{{item.name}}"
        enabled: 1
        gpgcheck: 0
        reposdir: /etc/yum.repos.d
      loop:
        - name: os
          baseurl: "https://mirrors.tuna.tsinghua.edu.cn/centos/7/os/$basearch"
        - name: epel
          baseurl: "https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch"

    - name: install pkgs
      yum:
        name: lrzsz,vim,dos2unix,wget,curl
        state: present

In the yaml file above, the first task backs up all of the system's default repo files into a bak directory. The chdir parameter makes the shell module switch to the /etc/yum.repos.d directory before running its command, and the creates parameter makes the shell module skip execution when the bak directory already exists.
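
The creates parameter is also a simple way to make a shell task idempotent across re-runs; a minimal sketch (the flag path is a placeholder):

```yaml
- name: run a one-shot command only once
  shell:
    cmd: "touch /tmp/.initialized"   # any one-time setup command
    creates: /tmp/.initialized       # task is skipped if this path already exists
```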

The second task is to configure the yum source using the yum_repository module, which can add or remove the yum source.

The relevant parameters are as follows:

  • name: the repo id, corresponding to [name] in the repo file;
  • description: the repo's description, corresponding to name=xxx in the repo file;
  • baseurl: the URL of the repo;
  • file: the name of the repo file, without the .repo suffix, which is added automatically;
  • reposdir: the directory where the repo file lives, defaulting to /etc/yum.repos.d;
  • enabled: whether the repo is enabled, corresponding to enabled in the repo file;
  • gpgcheck: whether the repo enables gpg checking, corresponding to gpgcheck in the repo file;
  • state: present ensures the repo exists, absent removes it.

A loop is used in the configuration above to add two repos: os and epel.
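
Since state defaults to present, the loop above only adds repos; removing one later uses the same module with state: absent, e.g. for the epel repo (a sketch):

```yaml
- name: remove the epel repo configured earlier
  yum_repository:
    name: epel
    file: epel        # the repo file it was written to
    state: absent
```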

The third task is to install some rpm packages using the yum module, which can update, install, remove, and download packages.

Description of common yum parameters:

  • Name: Specify the package name to operate on
    • Can have version number;
    • It can be a single package name, a list of package names, or a comma separating multiple package names.
    • It can be a url;
    • Can be a local rpm package
  • state:
    • present and installed: Ensure that the package is installed, they are equivalent aliases;
    • Latest: Make sure that the latest version of the package is installed or updated if not;
    • absent and removed: Remove packages, which are equivalent aliases;
  • download_only: Only download non-install packages (supported by ansible 2.7)
  • download_dir: Where is the download package stored (ansible 2.8 is supported)
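
As an example of the last two parameters, a task that only downloads packages for later offline installation might look like this (a sketch; the target directory is a placeholder):

```yaml
- name: download rpms without installing them
  yum:
    name: lrzsz,vim
    download_only: true       # ansible 2.7+
    download_dir: /tmp/rpms   # ansible 2.8+
```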

The yum module is for the RHEL family of package managers and is not available on ubuntu. There you can use a more general module: package, which detects the target node's package manager and uses it to manage software. Using package instead of yum or apt works most of the time, but note that some package names differ across operating systems.
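
A sketch of the install task from above rewritten with the distribution-agnostic package module:

```yaml
- name: install common tools via the native package manager
  package:
    name:
      - wget
      - curl
      - vim
    state: present
```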

6. Time Synchronization

Ensuring time synchronization can avoid many cryptic issues, especially for nodes in a cluster.

An ntp time server is often used for this. Here the aliyun public time server is used to synchronize the clock, and the synchronized time is then written to the hardware clock.

The playbook file is as follows:

---
- name: sync time
  hosts: node
  gather_facts: false
  tasks:
    - name: install and sync time
      block:
        - name: install ntpdate
          yum:
            name: ntpdate
            state: present
        - name: ntpdate to sync time
          shell: |
            ntpdate ntp1.aliyun.com
            hwclock -w

The above uses a block directive to organize two related tasks as a whole. Blocks are more often used for exception handling across multiple related tasks.
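
A block can also carry rescue and always sections for that exception handling, much like try/except/finally; a minimal sketch:

```yaml
- name: block with error handling
  block:
    - name: a task that may fail
      shell: /usr/bin/false
  rescue:
    - name: runs only when a task in the block fails
      debug:
        msg: "recovering from the failure"
  always:
    - name: runs in every case
      debug:
        msg: "cleanup"
```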

7. Disable selinux

The playbook to disable selinux is as follows:

[root@ansible roles]# cat disable_selinux.yaml 
---
- name: disable selinux
  hosts: node
  gather_facts: false
  tasks:
    - name: disable on the fly
      shell: setenforce 0
      ignore_errors: true   #setenforce may exit non-zero (e.g. when selinux is already off); ignore_errors keeps the playbook from aborting the remaining tasks

    - name: disable forever in config
      lineinfile:
        path: /etc/selinux/config
        line: "SELINUX=disabled"     #Set the value in the config file to disable selinux permanently
        regexp: '^SELINUX='          #The line to replace

Note: ignore_errors is also often used in conjunction with blocks: set at the block level, it handles all errors within the block.

8. Configure iptables rules

The playbook file is as follows:

- name: Set Firewall
  hosts: node
  gather_facts: false
  tasks: 
    - name: set iptables rule
      shell: |
        # Back up existing rules
        iptables-save > /tmp/iptables.bak$(date +"%F-%T")
        # Delete custom chains, flush all rules, zero the counters
        iptables -X
        iptables -F
        iptables -Z

        # Allow loopback traffic and ping
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow established/related connections and ports 22, 443, 80
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT

        # Set the filter table's default chain policies; INPUT drops all other packets
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

9. Modify the sshd Configuration File Remotely and Restart

Sometimes, for server security, the default sshd configuration on the target node is changed, for example to forbid root login and password authentication, allowing only key-based authentication.

There are generally several ways to modify a service's configuration file:

  • Modify the configuration file by remotely executing commands such as sed;
  • Modify the configuration file through the lineinfile module;
  • Write the configuration file locally on the ansible side and transfer it to the target node with the copy or template module.

Comparatively, the third option is the most uniform and maintainable.

In addition, modifying a service's configuration file usually means restarting the service to load the new configuration, and sshd is no exception. But sshd is special: ansible connects over ssh by default, so restarting sshd drops the ansible connection. ansible retries the connection by default, which normally just costs a few extra seconds. However, re-establishing the connection can fail outright, for example if the new configuration forbids the login method in use or changes the sshd listening port, in which case ansible cannot continue with subsequent tasks.

Therefore, when modifying the sshd configuration file, the following suggestions are made:

  • Make this the last task of the server initialization, so that even if the connection fails afterwards, all other tasks have already completed;
  • Add exception handling for connection failure in the playbook;
  • If the sshd port is changed on the target node, update the ssh connection port in the inventory file, either automatically through ansible or manually.
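
For the last point, one option (a hypothetical sketch; 22022 is a placeholder port) is to switch the connection port on the fly once the new sshd configuration is active:

```yaml
- name: talk to sshd on its new port from now on
  set_fact:
    ansible_port: 22022
```

Alternatively, set the port per host in the inventory file, e.g. 192.168.20.4 ansible_port=22022.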

For simplicity, I'll use the lineinfile module to modify the configuration file. Only two settings need to change:

  • Set PermitRootLogin to no to prevent the root user from logging in directly;
  • Set PasswordAuthentication to no to disallow password-based login.

The playbook is as follows:

[root@ansible roles]# cat sshd_config.yaml 
---
- name: modify sshd_config
  hosts: node
  gather_facts: false
  tasks:
    # 1. Back up the /etc/ssh/sshd_config file
    - name: backup sshd config
      shell:
        /usr/bin/cp -f {{path}} {{path}}.bak
      vars:
        - path: /etc/ssh/sshd_config

    # 2. Set PermitRootLogin no
    - name: disable root login
      lineinfile:
        path: "/etc/ssh/sshd_config"
        line: "PermitRootLogin no"
        insertafter: "^#PermitRootLogin"
        regexp: "^PermitRootLogin"
      notify: "restart sshd"
    # 3. Set PasswordAuthentication no
    - name: disable password auth
      lineinfile:
        path: "/etc/ssh/sshd_config"
        line: "PasswordAuthentication no"
        regexp: "^PasswordAuthentication yes"
      notify: "restart sshd"

  handlers:
    - name: "restart sshd"
      service:
        name: sshd
        state: restarted

The roles of notify and handlers are as follows:

ansible monitors each task's changed status after it executes. changed=1 means the monitored state was modified, i.e. the task actually changed something on the target; changed=0 means the task either did not run or ran without changing anything.

Ansible provides the notify directive together with handlers. If a task defines notify and ansible sees changed=1 for that task, the handler named by notify is triggered and executed. A handler is in fact just a task; it differs from a normal task neither in syntax nor in capability. The only difference is that a handler runs passively when triggered, rather than in the normal task flow.

The only thing to note is that the value given to notify and the handler's name must match exactly. For example, with notify: 'restart sshd', some task under handlers must have name: 'restart sshd'.

In addition, in the playbook above both lineinfile tasks notify the same handler, yet ansible does not restart sshd twice; it restarts it once, at the end. In fact, ansible does not run a handler immediately after the notifying task. Instead, handlers run after all normal tasks in the current play have finished. The advantage is that a notify can be triggered many times while the corresponding handler runs only once, avoiding repeated restarts.
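
If a handler must run in the middle of a play instead of at its end, ansible can be told to flush pending handlers early:

```yaml
- name: run all pending handlers right now
  meta: flush_handlers
```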

10. Integrate all tasks into a single playbook

This aggregates all the previous playbooks into a single playbook file so that you can perform all the tasks at once.

The integrated playbook is as follows:

---
- name: Configure ssh Connection
  hosts: node
  gather_facts: false
  connection: local
  tasks:
    - name: configure ssh connection
      shell: |
        ssh-keyscan {{inventory_hostname}} >>~/.ssh/known_hosts
        sshpass -p '123.com' ssh-copy-id root@{{inventory_hostname}}

- name: Set Hostname
  hosts: node
  gather_facts: false
  vars:
    hostnames:
      - host: 192.168.20.4
        name: node01
      - host: 192.168.20.5
        name: node02
  tasks: 
    - name: set hostname
      hostname: 
        name: "{{item.name}}"
      when: item.host == inventory_hostname
      loop: "{{hostnames}}"

- name: Add DNS For Each
  hosts: node
  gather_facts: true
  tasks: 
    - name: add DNS
      lineinfile: 
        path: "/etc/hosts"
        line: "{{item}} {{hostvars[item].ansible_hostname}}"
      when: item != inventory_hostname
      loop: "{{ play_hosts }}"

- name: Config Yum Repo And Install Software
  hosts: node
  gather_facts: false
  tasks: 
    - name: backup origin yum repos
      shell: 
        cmd: "mkdir bak; mv *.repo bak"
        chdir: /etc/yum.repos.d
        creates: /etc/yum.repos.d/bak

    - name: add os repo and epel repo
      yum_repository: 
        name: "{{item.name}}"
        description: "{{item.name}} repo"
        baseurl: "{{item.baseurl}}"
        file: "{{item.name}}"
        enabled: 1
        gpgcheck: 0
        reposdir: /etc/yum.repos.d
      loop:
        - name: os
          baseurl: "https://mirrors.tuna.tsinghua.edu.cn/centos/7/os/$basearch"
        - name: epel
          baseurl: "https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch"

    - name: install pkgs
      yum: 
        name: lrzsz,vim,dos2unix,wget,curl
        state: present

- name: Sync Time
  hosts: node
  gather_facts: false
  tasks: 
    - name: install and sync time
      block: 
        - name: install ntpdate
          yum: 
            name: ntpdate
            state: present

        - name: ntpdate to sync time
          shell: |
            ntpdate ntp1.aliyun.com
            hwclock -w

- name: Disable Selinux
  hosts: node
  gather_facts: false
  tasks: 
    - block: 
        - name: disable on the fly
          shell: setenforce 0

        - name: disable forever in config
          lineinfile: 
            path: /etc/selinux/config
            line: "SELINUX=disabled"
            regexp: '^SELINUX='
      ignore_errors: true

- name: Set Firewall
  hosts: node
  gather_facts: false
  tasks: 
    - name: set iptables rule
      shell: |
        # Back up existing rules
        iptables-save > /tmp/iptables.bak$(date +"%F-%T")
        # Delete custom chains, flush all rules, zero the counters
        iptables -X
        iptables -F
        iptables -Z

        # Allow loopback traffic and ping
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow established/related connections and ports 22, 443, 80
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT

        # Set the filter table's default chain policies; INPUT drops all other packets
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT

- name: Modify sshd_config
  hosts: node
  gather_facts: false
  tasks:
    - name: backup sshd config
      shell: 
        /usr/bin/cp -f {{path}} {{path}}.bak
      vars: 
        - path: /etc/ssh/sshd_config

    - name: disable root login
      lineinfile: 
        path: "/etc/ssh/sshd_config"
        line: "PermitRootLogin no"
        insertafter: "^#PermitRootLogin"
        regexp: "^PermitRootLogin"
      notify: "restart sshd"

    - name: disable password auth
      lineinfile: 
        path: "/etc/ssh/sshd_config"
        line: "PasswordAuthentication no"
        regexp: "^PasswordAuthentication yes"
      notify: "restart sshd"

  handlers: 
    - name: "restart sshd"
      service: 
        name: sshd
        state: restarted

Topics: Linux ansible iptables yum ssh