Bootstrapping Ansible
If the host you are trying to administer does not have Python, then Ansible will not be useful. Just add Python via this bootstrap in your playbook site.yml. Remember to gather facts right after this task so the rest of your playbook can rock and roll.
---
- name: Ansible Bootstrapping Debian
  hosts: debian
  gather_facts: no
  tasks:
    - name: Ensure Python on Debian
      raw: which python || (apt -qq update && apt -qq install python)
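One way to pick up the facts once Python is in place (my addition, not part of the original snippet) is an explicit setup task right after the raw bootstrap:

    # Assumption: gather facts explicitly here because gather_facts is
    # disabled for the play and the raw bootstrap has just installed Python
    - name: Gather facts after bootstrap
      setup: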
Portable Ansible with Vault Security
So you want to develop a complex playbook and share it with a team that may or may not already use Ansible. To ease usage for everyone, try a portable setup, starting with documentation in a project directory. One issue is supplying a root or sudo password for inventory items, which we will cover with a per-host example.
mkdir -p project/playbook/roles/base/tasks && cd project/playbook && touch README hosts.yml site.yml roles/base/tasks/main.yml
- README
Portable Ansible and Playbook Howto
1. Create an Ansible Vault password file:
1. $ echo "thepassword" > ~/.vault
2. In the project directory where this README lives, get the Ansible 2.4 stable branch:
1. $ git clone -b stable-2.4 --single-branch https://github.com/ansible/ansible.git ansible
2. $ cd ansible
3. $ source ./hacking/env-setup
3. We can run the playbook with the Ansible Vault
1. $ ./bin/ansible-playbook --vault-id ~/.vault -e @../playbook/secret.yml -i ../playbook/hosts.yml ../playbook/site.yml
4. Enjoy
- hosts.yml
all:
  children:
    group_a:
      hosts:
        host1.example.com:
      vars:
        ansible_user: production
        ansible_become_pass: "{{ example_root_pass }}"
  vars:
    ansible_become: yes
    ansible_become_method: su
    ansible_become_user: root
- site.yml
---
- hosts: group_a
  roles:
    - base
- secret.yml (edit with ./bin/ansible-vault edit --vault-id ~/.vault ../playbook/secret.yml; creating it from scratch is covered after this file list)
---
example_root_pass: one2three4five6seven8nine10
app1password: qwertyuiop
shortkey: TF1DaAFxfeJ9zcVdE
- roles/base/tasks/main.yml
---
- name: do stuffs
  apt:
    name: vlan
    state: latest

- name: install key
  copy:
    dest: /root/.privatekey
    content: "{{ shortkey }}"
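The secret.yml referenced above has to exist before the playbook run. A minimal sketch for creating and encrypting it in one step, assuming the same vault password file from the README:

$ ./bin/ansible-vault create --vault-id ~/.vault ../playbook/secret.yml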
Security tasks in Ansible to ban services
I was playing around and just wrote the following playbook task to keep people off of production hardware.
- group_vars/all
banned_services:
  - screen
  - tmux
- security.yml task
- name: Kill banned services
  shell: "pkill -f {{ item }}"
  with_items: "{{ banned_services }}"
  ignore_errors: yes
  changed_when: False
  failed_when: False
This runs pkill against a list of process names, which is both dangerous and effective at the same time. The output will look like:
TASK [common : Kill banned services] **********************************
ok: [192.168.15.12] => (item=screen)
ok: [192.168.15.13] => (item=screen)
ok: [192.168.15.11] => (item=screen)
ok: [192.168.15.12] => (item=tmux)
ok: [192.168.15.11] => (item=tmux)
ok: [192.168.15.13] => (item=tmux)
Which should be all green and evil at the same time.
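pkill only stops what is already running; to keep the same tools from coming back, a follow-up task could remove the packages too. This is my assumption of a next step, not part of the original play:

- name: Remove banned packages
  # Assumption: the service names double as Debian package names
  apt:
    name: "{{ item }}"
    state: absent
  with_items: "{{ banned_services }}"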
Ansible task for libvirt setup
Playing with some libvirt stuffs and set up a quick task to get my HVM nodes working the way I want. Will update with some fine tuning over time.
---
- name: HVM Packages to install
  apt:
    name: "{{ item }}"
    state: latest
  with_items:
    - qemu-kvm
    - libvirt-clients
    - libvirt-daemon-system

- name: Add user to group
  user:
    name: hvm
    groups: libvirt-qemu,libvirt
    append: yes
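One bit of fine tuning that usually comes next (an assumption on my part, not in the original task list) is making sure the daemon itself is enabled and running:

- name: Ensure libvirtd is enabled and running
  # Assumption: the service is named libvirtd on Debian with libvirt-daemon-system
  service:
    name: libvirtd
    state: started
    enabled: yes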
Docker on Debian
# System might not be ready yet so check
- name: Wait for SSH to come up
  wait_for_connection:
    delay: 5

# Basic packages
- name: Basic Packages to install
  apt:
    name: ["apt-transport-https", "unzip"]
    update_cache: yes

# Upgrade all packages
- name: Upgrade all packages to the latest version
  command: "apt-get -qq upgrade"
  changed_when: False

# Use Docker repo as upstream, install their key
- name: Docker Repo Signing key
  apt_key:
    url: https://download.docker.com/linux/debian/gpg
    state: present

# Setup repo for upstream Docker
- name: Docker Repo Setup
  apt_repository:
    repo: "deb https://download.docker.com/linux/debian/ stretch stable"
    state: present

# With docker.com as upstream install packages.
# Docker Compose installs Python tooling
- name: install docker
  apt:
    name: ["docker-ce", "docker-compose"]
    update_cache: yes
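The docker-ce package normally starts the daemon on its own, but an explicit task makes the intent visible. A small sketch, my addition rather than part of the original role:

# Assumption: keep the daemon enabled and running after install
- name: Ensure docker is enabled and running
  service:
    name: docker
    state: started
    enabled: yes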
One Liners
Examples of running ad-hoc commands from a local ansible source tree without installing anything.
Assume key works
$ ansible all -i 192.168.15.11, -a "uname -a"
192.168.15.11 | SUCCESS | rc=0 >>
Linux nodeone 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux
Set key
$ ansible all -i 192.168.15.11, -a "uname -a" --private-key=~/.ssh/id_rsa
192.168.15.11 | SUCCESS | rc=0 >>
Linux nodeone 3.16.0-4-amd64 #1 SMP Debian 3.16.39-1 (2016-12-30) x86_64 GNU/Linux
whoami
$ ansible all -i 192.168.15.11, -a "whoami" --private-key=~/.ssh/id_rsa
192.168.15.11 | SUCCESS | rc=0 >>
lathama
become root via su
$ ansible all -i 192.168.15.11, --private-key=~/.ssh/id_rsa -b --become-method=su -K -a "whoami"
SU password:
192.168.15.11 | SUCCESS | rc=0 >>
root
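One more ad-hoc example along the same lines (my addition, output omitted): the ping module confirms that Ansible can reach the node and find Python on it.
$ ansible all -i 192.168.15.11, -m ping --private-key=~/.ssh/id_rsa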
Setting up EC2 on Amazon Web Services (AWS) example
- name: Spin up EC2
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ec2_size: t2.micro
    ec2_image: ami-0ef798b78daa90ad3 # Debian Stable
    ec2_count: 1
    ec2_group: demo
    aws_region: us-west-2
  tasks:
    # AWS specific config for SSH keys
    - name: SSH key pair
      ec2_key:
        name: ski_keys
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: "{{ aws_region }}"
        key_material: "{{ ssh_key }}"

    # AWS specific config for firewall rules - lack of IPv6 by default
    - name: Security Group
      ec2_group:
        name: "{{ ec2_group }}"
        description: "{{ ec2_group }}"
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: "{{ aws_region }}"
        state: present
        purge_rules: true
        purge_rules_egress: true
        rules:
          - proto: tcp
            ports: 80
            cidr_ip: 0.0.0.0/0
            rule_desc: HTTP
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
            rule_desc: SSH
        rules_egress:
          - proto: all
            cidr_ip: 0.0.0.0/0
            rule_desc: Allow All

    # Inspect existing inventory for tag
    - name: Check Instances
      ec2_instance_facts:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: '{{ aws_region }}'
        filters:
          "tag:Name": demo
      register: ec2_facts

    # If it exists and is stopped, start it
    - name: If it exists and is stopped, start it
      ec2:
        instance_ids: '{{ ec2_facts.instances[0].instance_id }}'
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: '{{ aws_region }}'
        state: running
        wait: True
      register: ec2_instances
      when: ec2_facts.instances[0].instance_id is defined

    # AWS specific node startup - need wrapper for various states
    - name: Setup Instance
      ec2:
        id: demo
        instance_tags:
          Name: skigame
        key_name: ski_keys
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        region: "{{ aws_region }}"
        instance_type: "{{ ec2_size }}"
        image: "{{ ec2_image }}"
        group: "{{ ec2_group }}"
        count: "{{ ec2_count }}"
        wait: yes
      register: ec2_instances
      no_log: True

    # Register created hosts in dynamic inventory
    - name: Add hosts
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: aws
        ansible_user: admin
      with_items: "{{ ec2_instances.instances }}"
      changed_when: False
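Once add_host has populated the in-memory aws group, a second play in the same playbook can configure the new nodes. A minimal sketch, assuming they only need the usual wait for SSH before real work starts:

- name: Configure the new EC2 nodes
  hosts: aws
  gather_facts: false
  tasks:
    # Assumption: give the instances time to finish booting before doing anything else
    - name: Wait for SSH to come up
      wait_for_connection:
        delay: 5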