Thursday, April 23, 2026

CIS Compliance Report Generation | Ansible AAP or AWX

 Hello Guys,

This is an extended version of my previous project, where I also need to generate a report after applying the CIS compliance to a RHEL 9 server. I also need to display the report, so what I did is install an httpd server alongside the report generation and serve the report in HTML format from the same server.

Here is my sample playbook:

---
- name: Generate OpenSCAP report | RHEL 9
  hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Install all required packages
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
      loop:
        - openscap-scanner
        - scap-security-guide

    - name: Get stats of the file
      ansible.builtin.stat:
        path: /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
      register: file_data

    - name: Assert that the file exists
      ansible.builtin.assert:
        that:
          - file_data.stat.exists
        fail_msg: "The required file /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml does not exist."
        success_msg: "File existence verified; we have the profile."

    - name: Create directory for compliance report
      ansible.builtin.file:
        path: /var/log/compliance
        state: directory
        mode: '0755'

    - name: Scan and generate report
      ansible.builtin.shell: |
        oscap xccdf eval \
          --profile xccdf_org.ssgproject.content_profile_cis_server_l1 \
          --results /var/log/compliance/cis-l1-results.xml \
          --report /var/log/compliance/cis-l1-report.html \
          /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
      register: report
      # oscap exits non-zero when any rule fails, so don't fail the play
      ignore_errors: true

    - name: Install some essential packages
      ansible.builtin.dnf:
        name: "{{ item }}"
        state: present
      loop:
        - httpd
        - python3-pip

    - name: Install json2html python package
      ansible.builtin.pip:
        name: json2html

    - name: Start httpd service
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

    - name: Make sure port 80 is allowed in firewall
      ansible.posix.firewalld:
        service: http
        permanent: true
        state: enabled
        immediate: true

    - name: Move report to html root for validation
      ansible.builtin.copy:
        src: /var/log/compliance/cis-l1-report.html
        dest: /var/www/html/cis-l1-report.html
        mode: '0644'
        remote_src: true

Wednesday, April 22, 2026

Automating CIS Benchmark Compliance on RHEL 9 Using Ansible | RedHat AAP or AWX

Security compliance is no longer optional—especially in regulated industries like BFSI. Organizations are expected to adhere to well-defined benchmarks such as the Center for Internet Security (CIS) standards to ensure hardened and secure systems.

In this blog, we’ll explore how to automate CIS Benchmark enforcement on RHEL 9 using Ansible, making compliance repeatable, auditable, and scalable.


Why CIS Benchmarking Matters

CIS benchmarks provide:

  • Industry-recognized security standards
  • Hardened configurations for OS and applications
  • Reduced attack surface
  • Compliance alignment (PCI-DSS, ISO 27001, etc.)

Manual implementation is complex and error-prone—automation solves that.


Playbook Overview

This playbook performs:

  1. GRUB password hardening
  2. User account preparation
  3. CIS role execution (Level 1 Server profile)
  4. Audit and compliance validation

Key Components Explained

1. Secure Bootloader Configuration (GRUB Hardening)

One of the critical CIS controls is protecting bootloader settings to prevent unauthorized changes.

This playbook:

  • Uses expect to automate interactive password setup
  • Sets a GRUB2 password securely
  • Updates GRUB configuration
grub2-setpassword

🔐 Why it matters:
Prevents attackers from modifying boot parameters (e.g., entering single-user mode).


2. Secure Password Handling

The playbook uses:

no_log: true

This ensures:

  • Passwords are not exposed in logs
  • Sensitive data remains protected

⚠️ Best Practice:
Use Ansible Vault instead of plain text passwords.


3. User Management

The playbook:

  • Gathers existing users
  • Sets password for the automation user
password_hash('sha512')

🔐 This aligns with secure password storage practices.


4. CIS Role Execution

The core of the automation is the RHEL9-CIS role, which enforces multiple controls:

  • File permissions
  • SSH hardening
  • Audit configuration
  • Kernel parameters
  • Logging and monitoring

Key configurations:

  • setup_audit: true → Enables auditd setup
  • run_audit: true → Runs compliance checks
  • skip_reboot: false → Allows required reboots

5. Compliance Validation with Goss

The playbook integrates Goss for validation:

  • Lightweight validation tool
  • Ensures system state matches expectations
  • Provides quick compliance feedback

Execution Flow

Install Dependencies → Set GRUB Password → Update Config
→ Gather Users → Apply CIS Role → Run Audit → Validate Compliance

Security Considerations

Handled Well

  • Sensitive data masked (no_log)
  • Idempotent execution
  • Automated audit validation

⚠️ Needs Improvement

  • Avoid hardcoded passwords (primod123)
  • Use Ansible Vault for secrets
  • Validate impact before enabling reboot

Best Practices for Production

  • Run in audit-only mode first:

    audit_only: true
  • Test in staging before production rollout
  • Maintain exception list for business-critical users
  • Integrate with SIEM tools for reporting
  • Schedule periodic compliance scans

Use Cases in Enterprise Environments

  • BFSI compliance enforcement
  • Cloud VM hardening (AWS, Azure, etc.)
  • Regulatory audits
  • Secure baseline creation

Benefits of This Approach

🚀 Automation at Scale

Apply CIS policies across hundreds of servers consistently.

🔁 Repeatability

Same configuration every time—no drift.

📊 Audit-Ready

Reports and validation built-in.

🔒 Improved Security Posture

Reduced vulnerabilities and misconfigurations.


Potential Enhancements

  • Integrate with Red Hat Ansible Automation Platform workflows
  • Add approval gates (ServiceNow/Jira)
  • Enable Event-Driven Automation (EDA)
  • Centralized reporting dashboards
  • Role-based execution (Level 1 vs Level 2 CIS)

Conclusion

Automating CIS benchmark enforcement transforms security from a manual task into a continuous, reliable process. With Ansible, organizations can ensure systems remain compliant, secure, and audit-ready at all times.

This playbook is a strong foundation for building a compliance-as-code strategy, enabling proactive security management across your infrastructure.


This is the playbook I have used to benchmark my server; its compliance score is around 91.00%.

---
- name: CIS Benchmark mapping
  hosts: all
  become: true
  vars:
    # It is strongly recommended to store this in an Ansible Vault
    grub_password: "YourSecurePasswordHere"

  pre_tasks:
    - name: Ensure the expect package is installed
      ansible.builtin.package:
        name: expect
        state: present

    - name: Set GRUB2 password for the root user
      ansible.builtin.expect:
        command: grub2-setpassword
        responses:
          'Enter password:': "{{ grub_password }}"
          'Confirm password:': "{{ grub_password }}"
        # Only run if the user configuration doesn't already exist
        creates: /boot/grub2/user.cfg
      register: grub_pw_set
      no_log: true  # Prevents the password from appearing in logs

    - name: Update GRUB2 configuration
      ansible.builtin.command:
        cmd: grub2-mkconfig -o /boot/grub2/grub.cfg
      when: grub_pw_set.changed

    - name: Gather available local users
      ansible.builtin.getent:
        database: passwd
      register: user_facts

    # - name: "Setup Password for ec2-user"
    #   ansible.builtin.user:
    #     name: ec2-user
    #     password: "{{ 'primod123' | password_hash('sha512') }}"
    #   when: "'ec2-user' in user_facts"

    # - name: "Setting password"
    #   ansible.builtin.debug:
    #     msg: "Password for ec2-user has been set. Please change it after first login."
    #   when: "'ec2-user' in user_facts"

    # - name: "Setup Password for azureuser"
    #   ansible.builtin.user:
    #     name: azureuser
    #     password: "{{ 'primod123' | password_hash('sha512') }}"
    #   when: "'azureuser' in user_facts"

    # - name: "Setting password"
    #   ansible.builtin.debug:
    #     msg: "Password for azureuser has been set. Please change it after first login."
    #   when: "'azureuser' in user_facts"

    - name: "Setup password for ansible_user"
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        # Store this password in Ansible Vault instead of hardcoding it
        password: "{{ 'primod123' | password_hash('sha512') }}"
      # when: "ansible_user in user_facts"

  roles:
    - name: "RHEL9-CIS"
      vars:
        setup_audit: true
        run_audit: true
        # audit_only: true
        rhel9cis_allow_authselect_updates: false
        rhel9cis_crypto_policy_ansiblemanaged: false
        skip_reboot: false
        rhel9cis_warning_banner: |
          This Policy is Applied on RHEL9-CIS Benchmark.
          Unauthorized access to this system is prohibited.
          All activities on this system are logged and monitored.
          By accessing this system, you consent to such monitoring and logging.
          By Ansible Automation Platform Team
          Automation Engineering Team
          For any new changes please reach out to us.
        rhel9cis_sudoers_exclude_nopasswd_list:
          - "{{ ansible_user }}"
        goss_url: https://github.com/goss-org/goss/releases/download/v0.4.9/goss-linux-arm64
        goss_version:
          release: v0.4.9
          checksum: "sha256:87dd36cfa1b8b50554e6e2ca29168272e26755b19ba5438341f7c66b36decc19"
      tags:
        - level1-server


Friday, April 17, 2026

Automating Dynamic Disk Detection and LVM Setup Using Ansible | Ansible Automation Platform

 In modern infrastructure environments, storage requirements are dynamic—new disks are frequently added to systems, especially in cloud and virtualized environments. Manually detecting, configuring, and mounting these disks is time-consuming and error-prone.

This blog walks through an automated approach using Ansible to dynamically detect new disks and configure them with Logical Volume Manager (LVM)—end-to-end, without manual intervention.


Why Automate Disk Provisioning?

Traditionally, storage provisioning involves multiple manual steps:

  • Identifying unused disks
  • Creating physical volumes (PV)
  • Creating volume groups (VG)
  • Creating logical volumes (LV)
  • Formatting and mounting

Automation helps:

  • Reduce human errors
  • Ensure consistency across systems
  • Speed up provisioning (seconds vs minutes)
  • Enable event-driven infrastructure

Solution Overview

This playbook automates the complete lifecycle:

  1. Detect unpartitioned disks
  2. Filter valid target disks
  3. Install required dependencies
  4. Create LVM structure (PV → VG → LV)
  5. Format filesystem
  6. Mount and persist configuration

Key Components Explained

1. Dynamic Disk Detection

The playbook leverages Ansible facts (ansible_devices) to identify disks with no partitions:

unpartitioned_disks: "{{ ansible_devices | dict2items | selectattr('value.partitions', 'equalto', {}) }}"

This ensures only unused disks are considered.


2. Intelligent Disk Filtering

Not all devices should be used. The playbook excludes:

  • Device mapper entries (dm-*)
  • CD-ROM devices (sr*)
reject('match', '^dm-.*')
reject('match', '^sr.*')

This avoids accidental modification of system-critical or virtual devices.
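The detection and filtering steps above can be sketched in plain Python; this is a minimal illustration of what the `selectattr`/`reject` filter chain does, using a made-up `ansible_devices`-style dict (the device names here are hypothetical):

```python
import re

# Hypothetical subset of Ansible's ansible_devices fact
ansible_devices = {
    "sda":  {"partitions": {"sda1": {}, "sda2": {}}},  # OS disk, has partitions
    "sdb":  {"partitions": {}},                        # new empty disk
    "dm-0": {"partitions": {}},                        # device-mapper entry
    "sr0":  {"partitions": {}},                        # CD-ROM
}

# selectattr('value.partitions', 'equalto', {}) | map(attribute='key') | list
unpartitioned = [name for name, dev in ansible_devices.items()
                 if dev["partitions"] == {}]

# reject('match', '^dm-.*') | reject('match', '^sr.*')
target_disks = [d for d in unpartitioned
                if not re.match(r"^dm-.*", d) and not re.match(r"^sr.*", d)]

print(target_disks)  # only 'sdb' survives both filters
```

Only the truly empty, non-virtual disk remains, which is exactly the safety property the playbook relies on.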


3. Dependency Installation

Before performing LVM operations, required packages are installed:

  • lvm2 → LVM management
  • xfsprogs → Filesystem tools
  • sg3_utils → SCSI utilities

This ensures the system is ready for storage operations.


4. Safety Check

The playbook includes a fail-safe:

- name: Fail if no suitable empty disk was found

This prevents unintended execution when no valid disk is available.


5. LVM Creation Workflow

The playbook automates:

  • Physical Volume (PV) creation
  • Volume Group (VG) creation
  • Logical Volume (LV) allocation (100% of space)

Using Ansible modules:

  • community.general.lvg
  • community.general.lvol

This ensures idempotent and repeatable execution.


6. Filesystem Creation

A filesystem (default: XFS) is created:

fstype: xfs

XFS is widely used in enterprise Linux environments due to its scalability and performance.


7. Mounting and Persistence

Finally:

  • Mount point is created
  • Filesystem is mounted
  • /etc/fstab is updated automatically

This guarantees persistence across reboots.
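For reference, the persisted `/etc/fstab` entry that `state: mounted` writes follows the standard six-field format; this sketch builds it from the playbook's vars (the `defaults 0 0` options are the usual defaults, shown here as an assumption):

```python
# Build the fstab line the mount module persists (vg_name, lv_name,
# mount_path, fs_type are the playbook's vars; options are assumed defaults).
vg_name, lv_name = "data_vg", "data_lv"
mount_path, fs_type = "/mnt/data", "xfs"

fstab_line = f"/dev/{vg_name}/{lv_name} {mount_path} {fs_type} defaults 0 0"
print(fstab_line)
```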


End-to-End Workflow Visualization

New Disk → Detect → Filter → Validate
→ Create PV → Create VG → Create LV
→ Format → Mount → Persist

Key Benefits of This Approach

Fully Automated

No manual intervention required after disk attachment.

Idempotent

Safe to run multiple times—no duplicate configurations.

Scalable

Works across hundreds or thousands of servers.

Error-Resilient

Built-in checks prevent misconfiguration.

Standardized

Enforces consistent naming and structure.

Sample playbook

---
- name: Dynamic Disk Detection and LVM Setup
  hosts: all
  become: true
  vars:
    vg_name: "data_vg"
    lv_name: "data_lv"
    mount_path: "/mnt/data"
    fs_type: "xfs"

  tasks:
    - name: Find unpartitioned disks
      ansible.builtin.set_fact:
        unpartitioned_disks: "{{ ansible_devices | dict2items | selectattr('value.partitions', 'equalto', {}) | map(attribute='key') | list }}"

    - name: Filter for actual physical/virtual disks
      ansible.builtin.set_fact:
        target_disk: "{{ unpartitioned_disks |
                         reject('match', '^dm-.*') |
                         reject('match', '^sr.*') |
                         list }}"

    - name: Fail if no suitable empty disk was found
      ansible.builtin.fail:
        msg: "No unpartitioned disks found on the system."
      # target_disk is always defined after set_fact, so check for an empty list
      when: target_disk | length == 0

    - name: Show the real new disk
      ansible.builtin.debug:
        msg: "The actual new disk is: /dev/{{ target_disk | first }}"

    - name: Install required packages for LVM and filesystem management
      ansible.builtin.package:
        name:
          - lvm2
          - xfsprogs
          - sg3_utils
        state: present

    - name: Create Physical Volume and Volume Group
      community.general.lvg:
        vg: "{{ vg_name }}"
        pvs: "/dev/{{ target_disk | first }}"
        state: present

    - name: Create Logical Volume (100% of VG space)
      community.general.lvol:
        vg: "{{ vg_name }}"
        lv: "{{ lv_name }}"
        size: 100%FREE
        state: present

    - name: Create Filesystem on Logical Volume
      community.general.filesystem:
        fstype: "{{ fs_type }}"
        dev: "/dev/{{ vg_name }}/{{ lv_name }}"

    - name: Ensure Mount Directory exists
      ansible.builtin.file:
        path: "{{ mount_path }}"
        state: directory
        mode: '0755'

    - name: Mount the volume and update fstab
      ansible.builtin.mount:
        path: "{{ mount_path }}"
        src: "/dev/{{ vg_name }}/{{ lv_name }}"
        fstype: "{{ fs_type }}"
        state: mounted

Thursday, April 2, 2026

Build a Report of Redhat AAP or AWX

 Hello Guys,

I am currently working with a client who asked whether I can export the report he sees in the AAP/AWX dashboard in HTML or PDF format for compliance. I started researching a bit and shared some reports that are available on the internet, but he did not like them.

He wanted a tailor-made report showing the job name, start and finish time, how long it took, and the final status; and if I could also show the logs of that execution, that would be great.

So I used the APIs available with AAP or AWX to fetch a report of the last 30 days with these fields.

Below is the job template, report.yml:


---
- name: Generate Last 1 Month Job Report
  hosts: aux_server
  # Can be replaced with any server name you want; localhost will not work,
  # as the report would be destroyed once the job finishes execution.
  become: true
  gather_facts: true
  vars:
    controller_url: "https://<FQDN_AAP or AWX server>"
    tower_username: "admin"
    tower_password: "primod123"
    # Calculate date 30 days ago in ISO format (YYYY-MM-DD)
    last_month: "{{ lookup('pipe', 'date -d \"30 days ago\" +%Y-%m-%d') }}"
    page_size: 200
    is_pdf: false
  # Remove the word "controller" from the API call if you are using AWX
  tasks:
    - name: Fetch first page of jobs
      ansible.builtin.uri:
        url: "{{ controller_url }}/api/controller/v2/jobs/?finished__gte={{ last_month }}&order_by=-finished&page_size={{ page_size }}"
        method: GET
        user: "{{ tower_username }}"
        password: "{{ tower_password }}"
        force_basic_auth: yes
        validate_certs: false
      register: first_page

    - name: Calculate total pages
      ansible.builtin.set_fact:
        total_pages: "{{ (first_page.json.count / page_size) | round(0, 'ceil') | int }}"

    - name: Fetch remaining pages
      ansible.builtin.uri:
        url: "{{ controller_url }}/api/controller/v2/jobs/?finished__gte={{ last_month }}&order_by=-finished&page_size={{ page_size }}&page={{ item }}"
        method: GET
        user: "{{ tower_username }}"
        password: "{{ tower_password }}"
        force_basic_auth: yes
        validate_certs: false
      # Loop from page 2 to the end
      loop: "{{ range(2, total_pages | int + 1) | list }}"
      register: remaining_pages
      when: total_pages | int > 1

    - name: Consolidate all results
      ansible.builtin.set_fact:
        all_jobs: >-
          {{ first_page.json.results +
             (remaining_pages.results | default([]) | map(attribute='json.results') | flatten) }}

    - name: Generate HTML Report
      ansible.builtin.template:
        src: report_template.j2
        dest: "/usr/share/nginx/html/job_report_{{ ansible_date_time.date }}.html"
      vars:
        jobs: "{{ all_jobs }}"
        report_title: "Monthly Automation Summary"
        report_range: "Last 30 Days (Since {{ last_month }})"
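The date window and pagination math in that playbook can be sketched in Python; the `count` value here is a made-up example standing in for `first_page.json.count`:

```python
import math
from datetime import date, timedelta

# Same 30-day window the playbook computes with `date -d "30 days ago"`
last_month = (date.today() - timedelta(days=30)).isoformat()  # YYYY-MM-DD

# The "Calculate total pages" / "Fetch remaining pages" logic:
count = 450       # hypothetical first_page.json.count
page_size = 200
total_pages = math.ceil(count / page_size)
remaining = list(range(2, total_pages + 1))  # pages the loop fetches

print(total_pages, remaining)  # 3 [2, 3]
```

Page 1 is already in hand from the first request, which is why the loop starts at 2.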
And here is the Jinja template, report_template.j2:

<!DOCTYPE html>
<html>
<head>
    <style>
        body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; margin: 40px; color: #333; }
        .header { border-bottom: 2px solid #005596; padding-bottom: 10px; margin-bottom: 20px; }
        .logo {height: 40px; /* Fixed height for a cleaner look */
               width: auto;  /* Maintains aspect ratio */
               margin-right: 20px; }
        table { width: 100%; border-collapse: collapse; margin-top: 20px; }
        th, td { padding: 12px; border: 1px solid #ddd; text-align: left; }
        th { background-color: #f8f9fa; font-weight: bold; }
        .status-successful { color: #28a745; font-weight: bold; }
        .status-failed { color: #dc3545; font-weight: bold; }
        .status-canceled { color: #f5f503fc; font-weight: bold; }
        .summary-box { background: #e9ecef; padding: 15px; border-radius: 5px; margin-bottom: 30px; }
    </style>
</head>
<body>
    <div class="header">
        <h1>{{ report_title }}</h1>
        <img src="https://raw.githubusercontent.com/benc-uk/icon-collection/refs/heads/master/logos/ansible.svg" alt="Ansible" class="logo"> <p><strong>Report generated using Ansible Automation Platform 2.6</strong></p>
        <p><strong>Period:</strong> {{ report_range }}</p>
        <p><strong>Generated On:</strong> {{ ansible_date_time.date }}</p>
    </div>

    <div class="summary-box">
        <strong>Total Jobs Executed:</strong> {{ jobs | length }}
    </div>

    <table>
        <thead>
            <tr>
                <th>Job ID</th>
                <th>Template Name</th>
                <th>Status</th>
                 <th>Started At</th>
                <th>Finished At</th>
                <th>Duration</th>
    
            </tr>
        </thead>
        <tbody>
            {% for job in jobs %}
            <tr>
                <td>{{ job.id }}</td>
                <td>{{ job.name }}</td>
                <td class="status-{{ job.status }}">{{ job.status | capitalize }}</td>
                <td>{{ job.started | default(job.created) }}</td>
                <td>{{ job.finished }}</td>
                <td>{{ '%M:%S' | ansible.builtin.strftime(job.elapsed) }}</td>
            </tr>{% endfor %}
        </tbody>
    </table>
</body>
</html>
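The Duration column uses a small trick: the elapsed seconds are fed to the `strftime` filter as if they were an epoch timestamp, and only the minutes and seconds are printed, which works for jobs under an hour. The Python equivalent (using `gmtime` to avoid timezone offsets) looks like this, with a made-up elapsed value:

```python
import time

# Format elapsed seconds as MM:SS by treating them as an offset from epoch.
elapsed = 125.3  # seconds, standing in for job.elapsed
duration = time.strftime("%M:%S", time.gmtime(elapsed))
print(duration)  # 02:05
```

Note that Ansible's `strftime` uses local time by default, so on hosts in a timezone with a non-whole-hour offset the minutes can be skewed; converting in UTC as above avoids that.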

sample output



Let me know your feedback in the comments.

Friday, December 12, 2025

Executing a Command in a Pod on Kubernetes or OpenShift

I am working on an application which is hosted in Kubernetes, and currently it does not support any kind of API. The only way to automate the workflow we are trying to achieve is to log in to one of the application's pods and run a command which performs the reconciliation.

So I have written an Ansible playbook, and to keep it simple I installed the kubernetes Python library on the target machine, through which I access the Kubernetes application:

pip install kubernetes

Once done, I wrote an Ansible playbook; the config is loaded from the ~/.kube/config file on that server. Below is the sample playbook. Here I am executing a sample play which prints the nginx pod's hostname, but it can be replaced with the actual command we want to run.

---
- name: Execute command inside a Kubernetes pod selected by label
  hosts: all
  gather_facts: no
  collections:
    - kubernetes.core
  vars:
    namespace: "default"               # add namespace here
    label_selector: "app=nginx"        # add label selector here
    container_name: ""                 # optional - leave empty to use default container
    exec_command: "/usr/bin/hostname"  # command to run inside the container
  tasks:
    - name: Get pods matching label selector
      kubernetes.core.k8s_info:
        api_version: v1
        kind: Pod
        namespace: "{{ namespace }}"
        label_selectors:
          - "{{ label_selector }}"
      register: pod_list

    - name: Fail if no pods found
      ansible.builtin.fail:
        msg: "No pods found with label {{ label_selector }} in {{ namespace }}"
      when: pod_list.resources | length == 0

    - name: Select the first pod
      ansible.builtin.set_fact:
        target_pod: "{{ pod_list.resources[0].metadata.name }}"

    - name: Show selected pod
      ansible.builtin.debug:
        msg: "Selected pod: {{ target_pod }}"

    - name: Exec command inside pod
      kubernetes.core.k8s_exec:
        namespace: "{{ namespace }}"
        pod: "{{ target_pod }}"
        # default(omit, true) omits the parameter when the string is empty
        container: "{{ container_name | default(omit, true) }}"
        command: "{{ exec_command }}"
      register: exec_output

    - name: Show command output
      ansible.builtin.debug:
        var: exec_output.stdout

Sample output



Wednesday, December 10, 2025

RHEL Patching with Insights and Ansible - Without Writing Code

 Hello Guys,

While working, I got a request from a client who wants to patch RHEL servers for specific CVEs, not the whole set, and at the end they want to reboot the systems as well. I worked on it and found it is a very straightforward flow where I can build the playbook in Insights and run it using Ansible.

Log in to Insights and make sure the system you are planning to patch is registered with Insights, then go to Security --> Vulnerability --> Systems. Here you will find the list of systems you are planning to patch.



Select the system and you will see the list of CVEs. Select the CVEs and click on Plan remediation.


A dialog box will open; you can select an existing playbook or create a new one, click Next a couple of times, and your playbook is ready.



Now you need to create a project in Ansible of type Insights.

Once done, your playbook is downloaded and ready to patch. Create a template just as shown in the picture and you are all set; just make sure the name of the host in your Ansible inventory and the name of the server in Insights are the same.

When you run it, you will be able to see the same CVEs getting patched on your RHEL machine.


It not only patched but also rebooted the system, and it also informs Insights, via the insights-client utility, which patches were applied on the system, so Insights removes those CVEs for that system based on this info.


And now if we check in Insights, the same CVEs are gone for that system.


let me know what needs to be automated

Ansible Data Migration from 2.4 to 2.5

 Hello Guys,


Recently while working I got a request where a customer wanted to migrate data from AWX to an Ansible AAP container-based platform, so I tried to come up with an approach that carries the least risk.

I have used the API, which can fetch all the required configurations so they can then be imported back into the new system. The downside is that credentials and users can't be migrated, as they contain sensitive information which is not exposed via the API. Here are the playbooks.

---
- name: Export all AAP config
  hosts: ctrl
  gather_facts: true
  tasks:
    - name: Export all assets
      awx.awx.export:
        controller_host: <ip address of controller>
        controller_username: admin
        controller_password: <password of controller>
        validate_certs: false
        all: true
      register: export_output
      delegate_to: localhost
      run_once: true

    - name: Show all assets from our export
      ansible.builtin.debug:
        var: export_output

    - name: Display export completion message
      ansible.builtin.debug:
        msg: "AAP configuration export completed successfully."

    - name: Save export to file
      ansible.builtin.copy:
        # Serialize as JSON so the import playbook can parse it with from_json
        content: "{{ export_output.assets | to_nice_json }}"
        dest: "/home/nrathi/org.json"
      delegate_to: ctrl
      run_once: true
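The save/import pair hinges on a simple round trip: the assets are written out as JSON text and parsed back on the other side, which is what `lookup('file', 'org.json') | from_json` does. A tiny Python sketch of the same round trip, with a toy assets dict (the keys are illustrative, not the real export schema):

```python
import json

# Toy stand-in for export_output.assets
assets = {"organizations": [{"name": "Default"}], "teams": []}

text = json.dumps(assets, indent=2)  # like to_nice_json on export
restored = json.loads(text)          # like from_json on import

print(restored == assets)  # True: nothing is lost in the round trip
```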

To import, we need to place the file on the new AAP system alongside the import playbook and run it:

---
- name: Import all AAP config
  hosts: ctrl
  gather_facts: true
  tasks:
    - name: Display start message
      ansible.builtin.debug:
        msg: "Starting AAP configuration import process."

    - name: Import all assets
      awx.awx.import:
        controller_host: 192.168.64.67:8443
        controller_username: admin
        controller_password: primod123
        validate_certs: false
        assets: "{{ lookup('file', 'org.json') | from_json }}"
      delegate_to: localhost

    - name: Display import completion message
      ansible.builtin.debug:
        msg: "AAP configuration import completed successfully."


Thursday, June 26, 2025

Open Source Compliance Management with Ansible

 Hello Guys,

Recently I was working on a compliance management project and was looking for an open source tool which could help me identify the state of my servers against the compliance policy. After a quick search I came across OpenSCAP, an open source compliance management tool that solved my problem, so I wrote a simple playbook which helps me identify the issues and also helps me fix them.

Here is the playbook I wrote:

---
- name: Check for known CVEs
  hosts: all
  tasks:
    - name: Install OpenSCAP (if not already installed)
      ansible.builtin.package:
        name: openscap-scanner
        state: present

    - name: Run OpenSCAP scan
      ansible.builtin.command: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
      register: cve_scan
      # oscap exits 2 when at least one rule fails; only treat other codes as errors
      failed_when: cve_scan.rc not in [0, 2]

    - name: Save CVE report to file
      ansible.builtin.copy:
        content: "{{ inventory_hostname }},{{ cve_scan.stdout }}"
        dest: /var/ansible/cve_results.csv
      delegate_to: localhost
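One caveat with the save step: the file is comma-named but the content is written as a raw string, and oscap's stdout contains commas and newlines, so a plain `hostname,output` concatenation will not parse as one row per host. A hedged sketch of what proper CSV quoting would do (the variable values are made up; the playbook's real values come from `inventory_hostname` and `cve_scan.stdout`):

```python
import csv
import io

# Stand-ins for the playbook's values
inventory_hostname = "web01"
scan_output = "Title: Ensure /tmp is configured\nResult: fail, severity high"

# csv.writer quotes fields containing commas or newlines, keeping one logical
# row per host even though the payload spans lines.
buf = io.StringIO()
csv.writer(buf).writerow([inventory_hostname, scan_output])
row = next(csv.reader(io.StringIO(buf.getvalue())))

print(row[0])  # web01, with the messy scan output intact in row[1]
```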

Let me know your thoughts on this.

 

Saturday, April 5, 2025

Get Notification on Failed hosts in Ansible Play

 Hello Guys,

I was working on writing a simple playbook to make sure that the nginx service is up and running. While writing it, I realised that just making sure the service is running is not enough, so I added one more module which makes sure firewall port 80 is opened permanently. I have also added handlers to reload the firewall so the changes take effect.

Then I realised I also need to make sure that if the Linux host is unreachable, I get an alert stating the server is unreachable.

I know it's a limited use case, but I am using it with EDA (Event-Driven Ansible).

Below is the playbook I came up with:

---
- name: Restarting the Nginx if port 80 is down
  hosts: all
  gather_facts: false
  force_handlers: true
  ignore_unreachable: true
  tasks:
    - name: Ping the host
      ansible.builtin.ping:
      register: ping_result

    - name: Ping is not successful
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is not reachable from Ansible Controller...!"
      when: ping_result.unreachable is defined

    - name: Add unreachable hosts to a list
      ansible.builtin.set_fact:
        unreachable_hosts: "{{ unreachable_hosts | default([]) + [inventory_hostname] }}"
      when: ping_result.unreachable is defined

    - name: Firewalld | Open port 80 using firewalld
      ansible.posix.firewalld:
        port: 80/tcp
        permanent: yes
        state: enabled
      notify: Reload firewalld | To apply changes
      when: ping_result.unreachable is not defined

    - name: Make sure service is up and running | Nginx service
      ansible.builtin.service:
        name: nginx
        state: started
      become: true
      register: nginx_restart
      when: ping_result.unreachable is not defined

    - name: Generate email content to send | Server is unreachable
      ansible.builtin.template:
        src: email_alert.html.j2
        dest: /tmp/alert_email.html
      run_once: true
      delegate_to: 127.0.0.1
      when: ping_result.unreachable is defined

    - name: Email alert if server is unreachable
      community.general.mail:
        host: smtp.gmail.com
        port: 587
        subtype: html
        to:
          - "vijay9867206455@gmail.com"
        subject: "Alert: Host not reachable on SSH {{ inventory_hostname }}"
        body: "{{ lookup('file', '/tmp/alert_email.html') }}"
        username: "abc@gmail.com"
        password: "your_secure_password"
      when: ping_result.unreachable is defined
      run_once: true
      delegate_to: 127.0.0.1
      changed_when: true

  handlers:
    - name: Reload firewalld | To apply changes
      ansible.builtin.service:
        name: firewalld
        state: reloaded


Below is the HTML template I am using:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Host Down</title>
<style>
    table {
        width: 100%;
        border-collapse: collapse;
        font-family: Arial, sans-serif;
        background-color: #FF0000;
        margin: 0 auto;
    }
    th, td {
        padding: 12px;
        border: 1px solid #ddd;
        text-align: left;
    }
    th {
        background-color: #17469E;
        color: white;
        text-transform: uppercase;
        font-size: 14px;
    }
    td {
        background-color: #f9f9f9;
        font-size: 14px;
    }
    td.label {
        font-weight: bold;
        background-color: #e0e0e0;
    }
    .title {
        text-align: center;
        font-size: 18px;
        font-weight: bold;
        margin-bottom: 20px;
        color: #333;
    }
    .container {
        width: 80%;
        margin: 0 auto;
    }
</style>
</head>

<body>

<p>Dear Team,</p>

<p>This is an automated alert to inform you:</p>
<p>The host is not reachable from Ansible on the required SSH or WinRM port.</p>

<table>
<tr>
<th>Host</th>
</tr>
<tr>
<td>{{ inventory_hostname }}</td>
</tr>
</table>
<p>Best regards,<br/>abc@gmail.com</p>

</body>

</html>


# ansible-playbook -i inventory restart_nginx.yml


Tuesday, February 25, 2025

Lock Linux Users Who Do Not Log In for 30 Days

 Hello Guys,

A few days back, a somewhat strange requirement came to me: they wanted to lock a local user if he/she does not log in to the system for more than 30 days.

  1. He/She should not be able to login into the system
  2. SSH into the system
  3. His/Her account should be lock
---
- name: Lock users who have not logged in the last 30 days (excluding system users)
  hosts: linux_servers
  become: yes
  tasks:
    - name: Get list of users who have not logged in within 30 days
      ansible.builtin.shell: "lastlog -b 30 | awk 'NR>1 && $3!=\"Never\" {print $1}'"
      register: inactive_users
      changed_when: false

    - name: Get list of system users (UID < 1000)
      ansible.builtin.shell: "awk -F: '$3 < 1000 {print $1}' /etc/passwd"
      register: system_users
      changed_when: false

    - name: Display inactive users
      ansible.builtin.debug:
        var: inactive_users.stdout_lines

    - name: Display system users (excluded)
      ansible.builtin.debug:
        var: system_users.stdout_lines

    - name: Lock inactive users (excluding system users)
      ansible.builtin.command: "usermod --lock {{ item }}"
      loop: "{{ inactive_users.stdout_lines }}"
      when: item not in system_users.stdout_lines and item not in ["root", "nrathi", "nobody", "other_imp_user"]
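The `when:` condition on the lock task is doing set-subtraction: lock only users that are neither system accounts nor on the exception list. A small Python sketch of that filter, with made-up usernames:

```python
# Stand-ins for the registered task outputs and the hardcoded exception list
inactive_users = ["root", "appsvc", "jdoe", "nobody"]           # from lastlog -b 30
system_users = ["root", "bin", "daemon", "nobody"]              # UID < 1000
exceptions = ["root", "nrathi", "nobody", "other_imp_user"]     # playbook's list

# Equivalent of: item not in system_users and item not in exceptions
to_lock = [u for u in inactive_users
           if u not in system_users and u not in exceptions]

print(to_lock)  # only these names would be passed to `usermod --lock`
```

Only `appsvc` and `jdoe` survive here; root and nobody are filtered out twice over, which is exactly the safety net the playbook needs.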


Enjoy...guys