LXD: Creating Reusable RHEL 9 VM Images


Following the LXD theme of my previous posts, today I’m going to walk you through creating a RHEL 9 VM on LXD, and then show you how to turn that VM into an image. This way, you can spin up new RHEL 9 VMs whenever you want—without having to sit through the whole install/setup process again.

To follow along, you’ll need an active Red Hat subscription. If you’re doing this for personal use (like self-development, home lab tinkering, or just because you enjoy virtual machines as much as Netflix), you can download RHEL for free when you sign up for the Red Hat Developer Subscription for Individuals.

No-cost Red Hat Enterprise Linux Individual Developer Subscription: FAQs

1. What is the Red Hat Developer program’s Red Hat Developer Subscription for Individuals?

The Red Hat Developer Subscription for Individuals is a no-cost offering of the Red Hat Developer program that includes access to Red Hat Enterprise Linux, among other Red Hat products. It is designed for individual developers and is available to anyone who joins the program.

2. What Red Hat Enterprise Linux developer subscription is made available at no cost?

The no-cost Red Hat Developer Subscription for Individuals includes Red Hat Enterprise Linux along with numerous other Red Hat technologies. You can access it by joining the Red Hat Developer program at developers.redhat.com/register; joining the program is free.

Creating the VM

I’ll be using RHEL 9 here, but the same steps apply to other versions.

a. Create an empty VM:

lxc init rhel9 --vm --empty

b. Give the VM a 20GB root disk:

lxc config device override rhel9 root size=20GiB

Optional: increase CPU and RAM limits if needed (depending on your profile).
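For example, using the same syntax this guide uses later for the Windows VM, the limits can be bumped like so (the values here are illustrative; size them to your host):

```shell
# Illustrative limits; adjust to your hardware
lxc config set rhel9 limits.cpu=4 limits.memory=8GiB
```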

c. Add the install disk:

lxc config device add rhel9 install disk source=/home/victor/Downloads/OS/rhel-9.6-x86_64-dvd.iso boot.priority=10

d. Start the VM and go through the RHEL install (not covered here):

lxc start rhel9

e. Remove the install disk when the install is complete:

lxc device remove rhel9 install

Install the LXD agent

It’s a good idea to include the LXD agent in your image—otherwise, you’ll miss out on nice integration features.

a. Mount the config device:

mkdir /mnt/iso

mount -t virtiofs config /mnt/iso

cd /mnt/iso

b. Install and reboot:

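The config drive mounted above ships the agent files together with an install script; per LXD's documented agent setup, running the script and rebooting is all that's needed (a sketch, assuming the /mnt/iso mount point used above):

```shell
cd /mnt/iso
./install.sh   # copies the lxd-agent binary into place and enables its service
reboot
```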

c. Check that the agent is running:

$ lxc exec rhel9 -- systemctl is-active lxd-agent
active

Prepare the OS

Run Updates

This section is optional; follow it only if you want the image to ship with current packages.

a. Connect the VM to your Red Hat account:

I’m using the Red Hat Developer Subscription for Individuals here.

# rhc connect
Connecting rhel9 to Red Hat.
This might take a few seconds.

Username: jdoe
Password: *********

● Connected to Red Hat Subscription Management
● Connected to Red Hat Insights
● Activated the Remote Host Configuration daemon

Successfully connected to Red Hat!

Manage your connected systems: https://red.ht/connector

b. Install updates and reboot:

# yum update -y

# dnf needs-restarting -r || reboot
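The `||` in that reboot step works because `dnf needs-restarting -r` exits 0 when no reboot is needed and non-zero when one is required. A system-safe sketch of the same pattern, with a stand-in function in place of the real command:

```shell
# Stand-in for `dnf needs-restarting -r`; pretend a reboot is required
needs_restarting() { return 1; }

# `cmd || action` runs the action only when cmd exits non-zero
needs_restarting || echo "reboot required"   # → reboot required
```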

c. Install any packages or make tweaks you’d like baked into the image.

Clean up the OS

Before creating the image, let’s tidy things up.

a. Disconnect the VM from Red Hat’s network:

# rhc disconnect
Disconnecting rhel9 from Red Hat.
This might take a few seconds.

● Deactivated the Remote Host Configuration daemon
● Disconnected from Red Hat Insights
● Disconnected from Red Hat Subscription Management

Manage your connected systems: https://red.ht/connector

b. Connect to the terminal as root (either via the Terminal tab on LXD’s UI, or with lxc exec rhel9 -- /bin/bash).

c. Clean DNF cache:

dnf clean all

d. Clean subscription-manager data:

subscription-manager clean

e. Remove SSH keys:

rm -f /etc/ssh/ssh_host_*

f. Clean up logs:

journalctl --vacuum-time=1d

rm -rf /var/log/* /tmp/*

g. Change hostname to a default one:

hostnamectl set-hostname localhost.localdomain

h. Reset the machine ID (leaving /etc/machine-id empty makes systemd generate a fresh ID on first boot):

rm -f /etc/machine-id

rm -f /var/lib/dbus/machine-id

touch /etc/machine-id

i. Delete the user if you created one (optional):

userdel -rf redhat

j. Delete root’s history (optional):

rm -f /root/.bash_history

k. Shutdown the machine:

poweroff

Edit the VM Metadata

Before publishing, edit metadata so your image looks nice and tidy:

lxc config metadata edit rhel9

Example:

architecture: x86_64
creation_date: 1758825660
expiry_date: 0
properties:
  architecture: x86_64
  description: Red Hat Enterprise Linux 9.6 (Plow)
  os: rhel
  release: Plow
  version: "9.6"
templates: {}

Tip: to get the current date in Unix time, use date +%s

Create the Image

We are now ready to create the image with lxc publish:

$ lxc publish rhel9 --alias rhel96.x86_64
Instance published with fingerprint: e4ff15c38889676cb3fe1ae0268122294856414b40275ef61cfe0baf23330ae3

Launch a New VM from the Image

And finally, create a fresh VM without all the hassle:

$ lxc launch rhel96.x86_64 rhel9-test
Launching rhel9-test

That’s it—you’ve got a reusable RHEL 9 image. Future VM creation is now basically a one-liner. Way better than babysitting an ISO every time.

How to install Windows on LXD

Yes, you can run Windows 10 or 11 inside LXD. It’s actually not that hard—just a few prep steps and some command-line magic. These instructions are written for Arch, but the process is nearly the same on other distros (package names may change a bit).

Let’s go.

Step 1: Preparing the Image

a. Grab the ISO:

Download Windows from Microsoft like a law-abiding citizen: 👉 https://www.microsoft.com/en-ca/software-download

b. Install the required packages

pacman -S distrobuilder libguestfs wimlib cdrkit

c. Repack the ISO for LXD. Note that I’m creating a Windows 10 VM

sudo distrobuilder repack-windows --windows-arch=amd64 Win10_22H2_English_x64v1.iso Win10_22H2_English_x64v1-lxd.iso

Step 2: Prepping the VM

a. Create an empty VM:

$ lxc init windows10 --empty --vm
Creating windows10

b. Give it a bigger disk

$ lxc config device override windows10 root size=55GiB
Device root overridden for windows10

c. Add CPU and memory

lxc config set windows10 limits.cpu=4 limits.memory=6GiB

d. Add a trusted platform module device

$ lxc config device add windows10 vtpm tpm path=/dev/tpm0
Device vtpm added to windows10

e. Add audio device

lxc config set windows10 raw.qemu -- "-device intel-hda -device hda-duplex -audio spice"

f. Add the install disk. Note that it needs the absolute path

$ lxc config device add windows10 install disk source=/home/victor/Downloads/OS/Win10_22H2_English_x64v1-lxd.iso boot.priority=10
Device install added to windows10

Step 3: Install Windows

a. Start the VM and connect:

lxc start windows10 --console=vga

b. “Press any key” to boot from CD and install Windows

c. Go through the usual Windows install:

You can also use the LXD Web UI console to monitor reboots.

Step 4: Clean Up

Remove the install ISO when you’re done:

$ lxc config device remove windows10 install
Device install removed from windows10

Conclusion

And that’s it—you now have Windows running on LXD. Pretty painless, right?

Exploring LXD (The Linux Container Daemon)

Last week, I attended a virtual talk hosted by Canonical on building a homelab with microservices. During the session, I discovered a new (to me) and very cool utility that’s perfect for managing virtual machines and containers from a centralized interface: LXD, or the Linux Container Daemon.


While LXD isn’t new—it’s actually been around since 2016—it’s still flying under the radar for a lot of homelab enthusiasts. After seeing it in action, I realized it’s a hidden gem worth exploring.

Why LXD Stands Out

LXD offers a unified way to manage both containers and VMs—whether through the command line (lxc), a web-based UI, or via REST APIs (though I won’t dive into the API side here).

This makes LXD an excellent choice for home labs because it lets you manage everything from one central “application” instead of juggling multiple tools for VMs and containers separately.

Features That Caught My Attention

Here are some of the standout features that make LXD so powerful:

  • Run System Containers
    • LXD can run full Linux distributions inside containers (unlike Docker, which is app-focused).
    • Containers behave like lightweight virtual machines.
    • You can run Ubuntu, Alpine, Debian, CentOS, etc., inside LXD containers.
  • Run Virtual Machines
    • LXD supports running full virtual machines (VMs) using QEMU/KVM.
    • This allows mixing containers and VMs on the same host with a single tool.
  • Manage Container & VM Lifecycle
    • Create, start, stop, pause, delete containers/VMs.
    • Snapshots and restore functionality.
    • Clone containers or VMs.
  • Image Management
    • Download and use images from remote repositories.
    • Build and publish your own custom container/VM images.
  • Networking
    • LXD provides built-in bridged, macvlan, and fan networking modes.
    • Supports IPv4 and IPv6, NAT, and DNS.
  • Resource Limits
    • Apply resource limits like CPU, RAM, disk I/O, network bandwidth.
    • Useful for multi-tenant or production environments.
  • Cluster Mode
    • LXD supports clustering — multiple nodes sharing the same configuration.

There’s a lot more that LXD can do, but these are the features that really stood out for my personal homelab use case.

Getting Started

LXD currently ships via Snap packages. You can easily install it by running the command below, and that will also install all the requirements:

sudo snap install lxd

Once installed, you’ll need to initialize it. Running lxd init will prompt a series of configuration questions (e.g., storage, network type, clustering, etc.). If you’re just setting things up for testing, feel free to use the same answers I did below — but it’s always a good idea to consult the official LXD Documentation for a deeper understanding:

$ sudo lxd init

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, powerflex) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
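Those same answers can be replayed non-interactively: lxd init accepts a YAML preseed on stdin (`lxd init --preseed < preseed.yaml`). A sketch mirroring the choices above — the field values are my assumptions based on those answers, so adjust to taste:

```yaml
config:
  core.https_address: '[::]:8443'
storage_pools:
- name: default
  driver: dir
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
```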

The lxc command is a command-line client used to interact with the LXD daemon. To use the lxc command as a non-root user, you’ll need to add your user to the ‘lxd’ group:

sudo usermod -a -G lxd [my_user]

⚠️ Important Notes:

  • This change only takes effect after you log out and back in.
  • Adding a non-root user to the ‘lxd’ group essentially grants that user root access on the host.

Configuring Access to the WEB UI

At this point, LXD should be running. You can now access the Web UI at https://127.0.0.1:8443/.

Since it uses a self-signed certificate, your browser will warn you. Just click “Accept the Risk and Continue” (or equivalent in your browser):

webui1

LXD uses certificate authentication, so you’ll need to generate and trust one.

a. Click “Create a new certificate”:

webui2

b. Then click “Generate”:

webui3

c. Enter a password for the certificate and click on “Generate certificate”:

webui4

d. Download the certificate and trust it from the terminal:

webui5

$ lxc config trust add Downloads/lxd-ui-127.0.0.1.crt

e. To access the Web UI you’ll also need to import the .pfx file into your browser’s certificate store. Follow your browser’s instructions for importing client certificates:

webui6

f. Once imported, restart your browser and visit https://127.0.0.1:8443/ again; you should be logged in automatically:

webui7

Additional Configuration

By default, when ‘dir’ is chosen as the storage type, the pool lives inside the LXD snap’s data directory (/var/snap/lxd/common/lxd/storage-pools/default). You will probably want to change that. Unfortunately, you can’t relocate the default pool, so we’ll just add a new one and set it as the default.

Let’s make this change via command line.

a. First create a new volume pool (I created mine in /mnt/storage2/VMs/lxd):

$ lxc storage create main dir source=/mnt/storage2/VMs/lxd

Storage pool main created

b. You can check that it was created with:

$ lxc storage list

+---------+--------+------------------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                     SOURCE                     | DESCRIPTION | USED BY |  STATE  |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| default | dir    | /var/snap/lxd/common/lxd/storage-pools/default |             | 1       | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| main    | dir    | /mnt/storage2/VMs/lxd                          |             | 0       | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+

c. Now let’s set it as the default pool:

lxc profile device set default root pool=[pool name]

When you create an instance, LXD automatically creates the required storage volumes for it, so I won’t cover volume creation here.

Creating Instances

Creating both container and VM instances via the UI is extremely easy.

Containers

We are going to create a container based on the Amazon Linux 2023 image.

a. Back on the UI, browse to “Instances” and click on “Create instance”:

b. Click on “Browse images”:

c. Filter the distribution by “Amazon Linux”, the type as “Container”, and click on “Select”:

d. Give it a name and description and click on “Create and start”:

Tip: Don’t forget to poke around the options on the left next time you create an instance.

It should only take a few seconds for your container to be created and start running. You can access the container from the “Terminal” tab:

You can also access it via the terminal with lxc console, however that will require a password:

VMs

We are going to create an Ubuntu VM with graphical interface.

a. Repeat the same steps as before, but now select Ubuntu as the distribution, VM as the type, and an LTS version:

b. Give it a name/description, and under “Resource limits” set the memory limit to ‘2 GiB’. Then click on “Create and start”:

c. Once the VM is running, access the terminal tab the same way we did before and install ubuntu-desktop:

apt update && apt install ubuntu-desktop

Now is a great time to go grab a coffee… this will take a little while…

d. When the install is complete, change the password for the ‘ubuntu’ user and reboot:

# passwd ubuntu

New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: password updated successfully

# reboot

e. Go to the “Console” tab. You should see boot output and eventually be presented with the login window. You can log in and use the VM from your browser:

And you can also access the VM’s graphical interface via the lxc console command with the --type=vga flag:

lxc console ubuntu-2404 --type=vga

Conclusion

LXD might not be as flashy or widely known as Docker, but it fills a unique and valuable niche. It gives you the simplicity of containers and the power of full VMs — all under one roof. For homelabbers, that means less tool sprawl, cleaner management, and more flexibility to experiment.

Whether you’re just getting started with containers or looking to consolidate your virtualization setup, LXD is absolutely worth a try. With its web UI, clustering support, and straightforward setup, it can quickly become the backbone of a modern homelab.

Top Skills That Will Make You Stand Out as a Linux Admin: Part 1


Working as a Linux administrator often means being pulled in many directions and juggling a variety of technologies. That doesn’t leave much time to revisit the basics. Over time, work becomes repetitive, and we end up doing things the same way we learned years ago—without stopping to refine or discover smarter approaches.

With this series of posts, I hope to share tips and shortcuts that will help you work faster and smarter—and maybe even stand out among your coworkers by pulling off a few neat tricks.

Try these out at least once—and if you’re still stuck going to the office, print them out and stick them on your cubicle wall.

Today, we’re going to focus on Bash navigation and history.


Tab Completion

We all know that [TAB] can be used for both path and command completion, but I’m always surprised by how many admins who aren’t fully comfortable with the shell forget to use it. You’re not going to break anything or use up some imaginary “tab credits.” So use the hell out of tab completion—keep hitting that key until you land on what you need.

And here’s a bonus: don’t forget that [SHIFT]+[TAB] takes you back. So if you scroll past the option or path you were aiming for, just reverse it. Simple, fast, and much smoother than retyping.

Command Navigation

The commands below will help you navigate more efficiently and productively in a shell:

  • Ctrl+a - Move to the beginning of the line (same as [HOME])
  • Ctrl+e - Move to the end of the line (same as [END])
  • Ctrl+left/right - Move backward/forward one word
  • Ctrl+w - Delete the previous word
  • Alt+t - Swap the current word with the previous one
  • Esc+v - Edit the current line in an external editor (this is the edit-and-execute-command function). It uses the default editor set in $VISUAL or $EDITOR, which you can change if needed


Note: I remapped my edit-and-execute-command shortcut to Esc+i, which feels more comfortable for me.
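If you'd like to try a similar remap, the readline function is named edit-and-execute-command; a one-line sketch for ~/.bashrc (Esc+i is just my choice of key):

```shell
# Open the current command line in $VISUAL/$EDITOR with Esc+i
bind '"\ei": edit-and-execute-command'
```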

Path Navigation

Navigating between directories is something sysadmins do constantly. Being able to move around quickly—and knowing a few tricks—can speed up your work exponentially. It can even impress your coworkers when they see how fast you glide through the shell.

  • Tab completion — We already covered this, but it’s worth repeating: use it and abuse it!
  • cd — Jumps straight back to your home folder. It sounds obvious, but many people forget this one.
  • cd - — Switches to the previous directory you were in. Great for bouncing back and forth.
  • cd ~ — Navigates to your home folder. The tilde (~) is very handy for quickly moving to subdirectories inside your home directory, e.g.: cd ~/Downloads
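Under the hood, cd - swaps $PWD with $OLDPWD and prints where it landed, which makes the bounce easy to see:

```shell
cd /tmp
cd /usr
cd -      # prints /tmp and switches back
pwd       # → /tmp
```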


History

As you probably already know, history lets you go back to previous commands and re-execute them. Mastering how to quickly search and re-use previous commands saves you from tediously retyping the same things over and over.

  • Ctrl+r — Search your history for a specific string
    • Press Ctrl+r again to step back through older matches; if you skip past the one you wanted, Ctrl+s searches forward (you may need stty -ixon to free that key)
    • Keep typing after pressing Ctrl+r to narrow the search; only commands matching the string are shown
  • !! — Re-execute the last command. Super handy when you forget sudo. Just type sudo !!
  • !$ — The last argument of your previous command. Useful if you mistyped a command and want to reuse its argument, e.g.:

    userdle testuser   # typo
    userdel !$         # fixes it using the last argument
    
    • It also combines nicely with new commands, e.g. cat !$ to inspect the file you just referenced
  • Alt+. — Inserts the last argument of your previous command directly into the current line (and shows it to you). It’s effectively the same as typing !$, but more interactive since it shows the result as you type. Keep pressing it to cycle through arguments from earlier commands.

💡 Tips:

  • Remember to use Esc+v to quickly open a previous command in your editor if it needs modification
  • Bash history can be customized to be even more powerful:

    • HISTTIMEFORMAT — Add timestamps to your command history. E.g.:

      $ history | tail -2
      1029  +2025/08/22 14:52:19 uptime
      1030  +2025/08/22 14:52:21 history | tail -2
      
    • HISTSIZE and HISTFILESIZE — Increase how many commands are stored in memory and on disk

    • shopt -s histappend — Append to your history file instead of overwriting it. This way you don’t lose commands when multiple shells are open
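Putting those three tips together, a sketch of the relevant ~/.bashrc lines (the format and sizes are just examples):

```shell
# Timestamp every history entry
export HISTTIMEFORMAT='%Y/%m/%d %T '
# Keep more commands in memory and on disk
export HISTSIZE=10000
export HISTFILESIZE=20000
# Append to the history file instead of overwriting it on shell exit
shopt -s histappend
```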

Conclusion

That wraps up our quick dive into Bash navigation, history, and some handy shortcuts to speed up your workflow. Try them out, and stay tuned—there’s plenty more cool stuff coming in the next posts!

Logrotate: Simplifying Log Management

logrotate is a system utility designed to manage log files, ensuring that logs don’t consume excessive disk space by rotating, compressing, and removing old logs according to user-defined rules. This post will help you get familiar with logrotate configuration and usage.

By configuring and running logrotate effectively, you can automate log management, save disk space, and keep your logs manageable over time.

Main Config

The primary configuration file for logrotate is located at /etc/logrotate.conf. It sets default values and contains the directory for additional configuration files (typically /etc/logrotate.d):

# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext

# uncomment this if you want your log files compressed
#compress

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d

# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}

/var/log/btmp {
    missingok
    monthly
    create 0600 root utmp
    rotate 1
}

Configuration Files

The configuration file for each application or log location usually resides under /etc/logrotate.d/ and can contain various options to control how logs are handled.

Common Options

  • hourly – Rotate logs every hour (requires cron to run logrotate hourly).
  • daily – Rotate logs daily.
  • weekly [weekday] – Rotate logs once per week.
  • monthly – Rotate logs the first time logrotate is run in a month.
  • yearly – Rotate logs once a year.
  • rotate [count] – Defines how many rotated logs to keep. If set to 0, logs are deleted instead of being rotated.
  • minsize [size] – Rotate logs when they exceed a specific size, while also respecting the time interval (daily, weekly, etc.).
  • size [size] – Rotate only when log size exceeds the defined limit.
  • maxage [days] – Remove rotated logs after a specified number of days.
  • missingok – Continue without error if the log file is missing.
  • notifempty – Skip rotation if the log file is empty.
  • create [mode] [owner] [group] – Create a new log file with specified permissions, owner, and group.
  • compress – Compress rotated logs.
  • delaycompress – Delay compression until the next rotation cycle.
  • copytruncate – Truncate the log after copying it to the rotated file.
  • sharedscripts – Ensures that post-rotation scripts run only once, even when the pattern matches multiple log files.
  • postrotate/endscript – Define commands to be executed after log rotation:
postrotate
  /opt/life/tools/stop-approve.sh && /opt/life/tools/start-approve.sh
endscript

Examples

Example 1: Deleting Old NMON Files

This example deletes NMON output files once they exceed 5 MB (rotate 0 removes rather than keeps rotations) and prunes any remaining rotated files older than 90 days:

# Logrotate config for nmon
/var/log/nmon/*.nmon {
  rotate 0
  maxage 90
  size 5M
}

Example 2: Compressing LMS Logs

In this case, logrotate keeps up to 10 compressed rotations, rotating a log once it exceeds 5 MB and removing rotated files older than 360 days:

# Logrotate config for LMS
/home/my_admin/cmd/log/*.log {
  rotate 10
  maxage 360
  size 5M
  compress
}

Creating a New Configuration File

To create a new log rotation rule, drop a configuration file into the /etc/logrotate.d/ directory. Ensure the scheduler for logrotate (either cron or systemd) is set up correctly.

Tip: Double-check your scheduler settings to ensure smooth log rotation.

Testing and Running Logrotate

Testing

Dry-Run

Before applying your configuration, you can test it with a dry-run using the -d option:

logrotate -d [my_config_file].conf

Verbosity

To view detailed steps, run logrotate with the -v option (useful with -d for dry-run testing):

logrotate -vd [my_config_file].conf

Scheduling Logrotate

logrotate can be scheduled using either cron or systemd. Here’s a quick overview of both methods:

Cron

When using cron, the logrotate job is typically defined in /etc/cron.daily/logrotate:

#!/bin/sh

/usr/sbin/logrotate -s /var/lib/logrotate/logrotate.status /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

Systemd

Systemd can also manage logrotate via logrotate.timer and logrotate.service.

logrotate.timer:

[Unit]
Description=Daily rotation of log files

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

logrotate.service:

[Unit]
Description=Rotate log files

[Service]
Type=oneshot
ExecStart=/usr/sbin/logrotate /etc/logrotate.conf

Personal Cron Job

You can also schedule logrotate manually through a personal cron job:

# Runs logrotate daily
@daily /sbin/logrotate -s /home/my_admin/cmd/log/logrotate.status /home/my_admin/cmd/logrotate.conf >> /home/my_admin/cmd/log/cron.log 2>&1

Creating Logrotate With Ansible

Managing log rotation through Ansible is a powerful way to automate log maintenance across multiple servers. Below is an example of how you can create a logrotate configuration using Ansible.

Ansible Playbook Example

This playbook installs logrotate (if it’s not already installed) and creates a new configuration file under /etc/logrotate.d/ for an application:

---

- hosts: all
  gather_facts: true
  become: true

  vars:
    logrotate_conf: |
      # Logrotate for application
      /var/log/application/* {
        # Keep 4 versions of file
        rotate 4

        # compress rotated files
        compress

        # Rotates the log files every week
        weekly

        # Ignores the error if the log file is missing
        missingok

        # Does not rotate the log if it is empty
        notifempty

        # Creates a new log file with specified permissions
        create 0755 apache splunk       
      }

  tasks:

    - name: Installs logrotate
      package:
        name: logrotate

    - name: Creates logrotate configuration
      copy:
        content: "{{ logrotate_conf }}"
        dest: /etc/logrotate.d/application
        owner: root
        group: root
        mode: '0644'

Explanation

  • logrotate_conf variable: Defines the configuration file for logrotate. It includes options such as file rotation frequency, compression, and file permissions.

    • rotate 4 – Keeps 4 old versions of the log.
    • compress – Compresses the rotated logs.
    • weekly – Rotates the logs every week.
    • missingok – Ignores errors if the log file is missing.
    • notifempty – Skips log rotation if the log is empty.
    • create 0755 apache splunk – Creates a new log file with mode 0755, owned by user apache and group splunk.
  • Installs logrotate: The task ensures that logrotate is installed on the target servers.

  • Creates logrotate configuration: The copy task creates the custom logrotate configuration file under /etc/logrotate.d/, with appropriate permissions.


By using Ansible, you can streamline the management of log rotation across your environment, ensuring consistency in how logs are maintained across all your systems.
