Automating OS Updates on LXD Instances with Ansible

image 0

Keeping your LXD containers and VMs up to date can be a bit of a hassle — especially when you’re managing different distributions or even Windows instances. That’s why I built lxd-os-update, an Ansible project designed to automate the process of running OS updates across all your LXD environments.

It’s a simple yet powerful role that takes care of the entire cycle for you:

  • Starts stopped and frozen instances (configurable)
  • Runs OS updates for both APT and YUM-based Linux distros
  • Updates Windows servers via SSH and PowerShell
  • Restores each instance to its previous stopped or frozen state afterward

Why I Built It

When managing LXD VMs and containers, I often needed a way to keep everything patched without manually starting, updating, and stopping each instance. Ansible seemed perfect for the job — but I quickly ran into a limitation with the Ansible LXD inventory plugin.

The plugin hardcodes ansible_connection to ssh whenever it detects an IP address. That means even if you explicitly set ansible_connection: lxd, Ansible will still try to connect via SSH, using the IP address as ansible_host.

Here’s an example of what happens:

$ ansible-inventory --host ubuntu-2404
{
    "ansible_connection": "ssh",
    "ansible_host": "10.48.212.128",
    "ansible_lxd_os": "ubuntu",
    "ansible_lxd_type": "virtual-machine"
}

And when you try to run a task with the LXD connection plugin:

$ ansible -m ping ubuntu-2404 -e 'ansible_connection=lxd' -vvvv

You’ll get the error:

Error: Failed to fetch instance "10" in project "default": Instance not found

That’s because Ansible is trying to connect to an instance literally named “10”, taken from the IP address.

The Workaround: A Custom Inventory Script

To get around this, I created a custom inventory script, included in the project. It defines instances manually and correctly sets ansible_connection: lxd (except for Windows, which still uses SSH).

This approach bypasses the plugin’s issue and ensures Ansible connects directly through LXD when possible.
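
The script itself ships with the repository; just to illustrate the idea, here is a minimal sketch of what a dynamic inventory along those lines could look like. This is not the project's actual script: it assumes jq is installed on the host and that instances are grouped purely by their image.os config key.

#!/usr/bin/env bash
# Hypothetical dynamic inventory sketch: put Windows instances in their own
# group and force the lxd connection plugin for everything else.
set -euo pipefail

# Ansible calls inventory scripts with --list to fetch the full inventory
[ "${1:-}" = "--list" ] || { echo '{}'; exit 0; }

lxc list --format json | jq '{
  linux: {
    hosts: [ .[] | select(.config["image.os"] != "windows") | .name ],
    vars:  { ansible_connection: "lxd" }
  },
  windows: {
    hosts: [ .[] | select(.config["image.os"] == "windows") | .name ],
    vars:  { ansible_connection: "ssh" }
  },
  _meta: { hostvars: {} }
}'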

Requirements

Before running the playbook, make sure your environment meets these conditions:

  • Ansible on the host
    • Ansible collections (see requirements.yml; install command shown after this list):
      • community.general
      • ansible.windows
  • Each instance must have the image.os config attribute set (see below)
  • python3 should be installed inside the instances
  • The LXD agent must be installed and running inside the instances
  • For Windows instances, OpenSSH server must be enabled and configured for an administrator user
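
The collections can be installed in one go with ansible-galaxy:

ansible-galaxy collection install -r requirements.yml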

Setting the image.os Attribute

The automation relies on the image.os attribute to detect the OS family. You can view and set it like this:

$ lxc list -cns,image.os:OS
+------------------+---------+--------+
|       NAME       |  STATE  |   OS   |
+------------------+---------+--------+
| jellyfin         | STOPPED | ubuntu |
| rhel9            | STOPPED | rhel   |
| windows11        | RUNNING |        |   <--- MISSING
+------------------+---------+--------+

You can manually set the value with:

# Example for a Windows 11 instance
$ lxc config set windows11 image.os=windows

# Example for a RHEL 9 instance
$ lxc config set rhel9 image.os=rhel

Installing the LXD Agent

The LXD agent enables direct Ansible communication through the LXD plugin. Refer to your distribution’s documentation on how to install and start it.

Configuring Windows for SSH

For Windows instances, Ansible connects over SSH, so you’ll need to configure it, preferably with an SSH key.

a. Enable OpenSSH server and configure the firewall:

# Install the OpenSSH Server
Add-WindowsCapability -Online -Name (Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Server*').Name

# Start the sshd service
Start-Service sshd

# OPTIONAL but recommended: start sshd automatically at boot
Set-Service -Name sshd -StartupType 'Automatic'

# Confirm the Firewall rule is configured. It should be created automatically by setup. Run the following to verify
if (!(Get-NetFirewallRule -Name "OpenSSH-Server-In-TCP" -ErrorAction SilentlyContinue)) {
    Write-Output "Firewall Rule 'OpenSSH-Server-In-TCP' does not exist, creating it..."
    New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
} else {
    Write-Output "Firewall rule 'OpenSSH-Server-In-TCP' has been created and exists."
}

b. Test the connection
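
For example, from the LXD host (the user and address are placeholders):

ssh <admin-user>@<windows-instance-ip>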

c. Create a new ssh key pair if desired
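
For example (use -t rsa instead if you want the id_rsa/id_rsa.pub names used in the next step):

ssh-keygen -t ed25519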

d. Copy the contents of your public key (e.g.: id_rsa.pub) to C:\ProgramData\ssh\administrators_authorized_keys

e. Change the default shell to PowerShell by setting the DefaultShell value under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\OpenSSH:

New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force

f. Install the PSWindowsUpdate PowerShell Module for updates

# Install module
Install-Module -Name PSWindowsUpdate

# Enable execution
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

g. Update your inventory variables (group_vars/windows.yml) with your SSH details:

ansible_user:
ansible_ssh_private_key_file:

h. Then test your connection:

$ ansible windows11 -m raw -a 'get-date'
windows11 | CHANGED | rc=0 >>
Tuesday, October 28, 2025 7:48:35 PM

Role Variables

You can control the role’s behavior through the variables in its defaults and vars files.

Defaults - roles/os-update/defaults/main.yml

Enable the reboot of instances, if needed, after the update:

reboot_if_needed: true

Enable/disable updates on non-running instances:

update_non_running: true

Vars - roles/os-update/vars/main.yml

Specify which distros are treated as APT- or YUM-based. Add more distros as needed.

apt_distros:
  - debian
  - ubuntu
  - kali
  - raspbian

yum_distros:
  - rhel
  - oracle

Running the Playbook

Once configured, just run:

ansible-playbook playbooks/os-update.yml

Ansible will:

  1. Start any stopped or frozen instances (if configured)
  2. Run system updates for each one
  3. Reboot if required
  4. Return the instances to their previous state
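
The usual ansible-playbook options also apply if you want to scope a run; for example (the instance name is just an example):

# Target a single instance or inventory group instead of everything
ansible-playbook playbooks/os-update.yml --limit rhel9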

Final Thoughts

This setup has saved me a lot of time keeping my LXD environment consistent and secure. Whether you’re managing a small lab or a larger test cluster, automating OS updates with Ansible can make your maintenance much smoother — and less error-prone.

You can check out the full project on GitHub: 👉 https://github.com/victorbrca/lxd-os-update/

LXD: Creating Reusable RHEL 9 VM Images

image 0

Following the LXD theme of my previous posts, today I’m going to walk you through creating a RHEL 9 VM on LXD, and then show you how to turn that VM into an image. This way, you can spin up new RHEL 9 VMs whenever you want—without having to sit through the whole install/setup process again.

To follow along, you’ll need an active Red Hat subscription. If you’re doing this for personal use (like self-development, home lab tinkering, or just because you enjoy virtual machines as much as Netflix), you can download RHEL for free when you sign up for the Red Hat Developer Subscription for Individuals.

No-cost Red Hat Enterprise Linux Individual Developer Subscription: FAQs

1. What is the Red Hat Developer program’s Red Hat Developer Subscription for Individuals?

The Red Hat Developer Subscription for Individuals is a no-cost offering of the Red Hat Developer program that includes access to Red Hat Enterprise Linux among other Red Hat products. It is designed for individual developers.

2. What Red Hat Enterprise Linux developer subscription is made available at no cost?

The no-cost Red Hat Developer Subscription for Individuals includes Red Hat Enterprise Linux along with numerous other Red Hat technologies. You can access it by joining the Red Hat Developer program at developers.redhat.com/register; joining the program is free.

Creating the VM

I’ll be using RHEL 9 here, but the same steps apply to other versions.

a. Create an empty VM:

lxc init rhel9 --vm --empty

b. Give the VM a 20GB root disk:

lxc config device override rhel9 root size=20GiB

Optional: increase CPU and RAM limits if needed (depending on your profile).
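
For example (the values here are just a suggestion):

lxc config set rhel9 limits.cpu=2 limits.memory=4GiB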

c. Add the install disk:

lxc config device add rhel9 install disk source=/home/victor/Downloads/OS/rhel-9.6-x86_64-dvd.iso boot.priority=10

d. Start the VM and go through the RHEL install (not covered here):

lxc start rhel9

e. Remove the install disk when the install is complete:

lxc config device remove rhel9 install

Install the LXD agent

It’s a good idea to include the LXD agent in your image—otherwise, you’ll miss out on nice integration features.

a. Mount the config device:

mkdir /mnt/iso

mount -t virtiofs config /mnt/iso

cd /mnt/iso

b. Install and reboot:

picture 0
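
The screenshot shows the commands; with recent LXD releases the mounted config share ships an agent setup script, so the step is roughly:

# Run the agent installer from the mounted share, then reboot
./install.sh
reboot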

c. Check that the agent is running:

$ lxc exec rhel9 -- systemctl is-active lxd-agent
active

Prepare the OS

Run Updates

This section is optional; follow it only if you want the image to ship with the latest packages.

a. Connect the VM to your Red Hat account:

I’m using Red Hat Enterprise Linux Individual Developer Subscription

# rhc connect
Connecting rhel9 to Red Hat.
This might take a few seconds.

Username: jdoe
Password: *********

● Connected to Red Hat Subscription Management
● Connected to Red Hat Insights
● Activated the Remote Host Configuration daemon

Successfully connected to Red Hat!

Manage your connected systems: https://red.ht/connector

b. Install updates and reboot:

# yum update -y

# dnf needs-restarting -r || reboot

c. Install any packages or make tweaks you’d like baked into the image.

Clean up the OS

Before creating the image, let’s tidy things up.

a. Disconnect the VM from Red Hat’s network:

# rhc disconnect
Disconnecting rhel9 from Red Hat.
This might take a few seconds.

● Deactivated the Remote Host Configuration daemon
● Disconnected from Red Hat Insights
● Disconnected from Red Hat Subscription Management

Manage your connected systems: https://red.ht/connector

b. Connect to the terminal as root (either via the Terminal tab on LXD’s UI, or with lxc exec rhel9 -- /bin/bash).

c. Clean DNF cache:

dnf clean all

d. Clean subscription-manager data:

subscription-manager clean

e. Remove SSH keys:

rm -f /etc/ssh/ssh_host_*

f. Clean up logs:

journalctl --vacuum-time=1d

rm -rf /var/log/* /tmp/*

g. Change hostname to a default one:

hostnamectl set-hostname localhost.localdomain

h. Delete the machine ID:

rm -f /etc/machine-id

rm -f /var/lib/dbus/machine-id

touch /etc/machine-id

i. Delete the user if you created one (optional):

userdel -rf redhat

j. Delete root’s history (optional):

rm -f /root/.bash_history

k. Shutdown the machine:

poweroff

Edit the VM Metadata

Before publishing, edit metadata so your image looks nice and tidy:

lxc config metadata edit rhel9

Example:

architecture: x86_64
creation_date: 1758825660
expiry_date: 0
properties:
  architecture: x86_64
  description: Red Hat Enterprise Linux 9.6 (Plow)
  os: rhel
  release: Plow
  version: "9.6"
templates: {}

Tip: to get the current date in Unix time, use date +%s

Create the Image

We are now ready to create the image with lxc publish:

$ lxc publish rhel9 --alias rhel96.x86_64
Instance published with fingerprint: e4ff15c38889676cb3fe1ae0268122294856414b40275ef61cfe0baf23330ae3

Launch a New VM from the Image

And finally, create a fresh VM without all the hassle:

$ lxc launch rhel96.x86_64 rhel9-test
Launching rhel9-test

That’s it—you’ve got a reusable RHEL 9 image. Future VM creation is now basically a one-liner. Way better than babysitting an ISO every time.

How to install Windows on LXD

Yes, you can run Windows 10 or 11 inside LXD. It’s actually not that hard—just a few prep steps and some command-line magic. These instructions are written for Arch, but the process is nearly the same on other distros (package names may change a bit).

Let’s go.

Step 1: Preparing the Image

a. Grab the ISO:

Download Windows from Microsoft like a law-abiding citizen: 👉 https://www.microsoft.com/en-ca/software-download

b. Install the required packages

pacman -S distrobuilder libguestfs wimlib cdrkit

c. Repack the ISO for LXD. Note that I’m creating a Windows 10 VM

sudo distrobuilder repack-windows --windows-arch=amd64 Win10_22H2_English_x64v1.iso Win10_22H2_English_x64v1-lxd.iso

Step 2: Prepping the VM

a. Create an empty VM:

$ lxc init windows10 --empty --vm
Creating windows10

b. Give it a bigger disk

$ lxc config device override windows10 root size=55GiB
Device root overridden for windows10

c. Add CPU and memory

lxc config set windows10 limits.cpu=4 limits.memory=6GiB

d. Add a trusted platform module device

$ lxc config device add windows10 vtpm tpm path=/dev/tpm0
Device vtpm added to windows10

e. Add audio device

lxc config set windows10 raw.qemu -- "-device intel-hda -device hda-duplex -audio spice"

f. Add the install disk. Note that it needs the absolute path

$ lxc config device add windows10 install disk source=/home/victor/Downloads/OS/Win10_22H2_English_x64v1-lxd.iso boot.priority=10
Device install added to windows10

Step 3: Install Windows

a. Start the VM and connect:

lxc start windows10 --console=vga

b. “Press any key” to boot from CD and install Windows

c. Go through the usual Windows install.

You can also use the LXD WebUI console to monitor reboots.

Step 4: Clean Up

Remove the install ISO when you’re done:

$ lxc config device remove windows10 install
Device install removed from windows10

Conclusion

And that’s it—you now have Windows running on LXD. Pretty painless, right?

Exploring LXD (The Linux Container Daemon)

Last week, I attended a virtual talk hosted by Canonical on building a homelab with microservices. During the session, I discovered a new (to me) and very cool utility that’s perfect for managing virtual machines and containers from a centralized interface: LXD, or the Linux Container Daemon.

picture 0

While LXD isn’t new—it’s actually been around since 2016—it’s still flying under the radar for a lot of homelab enthusiasts. After seeing it in action, I realized it’s a hidden gem worth exploring.

Why LXD Stands Out

LXD offers a unified way to manage both containers and VMs—whether through the command line (lxc), a web-based UI, or via REST APIs (though I won’t dive into the API side here).

This makes LXD an excellent choice for home labs because it lets you manage everything from one central “application” instead of juggling multiple tools for VMs and containers separately.

Features That Caught My Attention

Here are some of the standout features that make LXD so powerful:

  • Run System Containers
    • LXD can run full Linux distributions inside containers (unlike Docker, which is app-focused).
    • Containers behave like lightweight virtual machines.
    • You can run Ubuntu, Alpine, Debian, CentOS, etc., inside LXD containers.
  • Run Virtual Machines
    • LXD supports running full virtual machines (VMs) using QEMU/KVM.
    • This allows mixing containers and VMs on the same host with a single tool.
  • Manage Container & VM Lifecycle
    • Create, start, stop, pause, delete containers/VMs.
    • Snapshots and restore functionality (quick example after this list).
    • Clone containers or VMs.
  • Image Management
    • Download and use images from remote repositories.
    • Build and publish your own custom container/VM images.
  • Networking
    • LXD provides built-in bridged, macvlan, and fan networking modes.
    • Supports IPv4 and IPv6, NAT, and DNS.
  • Resource Limits
    • Apply resource limits like CPU, RAM, disk I/O, network bandwidth.
    • Useful for multi-tenant or production environments.
  • Cluster Mode
    • LXD supports clustering — multiple nodes sharing the same configuration.
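
To give a taste of the lifecycle features above, snapshots and restores are one-liners with the lxc client (the instance and snapshot names are just examples):

# Take a snapshot, then roll back to it later
lxc snapshot mycontainer before-upgrade
lxc restore mycontainer before-upgrade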

There’s a lot more that LXD can do, but these are the features that really stood out for my personal homelab use case.

Getting Started

LXD currently ships via Snap packages. You can easily install it by running the command below, and that will also install all the requirements:

sudo snap install lxd

Once installed, you’ll need to initialize it. Running lxd init will prompt a series of configuration questions (e.g., storage, network type, clustering, etc.). If you’re just setting things up for testing, feel free to use the same answers I did below — but it’s always a good idea to consult the official LXD Documentation for a deeper understanding:

$ sudo lxd init

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, powerflex) [default=btrfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

The lxc command is a command-line client used to interact with the LXD daemon. To use the lxc command as a non-root user, you’ll need to add your user to the ‘lxd’ group:

sudo usermod -a -G lxd [my_user]

⚠️ Important Notes:

  • This change only takes effect after you log out and back in.
  • Adding a non-root user to the ‘lxd’ group essentially gives that user root access.

Configuring Access to the Web UI

At this point, LXD should be running. You can now access the Web UI at https://127.0.0.1:8443/.

Since it uses a self-signed certificate, your browser will warn you. Just click “Accept the Risk and Continue” (or equivalent in your browser):

webui1

LXD uses certificate authentication, so you’ll need to generate and trust one.

a. Click “Create a new certificate”:

webui2

b. Then click “Generate”:

webui3

c. Enter a password for the certificate and click on “Generate certificate”:

webui4

d. Download the certificate and trust it from the terminal:

webui5

$ lxc config trust add Downloads/lxd-ui-127.0.0.1.crt

To start your first container, try: lxc launch ubuntu:24.04
Or for a virtual machine: lxc launch ubuntu:24.04 --vm

e. To access the Web UI, you’ll also need to import the .pfx file into your browser’s certificate store. Follow your browser’s instructions for importing client certificates:

webui6

f. Once imported, restart your browser and visit https://127.0.0.1:8443/ again — you should be logged in automatically:

webui7

Additional Configuration

By default, when ‘dir’ is chosen as the storage type, the pool lives inside the LXD snap’s own directory (/var/snap/lxd/common/lxd/storage-pools/default). You will probably want to change that. Unfortunately you can’t relocate the default pool, so we’ll add a new one and make it the default.

Let’s make this change via command line.

a. First create a new volume pool (I created mine in /mnt/storage2/VMs/lxd):

$ lxc storage create main dir source=/mnt/storage2/VMs/lxd

Storage pool main created

b. You can check that it was created with:

$ lxc storage list

+---------+--------+------------------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                     SOURCE                     | DESCRIPTION | USED BY |  STATE  |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| default | dir    | /var/snap/lxd/common/lxd/storage-pools/default |             | 1       | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+
| main    | dir    | /mnt/storage2/VMs/lxd                          |             | 0       | CREATED |
+---------+--------+------------------------------------------------+-------------+---------+---------+

c. Now let’s set it as the default pool:

lxc profile device set default root pool=[pool name]
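
You can confirm the change with:

lxc profile show default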

When you create an instance, LXD automatically creates the required storage volumes for it, so I will not cover it here.

Creating Instances

Creating both container and VM instances via the UI is extremely easy.

Containers

We are going to create a container based on the Amazon Linux 2023 image.

a. Back on the UI, browse to “Instances” and click on “Create instance”:

b. Click on “Browse images”:

c. Filter the distribution by “Amazon Linux”, the type as “Container”, and click on “Select”:

d. Give it a name and description and click on “Create and start”:

Tip: Don’t forget to poke around the options on the left next time you create an instance.

It should only take a few seconds for your container to be created and start running. You can access the container from the “Terminal” tab:

You can also access it from the terminal with lxc console; however, that will require a password:

VMs

We are going to create an Ubuntu VM with a graphical interface.

a. Repeat the same steps as before, but now select Ubuntu as the distribution, VM as the type, and an LTS version:

b. Give it a name/description, and under “Resource limits” set the memory limit to ‘2 GiB’. Then click on “Create and start”:

c. Once the VM is running, access the terminal tab the same way we did before and install ubuntu-desktop:

apt update && apt install ubuntu-desktop

Now is a great time to go grab a coffee… this will take a little while…

d. When the install is complete, change the password for the ‘ubuntu’ user and reboot:

# passwd ubuntu

New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: password updated successfully

# reboot

e. Go to the “Console” tab. You should see boot output and eventually be presented with the login window. You can log in and use the VM from your browser:

You can also access the VM’s graphical interface via the lxc console command with the --type=vga flag:

lxc console ubuntu-2404 --type=vga

Conclusion

LXD might not be as flashy or widely known as Docker, but it fills a unique and valuable niche. It gives you the simplicity of containers and the power of full VMs — all under one roof. For homelabbers, that means less tool sprawl, cleaner management, and more flexibility to experiment.

Whether you’re just getting started with containers or looking to consolidate your virtualization setup, LXD is absolutely worth a try. With its web UI, clustering support, and straightforward setup, it can quickly become the backbone of a modern homelab.

Top Skills That Will Make You Stand Out as a Linux Admin: Part 1


Working as a Linux administrator often means being pulled in many directions and juggling a variety of technologies. That doesn’t leave much time to revisit the basics. Over time, work becomes repetitive, and we end up doing things the same way we learned years ago—without stopping to refine or discover smarter approaches.

With this series of posts, I hope to share tips and shortcuts that will help you work faster and smarter—and maybe even stand out among your coworkers by pulling off a few neat tricks.

Try these out at least once—and if you’re still stuck going to the office, print them out and stick them on your cubicle wall.

Today, we’re going to focus on Bash navigation and history.

picture 0

Tab Completion

We all know that [TAB] can be used for both path and command completion, but I’m always surprised by how many admins who aren’t fully comfortable with the shell forget to use it. You’re not going to break anything or use up some imaginary “tab credits.” So use the hell out of tab completion—keep hitting that key until you land on what you need.

And here’s a bonus: don’t forget that [SHIFT]+[TAB] takes you back. So if you scroll past the option or path you were aiming for, just reverse it. Simple, fast, and much smoother than retyping.

Command Navigation

The commands below will help you navigate more efficiently and productively in a shell:

  • Ctrl+a - Move to the beginning of the line (same as [HOME])
  • Ctrl+e - Move to the end of the line (same as [END])
  • Ctrl+left/right - Move backward/forward one word
  • Ctrl+w - Delete the previous word
  • Alt+t - Swap the current word with the previous one
  • Esc+v - Edit the current line in an external editor (this is the edit-and-execute-command function). It uses the default editor set in $VISUAL or $EDITOR, which you can change if needed

picture 0

Note: I remapped my edit-and-execute-command shortcut to Esc+i, which feels more comfortable for me.
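
If you want to do the same, a single readline binding in ~/.bashrc is enough (a sketch, assuming bash’s default emacs editing mode):

# Bind the edit-and-execute-command function to Esc+i (also reachable as Alt+i)
bind '"\ei": edit-and-execute-command'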

Path Navigation

Navigating between directories is something sysadmins do constantly. Being able to move around quickly—and knowing a few tricks—can speed up your work exponentially. It can even impress your coworkers when they see how fast you glide through the shell.

  • Tab completion — We already covered this, but it’s worth repeating: use it and abuse it!
  • cd — Jumps straight back to your home folder. It sounds obvious, but many people forget this one.
  • cd - — Switches to the previous directory you were in. Great for bouncing back and forth (see the short demo after this list).
  • cd ~ — Navigates to your home folder. The tilde (~) is very handy for quickly moving to subdirectories inside your home directory, e.g.: cd ~/Downloads
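
A quick demo of the last two (the paths are arbitrary):

cd /etc/systemd/system
cd ~/Downloads   # jump into a subdirectory of your home from anywhere
cd -             # bounce straight back to /etc/systemd/system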

picture 0

History

As you probably already know, history lets you go back to previous commands and re-execute them. Mastering how to quickly search and re-use previous commands saves you from tediously retyping the same things over and over.

  • Ctrl+r — Search your history for a specific string
    • After pressing Ctrl+r, type part of the command to filter the matches
    • Keep pressing Ctrl+r to cycle through older matches; if you skip past the one you wanted, Ctrl+s searches forward again (you may need stty -ixon for Ctrl+s to reach the shell)
  • !! — Re-execute the last command. Super handy when you forget sudo. Just type sudo !!
  • !$ — The last argument of your previous command. Useful if you mistyped a command and want to reuse its argument, e.g.:

    userdle testuser   # typo
    userdel !$         # fixes it using the last argument
    
    • It also pairs well with sudo when you need to repeat an action on the same argument, e.g. sudo mkdir !$ after a permissions error
  • Alt+. — Inserts the last argument of your previous command directly into the current line (and shows it to you). It’s effectively the same as typing !$, but more interactive since it shows the result as you type. Keep pressing it to cycle through arguments from earlier commands.

💡 Tips:

  • Remember to use Esc+v to quickly open a previous command in your editor if it needs modification
  • Bash history can be customized to be even more powerful (a combined ~/.bashrc example follows this list):

    • HISTTIMEFORMAT — Add timestamps to your command history. E.g.:

      $ history | tail -2
      1029  +2025/08/22 14:52:19 uptime
      1030  +2025/08/22 14:52:21 history | tail -2
      
    • HISTSIZE and HISTFILESIZE — Increase how many commands are stored in memory and on disk

    • shopt -s histappend — Append to your history file instead of overwriting it. This way you don’t lose commands when multiple shells are open
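
Putting those together, a minimal ~/.bashrc snippet could look like this (the timestamp format is an assumption that matches the output above; the sizes are arbitrary):

# History tweaks for ~/.bashrc
export HISTTIMEFORMAT='+%Y/%m/%d %T '   # timestamp each history entry
export HISTSIZE=10000                   # commands kept in memory per shell
export HISTFILESIZE=20000               # commands kept in ~/.bash_history
shopt -s histappend                     # append to the history file instead of overwriting it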

Conclusion

That wraps up our quick dive into Bash navigation, history, and some handy shortcuts to speed up your workflow. Try them out, and stay tuned—there’s plenty more cool stuff coming in the next posts!
