I created this practice exam as part of my preparation for the Red Hat RHCE exam (EX294V84K). It comprises 10 advanced-level tasks, with a focus on incorporating some of the most challenging RHCSA objectives.
Register for a free Red Hat Developer subscription - instructions
On the control node (a command sketch follows this list):
Register with your developer subscription. Set it to the 8.4 release
Uninstall Ansible if installed
Remove all EPEL repos
Add the ansible-2.9-for-rhel-8-x86_64-rpms repo via subscription manager
Add an additional 10GB disk to node4
Increase memory on node4 to 1024M
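The prep above can be done with commands along these lines (a minimal sketch; subscription attachment and EPEL package names may differ in your environment, and the node4 disk/memory changes depend on your virtualization platform):

sudo subscription-manager register                # register with your developer account
sudo subscription-manager attach --auto           # attach the developer subscription
sudo subscription-manager release --set=8.4       # pin to the 8.4 release
sudo yum -y remove ansible                        # uninstall Ansible if installed
sudo yum -y remove epel-release                   # remove the EPEL repo package, if present
sudo rm -f /etc/yum.repos.d/epel*.repo            # and any leftover EPEL repo files
sudo subscription-manager repos --enable=ansible-2.9-for-rhel-8-x86_64-rpms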
Objectives
The following top-level objectives are covered on this exam, as per ‘EX294V84K’.
Be able to perform all tasks expected of a Red Hat Certified System Administrator
Understand core components of Ansible
Install and configure an Ansible control node
Configure Ansible managed nodes
Script administration tasks
Create and use static inventories to define groups of hosts
Create Ansible plays and playbooks
Use Ansible modules for system administration tasks that work with:
Create and use templates to create customized configuration files
Work with Ansible variables and facts
Create and work with roles
Download roles from an Ansible Galaxy and use them
Use Ansible Vault in playbooks to protect sensitive data
Use provided documentation to look up specific information about Ansible modules and commands
Tasks
Task 1
Objectives covered:
2. Understand core components of Ansible
3. Install and configure an Ansible control node
4. Configure Ansible managed nodes
Tasks:
Install ansible on the control node
Create a user called ‘ansible’ on the control node
Create the directory /home/ansible/exam-files. This is where all files will be saved
Create the following sub-directories:
roles, vars, playbooks, scripts, files
Create an ssh key for the ‘ansible’ user in this folder
Create an inventory file with the nodes:
node1
node2
node3
node4
Create an ansible config file as follows (a possible layout is sketched after this task list):
Roles path is set to /home/ansible/exam-files/roles
Inventory file is /home/ansible/exam-files/inventory
User to SSH to remote nodes is ‘ansible’
Add the ssh key from the previous task
Disable:
Cow output
Retry files
Host key checking
SSH to all nodes and create the ‘ansible’ user. Give it a password
Make so that the ‘ansible’ user can elevate privileges without a password on all nodes
Distribute the ssh key created to the nodes (use any method)
Disable ssh password authentication for the ‘ansible’ user on all nodes
Create the ad-hoc script /home/ansible/exam-files/scripts/check-connection.sh that checks that the ssh connection works to all nodes
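As a rough sketch, the config file and the connection-check script could look like the following (the private key file name id_rsa is an assumption; use whatever key path you created above):

# /home/ansible/exam-files/ansible.cfg (sketch)
[defaults]
roles_path = /home/ansible/exam-files/roles
inventory = /home/ansible/exam-files/inventory
remote_user = ansible
private_key_file = /home/ansible/exam-files/id_rsa
host_key_checking = False
retry_files_enabled = False
nocows = 1

#!/bin/bash
# /home/ansible/exam-files/scripts/check-connection.sh (sketch)
ansible all -m ping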
Task 2
Objectives covered:
5. Script administration tasks
Tasks:
Create the shell script /home/ansible/exam-files/scripts/get-server-info.sh that:
Gets the hostname, OS name, OS version, tuned service status, and the tuned profile that is currently active. Output should look like:
Hostname: control.ansi.example.com
Name: "Red Hat Enterprise Linux"
Version: "8.0 (Ootpa)"
Tuned status: active
Current active profile: virtual-guest
Create the ad-hoc script /home/ansible/exam-files/scripts/task2.sh that:
Uploads ‘get-server-info.sh’ to ‘node1’ with:
To /usr/local/bin/get-server-info.sh
Owned by ‘root:root’
Permission is ‘rwxr-xr-x’
Run the ad-hoc script (a sketch of both scripts follows this list)
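A possible sketch of both scripts (the os-release fields and tuned-adm output happen to match the format the task asks for):

#!/bin/bash
# /home/ansible/exam-files/scripts/get-server-info.sh (sketch)
echo "Hostname: $(hostname)"
grep '^NAME=' /etc/os-release | sed 's/^NAME=/Name: /'
grep '^VERSION=' /etc/os-release | sed 's/^VERSION=/Version: /'
echo "Tuned status: $(systemctl is-active tuned)"
tuned-adm active

#!/bin/bash
# /home/ansible/exam-files/scripts/task2.sh (sketch)
ansible node1 --become -m copy \
  -a 'src=/home/ansible/exam-files/scripts/get-server-info.sh dest=/usr/local/bin/get-server-info.sh owner=root group=root mode=0755'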
Task 3
Objectives covered:
6. Create and use static inventories to define groups of hosts
Tasks:
Modify the inventory file to have the following groups (a possible layout follows this list)
‘node1’ and ‘node2’ are in the ‘webservers’ group
‘node3’ and ‘node4’ are in the ‘databases’ group
‘node3’ is in the ‘mysql’ group
‘node4’ is in the ‘postgresql’ group
‘node1’ is in the ‘version1’ group
‘node2’ is in the ‘version2’ group
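One possible INI layout for the modified inventory:

[webservers]
node1
node2

[databases]
node3
node4

[mysql]
node3

[postgresql]
node4

[version1]
node1

[version2]
node2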
Task 4
Objectives covered:
7. Create Ansible plays and playbooks
Tasks:
Create the playbook /home/ansible/exam-files/playbooks/task4.yml that:
Creates the folder /data/backup on the ‘webservers’ group. The folder should have read and execute permission for group and others
Creates the file /etc/server_role on all servers
The content of the file should be ‘webservers’ or ‘databases’ according to the inventory group
Create a task that uses the rpm command to check if ‘httpd’ is installed on the webservers and databases groups
This task should only show as changed when it fails
Create two tasks that display the following output based on the exit code from the rpm task. These tasks should run against the same groups as the rpm task (a sketch of this pattern follows this task list):
‘HTTPD is installed’ if it’s installed
‘HTTPD is not installed’ if it’s not installed
Makes sure that the default system target is set to ‘multi-user.target’
Should only set the target if not already set
Should show change on failure
Should ignore errors
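A sketch of the rpm-check portion of task4.yml (play and task names are my own; failed_when: false keeps the play going so the follow-up tasks can inspect the return code):

- name: Check whether httpd is installed
  hosts: webservers,databases
  tasks:
    - name: Query the rpm database for httpd
      command: rpm -q httpd
      register: httpd_check
      failed_when: false
      changed_when: httpd_check.rc != 0

    - name: Report that httpd is installed
      debug:
        msg: HTTPD is installed
      when: httpd_check.rc == 0

    - name: Report that httpd is not installed
      debug:
        msg: HTTPD is not installed
      when: httpd_check.rc != 0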
Task 5
Objectives covered:
1. Be able to perform all tasks expected of a Red Hat Certified System Administrator
8. Use Ansible modules for system administration tasks that work with
10. Work with Ansible variables and facts
Tasks:
Create a bash script called /home/ansible/exam-files/files/root_space_check.sh (a sketch follows this task list) that gets the used space percentage for root (/) and:
Logs an info message to journald that looks like root_space_check.sh[PID]: / usage is within threshold when usage is below 70%
Logs a warning message to journald with root_space_check.sh[PID]: / usage is above 70% threshold when usage is above 70%
Create the playbook /home/ansible/exam-files/playbooks/task5.yml that
Uploads the root_space_check.sh script to /usr/local/bin/ on all servers and sets the execute bit across the board (ugo)
Adds an entry to root’s crontab to execute the script every hour on all servers
Does the following on the ‘webservers’ group
Installs ‘httpd’
Enables and starts the ‘httpd’ service
Enables the ‘http’ and ‘https’ services on firewalld (runtime and permanent)
Sets the Listen option in /etc/httpd/conf/httpd.conf to the internal IP. E.g.: Listen 192.168.55.201:80. Use facts variables for the internal IP
Whenever httpd.conf is changed
Make sure that the ‘httpd’ service is restarted
Backs up an archived (zip) version of httpd.conf to /data/backup/httpd.conf-[YYYYMMDD_HHMMSS].zip (change [YYYYMMDD_HHMMSS] to a date string, e.g.: ‘20231123_140000’)
Configures storage on the mysql group as follows:
PV using /dev/sdb
VG named ‘databases_vg’
LV name ‘databases_lv’
ext4 filesystem with the volume label of ‘DATABASES’
Mounted on fstab under /data/databases
Enables SELinux on the databases group with targeted policy
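A sketch of the space-check script (logger’s --id option is one way to get the root_space_check.sh[PID] format in the journal; the hourly run can then be added with the cron module as user root, with minute set to 0):

#!/bin/bash
# /home/ansible/exam-files/files/root_space_check.sh (sketch)
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -lt 70 ]; then
    logger -t root_space_check.sh --id=$$ -p user.info "/ usage is within threshold"
else
    logger -t root_space_check.sh --id=$$ -p user.warning "/ usage is above 70% threshold"
fi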
Task 6
Objectives covered:
9. Create and use templates to create customized configuration files
Create the role /home/ansible/exam-files/roles/start-page
Manually convert the index.html file into a jinja2 template that will set the following values and add it to the ‘start-page’ role as a template (a sketch of the variable lookups appears after this task list):
[HOSTNAME] - Should get the node FQDN value from an ansible fact variable
[VERSION] - Version group from the inventory
[IP ADDRESS] - Should get the node internal IP value from an ansible fact variable
[TIMEZONE] - Should get the node time zone value from an ansible fact variable
Create the main task for this role to push the template
Create the role /home/ansible/exam-files/roles/journald-persistent. This role should:
Enable persistent journald with all the required steps
Set the max storage to 100M
Reload the service when changes are made
Create the playbook /home/ansible/exam-files/playbooks/task6.yml that applies the ‘start-page’ role to the ‘webservers’ group and the ‘journald-persistent’ role to all servers
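A minimal sketch of the template lookups (the original index.html markup is not reproduced here, only the variable substitutions):

Hostname:   {{ ansible_fqdn }}
Version:    {{ group_names | select('match', '^version') | first }}
IP address: {{ ansible_default_ipv4.address }}
Timezone:   {{ ansible_date_time.tz }}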
Task 7
Objectives covered:
Work with Ansible variables and facts
Tasks:
Create a custom fact for the ‘webservers’ group with the structure below:
app_version should be based on the version specified in the inventory file
NOTE: This task can be done via a playbook or manually
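The exact structure from the original task is not reproduced above. As a generic sketch, a custom fact is just a file under /etc/ansible/facts.d/ on the managed node (the file and section names below are my own):

# /etc/ansible/facts.d/app.fact on each webserver (INI format)
[general]
app_version=version1

It then shows up under the ansible_local fact, e.g. {{ ansible_local.app.general.app_version }}.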
Task 8
Objectives covered:
1. Be able to perform all tasks expected of a Red Hat Certified System Administrator
11. Create and work with roles
Tasks:
Before you start, remember you should have added a 10GB disk to node4 and increased its memory to 1024M
Create the role /home/ansible/exam-files/roles/postgresql that does the following (the VDO and storage portion is sketched after this task list):
Creates a VDO on the 10G disk with:
VDO name is ‘databases_vdo’
20G logical size
Deduplication disabled
Auto write mode (write policy)
Perform needed VDO steps, as per RHCSA
Create a logical volume with:
PV using the vdo device
VG named ‘databases_vg’
LV name ‘databases_lv’
Format and mount with:
ext4 filesystem (using VDO requirements)
Mounted on fstab under /data/databases
Follow vdo mount requirements, as per RHCSA
Installs the postgresql package group - @postgresql
Modifies the value of Environment=PGDATA= in the systemd service for ‘postgresql.service’ to have the value below (remember the old path and make sure new value is reloaded)
Environment=PGDATA=/data/databases/postgresql_data
Creates the directory /data/databases/postgresql_data
Sets the ownership of /data/databases/postgresql_data to postgres:postgres with rwx------
Initializes the DB with postgresql-setup --initdb
Should only run during setup
Enables the SELinux boolean selinuxuser_postgresql_connect_enabled
Enables and starts the service postgresql.service
The service should be restarted whenever the systemd unit file for postgresql.service is changed
See warning below
Create the playbook /home/ansible/exam-files/playbooks/deploy-postgresql.yml that pushes this role to the ‘postgresql’ group
Add the following to the same playbook as tasks:
Creates the dir /data/db_troubleshoot
Sets the ownership of /data/db_troubleshoot to postgres:postgres with rwx------
Creates the group ‘pgsqladmin’
Creates the user ‘dbadmin’ with primary group of ‘pgsqladmin’
Adds an ACL that gives the ‘pgsqladmin’ group full access to /data/db_troubleshoot. This should also be the default ACL for new files
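A sketch of the VDO and storage portion of the role (task names are my own, and /dev/sdb is an assumption for the added 10G disk; -E nodiscard and x-systemd.requires=vdo.service are the usual VDO requirements the task refers to):

- name: Create the VDO volume
  vdo:
    name: databases_vdo
    state: present
    device: /dev/sdb
    logicalsize: 20G
    deduplication: disabled
    writepolicy: auto

- name: Create the volume group on the VDO device
  lvg:
    vg: databases_vg
    pvs: /dev/mapper/databases_vdo

- name: Create the logical volume
  lvol:
    vg: databases_vg
    lv: databases_lv
    size: 100%FREE

- name: Format as ext4 without discarding blocks
  filesystem:
    dev: /dev/databases_vg/databases_lv
    fstype: ext4
    opts: -E nodiscard

- name: Mount via fstab, ordered after the VDO service
  mount:
    path: /data/databases
    src: /dev/databases_vg/databases_lv
    fstype: ext4
    opts: defaults,x-systemd.requires=vdo.service
    state: mounted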
WARNING
The postgresql service will fail to start. You will need to logon to the server and fix the issue. The solution/fix can be done manually, but it needs to be part of the playbook.
TIP
While creating the VDO device you may run into the error below:
fatal: [node4]: FAILED! => {
"changed": false,
"module_stderr": "Shared connection to node4 closed.\r\n",
"module_stdout": "/tmp/ansible_vdo_payload_crp07req/ansible_vdo_payload.zip/ansible/modules/system/vdo.py:330: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\r\n/bin/sh: line 1: 6280 Killed /usr/libexec/platform-python /home/ansible/.ansible/tmp/ansible-tmp-1701096243.3300107-7102-276967642935618/AnsiballZ_vdo.py\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 137
}
If that’s the case, fully remove the vdo device and then apply the patch below. While this is not part of the exam, it’s a good skill to acquire.
You can identify the path of the Ansible code with ansible --version. Then browse to the module shown in that commit message and modify the two lines. Note that the line numbers may not match, but they should be pretty close.
Task 9
Objectives covered:
12. Download roles from an Ansible Galaxy and use them
13. Use Ansible Vault in playbooks to protect sensitive data
Tasks:
Using ansible-galaxy, search for and download the ‘mysql’ role by ‘geerlingguy’ (a command sketch follows this task list)
Create a vault password file and add it to ansible.cfg
Create the variable file /home/ansible/exam-files/vars/mysql.yml and add the following variables:
mysql_root_username: root
mysql_root_password: sqlrootpassword
Encrypt the variable file with ansible vault
Modify the role so that it:
Changes the root credentials
Saves the root credentials to ~/.my.rc
Create the playbook /home/ansible/exam-files/playbooks/deploy-mysql.yml that pushes the role to the mysql group
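Possible commands for the galaxy and vault pieces (the vault password file name and its contents are my own; the matching ansible.cfg setting is vault_password_file under [defaults]):

ansible-galaxy search mysql --author geerlingguy
ansible-galaxy install geerlingguy.mysql -p /home/ansible/exam-files/roles
echo 'MyV4ultP4ss' > /home/ansible/exam-files/vault-pass
chmod 600 /home/ansible/exam-files/vault-pass
ansible-vault encrypt /home/ansible/exam-files/vars/mysql.yml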
Task 10
Objectives covered:
Use provided documentation to look up specific information about Ansible modules and commands
Tasks:
Create the file /home/ansible/exam-files/ansible.cfg.template with a dump of all possible env and config values. For example:
ACTION_WARNINGS:
default: true
description: [By default Ansible will issue a warning when received from a task
action (module or action plugin), These warnings can be silenced by adjusting
this setting to False.]
env:
- {name: ANSIBLE_ACTION_WARNINGS}
ini:
- {key: action_warnings, section: defaults}
name: Toggle action warnings
type: boolean
version_added: '2.5'
Create the file /home/ansible/exam-files/ansible.cfg.dump with all the current variables/settings (a command sketch for both files follows)
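Both files can be generated with the ansible-config command (available with Ansible 2.9):

ansible-config list > /home/ansible/exam-files/ansible.cfg.template   # every possible setting, with env/ini details
ansible-config dump > /home/ansible/exam-files/ansible.cfg.dump       # the current values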
So you got a spanking new hard drive for your NAS and you are ready to install it… but wait! What if the drive is bad?
This is not something that most people would think of, but that shiny new drive could already have come with defects (a.k.a. extra features) from the factory. Or maybe it was part of a fun game of “throw the client’s package” that some delivery drivers like to play (as my preferred social media likes to show me). So before we install this new piece of hardware that has the potential to render all my data, accumulated from years of hoarding, useless, let’s do some testing.
S.M.A.R.T Testing
Let’s start with a S.M.A.R.T (Self-Monitoring, Analysis, and Reporting Technology) test.
SMART is an interface between the platform’s BIOS and the storage device. When SMART is enabled in the BIOS (mostly default), the BIOS can process information from the storage device and determine whether to send a warning message about potential failure of the storage device. The purpose of SMART is to warn a user of impending drive failure while there is still time to take action, such as backing up the data or copying the data to a replacement device.
First we need to identify whether the drive is capable of S.M.A.R.T. testing. Most modern drives should be.
sudo smartctl -i /dev/sdX
You should get an output similar to the one below:
=== START OF INFORMATION SECTION ===
Device Model: WDC WD60EFPX-68C5ZN0
Serial Number: WD-WX12D12312345
LU WWN Device Id: 5 0014ee 26b395dd4
Firmware Version: 81.00A81
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: Not in smartctl database 7.3/5528
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Nov 9 08:21:36 2023 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
I got the following error because I’m using a USB-C adapter:
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.5.8-arch1-1] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
/dev/sda: Unknown USB bridge [0x14b0:0x0200 (0x100)]
Please specify device type with the -d option.
Use smartctl -h to get a usage summary
If that’s the same for you, you can try using it with -d sat, and if your adapter is supported it should work.
sudo smartctl -d sat -i /dev/sdX
Once we’ve confirmed that the drive supports S.M.A.R.T. testing, we can start. We are interested in the following three tests:
Short - The goal of the short test is rapid identification of a defective hard drive, so its maximum run time is 2 minutes
Long - The long test was designed as the final test in production and is the same as the short test with two differences: there is no time restriction, and in the Read/Verify segment the entire disk is checked rather than just a section
Conveyance - This test can be performed to determine damage during transport of the hard disk within just a few minutes
We specify the test using the -t flag:
smartctl -t [short|long|conveyance] [dev]
The test runs in the background and we can check its status by grepping for ‘Self-test execution status’.
In the example below we can see that the test is in progress and that it is 80% complete:
$ sudo smartctl -a /dev/sda | grep -A 1 'Self-test execution status:'
Self-test execution status: ( 242) Self-test routine in progress...
20% of test remaining.
We can use the same command to check the test result; just change the -A value to ‘2’ in grep:
$ sudo smartctl -a /dev/sda | grep -A 2 'Self-test execution status:'
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Another option is to use the -l selftest flag:
$ sudo smartctl -l selftest /dev/sda
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.5.8-arch1-1] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 20 -
Also check the following string after each test:
$ sudo smartctl -a /dev/sda | grep 'test result'
SMART overall-health self-assessment test result: PASSED
Now go ahead and run the short and conveyance tests (or all 3 if you have the time). Here’s how long it took for me to run on a 6TB WD Red Plus (WD60EFPX) over USB-C (Nov 2023):
conveyance - 1m13s
short - 2m
long - 11h20m
If you have more than one hard drive to test, and you can plug them in at the same time, you can run the tests in parallel.
Once the tests have completed and you have confirmed they passed, also check the attribute thresholds near the end of the output. Pay particular attention to the following:
Offline_Uncorrectable - Damaged sectors that don’t respond to any read/write requests (bad sectors). These sectors are remapped to spare sectors.
Reallocated_Sector_Ct - Count of damaged sectors that were remapped to spare sectors.
Current_Pending_Sector - Indicates the number of damaged sectors that are yet to be remapped or reallocated. This number could indicate that spare sectors are not available, and data from bad sectors can no longer be remapped.
Badblocks
Before we continue, let’s just make sure that you are indeed testing a new set of spinning rust (a.k.a. hard drive), and not an SSD or an NVMe. We don’t want to run badblocks on the latter two.
S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) is featured in almost every HDD still in use nowadays, and in some cases it can automatically retire defective HDD sectors. However, S.M.A.R.T. only passively waits for errors while badblocks can actively write simple patterns to every block of a device and then check them, searching for damaged areas (Just like memtest86* does with RAM).
Now that we have an understanding of what badblocks does, let’s take some time to digest it. We will be writing to all blocks on your new hard drive and then reading to confirm that the data was written correctly. As if that wouldn’t already take long, badblocks will do it not only once, but four times (with four different patterns).
As the pattern is written to every accessible block, the device effectively gets wiped. The default is an extensive test with four passes using four different patterns: 0xaa (10101010), 0x55 (01010101), 0xff (11111111) and 0x00 (00000000). For some devices this will take a couple of days to complete.
As with smartctl, you can run multiple instances of badblocks in parallel if you have multiple disks. You can also shorten the test by increasing the number of blocks that are tested at a time (-c), or by specifying a single pattern to be written with the -t option, e.g.: -t '0xaa', which will force it to do only one pass. If you specify multiple patterns, e.g.: -t '0xaa' -t '0x55', you will essentially be running multiple passes.
Another option is to use the random pattern option with -t random. This will make badblocks use random patterns for the test, with only one pass (unless you specify -p).
Just keep in mind that the different patterns (used by the default write mode) work better because you can validate against stuck bits. But based on the number of drives, available drive buses, and the time that you have, you might not actually be able to run the full test. That’s a decision only you can make.
I wanted to time my tests to help you make a decision, but as I ran them over a USB-C adapter, my badblocks seems to have maxed out at around 41 MB/s:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 41.74 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 41.74 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1516251 be/4 root 0.00 B/s 41.74 M/s ?unavailable? badblocks -wsvb 4096 -t 0x00 /dev/sda -o badblocks-output.log
With that in mind, here are the timings from my latest test on a 6TB WD Red Plus (WD60EFPX) over USB-C (Nov 2023):
Write mode - 83h
Random write mode - 81h
Write mode one pattern - 80h
And spoiler alert… we will be running the long test in smartctl once badblocks finishes. So also take that into account.
Running the Test
First, let’s take a look at what your drive’s recommended blocksize is:
sudo blockdev --getbsz /dev/sdX
Because this test will run for a while, start your preferred terminal multiplexer (e.g.: screen, tmux), change into root, and run badblocks:
time badblocks -wsvb {blocksize} /dev/sdX -o [output_file]
time is a separate command that tells you the actual time badblocks ran for once it completes.
-w uses write-mode test, which is a destructive action.
-s shows an estimate of the progress of the scan. This isn’t 100% accurate, but it’s better to see something than nothing at all.
-b {blocksize} specify the block size. Be sure to replace {blocksize} with the number you found with the previous command mentioned (blockdev --getbsz /dev/sdX).
/dev/sdX the drive you want to test. Replace with the actual drive. Be extra careful as you don’t want to accidentally destroy data on the wrong disk.
-v option is verbose mode
-o option is the output file. Without -o, badblocks will simply write to STDOUT
S.M.A.R.T. Again
Once the badblocks test is complete, run another long smartctl test and check to make sure that everything is still good.
Migrating users from one Linux system to another can be a complex task, but with the right steps and careful planning, it can be accomplished smoothly. In this guide, we will walk you through the process of migrating users, their passwords, groups, and home folders from one Linux system to another. We’ll be using bash scripts to automate the backup and restore process, ensuring a seamless transition.
The following items will not be covered by our instructions:
User limits
Mail
Group password
Step 1: Backup User Information on the Source Server
First, log in to the source Linux system and open a terminal. Use the following command to get a list of all users:
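For example (any command that lists the local accounts in /etc/passwd will do):

cut -d: -f1 /etc/passwd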
Next, identify the users you want to migrate and add their names to the users_to_backup variable in the provided script:
backup_password_file="etc-passwd-bak"
backup_group_file="etc-group-bak"
backup_shadow_file="etc-shadow-bak"
# Add more users separated by spaces if needed
users_to_backup="linus lennart"
for user in $users_to_backup ; do
curuser=$user
echo -e "\n=> User is $curuser"
curid=$(id -u "$curuser")
curgid=$(id -g "$curuser")
cursupgit=$(id -G "$curuser")
echo "User id is $curid"
echo "User group id is $curgid"
echo "User supplementary groups are $cursupgit"
echo "Backing up /etc/passwd for $curuser"
grep -E "^${curuser}:.:${curid}:" /etc/passwd >> "$backup_password_file"
echo "Backing up primary group for $curuser"
if [ -f "$backup_group_file" ] ; then
if grep -q ":${curgid}:" "$backup_group_file" ; then
echo "Group already backed up"
else
grep ":${curgid}:" /etc/group >> "$backup_group_file"
fi
else
grep ":${curgid}:" /etc/group >> "$backup_group_file"
fi
echo "Backing up secondary groups for $curuser"
for groupid in $cursupgit ; do
if grep -q ":${groupid}:" "$backup_group_file" ; then
echo "Group already backed up"
else
grep ":${groupid}:" /etc/group >> "$backup_group_file"
fi
done
echo "Backing up /etc/shadow for $curuser"
sudo grep -E "^${curuser}:" /etc/shadow >> "$backup_shadow_file"
done
Save the script to a file, e.g., user_backup_script.sh, and make it executable:
chmod +x user_backup_script.sh
Execute the script as root:
sudo ./user_backup_script.sh
The script will back up user information, including passwords and group data, to the specified backup files.
Step 2: Copy Backup Files to the Destination Server
Next, copy the three backup files (etc-passwd-bak, etc-group-bak, and etc-shadow-bak) from the source server to the destination server using the scp command or any preferred method.
# Example using scp
scp etc-*-bak user@destination_server:/path/to/backup_files/
Step 3: Restore Users on the Destination Server
Log in to the destination Linux system and open a terminal. Navigate to the directory where the backup files are located and save the restore script below as user_restore_script.sh:
backup_password_file="etc-passwd-bak"
backup_group_file="etc-group-bak"
backup_shadow_file="etc-shadow-bak"
echo -e "\n=> Backing up the system files"
for file in /etc/{passwd,group,shadow} ; do
cp -v "$file" "${file}_$(date -I)"
done
echo -e "\n=> Restoring users"
cat "$backup_password_file" | while read -r line ; do
userid="$(echo "$line" | awk -F":" '{print $3}')"
username="$(echo "$line" | awk -F":" '{print $1}')"
echo "-> Restoring user $username"
if grep -Eq "^${username}:.:${userid}:" /etc/passwd ; then
echo " ERROR: User ID already exists. Manual intervention is needed"
else
echo "$line" >> /etc/passwd
if grep -qE "^${username}:" /etc/shadow ; then
echo " ERROR: User password already exists. Manual intervention is needed"
else
grep -E "^${username}:" "$backup_shadow_file" >> /etc/shadow
fi
fi
done
echo -e "\n=> Restoring groups"
cat "$backup_group_file" | while read -r line ; do
groupid="$(echo "$line" | awk -F":" '{print $3}')"
groupname="$(echo "$line" | awk -F":" '{print $1}')"
echo "-> Restoring group $groupname"
if grep -qE "^${groupname}:.:${groupid}:" /etc/group ; then
echo " ERROR: Group already exists. Manual intervention may be needed"
else
echo "$line" >> /etc/group
fi
done
Run the restore script:
sudo ./user_restore_script.sh
The script will restore the users and their passwords, taking care to avoid conflicts with existing user IDs. Pay close attention to error messages and take action if needed. If there are any error messages related to system group creation, it might be acceptable as the groups may already exist on the new system.
Step 4: Copy Home Folders and Set Permissions
Finally, to complete the migration, copy the home folders for the desired users from the source server to the destination server using rsync:
# Replace 'user1,user2,user3' with appropriate values
# Replace 'destination_server' with appropriate values
rsync -avzh -e ssh /home/{user1,user2,user3} user@destination_server:/home/
Ensure that the permissions and ownership are set correctly for the home folder of each user on the destination server:
# Replace user and primary_group with appropriate values
sudo chown -R user:primary_group /home/user
Note: For SFTP users that use CHROOT, the user’s home folder needs to be owned by ‘root’
Conclusion:
Migrating users from one Linux system to another involves a series of crucial steps, including backup, restoration, and copying of home folders. By using the provided scripts and following this step-by-step guide, you can ensure a smooth and successful user migration process. Always exercise caution during the migration and verify the results to ensure everything is functioning as expected on the destination system.
When it comes to managing packages on a Linux system, the YUM (Yellowdog Updater Modified) package manager is widely used for its ease of use and robust features. One of the handy commands in YUM is yum clean all, which helps you keep your system clean and optimized. In this blog post, we will delve into the functionalities of yum clean all and explore how it can help you clear accumulated cache and improve system performance.
Cleaning Options
The yum clean command can be used with specific options to clean individual components. Here are some of the available options:
packages: Cleans package-related cache files
metadata: Removes metadata and other files related to enabled repositories
expire-cache: Cleans the cache for metadata that has expired
rpmdb: Cleans the YUM database cache
plugins: Clears any cache maintained by YUM plugins
all: Performs a comprehensive cleaning, covering all the above options
Clean Everything
The yum clean all command serves as a one-stop solution to clean various elements that accumulate in the YUM cache directory over time. It offers several cleaning options, allowing you to target specific items or perform a comprehensive clean.
It’s essential to note that yum clean all does not clean untracked “stale” repositories. This means that if a repository is no longer in use or has been disabled, its cache will not be cleared by this command. We’ll explore an alternative method to handle untracked repositories shortly.
Analyzing Cache Usage
Before diving into the cleaning process, it’s helpful to analyze the cache usage on your system. You can use the following command to check the cache usage:
$ df -hT /var/cache/yum/
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/rootvg-varlv xfs 8.0G 4.6G 3.5G 57% /var
Cleaning with ‘yum clean all’
When you execute yum clean all, the command will remove various cached files and improve system performance. However, you may sometimes notice a warning regarding other repository data:
Other repos take up 1.1 G of disk space (use --verbose for details)
If you run it with the --verbose flag, you should see a list of stale/untracked repos
$ sudo yum clean all --verbose
Not loading "rhnplugin" plugin, as it is disabled
Loading "langpacks" plugin
Loading "product-id" plugin
Loading "search-disabled-repos" plugin
Not loading "subscription-manager" plugin, as it is disabled
Adding en_US.UTF-8 to language list
Config time: 0.110
Yum version: 3.4.3
Cleaning repos: epel rhel-7-server-ansible-2-rhui-rpms rhel-7-server-rhui-extras-rpms rhel-7-server-rhui-optional-rpms rhel-7-server-rhui-rh-common-rpms rhel-7-server-rhui-rpms
: rhel-7-server-rhui-supplementary-rpms rhel-server-rhui-rhscl-7-rpms rhui-microsoft-azure-rhel7
Operating on /var/cache/yum/x86_64/7Server (see CLEAN OPTIONS in yum(8) for details)
Disk usage under /var/cache/yum/*/* after cleanup:
0 enabled repos
0 disabled repos
1.1 G untracked repos:
1.0 G /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-rpms
90 M /var/cache/yum/x86_64/7Server/rhui-rhel-server-rhui-rhscl-7-rpms
9.8 M /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-extras-rpms
6.4 M /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-supplementary-rpms
5.3 M /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-dotnet-rhui-rpms
1.6 M /var/cache/yum/x86_64/7Server/rhui-rhel-7-server-rhui-rh-common-rpms
4.0 k other data:
4.0 k /var/cache/yum/x86_64/7Serve
Manually Removing Untracked Repository Files
To handle untracked repository files, you can manually remove them from the cache directory with rm:
$ sudo rm -rf /var/cache/yum/*
Refreshing the Cache
Next time you run commands like yum check-update or any other operation that refreshes the cache, YUM will rebuild the package list and recreate the cache directory for the enabled repositories.
Check the Cache Usage After Cleanup
After performing the cleanup, you can verify the reduced cache usage. Use the df command again to check the cache size:
$ df -hT /var/cache/yum/
Doing it all with Ansible
And if you want to, you can automate the YUM cache purge and re-creation with an Ansible playbook:
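A minimal sketch of such a playbook (task names are my own; the tasks mirror the manual steps above):

- name: Purge and rebuild the YUM cache
  hosts: all
  become: true
  tasks:
    - name: Clean all tracked repo data
      command: yum clean all

    - name: Remove untracked repo data
      file:
        path: /var/cache/yum
        state: absent

    - name: Rebuild the metadata cache
      command: yum makecache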
Regularly cleaning the YUM cache with the yum clean all command can help optimize your system’s performance by clearing accumulated files. By understanding the available cleaning options and handling untracked repositories, you can ensure that your YUM cache remains streamlined and efficient. Keep your system running smoothly and enjoy the benefits of a clean YUM cache!
Remember, maintaining a clean and optimized system contributes to a seamless Linux experience.
The Linux find command is a very versatile tool in the pocket of any Linux administrator. It allows you to quickly search for files and take action based on the results.
The basic construct of the command is:
find [options] [path] [expression]
The options part of the command controls some basic functionality for find and I’m not going to cover it here. I will instead quickly look at the components for the expression part of the command, and provide some useful examples.
Expressions are composed of tests, actions, global options, positional options and operators:
Test - returns true or false (e.g.: -mtime, -name, -size)
Actions - Acts on something (e.g.: -exec, -delete)
Global options - Affect the operation of tests and actions (e.g.: -depth, -maxdepth)
Positional options - Modifies tests and actions (e.g.: -follow, -regextype)
Operators - Include, join and modify items (e.g.: -a, -o)
Operators
Operators are the logical OR and AND of find, as well as negation and expression grouping.
\( expr \) - Force precedence
! or -not - Negate
-a or -and - Logical AND. If two expressions are given without -a, find takes it as implied. expr2 is not evaluated if expr1 is false
-o or -or - Logical OR. expr2 is not evaluated if expr1 is true
, - Both expr1 and expr2 are always evaluated. The value of expr1 is discarded
You can use the operators for repeated search options with different values.
Example 1: Find files with multiple file extensions
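For instance, to match both ‘.log’ and ‘.txt’ files (the extensions are just an example):

find . -type f \( -name '*.log' -o -name '*.txt' \)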
Example 3: Find files that don’t end with the ‘.log’ extension
find . -type f -not -name '*.log'
Example 4: Excludes everything in the folder named ‘directory’
find -name "*.js" -not -path "./directory/*"
Global Options
Two very useful global options are -maxdepth and -mindepth.
-maxdepth levels: Descend at most levels (a non-negative integer) levels of directories below the starting-points. -maxdepth 0 means only apply the tests and actions to the starting-points themselves.
-mindepth levels: Do not apply any tests or actions at levels less than levels (a non-negative integer). -mindepth 1 means process all files except the starting-points.
Example 5: find all files with ‘.txt’ extension on the current dir and do not descend into subdirectories
find . -maxdepth 1 -name '*.txt'
Tests
This is where you can target specific properties about the files that you are searching for. Some of my preferred tests are:
-iname - Like -name, but the match is case insensitive
-size - Matches based on size (e.g.: -size -2k, -size +1G)
-user - File belongs to username
-newer - File was modified more recently than the reference file. It can be a powerful option
Example 6: Find files for user
find . -type f -user linustorvalds
Example 7: Find files larger than 1GB
find . -type f -size +1G
Example 8: Here’s a hidden gem! Let’s say you need to find what files a program is creating. All you need to do is create a file, run the program, and then use -newer to find what files were created
# Create a file
touch file_marker
# Here I run the script or program
./my_script.sh
# Now I can use the file I created to find newer files that were created by the script
find . -type f -newer 'file_marker'
Actions
Actions will execute the specified action against the matched filenames. Some useful actions are:
-ls - Lists matched files with ls -dils
-delete - Deletes the matched files
-exec - Execute the specified command against the files
-ok - Prompts the user for confirmation before executing the command
-print0 - Prints the full name of the matched files followed by a null character
Real-life Scenarios
Get a confirmation prompt
Use -ok to get a confirmation prompt before executing the action on the matched files.
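For example, the following prompts before removing each matched file (the pattern is just an example):

find . -type f -name '*.tmp' -ok rm {} \;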
Prefer -delete over -exec rm
Whenever possible, use -delete instead of -exec rm -rf {} \;. Avoid:
find . -type f -name "*.bak" -exec rm -f {} \;
Instead use:
find . -type f -name "*.bak" -delete
Using xargs and {} +
The command below will run into a problem if it encounters files or directories with embedded spaces in their names, because xargs will treat each part of such a name as a separate argument:
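A sketch of the problem and the usual fixes (the name pattern and grep string are just examples):

# Breaks on names with spaces: xargs splits them into separate arguments
find . -type f -name '*.conf' | xargs grep 'Listen'
# Safer: null-delimited output, or let find batch the arguments itself
find . -type f -name '*.conf' -print0 | xargs -0 grep 'Listen'
find . -type f -name '*.conf' -exec grep 'Listen' {} +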
That final + tells find that grep will accept multiple file name arguments. Like xargs, find will put as many names as possible into each invocation of grep.
Find and Move
Find files and move them to a different location (a sketch follows the option list below).
-v - For verbose
-t - Move all SOURCE arguments into DIRECTORY
-exec command {} + - As we saw before, this variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files
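A possible combination (the pattern and destination directory are just examples):

find . -type f -name '*.log' -exec mv -vt /data/archive {} +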
Here we search the contents of PDF files for specific text. Two options are shown below; each requires installing an additional package (pdfgrep and ripgrep-all).
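A sketch of both options (assuming pdfgrep and ripgrep-all are installed; the search string is just an example):

# Option 1: pdfgrep via find
find . -type f -name '*.pdf' -exec pdfgrep -H 'some text' {} +
# Option 2: ripgrep-all, which searches PDFs (and more) recursively on its own
rga 'some text' .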
One of the problems with find is that it doesn’t return a failing exit code when nothing is found. But we can still use it in an if condition by grepping the result.
if find /var/log -name '*.log' | grep -q log ; then
echo "File exists"
fi
Conclusion
You should now have a better understanding of the find command, as well as some nifty use cases to impress your co-workers.
If you found something useful or want to share an interesting use for find, please leave a comment below.