The easiest way to perform mass identical installations on web servers located on the same network (VLAN) is to map a network drive to the remote location. With only two steps you can perform remote installations.
First, select "Map network drive" and assign your preferred drive letter to the remote location.
Then change directory to the remote location through cmd or the GUI and perform your installations.
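A minimal cmd sketch of the two steps; the server name \\webserver01, the installers share, and setup.exe are placeholders for your environment:

rem step 1: map the remote share to a free drive letter
net use Z: \\webserver01\installers
rem step 2: switch to the mapped drive and run the installer
Z:
setup.exe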
While deploying the playbook, the below error appears:
ERROR! variable files must contain either a dictionary of variables, or a list of dictionaries. Got: user_password:password database_password:password (<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>)
To resolve the issue, simply leave a space between each dictionary key and its value (after the colon); without it, YAML parses the whole file as a single string instead of a dictionary, which is why the error reports AnsibleUnicode.
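For example, a vars file that parses correctly looks like this (the keys come from the error above; the values are placeholders):

# vars.yml - note the space after each colon
user_password: password
database_password: password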
Deploy your playbook again and it will complete successfully.
Red Hat Enterprise Linux 8 changes the way packages are delivered by splitting the single main repository available on Red Hat 7 systems into the two below, BaseOS and AppStream (a link with more information is attached at the bottom of the article).
As a result, when you locally mount a DVD to be used as a package repository, you now need to create two repositories instead of one.
Attach the DVD to the virtual server, mount it, and create the repositories under /etc/yum.repos.d.
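A minimal sketch, assuming the DVD appears as /dev/sr0 and /mnt/dvd is used as the mount point (adjust both to your setup):

mkdir -p /mnt/dvd
mount -o ro /dev/sr0 /mnt/dvd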
Create repository files
cd /etc/yum.repos.d
touch appstream.repo baseos.repo
Change the permissions of the repository files to 0644:
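chmod 0644 appstream.repo baseos.repo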
name=Red Hat Enterprise Linux 8.2.0 AppStream
name=Red Hat Enterprise Linux 8.2.0 BaseOS
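For reference, complete versions of the two files might look like the sketch below, following Red Hat's documented layout; the baseurl paths assume the DVD is mounted at /mnt/dvd, so adjust them to your mount point:

# /etc/yum.repos.d/appstream.repo
[AppStream]
name=Red Hat Enterprise Linux 8.2.0 AppStream
baseurl=file:///mnt/dvd/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# /etc/yum.repos.d/baseos.repo
[BaseOS]
name=Red Hat Enterprise Linux 8.2.0 BaseOS
baseurl=file:///mnt/dvd/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release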
Validate that both repos are enabled
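For example (on RHEL 8, yum is an alias for dnf):

yum repolist
# both BaseOS and AppStream should be listed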
Find below the procedure documented by Red Hat for Linux 7.
With the latest version of the pcs package there are some changes in how a highly available cluster is implemented on a CentOS/Red Hat operating system. Currently the latest available pcs package is 0.10.4, as shown below, which supports clusters with Pacemaker 2.x and Corosync 3.x.
First, authenticate the nodes of the cluster (this has changed from previous versions).
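In pcs 0.10.x the old "pcs cluster auth" command has been replaced by "pcs host auth". A minimal sketch, assuming two hypothetical nodes node1 and node2 and the default hacluster user:

# authenticate both nodes (prompts for the hacluster password)
pcs host auth node1 node2 -u hacluster
# create and start the cluster (the setup syntax also changed; --name is gone)
pcs cluster setup my_cluster node1 node2 --start --enable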
Shut down the node on which the IPaddr2 cluster resource is running and verify that the IP is still accessible. During the shutdown, while the resource migrates from one node to the other, you can observe that ICMP replies are a bit slower (3 ms instead of <1 ms).
Verify that the cluster-ip resource is online on the second node.
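For example, from the surviving node:

pcs status
# the cluster-ip resource should be reported as Started on the second node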