While deploying the playbook, the following error appears:
ERROR! variable files must contain either a dictionary of variables, or a list of dictionaries. Got: user_password:password database_password:password (<class 'ansible.parsing.yaml.objects.AnsibleUnicode'>)
To resolve the issue, add a space between each dictionary key (after the colon) and its value, so that YAML parses the file as a dictionary instead of a plain string.
Deploy your playbook again and the result will be successful.
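For example, a vars file without the space after the colon is read as a single string, which triggers the error above. Adding the space fixes it (the key names below come from the error message; the values are placeholders):

```yaml
# Wrong: YAML reads these lines as one plain string
# user_password:password
# database_password:password

# Correct: a space after the colon makes each line a key/value pair
user_password: password
database_password: password
```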
Red Hat Enterprise Linux 8 changes the way packages are delivered by splitting the single main repository that was available on Red Hat 7 systems into two, BaseOS and AppStream (a link with more information is attached at the bottom of the article).
As a result, when you locally mount a DVD to use as a package repository, you now have to create two repositories instead of one.
Attach the DVD to the virtual server, mount it, and create the repository files in /etc/yum.repos.d:
mkdir -p /mnt/disc
mount /dev/sr0 /mnt/disc
Create repository files
touch appstream.repo baseos.repo
Change the permissions of the repository files to 0644
chmod 644 appstream.repo baseos.repo
name=Red Hat Enterprise Linux 8.2.0 AppStream
name=Red Hat Enterprise Linux 8.2.0 BaseOS
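The two name= lines above belong to the two .repo files. A minimal sketch of complete files, assuming the DVD is mounted at /mnt/disc (the section names, baseurl paths, and gpgcheck settings below are typical values and an assumption, not taken from the original article):

```ini
# /etc/yum.repos.d/baseos.repo
[BaseOS]
name=Red Hat Enterprise Linux 8.2.0 BaseOS
baseurl=file:///mnt/disc/BaseOS
enabled=1
gpgcheck=0

# /etc/yum.repos.d/appstream.repo
[AppStream]
name=Red Hat Enterprise Linux 8.2.0 AppStream
baseurl=file:///mnt/disc/AppStream
enabled=1
gpgcheck=0
```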
Validate that both repos are enabled
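A quick check, using dnf (or yum) on RHEL 8:

```shell
# List enabled repositories; both BaseOS and AppStream should appear
dnf repolist
```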
Find below the procedure documented by Red Hat for Linux 7
The latest version of the pcs package introduces some changes to how a highly available cluster is implemented on CentOS/Red Hat operating systems. The latest pcs package currently available is 0.10.4, as shown below, which supports clusters with Pacemaker 2.x and Corosync 3.x.
First, authenticate the nodes of the cluster (this step changed from previous versions).
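With pcs 0.10.x, the old `pcs cluster auth` command was replaced by `pcs host auth`. A sketch of authenticating the nodes and creating a two-node cluster (the node names node1/node2 and the cluster name my_cluster are placeholders):

```shell
# Authenticate both nodes (prompts for the hacluster user's password)
pcs host auth node1 node2

# Create the cluster and start it on all nodes
pcs cluster setup my_cluster node1 node2
pcs cluster start --all
```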
Check the cluster status after the creation and verify that the nodes are online.
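For example:

```shell
# Show the overall cluster state, including which nodes are online
pcs status
```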
Enable the cluster on all nodes so that it starts automatically at boot.
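This can be done from a single node:

```shell
# Enable cluster services on every node so they start automatically at boot
pcs cluster enable --all
```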
Reboot one node to verify cluster functionality and check the active nodes. When a node of the cluster goes down, the cluster stays online because of the second node.
Create a cluster resource of type IPaddr2 in order to support an application/service.
pcs resource create virtual_ip ocf:heartbeat:IPaddr2 ip=X.X.X.X cidr_netmask=24 op monitor interval=30s
Ping the IP to verify that it is online
Shut down the node on which the IPaddr2 cluster resource is running and verify that the IP is still accessible. During the shutdown and the resource migration from one node to the other, you can observe that the ICMP replies are slightly slower (around 3 ms instead of <1 ms).
Verify that the cluster resource virtual_ip is online on the second node.
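The resource location can be confirmed with:

```shell
# Show each resource and the node it is currently running on
pcs resource status
```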