create pxe boot / install server on a raspberry pi
- put image on sd card
- configure wlan with internet access
- connect eth0 to the same broadcast domain (L2 network) as the eth0 interface of the server to be installed
- install isc-dhcp-server package
apt install isc-dhcp-server
- install tftp server (the Debian/Raspberry Pi OS package is tftpd-hpa, which serves /srv/tftp by default — matching the paths used below)
apt install tftpd-hpa
- install pxelinux
apt install pxelinux
- install nginx webserver
apt install nginx
- get iso image (RHEL 7.8)
- mount iso image locally
mount -o loop /repo/iso/rhel-server-7.8-x86_64-boot.iso /var/www/html/repo/rhel78
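The DHCP server must point PXE clients at the TFTP server and the boot file. A minimal /etc/dhcp/dhcpd.conf sketch for the eth0 install network — the subnet, range, and server address below are placeholders, not the real site values:

```
# /etc/dhcp/dhcpd.conf -- minimal PXE sketch; all addresses are placeholders
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;
  option routers 192.168.0.1;
  next-server 192.168.0.1;      # the gxfsberry (TFTP server)
  filename "pxelinux.0";        # matches /srv/tftp/pxelinux.0 copied below
}
```

Remember to restrict isc-dhcp-server to eth0 (INTERFACESv4 in /etc/default/isc-dhcp-server) so it does not answer on the WLAN side.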
- copy the pxelinux bootloader and syslinux modules into the tftp root directory
root@gxfsberry:/srv/tftp# cp -v /usr/lib/PXELINUX/pxelinux.0 /srv/tftp/
'/usr/lib/PXELINUX/pxelinux.0' -> '/srv/tftp/pxelinux.0'
root@gxfsberry:/srv/tftp# cp -v /usr/lib/syslinux/modules/bios/menu.c32 /srv/tftp/
'/usr/lib/syslinux/modules/bios/menu.c32' -> '/srv/tftp/menu.c32'
root@gxfsberry:/srv/tftp# cp -v /usr/lib/syslinux/modules/bios/mboot.c32 /srv/tftp/
'/usr/lib/syslinux/modules/bios/mboot.c32' -> '/srv/tftp/mboot.c32'
root@gxfsberry:/srv/tftp# cp -v /usr/lib/syslinux/modules/bios/chain.c32 /srv/tftp/
'/usr/lib/syslinux/modules/bios/chain.c32' -> '/srv/tftp/chain.c32'
root@gxfsberry:/srv/tftp# cp -v /usr/lib/syslinux/modules/bios/ldlinux.c32 /srv/tftp/
'/usr/lib/syslinux/modules/bios/ldlinux.c32' -> '/srv/tftp/ldlinux.c32'
root@gxfsberry:/srv/tftp# cp -v /usr/lib/syslinux/modules/bios/libutil.c32 /srv/tftp/
'/usr/lib/syslinux/modules/bios/libutil.c32' -> '/srv/tftp/libutil.c32'
root@gxfsberry:/srv/tftp# mkdir /srv/tftp/pxelinux.cfg
root@gxfsberry:/srv/tftp# mkdir /srv/tftp/networkboot
root@gxfsberry:/srv/tftp# mkdir /srv/tftp/networkboot/rhel78
root@gxfsberry:/srv/tftp# cp /var/www/html/repo/rhel78/images/pxeboot/{initrd.img,vmlinuz} /srv/tftp/networkboot/rhel78/
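pxelinux reads its boot menu from pxelinux.cfg/. A minimal /srv/tftp/pxelinux.cfg/default sketch matching the paths above — the HTTP address for inst.repo is a placeholder for the nginx server that serves /var/www/html/repo/rhel78:

```
# /srv/tftp/pxelinux.cfg/default -- sketch; 192.168.0.1 is a placeholder
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL rhel78
  MENU LABEL Install RHEL 7.8
  KERNEL networkboot/rhel78/vmlinuz
  APPEND initrd=networkboot/rhel78/initrd.img inst.repo=http://192.168.0.1/repo/rhel78
```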
walkthrough after initial automated setup of gxfsadm server
After the server has been set up successfully via the gxfsberry, it needs to be attached to the RHEL license server. For that the server needs internet access. Either use a routed network connection (the gxfsberry is configured to do NAT on its WLAN interface), or set http_proxy and/or https_proxy as environment variables.
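If the proxy route is used, the variables can be set like this (the proxy endpoint below is a placeholder; subscription-manager can alternatively be pointed at a proxy via /etc/rhsm/rhsm.conf):

```shell
# placeholder proxy endpoint -- replace with the real one
export http_proxy="http://proxy.example.com:3128"
export https_proxy="$http_proxy"
```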
subscription-manager register --username alexander.menck@emea.nec.com
subscription-manager list --available
subscription-manager attach --pool=8a85f99f78d761fb0178ea49019d6174
subscription-manager config --rhsm.manage_repos=1
subscription-manager repos --enable rhel-8-for-x86_64-highavailability-rpms
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[root@gxfsadm0 yum.repos.d]# subscription-manager repos --list-enabled
+----------------------------------------------------------+
    Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID:   rhel-8-for-x86_64-baseos-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
Repo URL:  https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/baseos/os
Enabled:   1

Repo ID:   rhel-8-for-x86_64-appstream-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
Repo URL:  https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/appstream/os
Enabled:   1

Repo ID:   rhel-8-for-x86_64-highavailability-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs)
Repo URL:  https://cdn.redhat.com/content/dist/rhel8/$releasever/x86_64/highavailability/os
Enabled:   1

[root@gxfsadm0 yum.repos.d]# yum install pacemaker corosync pcs
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:07 ago on Tue 20 Apr 2021 16:22:18 CEST.
Dependencies resolved.
================================================================================
 Package                         Arch    Version                                   Repository                               Size
================================================================================
Installing:
 corosync                        x86_64  3.0.3-4.el8                               rhel-8-for-x86_64-highavailability-rpms  265 k
 pacemaker                       x86_64  2.0.4-6.el8_3.2                           rhel-8-for-x86_64-highavailability-rpms  438 k
 pcs                             x86_64  0.10.6-4.el8_3.1                          rhel-8-for-x86_64-highavailability-rpms   11 M
Upgrading:
 pacemaker-cluster-libs          x86_64  2.0.4-6.el8_3.2                           rhel-8-for-x86_64-appstream-rpms         128 k
 pacemaker-libs                  x86_64  2.0.4-6.el8_3.2                           rhel-8-for-x86_64-appstream-rpms         690 k
 pacemaker-schemas               noarch  2.0.4-6.el8_3.2                           rhel-8-for-x86_64-appstream-rpms          70 k
Installing dependencies:
 cifs-utils                      x86_64  6.8-3.el8                                 gxfsstage-rhel83                          96 k
 clufter-bin                     x86_64  0.77.1-5.el8                              rhel-8-for-x86_64-highavailability-rpms   34 k
 clufter-common                  noarch  0.77.1-5.el8                              rhel-8-for-x86_64-highavailability-rpms   81 k
 liberation-fonts-common         noarch  1:2.00.3-7.el8                            gxfsstage-rhel83                          25 k
 liberation-sans-fonts           noarch  1:2.00.3-7.el8                            gxfsstage-rhel83                         610 k
 libknet1                        x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   78 k
 libknet1-compress-bzip2-plugin  x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   15 k
 libknet1-compress-lz4-plugin    x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   16 k
 libknet1-compress-lzma-plugin   x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   15 k
 libknet1-compress-lzo2-plugin   x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   15 k
 libknet1-compress-plugins-all   x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   11 k
 libknet1-compress-zlib-plugin   x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   15 k
 libknet1-crypto-nss-plugin      x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   19 k
 libknet1-crypto-openssl-plugin  x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   17 k
 libknet1-crypto-plugins-all     x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   11 k
 libknet1-plugins-all            x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   11 k
 libnozzle1                      x86_64  1.16-1.el8                                rhel-8-for-x86_64-highavailability-rpms   32 k
 overpass-fonts                  noarch  3.0.2-3.el8                               gxfsstage-rhel83-appstream               1.1 M
 pacemaker-cli                   x86_64  2.0.4-6.el8_3.2                           rhel-8-for-x86_64-highavailability-rpms  327 k
 perl-TimeDate                   noarch  1:2.30-15.module+el8.3.0+6498+9eecfe51    gxfsstage-rhel83-appstream                53 k
 python3-asn1crypto              noarch  0.24.0-3.el8                              gxfsstage-rhel83                         181 k
 python3-cffi                    x86_64  1.11.5-5.el8                              gxfsstage-rhel83                         238 k
 python3-clufter                 noarch  0.77.1-5.el8                              rhel-8-for-x86_64-highavailability-rpms  351 k
 python3-cryptography            x86_64  2.3-3.el8                                 gxfsstage-rhel83                         511 k
 python3-lxml                    x86_64  4.2.3-1.el8                               gxfsstage-rhel83-appstream               1.5 M
 python3-pyOpenSSL               noarch  18.0.0-1.el8                              gxfsstage-rhel83-appstream               103 k
 python3-pycparser               noarch  2.14-14.el8                               gxfsstage-rhel83                         109 k
 python3-webencodings            noarch  0.5.1-6.el8                               gxfsstage-rhel83-appstream                27 k
 resource-agents                 x86_64  4.1.1-68.el8_3.2                          rhel-8-for-x86_64-highavailability-rpms  482 k
 ruby                            x86_64  2.5.5-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                86 k
 ruby-irb                        noarch  2.5.5-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream               102 k
 ruby-libs                       x86_64  2.5.5-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream               2.9 M
 rubygem-json                    x86_64  2.1.0-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                90 k
 rubygem-openssl                 x86_64  2.1.2-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream               189 k
 rubygem-psych                   x86_64  3.0.2-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                95 k
 rubygems                        noarch  2.7.6.2-106.module+el8.3.0+7153+c6f6daa5  gxfsstage-rhel83-appstream               308 k
Installing weak dependencies:
 python3-html5lib                noarch  1:0.999999999-6.el8                       gxfsstage-rhel83-appstream               214 k
 rubygem-bigdecimal              x86_64  1.3.4-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                97 k
 rubygem-did_you_mean            noarch  1.2.0-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                81 k
 rubygem-io-console              x86_64  0.4.6-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream                66 k
 rubygem-rdoc                    noarch  6.0.1-106.module+el8.3.0+7153+c6f6daa5    gxfsstage-rhel83-appstream               486 k
Enabling module streams:
 ruby                                    2.5

Transaction Summary
================================================================================
Install  44 Packages
Upgrade   3 Packages

Total download size: 24 M
Is this ok [y/N]:
After the installation of corosync, the corosync config file needs to be copied (and adapted) to /etc/corosync. After that, corosync, pacemaker and pcsd need to be enabled and started.
systemctl enable corosync
systemctl enable pacemaker
systemctl enable pcsd
systemctl start corosync
systemctl start pacemaker
systemctl start pcsd
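For reference, a minimal two-node corosync.conf sketch — the cluster name, node names and ring addresses are placeholders; the real file is the one copied from the config server as described above:

```
# /etc/corosync/corosync.conf -- two-node sketch, all values are placeholders
totem {
    version: 2
    cluster_name: gxfsadm
    transport: knet
}
nodelist {
    node {
        ring0_addr: 10.12.120.1
        name: gxfsadm0
        nodeid: 1
    }
    node {
        ring0_addr: 10.12.120.2
        name: gxfsadm1
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
}
logging {
    to_syslog: yes
}
```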
[root@gxfsadm1 ~]# wget http://10.1.17.7/configs/certificates/gxfsadm1_of_dwd_de.pem
--2021-04-20 18:26:11--  http://10.1.17.7/configs/certificates/gxfsadm1_of_dwd_de.pem
Connecting to 10.1.17.7:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 46633 (46K)
Saving to: 'gxfsadm1_of_dwd_de.pem'

gxfsadm1_of_dwd_de.pem  100%[======================>]  45.54K  --.-KB/s  in 0.001s

2021-04-20 18:26:11 (56.5 MB/s) - 'gxfsadm1_of_dwd_de.pem' saved [46633/46633]

[root@gxfsadm1 ~]# subscription-manager import --certificate=gxfsadm1_of_dwd_de.pem
You are attempting to use a locale: "de_DE.UTF-8" that is not fully supported by this system.
Certificate gxfsadm1_of_dwd_de.pem was successfully imported
[root@gxfsadm1 ~]# subscription-manager list --consumed
You are attempting to use a locale: "de_DE.UTF-8" that is not fully supported by this system.
+-------------------------------------------+
   Consumed Subscriptions
+-------------------------------------------+
Subscription Name: Red Hat Developer Subscription for Individuals
Provides:          Red Hat Enterprise Linux High Availability - Update Services for SAP Solutions
                   Red Hat Enterprise Linux Atomic Host
                   Red Hat Enterprise Linux Atomic Host Beta
                   Red Hat Container Images
                   Red Hat Developer Tools (for RHEL Server)
                   Red Hat Container Images Beta
                   Red Hat Developer Tools Beta (for RHEL Server)
                   Red Hat Enterprise Linux for x86_64
                   Red Hat Enterprise Linux Resilient Storage for x86_64
                   Red Hat Enterprise Linux Resilient Storage for x86_64 - Extended Update Support
                   dotNET on RHEL (for RHEL Server)
                   Red Hat Enterprise Linux Scalable File System (for RHEL Server)
                   dotNET on RHEL Beta (for RHEL Server)
                   Red Hat Enterprise Linux Scalable File System (for RHEL Server) - Extended Update Support
                   Red Hat Ansible Automation Platform
                   Oracle Java (for RHEL Server)
                   Red Hat Enterprise Linux for SAP HANA for x86_64
                   Red Hat Enterprise Linux for Real Time
                   Red Hat Software Collections (for RHEL Server)
                   RHEL for SAP - Extended Update Support
                   Oracle Java (for RHEL Server) - Extended Update Support
                   RHEL for SAP HANA - Extended Update Support
                   Red Hat S-JIS Support (for RHEL Server) - Extended Update Support
                   Red Hat Software Collections Beta (for RHEL Server)
                   Red Hat Ansible Engine
                   Red Hat Enterprise Linux Server
                   Red Hat Container Development Kit
                   MRG Realtime
                   Red Hat CodeReady Linux Builder for x86_64
                   Red Hat CodeReady Linux Builder for ARM 64
                   Red Hat Developer Toolset (for RHEL Server)
                   Red Hat Enterprise Linux High Performance Networking (for RHEL Server)
                   Red Hat Enterprise Linux High Performance Networking (for RHEL Server) - Extended Update Support
                   Red Hat Enterprise Linux High Performance Networking (for RHEL Compute Node)
                   Red Hat Enterprise Linux for x86_64 - Extended Update Support
                   Red Hat Enterprise Linux for ARM 64
                   Red Hat Beta
                   Red Hat EUCJP Support (for RHEL Server) - Extended Update Support
                   RHEL for SAP (for IBM Power LE) - Update Services for SAP Solutions
                   Red Hat Enterprise Linux Server - Update Services for SAP Solutions
                   Red Hat Enterprise Linux for SAP Applications for x86_64
                   RHEL for SAP - Update Services for SAP Solutions
                   RHEL for SAP HANA - Update Services for SAP Solutions
                   Red Hat CodeReady Linux Builder for x86_64 - Extended Update Support
                   Red Hat Enterprise Linux High Availability for x86_64
                   Red Hat Enterprise Linux High Availability for x86_64 - Extended Update Support
                   Red Hat Enterprise Linux Load Balancer (for RHEL Server)
                   Red Hat Enterprise Linux Load Balancer (for RHEL Server) - Extended Update Support
SKU:               RH00798
Contract:
Account:           5508222
Serial:            3694900056169925488
Pool ID:           8a85f99f78d761fb0178ea49019d6174
Provides Management: No
Active:            True
Quantity Used:     1
Service Type:
Roles:
Service Level:     Self-Support
Usage:
Add-ons:
Status Details:    Subscription management service doesn't support status details.
Subscription Type:
Starts:            04/19/21
Ends:              04/19/22
Entitlement Type:  Physical
[root@gxfsadm1 ~]#

See also: https://access.redhat.com/solutions/2158251
create LVM
[root@gxfsadm0 ~]# for i in c d e f g h; do pvcreate /dev/sd$i; done
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
  Physical volume "/dev/sdf" successfully created.
  Physical volume "/dev/sdg" successfully created.
  Physical volume "/dev/sdh" successfully created.
[root@gxfsadm0 ~]# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sdc       lvm2 ---  <1.75t <1.75t
  /dev/sdd       lvm2 ---  <1.75t <1.75t
  /dev/sde       lvm2 ---  <1.75t <1.75t
  /dev/sdf       lvm2 ---  <1.75t <1.75t
  /dev/sdg       lvm2 ---  <1.75t <1.75t
  /dev/sdh       lvm2 ---  <1.75t <1.75t
[root@gxfsadm0 ~]# vgcreate drbdstor /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
  Volume group "drbdstor" successfully created
[root@gxfsadm0 ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  drbdstor   6   0   0 wz--n- <10.48t <10.48t
[root@gxfsadm0 ~]# lvcreate --type raid6 -i 4 -n webapps_vol1 -L 500GiB drbdstor
  Using default stripesize 64.00 KiB.
  Logical volume "webapps_vol1" created.
[root@gxfsadm0 ~]# lvcreate --type raid6 -i 4 -n monitor_vol1 -L 500GiB drbdstor
  Using default stripesize 64.00 KiB.
  Logical volume "monitor_vol1" created.
[root@gxfsadm0 ~]# lvcreate --type raid6 -i 4 -n xcat_vol1 -L 2TiB drbdstor
  Using default stripesize 64.00 KiB.
  Logical volume "xcat_vol1" created.
[root@gxfsadm0 ~]# lvcreate --type raid6 -i 4 -n database_vol1 -L 1TiB drbdstor
  Using default stripesize 64.00 KiB.
  Logical volume "database_vol1" created.
[root@gxfsadm0 ~]# lvcreate --type raid6 -i 4 -n home_vol1 -L 2TiB drbdstor
  Using default stripesize 64.00 KiB.
  Logical volume "home_vol1" created.
[root@gxfsadm0 ~]# lvs
  LV            VG       Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  database_vol1 drbdstor rwi-a-r---   1.00t                                  0.38
  home_vol1     drbdstor rwi-a-r---   2.00t                                  0.00
  monitor_vol1  drbdstor rwi-a-r--- 500.00g                                  1.93
  webapps_vol1  drbdstor rwi-a-r--- 500.00g                                  5.55
  xcat_vol1     drbdstor rwi-a-r---   2.00t                                  0.26
[root@gxfsadm0 ~]#
lvcreate --type raid6 -i 4 -n webapps_vol1 -L 500GiB drbdstor
lvcreate --type raid6 -i 4 -n monitor_vol1 -L 500GiB drbdstor
lvcreate --type raid6 -i 4 -n gxfsq_vol1 -L 500GiB drbdstor
lvcreate --type raid6 -i 4 -n xcat_vol1 -L 2TiB drbdstor
lvcreate --type raid6 -i 4 -n database_vol1 -L 1TiB drbdstor
lvcreate --type raid6 -i 4 -n home_vol1 -L 2TiB drbdstor
copy over drbd config files
cd /etc/drbd.d/
wget http://10.1.67.7/configs/gxfsadm/database_vol1.res
wget http://10.1.67.7/configs/gxfsadm/home_vol1.res
wget http://10.1.67.7/configs/gxfsadm/monitor_vol1.res
wget http://10.1.67.7/configs/gxfsadm/webapps_vol1.res
wget http://10.1.67.7/configs/gxfsadm/xcat_vol1.res
wget http://10.1.67.7/configs/gxfsadm/gxfsq_vol1.res
wget http://10.1.67.7/configs/gxfsadm/drbdmanage-resources.res

[root@gxfsadm1 drbd.d]# ls -l
total 28
-rw-r--r--. 1 root root  221 May 15  2020 database_vol1.res
-rw-r--r--. 1 root root  211 May 15  2020 drbdmanage-resources.res
-rw-r--r--. 1 root root 2563 Jun 24  2020 global_common.conf
-rw-r--r--. 1 root root  213 May 15  2020 home_vol1.res
-rw-r--r--. 1 root root  225 May 15  2020 monitor_vol1.res
-rw-r--r--. 1 root root  219 May 15  2020 webapps_vol1.res
-rw-r--r--. 1 root root  229 May 15  2020 xcat_vol1.res
[root@gxfsadm1 drbd.d]#
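For reference, such a .res file roughly looks like this — a sketch only; the replication addresses and port are placeholders, and the real files come from the config server above. The device minor corresponds to the /dev/drbd100..105 devices used further down, the backing disk is the matching LV in the drbdstor VG:

```
# database_vol1.res -- sketch; hostnames, addresses and port are placeholders
resource database_vol1 {
    device      /dev/drbd100;
    disk        /dev/drbdstor/database_vol1;
    meta-disk   internal;
    on gxfsadm0 {
        address 10.12.120.1:7100;
    }
    on gxfsadm1 {
        address 10.12.120.2:7100;
    }
}
```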
create PCS / DRBD config
create metadata on drbd volumes
===============================
drbdadm create-md database_vol1 --force
drbdadm create-md monitor_vol1 --force
drbdadm create-md webapps_vol1 --force
drbdadm create-md xcat_vol1 --force
drbdadm create-md home_vol1 --force
drbdadm create-md gxfsq_vol1 --force

startup drbd volumes
====================
drbdadm up database_vol1
drbdadm up monitor_vol1
drbdadm up webapps_vol1
drbdadm up xcat_vol1
drbdadm up home_vol1
drbdadm up gxfsq_vol1
Set each DRBD volume to primary on only one server in the cluster. The drbd service needs to be running for these commands to succeed. Afterwards, synchronization between the two servers starts.
drbdadm primary database_vol1 --force
drbdadm primary monitor_vol1 --force
drbdadm primary webapps_vol1 --force
drbdadm primary xcat_vol1 --force
drbdadm primary home_vol1 --force
drbdadm primary gxfsq_vol1 --force
create filesystems
mkfs.xfs /dev/drbd100
mkfs.xfs /dev/drbd101
mkfs.xfs /dev/drbd102
mkfs.xfs /dev/drbd103
mkfs.xfs /dev/drbd104
mkfs.xfs /dev/drbd105
create docker networks manually for xcat, as creation via the docker-compose.yml file does not support defining the gateway address.
docker network create dck_xcat_adm_net --subnet 172.17.118.128/25 --gateway 172.17.118.130 --driver=macvlan -o parent=eno1
IPMI stonith configuration
==========================

DWDOF
-----
pcs stonith create fence_zofmgmt01 fence_ipmilan pcmk_host_list="zofmgmt01" lanplus="1" method="onoff" ipaddr="172.17.119.123" login="ADMIN" passwd="WPPRZHUNFO" delay=15 op monitor interval=60s
pcs stonith create fence_zofmgmt02 fence_ipmilan pcmk_host_list="zofmgmt02" lanplus="1" method="onoff" ipaddr="172.17.119.125" login="ADMIN" passwd="TFHPDTVQIH" delay=15 op monitor interval=60s

DWDLU
-----
pcs stonith create fence_zlumgmt01 fence_ipmilan pcmk_host_list="zlumgmt01" lanplus="1" method="onoff" ipaddr="172.17.119.251" login="ADMIN" passwd="FKMOAQYCZH" delay=15 op monitor interval=60s
pcs stonith create fence_zlumgmt02 fence_ipmilan pcmk_host_list="zlumgmt02" lanplus="1" method="onoff" ipaddr="172.17.119.253" login="ADMIN" passwd="GTRBFTKYJV" delay=15 op monitor interval=60s

DWDEUS
------
pcs stonith create fence_zeumgmt01 fence_ipmilan pcmk_host_list="zeumgmt01" lanplus="1" method="onoff" ipaddr="172.17.137.59" login="ADMIN" passwd="WEQHYIIDYY" delay=15 op monitor interval=60s
pcs stonith create fence_zeumgmt02 fence_ipmilan pcmk_host_list="zeumgmt02" lanplus="1" method="onoff" ipaddr="172.17.137.61" login="ADMIN" passwd="CBECHTOYIK" delay=15 op monitor interval=60s

for all
-------
pcs property set stonith-action=reboot
pcs property set stonith-enabled=true
#### no quorum, otherwise both nodes would go down ####
pcs property set no-quorum-policy=ignore

database
========
pcs resource create drbd-database_vol1 ocf:linbit:drbd drbd_resource=database_vol1 op monitor interval=60s
pcs resource promotable drbd-database_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create database_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd100" directory="/drbd/database_vol1" fstype="xfs" --group database
pcs constraint colocation add database with drbd-database_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-database_vol1-clone then start database
pcs resource create VIP_mariadb-drbd IPaddr2 ip=10.12.120.20 cidr_netmask=25 op monitor interval=5s --group database
pcs resource create DCK_mariadb ocf:heartbeat:docker image="necgxfs/mariadb_v1.2" op monitor timeout="30s" interval="30s"
pcs resource update DCK_mariadb reuse=1 name=mariadb
pcs resource group add database DCK_mariadb

monitor
=======
pcs resource create drbd-monitor_vol1 ocf:linbit:drbd drbd_resource=monitor_vol1 op monitor interval=60s
pcs resource promotable drbd-monitor_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create monitor_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd101" directory="/drbd/monitor_vol1" fstype="xfs" --group monitor
pcs constraint colocation add monitor with drbd-monitor_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-monitor_vol1-clone then start monitor
pcs resource create VIP_icinga2-drbd IPaddr2 ip=10.12.120.21 cidr_netmask=25 op monitor interval=5s --group monitor
pcs resource create DCK_icinga2 ocf:heartbeat:docker image="necgxfs/icinga2_v1.1" op monitor timeout="30s" interval="30s"
pcs resource update DCK_icinga2 reuse=1 name=icinga2
pcs resource group add monitor DCK_icinga2
pcs resource create VIP_icingaweb2-drbd IPaddr2 ip=10.12.120.22 cidr_netmask=25 op monitor interval=5s --group monitor
pcs resource create DCK_icingaweb2 ocf:heartbeat:docker image="necgxfs/icingaweb2_v1.1" op monitor timeout="30s" interval="30s"
pcs resource update DCK_icingaweb2 reuse=1 name=icingaweb2
pcs resource group add monitor DCK_icingaweb2

webapps
=======
pcs resource create drbd-webapps_vol1 ocf:linbit:drbd drbd_resource=webapps_vol1 op monitor interval=60s
pcs resource promotable drbd-webapps_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create webapps_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd102" directory="/drbd/webapps_vol1" fstype="xfs" --group webapps
pcs constraint colocation add webapps with drbd-webapps_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-webapps_vol1-clone then start webapps
pcs resource create DCK_httpd ocf:heartbeat:docker image="necgxfs/httpd_v1.2" op monitor timeout="30s" interval="30s"
pcs resource update DCK_httpd reuse=1 name=httpd
pcs resource group add webapps DCK_httpd

DWDOF
-----
pcs resource create VIP_www-adm IPaddr2 ip=172.17.118.124 cidr_netmask=25 op monitor interval=5s --group webapps

DWDLU
-----
pcs resource create VIP_www-adm IPaddr2 ip=172.17.118.252 cidr_netmask=25 op monitor interval=5s --group webapps

DWDEUS
------
pcs resource create VIP_www-adm IPaddr2 ip=172.17.137.28 cidr_netmask=27 op monitor interval=5s --group webapps

xcat
====
pcs resource create drbd-xcat_vol1 ocf:linbit:drbd drbd_resource=xcat_vol1 op monitor interval=60s
pcs resource promotable drbd-xcat_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create xcat_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd103" directory="/drbd/xcat_vol1" fstype="xfs" --group xcat
pcs constraint colocation add xcat with drbd-xcat_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-xcat_vol1-clone then start xcat
pcs resource create DCK_xcat ocf:heartbeat:docker image="necgxfs/xcat2.16.1" op monitor timeout="30s" interval="30s"
pcs resource update DCK_xcat reuse=1 name=xcat
pcs resource group add xcat DCK_xcat

DWDOF
-----
pcs resource create VIP_xcat-adm IPaddr2 ip=172.17.118.8 cidr_netmask=25 op monitor interval=5s --group xcat
pcs resource create VIP_xcat-bmc IPaddr2 ip=172.17.119.8 cidr_netmask=25 op monitor interval=5s --group xcat

DWDLU
-----
pcs resource create VIP_xcat-adm IPaddr2 ip=172.17.118.136 cidr_netmask=25 op monitor interval=5s --group xcat
pcs resource create VIP_xcat-bmc IPaddr2 ip=172.17.119.136 cidr_netmask=25 op monitor interval=5s --group xcat

DWDEUS
------
pcs resource create VIP_xcat-adm IPaddr2 ip=172.17.137.8 cidr_netmask=25 op monitor interval=5s --group xcat
pcs resource create VIP_xcat-bmc IPaddr2 ip=172.17.137.8 cidr_netmask=25 op monitor interval=5s --group xcat

home
====
pcs resource create drbd-home_vol1 ocf:linbit:drbd drbd_resource=home_vol1 op monitor interval=60s
pcs resource promotable drbd-home_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create home_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd104" directory="/drbd/home_vol1" fstype="xfs" --group home
pcs constraint colocation add home with drbd-home_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-home_vol1-clone then start home

gxfsq
=====
pcs resource create drbd-gxfsq_vol1 ocf:linbit:drbd drbd_resource=gxfsq_vol1 op monitor interval=60s
pcs resource promotable drbd-gxfsq_vol1 promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs resource create gxfsq_vol1-fs ocf:heartbeat:Filesystem device="/dev/drbd105" directory="/drbd/gxfsq_vol1" fstype="xfs" --group gxfsq
pcs constraint colocation add gxfsq with drbd-gxfsq_vol1-clone INFINITY with-rsc-role=Master
pcs constraint order promote drbd-gxfsq_vol1-clone then start gxfsq
as the next step, pcs needs to be triggered to restart drbd: issue a pcs resource cleanup. After that the drbd devices are visible, but in state Connecting. Use the guideline under https://www.recitalsoftware.com/blogs/29-howto-resolve-drbd-split-brain-recovery-manually to revalidate the connection.
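In short, the manual split-brain recovery from that guideline boils down to the following (a sketch for one volume; run it only after deciding which node's data to keep — the first node's changes since the split are discarded):

```
# on the node whose data will be DISCARDED (split-brain victim)
drbdadm secondary database_vol1
drbdadm connect --discard-my-data database_vol1

# on the node whose data survives
drbdadm connect database_vol1
```

Repeat per affected volume and verify with drbdadm status that the resources return to Connected/UpToDate before handing control back to pcs.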