MySQL HA InnoDB Cluster with MySQLRouter and KeepAlived

Hello everyone,

I am here with a new topic today :).

I am going to write about HA MySQL InnoDB Cluster. Let’s start!

First of all, I have 6 servers: 3 of them for MySQL instances, 2 of them for mysqlrouter, and the last one for cluster management and monitoring (we use MySQL Enterprise Monitor for monitoring).

Here are the hostnames:

MySQL instances:

inno1
inno2
inno3

mysqlrouter servers:

mysqlrouter1
mysqlrouter2

Cluster management and monitoring server:

inno-server-mon

In order to configure InnoDB Cluster, we need to make some changes on our servers.

1. Disabling SELinux

In order to do this:

vi /etc/selinux/config

I commented out the line containing “SELINUX=enforcing” and added the parameter below instead:

SELINUX=disabled
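If you want the change to take effect without a reboot, you can also switch SELinux off at runtime; a minimal sketch (the config change above keeps it disabled after the next reboot):

# Put SELinux into permissive mode for the running system (no reboot needed)
setenforce 0
# Verify the current mode
getenforce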

2. Disabling firewalld (if you use iptables, configure your rules accordingly)

To disable firewalld:

systemctl stop firewalld
systemctl disable firewalld
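Alternatively, if your security policy requires the firewall to stay on, you can open only the ports InnoDB Cluster needs; a sketch, assuming the default ports (3306 for the classic protocol, 33060 for the X protocol, 33061 for Group Replication):

# Open the MySQL and Group Replication ports instead of disabling the firewall
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=33060/tcp
firewall-cmd --permanent --add-port=33061/tcp
firewall-cmd --reload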

I skipped the MySQL server installation on each database server because it’s not our topic.

By the way, the MySQL version we use is 8.0.23.

After the MySQL server installation, we create a user for management on all database instances:

CREATE USER 'root'@'%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
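Allowing root from any host is fine for a lab, but in production you may prefer a dedicated administration account restricted to your management network; a minimal sketch, assuming a hypothetical icadmin user and a 10.0.3.% management subnet:

# Create a dedicated cluster admin account limited to the management subnet (hypothetical names)
mysql -u root -p -e "CREATE USER 'icadmin'@'10.0.3.%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON *.* TO 'icadmin'@'10.0.3.%' WITH GRANT OPTION;
FLUSH PRIVILEGES;"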

We are going to deploy a managed InnoDB cluster, so we need to install mysqlsh on the inno-server-mon server.

If you don’t have the MySQL yum repository, you should follow the steps here:
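In short, setting up the repository comes down to installing the MySQL community release package; a sketch, assuming an EL7 host (the exact package name and version on dev.mysql.com may differ for your platform):

# Install the MySQL yum repository package (EL7 example; adjust for your OS)
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm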

sudo yum update

sudo yum install mysql-shell

After successful mysql-shell installation, on inno-server-mon:

mysqlsh --uri root@inno1:3306 --user root

Now, we can check whether our MySQL server instances are suitable for an InnoDB cluster or not.

dba.checkInstanceConfiguration('root@inno1:3306')
dba.checkInstanceConfiguration('root@inno2:3306')
dba.checkInstanceConfiguration('root@inno3:3306')

If your instances’ configurations do not meet the requirements, you can configure them with the commands below:

dba.configureInstance('root@inno1:3306')
dba.configureInstance('root@inno2:3306')
dba.configureInstance('root@inno3:3306')
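configureInstance prompts interactively, and some persisted settings only take effect after a restart; if you want mysqlsh to handle the restart for you, a sketch using the restart option (run from the inno-server-mon shell, once per instance):

# Configure the instance and let mysqlsh restart it so settings take effect
mysqlsh --uri root@inno1:3306 -e "dba.configureInstance('root@inno1:3306', {restart: true})"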

Then you can check the configurations again with:

dba.checkInstanceConfiguration('root@inno1:3306')
dba.checkInstanceConfiguration('root@inno2:3306')
dba.checkInstanceConfiguration('root@inno3:3306')

Example output:

Validating MySQL instance at inno1:3306 for use in an InnoDB cluster…
This instance reports its own address as inno1:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
Checking whether existing tables comply with Group Replication requirements…
No incompatible tables detected
Checking instance configuration…
Instance configuration is compatible with InnoDB cluster
The instance 'inno1:3306' is valid to be used in an InnoDB cluster.
{
"status": "ok"
}

Now, we can create a cluster:

var cluster = dba.createCluster('myinnocluster')
Validating instance at root@inno1:3306…
This instance reports its own address as inno1:3306
Instance configuration is suitable.
Creating InnoDB cluster 'myinnocluster' on 'root@inno1:3306'…
Adding Seed Instance…
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

To add new instances:

cluster.addInstance('root@inno2:3306')
cluster.addInstance('root@inno3:3306')

Due to the aforementioned MySQL version (8.0.23), clone-based state recovery is used; here is the output I got for instance number 2 (inno2):

Validating instance at inno2:3306…
This instance reports its own address as inno2:3306
Instance configuration is suitable.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster…
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish…
NOTE: inno2:3306 is being cloned from inno1:3306
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: inno2:3306 is shutting down…
* Waiting for server restart… ready
* inno2:3306 has restarted, waiting for clone to finish…
** Stage RESTART: Completed
* Clone process has finished: 1.2 GB transferred in 38 sec (30.26 MB/s)
State recovery already finished for 'inno2:3306'
The instance 'inno2:3306' was successfully added to the cluster

In order to check the status of the cluster, we use cluster.status():

JS > cluster.status()
{
    "clusterName": "myinnocluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "inno1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "inno1:3306": {
                "address": "inno1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno2:3306": {
                "address": "inno2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno3:3306": {
                "address": "inno3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "inno1:3306"
}

We can set a new primary instance:

JS > cluster.setPrimaryInstance("inno2")
Setting instance 'inno2' as the primary instance of cluster 'myinnocluster'…
Instance 'inno3:3306' remains SECONDARY.
Instance 'inno1:3306' was switched from PRIMARY to SECONDARY.
Instance 'inno2:3306' was switched from SECONDARY to PRIMARY.
WARNING: The cluster internal session is not the primary member anymore. For cluster management operations please obtain a fresh cluster handle using dba.getCluster().
The instance 'inno2' was successfully elected as primary.

Checking status again:

JS > cluster.status()
{
    "clusterName": "myinnocluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "inno2:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "inno1:3306": {
                "address": "inno1:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno2:3306": {
                "address": "inno2:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno3:3306": {
                "address": "inno3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "inno1:3306"
}

We can set the memberWeight option for each database instance to control how the new primary is elected when a failover occurs; the higher the weight, the higher the priority:

cluster.setInstanceOption("inno1:3306", "memberWeight", 100)
cluster.setInstanceOption("inno2:3306", "memberWeight", 50)
cluster.setInstanceOption("inno3:3306", "memberWeight", 25)
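To verify that the weights were applied, you can inspect the per-instance options; a small sketch run from the inno-server-mon shell (cluster.options() lists the configured options, including memberWeight, for every member):

# Print the cluster options, including memberWeight, for each member
mysqlsh --uri root@inno1:3306 -e "var c = dba.getCluster('myinnocluster'); print(c.options())"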

To see whether failover is handled successfully by the cluster, I restarted the mysqld service on the inno2 database server and checked the status again:
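On inno2 (assuming the systemd unit is named mysqld, as with the MySQL community RPMs):

# Restart MySQL on the current primary to trigger a failover
systemctl restart mysqld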

JS > cluster.status()
{
    "clusterName": "myinnocluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "inno1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure. 1 member is not active.",
        "topology": {
            "inno1:3306": {
                "address": "inno1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno2:3306": {
                "address": "inno2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "recoveryStatusText": "Recovery in progress",
                "role": "HA",
                "status": "RECOVERING",
                "version": "8.0.23"
            },
            "inno3:3306": {
                "address": "inno3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "inno1:3306"
}

As we can see, thanks to its higher memberWeight, inno1 was elected as our new primary instance.

After the cluster completes the recovery process:

JS > cluster.status()
{
    "clusterName": "myinnocluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "inno1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "inno1:3306": {
                "address": "inno1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno2:3306": {
                "address": "inno2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "inno3:3306": {
                "address": "inno3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "inno1:3306"
}

The next part is configuring the mysqlrouters. I say routers, plural, because our setup differs from the MySQL recommendation.
MySQL recommends setting mysqlrouter up on the application servers in order to use socket connections.
But in an enterprise environment we can neither reach our application servers nor do we have the right to manage them.

That’s why we configure 2 mysqlrouter instances behind a virtual IP to strengthen our HA architecture.

We need to install keepalived to have a virtual IP configuration on both mysqlrouter servers:

yum install keepalived
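You will also want keepalived to start at boot; on systemd-based systems (start the service only after you finish the configuration below):

# Enable keepalived at boot; start it once the configuration is in place
systemctl enable keepalived
systemctl start keepalived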

After installation, we can configure our first node as the master.

First, I made a backup copy of the original keepalived configuration file and emptied it:

cd /etc/keepalived/
cp keepalived.conf keepalived.conf_bck
> keepalived.conf
vi keepalived.conf

I added configuration below to the file:

! Configuration File for keepalived
global_defs {
    notification_email {
        root@mydomain.com
    }
    notification_email_from svr1@mydomain.com
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 41
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    virtual_ipaddress {
        your_virtual_ip_address
    }
}

Then I did the same for the second instance with some changes, because the second one will be a backup instance:

global_defs {
    notification_email {
        root@mydomain.com
    }
    notification_email_from svr2@mydomain.com
    smtp_server localhost
    smtp_connect_timeout 30
}
vrrp_instance VRRP1 {
    state BACKUP
    # Specify the network interface to which the virtual address is assigned
    interface ens192
    virtual_router_id 41
    # Set the value of priority lower on the backup server than on the master server
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1066
    }
    virtual_ipaddress {
        your_virtual_ip_address
    }
}

Let’s check the keepalived services!

On mysqlrouter1:

systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-05-06 12:09:23 +03; 12s ago
Process: 10535 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 10538 (keepalived)
Tasks: 2
CGroup: /system.slice/keepalived.service
├─10538 /usr/sbin/keepalived -D
└─10539 /usr/sbin/keepalived -D
May 06 12:09:26 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:26 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:26 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:26 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: (VI_1) Sending/queueing gratuitous ARPs on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 12:09:31 mysqlrouter1 Keepalived_vrrp[10539]: Sending gratuitous ARP on ens192 for your_virtual_ip_address

On mysqlrouter2:

systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-04-27 15:26:07 +03; 1 weeks 1 days ago
Main PID: 763 (keepalived)
Tasks: 2 (limit: 12438)
Memory: 6.8M
CGroup: /system.slice/keepalived.service
├─763 /usr/sbin/keepalived -D
└─764 /usr/sbin/keepalived -D
Apr 27 19:16:13 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: (VRRP1) Sending/queueing gratuitous ARPs on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
Apr 27 19:16:18 mysqlrouter2 Keepalived_vrrp[764]: Sending gratuitous ARP on ens192 for your_virtual_ip_address
May 06 10:52:55 mysqlrouter2 Keepalived_vrrp[764]: (VRRP1) Master received advert from 10.0.3.114 with higher priority 200, ours 100
May 06 10:52:55 mysqlrouter2 Keepalived_vrrp[764]: (VRRP1) Entering BACKUP STATE
May 06 10:52:55 mysqlrouter2 Keepalived_vrrp[764]: (VRRP1) removing VIPs.
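To confirm which node currently holds the virtual IP, check the interface on each router server:

# The VIP should appear on the master node's interface only
ip addr show ens192 | grep your_virtual_ip_address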

All set, so we can configure our mysqlrouters:

mysqlrouter --bootstrap root@mysql_cluster_rw_node:3306 --directory /var/lib/myrouter --conf-bind-address 0.0.0.0 --conf-base-port 3307 -u mysqlrouter

Let’s talk about the parameters we use to bootstrap mysqlrouter.

The “-u mysqlrouter” parameter specifies the system user that the mysqlrouter service runs as.

The “--conf-bind-address 0.0.0.0” parameter means our service will listen on every network interface of the machine. We could bind to our virtual IP instead, but there is a problem: the virtual IP can be active on only one router server at a time. So the backup server (assuming the keepalived services on both mysqlrouter servers are running successfully) has no virtual IP to listen on, and its mysqlrouter service would fail to start. That’s why we use “0.0.0.0”.

The “--directory /var/lib/myrouter” parameter specifies the directory where mysqlrouter will store its configuration files.

We will talk about the “--conf-base-port 3307” parameter a little bit later.

Output:

Please enter MySQL password for dba: 
# Reconfiguring MySQL Router instance at '/var/lib/myrouter'…
- Fetching password for current account (mysql_router22_bi1dzf4jbgp2) from keyring
Executing statements failed with: 'Error executing MySQL query "INSERT INTO mysql_innodb_cluster_metadata.v2_routers (address, product_name, router_name) VALUES ('mysqlrouter1', 'MySQL Router', '')": The MySQL server is running with the --super-read-only option so it cannot execute this statement (1290)' (1290), trying to connect to another node
Fetching Cluster Members
disconnecting from mysql-server
trying to connect to mysql-server at inno3:3306
- Creating account(s) (only those that are needed, if any)
- Using existing certificates from the '/var/lib/myrouter/data' directory
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /var/lib/myrouter/mysqlrouter.conf
# MySQL Router configured for the InnoDB Cluster 'myinnocluster'
After this MySQL Router has been started with the generated configuration
$ mysqlrouter -c /var/lib/myrouter/mysqlrouter.conf
the cluster 'myinnocluster' can be reached by connecting to:
## MySQL Classic protocol
- Read/Write Connections: localhost:3307
- Read/Only Connections: localhost:3308
## MySQL X protocol
- Read/Write Connections: localhost:3309
- Read/Only Connections: localhost:3310

After configuring the mysqlrouter instances with the “mysqlrouter --bootstrap …” command, we use /var/lib/myrouter/start.sh to start mysqlrouter.

/var/lib/myrouter/start.sh
logging facility initialized, switching logging to loggers specified in configuration

Now, we can use our VIP as the hostname with port 3307 for read/write connections, and the same VIP with port 3308 for read-only connections.
This is where the “--conf-base-port 3307” parameter comes in: it sets the first port of the range the router listens on, so the classic protocol gets 3307 (R/W) and 3308 (R/O), and the X protocol gets 3309 and 3310.
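A quick way to verify the routing, assuming the root account created earlier: connect through the VIP and check which backend answers.

# The read/write port should land on the primary (super_read_only = 0)
mysql -h your_virtual_ip_address -P 3307 -u root -p -e "SELECT @@hostname, @@super_read_only;"
# The read-only port should land on a secondary (super_read_only = 1)
mysql -h your_virtual_ip_address -P 3308 -u root -p -e "SELECT @@hostname, @@super_read_only;"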

This is okay, but mysqlrouter is not running as a service. So, after a mysqlrouter server restart it will not run automatically (at least with the configuration we have so far).

We need to configure the mysqlrouter service as well. On both mysqlrouter servers:

vi /etc/systemd/system/mysqlrouter.service

Put the lines below into the file:

[Unit]
Description=MySQL Router
After=network.target
After=syslog.target
[Service]
Type=notify
User=mysqlrouter
Group=mysqlrouter
# Start main service
ExecStart=/usr/bin/mysqlrouter -c /var/lib/myrouter/mysqlrouter.conf
# Sets open_files_limit
LimitNOFILE = 10000
Restart=on-failure
[Install]
WantedBy=multi-user.target

Then,

systemctl daemon-reload
systemctl enable mysqlrouter.service
systemctl start mysqlrouter.service
su - mysqlrouter
systemctl status mysqlrouter.service
● mysqlrouter.service
Loaded: loaded (/etc/systemd/system/mysqlrouter.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2021-05-06 13:31:18 +03; 9s ago
Main PID: 12768 (mysqlrouter)
Tasks: 25
CGroup: /system.slice/mysqlrouter.service
└─12768 /usr/bin/mysqlrouter -c /var/lib/myrouter/mysqlrouter.conf

You need to make a minor change if you want to start the mysqlrouter service as root or any user other than mysqlrouter:
comment out the User and Group parameters like this:

[Service]
Type=notify
#User=mysqlrouter
#Group=mysqlrouter

Now, we can start the mysqlrouter service as root as well.
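Finally, it is worth testing the router-level failover end to end; a short sketch (stop keepalived on the master router and watch the VIP move):

# On mysqlrouter1 (current VIP holder): simulate a router failure
systemctl stop keepalived
# On mysqlrouter2: the VIP should appear within a few seconds
ip addr show ens192 | grep your_virtual_ip_address
# From any client: connections through the VIP should keep working
mysql -h your_virtual_ip_address -P 3307 -u root -p -e "SELECT @@hostname;"
# On mysqlrouter1: restore the original state
systemctl start keepalived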

Thanks for reading :).
