How to migrate data from thickly provisioned to thinly provisioned volumes on a distribute-replicate volume with two pairs of replicated bricks in Red Hat Gluster Storage 3.1?

Solution Verified - Updated

Environment

  • Red Hat Gluster Storage 3.X

Issue

  • How to migrate data from thickly provisioned to thinly provisioned volumes on a distribute-replicate volume with two pairs of replicated bricks for the snapshot feature?

  • How to migrate data from thickly-provisioned to thinly-provisioned bricks?

Resolution

Scenario 1 : Convert existing brick from Thick to Thin

  • This approach is similar to the one used for replacing a failed brick with a new brick of the same name.
  • This method can be applied to a single brick at a time, or to multiple bricks at a time. Make sure that you do not remove all the bricks of the same replica pair.
  • While a brick is being replaced, the replica pair temporarily loses redundancy, so there is a chance of losing high availability if the remaining active brick goes down.

Steps
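The thin LV that backs the replacement brick can be prepared as sketched below. This is a minimal outline assuming an XFS brick on LVM; the volume group name (rhgs_vg), pool and LV names, and sizes are example placeholders, not values from this environment. The brick replacement itself should then follow the standard procedure for replacing a failed brick using the same path.

```shell
# Create a thin pool and a thinly provisioned LV in the existing VG.
# rhgs_vg, sizes, and names are examples only -- adjust for your environment.
lvcreate --thinpool rhgs_vg/thinpool --size 100G --chunksize 256K --poolmetadatasize 1G
lvcreate --thin --virtualsize 100G --name thinlv1 rhgs_vg/thinpool

# Format and mount the thin LV where the brick directory will live
mkfs.xfs -i size=512 /dev/rhgs_vg/thinlv1
mount /dev/rhgs_vg/thinlv1 /thinlv1
mkdir -p /thinlv1/brick1
```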

Scenario 2 : Add new thinly provisioned brick in the volume and remove the thick bricks

  • The steps are explained below with an example:

  • Current configuration includes one distribute-replicate volume with two pairs of replicated bricks, (brick1<->brick2) and (brick3<->brick4).

Volume Name: dist-vol
Type: Distributed-Replicate
Volume ID: 91a1bbe8-7c01-4875-9292-a1c476547a28
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.redhat.com:/thicklv1/brick1         |  ----    Replica 1
Brick2: node2.redhat.com:/thicklv1/brick2
Brick3: node3.redhat.com:/thicklv1/brick3         |  ----    Replica 2
Brick4: node4.redhat.com:/thicklv1/brick4
Options Reconfigured:
performance.readdir-ahead: on
cluster.self-heal-daemon: off
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Step 1:

With the first replica pair (brick1<->brick2) and the second replica pair (brick3<->brick4), add two thinly provisioned bricks: (brick5) to the first replica pair and (brick6) to the second replica pair, then perform a self-heal. As a result, three replicas (brick1<->brick2<->brick5) exist in the first replica group and three replicas (brick3<->brick4<->brick6) exist in the second replica group.

Add the thinly-provisioned bricks (brick5 & brick6) with replica 3:

# gluster volume add-brick dist-vol replica 3 node1.redhat.com:/thinlv1/brick5  node3.redhat.com:/thinlv1/brick6
# gluster volume info dist-vol
Volume Name: dist-vol
Type: Distributed-Replicate
Volume ID: a4c24243-c981-40ef-8400-009c943cb698
Status: Started
Snap Volume: no
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node1.redhat.com:/thicklv1/brick1           |    -----     Replica 1
Brick2: node2.redhat.com:/thicklv1/brick2
Brick3: node1.redhat.com:/thinlv1/brick5
Brick4: node3.redhat.com:/thicklv1/brick3           |    -----     Replica 2
Brick5: node4.redhat.com:/thicklv1/brick4
Brick6: node3.redhat.com:/thinlv1/brick6
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Step 2:

  • Run a full self-heal to initiate healing of data to the newly added bricks.
  • This might take some time.
  • Wait for the heal to complete before moving to the next step.
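The full heal can be triggered and monitored with the standard gluster heal commands, shown here for the example volume dist-vol. Note that the volume info above shows cluster.self-heal-daemon set to off; the self-heal daemon must be enabled for a full heal to run, so the first command below re-enables it.

```shell
# Ensure the self-heal daemon is running (it is shown as off in the volume info above)
gluster volume set dist-vol cluster.self-heal-daemon on

# Trigger a full self-heal on the volume
gluster volume heal dist-vol full

# Monitor progress; the heal is complete when no entries remain pending on any brick
gluster volume heal dist-vol info
```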

Step 3:

Remove the existing thickly-provisioned brick1 and brick3, leaving the brick2<->brick5 replica pair in the first replica group and the brick4<->brick6 replica pair in the second replica group.

Remove the thickly-provisioned bricks (brick1 and brick3) with replica 2:

# gluster volume remove-brick dist-vol replica 2  node1.redhat.com:/thicklv1/brick1  node3.redhat.com:/thicklv1/brick3 force
# gluster volume info dist-vol
Volume Name: dist-vol
Type: Distributed-Replicate
Volume ID: a4c24243-c981-40ef-8400-009c943cb698
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node2.redhat.com:/thicklv1/brick2       |    ------    Replica 1
Brick2: node1.redhat.com:/thinlv1/brick5
Brick3: node4.redhat.com:/thicklv1/brick4        |    ------    Replica 2
Brick4: node3.redhat.com:/thinlv1/brick6
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Step 4:

Add two additional thinly provisioned bricks: (brick7) to the first replica group and (brick8) to the second replica pair, then perform a self-heal. As a result, three replicas (brick2<->brick5<->brick7) are in the first replica group and three replicas (brick4<->brick6<->brick8) are in the second replica group.

Add the thinly-provisioned bricks (brick7 & brick8) with replica 3:

# gluster volume add-brick dist-vol replica 3 node2.redhat.com:/thinlv1/brick7  node4.redhat.com:/thinlv1/brick8
# gluster volume info dist-vol
Volume Name: dist-vol
Type: Distributed-Replicate
Volume ID: a4c24243-c981-40ef-8400-009c943cb698
Status: Started
Snap Volume: no
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node2.redhat.com:/thicklv1/brick2      |  -----   Replica 1
Brick2: node1.redhat.com:/thinlv1/brick5
Brick3: node2.redhat.com:/thinlv1/brick7
Brick4: node4.redhat.com:/thicklv1/brick4       |  -----    Replica 2
Brick5: node3.redhat.com:/thinlv1/brick6
Brick6: node4.redhat.com:/thinlv1/brick8
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256

Step 5:

  • Run a full self-heal to initiate healing of data to the newly added bricks.
  • This might take some time.
  • Wait for the heal to complete before moving to the next step.

Step 6:

Remove brick2 and brick4, leaving the brick5<->brick7 replica pair in the first replica group and the brick6<->brick8 replica pair in the second replica group.

Remove thickly-provisioned bricks (brick2 & brick4) with replica 2:

# gluster volume remove-brick dist-vol replica 2  node2.redhat.com:/thicklv1/brick2   node4.redhat.com:/thicklv1/brick4  force
# gluster volume info dist-vol
Volume Name: dist-vol
Type: Distributed-Replicate
Volume ID: a4c24243-c981-40ef-8400-009c943cb698
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1.redhat.com:/thinlv1/brick5          |    -----   Replica 1
Brick2: node2.redhat.com:/thinlv1/brick7
Brick3: node3.redhat.com:/thinlv1/brick6       |    -----   Replica 2
Brick4: node4.redhat.com:/thinlv1/brick8
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
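After the final remove-brick, it is worth confirming that the volume is healthy, that no heals are pending, and that every remaining brick is backed by a thin LV (a prerequisite for the snapshot feature). The commands below are a suggested check; the lvs column output depends on your LVM version.

```shell
# Confirm the volume is started and all bricks are online
gluster volume status dist-vol

# Verify no pending heal entries remain on any brick
gluster volume heal dist-vol info

# On each node, confirm the brick LV is thinly provisioned:
# for thin LVs the Pool column is populated
lvs -o lv_name,pool_lv,data_percent
```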

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.