GlusterFS Filesystem, Part 3

The previous two parts covered creating and mounting volumes. Once a distributed store holds data and runs short of space, the most pressing task is expansion, which brings in adding and removing bricks and the related volume operations. How is it done? Let's walk through it using the Distributed volume from the earlier parts as an example:

1. Adding bricks to an existing volume

First, create and start a two-brick distributed volume as the baseline:

# gluster volume create test-volume 192.168.1.66:/data/test-volume/ 192.168.1.64:/data/test-volume/
# gluster volume start test-volume
volume start: test-volume: success
# gluster volume info test-volume
 
Volume Name: test-volume
Type: Distribute
Volume ID: 2a321664-e46a-4198-8cf7-e7672a1f3441
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.66:/data/test-volume
Brick2: 192.168.1.64:/data/test-volume

Note: when adding bricks to a distributed-replicated or distributed-striped volume, the number of bricks added must be a multiple of the replica or stripe count. For example, for a distributed-replicated volume with replica 2, bricks must be added 2, 4, 6, 8, ... at a time. Since this walkthrough uses a plain distributed volume, that restriction does not come into play here.
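As a small illustration of that rule, a guard like the following can reject an invalid brick count before calling add-brick (the replica-2 volume name and brick paths here are hypothetical):

```shell
#!/bin/sh
# Hypothetical example: refuse to add bricks to a replica-2 volume unless
# the number of new bricks is a multiple of the replica count.
REPLICA=2
NEW_BRICKS="192.168.1.244:/data/rep-volume 192.168.1.63:/data/rep-volume"

count=$(echo "$NEW_BRICKS" | wc -w)
if [ $((count % REPLICA)) -ne 0 ]; then
    echo "error: $count brick(s) is not a multiple of replica $REPLICA" >&2
    exit 1
fi
# gluster volume add-brick rep-volume $NEW_BRICKS
echo "brick count $count is valid for replica $REPLICA"
```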

Add the new bricks:

# gluster volume add-brick test-volume 192.168.1.244:/data/test-volume
volume add-brick: success
# gluster volume info test-volume
 
Volume Name: test-volume
Type: Distribute
Volume ID: 2a321664-e46a-4198-8cf7-e7672a1f3441
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 192.168.1.66:/data/test-volume
Brick2: 192.168.1.64:/data/test-volume
Brick3: 192.168.1.244:/data/test-volume
# gluster volume add-brick test-volume 192.168.1.63:/data/test-volume
volume add-brick: success
# gluster volume info test-volume
 
Volume Name: test-volume
Type: Distribute
Volume ID: 2a321664-e46a-4198-8cf7-e7672a1f3441
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.66:/data/test-volume
Brick2: 192.168.1.64:/data/test-volume
Brick3: 192.168.1.244:/data/test-volume
Brick4: 192.168.1.63:/data/test-volume

Mount from a client and confirm that the capacity has grown:

# mount -t glusterfs 192.168.1.63:test-volume /data/test-volume
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              20G  1.5G   17G   9% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             194M   27M  158M  15% /boot
/dev/mapper/vg_web-LogVol00
                      251G   13G  226G   6% /data
/dev/sda3              20G  710M   18G   4% /home
192.168.1.63:dr       683G   63G  586G  10% /data/v3_upload
192.168.1.63:test-volume
                      2.4T  185G  2.1T   9% /data/test-volume

Rebalance the expanded volume:

# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Starting rebalance on volume test-volume has been successful.
ID: a96bad27-2238-40a1-9cd3-d2be3293e7ac
# gluster volume rebalance test-volume status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0            completed               0.00
                            192.168.1.66                0        0Bytes             0             0             0            completed               0.00
                           192.168.1.244                0        0Bytes             0             0             0            completed               0.00
                            192.168.1.64                0        0Bytes             0             0             0            completed               0.00
volume rebalance: test-volume: success: 
# gluster volume rebalance test-volume stop
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0            completed               0.00
                            192.168.1.66                0        0Bytes             0             0             0            completed               0.00
                           192.168.1.244                0        0Bytes             0             0             0            completed               0.00
                            192.168.1.64                0        0Bytes             0             0             0            completed               0.00
volume rebalance: test-volume: success: rebalance process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check rebalance process for completion before doing any further brick related tasks on the volume.
# gluster volume rebalance test-volume fix-layout start
volume rebalance: test-volume: success: Starting rebalance on volume test-volume has been successful.
ID: 1bb5f8cb-3543-42c9-9fa2-579937bc0583
# gluster volume rebalance test-volume status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0 fix-layout completed               0.00
                            192.168.1.66                0        0Bytes             0             0             0 fix-layout completed               0.00
                           192.168.1.244                0        0Bytes             0             0             0 fix-layout completed               0.00
                            192.168.1.64                0        0Bytes             0             0             0 fix-layout completed               0.00
volume rebalance: test-volume: success:
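Per the warning printed by `rebalance stop` above, further brick operations should wait until rebalance has fully finished. A minimal sketch of such a check (the parsing assumes the status table layout shown above, with "completed" / "in progress" in the status column):

```shell
#!/bin/sh
# Sketch: decide from `gluster volume rebalance <vol> status` output whether
# every node has finished, before doing further brick-related tasks.
rebalance_done() {
    # finished when no node reports "in progress" and at least one "completed"
    ! echo "$1" | grep -q 'in progress' && echo "$1" | grep -q 'completed'
}

status=$(gluster volume rebalance test-volume status 2>/dev/null)
if rebalance_done "$status"; then
    echo "rebalance finished; safe to continue with brick operations"
else
    echo "rebalance still running; wait before touching bricks"
fi
```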

2. Removing bricks from an existing volume

# gluster volume remove-brick test-volume 192.168.1.244:/data/test-volume/  start
volume remove-brick start: success
ID: b8e5eb80-8940-4511-a491-71ee59000f0c
# gluster volume remove-brick test-volume 192.168.1.244:/data/test-volume/  status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                           192.168.1.244                0        0Bytes             0             0             0 fix-layout completed               0.00
# gluster volume remove-brick test-volume 192.168.1.244:/data/test-volume/  commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
# gluster volume remove-brick test-volume 192.168.1.63:/data/test-volume/  start
volume remove-brick start: success
ID: 649d23e3-3864-454c-a7d6-302910b30a9c
# gluster volume remove-brick test-volume 192.168.1.63:/data/test-volume/  status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0            completed               0.00
# gluster volume remove-brick test-volume 192.168.1.63:/data/test-volume/  commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
# gluster volume rebalance test-volume fix-layout start
volume rebalance: test-volume: success: Starting rebalance on volume test-volume has been successful.
ID: 1cd7635b-7f80-4c23-8bb7-55253f55fcf2
# gluster volume rebalance test-volume start
volume rebalance: test-volume: success: Starting rebalance on volume test-volume has been successful.
ID: 5b514555-cb6c-4e7d-be69-9e21da4db000
# gluster volume rebalance test-volume status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                            192.168.1.66                0        0Bytes             0             0             0            completed               0.00
                            192.168.1.64                0        0Bytes             0             0             0            completed               0.00
volume rebalance: test-volume: success:
# gluster volume info test-volume
 
Volume Name: test-volume
Type: Distribute
Volume ID: 2a321664-e46a-4198-8cf7-e7672a1f3441
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.66:/data/test-volume
Brick2: 192.168.1.64:/data/test-volume
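The start → status → commit sequence above lends itself to a small script. The following is a rough sketch, not an official tool; it assumes `remove-brick status` prints "completed" once migration off the brick is done:

```shell
#!/bin/sh
# Sketch: drain one brick and remove it only after its data has migrated.
VOL=test-volume
BRICK=192.168.1.244:/data/test-volume

gluster volume remove-brick "$VOL" "$BRICK" start

# poll until the status table reports "completed" for the draining node
until gluster volume remove-brick "$VOL" "$BRICK" status | grep -q 'completed'; do
    sleep 10
done

# committing before migration completes risks data loss, hence the wait above
echo y | gluster volume remove-brick "$VOL" "$BRICK" commit
```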

3. Migrating Data / Replacing Bricks

Syntax: volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> {start [force]|pause|abort|status|commit [force]}

# gluster volume replace-brick ds 192.168.1.66:/data/ds/ 192.168.1.66:/data/ds-replace/  start
# gluster volume replace-brick ds 192.168.1.66:/data/ds/ 192.168.1.66:/data/ds-replace/  status
# gluster volume replace-brick ds 192.168.1.66:/data/ds/ 192.168.1.66:/data/ds-replace/  commit

During testing the data did migrate, but the whole cluster then behaved oddly (possibly specific to my environment): client mounts hung, `gluster volume info` returned nothing, and only restarting glusterd brought everything back to normal. This deserves a retest in a clean environment when the chance arises:

[root@YQD-Intranet-DB-NO1 data]# ll -h ds/
total 8.0K
-rw-r--r-- 2 root root 55 Nov 19  2013 visioSN.txt
[root@YQD-Intranet-DB-NO1 data]# ll -h ds-replace/
total 8.0K
-rw-r--r-- 2 root root 55 Nov 19  2013 visioSN.txt
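Given the flakiness observed above, a simple extra sanity check before `commit` is to compare the file listings of the old and new brick directories, much like the two `ll` listings just shown. A hypothetical check (run on the brick host; it compares names only, not contents):

```shell
#!/bin/sh
# Hypothetical sanity check: confirm the replacement brick directory holds
# the same file names as the source brick before committing replace-brick.
SRC=/data/ds
DST=/data/ds-replace

if [ "$(ls "$SRC")" = "$(ls "$DST")" ]; then
    echo "listings match; reasonable to commit"
else
    echo "listings differ; investigate before commit" >&2
    exit 1
fi
```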

4. Setting disk quotas:

# gluster volume quota
Usage: volume quota <VOLNAME> {enable|disable|list [<path> ...]|remove <path>|default-soft-limit <percent>} |
volume quota <VOLNAME> {limit-usage <path> <size> [<percent>]} |
volume quota <VOLNAME> {alert-time|soft-timeout|hard-timeout} {<time>}
# gluster volume quota rs enable
volume quota : success
# gluster volume quota rs disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
volume quota : success

1) Quota limits the space of a directory under the mount point, e.g. /mnt/glusterfs/data, not the space of the brick directories that make up the volume, e.g. /exp2, /exp3.
2) The setting gluster volume set test-volume features.quota-timeout is client-side: it controls when a client re-reads the quota configuration. Quota limits are configured on the server side, but enforcement happens at the mount point on the client, so the client must be told when to re-read that configuration from the server.
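Putting the above together, a short example of actually setting a limit (the volume name rs and the 10GB figure are illustrative; per note 1, the path is relative to the volume root, not a brick path):

```shell
#!/bin/sh
# Illustrative quota setup; volume name, directory, and limit are assumptions.
VOL=rs
DIR=/data        # path under the mount point, not a brick path
LIMIT=10GB

# gluster expects the size with a unit suffix such as KB/MB/GB/TB
case "$LIMIT" in
    *KB|*MB|*GB|*TB) : ;;
    *) echo "limit '$LIMIT' needs a unit, e.g. 10GB" >&2; exit 1 ;;
esac

gluster volume quota "$VOL" enable
gluster volume quota "$VOL" limit-usage "$DIR" "$LIMIT"
gluster volume quota "$VOL" list "$DIR"
```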

A few problems came up over the course of testing, and it is unclear whether they stem from the test servers' performance or from configuration; they warrant further investigation in future deployments.
