Thursday, May 24, 2018

How to set up ceph-volume OSDs with ceph-ansible

LVM Overview

  • https://www.digitalocean.com/community/tutorials/an-introduction-to-lvm-concepts-terminology-and-operations
  • https://docs.ansible.com/ansible/2.3/lvol_module.html

For an LVM OSD, we need to create a physical volume, then a volume group on top of it, and finally a set of logical volumes.
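
ceph-volume performs these steps for you when handed a raw device, but the equivalent manual LVM commands look roughly like this (a sketch on an empty disk; the VG and LV names are made up for illustration):

$ sudo pvcreate /dev/vdb                                        # initialize the disk as a physical volume
$ sudo vgcreate ceph-vg-example /dev/vdb                        # create a volume group on it
$ sudo lvcreate -l 100%FREE -n osd-data-example ceph-vg-example # carve the whole VG into one logical volume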

Setting up OSDs in ceph-ansible:

Under ceph-ansible/group_vars/osds.yml:
osd_scenario: lvm

lvm_volumes:
  - data: /dev/vdb
    db: /dev/vde1
    wal: /dev/vde2

  - data: /dev/vdc
    db: /dev/vde3
    wal: /dev/vde4

  - data: /dev/vdd
    db: /dev/vde5
    wal: /dev/vde6
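
lvm_volumes also accepts logical volumes you have created yourself; in that form each entry names the LV and its volume group separately via the data_vg/db_vg/wal_vg keys. A sketch with hypothetical VG/LV names:

lvm_volumes:
  - data: osd-data-lv1
    data_vg: data-vg1
    db: db-lv1
    db_vg: journal-vg1
    wal: wal-lv1
    wal_vg: journal-vg1

With group_vars in place, deployment is typically a matter of copying site.yml.sample to site.yml at the top of the ceph-ansible checkout and running it against your inventory (the inventory file name here is an assumption):

$ cp site.yml.sample site.yml
$ ansible-playbook -i hosts site.yml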


After the playbook completes, each data device (vdb, vdc, vdd) carries a new ceph-* volume group with a single osd-block logical volume, while the db/wal partitions on vde are consumed as-is:

$ sudo lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                                                                                                   254:0    0   10G  0 disk
`-vda1                                                                                                254:1    0   10G  0 part /
vdb                                                                                                   254:16   0  3.7T  0 disk
`-ceph--6f4ea188--1115--4b01--98ac--2c0d9d5033f2-osd--block--975e8e76--6db6--4331--becb--0ec800ce6700 253:0    0  3.7T  0 lvm
vdc                                                                                                   254:32   0  3.7T  0 disk
`-ceph--26bd1be1--1f95--4389--b1b4--3a2416e38419-osd--block--ffff5f01--8925--44c9--9019--5c43e7323f14 253:1    0  3.7T  0 lvm
vdd                                                                                                   254:48   0  3.7T  0 disk
`-ceph--b2cfeafb--3c45--4f0b--8c31--fba4203877da-osd--block--d9bf9af9--63ea--48eb--9d24--f27cdb868ddd 253:2    0  3.7T  0 lvm
vde                                                                                                   254:64   0  335G  0 disk
|-vde1                                                                                                254:65   0  100G  0 part
|-vde2                                                                                                254:66   0    5G  0 part
|-vde3                                                                                                254:67   0  100G  0 part
|-vde4                                                                                                254:68   0    5G  0 part
|-vde5                                                                                                254:69   0  100G  0 part
`-vde6                                                                                                254:70   0    5G  0 part
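
Beyond lsblk, ceph-volume itself can map each OSD back to its devices. Run on the OSD host, it reports the block, db, and wal device for every OSD it created:

$ sudo ceph-volume lvm list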


$ sudo lvm
lvm> lvdisplay
  --- Logical volume ---
  LV Path                /dev/ceph-26bd1be1-1f95-4389-b1b4-3a2416e38419/osd-block-ffff5f01-8925-44c9-9019-5c43e7323f14
  LV Name                osd-block-ffff5f01-8925-44c9-9019-5c43e7323f14
  VG Name                ceph-26bd1be1-1f95-4389-b1b4-3a2416e38419
  LV UUID                8ejEk9-Nmxk-CJcu-qrmP-krB6-RNcV-7tJmx7
  LV Write Access        read/write
  LV Creation host, time osd-f-6-1067165, 2018-05-23 11:13:02 +0530
  LV Status              available
  # open                 4
  LV Size                3.64 TiB
  Current LE             953861
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
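
lvdisplay is verbose; for a one-line-per-LV summary, plain lvs works too. And as a final sanity check, ceph osd tree (from a node with the admin keyring) confirms the new OSDs are up and in:

$ sudo lvs
$ sudo ceph osd tree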
