Friday, May 25, 2018

What is the difference between rebalance and recovery in Ceph?

We set the norebalance and norecover flags on a Ceph cluster during a cluster modification such as adding/removing OSDs. Modifying the cluster's OSDs creates a new mapping of PGs.

A new PG mapping means:
  • The PG might move to a new primary
  • The PG might have a new secondary
  • A PG might have an all new primary and secondary
The first scenario means that the PG is hosted on a new OSD, so all of its data must move via PG backfill. Since it's a new primary, the secondary OSDs might have changed too. A new secondary means that backfill would start on the secondary OSDs as well.
So, setting norecover means that we don't do backfill and instead serve data from the old mappings (which might have become unavailable if those OSDs were down).
If we set norebalance, we would not backfill the secondary PGs and would instead live with fewer copies of the PGs.
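
These flags are set and cleared with 'ceph osd set' / 'ceph osd unset'. A minimal sketch of wrapping a maintenance window with them (flag names as discussed above):

$ sudo ceph osd set norebalance
$ sudo ceph osd set norecover
... add/remove OSDs ...
$ sudo ceph osd unset norecover
$ sudo ceph osd unset norebalance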


Thursday, May 24, 2018

How to set up ceph-volume OSDs with ceph-ansible

LVM Overview

  • https://www.digitalocean.com/community/tutorials/an-introduction-to-lvm-concepts-terminology-and-operations
  • https://docs.ansible.com/ansible/2.3/lvol_module.html

For an LVM OSD, we need to create a physical volume and a volume group, followed by a set of logical volumes.
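
For context, a minimal manual sketch of those three steps (the device and volume names here are only examples, not what ceph-ansible generates):

$ sudo pvcreate /dev/vdb
$ sudo vgcreate vg-example /dev/vdb
$ sudo lvcreate -n lv-example -l 100%FREE vg-example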

Setting up OSDs in ceph-ansible:

Under ceph-ansible/group_vars/osds.yml:
osd_scenario: lvm

lvm_volumes:
  - data: /dev/vdb
    db: /dev/vde1
    wal: /dev/vde2

  - data: /dev/vdc
    db: /dev/vde3
    wal: /dev/vde4

  - data: /dev/vdd
    db: /dev/vde5
    wal: /dev/vde6
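
With this in place, the OSDs are created by running the usual ceph-ansible playbook (the playbook and inventory names below are examples from a typical setup):

$ ansible-playbook -i hosts site.yml --limit osds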


You would get the following volumes after running Ansible:

$ sudo lsblk
NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                                                                                                   254:0    0   10G  0 disk
`-vda1                                                                                                254:1    0   10G  0 part /
vdb                                                                                                   254:16   0  3.7T  0 disk
`-ceph--6f4ea188--1115--4b01--98ac--2c0d9d5033f2-osd--block--975e8e76--6db6--4331--becb--0ec800ce6700 253:0    0  3.7T  0 lvm
vdc                                                                                                   254:32   0  3.7T  0 disk
`-ceph--26bd1be1--1f95--4389--b1b4--3a2416e38419-osd--block--ffff5f01--8925--44c9--9019--5c43e7323f14 253:1    0  3.7T  0 lvm
vdd                                                                                                   254:48   0  3.7T  0 disk
`-ceph--b2cfeafb--3c45--4f0b--8c31--fba4203877da-osd--block--d9bf9af9--63ea--48eb--9d24--f27cdb868ddd 253:2    0  3.7T  0 lvm
vde                                                                                                   254:64   0  335G  0 disk
|-vde1                                                                                                254:65   0  100G  0 part
|-vde2                                                                                                254:66   0    5G  0 part
|-vde3                                                                                                254:67   0  100G  0 part
|-vde4                                                                                                254:68   0    5G  0 part
|-vde5                                                                                                254:69   0  100G  0 part
`-vde6                                                                                                254:70   0    5G  0 part


$ sudo lvm
lvm> lvdisplay
  --- Logical volume ---
  LV Path                /dev/ceph-26bd1be1-1f95-4389-b1b4-3a2416e38419/osd-block-ffff5f01-8925-44c9-9019-5c43e7323f14
  LV Name                osd-block-ffff5f01-8925-44c9-9019-5c43e7323f14
  VG Name                ceph-26bd1be1-1f95-4389-b1b4-3a2416e38419
  LV UUID                8ejEk9-Nmxk-CJcu-qrmP-krB6-RNcV-7tJmx7
  LV Write Access        read/write
  LV Creation host, time osd-f-6-1067165, 2018-05-23 11:13:02 +0530
  LV Status              available
  # open                 4
  LV Size                3.64 TiB
  Current LE             953861
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
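
The OSD-to-LV mapping can also be inspected with ceph-volume itself:

$ sudo ceph-volume lvm list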

Wednesday, May 23, 2018

How to access a ceph.conf variable from a ceph-ansible play

There is a command, 'ceph-conf', that can be run inside an Ansible play to look up values from sections of ceph.conf on a remote host.

# ceph-conf -c /etc/ceph/ceph.conf --lookup  "bluestore block wal size" -s osd
2147483648
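
As a quick sketch, the same lookup can be run against remote hosts with an Ansible ad-hoc command (the host pattern and inventory here are placeholders):

$ ansible osds -i hosts -m command -a 'ceph-conf -c /etc/ceph/ceph.conf --lookup "bluestore block wal size" -s osd'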


Ref
====
http://docs.ceph.com/docs/master/man/8/ceph-conf/


Thursday, May 17, 2018

How to set config for all OSDs in a Ceph cluster

To set a config on all OSDs in a cluster:
$ sudo ceph tell osd.* injectargs -- --osd_recovery_max_active=8

To set a config on all Mons in a cluster:
$ sudo ceph tell mon.* injectargs -- --{conf option}
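
To confirm that the value took effect, the admin socket on the host running a given daemon can be queried (osd.0 here is just an example):

$ sudo ceph daemon osd.0 config get osd_recovery_max_active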


ceph-ansible: ceph-volume with lvm (Logical Volume Manager)


The command that creates a ceph-volume Bluestore OSD on a physical disk first creates a physical volume, followed by a volume group, and finally a logical volume inside that volume group. The volume group step looks like this:

$ vgcreate --force --yes ceph-7a1d0d7f-c6f4-468c-8fc4-3e74930ac1fd /dev/vdc
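
A minimal sketch of the ceph-volume invocation that drives these steps (the device name is only an example):

$ sudo ceph-volume lvm create --bluestore --data /dev/vdc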

Tuesday, May 15, 2018

Changing shell on Mac OS

$ chsh -s /bin/bash

The above command works well and only needs your password to continue.
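
To verify after opening a new terminal session:

$ echo $SHELL
/bin/bash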

Wednesday, May 9, 2018

How to find Ansible facts for a host by IP

You'd have to run the following command as is:

$ ansible 10.34.xx.xx -m setup -i 10.34.xx.xx,
- Do add the comma at the end; the trailing comma tells Ansible to treat the argument as an inline host list rather than an inventory file path.
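
To narrow the output down, the setup module also takes a filter argument (the filter pattern here is just an example):

$ ansible 10.34.xx.xx -m setup -i 10.34.xx.xx, -a 'filter=ansible_default_ipv4'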