Enable Cache Tier
Add a new volume to each node of the cluster.
From the admin node, inside the working directory (cd cluster-ceph), run these commands:
ceph-deploy osd create node-1:vdX
ceph-deploy osd create node-2:vdX
ceph-deploy osd create node-3:vdX
Replace vdX with the right device name.
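The three ceph-deploy calls above can be generated in a loop. This is a sketch only: the node names match the ones above, the device name vdb is an illustrative assumption, and DRY_RUN=1 prints the commands instead of running them.

```shell
#!/bin/sh
# Sketch: one "ceph-deploy osd create" call per cluster node.
# DEVICE=vdb is an assumed device name; set DRY_RUN=0 to actually run.
DRY_RUN=1
DEVICE=vdb
cmds=""
for node in node-1 node-2 node-3; do
    cmd="ceph-deploy osd create ${node}:${DEVICE}"
    cmds="${cmds}${cmd}
"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"    # dry run: just show the command
    else
        $cmd           # real run: requires a working ceph-deploy setup
    fi
done
```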
Get the CRUSH map
To get the CRUSH map for your cluster, execute the following:
ceph osd getcrushmap -o {compiled-crushmap-filename}
Decompile the CRUSH map
To decompile a CRUSH map, execute the following:
crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
Modify the CRUSH map
root ssd {
    id -6
    alg straw
    hash 0
    item node-4 weight 4.00
}
rule ssd {
    ruleset 4
    type replicated
    min_size 0
    max_size 4
    step take ssd
    step chooseleaf firstn 0 type host
    step emit
}
Note: the IDs of the added entities may be different - take care to replace them so that every ID inside the map is unique. Now you can apply the modified CRUSH map.
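The note about unique IDs can be checked mechanically before compiling. This is a sketch: the map content below is a minimal stand-in, and on a real cluster you would point the grep at your {decompiled-crushmap-filename} instead.

```shell
#!/bin/sh
# Sketch: detect duplicate "id" values in an edited (decompiled) CRUSH map.
# /tmp/crushmap.txt with two roots is a minimal illustrative stand-in.
cat > /tmp/crushmap.txt <<'EOF'
root default {
    id -1
}
root ssd {
    id -6
}
EOF
# Extract every "id -N" token; uniq -d prints only values that repeat.
dups=$(grep -o 'id -[0-9]*' /tmp/crushmap.txt | sort | uniq -d)
if [ -z "$dups" ]; then
    echo "ids unique"
else
    echo "duplicate ids: $dups"
fi
```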
Compile the CRUSH map
To compile a CRUSH map, execute the following:
crushtool -c {decompiled-crushmap-filename} -o {compiled-crushmap-filename}
Set the new CRUSH map
To set the CRUSH map for your cluster, execute the following:
ceph osd setcrushmap -i {compiled-crushmap-filename}
Note: you can disable updating the CRUSH map on daemon start by adding the following to ceph.conf:
[osd]
osd crush update on start = false
Check the cluster status:
ceph status
ceph osd tree
Create the pool 'ssdcache'
Create the pool "ssdcache" and assign it the "ssd" rule (ruleset 4 in the map above):
ceph osd pool create ssdcache <num_pg>
ceph osd pool set ssdcache crush_ruleset <id>
Check the results:
ceph osd dump
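For choosing <num_pg>, a common rule of thumb is (number of OSDs * 100) / replica count, rounded up to the next power of two. The sketch below computes that; the 4 OSDs and 2 replicas are illustrative assumptions, not values from this cluster.

```shell
#!/bin/sh
# Sketch: pick a PG count using the (OSDs * 100) / replicas guideline,
# rounded up to the next power of two. Inputs are illustrative assumptions.
num_osds=4
pool_size=2
raw=$(( num_osds * 100 / pool_size ))   # 200
pg=1
while [ "$pg" -lt "$raw" ]; do
    pg=$(( pg * 2 ))                    # double until >= raw
done
echo "$pg"                              # next power of two >= 200
```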
Create the cache tier
Associate a backing storage pool with a cache pool:
ceph osd tier add {storagepool} {cachepool}
To set the cache mode, execute the following:
ceph osd tier cache-mode {cachepool} {cache-mode}
Set the overlay of the storage pool, so that all I/O is now routed to the cache pool:
ceph osd tier set-overlay {storagepool} {cachepool}
Configure the cache tier
There are several parameters that control the sizing of the cache tier. ‘target_max_bytes’ and ‘target_max_objects’ set the maximum size of the cache tier in bytes or in number of objects; when either of these is reached, the cache tier is ‘full’. ‘cache_target_dirty_ratio’ controls when flushing starts: when the fraction of dirty data (in bytes or objects) reaches this ratio, the tiering agent begins to flush. ‘cache_target_full_ratio’ works the same way, but triggers eviction instead.
ceph osd pool set {cache-pool-name} target_max_bytes {#bytes}
ceph osd pool set {cache-pool-name} target_max_objects {#objects}
ceph osd pool set {cache-pool-name} cache_target_dirty_ratio {0.0..1.0}
ceph osd pool set {cache-pool-name} cache_target_full_ratio {0.0..1.0}
There are other cache-tiering parameters, such as ‘cache_min_flush_age’ and ‘cache_min_evict_age’. These are optional; set them as needed.
Configure the cache tier as follows:
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool hit_set_count 1
ceph osd pool set cache-pool hit_set_period 180
ceph osd pool set cache-pool target_max_bytes 1000000
ceph osd pool set cache-pool target_max_objects 10000
ceph osd pool set cache-pool cache_min_flush_age 180
ceph osd pool set cache-pool cache_min_evict_age 180
ceph osd pool set cache-pool cache_target_dirty_ratio .01
ceph osd pool set cache-pool cache_target_full_ratio .02
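With the example settings above (target_max_bytes of 1000000, cache_target_dirty_ratio of .01, cache_target_full_ratio of .02), the agent starts flushing at 1% of the target size and evicting at 2%. A quick arithmetic sketch:

```shell
#!/bin/sh
# Sketch: compute the byte thresholds implied by the example settings.
target_max_bytes=1000000
# POSIX shell has no floats, so express the ratios as percentages.
dirty_pct=1    # cache_target_dirty_ratio = 0.01
full_pct=2     # cache_target_full_ratio  = 0.02
flush_at=$(( target_max_bytes * dirty_pct / 100 ))
evict_at=$(( target_max_bytes * full_pct / 100 ))
echo "flush starts at ${flush_at} bytes, eviction at ${evict_at} bytes"
```

So with this (deliberately tiny) target size, flushing kicks in after only 10000 bytes of dirty data, which is why the 500 MB test file below moves out of the cache so quickly.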
Create a temporary file of 500 MB that we will write to the rbd pool, which will initially land in the cache-pool:
dd if=/dev/zero of=/tmp/file1 bs=1M count=500
Put this file inside the rbd pool:
rados -p rbd put object1 /tmp/file1
After 180 seconds (as we have configured cache_min_evict_age to 180 seconds), the cache-tiering agent will migrate object1 from the cache-pool to the rbd pool; object1 will be removed from the cache-pool:
rados -p <storage-pool> ls
rados -p <cache-pool> ls
date
After 3 minutes, the data is migrated from the cache pool to the backing pool.
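Rather than re-running the two rados ls commands by hand, the wait can be scripted with a generic polling helper. This is a sketch: on a real cluster the condition would be something like `rados -p rbd ls | grep -q object1`; here the command is just a parameter so the helper stays self-contained.

```shell
#!/bin/sh
# Sketch: retry a command until it succeeds or the retry budget runs out.
wait_for() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        if "$@"; then
            return 0            # condition met
        fi
        tries=$(( tries - 1 ))
        sleep 1                 # wait before the next poll
    done
    return 1                    # timed out
}
# Example: `true` always succeeds, so this returns immediately.
wait_for 3 true && echo "condition met"
```

On the cluster you would call it as, e.g., wait_for 200 sh -c 'rados -p rbd ls | grep -q object1' to wait out the 180-second eviction window.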