69 changes: 38 additions & 31 deletions tiflash/create-tiflash-replicas.md
@@ -233,47 +233,50 @@ When configuring replicas, if you need to distribute TiFlash replicas to multiple

Note that the `flash.proxy.labels` configuration in earlier versions cannot handle special characters in availability zone names correctly. It is recommended to use `server.labels` in `learner_config` to configure the name of an availability zone.
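
For example, here is a minimal TiUP topology sketch for setting availability zone labels through `learner_config` (the hosts and zone names are illustrative placeholders):

```yaml
tiflash_servers:
  # One TiFlash node in each availability zone; PD uses these labels
  # when spreading a table's TiFlash replicas across zones.
  - host: 172.16.5.81
    learner_config:
      server.labels:
        zone: "z1"
  - host: 172.16.5.85
    learner_config:
      server.labels:
        zone: "z2"
```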

-2. After starting a cluster, specify the labels when creating replicas.
+2. After starting a cluster, specify the number of TiFlash replicas for high availability. The syntax is as follows:

```sql
-ALTER TABLE table_name SET TIFLASH REPLICA count LOCATION LABELS location_labels;
+ALTER TABLE table_name SET TIFLASH REPLICA count;
```

For example:

```sql
-ALTER TABLE t SET TIFLASH REPLICA 2 LOCATION LABELS "zone";
+ALTER TABLE t SET TIFLASH REPLICA 2;
```

-3. PD schedules the replicas based on the labels. In this example, PD respectively schedules two replicas of the table `t` to two available zones. You can use pd-ctl to view the scheduling.
+3. PD schedules the replicas of the table `t` to different availability zones based on the `server.labels` in the TiFlash node's `learner_config` and the number (`count`) of the table's replicas, ensuring availability. For more information, see [Schedule Replicas by Topology Labels](https://docs.pingcap.com/tidb/stable/schedule-replicas-by-topology-labels/). You can use the following SQL statements to verify the distribution of a table's Regions across TiFlash nodes:

-```shell
-> tiup ctl:v<CLUSTER_VERSION> pd -u http://<PD_ADDRESS>:2379 store
-
-...
-"address": "172.16.5.82:23913",
-"labels": [
-  { "key": "engine", "value": "tiflash"},
-  { "key": "zone", "value": "z1" }
-],
-"region_count": 4,
-
-...
-"address": "172.16.5.81:23913",
-"labels": [
-  { "key": "engine", "value": "tiflash"},
-  { "key": "zone", "value": "z1" }
-],
-"region_count": 5,
-...
-
-"address": "172.16.5.85:23913",
-"labels": [
-  { "key": "engine", "value": "tiflash"},
-  { "key": "zone", "value": "z2" }
-],
-"region_count": 9,
-...
-```
+```sql
+-- Non-partitioned table
+SELECT table_id, p.store_id, address, COUNT(p.region_id)
+FROM
+    information_schema.tikv_region_status r,
+    information_schema.tikv_region_peers p,
+    information_schema.tikv_store_status s
+WHERE
+    r.db_name = 'test'
+    AND r.table_name = 'table_to_check'
+    AND r.region_id = p.region_id
+    AND p.store_id = s.store_id
+    AND JSON_EXTRACT(s.label, '$[0].value') = 'tiflash'
+GROUP BY table_id, p.store_id, address;
+
+-- Partitioned table
+SELECT table_id, r.partition_name, p.store_id, address, COUNT(p.region_id)
+FROM
+    information_schema.tikv_region_status r,
+    information_schema.tikv_region_peers p,
+    information_schema.tikv_store_status s
+WHERE
+    r.db_name = 'test'
+    AND r.table_name = 'table_to_check'
+    AND r.partition_name LIKE 'p202312%'
+    AND r.region_id = p.region_id
+    AND p.store_id = s.store_id
+    AND JSON_EXTRACT(s.label, '$[0].value') = 'tiflash'
+GROUP BY table_id, r.partition_name, p.store_id, address
+ORDER BY table_id, r.partition_name, p.store_id;
+```
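
In addition, a quick way to check overall replication progress is the `information_schema.tiflash_replica` table; a minimal sketch, reusing the `test.table_to_check` names from the queries above:

```sql
-- Shows the configured replica count, whether the replicas are available,
-- and the replication progress (1 means fully replicated).
SELECT TABLE_SCHEMA, TABLE_NAME, REPLICA_COUNT, AVAILABLE, PROGRESS
FROM information_schema.tiflash_replica
WHERE TABLE_SCHEMA = 'test' AND TABLE_NAME = 'table_to_check';
```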

<CustomContent platform="tidb">
@@ -283,3 +286,7 @@ For more information about scheduling replicas by using labels, see [Schedule Replicas by Topology Labels](https://docs.pingcap.com/tidb/stable/schedule-replicas-by-topology-labels/).
TiFlash supports configuring the replica selection strategy for different zones. For more information, see [`tiflash_replica_read`](/system-variables.md#tiflash_replica_read-new-in-v730).
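
For example, a minimal sketch of switching the strategy so that queries prefer TiFlash replicas close to the TiDB node (the `closest_adaptive` value is taken from the system-variable documentation):

```sql
-- Prefer TiFlash replicas in the same zone as the TiDB node, and fall
-- back to all replicas when the closest ones cannot serve the query.
SET GLOBAL tiflash_replica_read = 'closest_adaptive';
```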

</CustomContent>

+> **Note:**
+>
+> In the syntax `ALTER TABLE table_name SET TIFLASH REPLICA count LOCATION LABELS location_labels;`, if you specify multiple labels for `location_labels`, TiDB cannot parse them correctly to set placement rules. Therefore, do not use `LOCATION LABELS` to configure TiFlash replicas.
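
As an illustration of the form to avoid (the table name and labels are placeholders):

```sql
-- Avoid: multiple LOCATION LABELS are not parsed correctly.
ALTER TABLE t SET TIFLASH REPLICA 2 LOCATION LABELS "zone", "host";
-- Use the plain form instead, and configure zones via server.labels.
ALTER TABLE t SET TIFLASH REPLICA 2;
```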