Commit 17bc54d

Merge pull request #661 from fujitatomoya/remove-redundant-debug-description
remove redundant description for debug procedure, instead using link.
2 parents 633a169 + deb31fb commit 17bc54d

File tree

5 files changed: +272 -502 lines changed
docs/advanced/debug.md

Lines changed: 105 additions & 28 deletions
@@ -3,67 +3,88 @@ title: Enable Kubectl logs/exec to debug pods on the edge
sidebar_position: 3
---

> Note for Helm deployments:
> - Stream certificates are generated automatically and the CloudStream feature is enabled by default, so Steps 1-3 can be skipped unless customization is needed.
> - Step 4 is handled by the iptablesmanager component by default, so no manual operations are needed. Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
> - If CloudCore is deployed in a container (the default), the operations in Steps 5-6 can also be skipped.

1. Make sure you can find the kubernetes `ca.crt` and `ca.key` files. If you set up your kubernetes cluster with `kubeadm`, those files will be in the `/etc/kubernetes/pki/` directory.

    ```shell
    ls /etc/kubernetes/pki/
    ```
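As an optional sanity check (not part of the original procedure), you can fail early if either CA file is missing. `PKI_DIR` defaults to the kubeadm path and is the only assumption here:

```shell
# Optional sanity check: report any missing CA file. PKI_DIR defaults to the
# kubeadm location; override it if your cluster keeps certs elsewhere.
PKI_DIR="${PKI_DIR:-/etc/kubernetes/pki}"
missing=0
for f in ca.crt ca.key; do
  if [ ! -f "$PKI_DIR/$f" ]; then
    echo "missing: $PKI_DIR/$f"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "CA files present in $PKI_DIR"
fi
```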

2. Set the `CLOUDCOREIPS` environment variable to specify the IP address of CloudCore, or a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with CloudCore.

    ```bash
    export CLOUDCOREIPS="192.168.0.139"
    ```

    (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again.) Check the environment variable with the following command:

    ```shell
    echo $CLOUDCOREIPS
    ```
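Because a new terminal will not inherit the variable, a defensive sketch like the following (not part of the official docs; `192.168.0.139` is the example value from above) aborts with a clear message if the variable was lost:

```shell
# Defensive check before continuing in a given terminal: abort with a clear
# message if CLOUDCOREIPS is not set (a new terminal will not inherit it).
export CLOUDCOREIPS="192.168.0.139"   # example value from above
: "${CLOUDCOREIPS:?CLOUDCOREIPS is not set; export it before continuing}"
echo "using CloudCore at $CLOUDCOREIPS"
```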

3. Generate the certificates for **CloudStream** on the cloud node. Since the generation script is not located in `/etc/kubeedge/`, copy it from the cloned GitHub repository.

    Switch to the root user:

    ```shell
    sudo su
    ```

    Copy the certificate generation script from the original cloned repository:

    ```shell
    cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
    ```

    Change directory to the kubeedge directory:

    ```shell
    cd /etc/kubeedge/
    ```

    Generate the certificates with **certgen.sh**:

    ```bash
    /etc/kubeedge/certgen.sh stream
    ```

4. Set iptables rules on the host. (This procedure should be executed on every node where an api-server is deployed; in this case, it is the control-plane node. Execute these commands as the root user.)

    **Note:** First, get the configmap containing all the CloudCore IPs and tunnel ports:

    ```bash
    kubectl get cm tunnelport -n kubeedge -o yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      annotations:
        tunnelportrecord.kubeedge.io: '{"ipTunnelPort":{"192.168.1.16":10350, "192.168.1.17":10351},"port":{"10350":true, "10351":true}}'
      creationTimestamp: "2021-06-01T04:10:20Z"
    ...
    ```

    Then set the iptables rules for the multiple CloudCore instances on every node where the api-server runs. The CloudCore IPs and tunnel ports should be obtained from the configmap above:

    ```bash
    iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
    iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to 192.168.1.16:10003
    iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
    ```
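As an illustrative sketch (not part of the official docs), the `ipTunnelPort` map in the annotation above can be turned into the per-instance DNAT rules mechanically. Here `ANNOTATION` is a stand-in for the value of the `tunnelportrecord.kubeedge.io` annotation fetched from the configmap; the snippet only prints the rules, it does not apply them:

```shell
# Sketch: derive one DNAT rule per CloudCore instance from the tunnelport
# annotation. ANNOTATION stands in for the annotation value from the configmap.
ANNOTATION='{"ipTunnelPort":{"192.168.1.16":10350, "192.168.1.17":10351},"port":{"10350":true, "10351":true}}'
RULES=$(echo "$ANNOTATION" \
  | sed -n 's/.*"ipTunnelPort":{\([^}]*\)}.*/\1/p' \
  | tr ',' '\n' \
  | sed 's/[" ]//g' \
  | awk -F: '{printf "iptables -t nat -A OUTPUT -p tcp --dport %s -j DNAT --to %s:10003\n", $2, $1}')
echo "$RULES"
```

Review the printed rules against the configmap before running them as root.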

    If you are unsure about the current iptables settings and want to clean all of them (if you set up iptables wrongly, it will block you out of your `kubectl logs` feature), the following command can be used to clean up iptables:

    ```shell
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    ```

5. Update the `cloudcore` configuration to enable **cloudStream**. (The new version has this feature enabled by default in the cloud, so this configuration can be skipped.)

    If `cloudcore` is installed as a binary, you can directly modify `/etc/kubeedge/config/cloudcore.yaml` with an editor.
    If `cloudcore` is running as a kubernetes deployment, you can use `kubectl edit cm -n kubeedge cloudcore` to update `cloudcore`'s ConfigMap.
@@ -81,10 +102,10 @@ sidebar_position: 3
       tunnelPort: 10004
    ```

    Update the `edgecore` configuration to enable **edgeStream**.

    This modification needs to be done on every edge system where `edgecore` runs, updating `/etc/kubeedge/config/edgecore.yaml`.
    Make sure the `server` IP address is set to the CloudCore IP (the same as $CLOUDCOREIPS).

    ```yaml
    edgeStream:
@@ -98,22 +119,78 @@ sidebar_position: 3
      writeDeadline: 15
    ```

6. Restart all the CloudCore and EdgeCore processes to apply the **Stream** configuration.

    ```shell
    sudo su
    ```

    If CloudCore is running in process mode:

    ```shell
    pkill cloudcore
    nohup cloudcore > cloudcore.log 2>&1 &
    ```

    If CloudCore is running in Kubernetes deployment mode:

    ```shell
    kubectl -n kubeedge rollout restart deployment cloudcore
    ```

    Restart EdgeCore:

    ```shell
    systemctl restart edgecore.service
    ```

    If restarting EdgeCore fails, check whether that is caused by `kube-proxy`, and kill it. **kubeedge** rejects it by default; we use a replacement called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md).

    **Note:** It is important to avoid `kube-proxy` being deployed on the edge node. There are two methods to achieve this:

    - **Method 1:** Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:

      ```yaml
      spec:
        template:
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                  - matchExpressions:
                    - key: node-role.kubernetes.io/edge
                      operator: DoesNotExist
      ```

      or just run the following command directly in the shell window:

      ```shell
      kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
      ```

    - **Method 2:** If you still want to run `kube-proxy`, instruct **edgecore** not to check the environment by adding an environment variable in `edgecore.service`:

      ```shell
      sudo vi /etc/kubeedge/edgecore.service
      ```

      Add the following line to the **edgecore.service** file:

      ```shell
      Environment="CHECK_EDGECORE_ENVIRONMENT=false"
      ```

      The final file should look like this:

      ```
      Description=edgecore.service

      [Service]
      Type=simple
      ExecStart=/root/cmd/ke/edgecore --logtostderr=false --log-file=/root/cmd/ke/edgecore.log
      Environment="CHECK_EDGECORE_ENVIRONMENT=false"

      [Install]
      WantedBy=multi-user.target
      ```
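Once CloudCore and EdgeCore have been restarted, the feature can be exercised end to end. The pod and namespace names below are placeholders for a workload scheduled on an edge node:

```shell
# Placeholders: substitute a real pod and namespace running on an edge node.
kubectl logs <pod-name> -n <namespace>
kubectl exec -it <pod-name> -n <namespace> -- sh
```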

docs/setup/install-with-binary.md

Lines changed: 1 addition & 1 deletion
@@ -143,7 +143,7 @@ make
The compiled kubeedge binaries will be put into the `_output/local/bin` directory.

## Deploy demo on edge nodes

After you start both `cloudcore` and `edgecore` successfully, you can run `kubectl get node` to check whether edgecore has registered to cloudcore successfully. The edge nodes will be in `Ready` status as below.

```shell
