diff --git a/apps/admission-webhook-k8s/admission-webhook.yaml b/apps/admission-webhook-k8s/admission-webhook.yaml index f09ec24850f1..39ecc4c14767 100644 --- a/apps/admission-webhook-k8s/admission-webhook.yaml +++ b/apps/admission-webhook-k8s/admission-webhook.yaml @@ -46,4 +46,4 @@ spec: - name: NSM_LABELS value: spiffe.io/spiffe-id:true - name: NSM_ENVS - value: NSM_LOG_LEVEL=TRACE + value: NSM_LIVENESS_CHECK_ENABLED=false \ No newline at end of file diff --git a/examples/features/annotated-namespace/README.md b/examples/features/annotated-namespace/README.md index 01e746ec0467..d12dfd762e8b 100644 --- a/examples/features/annotated-namespace/README.md +++ b/examples/features/annotated-namespace/README.md @@ -11,7 +11,7 @@ Make sure that you have completed steps from [basic](../../basic) or [memory](.. ## Run -Create test namespace and deploy NSE: +Create test namespace and deploy NSE and emoji client: ```bash kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/features/annotated-namespace?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 ``` diff --git a/examples/interdomain/nsm/README.md b/examples/interdomain/nsm/README.md index e92c0a2a2383..a87d86edfcac 100644 --- a/examples/interdomain/nsm/README.md +++ b/examples/interdomain/nsm/README.md @@ -7,8 +7,8 @@ This example simply show how can be deployed and configured two NSM on different Install NSM ```bash -kubectl --kubeconfig=$KUBECONFIG1 apply -k https://github.com/networkservicemesh/deployments-k8s/examples/interdomain/nsm/cluster1?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 -kubectl --kubeconfig=$KUBECONFIG2 apply -k https://github.com/networkservicemesh/deployments-k8s/examples/interdomain/nsm/cluster2?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 +kubectl --kubeconfig=$KUBECONFIG1 apply -k ./nsm/cluster1 +kubectl --kubeconfig=$KUBECONFIG2 apply -k ./nsm/cluster2 ``` Wait for admission-webhook-k8s: diff --git a/examples/interdomain/nsm/cluster1/kustomization.yaml b/examples/interdomain/nsm/cluster1/kustomization.yaml index f75614e750d8..c8328830fc96 100644 --- a/examples/interdomain/nsm/cluster1/kustomization.yaml +++ b/examples/interdomain/nsm/cluster1/kustomization.yaml @@ -10,7 +10,7 @@ bases: - https://github.com/networkservicemesh/deployments-k8s/apps/registry-k8s?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 - https://github.com/networkservicemesh/deployments-k8s/apps/registry-proxy-dns?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 - https://github.com/networkservicemesh/deployments-k8s/apps/nsmgr-proxy?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 -- https://github.com/networkservicemesh/deployments-k8s/apps/admission-webhook-k8s?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 +- ../../../../apps/admission-webhook-k8s resources: - namespace.yaml diff --git a/examples/interdomain/nsm/cluster2/kustomization.yaml b/examples/interdomain/nsm/cluster2/kustomization.yaml index d4b3df839ac8..492369d4d7a8 100644 --- a/examples/interdomain/nsm/cluster2/kustomization.yaml +++ b/examples/interdomain/nsm/cluster2/kustomization.yaml @@ -10,7 +10,7 @@ bases: - https://github.com/networkservicemesh/deployments-k8s/apps/registry-k8s?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 - https://github.com/networkservicemesh/deployments-k8s/apps/registry-proxy-dns?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 - https://github.com/networkservicemesh/deployments-k8s/apps/nsmgr-proxy?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32 -- 
https://github.com/networkservicemesh/deployments-k8s/apps/admission-webhook-k8s?ref=40eba2b9d535b7e3c0e3f7463af6227d863c5a32
+- ../../../../apps/admission-webhook-k8s

patchesStrategicMerge:
- patch-nsmgr-proxy.yaml
diff --git a/examples/interdomain/nsm_linkerd-1cluster/README.md b/examples/interdomain/nsm_linkerd-1cluster/README.md
new file mode 100644
index 000000000000..c1564eb33354
--- /dev/null
+++ b/examples/interdomain/nsm_linkerd-1cluster/README.md
@@ -0,0 +1,212 @@
# Test automatic scale from zero

This example shows how Linkerd can be integrated with one of the classic NSM examples (automatic scale from zero).

## Run

Install the Linkerd CLI:
```bash
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
```
Verify that the Linkerd CLI is installed:
```bash
linkerd version
```
If it is not found, add the linkerd binary directory to $PATH:
```bash
export PATH=$PATH:$HOME/.linkerd2/bin
```

Install Linkerd onto the cluster:
```bash
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
```

1. Create the test namespace:
```bash
kubectl create ns ns-nsm-linkerd
```

2. Select nodes to deploy the NSC and the supplier:
```bash
NODES=($(kubectl get nodes -o go-template='{{range .items}}{{ if not .spec.taints }}{{ .metadata.name }} {{end}}{{end}}'))
NSC_NODE=${NODES[0]}
SUPPLIER_NODE=${NODES[1]}
if [ "$SUPPLIER_NODE" == "" ]; then SUPPLIER_NODE=$NSC_NODE; echo "Only 1 node found, testing that pod is created on the same node is useless"; fi
```

3. Create a patch for the NSC:
```bash
cat > patch-nsc.yaml < patch-supplier.yaml < kustomization.yaml < proxy-web-local-linkerd-exp2.log
```

Install curl in the NSC container:
```bash
kubectl --kubeconfig=$KUBECONFIG2 exec deploy/web-local -n ns-nsm-linkerd -c cmd-nsc -- apk add curl
```
Verify connectivity:
```bash
kubectl --kubeconfig=$KUBECONFIG2 exec deploy/web-local -n ns-nsm-linkerd -c cmd-nsc -- curl -v greeting.ns-nsm-linkerd:8080
```
If something goes wrong, add a new rule to the PROXY_LOCAL iptables and try again.

If you are using a VM to run this example, you can use the Ksniff utility (https://github.com/eldadru/ksniff) to analyze the traffic with Wireshark later:
```bash
kubectl krew install sniff
```

```bash
kubectl sniff $PROXY_LOCAL -n ns-nsm-linkerd -c nse -o exp1/proxy-local-nse.pcap
```

Interdomain integration:
To check and adjust intercluster communication, start `web-svc` with the networkservicemesh client on the first cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG1 apply -k ./cluster1
```

Now you can repeat the same steps, but from the first cluster to the second:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/web -n ns-nsm-linkerd -c cmd-nsc -- curl -v greeting.ns-nsm-linkerd:8080
```

The last step is to run the emojivoto services on the second cluster, inject Linkerd into them, port-forward web-svc, and check that you can vote for your favorite emoji:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -k ./cluster2/emojivoto
export KUBECONFIG=$KUBECONFIG2
kubectl get -n ns-nsm-linkerd deploy emoji vote-bot voting -o yaml | linkerd inject --enable-debug-sidecar - | kubectl apply -f -
```
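A minimal sketch of that last check, assuming the `web-local-svc` front end from `cluster2/web-svc.yaml` (service port 80, container port 8080) is the one being exposed:

```bash
# Forward the emojivoto front end to localhost, then open http://localhost:8080 in a browser and vote
kubectl --kubeconfig=$KUBECONFIG2 -n ns-nsm-linkerd port-forward svc/web-local-svc 8080:80
```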
## Cleanup

Uninject the Linkerd proxy from the deployments:
```bash
export PATH=$PATH:$HOME/.linkerd2/bin
kubectl --kubeconfig=$KUBECONFIG2 get deploy -n ns-nsm-linkerd -o yaml | linkerd uninject - | kubectl apply -f -
```
Delete the network service:
```bash
export KUBECONFIG=$KUBECONFIG2
kubectl delete -n ns-nsm-linkerd networkservices.networkservicemesh.io nsm-linkerd
```
Delete the namespaces:
```bash
kubectl --kubeconfig=$KUBECONFIG1 delete ns ns-nsm-linkerd
kubectl --kubeconfig=$KUBECONFIG2 delete ns ns-nsm-linkerd
```
Remove the Linkerd control plane from the cluster:
```bash
linkerd uninstall | kubectl delete -f -
```

diff --git a/examples/interdomain/nsm_linkerd/README_exp.md b/examples/interdomain/nsm_linkerd/README_exp.md
new file mode 100644
index 000000000000..1b4cf46cada1
--- /dev/null
+++ b/examples/interdomain/nsm_linkerd/README_exp.md
@@ -0,0 +1,338 @@
The following experiments were made.

Prepare for the experiments:
```bash
WEB=web-659544f5f7-ks22r
PROXY=proxy-web-659544f5f7-ks22r

WEB_LOCAL=web-local-67bfcd4d9c-z6fnh
PROXY_LOCAL=proxy-web-local-67bfcd4d9c-z6fnh

export KUBECONFIG1=/tmp/config1
export KUBECONFIG2=/tmp/config2
export KUBECONFIG=$KUBECONFIG1
export KUBECONFIG=$KUBECONFIG2

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
kubectl krew install sniff
export KUBECONFIG=$KUBECONFIG2
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c web-svc -- /bin/sh -c 'apt-get install curl'
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c 'apk add curl'

kubectl exec -it $PROXY_LOCAL -n ns-nsm-linkerd -c nse -- /bin/sh -c 'apk add curl iptables'

kubectl exec -it $PROXY -n ns-nsm-linkerd -c nse -- /bin/sh -c 'apk add curl iptables'

export KUBECONFIG=$KUBECONFIG1
kubectl exec -it $WEB -n ns-nsm-linkerd -c web-svc -- /bin/sh -c 'apt-get install curl'
kubectl exec -it $WEB -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c 'apk add curl'
```

Get a dump for Wireshark with sniff. Run in a different terminal:
```bash
# local 2
mkdir sniff_dump_greetint_on_proxy_local
# local 3
export KUBECONFIG=$KUBECONFIG2
kubectl sniff $PROXY_LOCAL -n ns-nsm-linkerd -c linkerd-debug -o sniff_dump_greetint_on_proxy_local/proxy-local-linkerd.pcap
```
Test with DNS (currently this fails, but it should work):
```bash
export KUBECONFIG=$KUBECONFIG2
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c web-svc -- /bin/sh -c 'curl -v -H "Connection: close" greeting.ns-nsm-linkerd:9080'
```

To check how it should work, run the same request from the proxy pod and get a dump with sniff:
```bash
export KUBECONFIG=$KUBECONFIG2
kubectl exec -it $PROXY_LOCAL -n ns-nsm-linkerd -c web-svc -- /bin/sh -c 'curl -v -H "Connection: close" greeting.ns-nsm-linkerd:9080'
```
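The captures written by sniff can also be inspected offline; a small sketch, assuming `tshark` is installed on the machine where the `.pcap` files land:

```bash
# Show only DNS and HTTP packets from the capture taken on the proxy pod
tshark -r sniff_dump_greetint_on_proxy_local/proxy-local-linkerd.pcap -Y 'dns || http'
```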
Run ifconfig to get the NSM interface name and the NSM IP address to use. Typically 172.16.1.2 is used:
```bash
kubectl exec -it $PROXY_LOCAL -n ns-nsm-linkerd -c nse -- ifconfig
PROXY_LOCAL_NSM_ADDR=172.16.1.2
NSM=nsm-linker-9090
```
Get the Cluster IP and Pod IP for the PROXY_LOCAL and greeting pods:
```bash
GREET_CLUSTER_IP=10.96.126.33
```

Check connectivity with the NSM address:
```bash
# terminal 2
export KUBECONFIG=$KUBECONFIG2
kubectl sniff $PROXY_LOCAL -n ns-nsm-linkerd -c linkerd-debug -o sniff_dump_web_local_to_clusterip/proxy-local-linkerd.pcap

# terminal 1
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c "curl -v ${PROXY_LOCAL_NSM_ADDR}:9080"
```

Check connectivity with the cluster IP 10.96.126.33:9080 (to greeting):
```bash
# local 2
export KUBECONFIG=$KUBECONFIG2
kubectl sniff $PROXY_LOCAL -n ns-nsm-linkerd -c linkerd-debug -o sniff_dump_web_local_to_clusterip/proxy-local-linkerd.pcap

export KUBECONFIG=$KUBECONFIG2
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c "curl -v -H 'Connection: close' ${GREET_CLUSTER_IP}:9080"
```

Experiments:

**exp 0** (no iptables rules)
```bash
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c 'curl -v -H "Connection: close" 172.16.1.2:9080'
```
Result:
```
* Trying 172.16.1.2:9080...
* Connected to 172.16.1.2 (172.16.1.2) port 9080 (#0)
> GET / HTTP/1.1
> Host: 172.16.1.2:9080
> User-Agent: curl/7.83.1
> Accept: */*
> Connection: close
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
command terminated with exit code 52
```

Open a shell in the nse container of the PROXY_LOCAL pod:
```bash
kubectl exec -it $PROXY_LOCAL -n ns-nsm-linkerd -c nse -- /bin/sh
```
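Before adding new rules, it helps to see what linkerd's proxy-init has already installed in the nat table; a sketch, run inside the nse shell opened above:

```bash
# List the nat table and dump the PROXY_INIT_* chains created by linkerd proxy-init
iptables -t nat -L -n -v
iptables -t nat -S PROXY_INIT_REDIRECT
iptables -t nat -S PROXY_INIT_OUTPUT
```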
Add iptables rules in the nse container (these don't work):
```bash
iptables -t nat -N NSM_PREROUTE
iptables -t nat -A NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -p tcp -i nsm-linkerd -j NSM_PREROUTE
iptables -t nat -N NSM_OUTPUT
iptables -t nat -A NSM_OUTPUT -j DNAT --to-destination 10.96.126.33
iptables -t nat -A OUTPUT -p tcp -s 0.0.0.0 -j NSM_OUTPUT
iptables -t nat -N NSM_POSTROUTING
iptables -t nat -A NSM_POSTROUTING -j SNAT --to-source 172.16.1.3
iptables -t nat -D POSTROUTING -p tcp -o nsm-linkerd -j NSM_POSTROUTING
```

**Curl result**
```bash
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c 'curl -v -H "Connection: close" 172.16.1.2:9080'
```
```
* Trying 172.16.1.2:9080...
* Connected to 172.16.1.2 (172.16.1.2) port 9080 (#0)
> GET / HTTP/1.1
> Host: 172.16.1.2:9080
> User-Agent: curl/7.83.1
> Accept: */*
> Connection: close
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56
```

**exp2**
Delete the applied chain and re-add it:
```bash
iptables -t nat -D POSTROUTING -p tcp -o nsm-linkerd -j NSM_POSTROUTING
iptables -t nat --flush NSM_POSTROUTING
iptables -t nat -X NSM_POSTROUTING

iptables -t nat -N NSM_POSTROUTING
iptables -t nat -A NSM_POSTROUTING -j SNAT --to-source 172.16.1.3
iptables -t nat -A POSTROUTING -p tcp -o nsm-linkerd -j NSM_POSTROUTING
```
```bash
kubectl exec -it $WEB_LOCAL -n ns-nsm-linkerd -c cmd-nsc -- /bin/sh -c 'curl -v -H "Connection: close" 172.16.1.2:9080'
```
```
* Trying 172.16.1.2:9080...
* Connected to 172.16.1.2 (172.16.1.2) port 9080 (#0)
> GET / HTTP/1.1
> Host: 172.16.1.2:9080
> User-Agent: curl/7.83.1
> Accept: */*
> Connection: close
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
command terminated with exit code 52
```

**exp3**
Delete the Linkerd loopback chain rule:
```bash
iptables -t nat -D PROXY_INIT_OUTPUT -o lo -m comment --comment "proxy-init/ignore-loopback/1663224390" -j RETURN

iptables -t nat -I PROXY_INIT_OUTPUT 2 -o lo -m comment --comment "proxy-init/ignore-loopback/1663224390" -j RETURN
```

**exp4**

Delete the NSM_OUTPUT chain:
```bash
iptables -t nat -D NSM_OUTPUT -j DNAT --to-destination 10.96.126.33
iptables -t nat --flush NSM_OUTPUT
iptables -t nat -D OUTPUT -s 0.0.0.0/32 -p tcp -j NSM_OUTPUT
iptables -t nat -X NSM_OUTPUT
```

--destination

```bash
iptables -t nat -S NSM_PREROUTE
iptables -t nat -A NSM_PREROUTE -d 10.96.126.33
iptables -t nat -D NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -p tcp -i nsm-linkerd -j NSM_PREROUTE
iptables -t nat -D PREROUTING -i nsm-linkerd -p tcp -j NSM_PREROUTE
iptables -t nat -D NSM_PREROUTE -d 10.96.126.33

iptables -t nat -A NSM_PREROUTE --to-destination 10.96.126.33

iptables -t nat -I NSM_PREROUTE 1 -d 10.96.126.33
```

cluster to cluster:
```bash
curl 172.16.1.2:9080
curl clusterIP
```

```bash
iptables -t nat -N NSM_PREROUTE
iptables -t nat -A NSM_PREROUTE -p tcp -j DNAT --to-destination 10.96.126.33
iptables -t nat -A NSM_PREROUTE -p tcp -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -p tcp -i nsm-linkerd -j NSM_PREROUTE

iptables -t nat -N NSM_POSTROUTING
iptables -t nat -A NSM_POSTROUTING -p tcp -j SNAT --to-source 172.16.1.2

iptables -t nat -D POSTROUTING -p tcp -o nsm-linkerd -j NSM_POSTROUTING

iptables -t nat -D NSM_POSTROUTING -j SNAT --to-source 172.16.1.3
iptables -t nat -D NSM_PREROUTE -j PROXY_INIT_REDIRECT
```

**exp 6**
```bash
iptables -t nat -D PREROUTING -p tcp -i nsm-linkerd -j NSM_PREROUTE
iptables -t nat -I PREROUTING 1 -p tcp -i nsm-linkerd -d 172.16.1.2 -j NSM_PREROUTE
```

The previous experiments returned this error on PROXY_LOCAL:
```bash
kubectl logs $PROXY_LOCAL -n ns-nsm-linkerd -c linkerd-proxy
```
```
[ 2294.311150s] WARN ThreadId(01) inbound: linkerd_app_core::serve: Server failed to become ready error=inbound connections are not allowed on this IP address (172.16.1.2) error.sources=[inbound connections are not allowed on this IP address (172.16.1.2)] client.addr=172.16.1.3:48442
```

An experiment was made to find the root cause:
the PROXY_LOCAL pod was run with the greeting container on it (the config file for it is cluster2/nse-auto-scale/pod-template-with-greeting.yaml). Even with the greeting container on it, the request returned the same error.
The reason is that the NSM address is not in allowed_ips here:
https://github.com/linkerd/linkerd2-proxy/blob/6b9003699b170dbbf240aa22f0b36db3f21cf14a/linkerd/app/core/src/transport/allow_ips.rs

allowed_ips is taken from the LINKERD2_PROXY_INBOUND_IPS env var (https://github.com/linkerd/linkerd2/blob/f6c6ff965cae3accb49f061dca5c8edbdd9d13ef/charts/partials/templates/_proxy.tpl), which is set from the pod's status.podIPs value, defined in the same template: https://github.com/linkerd/linkerd2/blob/f6c6ff965cae3accb49f061dca5c8edbdd9d13ef/charts/partials/templates/_proxy.tpl
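A quick way to confirm which inbound IPs the injected proxy was actually given (a sketch; assumes the injected sidecar container is named `linkerd-proxy`, the default):

```bash
# Print the LINKERD2_PROXY_INBOUND_IPS env var of the linkerd-proxy container in the PROXY_LOCAL pod
kubectl get pod $PROXY_LOCAL -n ns-nsm-linkerd \
  -o jsonpath='{.spec.containers[?(@.name=="linkerd-proxy")].env[?(@.name=="LINKERD2_PROXY_INBOUND_IPS")].value}'
```

The NSM address 172.16.1.2 does not appear in that list, which matches the warning above.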
Practically, then, the best way is to add an iptables rule that changes the destination before the packet reaches linkerd-proxy.

How to solve:
iptables: change the destination address from the NSM address to the pod IP and send the packet into the proxy redirect chain.
  PROXY_LOCAL pod IP: 10.244.1.49
  NSE addr: 172.16.1.2


16/09/22

WEB_LOCAL=web-local-67bfcd4d9c-5hqrm
PROXY_LOCAL=proxy-web-local-67bfcd4d9c-5hqrm

1. Run the PROXY_LOCAL pod with the greeting container on it.
Apply iptables rules onto the PROXY_LOCAL pod:
```bash
iptables -t nat -N NSM_PREROUTE
iptables -t nat -A NSM_PREROUTE -j DNAT --to-destination 10.244.1.49
iptables -t nat -A NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -i nsm-linker-8839 -d 172.16.1.2 -j NSM_PREROUTE
```

The experiment works with the greeting container on the PROXY_LOCAL pod.

2. Run PROXY_LOCAL without the greeting container and repeat.

Apply the same iptables rules on PROXY_LOCAL:
```bash
iptables -t nat -N NSM_PREROUTE
iptables -t nat -A NSM_PREROUTE -j DNAT --to-destination 10.244.2.40
iptables -t nat -A NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -i nsm-linker-d4b4 -d 172.16.1.2 -j NSM_PREROUTE
```
Result:
```
* Trying 172.16.1.2:9080...
* connect to 172.16.1.2 port 9080 failed: Connection refused
* Failed to connect to 172.16.1.2 port 9080 after 8 ms: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 172.16.1.2 port 9080 after 8 ms: Connection refused
command terminated with exit code 7
```

3. Update the iptables rules so that the destination changes to the greeting cluster IP:
```bash
iptables -t nat -D NSM_PREROUTE -j DNAT --to-destination 10.244.2.40
iptables -t nat -I NSM_PREROUTE 1 -j DNAT --to-destination 10.96.210.72
```
Result:
```
* Trying 172.16.1.2:9080...
* connect to 172.16.1.2 port 9080 failed: Operation timed out
* Failed to connect to 172.16.1.2 port 9080 after 130439 ms: Operation timed out
* Closing connection 0
curl: (28) Failed to connect to 172.16.1.2 port 9080 after 130439 ms: Operation timed out
command terminated with exit code 28
```

WIRESHARK:

* tcp retransmission


4. Update the iptables rules to change the destination to the greeting pod IP.
First, delete the old rule:
```bash
iptables -t nat -D NSM_PREROUTE -j DNAT --to-destination 10.96.210.72
iptables -t nat -I NSM_PREROUTE 1 -j DNAT --to-destination 10.244.1.56
```
Result:
```
* Trying 172.16.1.2:9080...
* connect to 172.16.1.2 port 9080 failed: Network unreachable
* Failed to connect to 172.16.1.2 port 9080 after 129520 ms: Network unreachable
* Closing connection 0
curl: (7) Failed to connect to 172.16.1.2 port 9080 after 129520 ms: Network unreachable
command terminated with exit code 7
```

5. Update the iptables rules:
```bash
iptables -t nat -D NSM_PREROUTE -j DNAT --to-destination 10.244.1.56
iptables -t nat -D NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -D PREROUTING -i nsm-linker-d4b4 -d 172.16.1.2 -j NSM_PREROUTE

iptables -t nat -A NSM_PREROUTE -d 172.16.1.2 -j DNAT --to-destination 10.244.2.40
iptables -t nat -A NSM_PREROUTE -j PROXY_INIT_REDIRECT
iptables -t nat -I PREROUTING 1 -i nsm-linker-d4b4 -j NSM_PREROUTE
```

This didn't work.

6.
just redirect onto localhost + +iptables -t nat -D NSM_PREROUTE -d 172.16.1.2 -j DNAT --to-destination 10.244.2.40 +iptables -t nat -D NSM_PREROUTE -j PROXY_INIT_REDIRECT +iptables -t nat -D PREROUTING -i nsm-linker-d4b4 -j NSM_PREROUTE + + +iptables -t nat -I PREROUTING 1 -p tcp -i nsm-linker-d4b4 -j DNAT --to-destination 127.0.0.1 + +7. no new rules, send request from PROXY_LOCAL to greeting via NSM address + +listen loopback +to sniff traffic from particular interface, for example loopback, run: +```bash +kubectl sniff $PROXY_LOCAL -n ns-nsm-linkerd -c linkerd-debug -i lo -o proxy-local-linkerd.pcap +``` + +Next step: +Run debug onto Linkerd outbound port 4140 + + +Useful links: +IPtables and how to use them (in Russian): +https://www.opennet.ru/docs/RUS/iptables/#TRAVERSINGOFTABLES +https://tokmakov.msk.ru/blog/item/473 + +Iptables in linkerd: +https://linkerd.io/2.12/reference/iptables/ +https://linkerd.io/2021/09/23/how-linkerd-uses-iptables-to-transparently-route-kubernetes-traffic/ +https://linkerd.io/2.11/features/protocol-detection/ \ No newline at end of file diff --git a/examples/interdomain/nsm_linkerd/cluster1/kustomization.yaml b/examples/interdomain/nsm_linkerd/cluster1/kustomization.yaml new file mode 100644 index 000000000000..f0750747b76c --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster1/kustomization.yaml @@ -0,0 +1,9 @@ +--- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: ns-nsm-linkerd + +resources: + - namespace.yaml + - web-svc.yaml diff --git a/examples/interdomain/nsm_linkerd/cluster1/namespace.yaml b/examples/interdomain/nsm_linkerd/cluster1/namespace.yaml new file mode 100644 index 000000000000..0ab8d805a67e --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster1/namespace.yaml @@ -0,0 +1,5 @@ +--- +kind: Namespace +apiVersion: v1 +metadata: + name: ns-nsm-linkerd diff --git a/examples/interdomain/nsm_linkerd/cluster1/web-svc.yaml b/examples/interdomain/nsm_linkerd/cluster1/web-svc.yaml new file mode 100644 index 000000000000..c177e9083b42 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster1/web-svc.yaml @@ -0,0 +1,62 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: web +--- +apiVersion: v1 +kind: Service +metadata: + name: web-svc +spec: + ports: + - name: http + port: 80 + targetPort: 8080 + selector: + app: web-svc + type: ClusterIP +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: web + app.kubernetes.io/part-of: emojivoto + app.kubernetes.io/version: v11 + name: web +spec: + replicas: 1 + selector: + matchLabels: + app: web-svc + version: v11 + template: + metadata: + labels: + app: web-svc + version: v11 + annotations: + networkservicemesh.io: kernel://nsm-linkerd@my.cluster2/nsm-1?app=web-svc + spec: + containers: + - name: web-svc + image: docker.l5d.io/buoyantio/emojivoto-web:v11 + securityContext: + privileged: true + env: + - name: WEB_PORT + value: "8080" + - name: EMOJISVC_HOST + value: "emoji-svc.ns-nsm-linkerd:8080" + - name: VOTINGSVC_HOST + value: "voting-svc.ns-nsm-linkerd:8080" + - name: INDEX_BUNDLE + value: dist/index_bundle.js + ports: + - containerPort: 8080 + name: http + resources: + requests: + cpu: 100m + serviceAccountName: web diff --git a/examples/interdomain/nsm_linkerd/cluster2/emojivoto/emoji.yaml b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/emoji.yaml new file mode 100644 index 000000000000..d7c243057d18 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/emoji.yaml @@ -0,0 +1,60 @@ 
+--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: emoji +--- +apiVersion: v1 +kind: Service +metadata: + name: emoji-svc +spec: + ports: + - name: grpc + port: 8080 + targetPort: 8080 + - name: prom + port: 8801 + targetPort: 8801 + selector: + app: emoji-svc +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: emoji + app.kubernetes.io/part-of: emojivoto + app.kubernetes.io/version: v11 + name: emoji +spec: + replicas: 1 + selector: + matchLabels: + app: emoji-svc + version: v11 + template: + metadata: + labels: + app: emoji-svc + version: v11 + spec: + containers: + - env: + - name: GRPC_PORT + value: "8080" + - name: PROM_PORT + value: "8801" + image: docker.l5d.io/buoyantio/emojivoto-emoji-svc:v11 + name: emoji-svc + securityContext: + privileged: true + ports: + - containerPort: 8080 + name: grpc + - containerPort: 8801 + name: prom + resources: + requests: + cpu: 100m + serviceAccountName: emoji diff --git a/examples/interdomain/nsm_linkerd/cluster2/emojivoto/kustomization.yaml b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/kustomization.yaml new file mode 100644 index 000000000000..cd5d93dd6bdb --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/kustomization.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: ns-nsm-linkerd + +resources: + - voting.yaml + - vote-bot.yaml + - emoji.yaml \ No newline at end of file diff --git a/examples/interdomain/nsm_linkerd/cluster2/emojivoto/vote-bot.yaml b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/vote-bot.yaml new file mode 100644 index 000000000000..6e45195fb3e5 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/vote-bot.yaml @@ -0,0 +1,34 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: vote-bot + app.kubernetes.io/part-of: emojivoto + app.kubernetes.io/version: v11 + name: vote-bot +spec: + replicas: 1 + selector: + matchLabels: + app: vote-bot + version: v11 + template: + metadata: + labels: + app: vote-bot + version: v11 + spec: + containers: + - command: + - emojivoto-vote-bot + env: + - name: WEB_HOST + value: web-svc.emojivoto:80 + image: docker.l5d.io/buoyantio/emojivoto-web:v11 + name: vote-bot + securityContext: + privileged: true + resources: + requests: + cpu: 10m diff --git a/examples/interdomain/nsm_linkerd/cluster2/emojivoto/voting.yaml b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/voting.yaml new file mode 100644 index 000000000000..007d469efb8a --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/emojivoto/voting.yaml @@ -0,0 +1,60 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: voting +--- +apiVersion: v1 +kind: Service +metadata: + name: voting-svc +spec: + ports: + - name: grpc + port: 8080 + targetPort: 8080 + - name: prom + port: 8801 + targetPort: 8801 + selector: + app: voting-svc +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: voting + app.kubernetes.io/part-of: emojivoto + app.kubernetes.io/version: v11 + name: voting +spec: + replicas: 1 + selector: + matchLabels: + app: voting-svc + version: v11 + template: + metadata: + labels: + app: voting-svc + version: v11 + spec: + containers: + - env: + - name: GRPC_PORT + value: "8080" + - name: PROM_PORT + value: "8801" + image: docker.l5d.io/buoyantio/emojivoto-voting-svc:v11 + name: voting-svc + securityContext: + privileged: true + ports: + - containerPort: 8080 + name: grpc + - 
containerPort: 8801 + name: prom + resources: + requests: + cpu: 100m + serviceAccountName: voting diff --git a/examples/interdomain/nsm_linkerd/cluster2/kustomization.yaml b/examples/interdomain/nsm_linkerd/cluster2/kustomization.yaml new file mode 100644 index 000000000000..18d5eae7f566 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/kustomization.yaml @@ -0,0 +1,10 @@ +--- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: ns-nsm-linkerd +resources: + - server.yaml +bases: + - ./nse-auto-scale + - ./emojivoto \ No newline at end of file diff --git a/examples/interdomain/nsm_linkerd/cluster2/netsvc.yaml b/examples/interdomain/nsm_linkerd/cluster2/netsvc.yaml new file mode 100644 index 000000000000..5ae9d309cb9b --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/netsvc.yaml @@ -0,0 +1,18 @@ +--- +apiVersion: networkservicemesh.io/v1 +kind: NetworkService +metadata: + name: nsm-linkerd + namespace: ns-nsm-linkerd +spec: + payload: IP + matches: + - source_selector: + fallthrough: true + routes: + - destination_selector: + podName: "{{ .podName }}" + - source_selector: + routes: + - destination_selector: + any: "true" diff --git a/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/iptables-map.yaml b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/iptables-map.yaml new file mode 100644 index 000000000000..ed97d539c095 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/iptables-map.yaml @@ -0,0 +1 @@ +--- diff --git a/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/kustomization.yaml b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/kustomization.yaml new file mode 100644 index 000000000000..5b50dfb7ef3e --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/kustomization.yaml @@ -0,0 +1,22 @@ +--- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization + +namespace: ns-nsm-linkerd + +bases: +- https://github.com/networkservicemesh/deployments-k8s/apps/nse-supplier-k8s?ref=5278bf09564d36b701e8434d9f1d4be912e6c266 + +patchesStrategicMerge: +- patch-supplier.yaml + +configMapGenerator: + - name: supplier-pod-template-configmap + files: + - pod-template.yaml + - name: iptables-map + files: + - iptables-map.yaml + +generatorOptions: + disableNameSuffixHash: true diff --git a/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/patch-supplier.yaml b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/patch-supplier.yaml new file mode 100644 index 000000000000..4bc4f219d248 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/patch-supplier.yaml @@ -0,0 +1,29 @@ +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: nse-supplier-k8s +spec: + template: + spec: + containers: + - name: nse-supplier + env: + - name: NSM_SERVICE_NAME + value: nsm-linkerd + - name: NSM_LABELS + value: any:true + - name: NSM_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: NSM_POD_DESCRIPTION_FILE + value: /run/supplier/pod-template.yaml + volumeMounts: + - name: pod-file + mountPath: /run/supplier + readOnly: true + volumes: + - name: pod-file + configMap: + name: supplier-pod-template-configmap diff --git a/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template-with-greeting.yaml b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template-with-greeting.yaml new file mode 100644 index 000000000000..8750c4fac5cc --- /dev/null +++ 
b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template-with-greeting.yaml @@ -0,0 +1,96 @@ +--- +apiVersion: apps/v1 +kind: Pod +metadata: + name: proxy-{{ index .Labels "podName" }} + labels: + "spiffe.io/spiffe-id": "true" + annotations: + linkerd.io/inject: enabled + config.linkerd.io/enable-debug-sidecar: "true" + +spec: + restartPolicy: Never + containers: + - name: nse + image: ghcr.io/networkservicemesh/ci/cmd-nse-l7-proxy:custom + imagePullPolicy: IfNotPresent + securityContext: + privileged: true + env: + - name: SPIFFE_ENDPOINT_SOCKET + value: unix:///run/spire/sockets/agent.sock + - name: NSM_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAME + value: {{ index .Labels "podName" }} + - name: NSM_CONNECT_TO + value: unix:///var/lib/networkservicemesh/nsm.io.sock + - name: NSM_CIDR_PREFIX + value: 172.16.1.2/31 + - name: NSM_SERVICE_NAMES + value: nsm-linkerd + - name: NSM_LABELS + value: app:{{ index .Labels "app" }} + - name: NSM_IDLE_TIMEOUT + value: 240s + - name: NSM_LOG_LEVEL + value: TRACE + - name: NSM_RULES_CONFIG_PATH + value: iptables-map/iptables-map.yaml + volumeMounts: + - name: spire-agent-socket + mountPath: /run/spire/sockets + readOnly: true + - name: nsm-socket + mountPath: /var/lib/networkservicemesh + readOnly: true + - name: iptables-config-map + mountPath: /iptables-map + resources: + limits: + memory: 40Mi + cpu: 150m + - name: server + securityContext: + privileged: true + image: hashicorp/http-echo:alpine + args: + - -text="Alice in Wonderland + How do you get to Wonderland + Over the hill or under land + Or just behind a tree + + When clouds go rolling by + They roll away and leave the sky + Where is the land behind the eye + People cannot see + + Where can you see + Where do the stars go + Where is the crescent moon + They must be somewhere in the sunny afternoon + + Alice in Wonderland + Where is the path to Wonderland + Over the hill or here or there + I wonder where + " + - -listen=:9080 + ports: + - containerPort: 9080 + name: http + volumes: + - name: spire-agent-socket + hostPath: + path: /run/spire/sockets + type: Directory + - name: nsm-socket + hostPath: + path: /var/lib/networkservicemesh + type: DirectoryOrCreate + - name: iptables-config-map + configMap: + name: iptables-map diff --git a/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template.yaml b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template.yaml new file mode 100644 index 000000000000..f0d7960a8de8 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/nse-auto-scale/pod-template.yaml @@ -0,0 +1,67 @@ +--- +apiVersion: apps/v1 +kind: Pod +metadata: + name: proxy-{{ index .Labels "podName" }} + labels: + "spiffe.io/spiffe-id": "true" + annotations: + linkerd.io/inject: enabled + config.linkerd.io/enable-debug-sidecar: "true" + +spec: + restartPolicy: Never + containers: + - name: nse + image: ghcr.io/networkservicemesh/ci/cmd-nse-l7-proxy:custom + imagePullPolicy: IfNotPresent + securityContext: + privileged: true + env: + - name: SPIFFE_ENDPOINT_SOCKET + value: unix:///run/spire/sockets/agent.sock + - name: NSM_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAME + value: {{ index .Labels "podName" }} + - name: NSM_CONNECT_TO + value: unix:///var/lib/networkservicemesh/nsm.io.sock + - name: NSM_CIDR_PREFIX + value: 172.16.1.2/31 + - name: NSM_SERVICE_NAMES + value: nsm-linkerd + - name: NSM_LABELS + value: app:{{ index .Labels "app" }} + - name: NSM_IDLE_TIMEOUT + value: 240s + - 
name: NSM_LOG_LEVEL + value: TRACE + - name: NSM_RULES_CONFIG_PATH + value: iptables-map/iptables-map.yaml + volumeMounts: + - name: spire-agent-socket + mountPath: /run/spire/sockets + readOnly: true + - name: nsm-socket + mountPath: /var/lib/networkservicemesh + readOnly: true + - name: iptables-config-map + mountPath: /iptables-map + resources: + limits: + memory: 40Mi + cpu: 150m + volumes: + - name: spire-agent-socket + hostPath: + path: /run/spire/sockets + type: Directory + - name: nsm-socket + hostPath: + path: /var/lib/networkservicemesh + type: DirectoryOrCreate + - name: iptables-config-map + configMap: + name: iptables-map diff --git a/examples/interdomain/nsm_linkerd/cluster2/server.yaml b/examples/interdomain/nsm_linkerd/cluster2/server.yaml new file mode 100644 index 000000000000..e02d75c7cdd3 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/server.yaml @@ -0,0 +1,70 @@ +--- +apiVersion: v1 +kind: Service +metadata: + name: greeting + labels: + app: greeting + service: greeting +spec: + ports: + - port: 9080 + name: http + selector: + app: greeting +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: greeting-sa + labels: + account: greeting +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: greeting + labels: + app: greeting +spec: + replicas: 1 + selector: + matchLabels: + app: greeting + template: + metadata: + labels: + app: greeting + spec: + serviceAccountName: greeting-sa + containers: + - name: server + securityContext: + privileged: true + image: hashicorp/http-echo:alpine + args: + - -text="Alice in Wonderland + How do you get to Wonderland + Over the hill or under land + Or just behind a tree + + When clouds go rolling by + They roll away and leave the sky + Where is the land behind the eye + People cannot see + + Where can you see + Where do the stars go + Where is the crescent moon + They must be somewhere in the sunny afternoon + + Alice in Wonderland + Where is the path to Wonderland + Over the hill or here or there + I wonder where + " + - -listen=:9080 + ports: + - containerPort: 9080 + name: http +--- diff --git a/examples/interdomain/nsm_linkerd/cluster2/web-svc.yaml b/examples/interdomain/nsm_linkerd/cluster2/web-svc.yaml new file mode 100644 index 000000000000..34dd15bf5228 --- /dev/null +++ b/examples/interdomain/nsm_linkerd/cluster2/web-svc.yaml @@ -0,0 +1,62 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: web-local +--- +apiVersion: v1 +kind: Service +metadata: + name: web-local-svc +spec: + ports: + - name: http + port: 80 + targetPort: 8080 + selector: + app: web-local-svc + type: ClusterIP +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/name: web + app.kubernetes.io/part-of: emojivoto + app.kubernetes.io/version: v11 + name: web-local +spec: + replicas: 1 + selector: + matchLabels: + app: web-local-svc + version: v11 + template: + metadata: + labels: + app: web-local-svc + version: v11 + annotations: + networkservicemesh.io: kernel://nsm-linkerd/nsm-1?app=web-local-svc + spec: + containers: + - name: web-svc + image: docker.l5d.io/buoyantio/emojivoto-web:v11 + securityContext: + privileged: true + env: + - name: WEB_PORT + value: "8080" + - name: EMOJISVC_HOST + value: "emoji-svc.ns-nsm-linkerd:8080" + - name: VOTINGSVC_HOST + value: "voting-svc.ns-nsm-linkerd:8080" + - name: INDEX_BUNDLE + value: dist/index_bundle.js + ports: + - containerPort: 8080 + name: http + resources: + requests: + cpu: 100m + serviceAccountName: web-local