design/baremetal-operator/annotation-for-power-cycling-and-deleting-failed-nodes.md
4 additions & 4 deletions
@@ -26,7 +26,7 @@ attempt to recover capacity.
 Hardware is imperfect, and software contains bugs. When node-level failures
 such as kernel hangs or dead NICs occur, the work required from the cluster
 does not decrease - workloads from affected nodes need to be restarted
-somewhere.
+somewhere.

 However, some workloads may require at-most-one semantics. Failures affecting
 these kinds of workloads risk data loss and/or corruption if "lost" nodes are
@@ -41,7 +41,7 @@ that no Pods or PersistentVolumes are present there.
 Ideally customers would over-provision the cluster so that a node failure (or
 several) does not put additional stress on surviving peers; however, budget
 constraints mean that this is often not the case, particularly in Edge
-deployments which may consist of as few as three nodes of commodity hardware.
+deployments which may consist of as few as three nodes of commodity hardware.
 Even when deployments start off over-provisioned, there is a tendency for the
 extra capacity to become permanently utilised. It is therefore usually
 important to recover the lost capacity quickly.
@@ -106,10 +106,10 @@ See [PoC code](https://github.com/kubevirt/machine-remediation/)

 - A new [Machine Remediation CRD](https://github.com/kubevirt/machine-remediation/blob/master/pkg/apis/machineremediation/v1alpha1/machineremediation_types.go)
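
To make the proposal more concrete, here is a minimal sketch of what the Go types behind such a CRD could look like. This is an illustration only: the type, field, and constant names below are assumptions loosely modeled on the linked kubevirt/machine-remediation types file, not a verbatim copy of it.

```go
// Package v1alpha1 sketches illustrative Go types for a machine-remediation
// CRD. All names here are assumptions for illustration, not the upstream
// kubevirt/machine-remediation definitions.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// RemediationType names the strategy used to recover a failed machine.
type RemediationType string

const (
	// RemediationTypeReboot power-cycles the host to clear transient faults
	// such as kernel hangs, without re-provisioning it.
	RemediationTypeReboot RemediationType = "Reboot"
	// RemediationTypeRecreate deletes the Machine so the controller can
	// re-provision a replacement, recovering the lost capacity.
	RemediationTypeRecreate RemediationType = "Recreate"
)

// MachineRemediation requests recovery of a single unhealthy machine.
type MachineRemediation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MachineRemediationSpec   `json:"spec,omitempty"`
	Status MachineRemediationStatus `json:"status,omitempty"`
}

// MachineRemediationSpec identifies the target machine and the strategy.
type MachineRemediationSpec struct {
	// Type selects the remediation strategy to apply.
	Type RemediationType `json:"type"`
	// MachineName names the Machine object to remediate. The controller must
	// not reschedule its workloads until the host is confirmed powered off,
	// preserving the at-most-one semantics discussed above.
	MachineName string `json:"machineName"`
}

// MachineRemediationStatus reports progress of the remediation.
type MachineRemediationStatus struct {
	// State tracks the remediation lifecycle, e.g. "Started", "PowerOffed",
	// "Succeeded" (illustrative values).
	State string `json:"state,omitempty"`
	// Reason records a human-readable explanation for the current state.
	Reason string `json:"reason,omitempty"`
	// StartTime marks when remediation began, useful for enforcing timeouts.
	StartTime *metav1.Time `json:"startTime,omitempty"`
}
```

Under this sketch, a remediation controller would watch MachineRemediation objects, power-cycle the backing host, and only delete or recreate the Machine once the host is confirmed off, so that at-most-one workloads can be rescheduled safely.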