
Commit cbf7263

Merge pull request #378 from Nordix/tuomo/switch-to-markdownlint-cli2
switch markdownlint container to markdownlint-cli2
2 parents 39ee0ac + 307192d commit cbf7263

36 files changed: +385 −349 lines changed

.markdownlint-cli2.yaml

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@
+# Reference: https://github.com/DavidAnson/markdownlint-cli2#markdownlint-cli2yaml
+
+config:
+  ul-indent:
+    # Kramdown wanted us to have 3 earlier, tho this CLI recommends 2 or 4
+    indent: 3
+  line-length: false
+
+# Don't autofix anything, we're linting here
+fix: false
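
With this file at the repository root, markdownlint-cli2 discovers it automatically. As a rough sketch of a local run (the container invocation itself is not part of this hunk, and the `/workdir` mount point follows the upstream image's documented convention rather than anything in this commit):

```console
# Hypothetical local run: lint all Markdown files using the repo's
# .markdownlint-cli2.yaml, via the upstream markdownlint-cli2 image.
docker run --rm -v "$PWD:/workdir" davidanson/markdownlint-cli2 "**/*.md"
```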

README.md

Lines changed: 2 additions & 0 deletions
@@ -1,4 +1,6 @@
+<!-- markdownlint-disable no-inline-html first-line-h1 -->
 <p align="center"><img alt="Metal³" src="./images/metal3.png" /></p>
+<!-- markdownlint-enable no-inline-html first-line-h1 -->
 
 # Metal³
 
design/baremetal-operator/annotation-for-power-cycling-and-deleting-failed-nodes.md

Lines changed: 4 additions & 4 deletions
@@ -26,7 +26,7 @@ attempt to recover capacity.
 Hardware is imperfect, and software contains bugs. When node level failures
 such as kernel hangs or dead NICs occur, the work required from the cluster
 does not decrease - workloads from affected nodes need to be restarted
-somewhere. 
+somewhere.
 
 However some workloads may require at-most-one semantics.  Failures affecting
 these kind of workloads risk data loss and/or corruption if "lost" nodes are
@@ -41,7 +41,7 @@ that no Pods or PersistentVolumes are present there.
 Ideally customers would over-provision the cluster so that a node failure (or
 several) does not put additional stress on surviving peers, however budget
 constraints mean that this is often not the case, particularly in Edge
-deployments which may consist of as few as three nodes of commodity hardware. 
+deployments which may consist of as few as three nodes of commodity hardware.
 Even when deployments start off over-provisioned, there is a tendency for the
 extra capacity to become permanently utilised. It is therefore usually
 important to recover the lost capacity quickly.
@@ -106,10 +106,10 @@ See [PoC code](https://github.com/kubevirt/machine-remediation/)
 
 - A new [Machine Remediation CRD](https://github.com/kubevirt/machine-remediation/blob/master/pkg/apis/machineremediation/v1alpha1/machineremediation_types.go)
 - Two new controllers:
-  - [node
+   - [node
     reboot](https://github.com/kubevirt/machine-remediation/tree/master/pkg/controllers/nodereboot)
     which looks for the annoation and creates Machine Remediation CRs
-  - [machine
+   - [machine
     remediation](https://github.com/kubevirt/machine-remediation/tree/master/pkg/controllers/machineremediation)
     which reboots the machine and deletes the Node object (which also
     erases the signalling annotation)

design/baremetal-operator/bmh-v1beta1.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ checksums.
   the usage of `v1alpha1`.
 
 - Make the API version reflect the actual support status of the API (see
-  [alternatives](#Alternatives) for details).
+  [alternatives](#alternatives) for details).
 
 ### Non-Goals
 

design/baremetal-operator/hardware-status.md

Lines changed: 11 additions & 11 deletions
@@ -50,14 +50,14 @@ up-to-date vision on the status of the devices.
 
 - Depending on the device, we need to collect the following
   information, when applicable:
-  - Capacity
-  - Location
-  - Manufacturer
-  - Model
-  - Part Number
-  - Serial Number
-  - Status
-  - SMART details
+   - Capacity
+   - Location
+   - Manufacturer
+   - Model
+   - Part Number
+   - Serial Number
+   - Status
+   - SMART details
 
 - NVMe drives require special tools to get SMART data (e.g. nvme-cli),
   although
@@ -107,11 +107,11 @@ up-to-date vision on the status of the devices.
   take decisions on different operations and might need a separate
   discussion on possible approaches, and includes at least the
   following aspects:
-  - Ensure correct deployment of the new service to all the members of a
+   - Ensure correct deployment of the new service to all the members of a
     cluster.
-  - Correct configuration of the service, that needs to be aware of the
+   - Correct configuration of the service, that needs to be aware of the
     ironic inspector api.
-  - Verify the service is correctly running and regularly reporting up-to-date
+   - Verify the service is correctly running and regularly reporting up-to-date
     data to ironic inspector.
 
 One possible approach would be building a new element (as part of

design/baremetal-operator/hardwaredata_crd.md

Lines changed: 1 addition & 1 deletion
@@ -232,4 +232,4 @@ None
 - [clusterctl move](https://cluster-api.sigs.k8s.io/clusterctl/commands/move.html#clusterctl-move)
 - [Pause annotation](https://github.com/metal3-io/baremetal-operator/blob/master/docs/api.md#pausing-reconciliation)
 - [Disable inspection](https://github.com/metal3-io/metal3-docs/blob/master/design/baremetal-operator/external-introspection.md#disable-inspection-proposal)
-- [ObjectReference type](https://pkg.go.dev/k8s.io/api/core/v1#ObjectReference)
+- [ObjectReference type](https://pkg.go.dev/k8s.io/api/core/v1#ObjectReference)

design/baremetal-operator/how-ironic-works.md

Lines changed: 18 additions & 0 deletions
@@ -160,6 +160,7 @@ This provides not only the state of the process, but also information on
 start and ending time, and more.
 An example of status answer:
 
+```json
 {
     "error": null,
     "finished": true,
@@ -174,6 +175,7 @@ An example of status answer:
     "state": "finished",
     "uuid": "c244557e-899f-46fa-a1ff-5b2c6718616b"
 }
+```
 
 There can be multiple introspection processes running at the same time, it's
 possible to retrieve all the statuses using:
@@ -229,6 +231,7 @@ through the “driver_info” option.
 
 An example of a typical node create request in JSON format:
 
+```json
 {
     "name": "test_node_dynamic",
     "driver": "ipmi",
@@ -238,6 +241,7 @@ An example of a typical node create request in JSON format:
     },
     "power_interface": "ipmitool"
 }
+```
 
 The response, if successful, contains a complete record of the node in JSON
 format with provided or default ({}, “null”, or “”) values.
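
As an aside, not part of this diff: the node-create request shown in the hunk above could be exercised against a standalone ironic roughly as follows. The endpoint, noauth assumption, and microversion are illustrative defaults, not values taken from this document, and the body is trimmed to the minimum fields.

```console
# Hypothetical example: create a bare metal node on a standalone ironic
# running in noauth mode on its default API port (6385). A recent
# microversion header is sent so newer driver/interface fields are accepted.
curl -X POST http://localhost:6385/v1/nodes \
  -H "Content-Type: application/json" \
  -H "X-OpenStack-Ironic-API-Version: 1.72" \
  -d '{"name": "test_node_dynamic", "driver": "ipmi"}'
```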
@@ -257,7 +261,9 @@ All nodes in ironic are tied to a state which allows ironic to track what
 actions can be performed upon the node and convey its general disposition.
 This field is the "provision_state" field that can be retrieved via the API.
 
+```console
 GET /v1/nodes/node-id
+```
 
 Inside the returned document, a "provision_state" field can be referenced.
 Further information can be found in ironic's
@@ -272,8 +278,10 @@ of networking ports for identification of the baremetal node and creation
 of PXE/iPXE configuration files in order to help ensure that baremetal node
 is quickly booted for booting into the deployment and discovery ramdisk.
 
+```console
 PUT /v1/nodes/node-id/provision/states
 {"target": "inspect"}
+```
 
 This operation can only be performed in the "manageable" ironic node state.
 If the node is already in the "available" state, the same requst can be used
@@ -291,13 +299,15 @@ Starting with the bare metal node in the "available" provision_state:
    to disk. This is performed as a HTTP PATCH request to the
    `/v1/nodes/node-id` endpoint.
 
+```json
 {
  {“op”: “replace”, “path”: “/instance_info”, “value”: {
   “image_source”: “http://url-to-image”,
   “image_os_hash_algo”: “sha256”,
   “image_os_hash_value”: “abcdefghi…”}},
  {“op”: “replace”, “path”: “/instance_uuid”, “value”: “anyuuidvalue”}},
 }
+```
 
 **NOTE:** Instead of defining the "image_os_hash_*" values, a MD5 based
 image checksum can be set.
@@ -313,15 +323,19 @@ Starting with the bare metal node in the "available" provision_state:
    return code, with a message body that consists of a list of "driver"
    interfaces and any errors if applicable.
 
+```console
 GET /v1/nodes/node-id/validate
+```
 
 Reply:
 
+```json
 {
     "boot": true,
     ..
     "deploy": "configuration error message if applicable"
 }
+```
 
 The particular interfaces that would be important to pay attention to are
 ‘boot’, ‘deploy’, ‘power’, ‘management’.
@@ -352,8 +366,10 @@ Starting with the bare metal node in the "available" provision_state:
 4. Send a HTTP POST to `/v1/nodes/node-id/states/provision` to initiate
    the deployment
 
+```json
 {“target”: “active”,
  “configdrive”: “http://url-to-config-drive/node.iso.gz”}
+```
 
 Once the request to make the node active has been received by ironic,
 it will proceed with the deployment process and execute the required
@@ -392,8 +408,10 @@ erase the contents of the disks. This can be a time intensive process,
 and ultimately may only be useful for cleaning metadata except in limited
 circumstances.
 
+```console
 PUT /v1/nodes/node-id/states/provision
 {"target": "deleted"}
+```
 
 ### How to delete a baremetal node
 

design/baremetal-operator/kubebuilder-migration.md

Lines changed: 10 additions & 10 deletions
@@ -218,14 +218,14 @@ all metal3 repositories.
 - [operator-sdk 1.0 migration guide](https://sdk.operatorframework.io/docs/building-operators/golang/project_migration_guide/)
 - [migration implementation pull request](https://github.com/metal3-io/baremetal-operator/pull/655)
 - updates to kustomize deployment files and CI fixes:
-  - <https://github.com/metal3-io/baremetal-operator/pull/672>
-  - <https://github.com/metal3-io/baremetal-operator/pull/674>
-  - <https://github.com/metal3-io/baremetal-operator/pull/675>
-  - <https://github.com/metal3-io/baremetal-operator/pull/676>
-  - <https://github.com/metal3-io/baremetal-operator/pull/677>
-  - <https://github.com/metal3-io/baremetal-operator/pull/679>
-  - <https://github.com/metal3-io/metal3-dev-env/pull/510>
-  - <https://github.com/metal3-io/cluster-api-provider-metal3/pull/137>
-  - <https://github.com/metal3-io/cluster-api-provider-metal3/pull/138>
-  - <https://github.com/metal3-io/baremetal-operator/pull/678>
+   - <https://github.com/metal3-io/baremetal-operator/pull/672>
+   - <https://github.com/metal3-io/baremetal-operator/pull/674>
+   - <https://github.com/metal3-io/baremetal-operator/pull/675>
+   - <https://github.com/metal3-io/baremetal-operator/pull/676>
+   - <https://github.com/metal3-io/baremetal-operator/pull/677>
+   - <https://github.com/metal3-io/baremetal-operator/pull/679>
+   - <https://github.com/metal3-io/metal3-dev-env/pull/510>
+   - <https://github.com/metal3-io/cluster-api-provider-metal3/pull/137>
+   - <https://github.com/metal3-io/cluster-api-provider-metal3/pull/138>
+   - <https://github.com/metal3-io/baremetal-operator/pull/678>
 - [baremetal-operator PR](https://github.com/metal3-io/baremetal-operator/pull/650)

design/baremetal-operator/raid-disk-controller.md

Lines changed: 26 additions & 22 deletions
@@ -24,14 +24,14 @@ controllers are to be used to construct the hardware RAID volume(s).
 ### Goals
 
 The primary goal is to depict the physical disk and RAID controller names in
-the baremetal-host ``hardware-raid`` section.
+the baremetal-host `hardware-raid` section.
 
 Afterwards, implementation of the same will be done within the
-``baremetal-operator`` by extending the ``baremetal-host`` specification.
+`baremetal-operator` by extending the `baremetal-host` specification.
 
 ### Non-Goals
 
-- This specification does not deal with ``software-raid`` and extension for it.
+- This specification does not deal with `software-raid` and extension for it.
 - It does not attempt to cover any generic (vendor agnostic) naming convention
   for disks or controllers.
 - It does not cover testing for hardware from all the vendors. It will only be
@@ -96,13 +96,13 @@ The following user story appertains to the proposal in question:
 #### Story 1
 
 As an operator, I'd like to be able to specify the physical disks and/or RAID
-controllers I want to use when defining my ``hardware-raid`` configuration, or
+controllers I want to use when defining my `hardware-raid` configuration, or
 both.
 
 ## Design Details
 
-- The CRD spec will have to be extended to add fields for ``physicalDisks`` and
-  ``controller`` under the ``hardware_raid`` section.
+- The CRD spec will have to be extended to add fields for `physicalDisks` and
+  `controller` under the `hardware_raid` section.
 - The provisioner will then be extended to process these fields.
 - The provisioner will make Ironic API calls with the RAID configuration, as
   before, but including the physical disks and controller names this time (if
@@ -116,15 +116,15 @@ there.
 
 ### Implementation Details/Notes/Constraints
 
-- Two new fields: ``Controller`` and ``PhysicalDisks`` fields will be added to
-  the ``HardwareRAIDVolume`` struct in baremetalhost_types.go.
-- Two new fields: ``Controller`` and ``PhysicalDisks`` fields will be added to
-  the ``nodes.logicalDisk`` struct being constructed in the
-  ``buildTargetHardwareRAIDCfg`` function in pkg/provisioner/ironic/raid.go.
-- A pointer to the ``RAIDConfig`` struct will be added to the
-  ``BareMetalHostStatus`` field in baremetalhost_types.go.
-- Unit test cases will be added for the ``buildTargetHardwareRAIDCfg``
-  function, in a function called ``TestBuildTargetHardwareRAIDCfg`` in
+- Two new fields: `Controller` and `PhysicalDisks` fields will be added to
+  the `HardwareRAIDVolume` struct in baremetalhost_types.go.
+- Two new fields: `Controller` and `PhysicalDisks` fields will be added to
+  the `nodes.logicalDisk` struct being constructed in the
+  `buildTargetHardwareRAIDCfg` function in pkg/provisioner/ironic/raid.go.
+- A pointer to the `RAIDConfig` struct will be added to the
+  `BareMetalHostStatus` field in baremetalhost_types.go.
+- Unit test cases will be added for the `buildTargetHardwareRAIDCfg`
+  function, in a function called `TestBuildTargetHardwareRAIDCfg` in
   pkg/provisioner/ironic/raid_test.go.
 
 ### Risks and Mitigations
@@ -135,8 +135,8 @@ undesirably remove data from disks.
 
 ### Work Items
 
-- Extend the BMH CRD spec and status, adding fields for ``physicalDisks``and
-  ``controller`` under the ``hardware_raid`` section.
+- Extend the BMH CRD spec and status, adding fields for `physicalDisks`and
+  `controller` under the `hardware_raid` section.
 - Extend the provisioner to process these fields.
 - Ensure the provisioner adds the new fields to the Ironic API call made for
   raid configuration.
@@ -151,17 +151,17 @@ undesirably remove data from disks.
 ### Test Plan
 
 The code will be tested in a development environment with a stand-alone
-deployment of the ``baremetal-operator`` and ``ironic``. A number of
-deployments will be performed with various combinations of ``physicalDisks``
-and ``controller`` fields, and RAID levels; to test maximum possibilities. The
+deployment of the `baremetal-operator` and `ironic`. A number of
+deployments will be performed with various combinations of `physicalDisks`
+and `controller` fields, and RAID levels; to test maximum possibilities. The
 RAID levels 0, 1, 5, 6, 1+0, 5+0 and 6+0 will be tested with the extended
 parameters.
 
 Unit testing will be performed to ensure that the physical disks and
 controllers added to the BMH YAML RAID configuration are added correctly to the
 `logicalDisks` field of the `nodes` object.
 
-Testing will only be performed for ``idrac-wsman``, since only that is
+Testing will only be performed for `idrac-wsman`, since only that is
 available at the moment. (i.e. with Dell EMC hardware). Other vendors will have
 to test the code accordingly.
 
@@ -183,12 +183,16 @@ None.
 
 ## Alternatives
 
-Rely on the current ``hardware-raid`` configuration which does not allow for
+Rely on the current `hardware-raid` configuration which does not allow for
 specifying physical disks and RAID controllers, but works well in use cases
 where such a functionality is not desired.
 
 ## References
 
+<!-- markdownlint-disable link-image-reference-definitions -->
+
 [1]: (https://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell-PowerEdge-Boot-Optimized-Storage-Solution.pdf)
 
 [2]: (https://docs.openstack.org/ironic/latest/admin/raid.html)
+
+<!-- markdownlint-enable link-image-reference-definitions -->
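
For illustration only, not part of this diff: under the proposal discussed in this file, a BareMetalHost hardware RAID volume could reference a specific controller and set of physical disks roughly as sketched below. The surrounding layout follows the current BMH `spec.raid.hardwareRAIDVolumes` fields; the exact shape and naming of the new fields is what the proposal defines, and the controller/disk identifiers are made-up Dell-style examples.

```yaml
# Hypothetical sketch of the proposed fields (not taken from this commit).
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
spec:
  raid:
    hardwareRAIDVolumes:
      - name: os-volume
        level: "1"
        sizeGibibytes: 500
        # Proposed additions: pin this volume to one controller and its disks.
        controller: "RAID.Integrated.1-1"
        physicalDisks:
          - "Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1"
          - "Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1"
```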
