Commit f6acbf0

Merge pull request #137 from mainred/fix-custom-analyzer-doc
docs: fix custom analyzer doc
2 parents da8ea47 + abb175f · commit f6acbf0

docs/tutorials/custom-analyzers.md

Lines changed: 45 additions & 40 deletions
@@ -5,7 +5,7 @@ We will create a custom analyzer that checks a Linux host for resource issues an
 
 [Full example code](https://github.com/k8sgpt-ai/go-custom-analyzer)
 
-### Why?
+## Why?
 
 There are usecases where you might want to create custom analyzers to check for specific issues in your environment. This would be in conjunction with the K8sGPT built-in analyzers.
 For example, you may wish to scan the Kubernetes cluster nodes more deeply to understand if there are underlying issues that are related to issues in the cluster.
@@ -15,8 +15,6 @@ For example, you may wish to scan the Kubernetes cluster nodes more deeply to un
 - [K8sGPT CLI](https://github.com/k8sgpt-ai/k8sgpt.git)
 - [Golang](https://golang.org/doc/install) go1.22 or higher
 
-
-
 ### Writing a simple analyzer
 
 The K8sGPT CLI, operator and custom analyzers all use a GRPC API to communicate with each other. The API is defined in the [buf.build/k8sgpt-ai/k8sgpt](https://buf.build/k8sgpt-ai/k8sgpt/docs/main:schema.v1) repository. Buf is a tool that helps you manage Protobuf files. You can install it by following the instructions [here](https://docs.buf.build/installation).
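In practice a custom analyzer only has to implement one RPC from that schema: `Run` on the `CustomAnalyzerService`. As a preview of where this diff ends up, here is a stripped-down sketch of the server side, using the type names that appear in the updated hunks below (illustrative only; the full files are reconstructed from the hunks further down):

```go
package analyzer

import (
    "context"

    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
    v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
)

// Handler embeds the generated CustomAnalyzerServiceServer and overrides Run,
// which is the only call K8sGPT makes into a custom analyzer.
type Handler struct {
    rpc.CustomAnalyzerServiceServer
}

func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
    // Inspect the host or cluster here and report findings as a v1.Result.
    return &v1.RunResponse{Result: &v1.Result{Name: "example"}}, nil
}
```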
@@ -39,18 +37,19 @@ Once we have this structure let's create a simple main.go file with the followin
 package main
 
 import (
-    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
     "errors"
     "fmt"
+    "net"
+    "net/http"
+
+    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
     "github.com/k8sgpt-ai/go-custom-analyzer/pkg/analyzer"
     "google.golang.org/grpc"
     "google.golang.org/grpc/reflection"
-    "net"
-    "net/http"
 )
 
 func main() {
-
+    fmt.Println("Starting!")
     var err error
     address := fmt.Sprintf(":%s", "8085")
     lis, err := net.Listen("tcp", address)
@@ -60,7 +59,7 @@ func main() {
     grpcServer := grpc.NewServer()
     reflection.Register(grpcServer)
     aa := analyzer.Analyzer{}
-    rpc.RegisterAnalyzerServiceServer(grpcServer, aa.Handler)
+    rpc.RegisterCustomAnalyzerServiceServer(grpcServer, aa.Handler)
     if err := grpcServer.Serve(
         lis,
     ); err != nil && !errors.Is(err, http.ErrServerClosed) {
@@ -70,7 +69,8 @@ func main() {
 ```
 
 The most important part of this file is here:
-```
+
+```go
 aa := analyzer.Analyzer{}
 rpc.RegisterAnalyzerServiceServer(grpcServer, aa.Handler)
 ```
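Reassembling the hunks above, the post-change `main.go` reads roughly as follows. This is a sketch stitched together from this diff: the error-handling lines between the hunks are not part of the change and are shown here only as placeholders.

```go
package main

import (
    "errors"
    "fmt"
    "net"
    "net/http"

    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
    "github.com/k8sgpt-ai/go-custom-analyzer/pkg/analyzer"
    "google.golang.org/grpc"
    "google.golang.org/grpc/reflection"
)

func main() {
    fmt.Println("Starting!")
    var err error
    // Serve the analyzer on the port the K8sGPT configuration will point at.
    address := fmt.Sprintf(":%s", "8085")
    lis, err := net.Listen("tcp", address)
    if err != nil {
        panic(err) // placeholder: this error handling is outside the diffed lines
    }
    grpcServer := grpc.NewServer()
    reflection.Register(grpcServer)
    aa := analyzer.Analyzer{}
    // Register against the CustomAnalyzerService, the new name introduced by this PR.
    rpc.RegisterCustomAnalyzerServiceServer(grpcServer, aa.Handler)
    if err := grpcServer.Serve(
        lis,
    ); err != nil && !errors.Is(err, http.ErrServerClosed) {
        panic(err) // placeholder: also outside the diffed lines
    }
}
```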
@@ -89,21 +89,23 @@ Now let's create the `analyzer.go` file with the following content:
 package analyzer
 
 import (
+    "context"
+    "fmt"
+
     rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
     v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
-    "context"
+    "github.com/ricochet2200/go-disk-usage/du"
 )
 
 type Handler struct {
-    rpc.AnalyzerServiceServer
+    rpc.CustomAnalyzerServiceServer
 }
 type Analyzer struct {
     Handler *Handler
 }
 
-func (a *Handler) Run(context.Context, *v1.AnalyzerRunRequest) (*v1.AnalyzerRunResponse, error) {
-
-    response := &v1.AnalyzerRunResponse{
+func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
+    response := &v1.RunResponse{
         Result: &v1.Result{
             Name: "example",
             Details: "example",
@@ -120,35 +122,29 @@ func (a *Handler) Run(context.Context, *v1.AnalyzerRunRequest) (*v1.AnalyzerRunR
 ```
 
 This file contains the `Handler` struct which implements the `Run` method. This method is called when the analyzer is run. In this example, we are returning an error message.
-The `Run` method takes a context and an `AnalyzerRunRequest` as arguments and returns an `AnalyzerRunResponse` and an error. Find the API available [here](https://buf.build/k8sgpt-ai/k8sgpt/file/main:schema/v1/analyzer.proto#L16).
+The `Run` method takes a context and an `RunRequest` as arguments and returns an `RunResponse` and an error. Find the API available [here](https://buf.build/k8sgpt-ai/k8sgpt/file/1379a5a1889d4bf49494b2e2b8e36164:schema/v1/custom_analyzer.proto).
 
 ### Implementing some custom logic
 
 Now that we have the basic structure in place, let's implement some custom logic. We will check the disk usage on the host and return an error if it is above a certain threshold.
 
 ```go
 // analyzer.go
-import "github.com/ricochet2200/go-disk-usage/du"
-var KB = uint64(1024)
-func (a *Handler) Run(context.Context, *v1.AnalyzerRunRequest) (*v1.AnalyzerRunResponse, error) {
-
+func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
+    println("Running analyzer")
     usage := du.NewDiskUsage("/")
     diskUsage := int((usage.Size() - usage.Free()) * 100 / usage.Size())
-    var response = &v1.AnalyzerRunResponse{}
-    if diskUsage > 90 {
-        response = &v1.AnalyzerRunResponse{
-            Result: &v1.Result{
-                Name: "Disk Usage",
-                Details: "Disk usage is above 90%",
-                Error: []*v1.ErrorDetail{
-                    &v1.ErrorDetail{
-                        Text: "Disk usage is above 90%",
-                    },
+    return &v1.RunResponse{
+        Result: &v1.Result{
+            Name: "diskuse",
+            Details: fmt.Sprintf("Disk usage is %d", diskUsage),
+            Error: []*v1.ErrorDetail{
+                {
+                    Text: fmt.Sprintf("Disk usage is %d", diskUsage),
                 },
             },
-        }
-    }
-    return response, nil
+        },
+    }, nil
 }
 ```
 
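Likewise, putting the two `analyzer.go` snippets in this diff together, the resulting file after the change looks roughly like this (indentation and ordering are reconstructed from the hunks, not copied from the repository):

```go
package analyzer

import (
    "context"
    "fmt"

    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
    v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
    "github.com/ricochet2200/go-disk-usage/du"
)

type Handler struct {
    rpc.CustomAnalyzerServiceServer
}
type Analyzer struct {
    Handler *Handler
}

func (a *Handler) Run(context.Context, *v1.RunRequest) (*v1.RunResponse, error) {
    println("Running analyzer")
    // Report usage of the root filesystem as a percentage.
    usage := du.NewDiskUsage("/")
    diskUsage := int((usage.Size() - usage.Free()) * 100 / usage.Size())
    return &v1.RunResponse{
        Result: &v1.Result{
            Name:    "diskuse",
            Details: fmt.Sprintf("Disk usage is %d", diskUsage),
            Error: []*v1.ErrorDetail{
                {
                    Text: fmt.Sprintf("Disk usage is %d", diskUsage),
                },
            },
        },
    }, nil
}
```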
@@ -157,22 +153,31 @@ func (a *Handler) Run(context.Context, *v1.AnalyzerRunRequest) (*v1.AnalyzerRunR
 To test this with K8sGPT we need to update the local K8sGPT CLI configuration to point to the custom analyzer. We can do this by running the following command:
 
 ```bash
-❯ cat ~/Library/Application\ Support/k8sgpt/k8sgpt.yaml
-custom_analyzers:
-  - name: Disk Usage
-    connection:
-      url: localhost
-      port: 8085
+k8sgpt custom-analyzer add -n diskuse
+```
+
+This will add the custom analyzer diskuse to the list of available analyzers in the K8sGPT CLI.
+
+```bash
+k8sgpt custom-analyzer list
+Active:
+> diskuse
 ```
 
-This will add the custom analyzer to the list of available analyzers in the K8sGPT CLI.
 To execute the analyzer we can run the following command:
 
+- run the customer analyzer
+
+```bash
+go run main.go
+```
+
+- execute the analyzer
+
 ```bash
 k8sgpt analyze --custom-analysis
 ```
 
 ## What's next?
 
 Now you've got the basics of how to write a custom analyzer, you can extend this to check for other issues on your hosts or in your Kubernetes cluster. You can also create more complex analyzers that check for multiple issues and provide more detailed recommendations.
-
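Because `main.go` registers gRPC reflection and listens on `:8085`, the analyzer can also be smoke-tested without the K8sGPT CLI. The sketch below is a throwaway client for local testing; it assumes the buf-generated package exposes a `NewCustomAnalyzerServiceClient` constructor (the usual protoc-gen-go-grpc naming for the service registered above), which is not shown in this diff.

```go
package main

import (
    "context"
    "fmt"
    "log"

    rpc "buf.build/gen/go/k8sgpt-ai/k8sgpt/grpc/go/schema/v1/schemav1grpc"
    v1 "buf.build/gen/go/k8sgpt-ai/k8sgpt/protocolbuffers/go/schema/v1"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    // Connect to the analyzer started with `go run main.go`.
    conn, err := grpc.NewClient("localhost:8085",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // NewCustomAnalyzerServiceClient is assumed from standard protoc-gen-go-grpc naming.
    client := rpc.NewCustomAnalyzerServiceClient(conn)
    resp, err := client.Run(context.Background(), &v1.RunRequest{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.GetResult().GetDetails())
}
```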