
Commit b213c91

Merge pull request #29 from thomasjpfan/issue-10
Adds Flexible Labeling
2 parents 751d12d + 48b5205

12 files changed: +867 additions, −13 deletions

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ ADD . /src
 WORKDIR /src
 RUN go get -t github.com/stretchr/testify/suite
 RUN go get -d -v -t
-RUN go test --cover ./... --run UnitTest
+RUN go test --cover ./... --run UnitTest -p 1
 RUN CGO_ENABLED=0 GOOS=linux go build -v -o docker-flow-monitor
 
 

docs/config.md

Lines changed: 6 additions & 1 deletion
@@ -103,8 +103,13 @@ curl `[IP_OF_ONE_OF_SWARM_NODES]:8080/v1/docker-flow-monitor/reconfigure?scrapeP
 
 Please consult [Prometheus Configuration](https://prometheus.io/docs/operating/configuration/) for more information about the available options.
 
-## Scrapes
+## Scrape Secret Configuration
 
 Additional scrapes can be added through files prefixed with `scrape_`. By default, all such files located in `/run/secrets` are automatically added to the `scrape_configs` section of the configuration. The directory can be changed by setting a different value to the environment variable `CONFIGS_DIR`.
 
 The simplest way to add scrape configs is to use Docker [secrets](https://docs.docker.com/engine/swarm/secrets/) or [configs](https://docs.docker.com/engine/swarm/configs/).
+
+
+## Scrape Label Configuration
+
+When using a version of [Docker Flow Swarm Listener](https://github.com/vfarcic/docker-flow-swarm-listener) (DFSL) newer than `18.02.06-31`, you can configure DFSL to send node hostnames to *Docker Flow Monitor* (DFM). This can be done by setting `DF_INCLUDE_NODE_IP_INFO` to `true` in the DFSL environment. DFM will automatically display the node hostname as a label on each Prometheus target. The `DF_SCRAPE_TARGET_LABELS` environment variable allows additional labels to be displayed. For example, if a service has the environment variables `com.df.env=prod` and `com.df.domain=frontend`, you can set `DF_SCRAPE_TARGET_LABELS=env,domain` in DFM to display the `prod` and `frontend` labels in Prometheus.
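
As an illustrative sketch (not taken from the repository's stack files), the two settings described above might be wired together in a compose file like this; the service and image names are assumptions:

```yaml
# Hypothetical stack fragment; service and image names are illustrative.
services:
  swarm-listener:
    image: vfarcic/docker-flow-swarm-listener
    environment:
      # Send node hostname/IP information to Docker Flow Monitor
      - DF_INCLUDE_NODE_IP_INFO=true
  monitor:
    image: vfarcic/docker-flow-monitor
    environment:
      # Surface com.df.env and com.df.domain as Prometheus target labels
      - DF_SCRAPE_TARGET_LABELS=env,domain
```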

docs/tutorial-flexible-labeling.md

Lines changed: 110 additions & 0 deletions
# Flexible Labeling with Docker Flow Monitor

*Docker Flow Monitor* and *Docker Flow Swarm Listener* can be configured to allow for more flexible labeling of exporters. Please read the [Running Docker Flow Monitor](tutorial.md) tutorial before reading this one. This tutorial focuses on configuring the stacks to allow for flexible labeling.

## Setting Up A Cluster

!!! info
    Feel free to skip this section if you already have a Swarm cluster that can be used for this tutorial.

We'll create a Swarm cluster consisting of three nodes created with Docker Machine.

```bash
git clone https://github.com/vfarcic/docker-flow-monitor.git

cd docker-flow-monitor

./scripts/dm-swarm.sh

eval $(docker-machine env swarm-1)
```

## Deploying Docker Flow Monitor

We will deploy the [stacks/docker-flow-monitor-flexible-labels.yml](https://github.com/vfarcic/docker-flow-monitor/blob/master/stacks/docker-flow-monitor-flexible-labels.yml) stack that contains three services: `monitor`, `alert-manager`, and `swarm-listener`. The `swarm-listener` service includes an additional environment variable, `DF_INCLUDE_NODE_IP_INFO=true`, which configures `swarm-listener` to send node and IP information to `monitor`.

The `monitor` service includes the environment variable `DF_SCRAPE_TARGET_LABELS=env,metricType`. This sets up flexible labeling for exporters: if an exporter defines a deploy label `com.df.env` or `com.df.metricType`, that label will be used by `monitor`.

Let's deploy the `monitor` stack:

```bash
docker network create -d overlay monitor

docker stack deploy \
    -c stacks/docker-flow-monitor-flexible-labels.yml \
    monitor
```

## Collecting Metrics and Defining Alerts

We will deploy the exporters stack defined in [stacks/exporters-tutorial-flexible-labels.yml](https://github.com/vfarcic/docker-flow-monitor/blob/master/stacks/exporters-tutorial-flexible-labels.yml), containing two services: `cadvisor` and `node-exporter`.

The definition of the `cadvisor` service contains additional deploy labels:

```yaml
  cadvisor:
    image: google/cadvisor
    networks:
      - monitor
    ...
    deploy:
      mode: global
      labels:
        ...
        - com.df.scrapeNetwork=monitor
        - com.df.env=prod
        - com.df.metricType=system
```

The `com.df.scrapeNetwork` deploy label tells `swarm-listener` to use `cadvisor`'s IP on the `monitor` network. This is important because the `monitor` service uses the `monitor` network to scrape `cadvisor`. The `com.df.env=prod` and `com.df.metricType=system` deploy labels configure flexible labeling for `cadvisor`.

The second service, `node-exporter`, is also configured with flexible labels:

```yaml
  node-exporter:
    image: basi/node-exporter
    networks:
      - monitor
    ...
    deploy:
      mode: global
      labels:
        ...
        - com.df.scrapeNetwork=monitor
        - com.df.env=dev
        - com.df.metricType=system
```

Let's deploy the `exporter` stack:

```bash
docker stack deploy \
    -c stacks/exporters-tutorial-flexible-labels.yml \
    exporter
```

Please wait until the services in the stack are up and running. You can check their status by executing `docker stack ps exporter`.

Now we can open the *Prometheus* targets page from a browser.

> If you're a Windows user, Git Bash might not be able to use the `open` command. If that's the case, replace the `open` command with `echo`. As a result, you'll get the full address that should be opened directly in your browser of choice.

```bash
open "http://$(docker-machine ip swarm-1):9090/targets"
```

You should see a targets page similar to the following:

![Flexible Labeling Targets Page](img/flexiable-labeling-targets-page.png)

Each service is labeled with its associated `com.df.env` and `com.df.metricType` deploy labels. In addition, the `node` label is the hostname of the node the service is running on.

## What Now?

*Docker Flow Monitor*'s flexible labeling feature provides more information about your services. Please consult the documentation for any additional information you might need. Feel free to open [an issue](https://github.com/vfarcic/docker-flow-monitor/issues) if you require additional info, if you find a bug, or if you have a feature request.

Before you go, please remove the cluster we created and free those resources for something else.

```bash
docker-machine rm -f swarm-1 swarm-2 swarm-3
```

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ pages:
 - Tutorial:
   - Running Docker Flow Monitor: tutorial.md
   - Auto-Scaling Services Using Instrumented Metrics: auto-scaling.md
+  - Flexible Labeling with Docker Flow Monitor: tutorial-flexible-labeling.md
 - Configuration: config.md
 - Usage: usage.md
 - Migration Guide: migration.md

prometheus/config.go

Lines changed: 66 additions & 1 deletion
@@ -2,6 +2,7 @@ package prometheus
 
 import (
 	"bytes"
+	"encoding/json"
 	"fmt"
 	"net/url"
 	"os"
@@ -18,14 +19,17 @@ import (
 // WriteConfig creates Prometheus configuration at configPath and writes alerts into /etc/prometheus/alert.rules
 func WriteConfig(configPath string, scrapes map[string]Scrape, alerts map[string]Alert) {
 	c := &Config{}
+	fileSDDir := "/etc/prometheus/file_sd"
+	alertRulesPath := "/etc/prometheus/alert.rules"
 
 	configDir := filepath.Dir(configPath)
 	FS.MkdirAll(configDir, 0755)
+	FS.MkdirAll(fileSDDir, 0755)
 	c.InsertScrapes(scrapes)
 
 	if len(alerts) > 0 {
 		logPrintf("Writing to alert.rules")
-		afero.WriteFile(FS, "/etc/prometheus/alert.rules", []byte(GetAlertConfig(alerts)), 0644)
+		afero.WriteFile(FS, alertRulesPath, []byte(GetAlertConfig(alerts)), 0644)
 		c.RuleFiles = []string{"alert.rules"}
 	}
 
@@ -35,6 +39,7 @@ func WriteConfig(configPath string, scrapes map[string]Scrape, alerts map[string
 			logPrintf("Unable to insert alertmanager url %s into prometheus config", alertmanagerURL)
 		}
 	}
+	c.CreateFileStaticConfig(scrapes, fileSDDir)
 
 	for _, e := range os.Environ() {
 		envSplit := strings.SplitN(e, "=", 2)
@@ -98,6 +103,9 @@ func (c *Config) InsertScrapes(scrapes map[string]Scrape) {
 		if len(metricsPath) == 0 {
 			metricsPath = "/metrics"
 		}
+		if s.NodeInfo != nil && len(*s.NodeInfo) > 0 {
+			continue
+		}
 		if s.ScrapeType == "static_configs" {
 			newScrape = &ScrapeConfig{
 				ServiceDiscoveryConfig: ServiceDiscoveryConfig{
@@ -152,6 +160,63 @@ func (c *Config) InsertScrapesFromDir(dir string) {
 
 }
 
+// CreateFileStaticConfig creates static config files
+func (c *Config) CreateFileStaticConfig(scrapes map[string]Scrape, fileSDDir string) {
+
+	staticFiles := map[string]struct{}{}
+	for _, s := range scrapes {
+		fsc := FileStaticConfig{}
+		if s.NodeInfo == nil {
+			continue
+		}
+		for n := range *s.NodeInfo {
+			tg := TargetGroup{}
+			tg.Targets = []string{fmt.Sprintf("%s:%d", n.Addr, s.ScrapePort)}
+			tg.Labels = map[string]string{}
+			if s.ScrapeLabels != nil {
+				for k, v := range *s.ScrapeLabels {
+					tg.Labels[k] = v
+				}
+			}
+			tg.Labels["node"] = n.Name
+			tg.Labels["service"] = s.ServiceName
+			fsc = append(fsc, &tg)
+		}
+
+		if len(fsc) == 0 {
+			continue
+		}
+
+		fscBytes, err := json.Marshal(fsc)
+		if err != nil {
+			continue
+		}
+		filePath := fmt.Sprintf("%s/%s.json", fileSDDir, s.ServiceName)
+		afero.WriteFile(FS, filePath, fscBytes, 0644)
+		newScrape := &ScrapeConfig{
+			ServiceDiscoveryConfig: ServiceDiscoveryConfig{
+				FileSDConfigs: []*SDConfig{{
+					Files: []string{filePath},
+				}},
+			},
+			JobName: s.ServiceName,
+		}
+		c.ScrapeConfigs = append(c.ScrapeConfigs, newScrape)
+		staticFiles[filePath] = struct{}{}
+	}
+
+	// Remove scrapes that are not in fileStaticServices
+	currentStaticFiles, err := afero.Glob(FS, fmt.Sprintf("%s/*.json", fileSDDir))
+	if err != nil {
+		return
+	}
+	for _, file := range currentStaticFiles {
+		if _, ok := staticFiles[file]; !ok {
+			FS.Remove(file)
+		}
+	}
+}
+
 func normalizeScrapeFile(content []byte) []byte {
 	spaceCnt := 0
 	for i, c := range content {
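
The JSON written by `CreateFileStaticConfig` follows Prometheus's file-based service discovery format: an array of target groups, each with `targets` and `labels`. The following standalone sketch reproduces that serialization; the `buildFileSD` helper and its inputs are illustrative, not the repository's own types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TargetGroup mirrors the shape Prometheus expects in a file_sd JSON file:
// a list of target addresses plus a flat map of labels.
type TargetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels"`
}

// buildFileSD assembles one target group per node, copying the scrape labels
// and then attaching the "node" and "service" labels, as the diff above does.
func buildFileSD(service string, port int, nodes map[string]string, scrapeLabels map[string]string) ([]byte, error) {
	groups := []TargetGroup{}
	for name, addr := range nodes {
		labels := map[string]string{}
		for k, v := range scrapeLabels {
			labels[k] = v
		}
		labels["node"] = name
		labels["service"] = service
		groups = append(groups, TargetGroup{
			Targets: []string{fmt.Sprintf("%s:%d", addr, port)},
			Labels:  labels,
		})
	}
	return json.Marshal(groups)
}

func main() {
	out, err := buildFileSD("exporter_cadvisor", 8080,
		map[string]string{"swarm-1": "10.0.0.2"},
		map[string]string{"env": "prod", "metricType": "system"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Prometheus watches the files listed under `file_sd_configs` and reloads targets when they change, so no restart is needed when a service's nodes or labels change.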
