Update new package README template #2707

Status: Open. Wants to merge 6 commits into base `main`.
8 changes: 8 additions & 0 deletions internal/docs/readme.go
@@ -27,6 +27,11 @@ type ReadmeFile struct {
Error error
}

const (
doNotModifyStr string = `<!-- NOTICE: Do not edit this file manually.-->
<!-- This file is automatically generated by Elastic Package -->`
)

// AreReadmesUpToDate function checks if all the .md readme files are up-to-date.
func AreReadmesUpToDate() ([]ReadmeFile, error) {
packageRoot, err := packages.MustFindPackageRoot()
@@ -204,6 +209,9 @@ func renderReadme(fileName, packageRoot, templatePath string, linksMap linkMap)
}
return linksMap.RenderLink(args[0], options)
},
"header": func(args ...string) (string, error) {
> **Author:** This depends on the user leaving `{{header}}` in the readme in order to add the "Do not modify" string to the generated files.
>
> I could also inject the string into all generated READMEs, regardless of whether this `{{header}}` is in it, but I don't know if that would cause problems with other package types, so I didn't do that. If it won't cause problems, I could change it to add to all generated files.

> **Reviewer:** I definitely have no idea if there are any negative consequences, but I'd vote for the string being generated into all generated files without having to worry about including the header placeholder.

return doNotModifyStr, nil
},
}).ParseFiles(templatePath)
if err != nil {
return nil, fmt.Errorf("parsing README template failed (path: %s): %w", templatePath, err)
112 changes: 60 additions & 52 deletions internal/packages/archetype/_static/package-docs-readme.md.tmpl
@@ -1,84 +1,92 @@
<!-- Use this template language as a starting point, replacing {placeholder text} with details about the integration. -->
<!-- Find more detailed documentation guidelines in https://github.com/elastic/integrations/blob/main/docs/documentation_guidelines.md -->
<!-- This template can be used as a starting point for writing documentation for your new integration. For each section, fill in the details
described in the comments.

# {{.Manifest.Title}}
Find more detailed documentation guidelines in https://www.elastic.co/docs/extend/integrations/documentation-guidelines
-->

<!-- The {{.Manifest.Title}} integration allows you to monitor {name of service}. {name of service} is {describe service}.
<!-- Do not remove header -->
{{ `{{header}}` }}
# {{.Manifest.Title}} Integration for Elastic

Use the {{.Manifest.Title}} integration to {purpose}. Then visualize that data in Kibana, create alerts to notify you if something goes wrong, and reference {data stream type} when troubleshooting an issue.
## Overview

For example, if you wanted to {sample use case} you could {action}. Then you can {visualize|alert|troubleshoot} by {action}. -->
<!-- Complete this section with a short summary of what data this integration collects and what use cases it enables -->
The {{.Manifest.Title}} integration for Elastic enables collection of ...
This integration facilitates ...

## Data streams
### Compatibility

<!-- The {{.Manifest.Title}} integration collects {one|two} type{s} of data streams: {logs and/or metrics}. -->
<!-- Complete this section with information on what 3rd party software or hardware versions this integration is compatible with -->
This integration is compatible with ...

<!-- If applicable -->
<!-- **Logs** help you keep a record of events happening in {service}.
Log data streams collected by the {name} integration include {sample data stream(s)} and more. See more details in the [Logs](#logs-reference). -->
### How it works

<!-- If applicable -->
<!-- **Metrics** give you insight into the state of {service}.
Metric data streams collected by the {name} integration include {sample data stream(s)} and more. See more details in the [Metrics](#metrics-reference). -->
<!-- Add a high-level overview of how this integration works. For example, does it collect data from API calls or receive data from a network or file? -->

<!-- Optional: Any additional notes on data streams -->
## What data does this integration collect?

## Requirements
<!-- Complete this section with information on what types of data the integration collects, and link to reference documentation if available -->
The {{.Manifest.Title}} integration collects log messages of the following types:
* ...

You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it.
You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.
### Supported use cases

<!--
Optional: Other requirements including:
* System compatibility
* Supported versions of third-party products
* Permissions needed
* Anything else that could block a user from successfully using the integration
-->
<!-- Add details on the use cases that can be enabled by using this integration. Explain why a user would want to install and use this integration. -->

## What do I need to use this integration?

## Setup
Elastic Agent must be installed. For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). You can install only one Elastic Agent per host.

<!-- Any prerequisite instructions -->
Elastic Agent is required to stream data from the syslog or log file receiver and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines.

For step-by-step instructions on how to set up an integration, see the
[Getting started](https://www.elastic.co/guide/en/welcome-to-elastic/current/getting-started-observability.html) guide.
<!-- List any other vendor-specific prerequisites needed before starting to install the integration. -->

<!-- Additional set up instructions -->
## How do I deploy this integration?

<!-- If applicable -->
<!-- ## Logs reference -->
### Onboard / configure

<!-- List the steps that will need to be followed in order to completely set up a working integration.
For integrations that support multiple input types, be sure to add steps for all inputs.
-->

<!-- Repeat for each data stream of the current type -->
<!-- ### {Data stream name}
### Validation

The `{data stream name}` data stream provides events from {source} of the following types: {list types}. -->
<!-- How can the user test whether the integration is working? Including example commands or test files if applicable -->

<!-- Optional -->
<!-- #### Example
## Troubleshooting

For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems).

<!-- Add any vendor specific troubleshooting here.

Are there common issues or “gotchas” for deploying this integration? If so, how can they be resolved?
If applicable, links to the third-party software’s troubleshooting documentation.
-->

An example event for `{data stream name}` looks as following:
## Scaling

{code block with example} -->
For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation.

<!-- #### Exported fields
<!-- Add any vendor specific scaling information here -->

{insert table} -->
## Reference

<!-- If applicable -->
<!-- ## Metrics reference -->
### ECS field Reference

<!-- Repeat for each data stream of the current type -->
<!-- ### {Data stream name}
{{ `{{fields}}` }}

The `{data stream name}` data stream provides events from {source} of the following types: {list types}. -->
### Sample Event

<!-- Optional -->
<!-- #### Example
{{ `{{event}}` }}

An example event for `{data stream name}` looks as following:
### Inputs used

{code block with example} -->
<!-- List inputs used in this integration, and link to the documentation -->
These inputs can be used with this integration:
* ...

<!-- #### Exported fields
### API usage

{insert table} -->
<!-- For integrations that use APIs to collect data, document all the APIs that are used, and link to relevant information -->
These APIs are used with this integration:
* ...
3 changes: 3 additions & 0 deletions internal/packages/archetype/_static/package-sample-event.json.tmpl
@@ -0,0 +1,3 @@
{
"description": "This is an example sample-event for {{.Manifest.Title}}. Replace it with a real sample event. Hint: If system tests exist, running `elastic-package test system --generate` will generate this file."
}
16 changes: 16 additions & 0 deletions internal/packages/archetype/package.go
@@ -54,6 +54,14 @@ func createPackageInDir(packageDescriptor PackageDescriptor, cwd string) error {
return fmt.Errorf("can't render package README: %w", err)
}

if packageDescriptor.Manifest.Type == "integration" {
logger.Debugf("Write docs readme to _dev")
err = renderResourceFile(packageDocsReadme, &packageDescriptor, filepath.Join(baseDir, "_dev", "build", "docs", "README.md"))
if err != nil {
return fmt.Errorf("can't render package README in _dev: %w", err)
}
}

if license := packageDescriptor.Manifest.Source.License; license != "" {
logger.Debugf("Write license file")
err = licenses.WriteTextToFile(license, filepath.Join(baseDir, "LICENSE.txt"))
@@ -78,6 +86,14 @@ func createPackageInDir(packageDescriptor PackageDescriptor, cwd string) error {
return fmt.Errorf("can't render sample screenshot: %w", err)
}

if packageDescriptor.Manifest.Type == "integration" {
logger.Debugf("Write sample event file")
err = renderResourceFile(packageSampleEvent, &packageDescriptor, filepath.Join(baseDir, "sample_event.json"))
if err != nil {
return fmt.Errorf("can't render sample event: %w", err)
}
}

if packageDescriptor.Manifest.Type == "input" {
logger.Debugf("Write base fields")
err = renderResourceFile(fieldsBaseTemplate, &packageDescriptor, filepath.Join(baseDir, "fields", "base-fields.yml"))
3 changes: 3 additions & 0 deletions internal/packages/archetype/resources.go
@@ -33,6 +33,9 @@ var packageImgSampleIcon []byte
//go:embed _static/sampleScreenshot.png.b64
var packageImgSampleScreenshot string

//go:embed _static/package-sample-event.json.tmpl
var packageSampleEvent string

// Input Package templates

//go:embed _static/input-package-agent-config.yml.tmpl
133 changes: 114 additions & 19 deletions test/packages/parallel/apache/_dev/build/docs/README.md
@@ -1,34 +1,129 @@
# Apache Integration
{{ `{{header}}` }}
# Apache Integration for Elastic

This integration periodically fetches metrics from [Apache](https://httpd.apache.org/) servers. It can parse access and error
logs created by the Apache server.
## Overview

## Compatibility
The Apache integration for Elastic enables collection of access logs, error logs and metrics from [Apache](https://httpd.apache.org/) servers.
This integration facilitates performance monitoring, understanding traffic patterns and gaining security insights from
your Apache servers.

The Apache datasets were tested with Apache 2.4.12 and 2.4.46 and are expected to work with
all versions >= 2.2.31 and >= 2.4.16 (independent from operating system).
### Compatibility

## Logs
This integration is compatible with Apache server versions >= 2.2.31 and >= 2.4.16.

### Access Logs
### How it works

Access logs collects the Apache access logs.
The access and error log data streams read from log files. You will configure the Apache server to write log files to a location readable by Elastic Agent, and
Elastic Agent will monitor and ingest events written to these files.

{{fields "access"}}
## What data does this integration collect?

### Error Logs

Error logs collects the Apache error logs.
The Apache HTTP Server integration collects log messages of the following types:

{{fields "error"}}
* [Error logs](https://httpd.apache.org/docs/current/logs.html#errorlog)
* [Access logs](https://httpd.apache.org/docs/current/logs.html#accesslog)
* Status metrics

## Metrics
### Supported use cases

### Status Metrics
The Apache integration for Elastic allows you to collect, parse, and analyze Apache web server logs and metrics within the Elastic Stack.
It centralizes data like access logs, error logs, and performance metrics, transforming unstructured information into a structured, searchable format.
This enables users to monitor performance, troubleshoot issues, gain security insights, and understand website traffic patterns through powerful visualizations in Kibana.
Ultimately, it simplifies the management and analysis of critical data for maintaining a healthy and secure web infrastructure.

The server status stream collects data from the Apache Status module. It scrapes the status data from the web page
generated by the `mod_status` module.
## What do I need to use this integration?

{{event "status"}}
Elastic Agent must be installed. For more details, check the Elastic Agent [installation instructions](docs-content://reference/fleet/install-elastic-agents.md). You can install only one Elastic Agent per host.

{{fields "status"}}
Elastic Agent is required to stream data from the syslog or log file receiver and ship the data to Elastic, where the events will then be processed via the integration's ingest pipelines.


## How do I deploy this integration?

### Onboard / configure

#### Collect access and error logs

Follow the [Apache server instructions](https://httpd.apache.org/docs/2.4/logs.html) to write error or access logs to a location readable by Elastic Agent.
Elastic Agent can run on the same system as your Apache server, or the logs can be forwarded to a different system running Elastic Agent.

#### Collect metrics

Follow the [Apache server instructions](https://httpd.apache.org/docs/2.4/mod/mod_status.html) to enable the Status module.

#### Enable the integration in Elastic

1. In Kibana navigate to **Management** > **Integrations**.
2. In the search bar, type **Apache HTTP Server**.
3. Select the **Apache HTTP Server** integration and add it.
4. If needed, install Elastic Agent on the systems which will receive error or access log files.
5. Enable and configure only the collection methods which you will use.

* **To collect logs from Apache instances**, you'll need to add the log file path patterns that Elastic Agent will monitor.

* **To collect metrics**, you'll need to configure the Apache hosts which will be monitored.

6. Press **Save Integration** to begin collecting data.

#### Anomaly Detection Configurations

The Apache HTTP Server integration also supports anomaly detection jobs.

These anomaly detection jobs are available in the Machine Learning app in Kibana
when you have data that matches the query specified in the
[manifest](https://github.com/elastic/integrations/blob/main/packages/apache/kibana/ml_module/apache-Logs-ml.json#L11).

##### Apache Access Logs

Find unusual activity in HTTP access logs.

| Job | Description |
|---|---|
| visitor_rate_apache | HTTP Access Logs: Detect unusual visitor rates |
| status_code_rate_apache | HTTP Access Logs: Detect unusual status code rates |
| source_ip_url_count_apache | HTTP Access Logs: Detect unusual source IPs - high distinct count of URLs |
| source_ip_request_rate_apache | HTTP Access Logs: Detect unusual source IPs - high request rates |
| low_request_rate_apache | HTTP Access Logs: Detect low request rates |

### Validation

<!-- How can the user test whether the integration is working? Including example commands or test files if applicable -->
The "[Logs Apache] Access and error logs" and "[Metrics Apache] Overview" dashboards will show activity for your Apache servers.
After the integration is installed, view these dashboards in Kibana and verify that information for servers is shown.

## Troubleshooting

For help with Elastic ingest tools, check [Common problems](https://www.elastic.co/docs/troubleshoot/ingest/fleet/common-problems).

<!-- Add any vendor specific troubleshooting here.

Are there common issues or “gotchas” for deploying this integration? If so, how can they be resolved?
If applicable, links to the third-party software’s troubleshooting documentation.
-->

## Scaling

For more information on architectures that can be used for scaling this integration, check the [Ingest Architectures](https://www.elastic.co/docs/manage-data/ingest/ingest-reference-architectures) documentation.

## Reference

### ECS field Reference

{{ `{{fields}}` }}

### Sample Event

{{ `{{event}}` }}

### Inputs used

These inputs can be used with this integration:
* [logfile](https://www.elastic.co/docs/reference/integrations/filestream)
* [apache/metrics](https://www.elastic.co/docs/reference/beats/metricbeat/metricbeat-metricset-apache-status)

### API usage

These APIs are used with this integration:
* Metrics are collected using the [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html) Apache module.