[Netskope] Add Alerts v2 data stream #14443

Open · wants to merge 11 commits into main
125 changes: 124 additions & 1 deletion packages/netskope/_dev/build/docs/README.md
@@ -1,13 +1,15 @@
# Netskope

This integration is for Netskope. It can be used to receive logs sent by [Netskope Cloud Log Shipper](https://docs.netskope.com/en/cloud-exchange-feature-lists.html#UUID-e7c43f4b-8aad-679e-eea0-59ce19f16e29_section-idm4547044691454432680066508785) on respective TCP ports.
This integration is for Netskope. It can be used to receive logs sent by [Netskope Cloud Log Shipper](https://docs.netskope.com/en/cloud-exchange-feature-lists.html#UUID-e7c43f4b-8aad-679e-eea0-59ce19f16e29_section-idm4547044691454432680066508785) and [Netskope Log Streaming](https://docs.netskope.com/en/log-streaming/). To receive logs from Netskope Cloud Log Shipper, use the TCP input; for Netskope Log Streaming, use any of the cloud-based inputs (AWS S3, GCS, or Azure Blob Storage).


The log message is expected to be in JSON format. The data is mapped to
ECS fields where applicable and the remaining fields are written under
`netskope.<data-stream-name>.*`.
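For example, a hypothetical Alerts v2 document (values borrowed from the sample data in this PR, shown purely for illustration) would mix ECS and Netskope-specific fields like this:

```json
{
  "event": {"kind": "alert", "action": "alert"},
  "source": {"ip": "81.2.69.142"},
  "netskope": {
    "alerts_v2": {
      "alert_name": "Web Access Allow",
      "access_method": "Client"
    }
  }
}
```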

## Setup steps

### For receiving logs from Netskope Cloud Log Shipper
1. Configure this integration with the TCP input in Kibana.
2. For all Netskope Cloud Exchange configurations refer to the [Log Shipper](https://docs.netskope.com/en/cloud-exchange-feature-lists.html#UUID-e7c43f4b-8aad-679e-eea0-59ce19f16e29_section-idm4547044691454432680066508785).
3. In Netskope Cloud Exchange, enable Log Shipper and add your Netskope tenant.
@@ -33,6 +35,121 @@ ECS fields where applicable and the remaining fields are written under
> Note: For detailed steps refer to [Configure Log Shipper SIEM Mappings](https://docs.netskope.com/en/configure-log-shipper-siem-mappings.html).
Please make sure to use the given response formats.
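For reference, collecting from Cloud Log Shipper boils down to a plain TCP listener. A minimal Filebeat-style sketch (the listen address is a placeholder; 9021 is the package's default alerts port, noted under Compatibility below):

```yaml
filebeat.inputs:
  - type: tcp
    # Listen Address and Listen Port as configured in the integration policy.
    host: "0.0.0.0:9021"
```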

### For receiving logs from Netskope Log Streaming
1. To configure log streaming, refer to the [Log Streaming Configuration](https://docs.netskope.com/en/configuring-streams) documentation. While configuring, make sure compression is set to GZIP, as other compression types are not supported.

#### Collect data from an AWS S3 bucket

Assuming you already have an AWS S3 bucket set up, follow [these steps](https://docs.netskope.com/en/stream-logs-to-amazon-s3) to enable log streaming from Netskope to that bucket.

#### Collect data from Azure Blob Storage

1. If you already have an Azure storage container set up, you can configure it with Netskope via log streaming.
2. Enable Netskope log streaming by following [these instructions](https://docs.netskope.com/en/stream-logs-to-azure-blob).
3. Configure the integration using either Service Account Credentials or Microsoft Entra ID RBAC with OAuth2. For OAuth2 (Entra ID RBAC), you'll need the Client ID, Client Secret, and Tenant ID. For Service Account Credentials, you'll need either the Service Account Key or the URI to access the data.

- Instructions for setting up the `auth.oauth2` credentials can be found in the Azure documentation [here](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app).
- For more details about the Azure Blob Storage input settings, check the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-azure-blob-storage.html); a minimal configuration sketch follows the note below.

Note:
- The service principal must be granted the appropriate permissions to read blobs. Ensure that the necessary role assignments are in place for the service principal to access the storage resources. For more information, please refer to the [Azure Role-Based Access Control (RBAC) documentation](https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles/storage).
- We recommend assigning either the **Storage Blob Data Reader** or **Storage Blob Data Owner** role. The **Storage Blob Data Reader** role provides read-only access to blob data and is aligned with the principle of least privilege, making it suitable for most use cases. The **Storage Blob Data Owner** role grants full administrative access — including read, write, and delete permissions — and should be used only when such elevated access is explicitly required.
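
As a rough illustration, the underlying `azure-blob-storage` input accepts either authentication style. A minimal sketch, assuming a hypothetical storage account and container name (replace all placeholder values):

```yaml
filebeat.inputs:
  - type: azure-blob-storage
    account_name: examplestorageaccount
    # OAuth2 (Microsoft Entra ID RBAC):
    auth.oauth2:
      client_id: "<client-id>"
      client_secret: "<client-secret>"
      tenant_id: "<tenant-id>"
    # Alternatively, use Service Account Credentials:
    # auth.shared_credentials.account_key: "<account-key>"
    containers:
      - name: netskope-logs
```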

#### Collect data from a GCS bucket

1. If you already have a GCS bucket set up, you can configure it with Netskope via log streaming.
2. Enable Netskope log streaming by following [these instructions](https://docs.netskope.com/en/stream-logs-to-gcp-cloud-storage).
3. Configure the integration with your GCS project ID, bucket name, and Service Account Key or Service Account Credentials File.

For more details about the GCS input settings, check the [Filebeat documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-gcs.html).
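
As a rough sketch of the underlying `gcs` input (the project and bucket names below are placeholders), credentials can be supplied either as a file path or inline:

```yaml
filebeat.inputs:
  - type: gcs
    project_id: my-gcs-project
    # Path to the JSON key file downloaded from the service account:
    auth.credentials_file.path: /etc/filebeat/gcs-credentials.json
    # Or pass the key inline as a JSON string instead:
    # auth.credentials_json.account_key: '{"type": "service_account", ...}'
    buckets:
      - name: netskope-logs
```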

#### The GCS credentials key file

Once you have added a key to your GCP service account, you will get a JSON key file that can only be downloaded once.
If you're new to service account key creation, follow these steps:

1. Make sure you have a service account available; if not, follow the steps below:
   - Navigate to 'APIs & Services' > 'Credentials'.
   - Click 'Create credentials' > 'Service account'.
2. Once the service account is created, navigate to the 'Keys' section and attach/generate your service account key.
3. Make sure to download the JSON key file when prompted.
4. Use this JSON key file either inline (as a JSON string), or by specifying the path to the file on the host machine where the agent is running.

A sample JSON credentials file looks as follows:
```json
{
  "type": "dummy_service_account",
  "project_id": "dummy-project",
  "private_key_id": "dummy-private-key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\nDummyPrivateKey\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "12345678901234567890",
  "auth_uri": "https://dummy-auth-uri.com",
  "token_uri": "https://dummy-token-uri.com",
  "auth_provider_x509_cert_url": "https://dummy-auth-provider-cert-url.com",
  "client_x509_cert_url": "https://dummy-client-cert-url.com",
  "universe_domain": "dummy-universe-domain.com"
}
```


#### Collect data from AWS SQS

1. Make sure you have already set up a connection to push data into an AWS S3 bucket; if not, refer to the section above.
2. To set up an SQS queue, follow "Step 1: Create an Amazon SQS Queue" in the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
   - While creating an access policy, use the bucket name configured to create a connection for AWS S3 in Netskope.
3. Configure event notifications for the S3 bucket by following [these steps](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html).
   - While creating the event notification, select the event type `s3:ObjectCreated:*` and the destination type SQS Queue, then select the queue created in step 2.

For more details about the AWS-S3 input settings, check this [documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html).
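
A minimal sketch of the underlying `aws-s3` input in SQS mode (the queue URL and credentials are placeholders); polling the bucket directly via `bucket_arn` is the alternative used when the SQS toggle is off:

```yaml
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/netskope-alerts-queue
    access_key_id: "<access-key-id>"
    secret_access_key: "<secret-access-key>"
    # Alternative: poll the bucket directly instead of using SQS notifications.
    # bucket_arn: arn:aws:s3:::netskope-alerts-bucket
    # number_of_workers: 5
```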

### Enable the integration in Elastic

1. In Kibana, go to **Management** > **Integrations**.
2. In the "Search for integrations" search bar, type `Netskope`.
3. Select the **Netskope** integration from the search results.
4. Select "Add Netskope" to add the integration.
5. While adding the integration, there are different options to collect logs:

To collect logs via AWS S3 when adding the integration, you must provide the following details:
- Collect logs via S3 Bucket toggled on
- Access Key ID
- Secret Access Key
- Bucket ARN
- Session Token

To collect logs via AWS SQS when adding the integration, you must provide the following details:
- Collect logs via S3 Bucket toggled off
- Queue URL
- Secret Access Key
- Access Key ID

To collect logs via GCS when adding the integration, you must provide the following details:
- Project ID
- Buckets
- Service Account Key/Service Account Credentials File

To collect logs via Azure Blob Storage when adding the integration, you must provide the following details:

- For OAuth2 (Microsoft Entra ID RBAC):
- Toggle on **Collect logs using OAuth2 authentication**
- Account Name
- Client ID
- Client Secret
- Tenant ID
- Container Details

- For Service Account Credentials:
- Service Account Key or the URI
- Account Name
- Container Details


To collect logs via TCP when adding the integration, you must provide the following details:
- Listen Address
- Listen Port
6. Save the integration.

## Compatibility

This package has been tested against `Netskope version 95.1.0.645` and `Netskope Cloud Exchange version 3.4.0`.
@@ -55,6 +172,12 @@ Default port: _9021_

{{event "alerts"}}

### Alerts V2

{{fields "alerts_v2"}}

{{event "alerts_v2"}}

### Events

{{fields "events"}}
5 changes: 5 additions & 0 deletions packages/netskope/changelog.yml
@@ -1,4 +1,9 @@
# newer versions go on top
- version: "2.1.0"
changes:
- description: Add support for Alerts v2 data stream.
type: enhancement
link: https://github.com/elastic/integrations/pull/14443
- version: "2.0.0"
changes:
- description: Change mapping of field `netskope.alerts.breach.date` from `double` to `date`.
@@ -0,0 +1,9 @@
version: '2.3'
services:
  terraform:
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}
      - AWS_DEFAULT_PROFILE=${AWS_DEFAULT_PROFILE}
      - AWS_REGION=${AWS_REGION:-us-east-1}
@@ -0,0 +1,4 @@
_id,access_method,account_name,acked,acting_user,action,activity,act_user,alert,alert_name,severity,alert_source,alert_type,appcategory,appsuite,app,app_session_id,assignee,bcc,browser,browser_session_id,server_bytes,client_bytes,cc,cci,ccl,cloud_provider,breach_id,eeml,breach_score,connection_id,src_country,shared_credential_user,breach_date,policy_name,policy_action,dst_country,dst_geoip_src,dsthost,dstip,dst_location,dstport,dst_region,dst_timezone,dst_zipcode,detection_engine,device,device_classification,device_sn,dlp_file,dlp_fingerprint_classification,dlp_fingerprint_match,dlp_fingerprint_score,dlp_match_info,inline_dlp_match_info,dlp_incident_id,dlp_parent_id,dlp_profile_name,dlp_profile,dlp_rule_count,dlp_rule,dlp_rule_severity,dlp_rule_score,dlp_unique_count,dlp_is_unique_count,dns_profile,domain,driver,conn_duration,encryption_status,conn_endtime,end_time,computer_name,executable_hash,executable_signed,sharedType,file_category,destination_file_directory,file_exposure,file_id,file_md5,destination_file_name,filename,file_origin,file_owner,destination_file_path,file_path,filepath,sha256,file_size,file_type,email_from_user,from_user,app-gdpr-level,usergroup,device_type,hostname,dinsid,incident_id,latest_incident_id,instance_id,instance,instance_name,sanctioned_instance,ip_protocol,dst_latitude,src_latitude,local_md5,local_sha1,local_sha256,loc,location,src_location,dst_longitude,src_longitude,mal_id,malware_id,mal_sev,malware_severity,mal_type,malware_type,managed_app,managementID,vendor_id,md5,message_id,mime_type,tss_mode,product_id,modified_date,src_network,network_session_id,ur_normalized,oauth,object,object_id,owner,object_type,org,organization_unit,os,os_details,os_family,os_user_name,os_version,page,parent_id,owner_pdl,policy_name_enforced,policy,policy_version,pop_id,netskope_pop,port,web_url,connection_type,process_name,process_cert_subject,pid,process_path,publisher_cn,domain_ip,redirect_url,referer,region_name,src_region,region_id,iaas_remediated,iaas_remediation_action,iaas_remediated_by,iaas_remediated_on,req,req_cnt,request_id,resource_category,resource_group,resp,resp_cnt,risk_level_id,sa_profile_name,sa_rule_compliance,sa_rule_name,sa_rule_severity,sender,session_duration,session_number_unique,serverity,severity_level,severity_id,shared_with,shared_domains,tunnel_id,smtp_status,smtp_to,src_geoip_src,srcip,srcport,conn_starttime,start_time,status,subject,tags,telemetry_app,threat_type,timestamp,src_timezone,to_user,numbytes,traffic_type,transaction_id,tss_license,two_factor_auth,type,unc_path,nsdeviceuid,url,user,useragent,user_confidence_index,user_confidence_level,user_id,userip,userkey,violation,site,src_zipcode,account_id,alert_id,appact,audit_type,response_time,email_modified,email_title,subtype,event_uuid,file_cls_encrypted,fllg,file_pdl,local_source_time,server_packets,client_packets,flpp,risk_score,suppression_count,spet,spst,thr,email_user,tur,total_packets,num_users,watchlist_name,custom_attr,record_type
2bebaadf4ac868577ea32140,Endpoint,-,false,-,block,File Share Access,-,yes,CDS TEST,-,-,Device,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,CDS TEST,block,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,Win11-50-1-105,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,[email protected],-,-,-,-,-,-,-,Windows,Microsoft Windows 11 Pro 10.0.22621 64-bit,-,-,-,-,-,-,TEST,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,1747209128,-,-,-,-,-,-,-,endpoint,\Device\Mup\;LanmanRedirector\,-,-,[email protected],-,-,-,-,-,[email protected],-,-,-,-,-,-,-,-,-,-,-,64907a4d-66d6-4a3b-8693-069b206a4479,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,alert
772202b2ea0d6057f886f053,Endpoint,-,false,-,block,Insert,-,yes,BlockEndpoint,-,-,Device,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,BlockEndpoint,block,-,-,-,-,-,-,-,-,-,-,-,MacOs check,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,N49J4M9T3C,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,[email protected],-,-,-,-,-,-,-,macOS,Mac OS X Sonoma 14.7.5 arm64,-,-,-,-,-,-,BlockEndpoint,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,1747134127,-,-,-,-,-,-,-,endpoint,-,-,-,[email protected],-,-,-,-,-,[email protected],-,-,-,-,-,-,-,-,-,-,-,5a5574fd-0083-41c3-996a-81c67e6c45d6,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,alert
eb8fc9903c2fbb6aa05537ff,Client,-,false,-,alert,Edit,-,yes,Web Access Allow,-,-,policy,IT Service/Application Management,Amazon,Amazon Systems Manager,2241753685910532990,-,-,Native,4940241048203471891,-,-,-,92,excellent,-,-,-,-,2631086121425559188,SE,-,-,-,-,SE,-,-,81.2.69.142,Stockholm,443,Stockholm County,Europe/Stockholm,100 04,-,Windows Device,unmanaged,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,ssm.eu-north-1.amazonaws.com,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,Test-IDMHT6TII,-,5254981775376249392,-,202533540828,-,-,-,-,18.0717|59.328699999999998,18.0717|59.328699999999998,-,-,-,-,-,Stockholm,18.0717|59.328699999999998,18.0717|59.328699999999998,-,-,-,-,-,-,no,-,-,-,-,-,-,-,-,-,-,[email protected],-,-,-,-,-,-,-,Windows 11,-,Windows,-,Windows NT 11.0,ssm.eu-north-1.amazonaws.com,-,-,-,Web Access Allow,-,-,SE-STO1,443,-,-,-,-,-,-,-,-,-,-,-,Stockholm County,-,-,-,-,-,-,-,5254981775376249392,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,81.2.69.142,-,-,-,-,-,-,-,-,1747134122,Europe/Stockholm,-,-,CloudApp,5254981775376249392,-,-,nspolicy,-,-,ssm.eu-north-1.amazonaws.com/,[email protected],aws-sdk-go/1.55.5 (go1.23.7; windows; amd64) amazon-ssm-agent/3.3.2299.0,-,-,-,81.2.69.142,[email protected],-,Amazon Systems Manager,100 04,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,-,alert
59 changes: 59 additions & 0 deletions packages/netskope/data_stream/alerts_v2/_dev/deploy/tf/main.tf
@@ -0,0 +1,59 @@
provider "aws" {
region = "us-east-1"
default_tags {
tags = {
environment = var.ENVIRONMENT
repo = var.REPO
branch = var.BRANCH
build = var.BUILD_ID
created_date = var.CREATED_DATE
}
}
}

resource "aws_s3_bucket" "bucket" {
bucket = "elastic-package-netskope-alert-v2-bucket-${var.TEST_RUN_ID}"
}

resource "aws_sqs_queue" "queue" {
name = "elastic-package-netskope-alert-v2-queue-${var.TEST_RUN_ID}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:*:*:elastic-package-netskope-alert-v2-queue-${var.TEST_RUN_ID}",
"Condition": {
"ArnEquals": { "aws:SourceArn": "${aws_s3_bucket.bucket.arn}" }
}
}
]
}
POLICY
}

resource "aws_s3_bucket_notification" "bucket_notification" {
bucket = aws_s3_bucket.bucket.id

queue {
queue_arn = aws_sqs_queue.queue.arn
events = ["s3:ObjectCreated:*"]
}
}

resource "aws_s3_object" "object" {
bucket = aws_s3_bucket.bucket.id
key = "test-alerts-v2.csv.gz"
content_base64 = base64gzip(file("./files/test-alerts-v2.csv"))
content_encoding = "gzip"
content_type = "text/csv"

depends_on = [aws_sqs_queue.queue]
}

output "queue_url" {
value = aws_sqs_queue.queue.url
}
@@ -0,0 +1,26 @@
variable "BRANCH" {
description = "Branch name or pull request for tagging purposes"
default = "unknown-branch"
}

variable "BUILD_ID" {
description = "Build ID in the CI for tagging purposes"
default = "unknown-build"
}

variable "CREATED_DATE" {
description = "Creation date in epoch time for tagging purposes"
default = "unknown-date"
}

variable "ENVIRONMENT" {
default = "unknown-environment"
}

variable "REPO" {
default = "unknown-repo-name"
}

variable "TEST_RUN_ID" {
default = "detached"
}