
Commit 96cf6d5

Merge pull request #7280 from jddocs/rc-v1.379.0
[Release Candidate] v1.379.0
2 parents 5f32985 + 0f41fd3 commit 96cf6d5

35 files changed: +67 -443 lines changed

docs/guides/akamai/solutions/complete-observability-for-live-stream-events-with-trafficpeak/index.md

Lines changed: 6 additions & 6 deletions
@@ -5,7 +5,7 @@ description: "This guide discusses the requirements and challenges related to im
 authors: ["John Dutton"]
 contributors: ["John Dutton"]
 published: 2024-07-31
-keywords: ['list','of','keywords','and key phrases']
+keywords: ['observability','trafficpeak','compute','object storage','akamai cloud','datastream','logs','data logging']
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 external_resources:
 - '[Akamai Solution Brief: Media TrafficPeak Observability Platform](https://www.akamai.com/resources/solution-brief/trafficpeak-observability-platform)'
@@ -15,7 +15,7 @@ external_resources:
 
 Live streaming events require complete observability in order to deliver a seamless user experience during periods of extreme traffic. Supporting large numbers of concurrent viewers depends on live application and infrastructure insights so that you can troubleshoot issues in real time.
 
-Complete observability for live streams poses multiple challenges, including implementing data logging at each step, logging storage costs, analyzing data, and timely data reporting. This guide discusses these challenges and considerations, how they can be addressed using TrafficPeak, and a high-level architecture review for achieving live stream observability on Akamai Connected Cloud.
+Complete observability for live streams poses multiple challenges, including implementing data logging at each step, logging storage costs, analyzing data, and timely data reporting. This guide discusses these challenges and considerations, how they can be addressed using TrafficPeak, and a high-level architecture review for achieving live stream observability on Akamai.
 
 The architecture diagram in this guide references a workflow used to stream one of the largest ad-supported sporting events in the world, supporting one of the largest concurrent user bases ever with an average of 18 million concurrent viewers. The observability solution implemented via Akamai DataStream and TrafficPeak was able to ingest, store, organize, and display insights into the entire streaming media workflow, while Akamai CDN delivered the event to end users.
 
@@ -31,7 +31,7 @@ TrafficPeak’s observability solution allows you to ingest, store, and analyze
 
 Complete observability means logging each step of the live stream process, including ingesting live camera feeds, content storage, content delivery, ad insertion, and user playback. Doing this on a global scale, for millions of concurrent users, can result in processing billions of logs and large cloud bills in a very short amount of time.
 
-TrafficPeak uses a highly efficient compression algorithm that helps store more logs, for longer, and cheaper - up to 75% less than other observability solutions. And since Linode Object Storage, TrafficPeak, and Akamai CDN are all part of Akamai Connected Cloud, egress costs can also be reduced by up to 100%.
+TrafficPeak uses a highly efficient compression algorithm that helps store more logs, for longer, at lower cost - up to 75% less than other observability solutions. And since Linode Object Storage, TrafficPeak, and Akamai CDN are all part of Akamai, egress costs can also be reduced by up to 100%.
 
 ### Log Analysis
 
@@ -54,12 +54,12 @@ TrafficPeak offers sub-second querying and optimizes log indexing with fully cus
 
 ### Systems and Components
 
-- **Akamai DataStream + TrafficPeak:** Akamai’s complete observability solution. DataStream sends logs from the edge to TrafficPeak on compute and object storage, all while on the Akamai Connected Cloud network.
+- **Akamai DataStream + TrafficPeak:** Akamai’s complete observability solution. DataStream sends logs from the edge to TrafficPeak on compute and object storage running on Akamai Cloud.
 
 - **Akamai CDN:** Akamai’s industry-leading content delivery network used for caching and global delivery.
 
 - **Akamai Media Services Live (MSL):** Low-latency ingest of media content for high-quality live streaming.
 
-- **Linode Object Storage:** Cost-effective object storage used for media and log storage on Akamai Connected Cloud.
+- **Linode Object Storage:** Cost-effective object storage used for media and log storage on Akamai Cloud.
 
-- **Server-Side Ad Insertion (SSAI):** The process of attaching, or stitching, ads to content prior to reaching end-user devices. Ad logs (e.g., ad played and ad interacted) can also be sent to TrafficPeak with the TrafficPeak Video Analytics add-on.
+- **Server-Side Ad Insertion (SSAI):** The process of attaching, or stitching, ads to content prior to reaching end-user devices. Ad logs (e.g., ad played and ad interacted) can also be sent to TrafficPeak with the TrafficPeak Video Analytics add-on.

docs/guides/akamai/solutions/high-performance-kv-store-fintech-akamai/index.md

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@ external_resources:
 
 Fintech and eCommerce services process high volumes of transactions and have demanding requirements for performance, reliability, and resiliency. The data storage size for a given transaction in these services may not be as large as in other industries like media or gaming, but they must adhere to rigorous standards for security, latency, and consistency.
 
-This guide outlines a distributed key-value (KV) storage architecture that supports registration for users between a global fintech service and a banking system. In particular, the data stored represents users' credit card information, and it is encrypted in this storage system. This KV store is implemented with NATS and the JetStream persistence engine, and it is deployed across 11 core compute regions on Akamai Connected Cloud. The system is capable of storing hundreds of millions of keys while guaranteeing low latency for registration requests and a resilient method for quickly publishing and updating key-value data.
+This guide outlines a distributed key-value (KV) storage architecture that supports registration for users between a global fintech service and a banking system. In particular, the data stored represents users' credit card information, and it is encrypted in this storage system. This KV store is implemented with NATS and the JetStream persistence engine, and it is deployed across 11 core compute regions on Akamai Cloud. The system is capable of storing hundreds of millions of keys while guaranteeing low latency for registration requests and a resilient method for quickly publishing and updating key-value data.
 
 ## Distributed KV Store Workflow
 
@@ -58,7 +58,7 @@ For this reason, it is important for these update operations to be propagated qu
 
 ## Distributed Key-Value Store Design Diagram
 
-This solution creates a key-value storage service on Akamai Connected Cloud. The service is composed of a primary storage cluster in one compute region and ten storage leaf nodes installed across ten other compute locations. Akamai Global Traffic Management routes requests from users to these leaf nodes.
+This solution creates a key-value storage service on Akamai Cloud. The service is composed of a primary storage cluster in one compute region and ten storage leaf nodes installed across ten other compute locations. Akamai Global Traffic Management routes requests from users to these leaf nodes.
 
 ![NATS Key-Value Store Design Diagram](nats-kv-store.svg?diagram-description-id=nats-kv-store-design-diagram)
 
@@ -93,4 +93,4 @@ This solution creates a key-value storage service on Akamai Connected Cloud. The
 
 The gateway retrieves the value of the key from the corresponding NATS leaf node.
 
-- **[Akamai Global Traffic Management](https://www.akamai.com/products/global-traffic-management)** is responsible for accepting user requests on the service and routing requests to an available leaf node that provides the lowest latency for the user's location.
+- **[Akamai Global Traffic Management](https://www.akamai.com/products/global-traffic-management)** is responsible for accepting user requests on the service and routing requests to an available leaf node that provides the lowest latency for the user's location.
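
To ground the KV store architecture described above, here is a minimal sketch of creating and querying a replicated, JetStream-backed bucket with the NATS CLI (`nats`). The bucket name, key, and replica count are hypothetical illustrations, not values from the guide's deployment.

```command
# Create a KV bucket backed by JetStream with 3 replicas (hypothetical names)
nats kv add registrations --replicas=3

# Store an encrypted payload under a per-user key
nats kv put registrations user-12345 "<encrypted-card-token>"

# Read the value back; in the architecture above, reads are served by leaf nodes
nats kv get registrations user-12345
```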

docs/guides/akamai/solutions/iot-firmware-upgrades-with-obj-and-cdn/index.md

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 ---
 
 ## Overview
-As more and more consumer electronics join the Internet of Things (IoT), the need to deliver feature and security firmware updates to these devices becomes more critical for IoT device manufacturers. One of the main aspects of delivery manufacturers need to plan for is how much egress data these systems will use. At scale, the price of keeping both consumers and the business happy and secure can be enormous. Using Linode Object Storage on Akamai Connected Cloud as an origin for this data, and connecting that service to Akamai CDN, can provide a huge cost savings over other competing hyperscalers.
+As more and more consumer electronics join the Internet of Things (IoT), the need to deliver feature and security firmware updates to these devices becomes more critical for IoT device manufacturers. One of the main aspects of delivery that manufacturers need to plan for is how much egress data these systems will use. At scale, the price of keeping both consumers and the business happy and secure can be enormous. Using Linode Object Storage on Akamai Cloud as an origin for this data, and connecting that service to Akamai CDN, can provide huge cost savings over competing hyperscalers.
 
 ## Firmware Update Workflow
 1. A manufacturer uploads a new firmware package to an Object Storage bucket.
@@ -29,7 +29,7 @@ An IoT manufacturer found themselves struggling to send OS and firmware updates
 
 Another challenge the IoT manufacturer encountered was supporting more IoT devices worldwide. This resulted in the scale of their firmware delivery service growing in both storage and delivery costs. The IoT manufacturer was looking for a service that could help them save money on egress and improve their bottom line.
 
-**Solution**: Because Linode Object Storage on Akamai Connected Cloud has much lower egress rates than AWS’ offerings, and because it can be set as an origin for Akamai CDN, the IoT manufacturer was not only able to keep file system access to firmware objects, but decrease egress costs by 90%.
+**Solution**: Because Linode Object Storage on Akamai Cloud has much lower egress rates than AWS’ offerings, and because it can be set as an origin for Akamai CDN, the IoT manufacturer was not only able to keep file system access to firmware objects, but also to decrease egress costs by 90%.
 
 ## Architecture
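
As a concrete illustration of step 1 of the firmware update workflow above, here is a minimal sketch of staging a firmware package in an Object Storage bucket with `s3cmd`; the bucket, prefix, and file names are hypothetical.

```command
# Upload a new firmware package to the bucket acting as the CDN origin
s3cmd put firmware-v2.1.0.bin s3://iot-firmware-updates/releases/firmware-v2.1.0.bin

# Confirm the object is in place before devices begin downloading the update
s3cmd ls s3://iot-firmware-updates/releases/
```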

docs/guides/akamai/solutions/preparing-infrastructure-for-high-impact-ad-traffic/index.md

Lines changed: 3 additions & 3 deletions
@@ -1,7 +1,7 @@
 ---
 slug: preparing-infrastructure-for-high-impact-ad-traffic
 title: "Preparing Infrastructure for High-Impact Advertising Traffic on Akamai"
-description: "This guide discusses the infrastructure challenges related to traffic associated with high-impact ad campaigns. It also proposes a reference architecture and strategies used to support surges during high-traffic events on Akamai Connected Cloud."
+description: "This guide discusses the infrastructure challenges related to traffic associated with high-impact ad campaigns. It also proposes a reference architecture and strategies used to support surges during high-traffic events on Akamai."
 authors: ["John Dutton"]
 contributors: ["John Dutton"]
 published: 2024-07-10
@@ -70,7 +70,7 @@ At the edge, Akamai’s [App & API Protector](https://www.akamai.com/products/ap
 
 5. Queue-it waiting rooms allow for user prioritization and enhance the user experience during periods of extremely high traffic.
 
-6. TrafficPeak on Akamai Connected Cloud provides near real-time visualization of event data.
+6. TrafficPeak on Akamai provides near real-time visualization of event data.
 
 7. Akamai routes CDN traffic through designated edge servers. This allows customers to drop traffic from other sources and prevent attackers from bypassing edge-based protection.
 {#high-impact-ad-arch .large-diagram}
@@ -93,4 +93,4 @@ At the edge, Akamai’s [App & API Protector](https://www.akamai.com/products/ap
 
 - **Load Testing (CloudTest):** CloudTest is a load testing tool that lets customers run peak traffic performance testing for environments at scale.
 
-- **TrafficPeak:** Akamai’s managed observability solution. Runs on Akamai Connected Cloud and is comprised of Compute Instances, Object Storage, and a visual Grafana dashboard for near real-time monitoring.
+- **TrafficPeak:** Akamai’s managed observability solution. Runs on Akamai Cloud infrastructure and is composed of Compute Instances, Object Storage, and a visual Grafana dashboard for near real-time monitoring.

docs/guides/applications/big-data/manually-deploy-kafka-cluster/index.md

Lines changed: 11 additions & 1 deletion
@@ -3,8 +3,9 @@ slug: manually-deploy-kafka-cluster
 title: "Manually Deploy an Apache Kafka Cluster on Akamai"
 description: "Learn how to deploy and test a secure Apache Kafka cluster on Akamai using provided, customizable Ansible playbooks."
 authors: ["John Dutton","Elvis Segura"]
-contributors: ["John Dutton","Elvis Segura"]
+contributors: ["John Dutton","Elvis Segura","Philip Tellis","Nathan Melehan"]
 published: 2024-11-20
+modified: 2025-05-29
 keywords: ['apache kafka','kafka','data stream','stream management']
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 external_resources:
@@ -239,6 +240,9 @@ All secrets are encrypted with the Ansible Vault utility as a best practice.
 - `region`: The data center region for the cluster.
 - `image`: The distribution image to be installed on each Kafka instance. The deployment in this guide supports the `ubuntu24.04` image.
 - `group` and `linode_tags` (optional): Any [groups or tags](/docs/guides/tags-and-groups/) you wish to apply to your cluster’s instances for organizational purposes.
+- `firewall_label` (optional): The label for a [Cloud Firewall](https://techdocs.akamai.com/cloud-computing/docs/cloud-firewall) that can be created for the cluster. If this label is not provided, the firewall is not created.
+- `vpc_label` (optional): The label for a [VPC](https://techdocs.akamai.com/cloud-computing/docs/vpc) that can be created for the cluster. If this label is not provided, the VPC is not created.
+- `domain_name` and `ttl_sec` (optional): A domain name and [TTL (in seconds)](https://techdocs.akamai.com/cloud-computing/docs/troubleshooting-dns-records#set-the-time-to-live-or-ttl) for the cluster. Each cluster instance is assigned a subdomain of this domain name. For example, if your domain name is `example.com`, a record named `instance_label.example.com` is created for each instance. If a domain name is not provided, these records are not created.
 - `cluster_size`: The number of Kafka instances in the cluster deployment. Minimum value of 3.
 - `sudo_username`: A sudo username for each cluster instance.
 - `country_name`, `state_or_province_name`, `locality_name`, and `organization_name`: The geographical and organizational information for your self-signed TLS certificate.
@@ -255,6 +259,12 @@ All secrets are encrypted with the Ansible Vault utility as a best practice.
 image: linode/ubuntu24.04
 group:
 linode_tags:
+firewall_label:
+vpc_label:
+
+# Optional settings for DNS
+domain_name:
+ttl_sec:
 
 cluster_size: 3
 client_count: 2
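
To make the new optional settings concrete, here is a sketch of how they might be filled in. Every value below (labels, domain, TTL) is a hypothetical example rather than playbook output; the DNS behavior follows the `domain_name` description above.

```yaml
firewall_label: kafka-fw    # Cloud Firewall created for the cluster
vpc_label: kafka-vpc        # VPC created for the cluster

# Optional settings for DNS
domain_name: example.com    # each instance gets instance_label.example.com
ttl_sec: 3600               # one-hour TTL for the created records
```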

docs/guides/kubernetes/deploy-prometheus-operator-with-grafana-on-lke/index.md

Lines changed: 5 additions & 6 deletions
@@ -150,13 +150,12 @@ In this section, you will create a Helm chart values file and use it to deploy P
 1. Using Helm, deploy a Prometheus Operator release labeled `lke-monitor` in the `monitoring` namespace on your LKE cluster with the settings established in your `values.yaml` file:
 
 ```command
-helm install \
-lke-monitor stable/kube-prometheus-stack\
+helm install lke-monitor kube-prometheus-stack \
+--repo https://prometheus-community.github.io/helm-charts \
 -f ~/lke-monitor/values.yaml \
 --namespace monitoring \
---set grafana.adminPassword=$GRAFANA_ADMINPASSWORD \
---set prometheusOperator.createCustomResource=false \
---repo https://prometheus-community.github.io/helm-charts
+--set grafana.adminPassword="$GRAFANA_ADMINPASSWORD" \
+--set prometheusOperator.createCustomResource=false
 ```
 
 {{< note >}}
@@ -653,4 +652,4 @@ Your monitoring interfaces are now publicly accessible with HTTPS and basic auth
 
 When accessing an interface for the first time, log in as `admin` with the password you configured for [basic auth credentials](#configure-basic-auth-credentials).
 
-When accessing the Grafana interface, you will then log in again as `admin` with the password you exported as `$GRAFANA_ADMINPASSWORD` on your local environment. The Grafana dashboards are accessible at **Dashboards > Manage** from the left navigation bar.
+When accessing the Grafana interface, you will then log in again as `admin` with the password you exported as `$GRAFANA_ADMINPASSWORD` on your local environment. The Grafana dashboards are accessible at **Dashboards > Manage** from the left navigation bar.
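
After the corrected `helm install` above, a quick sanity check is to list the release's pods before moving on; a minimal sketch follows. The Grafana service name assumes the chart's default `<release>-grafana` naming for the `lke-monitor` release and may vary by chart version.

```command
# Confirm the kube-prometheus-stack pods are running in the monitoring namespace
kubectl get pods --namespace monitoring

# Optionally reach Grafana locally before exposing it publicly
kubectl --namespace monitoring port-forward svc/lke-monitor-grafana 3000:80
```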

docs/guides/platform/object-storage/optimizing-obj-bucket-architecture-for-akamai-cdn/index.md

Lines changed: 3 additions & 3 deletions
@@ -5,15 +5,15 @@ description: "This guide discusses design strategies and best practices for opti
 authors: ["Akamai"]
 contributors: ["Akamai"]
 published: 2024-09-27
-keywords: ['object storage','cdn','delivery','linode object storage','akamai cdn','akamai connected cloud','bucket architecture']
+keywords: ['object storage','cdn','delivery','linode object storage','akamai cdn','akamai cloud','bucket architecture']
 license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)'
 external_resources:
 - '[Object Storage Product Documentation](https://techdocs.akamai.com/cloud-computing/docs/object-storage)'
 - '[Akamai Content Delivery Documentation](https://techdocs.akamai.com/platform-basics/docs/content-delivery)'
 - '[Using Object Storage With Akamai CDN](/docs/guides/using-object-storage-with-akamai-cdn/)'
 ---
 
-Linode Object Storage can be an efficient, cost-effective solution for streaming and data delivery applications when used as an origin point for Akamai CDN. Since Object Storage is a part of Akamai Connected Cloud and uses the same backbone as Akamai CDN, egress can also be significantly reduced.
+Linode Object Storage can be an efficient, cost-effective solution for streaming and data delivery applications when used as an origin point for Akamai CDN. Since Object Storage is a part of Akamai Cloud and uses the same backbone as Akamai CDN, egress can also be significantly reduced.
 
 Your Object Storage bucket architecture is critical to performance success. In particular, distributing content across multiple buckets helps with load distribution and CDN optimization, and adds security benefits like segmentation and origin obfuscation. This guide walks through bucket design strategies using a commerce site example, including an optimal bucket architecture for Akamai CDN integration.
 
@@ -88,4 +88,4 @@ Each bucket in your architecture has the ability to serve as a single origin end
 
 ### Relationship To Bucket Design
 
-CDNs can often overcome flaws of poorly architected environments. However, when a bucket architecture is designed well, the benefits can directly translate to the CDN. Object Storage bucket architecture should be designed to be functional, scalable, performant, resilient, and cost-efficient so that Akamai CDN serves your content as effectively as possible.
+CDNs can often overcome flaws of poorly architected environments. However, when a bucket architecture is designed well, the benefits can directly translate to the CDN. Object Storage bucket architecture should be designed to be functional, scalable, performant, resilient, and cost-efficient so that Akamai CDN serves your content as effectively as possible.
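
To ground the multi-bucket strategy for the guide's commerce-site example, here is a minimal sketch of creating segmented buckets with `s3cmd`; the bucket names are hypothetical. Each bucket can then be configured as its own origin endpoint for Akamai CDN.

```command
# Separate buckets per content type for load distribution and origin segmentation
s3cmd mb s3://commerce-product-images
s3cmd mb s3://commerce-static-assets
s3cmd mb s3://commerce-media
```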
