This repository is part of the Triton Data Center project. See the contribution guidelines and general documentation at the main Triton project page.
Moirai is an HAProxy-based load balancer for Triton.
- Automatic certificate generation via triton-dehydrated
- Automatic configuration of backends
Moirai supports the following metadata keys:

- `cloud.tritoncompute:loadbalancer` - This must be set to `true` and will be used by node-triton and/or CloudAPI at a later date.
- `cloud.tritoncompute:portmap` - This configures the listening-port-to-backend mappings. This is a comma-separated list of service designations. See below for service designation syntax.
- `cloud.tritoncompute:max_rs` - By default, up to 32 backend servers are supported. If you need to scale beyond 32 backend instances, set this to the desired value.
- `cloud.tritoncompute:certificate_name` - Comma-separated list of certificate subjects. The first in the list will be the subject `CN`. The rest of the names will be Subject Alternative Names (SANs).
- `cloud.tritoncompute:metrics_acl` - Space- or comma-separated list of IP prefixes (e.g., `198.51.100.0/24`) that are allowed to access the metrics endpoint.
- `cloud.tritoncompute:metrics_port` - Port number for the metrics endpoint. Defaults to `8405` if not specified.
- `cloud.tritoncompute:syslog` - Remote syslog server endpoint in `HOST:PORT` format (e.g., `syslog.example.com:514` or `10.11.28.101:30514`). When configured, HAProxy will forward logs to this server in addition to local logging.
Metadata keys can be added post-provision. The load balancer will reconfigure itself shortly after the metadata is updated.
All other metadata keys used by Triton are also supported (e.g., `triton.cns.services`, `tritoncli.ssh.proxy`, etc.).
The `cloud.tritoncompute:portmap` metadata key is a list of service designations separated by commas or spaces.

A service designation uses the following syntax:

```
<type>://<listen port>:<backend name>[:<backend port>][{health check params}]
```
- `type` - Must be one of `http`, `https`, `https+insecure`, `https-http`, or `tcp`:
  - `http` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must not use SSL/TLS. An `X-Forwarded-For` header will be added to requests.
  - `https` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must use SSL/TLS. The backend certificate WILL be verified. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided. Otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `https+insecure` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must use SSL/TLS. The backend certificate will NOT be verified. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided. Otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `https-http` - Configures a Layer-7 proxy using the HTTP protocol. The backend server(s) must NOT use SSL/TLS. The front end services will use a certificate issued by Let's Encrypt if the `cloud.tritoncompute:certificate_name` metadata key is also provided. Otherwise, a self-signed certificate will be generated. An `X-Forwarded-For` header will be added to requests.
  - `tcp` - Configures a Layer-4 proxy. The backend can use any port. If SSL/TLS is desired, the backend must configure its own certificate.
- `listen port` - This designates the front end listening port.
- `backend name` - This is a DNS name that must be resolvable. This SHOULD be a CNS name, but can be any fully qualified DNS domain name.
- `backend port` - Optional. This designates the back end port that servers will be listening on. If provided, the back end will be configured to use A record lookups. If not provided, the back end will be configured to use SRV record lookups.
- `health check params` - Optional. JSON-like syntax for configuring health checks (see the Health Check Configuration section below).
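To make the syntax concrete, here is a small shell sketch that splits a designation into its fields using parameter expansion. This is illustrative only: Moirai's real parser lives in `src/lib.rs`, and the variable names here are our own.

```shell
# Illustrative only: split a service designation into its parts.
designation='https://443:api.example.com:8443{check:/status,port:9000}'

type=${designation%%://*}              # type before "://"
rest=${designation#*://}
health=''
case $rest in
  *\{*\}) health=${rest#*\{}; health=${health%\}}; rest=${rest%%\{*} ;;
esac
listen_port=${rest%%:*}                # first field after "://"
rest=${rest#*:}
backend_name=${rest%%:*}               # DNS name of the backend
backend_port=${rest#"$backend_name"}   # optional; empty means SRV lookup
backend_port=${backend_port#:}

echo "$type $listen_port $backend_name ${backend_port:-SRV} {$health}"
```

Running this prints `https 443 api.example.com 8443 {check:/status,port:9000}`; dropping the `:8443` leaves `backend_port` empty, matching the SRV-lookup behavior described above.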
Health checks can be configured using a JSON-like syntax appended to service designations. The parameters are enclosed in curly braces (`{}`) and use comma-separated `key:value` pairs.
- `check` - HTTP endpoint path for health checks (e.g., `/healthz`, `/status`, `/ping`)
- `port` - Port number for health check requests (overrides the backend port)
- `rise` - Number of consecutive successful checks before marking a server as healthy (default: HAProxy default)
- `fall` - Number of consecutive failed checks before marking a server as unhealthy (default: HAProxy default)
{check:/endpoint,port:9000,rise:2,fall:1}
All parameters are optional and can be specified in any order. If `port` is not specified, health checks will use the same port as the backend service.
# HTTP service with health check on same port
http://80:web.example.com:8080{check:/healthz}
# HTTPS service with health check on different port
https://443:api.example.com:8443{check:/status,port:9000}
# TCP service with health check parameters
tcp://3306:db.example.com:3306{check:/ping,rise:3,fall:2}
# Service with all health check parameters
http://80:app.example.com:8080{check:/health,port:8081,rise:5,fall:2}
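These parameters correspond to standard HAProxy health-check directives. A designation such as `http://80:web.example.com:8080{check:/healthz,rise:2,fall:1}` would map roughly to a backend section like the following (a sketch of the idea, not Moirai's exact generated configuration; the backend and server names are invented):

```
backend web_example_com
    option httpchk GET /healthz
    server srv1 web.example.com:8080 check rise 2 fall 1
```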
# Basic HTTP service
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80
# Basic HTTPS service
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443
# Basic TCP service (using SRV records)
tcp://636:my-backend.svc.my-login.us-west-1.cns.example.com
# HTTP service with health check
http://80:my-backend.svc.my-login.us-west-1.cns.example.com:80{check:/healthz}
# HTTPS service with comprehensive health check configuration
https://443:my-backend.svc.my-login.us-west-1.cns.example.com:8443{check:/status,port:9000,rise:3,fall:1}
In order to properly generate a certificate, you must have DNS CNAME records pointing to the load balancer instance's CNS records. See the `triton-dehydrated` documentation for how to properly configure this.
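For example, serving `www.example.com` from a load balancer whose CNS service name is `frontend` would need a CNAME of roughly this shape (the target below is a placeholder; substitute your account UUID and data center's CNS suffix):

```
www.example.com.  300  IN  CNAME  frontend.svc.<account-uuid>.us-central-1.cns.example.com.
```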
If no certificate name is provided in the metadata, a self-signed certificate will be generated automatically.
If the `cloud.tritoncompute:metrics_acl` metadata key is not empty, then the metrics endpoint will be enabled. The ACL must be an IP prefix (e.g., `198.51.100.0/24`). Multiple comma- or space-separated prefixes can be included.
The metrics endpoint listens on port `8405` by default. This can be customized by setting the `cloud.tritoncompute:metrics_port` metadata key to a different port number (must be between 1 and 65534).
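If you set this key from automation, it can be worth validating the value first. A minimal shell sketch (our own helper, not part of Moirai) that enforces the documented range:

```shell
# Hypothetical helper: reject values outside the documented 1-65534 range.
valid_metrics_port() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 65534 ]
}

valid_metrics_port 8405  && echo "8405 ok"
valid_metrics_port 70000 || echo "70000 rejected"
```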
Note: The load balancer will respond to all hosts on the metrics port. Hosts outside of the configured ACL will receive a `403` response. If you want the load balancer to not respond at all, you must also configure Cloud Firewall for the instance.
The load balancer can forward HAProxy logs to a remote syslog server by setting the `cloud.tritoncompute:syslog` metadata key. This is useful for centralized logging and monitoring.

The `cloud.tritoncompute:syslog` value must be in `HOST:PORT` format:

- Example with IP: `10.11.28.101:30514`
- Example with hostname: `syslog.example.com:514`
- When configured, HAProxy will send logs to both local syslog (`127.0.0.1`) and the specified remote syslog server
- The load balancer's hostname will be included in syslog messages (`log-send-hostname` is enabled)
- The metadata accepts both hostnames and IP addresses
- Any non-empty value in `HOST:PORT` format is accepted
- Empty values are ignored
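Since any non-empty `HOST:PORT` value is accepted as-is, a quick pre-flight check before writing the key can catch malformed values. A tiny sketch (ours, not part of Moirai):

```shell
# Hypothetical pre-flight check: split a candidate value into HOST and PORT
# before writing it to the cloud.tritoncompute:syslog metadata key.
endpoint='syslog.example.com:514'
host=${endpoint%:*}    # everything before the last colon
port=${endpoint##*:}   # everything after the last colon

case "$port" in
  ''|*[!0-9]*) echo "rejecting '$endpoint': port is not numeric" ;;
  *)           echo "host=$host port=$port" ;;
esac
```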
The syslog configuration can be updated dynamically without instance restart:
# Add or update syslog endpoint
triton instance metadata update <instance> cloud.tritoncompute:syslog=10.11.28.101:30514
# Remove syslog endpoint
triton instance metadata delete <instance> cloud.tritoncompute:syslog
The load balancer will detect the metadata change and reconfigure HAProxy within approximately one minute.
- Once a named certificate is used, the load balancer instance can't go back to a self-signed certificate. Continue to use the expired certificate or deploy a replacement load balancer.
- The maximum number of backend servers is configurable from 32 up to 1024.
- The application includes failsafes to prevent invalid configurations from being applied.
- `src/lib.rs` - Contains core functionality, data structures, and helper functions
- `src/certificates.rs` - TLS certificate management module
- `src/reconfigure.rs` - Main application entry point (replaces the original `reconfigure` bash script)
# Clone the repository
git clone [email protected]:TritonDataCenter/triton-moirai.git
cd triton-moirai
# Build the project
cargo build
# Run tests
cargo test
# Build for production
cargo build --release
If you need to do a lot of iteration on the dehydrated Let's Encrypt integration, you can add a couple of lines to `dehydrated.cfg` to point it at the staging endpoint:
CA=letsencrypt-test
PREFERRED_CHAIN='(STAGING) Pretend Pear X1'
On the headnode:
sdc-imgadm import -S https://updates.tritondatacenter.com?channel=experimental ${JENKINS_BUILD_UUID?}
# Get your account UUID for CNS names
UUID=$(triton account get | awk '/^id:/{print $2}')
CNS_DOMAIN=us-central-1.cns.mnx.io
REAL_DOMAIN=example.com
IMAGE=cloud-load-balancer
PACKAGE=g1.nano
# Create Backends
triton instance create -t triton.cns.services=web base-64-trunk ${PACKAGE?}
triton instance create -t triton.cns.services=web base-64-trunk ${PACKAGE?}
# Configure Backends
triton instance list -H tag.triton.cns.services=web -o shortid | while read host; do triton ssh $host "pkgin -y in nginx && svcadm enable nginx && hostname > /opt/local/share/examples/nginx/html/hostname.txt && curl http://localhost/hostname.txt" ; done;
# Create Loadbalancer with plain HTTP
triton instance create -w -t triton.cns.services=frontend-plain \
-m cloud.tritoncompute:portmap=http://80:web.svc.${UUID?}.${CNS_DOMAIN?}:80 \
-m cloud.tritoncompute:loadbalancer=true \
-n frontend-plain \
${IMAGE?} ${PACKAGE?}
# Test the load balancer
curl http://frontend-plain.svc.${UUID?}.${CNS_DOMAIN?}/hostname.txt
# Create Loadbalancer with HTTPS but no certificate_name (will use self-signed)
triton instance create -w -t triton.cns.services=frontend-ssl \
-m cloud.tritoncompute:portmap=https-http://443:web.svc.${UUID?}.${CNS_DOMAIN?}:80 \
-m cloud.tritoncompute:loadbalancer=true \
-n frontend-ssl \
${IMAGE?} ${PACKAGE?}
# Test the load balancer (will use self-signed certificate)
curl -k https://frontend-ssl.svc.${UUID?}.${CNS_DOMAIN?}/hostname.txt
This test exercises all three flavors of https proxy (http backend, unverified https, verified https):
# Create Loadbalancer with HTTPS and LetsEncrypt certificate
# Note: You must have proper DNS CNAME records pointing to the load balancer's CNS record
triton instance create -w -t triton.cns.services=frontend \
-m cloud.tritoncompute:portmap="https-http://443:web.svc.${UUID?}.${CNS_DOMAIN?}:80,https+insecure://8443:frontend-ssl.svc.${UUID?}.${CNS_DOMAIN?}:443,https://9443:us-central.manta.mnx.io:443" \
-m cloud.tritoncompute:certificate_name=${REAL_DOMAIN?} \
-m cloud.tritoncompute:loadbalancer=true \
-n frontend \
${IMAGE?} ${PACKAGE?}
# Test the load balancer (will use LetsEncrypt certificate)
curl https://${REAL_DOMAIN?}/hostname.txt
curl https://${REAL_DOMAIN?}:8443/hostname.txt
curl https://${REAL_DOMAIN?}:9443/nshalman/public/hello-world.txt
# Create TCP load balancer (Layer-4 proxy)
triton instance create -w -t triton.cns.services=frontend-tcp \
-m cloud.tritoncompute:portmap="tcp://80:web.svc.${UUID?}.${CNS_DOMAIN?}:80{check:/hostname.txt,rise:2,fall:1}" \
-m cloud.tritoncompute:loadbalancer=true \
-n frontend-tcp \
${IMAGE?} ${PACKAGE?}
# Test the TCP load balancer
curl http://frontend-tcp.svc.${UUID?}.${CNS_DOMAIN?}/hostname.txt
# Create load balancer with remote syslog forwarding
triton instance create -w -t triton.cns.services=frontend-syslog \
-m cloud.tritoncompute:portmap="http://80:web.svc.${UUID?}.${CNS_DOMAIN?}:80" \
-m cloud.tritoncompute:loadbalancer=true \
-m cloud.tritoncompute:syslog=10.11.28.101:30514 \
-n frontend-syslog \
${IMAGE?} ${PACKAGE?}
# Test the load balancer
curl http://frontend-syslog.svc.${UUID?}.${CNS_DOMAIN?}/hostname.txt
# Update syslog configuration dynamically
triton instance metadata update frontend-syslog cloud.tritoncompute:syslog=192.168.1.10:514
# Update syslog to use hostname
triton instance metadata update frontend-syslog cloud.tritoncompute:syslog=syslog.example.com:514
# Remove syslog forwarding
triton instance metadata delete frontend-syslog cloud.tritoncompute:syslog
- DNS Configuration: For Let's Encrypt certificates, ensure you have proper DNS CNAME records pointing to your load balancer's CNS record before creating the instance.
- Certificate Names: Replace `example.com` with your actual domain name when testing Let's Encrypt certificates.
- CNS Names: The `${UUID?}` variable is automatically populated from your account information.
- Self-Signed Certificates: Use the `-k` flag with curl when testing self-signed certificates to skip certificate verification.