A Go-based Redis pub/sub event processor for rendering Tidbyt Pixlet applications. This service receives render requests via Redis channels, processes them using Pixlet, and returns results through device-specific Redis channels.
- Redis Streams + Pub/Sub: Consumes render requests from a Redis Stream and publishes results via Pub/Sub (lightweight, high-performance)
- Pixlet Processing: Renders Tidbyt applications using the Pixlet engine
- Redis Caching: Distributed caching layer with app/device scoped keys
- 12-Factor App: Environment-based configuration following 12-factor principles
- Security: Non-root container user with read-only filesystem access
- Graceful Shutdown: Proper signal handling and cleanup
- Structured Logging: JSON-structured logging with Zap
- Health Checks: Container and service health monitoring
API → Redis Stream: "matrx:render_requests" → [Consumer Group] → MATRX Renderer (scalable) → Pixlet → Redis Pub/Sub: "device:{device_id}" → Device
The service uses a hybrid Redis architecture optimized for both work distribution and real-time delivery:
- Stream: matrx:render_requests
- Consumer Group: Enables horizontal scaling with automatic load balancing
- Features:
  - Multiple render workers can consume from the same stream
  - Consumer groups deliver each message to only one worker at a time
  - Messages persist until explicitly acknowledged (XACK)
  - Failed messages can be retried or moved to a dead letter queue
  - Perfect for distributed work processing
- Channels: device:{device_id} (per-device channels)
- Features:
  - Instant delivery to subscribed devices
  - Simple, fast, ephemeral messages
  - No message backlog or cleanup needed
  - Perfect for real-time device control
- Device/API publishes a render request to the matrx:render_requests stream
- Render worker consumes the message from the stream using the consumer group
- Worker validates and processes the request using Pixlet
- Worker publishes the result to the device-specific device:{device_id} pub/sub channel
- Device receives the result instantly via its pub/sub subscription
- Worker acknowledges the message in the stream (XACK)
- Errors are handled gracefully with proper logging and empty result responses (a worker-side sketch follows below)
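The flow above can be sketched end-to-end on the worker side with go-redis. This is a minimal illustration, not the service's actual implementation: renderApp, the worker-1 consumer name, and the hard-coded device:CN channel are placeholders.

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// renderApp stands in for the real Pixlet render step.
func renderApp(payload string) (string, error) {
    return "base64-encoded-webp-data", nil
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Create the consumer group if it does not exist yet (a BUSYGROUP error here is harmless).
    _ = rdb.XGroupCreateMkStream(ctx, "matrx:render_requests", "matrx-renderer-group", "0").Err()

    for {
        // Block until a message is assigned to this consumer.
        streams, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
            Group:    "matrx-renderer-group",
            Consumer: "worker-1",
            Streams:  []string{"matrx:render_requests", ">"},
            Count:    1,
            Block:    5 * time.Second,
        }).Result()
        if err == redis.Nil {
            continue // no new messages within the block timeout
        }
        if err != nil {
            fmt.Println("read error:", err)
            continue
        }

        for _, stream := range streams {
            for _, msg := range stream.Messages {
                payload, _ := msg.Values["payload"].(string)
                result, err := renderApp(payload) // validate + render with Pixlet
                if err != nil {
                    fmt.Println("render error:", err)
                }
                // The real worker derives device:{device_id} from the request payload;
                // device:CN is just the example device used elsewhere in this README.
                rdb.Publish(ctx, "device:CN", result)
                rdb.XAck(ctx, "matrx:render_requests", "matrx-renderer-group", msg.ID)
            }
        }
    }
}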
Alongside the Redis worker pipeline, the renderer exposes a lightweight HTTP API that mirrors the same schema-driven workflows:
- GET /health – simple service heartbeat.
- GET /apps and GET /apps/{id} – enumerate loaded Pixlet apps from the registry.
- GET /apps/{id}/schema – retrieve the Pixlet app schema; POST /apps/{id}/schema validates a configuration. Send the configuration object at the JSON root (no nested config wrapper). The response includes normalized defaults plus structured field errors.
- POST /apps/{id}/render – validates the provided configuration and returns a JSON payload containing the base64-encoded WebP render output along with the normalized config. Optional query parameters width, height, and device_id control rendering dimensions (defaults 64×32) and logging metadata.
- GET /apps/{id}/preview.webp / GET /apps/{id}/preview.gif – render previews using schema defaults (no request body) and stream the binary WebP or GIF response. Use the optional width and height query parameters to override device dimensions.
These HTTP utilities are ideal for local testing, schema validation, or generating previews without publishing into Redis.
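For example, the preview endpoint can be exercised with a few lines of Go. This sketch assumes the service is running locally on the default port 8080 and that a clock app is loaded.

package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
)

func main() {
    // Render the "clock" app with schema defaults at 64x32 and save the WebP.
    url := "http://localhost:8080/apps/clock/preview.webp?width=64&height=32"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        panic(fmt.Sprintf("unexpected status: %s", resp.Status))
    }

    img, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    if err := os.WriteFile("clock-preview.webp", img, 0o644); err != nil {
        panic(err)
    }
    fmt.Println("wrote clock-preview.webp,", len(img), "bytes")
}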
All configuration is done via environment variables:
- REDIS_URL: Redis connection string (default: redis://localhost:6379)
- REDIS_ADDR: Alternative Redis address format (default: localhost:6379)
- REDIS_PASSWORD: Redis password (default: empty)
- REDIS_DB: Redis database number (default: 0)
- REDIS_CONSUMER_GROUP: Consumer group name for streams (default: matrx-renderer-group)
- REDIS_CONSUMER_NAME: Consumer name (auto-generated if not provided: {hostname}-{timestamp})
- SERVER_PORT: HTTP port for health checks (default: 8080)
- SERVER_READ_TIMEOUT: Read timeout in seconds (default: 10)
- SERVER_WRITE_TIMEOUT: Write timeout in seconds (default: 10)
- PIXLET_APPS_PATH: Path to Pixlet apps directory (default: /opt/apps)
App Directory Structure: Apps are organized in nested directories as /opt/apps/{app_id}/{app_id}.star. The Docker build automatically downloads apps from the matrx-apps repository.
- REDIS_ADDR: Redis server address (default: localhost:6379)
- REDIS_PASSWORD: Redis password (optional)
- REDIS_DB: Redis database number (default: 0)
- LOG_LEVEL: Log level (default: info)
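For illustration, a minimal sketch of reading this kind of environment-based configuration with defaults in Go; the getenv helper is hypothetical and not the service's actual config loader.

package main

import (
    "fmt"
    "os"
)

// getenv returns the environment variable's value, or a default when it is unset.
func getenv(key, fallback string) string {
    if v := os.Getenv(key); v != "" {
        return v
    }
    return fallback
}

func main() {
    redisAddr := getenv("REDIS_ADDR", "localhost:6379")
    consumerGroup := getenv("REDIS_CONSUMER_GROUP", "matrx-renderer-group")
    appsPath := getenv("PIXLET_APPS_PATH", "/opt/apps")
    logLevel := getenv("LOG_LEVEL", "info")

    fmt.Println(redisAddr, consumerGroup, appsPath, logLevel)
}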
The renderer supports both in-memory and Redis-based caching:
- In-Memory Cache: Used by default when no Redis configuration is provided
- Redis Cache: Automatically enabled when REDIS_ADDR is configured
- Cache Scoping: Keys are scoped as /{applet_id}/{device_id}/{key_name}
- TTL Support: Configurable time-to-live for cached values
For detailed Redis cache configuration and usage, see REDIS_CACHE.md.
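To make the key scoping concrete, here is a small go-redis sketch that writes and reads a scoped cache entry; the clock/CN/last_render values and the 5-minute TTL are illustrative only.

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Keys are scoped as /{applet_id}/{device_id}/{key_name}.
    key := fmt.Sprintf("/%s/%s/%s", "clock", "CN", "last_render")

    // Write with a TTL so stale entries expire automatically.
    if err := rdb.Set(ctx, key, "cached-value", 5*time.Minute).Err(); err != nil {
        panic(err)
    }

    val, err := rdb.Get(ctx, key).Result()
    if err != nil {
        panic(err)
    }
    fmt.Println(key, "=>", val)
}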
Pixlet apps are organized in a nested directory structure within the apps path:
/opt/apps/
├── clock/
│ └── clock.star
├── weather/
│ └── weather.star
└── news/
└── news.star
Each app must:
- Be in its own directory named after the app ID
- Contain a .star file with the same name as the directory
- Follow the Pixlet app structure and conventions
The Docker build process automatically downloads apps from the koiosdigital/matrx-apps repository during image creation.
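For illustration, a sketch of how an app registry might discover this layout at startup; the discoverApps helper is hypothetical and not the service's actual loader.

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// discoverApps returns the app IDs found under appsPath, expecting
// the layout {appsPath}/{app_id}/{app_id}.star.
func discoverApps(appsPath string) ([]string, error) {
    entries, err := os.ReadDir(appsPath)
    if err != nil {
        return nil, err
    }
    var ids []string
    for _, e := range entries {
        if !e.IsDir() {
            continue
        }
        id := e.Name()
        star := filepath.Join(appsPath, id, id+".star")
        if _, err := os.Stat(star); err == nil {
            ids = append(ids, id)
        }
    }
    return ids, nil
}

func main() {
    ids, err := discoverApps("/opt/apps")
    if err != nil {
        panic(err)
    }
    fmt.Println("loaded apps:", ids)
}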
To send a render request, publish to the matrx:render_requests Redis Stream:
Using Redis CLI:
redis-cli XADD matrx:render_requests * payload '{"type":"render_request","uuid":"req-123","app_id":"clock","device":{"id":"CN","width":64,"height":32},"params":{"timezone":"America/New_York"}}'

Using redis-py (Python):
import redis
import json
r = redis.Redis(host='localhost', port=6379, db=0)
request = {
"type": "render_request",
"uuid": "req-123",
"app_id": "clock",
"device": {
"id": "CN",
"width": 64,
"height": 32
},
"params": {
"timezone": "America/New_York"
}
}
r.xadd('matrx:render_requests', {'payload': json.dumps(request)})

Using go-redis (Go):
import (
    "context"
    "encoding/json"

    "github.com/redis/go-redis/v9"
)

ctx := context.Background()
rdb := redis.NewClient(&redis.Options{
    Addr: "localhost:6379",
})
request := map[string]interface{}{
"type": "render_request",
"uuid": "req-123",
"app_id": "clock",
"device": map[string]interface{}{
"id": "CN",
"width": 64,
"height": 32,
},
"params": map[string]string{
"timezone": "America/New_York",
},
}
payload, _ := json.Marshal(request)
rdb.XAdd(ctx, &redis.XAddArgs{
    Stream: "matrx:render_requests",
    Values: map[string]interface{}{"payload": string(payload)},
})

Request payload format:

{
  "type": "render_request",
  "uuid": "unique-request-id",
  "app_id": "clock",
  "device": {
    "id": "device-uuid-or-string",
    "width": 64,
    "height": 32
  },
  "params": {
    "timezone": "America/New_York",
    "format": "12h"
  }
}

Results are published to device-specific pub/sub channels: device:{device_id}
Subscribe to results:
# Redis CLI
redis-cli SUBSCRIBE device:CN
# Python
import redis
r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe('device:CN')
for message in pubsub.listen():
    print(message)

Result payload:
{
  "type": "render_result",
  "uuid": "unique-request-id",
  "device_id": "device-uuid-or-string",
  "app_id": "clock",
  "render_output": "base64-encoded-webp-data",
  "processed_at": "2025-08-12T10:30:05Z"
}

Note: On error, the service logs the error to the console.
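As a Go counterpart to the snippets above, here is a sketch of a subscriber that decodes the result payload; it assumes the JSON structure shown above and writes the decoded WebP to disk.

package main

import (
    "context"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "os"

    "github.com/redis/go-redis/v9"
)

type renderResult struct {
    Type         string `json:"type"`
    UUID         string `json:"uuid"`
    DeviceID     string `json:"device_id"`
    AppID        string `json:"app_id"`
    RenderOutput string `json:"render_output"`
    ProcessedAt  string `json:"processed_at"`
}

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    sub := rdb.Subscribe(ctx, "device:CN")
    defer sub.Close()

    for msg := range sub.Channel() {
        var res renderResult
        if err := json.Unmarshal([]byte(msg.Payload), &res); err != nil {
            fmt.Println("decode error:", err)
            continue
        }
        // render_output is base64-encoded WebP data.
        img, err := base64.StdEncoding.DecodeString(res.RenderOutput)
        if err != nil {
            fmt.Println("base64 error:", err)
            continue
        }
        _ = os.WriteFile(res.UUID+".webp", img, 0o644)
        fmt.Println("saved render for", res.AppID, "-", len(img), "bytes")
    }
}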
The service uses a simple, dynamic routing scheme on top of Redis:
- Input stream: matrx:render_requests
  - All render requests are sent to this single stream
  - Render workers consume it through the matrx-renderer-group consumer group
- Result channels: device:{device_id} (e.g., device:device-123)
  - Each device gets its own pub/sub channel for result isolation
  - Channels need no provisioning; results are simply published when ready
This design allows:
- Multiple renderer instances to consume from the same input stream
- Device-specific result routing for proper message isolation
- Automatic scaling based on the stream backlog
The MATRX renderer is designed for horizontal scaling with multiple instances:
- Fair Load Distribution: Each instance processes only one message at a time.
- Message Safety: Manual acknowledgment ensures messages are only removed after successful processing
- Automatic Failover: Messages from failed instances remain pending in the consumer group and can be claimed by healthy instances
- Instance Identification: Each consumer has a unique consumer name for monitoring and debugging
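The failover behaviour can also be driven explicitly with XAUTOCLAIM. Below is a sketch in which a healthy worker claims messages left pending for more than a minute; the worker-2 consumer name, idle threshold, and count are illustrative.

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Claim messages that have sat unacknowledged for more than a minute,
    // reassigning them to this consumer so they are not lost.
    msgs, _, err := rdb.XAutoClaim(ctx, &redis.XAutoClaimArgs{
        Stream:   "matrx:render_requests",
        Group:    "matrx-renderer-group",
        Consumer: "worker-2",
        MinIdle:  time.Minute,
        Start:    "0-0",
        Count:    10,
    }).Result()
    if err != nil {
        panic(err)
    }
    for _, m := range msgs {
        fmt.Println("reclaimed", m.ID) // re-process, then XACK as usual
    }
}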
- Start with 1-2 instances and monitor the stream backlog
- Scale up when the backlog of pending messages keeps growing or processing latency exceeds your target
- Monitor CPU/memory usage per instance - rendering is CPU-intensive
- Use container orchestration (Kubernetes, Docker Swarm) for automatic scaling
apiVersion: apps/v1
kind: Deployment
metadata:
  name: matrx-renderer
spec:
  replicas: 3 # Start with 3 instances
  selector:
    matchLabels:
      app: matrx-renderer
  template:
    metadata:
      labels:
        app: matrx-renderer
    spec:
      containers:
        - name: renderer
          image: matrx-renderer:latest
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
            limits:
              cpu: 1000m
              memory: 1Gi

The renderer is designed for horizontal scaling using Redis Streams consumer groups:
- Automatic Load Balancing: Redis consumer groups distribute messages across all instances
- No Configuration Changes: Simply increase replica count - each instance auto-registers
- Single-Worker Processing: Consumer groups ensure each message is dispatched to only one worker at a time
- Fault Tolerance: Failed instances don't lose messages - they can be reassigned to healthy workers
Scaling Example:
# Scale to 5 instances
kubectl scale deployment matrx-renderer --replicas=5
# Scale based on CPU usage
kubectl autoscale deployment matrx-renderer --cpu-percent=70 --min=3 --max=10

Monitor these metrics for scaling decisions:
- Stream pending messages: XPENDING matrx:render_requests
- Consumer group lag: Messages not yet acknowledged
- Message processing rate per instance
- CPU/Memory usage per instance
- Error rates and failed message counts
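These metrics can also be pulled programmatically; here is a small go-redis sketch that reports the total pending count and the per-consumer breakdown (output format is illustrative).

package main

import (
    "context"
    "fmt"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Summary form of XPENDING: total pending count plus per-consumer counts.
    pending, err := rdb.XPending(ctx, "matrx:render_requests", "matrx-renderer-group").Result()
    if err != nil {
        panic(err)
    }
    fmt.Println("pending messages:", pending.Count)
    for consumer, count := range pending.Consumers {
        fmt.Printf("  %s: %d\n", consumer, count)
    }
}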
- Go 1.21+
- Docker and Docker Compose
- Pixlet CLI tool
- Clone the repository
- Copy the environment file: cp .env.example .env
- Install dependencies: go mod download
- Start Redis (for example: docker run -d -p 6379:6379 redis)
- Create the apps directory: mkdir -p apps
- Add your Pixlet .star files to the apps directory
- Run the service: go run cmd/server/main.go
- Build and run with Docker Compose: docker-compose up --build
- Connect to Redis at localhost:6379 (for example with redis-cli) to inspect streams and channels
Run tests with:
go test ./...

Build the image:

docker build -t matrx-renderer .

The application is designed to work well in Kubernetes with:
- ConfigMaps for configuration
- Secrets for sensitive data
- ReadOnlyRootFilesystem security context
- Resource limits and requests
- Health check endpoints
- Non-root user: Container runs as user ID 1001
- Read-only filesystem: Apps directory mounted read-only
- Path validation: Prevents directory traversal attacks
- Input sanitization: Validates configuration parameters
- Minimal attack surface: Alpine-based minimal container
- No shell access: User has no shell (/sbin/nologin)
The service provides:
- Health check endpoint for container orchestration
- Structured JSON logging for log aggregation
- Error tracking with correlation IDs
- Performance metrics through logging
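As an illustration of the logging style, a minimal Zap sketch that emits a JSON log line correlated with a render request; the field names here are examples, not the service's exact log schema.

package main

import (
    "go.uber.org/zap"
)

func main() {
    // Production config emits JSON logs suitable for aggregation.
    logger, err := zap.NewProduction()
    if err != nil {
        panic(err)
    }
    defer logger.Sync()

    // Correlate log lines with the originating render request.
    logger.Info("render completed",
        zap.String("request_uuid", "req-123"),
        zap.String("app_id", "clock"),
        zap.String("device_id", "CN"),
        zap.Int("output_bytes", 4096),
    )
}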
MIT License - see LICENSE file for details