A high-performance TypeScript backend for monitoring and analyzing Xandeum pNodes. Built with Fastify, PostgreSQL, Redis, and WebSocket support for real-time updates.
- pNode Discovery: Automatically discovers pNodes from the Xandeum gossip network
- Pod Credits Integration: Fetches and tracks credits from podcredits.xandeum.network
- Credits Leaderboard: Ranks pNodes by credits with historical tracking
- Uptime Tracking: Calculates and stores uptime percentage for each pNode
- Real-time Monitoring: WebSocket-based live updates for pNode status changes
- Historical Data: Stores and queries time-series metrics for trend analysis
- Alert System: Configurable alerts with threshold-based notifications
- Data Export: CSV and JSON export endpoints for analytics data
- Rate Limiting: Built-in API rate limiting for production use
- RESTful API: Comprehensive API for frontend integration
- Caching Layer: Redis-backed caching for optimal performance
| Component | Technology |
|---|---|
| Runtime | Node.js 20+ |
| Language | TypeScript 5.x |
| Framework | Fastify 5.x |
| Database | PostgreSQL 16 + Drizzle ORM |
| Cache | Redis (ioredis) |
| Scheduler | node-cron |
| Validation | Zod |
| WebSocket | @fastify/websocket |
- Node.js 20 or higher
- PostgreSQL 16+ (required). Can be:
  - A local PostgreSQL installation, or
  - A hosted database service (Supabase recommended - free tier available)
- Redis 7+ (optional - application will work without it, but caching will be disabled)
- pnpm, npm, or yarn
- Xandeum pNode running (for testing/development with local pNode)
If you're running a pNode without a pNode license for testing purposes, there's an additional setup step required since the register button won't work in the Xandminer interface:
Run as root user:

```bash
# Create a 10GB file (adjust size as needed)
sudo fallocate -l 10g /xandeum-pages

# Create a symlink where the pod software expects the file
sudo ln -s /xandeum-pages /run/xandeum-pod
```

This creates a 10GB file and symlinks it to the path the pod software is looking for, allowing the pod to start. Adjust the size (`10g`) to whatever you want to allocate.

Check if the service starts:

```bash
sudo systemctl status pod.service
```

Note: This allows the pNode to start and use pRPC, but does not make you part of the incentivized DevNet payouts. It is useful for local testing and development.
You can use either a local PostgreSQL database or a hosted database service like Supabase.
Supabase provides a free PostgreSQL database with 500MB storage and 2GB bandwidth/month.
- Create a Supabase account and project:
  - Go to https://supabase.com and sign up
  - Click "New Project"
  - Choose a project name and database password
  - Select a region close to your users
  - Wait for the project to be provisioned (~2 minutes)

- Get your connection string:
  - Go to Project Settings → Database
  - Scroll to the "Connection string" section
  - Important: If you're on an IPv4 network, use the "Session Pooler" connection string (not "URI")
  - Copy the Session Pooler connection string
  - Format: `postgresql://postgres.[project-ref]:[password]@aws-1-[region].pooler.supabase.com:5432/postgres`
  - Note: The exact format may vary by region (e.g., `aws-1-eu-west-1` or `aws-0-us-east-1`)

- Add to your `.env` file:

  ```bash
  # Supabase Session Pooler (IPv4 compatible)
  DATABASE_URL=postgresql://postgres.[project-ref]:[password]@aws-1-[region].pooler.supabase.com:5432/postgres
  ```

  Note: Make sure to URL-encode special characters in your password (e.g., `@` becomes `%40`)

- Run database migrations:

  ```bash
  npm run setup:db
  ```

  The setup script automatically detects Supabase and skips database creation (the database is pre-created).

- Verify tables are created:
  - Go to Supabase Dashboard → Table Editor
  - You should see all tables: `pnodes`, `pnode_stats`, `pnode_events`, `alerts`, `alert_notifications`, `credits_history`
Important Notes:
- IPv4 Networks: Use the "Session Pooler" connection string (port 5432) - this is IPv4 compatible
- IPv6 Networks: Can use either the "URI" (direct) or "Session Pooler" connection string
- Password Encoding: If your password contains special characters (like `@`, `#`, `%`), URL-encode them in the connection string (e.g., `@` becomes `%40`)
- Production: The Session Pooler is recommended for production as it handles connection pooling automatically
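The URL-encoding rule above can be automated. The sketch below is a hypothetical helper (not part of this project) showing how `encodeURIComponent` escapes a password before it is placed in a connection string:

```typescript
// Sketch only: build a connection string with a safely URL-encoded password.
// encodeURIComponent turns characters like @ and # into %40 and %23.
function buildDatabaseUrl(user: string, password: string, host: string, db = "postgres"): string {
  return `postgresql://${user}:${encodeURIComponent(password)}@${host}:5432/${db}`;
}

// A password like "p@ss#word" becomes "p%40ss%23word" in the URL
console.log(buildDatabaseUrl("postgres.abc123", "p@ss#word", "aws-1-eu-west-1.pooler.supabase.com"));
```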
If you prefer to run PostgreSQL locally:
- Install PostgreSQL (if not already installed):

  ```bash
  # macOS (Homebrew)
  brew install postgresql@16
  brew services start postgresql@16

  # Linux (Ubuntu/Debian)
  sudo apt-get install postgresql-16
  sudo systemctl start postgresql
  ```

- Add to your `.env` file:

  ```bash
  DATABASE_URL=postgresql://postgres:password@localhost:5432/xandeum_analytics
  ```

- Run database setup:

  ```bash
  npm run setup:db
  ```

  This will create the database and run migrations.
- Clone and install dependencies:

  ```bash
  cd backend-typescript
  npm install
  ```

- Configure environment variables:

  Create a `.env` file in the project root:
  ```bash
  # Server Configuration
  PORT=3000
  HOST=0.0.0.0
  NODE_ENV=development

  # PostgreSQL Database (required)
  # For local PostgreSQL:
  # DATABASE_URL=postgresql://postgres:password@localhost:5432/xandeum_analytics
  # For Supabase (Session Pooler - IPv4 compatible):
  # DATABASE_URL=postgresql://postgres.[project-ref]:[password]@aws-1-[region].pooler.supabase.com:5432/postgres
  # Note: URL-encode special characters in the password (e.g., @ becomes %40)
  DATABASE_URL=postgresql://postgres:password@localhost:5432/xandeum_analytics

  # Redis Cache (optional - app works without it)
  REDIS_URL=redis://localhost:6379

  # Xandeum pRPC Configuration
  PRPC_ENDPOINT=https://prpc.xandeum.com

  # Pod Credits API
  POD_CREDITS_ENDPOINT=https://podcredits.xandeum.network/api/pods-credits

  # Seed Nodes (comma-separated IPs for pNode discovery)
  # Default: Fresh public pNodes (updated 2024)
  # Note: These are IP addresses only - the pRPC port (6000) is used automatically
  SEED_NODES=173.212.207.32,152.53.236.91,62.171.138.27,89.123.115.81,45.151.122.77,161.97.185.116,192.190.136.28,89.123.115.79,154.38.171.140,154.38.170.117,152.53.155.15,45.151.122.60,173.249.3.118,216.234.134.5,161.97.97.41,62.171.135.107,173.212.220.65,192.190.136.38,207.244.255.1

  # Data Collection Intervals (in seconds)
  STATS_POLL_INTERVAL=30
  DISCOVERY_INTERVAL=300
  CLEANUP_HOUR=3

  # Alert Configuration
  ALERT_CHECK_INTERVAL=60
  ```

- Setup the database:
If you haven't set up your database yet, see the Database Setup section above.

The automated setup script works for both local and hosted databases:

```bash
npm run setup:db
```

This script will:
- For local PostgreSQL: create the `xandeum_analytics` database if it doesn't exist
- For hosted databases (Supabase, Neon, etc.): skip database creation (the database is pre-created)
- Run database migrations to set up all tables

Alternative manual setup (local PostgreSQL only):

```bash
# Create database manually
createdb xandeum_analytics

# Run migrations
npm run db:push
```

Start the development server:

```bash
npm run dev
```

Build and run in production:

```bash
npm run build
npm start
```

`GET /health`

Returns server health status including database and Redis connection state.
Check pRPC port (6000) connectivity for pNodes, similar to the bash script's port testing:

`GET /health/prpc?limit=10&sample=true`

Query Parameters:
- `limit` (optional, default: 10): Number of pNodes to check
- `sample` (optional, default: false): If `true`, samples random pNodes from the database; if `false`, checks seed nodes

Returns:
- Total checked, reachable, and unreachable counts
- Success rate percentage
- Detailed results for each checked node (latency, error messages)
Check a specific pNode:

`GET /health/prpc/:ip?port=6000`

Example:

```bash
# Check seed nodes
curl http://localhost:3000/health/prpc?limit=5

# Sample random pNodes from database (quoted so the shell doesn't treat & as a background operator)
curl "http://localhost:3000/health/prpc?limit=10&sample=true"

# Check a specific IP
curl http://localhost:3000/health/prpc/173.212.203.145?port=6000
```

`GET /api/pnodes?status=online&search=abc&limit=50&offset=0`

Query Parameters:
- `status` (optional): Filter by status (`online`, `offline`, `unknown`)
- `search` (optional): Search by pubkey or IP address
- `version` (optional): Filter by software version
- `limit` (default: 50, max: 100): Number of results
- `offset` (default: 0): Pagination offset
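A client can assemble these query parameters with the standard `URLSearchParams` API. This is a minimal sketch (the `pnodesUrl` helper is hypothetical, not part of the project):

```typescript
// Sketch: build a pNode list URL from the parameters documented above.
interface PnodeQuery {
  status?: string;
  search?: string;
  limit?: number;
  offset?: number;
}

function pnodesUrl(base: string, opts: PnodeQuery): string {
  const params = new URLSearchParams();
  if (opts.status) params.set("status", opts.status);
  if (opts.search) params.set("search", opts.search);
  params.set("limit", String(opts.limit ?? 50));   // API default: 50
  params.set("offset", String(opts.offset ?? 0));  // API default: 0
  return `${base}/api/pnodes?${params.toString()}`;
}

console.log(pnodesUrl("http://localhost:3000", { status: "online", search: "abc" }));
// → http://localhost:3000/api/pnodes?status=online&search=abc&limit=50&offset=0
```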
`GET /api/pnodes/:id`

`GET /api/pnodes/:id/stats?from=2024-01-01T00:00:00Z&to=2024-01-02T00:00:00Z&limit=100`

`GET /api/pnodes/:id/events?limit=50`

`POST /api/pnodes/discover`

`POST /api/pnodes/:id/poll`

`GET /api/pnodes/diagnostics/invalid-ips`

Returns pNodes with:
- Missing IP addresses
- Invalid IP format
- Localhost/private IPs (likely incorrect for remote pNodes)

`GET /api/pnodes/diagnostics/validate-ips`

Comprehensive validation that checks:
- IP format validity
- Connectivity issues
- Suggestions for fixing invalid IPs
```http
PATCH /api/pnodes/:id/ip
Content-Type: application/json

{
  "ip": "173.212.203.145"
}
```

Updates a pNode's IP address with validation.
`GET /api/stats/network`

Returns aggregate statistics:
- Total pNode count
- Online/offline/unknown counts
- Storage utilization
- Average CPU/memory usage
- Total files count

`GET /api/stats/trends?period=24h`

Periods: `24h`, `7d`, `30d`

`GET /api/stats/events?limit=50`

`GET /api/stats/versions`

`GET /api/stats/top?metric=storage&limit=10`

Metrics: `storage`, `uptime`, `peers`, `requests`
`GET /api/alerts?activeOnly=true`

```http
POST /api/alerts
Content-Type: application/json

{
  "name": "High CPU Usage",
  "description": "Alert when CPU exceeds 80%",
  "metricType": "cpu_usage",
  "condition": "gt",
  "threshold": 80,
  "pnodeId": null
}
```

Metric Types: `cpu_usage`, `memory_usage`, `disk_usage`, `peers_count`, `status`

Conditions: `gt` (greater than), `lt` (less than), `eq` (equals), `neq` (not equals)
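The condition semantics above can be captured in a few lines. This is a sketch of how threshold evaluation might work, not the project's actual alert code:

```typescript
// The four comparison conditions supported by the alerts API.
type Condition = "gt" | "lt" | "eq" | "neq";

// Sketch: does a metric value trip an alert with the given condition/threshold?
function isTriggered(value: number, condition: Condition, threshold: number): boolean {
  switch (condition) {
    case "gt": return value > threshold;
    case "lt": return value < threshold;
    case "eq": return value === threshold;
    case "neq": return value !== threshold;
  }
}

// A cpu_usage reading of 85 trips the "High CPU Usage" example alert (gt 80)
console.log(isTriggered(85, "gt", 80)); // true
```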
`GET /api/alerts/:id`

```http
PATCH /api/alerts/:id
Content-Type: application/json

{
  "isActive": false
}
```

`DELETE /api/alerts/:id`

`GET /api/alerts/notifications?unreadOnly=true&limit=50`

```http
POST /api/alerts/notifications/read
Content-Type: application/json

{
  "notificationIds": ["uuid1", "uuid2"]
}
```

`GET /api/leaderboard?limit=50&offset=0`

Returns pNodes ranked by credits with pagination.
`GET /api/leaderboard/stats`

Returns network-wide credits statistics (total, average, median).

`GET /api/leaderboard/movers?limit=10`

Returns top gainers and losers based on recent credit changes.

`GET /api/leaderboard/:id/history?from=2024-01-01T00:00:00Z&limit=100`

`GET /api/leaderboard/uptime?limit=50`

Returns pNodes ranked by uptime percentage.

`POST /api/leaderboard/sync`

Triggers a manual sync from podcredits.xandeum.network.

`GET /api/export/pnodes?format=csv&limit=1000`

Formats: `json`, `csv`

`GET /api/export/leaderboard?format=csv`

`GET /api/export/pnodes/:id/stats?format=csv&from=2024-01-01T00:00:00Z`

`GET /api/export/pnodes/:id/credits?format=csv`

`GET /api/export/network-summary?format=json`

Connect to the WebSocket endpoint for real-time updates:
```javascript
const ws = new WebSocket('ws://localhost:3000/ws');

ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  console.log(message.type, message.data);
};

// Send ping
ws.send(JSON.stringify({ type: 'ping' }));
```

Messages from server to client:

| Type | Description |
|---|---|
| `connected` | Initial connection confirmation |
| `client_count` | Updated count of connected clients |
| `stats_updated` | New stats collected from pNodes |
| `nodes_discovered` | New pNodes discovered via gossip |
| `alert_triggered` | An alert condition was met |
| `pong` | Response to ping |
Messages from client to server:

| Type | Description |
|---|---|
| `ping` | Keep-alive ping |
| `subscribe` | Subscribe to a channel |
| `unsubscribe` | Unsubscribe from a channel |
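On the client side, the server-to-client messages above can be routed with a small typed dispatcher. This is a sketch; the `data` payload shapes shown here are assumptions, not documented by the API:

```typescript
// Sketch of a client-side dispatcher for the server→client message types.
// Payload shapes (e.g., { count }) are assumptions for illustration.
type ServerMessage =
  | { type: "connected" }
  | { type: "client_count"; data: { count: number } }
  | { type: "stats_updated"; data: unknown }
  | { type: "nodes_discovered"; data: unknown }
  | { type: "alert_triggered"; data: unknown }
  | { type: "pong" };

function describeMessage(msg: ServerMessage): string {
  switch (msg.type) {
    case "client_count":
      return `clients online: ${msg.data.count}`;
    case "alert_triggered":
      return "an alert condition was met";
    default:
      return msg.type;
  }
}

console.log(describeMessage({ type: "client_count", data: { count: 3 } }));
// → clients online: 3
```

A real client would call `describeMessage(JSON.parse(event.data))` inside the `onmessage` handler shown earlier.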
- pnodes: pNode identity and metadata
- pnode_stats: Time-series stats snapshots
- pnode_events: Status changes and events
- alerts: User-configured alert rules
- alert_notifications: Triggered alert instances
- credits_history: Historical credits snapshots per pNode
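For orientation, rows in the two core tables might look roughly like the interfaces below. This is a hypothetical sketch; the column names are assumptions, and the authoritative Drizzle definitions live in `src/db/schema.ts`:

```typescript
// Hypothetical row shapes; see src/db/schema.ts for the real columns.
interface PnodeRow {
  id: string;                                // primary key (UUID)
  pubkey: string;                            // pNode identity
  ip: string;                                // last known IP address
  status: "online" | "offline" | "unknown";  // statuses used by the API filters
  version: string | null;                    // reported software version
}

interface PnodeStatsRow {
  pnodeId: string;    // references pnodes.id
  cpuUsage: number;   // percent
  memoryUsage: number;
  recordedAt: Date;   // time-series timestamp
}

const example: PnodeRow = {
  id: "uuid",
  pubkey: "abc",
  ip: "173.212.203.145",
  status: "online",
  version: "1.0.0",
};
console.log(example.status);
```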
Generate migrations after schema changes:

```bash
npm run db:generate
```

Apply migrations:

```bash
npm run db:migrate
```

Open Drizzle Studio for database inspection:

```bash
npm run db:studio
```

Project structure:

```
src/
├── index.ts                 # Application entry point
├── config/
│   └── index.ts             # Environment configuration
├── db/
│   ├── index.ts             # Database connection
│   ├── schema.ts            # Drizzle schema definitions
│   └── migrations/          # Database migrations
├── cache/
│   └── redis.ts             # Redis client and helpers
├── prpc/
│   └── client.ts            # Xandeum pRPC JSON-RPC client
├── services/
│   ├── pnode.service.ts     # pNode business logic
│   ├── stats.service.ts     # Statistics aggregation
│   └── alert.service.ts     # Alert management
├── routes/
│   ├── pnodes.ts            # /api/pnodes endpoints
│   ├── stats.ts             # /api/stats endpoints
│   └── alerts.ts            # /api/alerts endpoints
├── jobs/
│   └── collector.ts         # Scheduled data collection
└── websocket/
    └── handler.ts           # WebSocket handler
```
| Job | Interval | Description |
|---|---|---|
| Stats Poll | 30s | Collects metrics from online pNodes |
| Discovery | 5m | Discovers new pNodes from gossip |
| Alert Check | 1m | Evaluates active alert rules |
| Cleanup | Daily 3AM | Removes old stats and events |
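The scheduler is node-cron, which accepts an optional leading seconds field (six-field expressions). The helpers below are a hypothetical sketch of how the intervals in the table map to cron expressions; they are not the project's actual job code, and sub-minute helpers only cover intervals up to 59 seconds:

```typescript
// Sketch: node-cron expression for "every n seconds" (six-field form with a
// leading seconds field). Only valid for 1-59; longer intervals (e.g., the
// 5-minute discovery job) need a minute-based expression instead.
function everySeconds(n: number): string {
  if (!Number.isInteger(n) || n < 1 || n > 59) throw new Error("seconds must be an integer 1-59");
  return `*/${n} * * * * *`;
}

// Sketch: standard five-field expression for "daily at the given hour".
function dailyAtHour(hour: number): string {
  if (!Number.isInteger(hour) || hour < 0 || hour > 23) throw new Error("hour must be 0-23");
  return `0 ${hour} * * *`;
}

console.log(everySeconds(30)); // */30 * * * * *   (stats poll, every 30s)
console.log(dailyAtHour(3));   // 0 3 * * *        (cleanup, daily at 3AM)
```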
Lint:

```bash
npm run lint
npm run lint:fix
```

Type-check:

```bash
npx tsc --noEmit
```

| Variable | Default | Description |
|---|---|---|
| `PORT` | 3000 | Server port |
| `HOST` | 0.0.0.0 | Server host |
| `NODE_ENV` | development | Environment mode |
| `DATABASE_URL` | - | PostgreSQL connection string (local or hosted like Supabase) |
| `REDIS_URL` | redis://localhost:6379 | Redis connection string |
| `PRPC_ENDPOINT` | https://xandeum.network/prpc | Xandeum pRPC endpoint |
| `STATS_POLL_INTERVAL` | 30 | Stats collection interval (seconds) |
| `DISCOVERY_INTERVAL` | 300 | Node discovery interval (seconds) |
| `CLEANUP_HOUR` | 3 | Hour for daily cleanup (0-23) |
| `ALERT_CHECK_INTERVAL` | 60 | Alert check interval (seconds) |
| `PRPC_TIMEOUT` | 10000 | pRPC request timeout (milliseconds) |
| `PRPC_RETRIES` | 2 | Number of retries for failed pRPC requests |
| `PRPC_RETRY_DELAY` | 1000 | Delay between retries (milliseconds) |
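Numeric settings like the intervals above are typically read from `process.env` with the documented defaults applied. This is a minimal sketch of such a loader (the `intFromEnv` helper is an assumption for illustration, not the project's actual `src/config` code, which uses Zod):

```typescript
// Sketch: read an integer env var, falling back to the documented default.
function intFromEnv(env: Record<string, string | undefined>, key: string, fallback: number): number {
  const raw = env[key];
  if (raw === undefined || raw === "") return fallback;
  const n = Number.parseInt(raw, 10);
  if (Number.isNaN(n) || n <= 0) throw new Error(`${key} must be a positive integer, got "${raw}"`);
  return n;
}

// Falls back to the documented default when the variable is unset
console.log(intFromEnv({}, "STATS_POLL_INTERVAL", 30));                 // 30
console.log(intFromEnv({ PRPC_TIMEOUT: "20000" }, "PRPC_TIMEOUT", 10000)); // 20000
```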
If you see an error like `listen EADDRINUSE: address already in use 0.0.0.0:3000`:

Option 1: Kill the process using the port

```bash
# Find the process
lsof -ti:3000

# Kill it
kill -9 $(lsof -ti:3000)
```

Option 2: Use a different port

```bash
PORT=3001 npm run dev
```

Or update your `.env` file:

```bash
PORT=3001
```

If you see `database "xandeum_analytics" does not exist`:
Quick fix:

```bash
npm run setup:db
```

Manual fix:

```bash
# Create the database
createdb xandeum_analytics

# Run migrations
npm run db:push
```

If you see connection errors:
For Local PostgreSQL:

- Check if PostgreSQL is running:

  ```bash
  # macOS (Homebrew)
  brew services list

  # Linux (systemd)
  sudo systemctl status postgresql
  ```

- Verify your DATABASE_URL:
  - Format: `postgresql://username:password@host:port/database`
  - Default: `postgresql://postgres:password@localhost:5432/xandeum_analytics`
  - Make sure credentials are correct

- Test the connection:

  ```bash
  psql $DATABASE_URL -c "SELECT 1"
  ```
For Supabase/Hosted Databases:

- Verify your connection string:
  - Get the connection string from your Supabase dashboard (Project Settings → Database)
  - IPv4 Networks: Use the "Session Pooler" connection string (not "URI")
  - Make sure you're using the correct password (the one you set when creating the project)
  - Password encoding: If your password contains special characters, URL-encode them (e.g., `@` → `%40`, `#` → `%23`)
  - Check that the connection string format is correct

- Common connection string issues:
  - "Not IPv4 compatible" error: Use the Session Pooler connection string instead of URI
  - "Tenant or user not found": Check that the region in your connection string matches your project region
  - "getaddrinfo ENOTFOUND": Verify the hostname is correct (it should look like `aws-0-[region]` or `aws-1-[region]` followed by `.pooler.supabase.com`)

- Check network/firewall:
  - Ensure your network allows outbound connections to Supabase
  - Some corporate networks may block database connections
  - If on IPv4, you must use the Session Pooler connection string

- Verify project status:
  - Go to the Supabase dashboard and ensure your project is active
  - Check if there are any service notifications
  - Wait for project provisioning to complete (~2 minutes after creation)

- Test the connection:

  ```bash
  # Test with psql (if installed)
  psql $DATABASE_URL -c "SELECT 1"

  # Or test with the setup script
  npm run setup:db

  # Or check tables directly
  node scripts/check-tables.js
  ```
Redis is optional - the application will work without it, but caching will be disabled.

If you see Redis errors:

- Option 1: Start Redis:

  ```bash
  # macOS (Homebrew)
  brew services start redis

  # Linux (systemd)
  sudo systemctl start redis
  ```

- Option 2: Ignore the errors - the app will continue without caching

The application automatically retries Redis connections and gracefully degrades when Redis is unavailable.
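Graceful degradation usually means a cache-aside pattern that simply skips the cache when no client is connected. The sketch below illustrates the idea with a simplified Redis client type; it is an assumption for illustration, not the project's actual `src/cache/redis.ts` code:

```typescript
// Simplified client shape: null represents "Redis unavailable".
type MaybeRedis = {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<unknown>;
} | null;

// Sketch: cache-aside lookup that falls through to the fetcher when Redis
// is missing, and populates the cache on a miss when it is present.
async function cached<T>(redis: MaybeRedis, key: string, fetchFresh: () => Promise<T>): Promise<T> {
  if (redis) {
    const hit = await redis.get(key);
    if (hit !== null) return JSON.parse(hit) as T; // cache hit
  }
  const value = await fetchFresh();                // miss or no Redis
  if (redis) await redis.set(key, JSON.stringify(value));
  return value;
}
```

With `redis === null` every call hits the fetcher directly, which is exactly the "works without caching" behavior described above.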
If migrations fail:

- Check the database connection:

  ```bash
  psql $DATABASE_URL -c "SELECT version()"
  ```

- Reset and re-run migrations:

  ```bash
  # WARNING: This will drop all tables!
  npm run db:push -- --force
  ```

- Generate new migrations:

  ```bash
  npm run db:generate
  npm run db:migrate
  ```
If you see `0 success, 211 failed` in the stats poll:

Possible causes:
- pNodes are offline - the nodes in your database may be offline
- Network/firewall blocking - your server may not be able to reach the pNodes
- Wrong IP addresses - the IPs in the database may be incorrect or outdated
- Wrong pRPC port - some nodes may use a different port than 6000

Solutions:

- Check if pNodes are actually online:

  ```bash
  # Test a specific pNode
  curl http://<pnode-ip>:6000/rpc -X POST \
    -H "Content-Type: application/json" \
    -d '{"jsonrpc":"2.0","id":1,"method":"get-version"}'
  ```

- Increase the timeout for slow networks:

  ```bash
  PRPC_TIMEOUT=20000  # 20 seconds instead of 10
  ```

- Adjust retry settings:

  ```bash
  PRPC_RETRIES=3        # Try 3 times instead of 2
  PRPC_RETRY_DELAY=2000 # Wait 2 seconds between retries
  ```

- Trigger discovery to refresh the node list:

  ```bash
  curl -X POST http://localhost:3000/api/pnodes/discover
  ```

- Check which pNodes are online:

  ```bash
  curl http://localhost:3000/api/pnodes?status=online
  ```
The application now includes:
- Automatic retry logic with exponential backoff
- Better error messages to diagnose issues
- Configurable timeouts and retry counts
- Success rate warnings when connectivity is poor
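Retry with exponential backoff, as mentioned above, can be sketched in a few lines. This is an illustrative implementation under the assumption that delays double per attempt (`PRPC_RETRY_DELAY`, then 2x, 4x, ...); the project's actual pRPC client may differ:

```typescript
// Sketch: run fn, retrying up to `retries` times with exponentially
// growing delays (baseDelayMs * 2^attempt) between failures.
async function withRetry<T>(fn: () => Promise<T>, retries: number, baseDelayMs: number): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

With the defaults (`PRPC_RETRIES=2`, `PRPC_RETRY_DELAY=1000`), a failing request would be attempted three times in total, waiting roughly 1s and then 2s between attempts.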
Issue: `Error: Cannot find module`
- Solution: Run `npm install` to install dependencies

Issue: TypeScript compilation errors
- Solution: Run `npm run build` to check for type errors

Issue: Rate limiting too aggressive
- Solution: Adjust rate limit settings in `src/index.ts` or use `allowList` for your IP

Issue: All stats polls failing (0% success rate)
- Solution: See the "pRPC Connectivity Issues" section above

Issue: Incorrect IP addresses in database
- Solution: Use the diagnostic endpoints to find and fix invalid IPs:

  ```bash
  # Find all pNodes with invalid IPs
  curl http://localhost:3000/api/pnodes/diagnostics/invalid-ips

  # Validate all IPs and get a detailed report
  curl http://localhost:3000/api/pnodes/diagnostics/validate-ips

  # Update a specific pNode's IP address
  curl -X PATCH http://localhost:3000/api/pnodes/:id/ip \
    -H "Content-Type: application/json" \
    -d '{"ip": "173.212.203.145"}'
  ```
License: MIT
- Fork the repository
- Create a feature branch
- Make your changes
- Run linting and tests
- Submit a pull request
For questions or issues, please open a GitHub issue or contact the Xandeum Labs team.