Implementation of regime-switching models with portfolio optimization and delta hedging.
Base VQ-VAE-HMM model with proper Gaussian emissions and numerical stability.
Key Classes:
- Encoder: Conv1D encoder for regime detection
- Prior: HMM prior with input-conditioned transitions
- Decoder: Gaussian emission decoder with mean and log-variance
- VAE_HMM: Main model combining all components
- RandomChunkDataset: Dataset for variable-length sequences
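The "numerical stability" above typically comes from how the Gaussian emission likelihood is evaluated. As a minimal sketch (not the module's actual code; the function name and clamp bounds are illustrative), clamping the decoder's log-variance before exponentiating keeps the negative log-likelihood finite:

```python
import math

import torch

def gaussian_emission_nll(x, mean, log_var, min_log_var=-10.0, max_log_var=10.0):
    """Per-sample Gaussian negative log-likelihood from a decoder's mean / log-variance.

    Clamping log_var is one common way to keep exp(log_var) and the
    quadratic term from overflowing or collapsing to zero.
    """
    log_var = log_var.clamp(min_log_var, max_log_var)
    nll = 0.5 * (log_var + (x - mean) ** 2 / log_var.exp() + math.log(2 * math.pi))
    return nll.sum(dim=-1)  # sum over the feature dimension
```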
Neural network architectures for portfolio optimization.
Models:
- AttentionPortfolioOptimizer: Multi-head attention for regime weighting
- TransformerPortfolioOptimizer: Full transformer encoder for sequences
- BayesianPortfolioOptimizer: Uncertainty quantification with weight distributions
- EnsemblePortfolioOptimizer: Multiple models for robust predictions
- HierarchicalPortfolioOptimizer: Macro → micro regime hierarchy
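To illustrate the attention-based variant (the class name, dimensions, and softmax head below are assumptions for the sketch, not this library's API), a regime-attention portfolio model can be as small as:

```python
import torch
import torch.nn as nn

class TinyRegimePortfolio(nn.Module):
    """Sketch: self-attention over a window of regime posteriors -> long-only weights."""

    def __init__(self, K, n_assets, hidden_dim=32, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(K, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_assets)

    def forward(self, regime_probs):          # (batch, T, K)
        h = self.embed(regime_probs)          # (batch, T, hidden)
        h, _ = self.attn(h, h, h)             # attend over the time axis
        w = self.head(h[:, -1])               # decide from the last step
        return torch.softmax(w, dim=-1)       # weights >= 0 and sum to 1
```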
Comprehensive loss functions for portfolio optimization.
Loss Functions:
- portfolio_loss: Multi-objective with transaction costs, position limits, leverage, drawdown, CVaR
- sortino_loss: Downside risk optimization
- calmar_loss: Return/max drawdown ratio
- risk_parity_loss: Equal risk contribution
- regime_conditional_loss: Regime-specific covariance optimization
- adversarial_portfolio_loss: Robustness to regime misclassification
- transition_aware_loss: Accounts for expected regime changes
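The general shape shared by these losses is "negative risk-adjusted return plus penalties". A minimal sketch, assuming a Sharpe-style objective with an L1 turnover cost (the signature and penalty weight are illustrative, not the actual `portfolio_loss` API):

```python
import torch

def sharpe_style_loss(weights, returns, tx_cost=0.001, eps=1e-8):
    """Negative Sharpe ratio with a transaction-cost penalty.

    weights: (T, n_assets) portfolio weights per period
    returns: (T, n_assets) asset returns per period
    """
    port_ret = (weights * returns).sum(dim=-1)              # (T,) gross returns
    turnover = (weights[1:] - weights[:-1]).abs().sum(-1)   # L1 turnover per rebalance
    net_ret = port_ret.clone()
    net_ret[1:] = net_ret[1:] - tx_cost * turnover          # charge costs on trades
    sharpe = net_ret.mean() / (net_ret.std() + eps)         # eps guards a zero std
    return -sharpe  # minimize the negative Sharpe ratio
```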
Training strategies and optimization techniques.
Classes:
- MetaPortfolioOptimizer: MAML-style meta-learning for fast adaptation
- OnlinePortfolioOptimizer: Continuous learning with EMA
- WalkForwardTrainer: Realistic backtesting with rolling windows
Functions:
train_portfolio: Comprehensive training with schedulers, adversarial training, ensembles
Utilities for regime analysis and portfolio construction.
Models:
- RegimeChangeDetector: Predict upcoming regime transitions
- ForwardTransitionPredictor: Multi-step-ahead regime forecasting
- RegimePersistenceModel: Estimate expected regime duration
- TemperatureScaling: Calibrate regime probabilities
- RegimeFactorModel: Factor decomposition per regime
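For regime persistence, one baseline is exact under a first-order Markov chain: a regime with self-transition probability p_kk lasts 1 / (1 − p_kk) steps in expectation. A sketch of that baseline (the function name is illustrative, not the `RegimePersistenceModel` API):

```python
import numpy as np

def expected_regime_durations(transition_matrix):
    """Expected duration of each regime under a first-order Markov chain.

    The sojourn time in regime k is geometric with stay probability p_kk,
    so its mean is 1 / (1 - p_kk).
    """
    p_stay = np.diag(np.asarray(transition_matrix, dtype=float))
    return 1.0 / (1.0 - np.clip(p_stay, 0.0, 1.0 - 1e-12))
```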
Functions:
- estimate_regime_covariance: Regime-conditional covariance matrices
- confidence_based_sizing: Scale positions by regime certainty
- optimize_rebalancing_frequency: Optimal trade-off between alpha and costs
- optimize_leverage: Target volatility with leverage constraints
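Regime-conditional covariance is usually estimated by weighting observations with the regime posterior. A minimal sketch of that soft-assignment version (shapes and normalization are assumptions; `estimate_regime_covariance` may differ):

```python
import numpy as np

def regime_weighted_covariance(returns, regime_probs):
    """Soft regime-conditional covariance matrices.

    returns:      (T, n_assets) asset returns
    regime_probs: (T, K) posterior regime probabilities (rows sum to 1)
    Returns an array of shape (K, n_assets, n_assets).
    """
    T, n = returns.shape
    K = regime_probs.shape[1]
    covs = np.zeros((K, n, n))
    for k in range(K):
        w = regime_probs[:, k]
        w_sum = w.sum() + 1e-12
        mu = (w[:, None] * returns).sum(0) / w_sum   # regime-weighted mean
        dev = returns - mu
        # posterior-weighted outer products of deviations
        covs[k] = (w[:, None, None] * dev[:, :, None] * dev[:, None, :]).sum(0) / w_sum
    return covs
```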
Regime-aware delta hedging strategies.
Models:
- RegimeDeltaHedger: Basic regime-conditional delta hedging
- DynamicDeltaHedger: Includes gamma hedging
- LSTMDeltaHedger: Sequential hedging decisions
- TransactionCostAwareHedger: Optimal rehedging thresholds
- TransitionAwareHedger: Anticipates regime changes
Functions:
- minimum_variance_hedge_ratio: Regime-conditional minimum variance hedge
- delta_hedge_loss: Variance minimization with transaction costs
- optimal_hedge_frequency: Leland (1985) with regime persistence
- train_delta_hedger: Training loop for hedging models
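The minimum-variance hedge ratio itself is standard, h* = Cov(ΔS, ΔF) / Var(ΔF); the regime-conditional version computes it per regime under the posterior. A sketch (blending the per-regime ratios by the latest posterior is an assumption, not necessarily what `minimum_variance_hedge_ratio` does):

```python
import numpy as np

def regime_min_variance_hedge(spot_ret, fut_ret, regime_probs):
    """Per-regime minimum-variance hedge ratios h_k = Cov_k(dS, dF) / Var_k(dF).

    spot_ret, fut_ret: (T,) spot and futures returns
    regime_probs:      (T, K) posterior regime probabilities
    Returns the per-regime ratios (K,) and a ratio blended by the latest posterior.
    """
    K = regime_probs.shape[1]
    h = np.zeros(K)
    for k in range(K):
        w = regime_probs[:, k]
        ws = w.sum() + 1e-12
        ms, mf = (w * spot_ret).sum() / ws, (w * fut_ret).sum() / ws
        cov_sf = (w * (spot_ret - ms) * (fut_ret - mf)).sum() / ws
        var_f = (w * (fut_ret - mf) ** 2).sum() / ws + 1e-12
        h[k] = cov_sf / var_f
    blended = float(regime_probs[-1] @ h)  # weight by the current regime posterior
    return h, blended
```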
Backtesting framework for portfolio strategies.
Classes:
- Backtester: Core backtesting engine with transaction costs and slippage
- WalkForwardBacktest: Rolling-window backtesting with retraining
- RegimeBacktest: Regime-specific performance analysis
- BacktestResult: Results container with metrics and equity curve
Functions:
- compare_strategies: Compare multiple strategies side-by-side
- plot_results: Visualize backtest results
Threshold calibration with signal/noise control and empirical stopping criteria.
Classes:
- ThresholdCalibrator: Calibrate thresholds with precision/recall constraints
- SignalNoiseController: Control signal vs. noise ratio directly
- EmpiricalStoppingCriteria: Data-driven stopping based on convergence curves
- PrecisionRecallOptimizer: Optimize precision/recall tradeoffs
- EvaluationLoop: Concrete evaluation with calibration and stopping
Functions:
- calibrate_regime_thresholds: Regime-specific threshold calibration
- evaluate_with_tradeoffs: Analyze precision/recall tradeoff curves
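A hedged sketch of the constrained search a threshold calibrator can perform: scan candidate thresholds, keep those meeting the precision and recall floors, and return the best F1 among them (the objective and return shape are assumptions, not the `ThresholdCalibrator` API):

```python
import numpy as np

def calibrate_threshold(scores, labels, min_precision=0.7, min_recall=0.5):
    """Best-F1 threshold among those meeting both precision and recall floors.

    Returns (threshold, f1, precision, recall), or None if no threshold qualifies.
    """
    best = None
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        if prec >= min_precision and rec >= min_recall and prec + rec > 0:
            f1 = 2 * prec * rec / (prec + rec)
            if best is None or f1 > best[1]:
                best = (float(t), f1, prec, rec)
    return best
```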
```python
from torch.utils.data import DataLoader

from VQ_VAE_HMM_fixed import VAE_HMM, train_model, collate_fn, RandomChunkDataset

# Create model
model = VAE_HMM(input_dim=5, hidden_dim=64, K=3, hidden_dim2=32, u_dim=4)

# Create dataset of variable-length training chunks
dataset = RandomChunkDataset(x_sequences, u_sequences, min_len=20, max_len=200)
dataloader = DataLoader(dataset, batch_size=64, collate_fn=collate_fn)

# Train
trained_model = train_model(model, dataloader, num_epochs=150, lr=1e-5)
```

```python
from portfolio_optimizer import TransformerPortfolioOptimizer
from loss_functions import portfolio_loss
from training import train_portfolio

# Create optimizer
portfolio_model = TransformerPortfolioOptimizer(K=3, n_assets=10, hidden_dim=64)

# Train with features
trained_portfolio = train_portfolio(
    portfolio_model, vae_hmm, dataloader, returns_data,
    num_epochs=100, lr=0.001, use_scheduler=True,
    use_adversarial=True, use_ensemble=False
)
```

```python
import torch

from delta_hedger import TransitionAwareHedger, train_delta_hedger

# Create hedger
hedger = TransitionAwareHedger(K=3, n_assets=10, hidden_dim=64, lookahead=5)

# Train
trained_hedger = train_delta_hedger(
    hedger, vae_hmm, spot_data, futures_data,
    num_epochs=50, lr=0.001
)

# Get hedge ratios
with torch.no_grad():
    hedge_ratios = hedger(regime_probs, transition_matrix, spot_prices)
```

```python
from training import MetaPortfolioOptimizer

meta_optimizer = MetaPortfolioOptimizer(
    model, inner_lr=0.01, outer_lr=0.001, n_inner_steps=5
)

# Meta-train on multiple market conditions
for epoch in range(100):
    tasks = sample_tasks(dataloader)  # Different market regimes
    meta_loss = meta_optimizer.meta_update(tasks, loss_fn)
```

```python
from training import WalkForwardTrainer

trainer = WalkForwardTrainer(
    model, loss_fn, train_window=252, test_window=21, retrain_freq=21
)
results = trainer.run(full_data, n_periods=50)
```

```python
from backtesting import Backtester, WalkForwardBacktest, compare_strategies

# Basic backtest
backtester = Backtester(initial_capital=100000, tx_cost=0.001)
result = backtester.run(portfolio_model, vae_hmm, data, prices, returns)
print(f"Sharpe Ratio: {result.metrics['sharpe_ratio']:.2f}")
print(f"Max Drawdown: {result.metrics['max_drawdown']:.2%}")

# Walk-forward backtest
wf_backtest = WalkForwardBacktest(train_window=252, test_window=21)
wf_results = wf_backtest.run(portfolio_model, vae_hmm, train_fn, data, prices, returns)

# Compare strategies
comparison = compare_strategies({'strategy1': result1, 'strategy2': result2})
print(comparison)
```

```python
from calibration import ThresholdCalibrator, SignalNoiseController, EmpiricalStoppingCriteria

# Calibrate with precision/recall constraints
calibrator = ThresholdCalibrator(min_precision=0.7, min_recall=0.5)
result = calibrator.calibrate(predictions, targets)
print(f"Optimal threshold: {result.threshold}")
print(f"F1 Score: {result.f1_score}")

# Control the signal/noise ratio
controller = SignalNoiseController(target_signal_ratio=0.3)
threshold = controller.find_threshold(predictions)
quality = controller.evaluate_signal_quality(predictions, targets, threshold)

# Empirical stopping criteria
stopping = EmpiricalStoppingCriteria(patience=10, min_delta=0.001)
for epoch in range(100):
    metrics = train_epoch()
    if stopping.should_stop(metrics):
        break
```

- Transaction cost modeling
- Position and leverage limits
- Maximum drawdown constraints
- CVaR optimization
- Regime-conditional covariance
- Multi-head attention
- Transformer encoders
- Bayesian uncertainty quantification
- Ensemble methods
- Hierarchical regime modeling
- Meta-learning (MAML)
- Online learning with EMA
- Walk-forward validation
- Adversarial training
- Learning rate scheduling
- Gradient clipping
- Transition prediction
- Regime persistence modeling
- Probability calibration
- Factor models per regime
- Confidence-based sizing
- Regime-aware hedging
- Gamma hedging
- Transaction cost optimization
- Optimal rehedging frequency
- Transition-aware hedging
- Transaction costs and slippage
- Walk-forward validation
- Regime-specific analysis
- Performance metrics (Sharpe, Sortino, Calmar)
- Drawdown analysis
- Signal vs noise ratio control
- Precision/recall tradeoff optimization
- Empirical stopping criteria
- Convergence curve analysis
- Regime-specific thresholds
- Architecture: Attention, Transformers, Bayesian, Ensemble, Hierarchical
- Loss Functions: Sharpe, Sortino, Calmar, Risk Parity, CVaR, Multi-objective
- Training: Meta-learning, Online learning, Walk-forward, Adversarial
- Risk Management: Transaction costs, Position limits, Drawdown, Leverage
- Regime Modeling: Transitions, Persistence, Calibration, Factors
- Hedging: Delta, Gamma, Transaction-aware, Transition-aware
- Uncertainty: Bayesian weights, Confidence sizing, Ensemble predictions
- Backtesting: Walk-forward, Regime-specific, Transaction costs, Performance metrics
- Calibration: Signal/noise control, Precision/recall tradeoffs, Empirical stopping
- PyTorch
- NumPy
- Pandas
- math (Python standard library)
- All models support GPU acceleration
- Gradient clipping prevents training instability
- Numerical stability built into all loss functions
- Supports variable-length sequences with masking