foreBlocks is a modular PyTorch library for time-series forecasting. The repository combines:
- `foreblocks`: forecasting models, training, evaluation, preprocessing, and DARTS search
- `foretools`: companion utilities, synthetic data generation, decomposition, and analysis notebooks
The project is best approached as a research toolkit rather than a single monolithic framework. The most stable public entry points are the top-level imports exported from foreblocks.
```bash
pip install foreblocks
```

Install optional extras when you need specific subsystems:
| Extra | Adds |
|---|---|
| `mltracker` | Experiment tracking API and UI dependencies |
| `vmd` | VMD decomposition and Optuna-based search support |
| `wavelets` | Optional wavelet backends |
| `benchmark` | External forecasting baselines and spreadsheet readers |
| `foreminer` | Changepoint-detection support |
| `all` | All runtime extras above |
Examples:

```bash
pip install "foreblocks[mltracker]"
pip install "foreblocks[vmd,wavelets]"
pip install "foreblocks[all]"
```

Local development install:

```bash
git clone https://github.com/lseman/foreblocks.git
cd foreblocks
pip install -e ".[dev]"
```

The example below is intentionally small and uses the most reliable path through the current API: a direct forecaster with a custom head, trained through `Trainer`.
```python
import numpy as np
import torch
import torch.nn as nn

from foreblocks import (
    ForecastingModel,
    ModelEvaluator,
    Trainer,
    TrainingConfig,
    create_dataloaders,
)

seq_len = 24
horizon = 6
n_features = 4

rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, seq_len, n_features)).astype("float32")
y_train = rng.normal(size=(64, horizon)).astype("float32")
X_val = rng.normal(size=(16, seq_len, n_features)).astype("float32")
y_val = rng.normal(size=(16, horizon)).astype("float32")

train_loader, val_loader = create_dataloaders(
    X_train,
    y_train,
    X_val,
    y_val,
    batch_size=16,
)

head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(seq_len * n_features, 64),
    nn.GELU(),
    nn.Linear(64, horizon),
)

model = ForecastingModel(
    head=head,
    forecasting_strategy="direct",
    model_type="head_only",
    target_len=horizon,
)

trainer = Trainer(
    model,
    config=TrainingConfig(
        num_epochs=5,
        batch_size=16,
        patience=3,
        use_amp=False,
    ),
    auto_track=False,
)

history = trainer.train(train_loader, val_loader)

evaluator = ModelEvaluator(trainer)
metrics = evaluator.compute_metrics(torch.tensor(X_val), torch.tensor(y_val))
print(history.train_losses[-1], metrics)
```

This path was smoke-tested in the repository. Once that is working, move on to encoder/decoder models, preprocessing, and DARTS.
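When moving from the random arrays above to real data, the shapes follow the standard sliding-window convention: each sample pairs a `(seq_len, n_features)` window of history with the next `horizon` values of the target. A minimal NumPy sketch of that convention — the `make_windows` helper is ours for illustration only; in foreblocks itself, windowing is handled by `TimeSeriesHandler`:

```python
import numpy as np

def make_windows(series, seq_len, horizon, target_col=0):
    """Slice a (T, n_features) array into supervised windows.

    Returns X of shape (n, seq_len, n_features) and y of shape
    (n, horizon), where y holds the next `horizon` values of
    `target_col` after each window.
    """
    T = series.shape[0]
    n = T - seq_len - horizon + 1
    X = np.stack([series[i : i + seq_len] for i in range(n)])
    y = np.stack(
        [series[i + seq_len : i + seq_len + horizon, target_col] for i in range(n)]
    )
    return X.astype("float32"), y.astype("float32")

series = np.random.default_rng(0).normal(size=(100, 4))
X, y = make_windows(series, seq_len=24, horizon=6)
print(X.shape, y.shape)  # (71, 24, 4) (71, 6)
```

Arrays produced this way plug directly into `create_dataloaders` as in the quickstart.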
These are the top-level imports currently exposed by foreblocks:
| Import | Purpose |
|---|---|
| `ForecastingModel` | Core forecasting wrapper for direct, autoregressive, and seq2seq-style models |
| `Trainer` | Training loop with NAS hooks, MLTracker integration, and optional conformal support |
| `ModelEvaluator` | Prediction helpers, metrics, cross-validation, and training-curve plots |
| `TimeSeriesHandler` | Time-series handling pipeline for windowing, scaling, filtering, imputation, and time features |
| `TimeSeriesDataset` | Dataset wrapper used by the dataloader helper |
| `create_dataloaders` | Build train/validation PyTorch dataloaders from NumPy arrays |
| `ModelConfig`, `TrainingConfig` | Lightweight configuration dataclasses |
| `LSTMEncoder`, `LSTMDecoder`, `GRUEncoder`, `GRUDecoder` | Recurrent encoder/decoder blocks |
| `TransformerEncoder`, `TransformerDecoder` | Transformer backbones and related advanced features |
| `AttentionLayer` | Attention module entry point |
| Path | What it contains |
|---|---|
| `foreblocks/core` | `ForecastingModel`, heads, conformal utilities, sampling |
| `foreblocks/training` | `Trainer`, training loop, quantization utilities |
| `foreblocks/evaluation` | `ModelEvaluator`, benchmarking helpers |
| `foreblocks/ts_handler` | `TimeSeriesHandler`, imputation, filtering, outlier handling |
| `foreblocks/tf` | Transformer stack, attention variants, MoE, norms, embeddings |
| `foreblocks/darts` | Neural architecture search pipeline and evaluation |
| `foretools/tsgen` | Synthetic time-series generator and notebooks |
| `examples/` | Notebooks and runnable usage examples |
| `web/` | Static landing page assets for the published site root |
| `docs/` | MkDocs source for the versioned documentation site published under `/docs/` |
Start here if you are new to the repository:
Topic guides:
Companion tooling:
Useful notebooks and examples:
There is also a repository-local docs navigation file at `mkdocs.yml`. The current publishing model is:

- site root `/`: custom landing page from `web/index.html`
- site docs `/docs/`: MkDocs site built from `docs/`
- The repository is broad and still evolving. Some subsystems are more mature than others.
- The top-level imports listed above are the safest place to start.
- `Trainer` supports MLTracker and conformal prediction, but you can disable tracking during local smoke tests with `auto_track=False`.
- `MultiAttention` now includes an experimental attention-matching KV compaction mode for dense paged causal decode. Enable it with `use_attention_matching_compaction=True` and `use_mla=False`.
- For decoder-based seq2seq and transformer workflows, use the topic guides before wiring custom modules, because dimension contracts are stricter than for the direct head path.
- `TrainingConfig` now lives in a single canonical location and includes trainer, NAS, MLTracker, and conformal settings.
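As a config fragment, the quickstart's trainer settings can be spelled out in one place. Only the fields exercised in the example above are shown; the NAS, MLTracker, and conformal settings mentioned here are assumed to have workable defaults:

```python
from foreblocks import TrainingConfig

# Fields below are the ones used in the quickstart example;
# NAS, MLTracker, and conformal options are left at their defaults.
config = TrainingConfig(
    num_epochs=5,   # total training epochs
    batch_size=16,  # should match the dataloader batch size
    patience=3,     # early-stopping patience, in epochs
    use_amp=False,  # disable mixed precision for CPU smoke tests
)
```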
Documentation improvements are especially valuable here because the repository spans forecasting models, search, preprocessing, and auxiliary tooling. If you add or change a public API, update:
- this `README.md`
- the relevant guide under `docs/`
- at least one runnable example or notebook