The current constrained 2D/3D models can be simplified. At present, weights that are equidistant from the center of the window are constrained to be equal. For 2D AR(1):
$$
\hat{y} = \frac{1}{4} \sum_{i=1}^{4} w_1 x_i
$$
where $x_1, \dots, x_4$ are the four neighbors equidistant from the center and $w_1$ is the shared AR(1) weight.
This is implemented with convolutions by constraining a 2D kernel so that equidistant entries share a single weight.
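As an illustrative sketch (the function name and the use of plain NumPy slicing instead of a convolution library are assumptions, not the project's actual implementation), the constrained 2D AR(1) prediction amounts to a 3x3 kernel whose four axis-aligned neighbors all carry the weight $w_1/4$:

```python
import numpy as np

def ar1_predict(x, w1):
    # Predict each interior pixel from its four equidistant neighbors,
    # all sharing the single AR(1) weight w1 -- equivalently, w1 times
    # the neighbor average. Returns an array for the interior region.
    up    = x[:-2, 1:-1]
    down  = x[2:,  1:-1]
    left  = x[1:-1, :-2]
    right = x[1:-1, 2:]
    return w1 * (up + down + left + right) / 4.0
```

Constraining the kernel this way means only one free parameter ($w_1$) is learned, no matter how many kernel entries sit at that distance.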
This constrained learning can be simplified. Pull the AR weight ($w_1$) out:
$$
\hat{y} = w_1 \frac{1}{4} \sum_{i=1}^{4} x_i
$$
This lets us take the average of the points at each equidistant ring first, turning 2D/3D problems into 1D ones. That simplifies computation and allows standard 1D AR solvers to be used.
The first step is to figure out how to efficiently scale the averaging of x across all windows for AR(p) in n-D. The output should be a matrix X with shape (n_windows, p).
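As a hedged sketch of one way to build that matrix (the function name, the definition of rings by squared Euclidean offset distance, and the radius-equals-p neighborhood are all assumptions), the per-ring averages can be gathered with NumPy slicing:

```python
import numpy as np
from itertools import product

def ring_averaged_design(x, p):
    # Build the design matrix X with shape (n_windows, p): for each
    # interior point, column j holds the average of the neighbors in
    # the j-th equidistant ring around that point. Ring membership is
    # defined by squared Euclidean distance of the offset (assumption).
    nd = x.ndim
    r = p  # hypothetical choice: search a radius-p neighborhood
    offsets = [o for o in product(range(-r, r + 1), repeat=nd) if any(o)]
    d2 = sorted({sum(c * c for c in o) for o in offsets})[:p]
    rings = [[o for o in offsets if sum(c * c for c in o) == dd] for dd in d2]

    # Interior region where every offset stays in bounds.
    core = tuple(slice(r, s - r) for s in x.shape)
    n_windows = int(np.prod(x[core].shape))
    X = np.empty((n_windows, p))
    for j, ring in enumerate(rings):
        acc = np.zeros_like(x[core], dtype=float)
        for o in ring:
            sl = tuple(slice(r + c, s - r + c) for c, s in zip(o, x.shape))
            acc += x[sl]
        X[:, j] = (acc / len(ring)).ravel()
    return X
```

Because the slicing is vectorized over all windows at once, the cost is one array add per offset rather than a Python loop per window; the resulting X feeds directly into an ordinary 1D AR(p) least-squares fit.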