I am trying to implement a Convolutional KAN and an RNN KAN. While coding, I noticed that the grid tensor can be broadcast automatically, so using expand does not seem to provide any real benefit and may only waste memory. Is there any reason why expand(in_features, -1) might exist?
Extra note:
Convolutional KAN replaces convolution weights with univariate functions, following the formulation in the original KAN paper. If expand is required for the grid tensor, should it be expanded as expand(in_channels, height, width, -1) or as expand(in_channels, kernel_size1, kernel_size2, -1)?
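To illustrate what I mean, here is a minimal sketch (in NumPy, which follows the same broadcasting rules as PyTorch; np.broadcast_to plays the role of torch's expand). The names in_features, grid_size, and batch are just placeholders for this example. The broadcast version and the explicitly expanded version produce identical results:

```python
import numpy as np

# Hypothetical KAN-style setup: each of `in_features` inputs is
# evaluated against a shared 1-D grid of `grid_size` points.
in_features, grid_size, batch = 3, 5, 4

grid = np.linspace(-1.0, 1.0, grid_size)   # shape: (grid_size,)
x = np.random.randn(batch, in_features)    # shape: (batch, in_features)

# Implicit broadcasting:
# (batch, in_features, 1) - (grid_size,) -> (batch, in_features, grid_size)
diff_broadcast = x[..., None] - grid

# Explicit expansion of the grid to (in_features, grid_size) first,
# analogous to grid.expand(in_features, -1) in PyTorch:
grid_expanded = np.broadcast_to(grid, (in_features, grid_size))
diff_expanded = x[..., None] - grid_expanded

# Both paths give the same tensor.
assert diff_broadcast.shape == (batch, in_features, grid_size)
assert np.array_equal(diff_broadcast, diff_expanded)
```

(In PyTorch, expand is a zero-copy view with stride 0, so the memory cost may be smaller than it first appears, but the question about whether the explicit call is needed at all still stands.)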