
Conversation

pobonomo (Member) commented Oct 6, 2025

An attempt to do convolutional networks.

Implemented a 2D convolution (maybe correct) and 2D max pooling.

I have two notebooks where it works (sort of: 2D max pooling seems to be really tough for MIP).
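To pin down what the MIP model of the pooling layer has to reproduce, here is a minimal plain-NumPy sketch of 2D max pooling (channels-last, non-overlapping windows, no padding). The function name and layout are assumptions for the example, not gurobi_ml code.

```python
# Hypothetical reference implementation of 2D max pooling, just to fix
# the semantics a MIP model of the layer must reproduce.
import numpy as np

def maxpool2d(x, pool=2):
    """x: (height, width, channels); non-overlapping pool x pool windows."""
    h, w, c = x.shape
    out = np.empty((h // pool, w // pool, c))
    for i in range(h // pool):
        for j in range(w // pool):
            window = x[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool, :]
            # Each output entry is the max over a whole window; in a MIP,
            # every such max needs auxiliary binary variables, which is
            # why pooling is much harder than a (purely linear) convolution.
            out[i, j, :] = window.max(axis=(0, 1))
    return out

x = np.arange(16, dtype=float).reshape(4, 4, 1)
y = maxpool2d(x)  # maxima of the four 2x2 windows
```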

pobonomo (Member, Author) commented Oct 6, 2025

Tests are failing because I had to deactivate all the code that checks the dimensions of the inputs and outputs of models and layers. That code is too naive for convolutional networks.

pobonomo and others added 15 commits October 7, 2025 15:21
Started doing a convolutional net with keras.

notebooks/adversarial/adversarial_cnn.ipynb

contains a working example that we would want to model with gurobi_ml

A conv2d layer is in progress (doesn't crash and does something).

The big mess now is variable dimensions. Gurobi ML was more or less
designed around the idea that the input of the ML model is 2-d
(n_examples, n_features), but of course convolutional layers are 4-d
(n_examples, height, width, n_channels).
The 2-d assumption is all over the place...

Missing layers:
- MaxPooling
- Dropout
- Flatten
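Of the three missing layers, two are cheap at prediction time, as this sketch (an assumption about how they could be handled, not gurobi_ml code) shows: Dropout is the identity at inference and Flatten is just a reshape, so neither needs extra MIP variables; only MaxPooling requires new constraints.

```python
# Sketch: Dropout and Flatten at prediction time (hypothetical helpers).
import numpy as np

def dropout(x):
    # Dropout only acts during training; at prediction it is the identity.
    return x

def flatten(x):
    # Keep the batch axis, merge height, width and channels into one axis.
    return x.reshape(x.shape[0], -1)

x = np.arange(2 * 3 * 3 * 2, dtype=float).reshape(2, 3, 3, 2)  # (n, h, w, c)
y = flatten(dropout(x))
```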

Also for the conv2d, only did valid padding, and the for loop is…
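For reference, a valid-padding 2D convolution written as explicit for loops (a hedged sketch: channels-last, stride 1; the function name is made up) looks like this. Every output entry is an affine expression of the inputs, so the same loop structure can emit linear constraints in a MIP model.

```python
# Minimal valid-padding 2D convolution as nested loops (channels-last, stride 1).
import numpy as np

def conv2d_valid(x, w, b):
    """x: (h, w, c_in); w: (kh, kw, c_in, c_out); b: (c_out,)."""
    h, wid, _ = x.shape
    kh, kw, _, c_out = w.shape
    out = np.empty((h - kh + 1, wid - kw + 1, c_out))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]
            for k in range(c_out):
                # Affine in the inputs: sum of elementwise products plus bias.
                out[i, j, k] = np.sum(patch * w[:, :, :, k]) + b[k]
    return out

x = np.ones((4, 4, 1))
w = np.ones((2, 2, 1, 1))
b = np.zeros(1)
y = conv2d_valid(x, w, b)  # every entry sums a 2x2 patch of ones
```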
Let's see if ChatGPT can do the max pooling and flatten layers.
We get something unsolvable on my example :-(
A very tiny one without pooling layers.
A less tiny one with pooling. Right now I need to cheat on the bounds
of the input layer.
The layer is flattened in different orders in PyTorch and TensorFlow;
adapt to it.
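The ordering difference can be illustrated in NumPy (a sketch of the general behavior, not of gurobi_ml's adaptation code): TensorFlow flattens a channels-last (batch, h, w, channels) tensor in C order, while PyTorch flattens a channels-first (batch, channels, h, w) tensor, so the same values end up in a different permutation; transposing the axes before flattening reconciles the two.

```python
# Flatten order: TensorFlow (NHWC) vs PyTorch (NCHW), illustrated with NumPy.
import numpy as np

x_nhwc = np.arange(2 * 2 * 3).reshape(1, 2, 2, 3)  # (n, h, w, c), TF layout

flat_tf = x_nhwc.reshape(1, -1)             # varies c fastest, then w, then h
x_nchw = x_nhwc.transpose(0, 3, 1, 2)       # reorder to (n, c, h, w), torch layout
flat_torch = x_nchw.reshape(1, -1)          # varies w fastest, then h, then c

# Same multiset of values, different positions:
assert sorted(flat_tf[0]) == sorted(flat_torch[0])
assert not np.array_equal(flat_tf, flat_torch)
```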
This is still a somewhat lazy approach where we don't differentiate
the dimensions of the input and the output.
We have to respect shapes now.