
Conversation

@ido-shm-uel (Contributor)

  • Implemented PreimageApproximation Algorithm (simplified, without branch-and-bound) for ReLU, LeakyReLU, Sign, and Bilinear layers (arXiv:2305.03686v4 [cs.SE], arXiv:2405.21063 [cs.LG]).
  • Added option to perform LP relaxation using backward-preimage-approx.
  • Added unit tests at Test_NetworkLevelReasoner.h.

Added assignment and store-into-other NLR tests for every activation function.
0-2-0: Implementing layer simulations for every activation function.
0-2-1: Testing layer simulations for every activation function.
0-3-0: Implementing interval arithmetic for all activation functions.
0-3-1: Adding interval arithmetic tests for all activation functions.
0-4-0: Implementing symbolic bound tightening for every activation function.
0-4-1: Adding symbolic bound tightening tests for all activation functions.
0-5-0: Adding rounding constant for LP relaxation propagation.
0-5-1: Implementing forward-backward LP propagation algorithm for all activation functions.
0-5-2: Adding forward-backward LP propagation tests for all activation functions.
0-6-0: Adding MILP bound types for two backpropagation of bounds algorithms, adjusting in Engine.
0-6-1: Adjusting in Options for two new MILP Bound types.
0-6-2: Introducing Polygonal Tightenings (output type of two new algorithms).
0-6-3: Adjusting in NLR, LayerOwner for Polygonal Tightenings.
0-6-4: Introducing parameterised bounds in LPFormulator (two new algorithms will optimize value of parameter).
0-6-5: Current state of PreimageApproximation + cleanup + corrections.
0-6-6: New constants for PreimageApproximation (random seeds).
0-6-7: TEMPORARY CHANGE - use boost v1.85 for Differential Evolution algorithm (gradient-free optimization for PreimageApproximation).
0-6-8: Revert last change.
0-6-9: Adapt DifferentialEvolution from Boost v1.85.
0-6-10: Current state of PreimageApproximation (EstimateVolume implemented).
0-6-11: Add constants for PreimageApproximation.
0-6-12: Current state of PreimageApproximation (implement coordinate descent).
0-6-13: Timing PreimageApproximation with coordinate descent (very slow).
0-6-14: Small Fix (same as 0-5---4).
- Apply 0-5--6 fixes (Backward-Forward Algorithm branch) to this branch.
- Add new optimization algorithms for PreimageApproximation: Coordinate Descent, Gradient Descent, Adam Optimizer.
- Change default algorithm (Adam) and parameters to reduce runtime.
- Apply clang-format to all files changed.
ido-shm-uel and others added 10 commits February 6, 2025 01:14
Small fix for bug in SumOfInfeasibilitiesManager::proposePhasePatternUpdateRandomly() when the chosen plConstraintToUpdate is fixed. Merge idan0610's fork to resolve regression tests failing due to fixed Sign layer splitting (MarabouError 7).
@wu-haoze (Collaborator) left a comment

Hi @ido-shm-uel, thanks for this addition. I have some questions and (mostly minor) code style suggestions. Please see my comments!


#endif // __PolygonalTightening_h__

//

Remove lines after 95 as they are obsolete since we switched to cmake.

void LPFormulator::optimizeBoundsWithLpRelaxation( const Map<unsigned, Layer *> &layers,
bool backward )
bool backward,
std::vector<double> coeffs )

Use the internal Vector instead. Also, please pass by reference to avoid repeatedly copying the coeffs vector.

Vector<double> &coeffs
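For illustration, a hedged sketch of the declaration with both suggestions applied (hypothetical, not the final API; the remaining parameters stay as in the diff above):

// Pass the internal Vector by const reference so callers never copy the
// coefficient list on each invocation.
void optimizeBoundsWithLpRelaxation( const Map<unsigned, Layer *> &layers,
                                      bool backward,
                                      const Vector<double> &coeffs );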

void LPFormulator::optimizeBoundsOfNeuronsWithLpRlaxation( ThreadArgument &args, bool backward )
void LPFormulator::optimizeBoundsOfNeuronsWithLpRelaxation( ThreadArgument &args,
bool backward,
std::vector<double> coeffs )

Similarly, please pass by reference and change the datatype.

GurobiWrapper &gurobi,
unsigned lastLayer )
unsigned lastLayer,
std::vector<double> coeffs )

Same here.

GurobiWrapper &gurobi,
unsigned firstLayer )
unsigned firstLayer,
std::vector<double> coeffs )

Same here.

// Concrete upper bound: x_f <= ub_b
double width = sourceUb - sourceLb;
double coeff = ( sourceUb - _alpha * sourceLb ) / width;
double weight = ( sourceUb - _alpha * sourceLb ) / width;

What is the rationale behind this name change? It seems we are mostly using coeffs when it comes to symbolic bounds?
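For context (assuming this hunk handles the unfixed LeakyReLU case with sourceLb < 0 < sourceUb), the value being renamed is the slope of the chord joining the activation's endpoints over [sourceLb, sourceUb], whatever it ends up being called:

// Sketch based on the diff above: the chord through ( lb, alpha * lb ) and ( ub, ub )
// has slope ( f(ub) - f(lb) ) / ( ub - lb ) = ( ub - alpha * lb ) / ( ub - lb ).
double width = sourceUb - sourceLb;
double slope = ( sourceUb - _alpha * sourceLb ) / width;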

src/nlr/Layer.h
void computeSymbolicBounds();
void computeIntervalArithmeticBounds();
void computeSymbolicBounds();
void computeParameterisedSymbolicBounds( std::vector<double> coeffs,

Change to Vector &coeffs here and below.
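A hedged sketch of the Layer.h declaration after the change (illustrative only; any further parameters from the diff above are unchanged):

// Take the coefficients as Marabou's internal Vector, by const reference.
void computeParameterisedSymbolicBounds( const Vector<double> &coeffs /* , ... remaining parameters unchanged */ );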

void obtainCurrentBounds();
void intervalArithmeticBoundPropagation();
void symbolicBoundPropagation();
void parameterisedSymbolicBoundPropagation( std::vector<double> coeffs );

Pass by reference.

/*
Equality operator.
*/
bool operator==( const PolygonalTightening &other ) const

Could you point to me where this equality operator is used?

** directory for licensing information.\endverbatim
**
** [[ Add lengthier description here ]]

Could you add a description of what this class is supposed to represent? Does this represent a linear symbolic bound in terms of neurons? Also, this file should probably be in src/nlr/ since it's very NLR specific.
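(For context, a hedged reading: such a tightening would typically encode one linear inequality over a chosen set of neurons, each identified by a (layer, neuron) pair, e.g. 1.0·x(2,0) − 0.5·x(2,1) ≤ 3.0; whether that is exactly what PolygonalTightening stores is a question for the author.)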

@ido-shm-uel reopened this Feb 27, 2025
SBT for Sigmoid: Corrected criterion for fixed Sigmoid neuron.
SBT for Softmax: Symbolic bounds now stored correctly in _symbolicLb, _symbolicUb.
SBT for Bilinear: Fixed calculation of the symbolic lower/upper bias; fixed the mistaken assumption that both source neurons are from the same layer.
Parameterized SBT for Bilinear: Fixed calculation of the symbolic lower/upper bias; fixed the mistaken assumption that both source neurons are from the same layer.
LP relaxation for Bilinear: Fixed the mistaken assumption that both source neurons are from the same layer.
Parameterized LP relaxation for Bilinear: Fixed the mistaken assumption that both source neurons are from the same layer.
Minor changes to the PreimageApproximation optimization function: size_t -> unsigned, define the sign variable at the start of the function, use the Vector( double, unsigned ) constructor and std::min/std::max.
Fixed typos in the documentation of some tests, corrected one SBT Softmax test and two PreimageApproximation Bilinear tests, and removed all void populateNetworkBackward*() functions from Test_NetworkLevelReasoner.h (all unused, duplicated from Test_LPRelaxation.h).
Interval arithmetic and symbolic bound tightening for Max, Softmax, and Bilinear mistakenly assumed that (1) all source neurons are from the same layer, and (2) the size of the current layer equals the number of source neurons of any Softmax neuron. Fixed both errors; see the sketch below.
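A minimal sketch of the corrected pattern, looking up each source neuron's own layer instead of assuming one shared source layer (names such as getActivationSources(), NeuronIndex and _layerOwner follow Marabou's NLR interfaces; treat this as an illustration, not the exact code in the PR):

// Illustrative sketch only.
for ( unsigned i = 0; i < _size; ++i )
{
    for ( const NeuronIndex &sourceIndex : getActivationSources( i ) )
    {
        // Fetch the layer this particular source neuron belongs to,
        // rather than reusing a single source layer for every source neuron.
        const Layer *sourceLayer = _layerOwner->getLayer( sourceIndex._layer );
        double sourceLb = sourceLayer->getLb( sourceIndex._neuron );
        double sourceUb = sourceLayer->getUb( sourceIndex._neuron );
        // ... use sourceLb / sourceUb in the interval-arithmetic update for neuron i ...
    }
}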
Corrections for SBT and LP relaxations with Sigmoid, Max, Softmax, Bilinear layers.