Backward Analysis using Preimage Approximation #861
ido-shm-uel wants to merge 80 commits into NeuralNetworkVerification:master from …
Conversation
ido-shm-uel
commented
Feb 6, 2025
- Implemented PreimageApproximation Algorithm (simplified, without branch-and-bound) for ReLU, LeakyReLU, Sign and Bilinear Layers (arXiv:2305.03686v4 [cs.SE], arXiv:2405.21063 [cs.LG]); a conceptual sketch of the idea follows this list.
- Added option to perform LP relaxation using backward-preimage-approx.
- Added unit tests at Test_NetworkLevelReasoner.h.
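For readers new to the technique: below is a rough, self-contained sketch of the core loop described in the referenced papers and in the commit log, namely estimating the volume of a parameterised preimage approximation by uniform sampling and searching for the parameter value that maximises it. All names here (estimateVolume, optimizeParameter, the std:: types, the contains callback) are illustrative assumptions, not the code added by this PR.

#include <functional>
#include <random>
#include <vector>

// Fraction of uniformly sampled points of the input box [lb, ub] that the
// parameterised approximation accepts. The `contains` callback stands in for
// evaluating the propagated symbolic / LP bounds at a concrete input point.
double estimateVolume( const std::vector<double> &lb,
                       const std::vector<double> &ub,
                       double alpha,
                       const std::function<bool( const std::vector<double> &, double )> &contains,
                       unsigned numSamples = 1000,
                       unsigned seed = 1 )
{
    std::mt19937 rng( seed );
    std::vector<double> point( lb.size() );
    unsigned hits = 0;
    for ( unsigned s = 0; s < numSamples; ++s )
    {
        for ( unsigned i = 0; i < lb.size(); ++i )
        {
            std::uniform_real_distribution<double> dist( lb[i], ub[i] );
            point[i] = dist( rng );
        }
        if ( contains( point, alpha ) )
            ++hits;
    }
    return static_cast<double>( hits ) / numSamples;
}

// Gradient-free search (a crude stand-in for the coordinate-descent, Adam and
// differential-evolution variants mentioned in the commits) over a single
// relaxation parameter in [0, 1], keeping the value with the largest
// estimated preimage volume.
double optimizeParameter( const std::vector<double> &lb,
                          const std::vector<double> &ub,
                          const std::function<bool( const std::vector<double> &, double )> &contains )
{
    double bestAlpha = 0.0;
    double bestVolume = -1.0;
    for ( double alpha = 0.0; alpha <= 1.0; alpha += 0.05 )
    {
        double volume = estimateVolume( lb, ub, alpha, contains );
        if ( volume > bestVolume )
        {
            bestVolume = volume;
            bestAlpha = alpha;
        }
    }
    return bestAlpha;
}

In the actual algorithm the parameters live per unfixed non-linear neuron and the membership test comes from the parameterised symbolic/LP bounds; this sketch only shows the shape of the sampling-and-optimisation loop.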
Added assignment and store-into-other NLR tests for every activation function.
0-2-0: Implementing layer simulations for every activation function.
0-2-1: Testing layer simulations for every activation function.
0-3-0: Implementing interval arithmetic for all activation functions.
0-3-1: Adding interval arithmetic tests for all activation functions.
0-4-0: Implementing symbolic bound tightening for every activation function.
0-4-1: Adding symbolic bound tightening tests for all activation functions.
0-5-0: Adding rounding constant for LP relaxation propagation.
0-5-1: Implementing forward-backward LP propagation algorithm for all activation functions.
0-5-2: Adding forward-backward LP propagation tests for all activation functions.
0-6-0: Adding MILP bound types for two backpropagation of bounds algorithms, adjusting in Engine.
0-6-1: Adjusting in Options for two new MILP Bound types.
0-6-2: Introducing Polygonal Tightenings (output type of two new algorithms).
0-6-3: Adjusting in NLR, LayerOwner for Polygonal Tightenings.
0-6-4: Introducing parameterised bounds in LPFormulator (two new algorithms will optimize value of parameter).
0-6-5: Current state of PreimageApproximation + cleanup + corrections.
0-6-6: New constants for PreimageApproximation (random seeds).
0-6-7: TEMPORARY CHANGE - use boost v1.85 for Differential Evolution algorithm (gradient-free optimization for PreimageApproximation).
0-6-8: Revert last change.
0-6-9: Adapt DifferentialEvolution from Boost v1.85.
0-6-10: Current state of PreimageApproximation (EstimateVolume implemented).
0-6-11: Add constants for PreimageApproximation.
0-6-12: Current state of PreimageApproximation (implement coordinate descent).
0-6-13: Timing PreimageApproximation with coordinate descent (very slow).
0-6-14: Small fix (same as 0-5-4).
UNSAT certification statistics (NeuralNetworkVerification#850) (Fork Sync)
- Apply 0-5-6 fixes (Backward-Forward Algorithm branch) to this branch. - Add new optimization algorithms for PreimageApproximation: Coordinate Descent, Gradient Descent, Adam Optimizer. - Change default algorithm (Adam) + parameters to reduce runtime. - Apply clang-format to all files changed.
Small fix for bug in SumOfInfeasibilitiesManager::proposePhasePatternUpdateRandomly() when the chosen plConstraintToUpdate is fixed. - Merge idan0610's fork to solve regression tests failing due to fixed Sign Layer splitting (MarabouError 7).
wu-haoze
left a comment
Hi @ido-shm-uel, thanks for this addition. I have some questions and (mostly minor) code style suggestions. Please see my comments!
src/engine/PolygonalTightening.h
Outdated
Remove lines after 95 as they are obsolete since we switched to cmake.
src/nlr/LPFormulator.cpp
Outdated
Use the internal Vector instead. Also, please pass by reference to avoid repeatedly copying the coeffs vectors.
Vector<double> &coeffs
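To illustrate the suggestion (the function below is hypothetical, not code from this PR): with Marabou's internal Vector taken by reference, the coefficients are not copied on every call.

#include "Vector.h"

// Hypothetical helper, for illustration only: evaluates a linear expression
// given its coefficients. Taking coeffs by reference avoids a copy per call.
double evaluateLinearTerm( const Vector<double> &coeffs, const Vector<double> &values )
{
    double sum = 0;
    for ( unsigned i = 0; i < coeffs.size(); ++i )
        sum += coeffs[i] * values[i];
    return sum;
}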
src/nlr/LPFormulator.cpp
Outdated
similarly, please pass by reference and change the datatype.
src/nlr/LPFormulator.cpp
Outdated
// Concrete upper bound: x_f <= ub_b
double width = sourceUb - sourceLb;
- double coeff = ( sourceUb - _alpha * sourceLb ) / width;
+ double weight = ( sourceUb - _alpha * sourceLb ) / width;
What is the rationale behind this name change? It seems we are mostly using coeffs when it comes to symbolic bounds?
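For reference: assuming _alpha is the LeakyReLU slope and the source neuron is unfixed ( sourceLb < 0 < sourceUb ), this expression is the slope of the upper-bound line through ( sourceLb, _alpha * sourceLb ) and ( sourceUb, sourceUb ), i.e. the relaxation

x_f <= ( ( sourceUb - _alpha * sourceLb ) / width ) * x_b + ( ( _alpha - 1 ) * sourceLb * sourceUb ) / width,    with width = sourceUb - sourceLb.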
src/nlr/Layer.h
Outdated
Change to Vector &coeffs here and below.
src/engine/PolygonalTightening.h
Outdated
Could you point me to where this equality operator is used?
src/engine/PolygonalTightening.h
Outdated
Could you add a description of what this class is supposed to represent? Does this represent a linear symbolic bound in terms of neurons? Also, this file should probably be in src/nlr/ since it's very NLR specific.
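For reference, the kind of object the question describes could be sketched as below (purely hypothetical; not the class defined in this PR): one linear inequality sum_i c_i * x_i <= b (or >= b) over a chosen set of neurons.

// Hypothetical illustration only -- not the PolygonalTightening class from this PR.
// A single linear bound over several neurons:
//     sum over (layer, neuron) of coefficient * value  <=  _bound   (or >=).
#include <map>
#include <utility>

struct PolygonalBoundSketch
{
    enum Type { LOWER, UPPER };

    std::map<std::pair<unsigned, unsigned>, double> _coefficients; // (layer, neuron) -> coefficient
    double _bound;                                                 // right-hand-side constant
    Type _type;                                                    // direction of the inequality

    bool operator==( const PolygonalBoundSketch &other ) const
    {
        return _coefficients == other._coefficients && _bound == other._bound &&
               _type == other._type;
    }
};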
SBT for Sigmoid: Corrected criterion for fixed Sigmoid neuron.
SBT for Softmax: Symbolic bounds now stored correctly in _symbolicLb, _symbolicUb.
SBT for Bilinear: Fixed calculation for symbolic lower/upper bias; fixed mistaken assumption that both source neurons are from the same layer.
Parameterized SBT for Bilinear: Fixed calculation for symbolic lower/upper bias; fixed mistaken assumption that both source neurons are from the same layer.
LP relaxation for Bilinear: Fixed mistaken assumption that both source neurons are from the same layer.
Parameterized LP relaxation for Bilinear: Fixed mistaken assumption that both source neurons are from the same layer.
Minor changes to the PreimageApproximation optimization function: size_t -> unsigned, define sign variable at start of function, use Vector( double, unsigned ) constructor and std::min/std::max.
Fixed typos in the documentation of some tests, corrected one SBT Softmax test and two PreimageApproximation Bilinear tests, and removed all void populateNetworkBackward*() functions from Test_NetworkLevelReasoner.h (all unused, duplicated from Test_LPRelaxation.h).
Interval Arithmetic and Symbolic Bound Tightening for Max, Softmax, Bilinear mistakenly assumed that (1) all source neurons are from the same layer, and (2) the size of the current layer is equal to the number of source neurons of any Softmax neuron. Fixed both errors.
…-FIX Corrections for SBT and LP relaxations with Sigmoid, Max, Softmax, Bilinear layers.