Description
The implementation of the total variation (TV) regularized bicubic spline from #83 still allocates a fair amount of memory. Most of these allocations happen inside JuMP.jl as it constructs the model (variables, constraints, and objective function) that is then passed to HiGHS.jl. The allocations could be reduced significantly by hand-rolling the sparse matrices that define the LP (linear program). The TV bicubic spline problem has a fair amount of structure: each absolute-value constraint converts into a pair of linear inequalities, and the objective function is linear.
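To illustrate the structure being exploited, here is a minimal sketch in Python/scipy of a 1-D analogue (the data `y`, the tolerance `delta`, and the 1-D difference operator are all illustrative assumptions, not the project's actual spline formulation): each absolute value `|a| <= t` is replaced by the pair of linear inequalities `a <= t` and `-a <= t`, which turns the TV problem into a plain LP that HiGHS can solve.

```python
import numpy as np
from scipy import sparse
from scipy.optimize import linprog

# Hypothetical 1-D analogue of the TV problem: fit x to data y while
# minimizing total variation sum_i |x[i+1] - x[i]|, with the data fit
# enforced by |x[i] - y[i]| <= delta.  Each absolute value |a| <= t
# becomes the pair of linear inequalities  a <= t  and  -a <= t.
y = np.array([0.0, 0.1, 1.2, 1.0, 1.1])
n = len(y)
delta = 0.3

# Variables are [x (n of them), t (n-1 of them)]; minimize sum of t.
c = np.concatenate([np.zeros(n), np.ones(n - 1)])

# First-difference operator D ((n-1) x n) as a sparse matrix.
D = sparse.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
I_t = sparse.identity(n - 1)
I_x = sparse.identity(n)
Z = sparse.csr_matrix((n, n - 1))

# Stacked inequality system:
#   D x - t <= 0,   -D x - t <= 0,   x <= y + delta,   -x <= -(y - delta)
A_ub = sparse.vstack([
    sparse.hstack([D, -I_t]),
    sparse.hstack([-D, -I_t]),
    sparse.hstack([I_x, Z]),
    sparse.hstack([-I_x, Z]),
]).tocsc()
b_ub = np.concatenate([np.zeros(2 * (n - 1)), y + delta, -(y - delta)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (2 * n - 1), method="highs")
x = res.x[:n]
print(res.fun, np.round(x, 3))
```

The same reformulation carries over to the bicubic spline case; only the difference operator and the number of auxiliary variables change.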
One option would be to build the large sparse constraint matrix directly and use MathOptInterface.jl to package everything for HiGHS. MathOptInterface is the internal abstraction layer that JuMP uses to build the model before handing it to the optimizer. Bypassing JuMP and targeting MathOptInterface directly makes it possible to manage the memory and control allocations explicitly.
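As a sketch of what "hand-rolling" the sparse matrix means (again in Python/scipy rather than Julia/MathOptInterface, and assuming an illustrative 1-D structure with a difference operator `D` and auxiliary variables `t` bounding `|Dx|`), the absolute-value block can be assembled by writing `(row, col, value)` triplets into preallocated arrays in a single pass, instead of allocating intermediate matrices through a modeling layer:

```python
import numpy as np
from scipy import sparse

# Hand-rolled assembly of the block [D -I; -D -I] encoding
# |x[i+1] - x[i]| <= t[i] via preallocated triplet arrays.
# The size n and the 1-D structure are illustrative assumptions.
n = 5
m = n - 1                # number of |x[i+1] - x[i]| <= t[i] pairs
nnz = 2 * 3 * m          # each inequality row holds 3 nonzeros
rows = np.empty(nnz, dtype=np.int64)
cols = np.empty(nnz, dtype=np.int64)
vals = np.empty(nnz)
k = 0
for i in range(m):
    #  x[i+1] - x[i] - t[i] <= 0
    rows[k:k + 3] = i
    cols[k:k + 3] = (i, i + 1, n + i)
    vals[k:k + 3] = (-1.0, 1.0, -1.0)
    k += 3
    # -x[i+1] + x[i] - t[i] <= 0
    rows[k:k + 3] = m + i
    cols[k:k + 3] = (i, i + 1, n + i)
    vals[k:k + 3] = (1.0, -1.0, -1.0)
    k += 3
A = sparse.coo_matrix((vals, (rows, cols)),
                      shape=(2 * m, 2 * n - 1)).tocsc()

# Reference construction via generic stacking, for comparison.
D = sparse.diags([-np.ones(m), np.ones(m)], [0, 1], shape=(m, n))
I = sparse.identity(m)
A_ref = sparse.vstack([sparse.hstack([D, -I]),
                       sparse.hstack([-D, -I])]).tocsc()
print((A - A_ref).nnz)  # -> 0, the two constructions agree
```

In Julia the same triplet arrays could be handed to `SparseArrays.sparse` and the resulting columns passed to the solver through MathOptInterface, with the allocation pattern fully under the caller's control.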