Hi,
I'm comparing KLU.jl with kvxopt, a Python package that wraps SuiteSparse's KLU. For the same 19928×19928 sparse matrix, KLU.jl takes roughly twice as long for the factorization. I'm not sure where the overhead comes from, apart from the megabytes of memory it allocates. Tracking this down would help, since KLU.jl is used in many applications.
Below is the Julia and Python code with timing data from my PC. The data file is also attached.
Julia code:
using NPZ
using KLU
using BenchmarkTools
using SparseArrays
jac9241 = npzread("./src/sciml/9241jac.npz");
I = vec(jac9241["I"] .+ 1);
J = vec(jac9241["J"] .+ 1);
V = vec(jac9241["V"]);
n = jac9241["n"]
b = jac9241["b"]
A = sparse(I, J, V, n, n);
factor = KLU.lu(A);
# $-interpolation so BenchmarkTools times the call itself, not global-variable access
@btime KLU.lu!($factor, $A);
# 31.038 ms (40 allocations: 21.33 MiB)
factor \ b
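One thing worth ruling out on the Julia side is benchmarking overhead: `@btime` on non-interpolated globals also measures global-variable access and can report extra allocations. Below is a minimal self-contained sketch of the refactorize-in-place pattern with interpolation, using a synthetic matrix (the attached data file is not loaded here) and assuming KLU.jl's exported `klu`/`klu!` API:

```julia
using KLU, SparseArrays, BenchmarkTools

# Synthetic stand-in for the 9241jac system: sparse, diagonally dominant.
n = 1_000
A = sprand(n, n, 5 / n) + spdiagm(0 => fill(10.0, n))
b = rand(n)

K = klu(A)              # symbolic + numeric factorization
@btime klu!($K, $A);    # refactorize only; $-interpolation benchmarks the call,
                        # not the cost of resolving the globals K and A
x = K \ b               # solve using the existing factorization
```

Reusing `K` via `klu!` skips the symbolic analysis on each call, which is the fair counterpart to timing only `klu.numeric` on the Python side.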
Python code:
from kvxopt import klu, spmatrix, matrix
import numpy as np
jac9241 = np.load("9241jac.npz")
I = jac9241.f.I
J = jac9241.f.J
V = jac9241.f.V
n = jac9241.f.n
A = spmatrix(V, I, J, (n, n), 'd')
b = jac9241.f.b
b = matrix(b)
def test_klu(A, b):
    b_new = matrix(b)       # copy, since klu.linsolve overwrites its argument in place
    klu.linsolve(A, b_new)  # solve with the copy so b is not mutated across timing loops
%timeit test_klu(A, b)
# 15.6 ms ± 478 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# individual symbolic and numeric factorization routines:
%timeit F = klu.symbolic(A)
# 6.72 ms ± 98 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
F = klu.symbolic(A)
%timeit N = klu.numeric(A, F)
# 8.48 ms ± 301 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
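The symbolic/numeric split above is also why reusing a factorization pays off for repeated solves. As a neutral, self-contained illustration of that pattern (using SciPy's SuperLU rather than kvxopt, since the attached data isn't loaded here and kvxopt may not be installed):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Synthetic stand-in for the 9241jac system: sparse, diagonally dominant.
n = 2000
rng = np.random.default_rng(0)
A = (sp.random(n, n, density=5 / n, random_state=0, format="csc")
     + 10.0 * sp.eye(n, format="csc"))
b = rng.standard_normal(n)

lu = splu(A)      # analysis + factorization done once
x = lu.solve(b)   # repeated solves reuse the stored factorization
assert np.allclose(A @ x, b)
```

Timing only the repeated-solve (or numeric-refactorization) step on both sides keeps the comparison apples-to-apples, since a one-off symbolic analysis can dominate a single-shot benchmark.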
9241jac.zip (needs to be extracted)