Replies: 1 comment 1 reply
-
Hi! Looking at the implementation in MathNet, the performance issue stems from the lack of specialized handling for the Dense * Sparse multiplication case. When the left operand is a dense matrix, the library does not recognize that the right operand is sparse and falls back to a general-purpose multiplication routine.

Specifically, the current implementation applies the full traversal logic of dense matrix multiplication, iterating over every possible index in the computation of C[i, j] += A[i, k] * B[k, j]. This disregards the key advantage of the CSR (Compressed Sparse Row) format, which naturally supports efficient row-wise traversal of the non-zero elements; that feature is left unexploited here. Worse, to retrieve the value of B[k, j] it has to search through the non-zero column indices of row k on every single access.

Consequently, even for simple cases like identity matrix multiplication, where almost every element is zero, the implementation introduces excessive overhead, making Dense * Sparse operations significantly slower than their fully dense counterparts.

Given the limited maintenance activity on this repository, the maintainers may not have the time to address this. Maybe I will propose an optimized implementation through an issue.
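For reference, here is a minimal sketch of the kind of specialization I have in mind. This is not MathNet's actual code; `CsrMatrix` and `Multiply` are hypothetical names. The point is only that a Dense * CSR product can walk the CSR arrays row by row and touch nothing but the stored non-zeros of B:

```csharp
using System;

// Hypothetical stand-in for a CSR-backed sparse matrix (not a MathNet type):
// just the three usual CSR arrays.
public sealed class CsrMatrix
{
    public int Rows, Cols;
    public int[] RowPointers;    // length Rows + 1; row k spans [RowPointers[k], RowPointers[k + 1])
    public int[] ColumnIndices;  // column index of each stored value
    public double[] Values;      // stored non-zero values
}

public static class DenseTimesSparse
{
    // C[i, j] += A[i, k] * B[k, j], but k and j are taken only from the
    // non-zero pattern of B, so the cost is O(rows(A) * nnz(B)) instead of
    // O(rows(A) * cols(A) * cols(B)), and no per-element lookup into B is needed.
    public static double[,] Multiply(double[,] a, CsrMatrix b)
    {
        int rowsA = a.GetLength(0);
        int colsA = a.GetLength(1);
        if (colsA != b.Rows) throw new ArgumentException("Dimension mismatch.");

        var c = new double[rowsA, b.Cols];
        for (int k = 0; k < b.Rows; k++)                              // rows of B
        {
            for (int p = b.RowPointers[k]; p < b.RowPointers[k + 1]; p++)
            {
                int j = b.ColumnIndices[p];
                double bkj = b.Values[p];
                for (int i = 0; i < rowsA; i++)                       // add bkj * (column k of A) into column j of C
                {
                    c[i, j] += a[i, k] * bkj;
                }
            }
        }
        return c;
    }
}
```

For two n x n identity matrices nnz(B) = n, so the inner work is O(n^2) rather than the O(n^3) of the generic dense triple loop.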
-
I found that the order of multiplying a dense matrix by a sparse matrix really matters:
These are the numbers I got for multiplying two identity matrices of the size shown on the x-axis (e.g. 200x200).

Any idea if this is to be expected, or if there might be a mistake in the mathnet implementation?
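For reference, this is roughly how I measured it. A minimal sketch, not a proper benchmark harness; it assumes the MathNet.Numerics package and its DenseIdentity/SparseIdentity builders:

```csharp
using System;
using System.Diagnostics;
using MathNet.Numerics.LinearAlgebra;

class Program
{
    static void Main()
    {
        foreach (var n in new[] { 100, 200, 400, 800 })
        {
            var dense = Matrix<double>.Build.DenseIdentity(n);
            var sparse = Matrix<double>.Build.SparseIdentity(n);

            var sw = Stopwatch.StartNew();
            _ = dense * sparse;                      // Dense * Sparse
            var denseTimesSparse = sw.Elapsed;

            sw.Restart();
            _ = sparse * dense;                      // Sparse * Dense
            var sparseTimesDense = sw.Elapsed;

            Console.WriteLine($"{n}x{n}: Dense*Sparse = {denseTimesSparse.TotalMilliseconds:F1} ms, " +
                              $"Sparse*Dense = {sparseTimesDense.TotalMilliseconds:F1} ms");
        }
    }
}
```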