Commit ccd047cb authored by Aart Bik

[mlir][sparse] optimize COO index handling

By using a shared index pool, we reduce the footprint of each "Element"
in the COO scheme and, in addition, reduce the overhead of allocating
indices (trading many small per-element vector allocations for growth of
a single shared vector). When the capacity is known, this means *all*
allocation can be done in advance.
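
To make the idea concrete, below is a minimal, self-contained C++ sketch of a COO container that keeps all element indices in one shared pool and stores only an offset per element. The names (`SparseTensorCOO`, `Element`, `add`, `indexPool`) are illustrative only and do not claim to match the actual MLIR runtime code touched by this commit.

```
// Illustrative sketch of a COO scheme with a shared index pool.
// All names are hypothetical; the real MLIR runtime may differ in detail.
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

template <typename V>
struct Element {
  uint64_t pos; // offset of this element's indices in the shared pool
  V value;
};

template <typename V>
class SparseTensorCOO {
public:
  SparseTensorCOO(std::vector<uint64_t> dimSizes, uint64_t capacity = 0)
      : dimSizes(std::move(dimSizes)) {
    if (capacity) {
      // When the number of nonzeros is known, reserve everything up front
      // so no further allocation happens while reading elements.
      elements.reserve(capacity);
      indexPool.reserve(capacity * this->dimSizes.size());
    }
  }

  // Appends one element; its indices go into the shared pool instead of
  // each element owning its own small vector.
  void add(const std::vector<uint64_t> &ind, V val) {
    assert(ind.size() == dimSizes.size());
    uint64_t pos = indexPool.size();
    indexPool.insert(indexPool.end(), ind.begin(), ind.end());
    elements.push_back({pos, val});
  }

  // Returns a pointer to the indices of element e inside the pool.
  const uint64_t *getIndices(const Element<V> &e) const {
    return indexPool.data() + e.pos;
  }

private:
  std::vector<uint64_t> dimSizes;
  std::vector<Element<V>> elements; // one small entry per nonzero
  std::vector<uint64_t> indexPool;  // all indices, stored contiguously
};
```

Storing an offset rather than a raw pointer keeps elements valid even if the pool reallocates; when the capacity is passed to the constructor, both vectors are reserved once and reading never reallocates.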

This is a big win. For example, reading the matrix SK-2005, with dimensions
50,636,154 x 50,636,154 and 1,949,412,601 nonzero elements, improves
as follows (times in ms), for an overall speedup of about 3.5x:

```
SK-2005        before         after  speedup
--------------------------------------------
read       305,086.65    180,318.12     1.69
sort     2,836,096.23    510,492.87     5.56
pack       364,485.67    312,009.96     1.17
--------------------------------------------
TOTAL    3,505,668.56  1,002,820.95     3.50
```

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D124502
parent c7bb5ac5