Parallel GEMM-based convolution for deep learning on multicore RISC-V processors

Bibliographic Details
Published in: The Journal of Supercomputing, 2024, Vol. 80 (9), pp. 12623–12643
Main Authors: Ramírez, Cristian; Castelló, Adrián; Martínez, Héctor; Quintana-Ortí, Enrique S.
Format: Article
Language: English
Description
Summary: We address the efficient implementation of the convolution operator on the GAP8 parallel ultra-low power platform (PULP), a heterogeneous multicore processor equipped with a fabric controller (FC); a cluster of eight compute cores; and a four-level memory hierarchy with scratchpads instead of conventional, hardware-assisted cache memories. Our solution for this platform transforms the convolution into a general matrix–matrix multiplication (gemm) via the lowering approach, demonstrating that it is possible to attain reasonable performance on the GAP8 by carefully adapting techniques such as tiling and loop parallelism, which are mainstream in the multi-threaded, cache-aware realization of gemm.
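The lowering approach mentioned in the summary (often called im2col) unfolds the input's sliding windows into the rows of a matrix, so that one matrix–matrix product computes every output element of the convolution at once. The sketch below illustrates this in plain Python with a tiny 2D example; the dimensions, helper names, and naive triple-loop gemm are illustrative only, and the paper's actual GAP8 implementation additionally applies tiling and multicore loop parallelism, which are not shown here.

```python
# Minimal sketch of the "lowering" (im2col) approach: a 2D convolution
# is recast as one GEMM. Helper names and sizes are hypothetical, not
# taken from the paper; the GAP8 version adds tiling and parallel loops.

def im2col(x, kh, kw):
    """Unfold every kh x kw patch of a 2D input (list of lists) into a row."""
    h, w = len(x), len(x[0])
    rows = []
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            rows.append([x[i + di][j + dj]
                         for di in range(kh) for dj in range(kw)])
    return rows

def gemm(a, b):
    """Plain triple-loop matrix product: (m x k) * (k x n) -> (m x n)."""
    m, k, n = len(a), len(b), len(b[0])
    c = [[0] * n for _ in range(m)]
    for i in range(m):
        for p in range(k):
            for j in range(n):
                c[i][j] += a[i][p] * b[p][j]
    return c

# Convolution of a 4x4 input with a 3x3 filter, via lowering:
x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
f = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]                            # diagonal filter
patches = im2col(x, 3, 3)                  # 4 rows of 9 values each
flat_f = [[v] for row in f for v in row]   # filter flattened to a 9 x 1 column
out = gemm(patches, flat_f)                # 4 x 1 result, i.e. a 2x2 output map
```

Each row of `patches` holds one receptive field, so the single gemm replaces the seven-deep loop nest of a direct convolution; the cost is the extra memory for the unfolded matrix, which is why careful tiling matters on a scratchpad-based device like the GAP8.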
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-024-05927-y