Approximate computing for AI

By TEXTAROSSA Project

Machine Learning in general, and Deep Neural Networks (DNNs) in particular, have recently been shown to tolerate low-precision representations of their parameters.

This represents an opportunity to accelerate computations, reduce storage, and, most importantly, reduce power consumption. At the edge and on embedded devices, the latter is critical.

Two game-changing developments are converging in the neural network field.

First, the RISC-V open instruction set architecture (ISA) allows custom instruction-set extensions to be implemented seamlessly. Second, several novel formats for real number arithmetic have emerged. In TEXTAROSSA we aim to merge these two components by developing an accelerator for mixed precision that employs one or more promising low-precision formats (e.g., posit, bfloat16). To this end, we are developing an extension of a standard RISC-V ISA that supports computation with such formats, as well as their interoperability with the standard 32-bit IEEE floats (a.k.a. binary32) and traditional fixed-point formats, so as to provide a compact representation of real numbers with little or no accuracy degradation and a compression factor of 2 to 4. In TEXTAROSSA we pursue two main paths for exploiting low-precision formats.
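As a rough illustration of the storage side of this idea (a host-side sketch, not the TEXTAROSSA hardware or its ISA extension), the following C snippet converts a binary32 value to the bfloat16 format by keeping only the upper 16 bits of its IEEE-754 bit pattern, which preserves the 8-bit exponent range while halving storage; the function names are made up for this example.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: binary32 -> bfloat16 by truncating the mantissa
 * from 23 to 7 bits, with round-to-nearest-even (NaN handling omitted). */
static uint16_t float_to_bfloat16(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);                  /* reinterpret the float's bits */
    uint32_t round = 0x7FFFu + ((bits >> 16) & 1u);  /* round to nearest, ties to even */
    return (uint16_t)((bits + round) >> 16);
}

/* Hypothetical helper: bfloat16 -> binary32 by zero-filling the lost bits. */
static float bfloat16_to_float(uint16_t b) {
    uint32_t bits = (uint32_t)b << 16;
    float x;
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void) {
    float w = 0.15625f;                              /* an example DNN weight */
    uint16_t c = float_to_bfloat16(w);               /* 2x smaller than binary32 */
    printf("%f -> 0x%04X -> %f\n", w, c, bfloat16_to_float(c));
    return 0;
}
```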

The first is the design by UNIPISA of an IP core for a lightweight PPU (Posit Processing Unit) to be connected to a 64-bit RISC-V processor as a co-processor, together with an extension of the ISA [1]. This unit focuses on the compression abilities of posits: it is a conversion-only co-processor, called the light PPU, which converts binary32 floating-point numbers to posits with 16 and 8 bits. It can be paired with a RISC-V core that already has a floating-point unit (e.g., the 64-bit Ariane RISC-V core) without disrupting the existing pipeline. Alternatively, its posit-to-fixed conversion modules can be used to enable ALU computation on posit numbers on a RISC-V core that has no floating-point support.
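To make the posit side of the conversion path concrete, here is a minimal software sketch of a posit decoder, assuming the posit⟨8,0⟩ configuration (8 bits, no exponent field); the function name and interface are illustrative and do not correspond to the light PPU's actual hardware interface. It walks the standard posit layout: sign bit, a run of regime bits, then fraction bits.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative software decoder for posit<8,0>: sign bit, regime run,
 * no exponent field, remaining bits are the fraction. */
static double posit8_to_double(uint8_t p) {
    if (p == 0x00) return 0.0;
    if (p == 0x80) return NAN;                   /* NaR (Not a Real) */
    int negative = (p & 0x80) != 0;
    if (negative) p = (uint8_t)(-p);             /* negatives are two's complement */

    /* Regime: run of identical bits after the sign, ended by the opposite bit. */
    int first = (p >> 6) & 1;
    int run = 0, i = 6;
    while (i >= 0 && ((p >> i) & 1) == first) { run++; i--; }
    int k = first ? run - 1 : -run;              /* regime value */
    i--;                                         /* skip the terminating bit */

    int nfrac = i + 1;                           /* bits left for the fraction */
    if (nfrac < 0) nfrac = 0;
    int frac = nfrac > 0 ? (p & ((1 << nfrac) - 1)) : 0;

    double value = (1.0 + (double)frac / (1 << nfrac)) * ldexp(1.0, k);
    return negative ? -value : value;
}

int main(void) {
    /* 0x50 encodes 1.5, 0x01 is minpos (2^-6), 0x7F is maxpos (2^6). */
    printf("%g %g %g\n", posit8_to_double(0x50),
           posit8_to_double(0x01), posit8_to_double(0x7F));
    return 0;
}
```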

The second is the design by UNIPISA of a complete Posit Processing Unit (the Full PPU, or FPPU) that can be connected to a RISC-V processor core through a further extension of the ISA, adding complete posit arithmetic capabilities to the core. This approach delivers efficient real number arithmetic with 8 or 16 bits (reducing the number of bits used by a factor of 4 or 2 compared to binary32), even in low-power processors that are not equipped with a traditional floating-point unit. The low-power performance of the PPU co-processors has also been validated by UNIPISA and POLIMI [2].
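For a flavour of what native posit arithmetic involves, the sketch below emulates a posit⟨8,0⟩ addition in software: decode the operands, add in double precision, and re-encode the result. The exhaustive nearest-code encoder is purely for illustration; an actual FPPU operates on posit operands directly in hardware, and all function names here are hypothetical. The decoder is the same as in the previous sketch, repeated so this example compiles on its own.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Same illustrative posit<8,0> decoder as in the previous sketch. */
static double posit8_to_double(uint8_t p) {
    if (p == 0x00) return 0.0;
    if (p == 0x80) return NAN;                       /* NaR */
    int negative = (p & 0x80) != 0;
    if (negative) p = (uint8_t)(-p);
    int first = (p >> 6) & 1, run = 0, i = 6;
    while (i >= 0 && ((p >> i) & 1) == first) { run++; i--; }
    int k = first ? run - 1 : -run;
    i--;                                             /* skip regime terminator */
    int nfrac = (i + 1 > 0) ? i + 1 : 0;
    int frac = nfrac > 0 ? (p & ((1 << nfrac) - 1)) : 0;
    double v = (1.0 + (double)frac / (1 << nfrac)) * ldexp(1.0, k);
    return negative ? -v : v;
}

/* Illustrative encoder: exhaustively pick the posit8 code closest to x.
 * A real FPPU rounds directly in hardware; brute force is for clarity only. */
static uint8_t double_to_posit8(double x) {
    uint8_t best = 0;
    double best_err = INFINITY;
    for (int c = 0; c < 256; c++) {
        if (c == 0x80) continue;                     /* skip NaR */
        double err = fabs(posit8_to_double((uint8_t)c) - x);
        if (err < best_err) { best_err = err; best = (uint8_t)c; }
    }
    return best;
}

int main(void) {
    uint8_t a = 0x50, b = 0x20;                      /* 1.5 and 0.5 in posit<8,0> */
    /* Emulated posit addition: decode, add in double precision, re-encode. */
    uint8_t sum = double_to_posit8(posit8_to_double(a) + posit8_to_double(b));
    printf("0x%02X + 0x%02X = 0x%02X (decodes to %g)\n",
           a, b, sum, posit8_to_double(sum));
    return 0;
}
```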

Leading partner: UNIPI

References

  1. M. Cococcioni, F. Rossi, E. Ruffaldi and S. Saponara, “A Lightweight Posit Processing Unit for RISC-V Processors in Deep Neural Network Applications,” in IEEE Transactions on Emerging Topics in Computing, vol. 10, no. 4, pp. 1898-1908, 1 Oct.-Dec. 2022, doi: 10.1109/TETC.2021.3120538.
  2. M. Piccoli, D. Zoni, W. Fornaciari, M. Cococcioni, F. Rossi, E. Ruffaldi, S. Saponara and G. Massari, "Dynamic Power Consumption of the Full Posit Processing Unit: Analysis and Experiments," in PARMA-DITAM 2023, Open Access Series in Informatics (OASIcs), Dagstuhl, Germany, 2023, to appear.
