The TEXTAROSSA project has been presented by E4 at the HPCAI Advisory Council (Lugano, Switzerland, 3-6 April 2023).
Machine Learning in general, and Deep Neural Networks (DNNs) in particular, have recently been shown to tolerate low-precision representations of their parameters.
This represents an opportunity to accelerate computations, reduce storage, and, most importantly, reduce power consumption. At the edge and on embedded devices, the latter is critical.
For neural networks, two game-changing factors are emerging.
First, the RISC-V open instruction set architecture (ISA) enables the seamless implementation of custom instruction-set extensions. Second, several novel formats for real-number arithmetic are now available. In TEXTAROSSA we aim to merge these two components by developing an accelerator for mixed precision that employs one or more promising low-precision formats (e.g., posit, bfloat16). To this end, we are developing an extension of the RISC-V ISA that allows computation in such formats, as well as their interoperability with standard 32-bit IEEE floats (a.k.a. binary32) and traditional fixed-point formats, providing a compact representation of real numbers with minimal to no accuracy degradation and a compression factor of 2 to 4. In TEXTAROSSA we pursue two main paths for exploiting low-precision formats.
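As an illustration of how a 16-bit format trades precision for a 2x compression factor, the sketch below converts binary32 values to bfloat16 and back in plain C. This is only a software model with our own function names and a round-to-nearest-even policy assumed for the example; it does not describe the TEXTAROSSA hardware.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Software model of binary32 <-> bfloat16 conversion.
 * bfloat16 keeps the sign and the 8-bit exponent of binary32 and
 * truncates the mantissa from 23 to 7 bits, halving the storage. */

static uint16_t f32_to_bf16(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    if ((u & 0x7FFFFFFFu) > 0x7F800000u)      /* NaN: keep it a NaN */
        return (uint16_t)((u >> 16) | 0x0040u);
    /* round-to-nearest-even on the 16 mantissa bits being dropped */
    uint32_t rounding = 0x7FFFu + ((u >> 16) & 1u);
    return (uint16_t)((u + rounding) >> 16);
}

static float bf16_to_f32(uint16_t b)
{
    uint32_t u = (uint32_t)b << 16;           /* re-expand with zero low mantissa bits */
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    uint16_t b = f32_to_bf16(x);              /* 16 bits instead of 32 */
    printf("%.7f -> bf16 0x%04X -> %.7f\n", x, (unsigned)b, bf16_to_f32(b));
    return 0;
}
```

Posit formats pursue the same goal of compact real-number storage, but redistribute precision non-uniformly; they are the focus of the two PPU designs described next.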
The first is the design by UNIPISA of an IP core for a lightweight PPU (Posit Processing Unit), to be connected to a 64-bit RISC-V processor as a co-processor through an extension of the Instruction Set Architecture (ISA). This light PPU focuses on the compression capabilities of posits and performs only conversions, translating binary32 floating-point numbers into 16- and 8-bit posits. It can be paired with a RISC-V core that already has a floating-point unit (e.g., the Ariane 64-bit RISC-V) without disrupting the existing pipeline. Alternatively, its posit-to-fixed conversion modules can enable ALU computation on posit numbers in a RISC-V core that does not support floating point.
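To make the conversion idea concrete, the sketch below is a software model of posit decoding (posit to real value) together with a posit-to-fixed helper in the spirit of the light PPU's integer-only path. The posit configuration is an assumption for the example (the 2022 Posit Standard fixes es = 2, while earlier work also used es = 0 or 1); the actual formats, instruction encodings, and datapaths of the TEXTAROSSA PPU are defined by the project and are not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Decode an n-bit posit with 'es' exponent bits into a double (software model). */
static double posit_decode(uint64_t bits, int n, int es)
{
    const uint64_t mask = (n == 64) ? ~0ULL : ((1ULL << n) - 1);
    bits &= mask;

    if (bits == 0)                   return 0.0;   /* zero */
    if (bits == (1ULL << (n - 1)))   return NAN;   /* NaR (not a real) */

    int sign = (int)((bits >> (n - 1)) & 1);
    if (sign)                        /* negatives: two's complement, decode, negate */
        bits = (~bits + 1) & mask;

    /* Regime: run of identical bits following the sign bit. */
    int first = (int)((bits >> (n - 2)) & 1);
    int run = 0, i = n - 2;
    while (i >= 0 && (int)((bits >> i) & 1) == first) {
        run++;
        i--;
    }
    int k = first ? run - 1 : -run;
    i--;                             /* skip the terminating regime bit */

    /* Exponent: up to 'es' bits; missing low bits count as zero. */
    int exp = 0, ebits = 0;
    while (ebits < es && i >= 0) {
        exp = (exp << 1) | (int)((bits >> i) & 1);
        ebits++;
        i--;
    }
    exp <<= (es - ebits);

    /* Fraction: remaining bits with an implicit leading 1. */
    double frac = 1.0, w = 0.5;
    while (i >= 0) {
        if ((bits >> i) & 1) frac += w;
        w *= 0.5;
        i--;
    }

    return (sign ? -1.0 : 1.0) * ldexp(frac, k * (1 << es) + exp);
}

/* Posit -> signed fixed point with 'qfrac' fractional bits, mimicking a
 * posit-to-fixed path for integer-only cores (no NaR/overflow handling). */
static int32_t posit_to_fixed(uint64_t bits, int n, int es, int qfrac)
{
    return (int32_t)lround(ldexp(posit_decode(bits, n, es), qfrac));
}

int main(void)
{
    printf("posit<16,2> 0x4000 = %g\n", posit_decode(0x4000, 16, 2));  /* 1.0 */
    printf("posit<16,2> 0x5000 = %g\n", posit_decode(0x5000, 16, 2));  /* 4.0 */
    printf("0x5000 as fixed point, 15 fractional bits: %d\n",
           (int)posit_to_fixed(0x5000, 16, 2, 15));                    /* 131072 */
    return 0;
}
```

The opposite direction (binary32 to posit) additionally requires posit-specific rounding and saturation to maxpos/minpos; this is the conversion that the light PPU implements in hardware.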
The second is the design by UNIPISA of a complete Posit Processing Unit (Full PPU, FPPU) that can be connected to a RISC-V core through a further extension of the ISA, adding complete posit arithmetic to the core. This approach delivers efficient real-number arithmetic with 8 or 16 bits (reducing the number of bits by a factor of 4 or 2 compared to binary32), even in low-power processors that are not equipped with a traditional floating-point unit. The low-power performance of the PPU co-processors has also been validated by UNIPISA and POLIMI.
Leading partner: UNIPI
The TEXTAROSSA project won the superlative award for Favorite Zany Acronym from HPCwire. Not the most important of our scientific achievements, but for sure a fun one!
Read the full story here.
Today (May 25, 2022), Alessandro Lonardo (INFN) is presenting the TEXTAROSSA project at the annual INFN Workshop on Computing.
PhD Position (F/M): Optimization of high-performance applications on heterogeneous computing nodes
A PhD position is open in HiePACS, a joint project-team with Bordeaux INP, Bordeaux University and CNRS, and CAMUS, a joint project-team with Strasbourg University and CNRS.
The purpose of the HiePACS project is to efficiently perform frontier simulations arising from challenging research and industrial multiscale applications. Solving these challenging problems requires a multidisciplinary approach involving applied mathematics, computational science and computer science. In applied mathematics, it essentially involves advanced numerical schemes. In computational science, it involves massively parallel computing and the design of highly scalable algorithms and codes to be executed on future petaflop (and beyond) platforms. Through this approach, HiePACS intends to contribute to every step, from the design of new, more scalable, more robust and more accurate high-performance numerical schemes to the optimized implementation of the associated algorithms and codes on very high performance supercomputers.
The CAMUS research team focuses on parallelization, optimization, profiling, modeling, and compilation, with a growing interest in the approaches used and enhanced by the high-performance computing community. Its research activities are organized around five closely related topics, all serving three objectives: performance, correctness and productivity. These topics are: (1) static parallelization and optimization of programs, where all statically detected parallelism is expressed, as well as "hypothetical" parallelism that could eventually be exploited at runtime; (2) profiling and execution-behavior modeling, where expressive models of a program's execution behavior drive dynamic parallelization processes; (3) dynamic parallelization and optimization of programs, where such transformations run inside a virtual machine; (4) object-oriented programming and compilation for multicores, where object parallelism, whether expressed or detected, must result in efficient execution; and (5) proofs of program transformations, where the correctness of many static and dynamic program transformations has to be ensured.
The objectives of the thesis will be to study how the new features of the TEXTAROSSA computing nodes can be used to develop high-performance applications. With this aim, we will study how high-performance task-based applications can be adapted to exploit the full potential of the platform. We will thus consider two existing high-performance libraries, adapting them, designing advanced scheduling strategies, and treating energy-consumption awareness as a major constraint of the work.
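To give a concrete flavour of what "task-based" means here, the sketch below expresses a blocked computation as independent OpenMP tasks that a runtime scheduler is free to map onto the available cores. It is only an illustration of the programming style; the specific libraries, scheduling strategies and energy-awareness mechanisms that the thesis will address are not named in this announcement.

```c
#include <omp.h>
#include <stdio.h>

#define NBLOCKS 8
#define BS      1024

/* Minimal sketch of the task-based style: work is split into tasks and the
 * runtime scheduler decides where and when each task runs. */
int main(void)
{
    static double a[NBLOCKS][BS];
    double partial[NBLOCKS], sum = 0.0;

    for (int i = 0; i < NBLOCKS; i++)
        for (int j = 0; j < BS; j++)
            a[i][j] = 1.0;

    #pragma omp parallel
    #pragma omp single
    for (int i = 0; i < NBLOCKS; i++) {
        /* one independent task per block; the runtime maps tasks to cores */
        #pragma omp task firstprivate(i) shared(a, partial)
        {
            double s = 0.0;
            for (int j = 0; j < BS; j++)
                s += 2.0 * a[i][j];
            partial[i] = s;
        }
    }   /* all tasks are complete when the parallel region ends */

    for (int i = 0; i < NBLOCKS; i++)
        sum += partial[i];

    printf("sum = %.1f (expected %.1f)\n", sum, 2.0 * NBLOCKS * BS);
    return 0;
}
```

Full task-based runtime systems extend this idea with data dependences between tasks and with scheduling decisions that also account for heterogeneous resources and energy consumption, which is exactly the design space the thesis will explore.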