Conference paper, 2024

Multiple-base Logarithmic Quantization and Application in Reduced Precision AI Computations

Abstract

Logarithmic quantization and arithmetic have proven useful for optimizing the performance of large ML models. In this article, we present results demonstrating significantly better quantization signal-to-noise ratio with multiple-base logarithmic number systems (MDLNS) than with floating-point quantization using the same number of bits. On the hardware level, we present details of our Xilinx VCU-128 FPGA designs for dot-product and matrix-vector computations. The MDLNS matrix-vector design significantly outperforms an equivalent fixed-point binary design in area (A), time (T) complexity, and power consumption, as evidenced by a 4× improvement in the AT² VLSI performance metric and a 57% increase in computational throughput per watt compared to fixed-point arithmetic.
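As a rough illustration of the idea (not the paper's implementation), the sketch below quantizes values to a two-base (2, 3) logarithmic form x ≈ ±2^a · 3^b by exhaustive search over small integer exponent ranges, then measures the resulting quantization SNR on a random vector. The base pair, exponent ranges, and single-digit representation are assumptions made only for this example.

```python
import numpy as np

def mdlns_quantize(x, a_range=range(-8, 8), b_range=range(-4, 4)):
    """Return (sign, a, b) minimizing |x - sign * 2**a * 3**b|.

    Exponent ranges and the (2, 3) base pair are illustrative assumptions,
    not the parameters used in the paper.
    """
    if x == 0.0:
        return (0, 0, 0)
    sign = 1 if x > 0 else -1
    target = abs(x)
    best = None
    for a in a_range:
        for b in b_range:
            approx = (2.0 ** a) * (3.0 ** b)
            err = abs(target - approx)
            if best is None or err < best[0]:
                best = (err, a, b)
    return (sign, best[1], best[2])

def mdlns_value(sign, a, b):
    """Reconstruct the real value from a (sign, a, b) MDLNS digit."""
    return sign * (2.0 ** a) * (3.0 ** b)

# Quantization SNR of a random vector, for comparison against a float format
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
xq = np.array([mdlns_value(*mdlns_quantize(v)) for v in x])
snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - xq) ** 2))
print(f"quantization SNR: {snr_db:.1f} dB")
```

In such a representation, multiplication reduces to integer additions of the exponent pairs (a1 + a2, b1 + b2), which is the property that enables compact dot-product and matrix-vector hardware of the kind described in the abstract.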

Dates and versions

lirmm-04638183, version 1 (08-07-2024)

Identifiers

Cite

Vassil Dimitrov, Richard Ford, Laurent Imbert, Arjuna Madanayake, Nilan Udayanga, et al.. Multiple-base Logarithmic Quantization and Application in Reduced Precision AI Computations. ARITH 2024 - 31st IEEE International Symposium on Computer Arithmetic, Jun 2024, Málaga, Spain. pp.48-51, ⟨10.1109/ARITH61463.2024.00017⟩. ⟨lirmm-04638183⟩