γ-Quant: Towards Learnable Quantization for Low-bit Pattern Recognition

Published in DAGM German Conference on Pattern Recognition, 2025

Mishal Fatima, Shashank Agnihotri, Marius Bock, Kanchana Vaishnavi Gandikota, Kristof Van Laerhoven, Michael Moeller, Margret Keuper

Abstract

Most pattern recognition models are developed on pre-processed data. In computer vision, for instance, RGB images processed through image signal processing (ISP) pipelines designed to cater to human perception are the most frequent input to image analysis networks. However, many modern vision tasks operate without a human in the loop, raising the question of whether such pre-processing is optimal for automated analysis. Similarly, human activity recognition (HAR) on body-worn sensor data commonly takes normalized floating-point data arising from a high-bit analog-to-digital converter (ADC) as input, even though this approach is highly inefficient in terms of data transmission and significantly affects the battery life of wearable devices. In this work, we target low-bandwidth and energy-constrained settings where sensors are limited to low-bit-depth capture. We propose γ-Quant, i.e., the task-specific learning of a non-linear quantization for pattern recognition. We exemplify our approach on raw-image object detection as well as HAR on wearable sensor data, and demonstrate that raw data with a learnable quantization using as few as 4 bits can perform on par with raw 12-bit data. All code to reproduce our experiments will be released upon acceptance.
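To make the core idea concrete, the sketch below shows gamma-style non-linear quantization of raw sensor values in NumPy. Note this is an illustration only, not the paper's implementation: the function name `gamma_quantize` and the fixed exponent are assumptions, and in γ-Quant the non-linearity would be learned end-to-end with the downstream task rather than fixed as here.

```python
import numpy as np

def gamma_quantize(x, gamma=0.45, bits=4):
    """Illustrative non-linear quantization: compress raw values with a
    gamma curve, quantize uniformly to 2**bits levels, then map back.

    In the paper's setting, gamma (or a more general non-linearity) would
    be a learnable parameter optimized for the recognition task.
    """
    x = np.asarray(x, dtype=np.float64)
    # Normalize raw sensor values (e.g. 12-bit ADC output) to [0, 1].
    x = x / x.max()
    # Non-linear compression: allocates more quantization levels to the
    # low-intensity range, where raw sensor data carries fine detail.
    y = x ** gamma
    # Uniform quantization to 2**bits discrete levels.
    levels = 2 ** bits - 1
    q = np.round(y * levels) / levels
    # Inverse non-linearity back to the (approximately) linear domain.
    return q ** (1.0 / gamma)

# Example: quantize simulated 12-bit raw values down to 4 bits.
raw = np.linspace(1, 4095, 1000)       # simulated 12-bit ADC readings
quantized = gamma_quantize(raw, gamma=0.45, bits=4)
```

With 4 bits, the output takes at most 16 distinct values, but the gamma curve concentrates them where the signal distribution typically needs them most; the paper's contribution is learning this allocation per task instead of fixing it.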

Resources

[pdf] [arxiv]

Bibtex

@InProceedings{Fatima_2025_GCPR,
  author    = {Fatima, Mishal and Agnihotri, Shashank and Bock, Marius and Gandikota, Kanchana Vaishnavi and van Laerhoven, Kristof and Moeller, Michael and Keuper, Margret},
  title     = {γ-Quant: Towards Learnable Quantization for Low-bit Pattern Recognition},
  booktitle = {Proceedings of the DAGM German Conference on Pattern Recognition},
  month     = {September},
  year      = {2025}
}