HapticLLaMA: A Multimodal Sensory Language Model for Haptic Captioning

Computer Vision & Multimodal AI
Published on arXiv: 2508.06475v1
Authors

Guimin Hu, Daniel Hershcovich, Hasti Seifi

Abstract

Haptic captioning is the task of generating natural language descriptions from haptic signals, such as vibrations, for use in virtual reality, accessibility, and rehabilitation applications. While previous multimodal research has focused primarily on vision and audio, haptic signals for the sense of touch remain underexplored. To address this gap, we formalize the haptic captioning task and propose HapticLLaMA, a multimodal sensory language model that interprets vibration signals into descriptions in a given sensory, emotional, or associative category. We investigate two types of haptic tokenizers, a frequency-based tokenizer and an EnCodec-based tokenizer, that convert haptic signals into sequences of discrete units, enabling their integration with the LLaMA model. HapticLLaMA is trained in two stages: (1) supervised fine-tuning using the LLaMA architecture with LoRA-based adaptation, and (2) fine-tuning via reinforcement learning from human feedback (RLHF). We assess HapticLLaMA's captioning performance using both automated n-gram metrics and human evaluation. HapticLLaMA demonstrates strong capability in interpreting haptic vibration signals, achieving a METEOR score of 59.98 and a BLEU-4 score of 32.06. Additionally, over 61% of the generated captions received human ratings above 3.5 on a 7-point scale, with RLHF yielding a 10% improvement in the overall rating distribution, indicating stronger alignment with human haptic perception. These findings highlight the potential of large language models to process and adapt to sensory data.
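
To make the tokenization step concrete, the minimal sketch below shows one plausible frequency-based discretization of a vibration signal and how the resulting units could be wrapped into a text prompt for a LLaMA-style model. The function name, sample rate, frame parameters, and the <hap_*> token format are illustrative assumptions; the paper's actual frequency-based and EnCodec-based tokenizers are not specified in this summary.

import numpy as np
from scipy.signal import stft

def frequency_tokenize(vibration, sample_rate=8000, n_bins=256, frame_len=256, hop=128):
    """Map a 1-D vibration signal to a sequence of discrete token IDs.

    Illustrative scheme only: each STFT frame is reduced to its dominant
    frequency bin, and that bin index becomes the token ID.  The paper's
    actual frequency-based tokenizer may differ.
    """
    _, _, spec = stft(vibration, fs=sample_rate, nperseg=frame_len,
                      noverlap=frame_len - hop)
    magnitudes = np.abs(spec)                 # shape: (freq_bins, time_frames)
    dominant = magnitudes.argmax(axis=0)      # dominant frequency bin per frame
    # Cap indices so token IDs stay within a fixed vocabulary of n_bins haptic units.
    tokens = np.clip(dominant, 0, n_bins - 1)
    return tokens.tolist()

# Hypothetical usage: wrap haptic tokens as special strings in a text prompt.
signal = np.sin(2 * np.pi * 120 * np.arange(8000) / 8000)   # 1 s, 120 Hz vibration
haptic_tokens = frequency_tokenize(signal)
prompt = "".join(f"<hap_{t}>" for t in haptic_tokens) + \
         "\nDescribe this vibration in the emotional category:"

An EnCodec-based tokenizer would instead take its discrete units from the codebook indices produced by a pretrained neural audio codec rather than from hand-crafted frequency features.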

Paper Summary

Key Innovation
The proposed model, HapticLLaMA, is a multimodal sensory language model that translates vibration signals into natural language descriptions in a given category (sensory, emotional, or associative). Two haptic tokenizers, one frequency-based and one built on EnCodec, convert the vibration signals into sequences of discrete units that can be fed to the LLaMA model alongside text. The model is trained in two stages: supervised fine-tuning of LLaMA with LoRA adapters, followed by reinforcement learning from human feedback (RLHF).
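As a rough illustration of stage one, the sketch below sets up LoRA adapters on a LLaMA checkpoint with the Hugging Face transformers and peft libraries, after registering the discrete haptic units as extra tokens. The checkpoint name, LoRA hyperparameters, and haptic vocabulary size are assumptions for illustration, not values reported by the paper.

# Minimal sketch of stage-1 supervised fine-tuning with LoRA adapters (PEFT).
# Base checkpoint, LoRA hyperparameters, and the haptic vocabulary below are
# assumptions for illustration; they are not taken from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"                      # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Register discrete haptic units (e.g. "<hap_0>" ... "<hap_255>") as new tokens
# so tokenized vibration signals can be interleaved with text.
tokenizer.add_tokens([f"<hap_{i}>" for i in range(256)])
model.resize_token_embeddings(len(tokenizer))

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],               # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                     # only the LoRA weights train
# Stage 2 (RLHF) would further tune this adapted model against a reward signal
# derived from human ratings of the generated captions.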
Practical Impact
HapticLLaMA has the potential to enhance AI systems' ability to interpret human perception and behavior by providing a richer and more nuanced understanding of user context and sensory experiences. Applications include user interactions in virtual reality, physical rehabilitation, blind user navigation, and gaming.
Analogy / Intuitive Explanation
Imagine trying to describe the feeling of a gentle breeze on your skin or the sensation of playing a musical instrument. Haptic captioning is like trying to put those feelings into words. It's a challenging task that requires understanding the complex relationships between sensory inputs and human perception. HapticLLaMA is like a language model that can "hear" the vibrations and translate them into descriptive text, allowing us to better understand and interact with the world around us.
Paper Information
Categories: cs.CL
Published Date:
arXiv ID: 2508.06475v1
