Attentive Multimodal Learning on Sensor Data using Hyperdimensional Computing

Proceedings of the 22nd International Conference on Information Processing in Sensor Networks (IPSN 2023)

Abstract
With the continued advancement of ubiquitous computing and sensor technologies, a massive population of multimodal sensors is emerging at the edge, which poses significant challenges for fusing their data. In this poster we propose MultimodalHD, a novel Hyperdimensional Computing (HD)-based design for learning from multimodal data on edge devices. We use HD to encode raw sensory data into high-dimensional, low-precision hypervectors; the per-modality hypervectors are then fed to an attentive fusion module that learns richer representations via inter-modality attention. Our experiments on multimodal time-series datasets show that MultimodalHD is highly efficient: it achieves 17x and 14x speedups in training time per epoch on the HAR and MHEALTH datasets, respectively, compared with state-of-the-art RNNs, while maintaining comparable accuracy.
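To make the pipeline concrete, below is a minimal sketch of the two stages the abstract describes: HD encoding of a raw sensor window and attention-based fusion across modalities. This is not the paper's implementation; it assumes random-projection HD encoding with sign quantization and scaled dot-product inter-modality attention, and all function names (hd_encode, attentive_fusion) are illustrative.

```python
import numpy as np

D = 10000  # hypervector dimensionality; HD designs commonly use ~10k

def hd_encode(window, proj):
    """Encode a raw sensor window into a bipolar hypervector via
    random projection followed by sign quantization (one common HD
    encoding scheme; the paper's encoder may differ)."""
    return np.sign(window.flatten() @ proj)  # shape (D,), values in {-1, 0, +1}

def attentive_fusion(hvs):
    """Fuse per-modality hypervectors with scaled dot-product
    inter-modality attention: each modality is re-expressed as a
    similarity-weighted mix of the *other* modalities."""
    H = np.stack(hvs)                               # (M, D)
    scores = H @ H.T / np.sqrt(D)                   # (M, M) pairwise similarities
    np.fill_diagonal(scores, -np.inf)               # attend to other modalities only
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    attended = weights @ H                          # (M, D) cross-modality mix
    return attended.mean(axis=0)                    # single fused representation

# Usage: two hypothetical modalities (e.g., accelerometer and gyroscope windows)
rng = np.random.default_rng(0)
acc = rng.normal(size=(50, 3))                  # 50 timesteps x 3 axes
gyr = rng.normal(size=(50, 3))
proj_acc = rng.normal(size=(acc.size, D))       # fixed random projection matrices
proj_gyr = rng.normal(size=(gyr.size, D))
fused = attentive_fusion([hd_encode(acc, proj_acc), hd_encode(gyr, proj_gyr)])
print(fused.shape)  # (10000,)
```

The efficiency claim in the abstract follows from this structure: encoding is a single matrix product per window and fusion is a small M x M attention over modalities, with no recurrent unrolling over timesteps as in an RNN.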
Keywords
Hyperdimensional Computing, Multimodal Learning