Member Journey Mapping and Prediction Using Multi-Modal Data Fusion

Authors

  • Appala Nooka Kumar Doodala, Technical Test Lead, Infosys Ltd, USA
  • Swathi Thatraju, Technical Test Lead, Infosys Ltd, USA

DOI:

https://doi.org/10.63282/3050-9416.IJAIBDCMS-V2I3P113

Keywords:

Member Journey, Data Fusion, Predictive Modeling, Machine Learning, Multi-Modal Analytics, Customer Experience, Behavioral Prediction

Abstract

This research deepens the understanding of engagement dynamics across digital ecosystems through member journey mapping and prediction built on multi-modal data fusion. Member journey mapping provides an organizational framework for visualizing and analyzing user interactions over time, revealing behavioral patterns that inform retention and personalization strategies. Because traditional models that depend on a single data stream often fail to capture the complexity of user engagement, this research draws on multi-modal data, including textual user feedback, image content, behavioral logs, demographic profiles, and social media interactions, to form a more comprehensive and predictive picture of member behavior. The proposed framework combines advanced machine learning and deep learning techniques, integrating feature extraction, representation learning, and temporal sequence modeling to forecast the next engagement stage and churn risk. Applied to real-world datasets from digital membership platforms, the model shows significant improvements in both predictive accuracy and interpretability over unimodal baselines. Analytical methods such as attention-based fusion networks and graph-based temporal modeling further expose the latent factors that influence engagement depth and stage transitions. The main contributions of this research are a scalable multi-modal prediction architecture, insight into the cross-modal correlations affecting member loyalty, and interpretable visualizations of journey progression. Potential applications include marketing optimization, personalized content delivery, and retention forecasting across industries such as e-commerce, fitness, and education. The authors acknowledge limitations of the current work, including data sparsity, privacy concerns, and computational complexity, and propose future directions such as federated learning and ethical AI integration for sustainable deployment.
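For readers who want a concrete picture of the architecture the abstract describes, below is a minimal PyTorch sketch of attention-based modality fusion followed by temporal sequence modeling with two prediction heads (next engagement stage and churn risk). It is not the authors' implementation: the class name MultiModalJourneyModel, all layer sizes, and the four modality dimensions are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's released code): attention-based
# multi-modal fusion + GRU temporal modeling, with heads for next-stage
# classification and churn-risk estimation. All dimensions are assumptions.
import torch
import torch.nn as nn

class MultiModalJourneyModel(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, behavior_dim=64,
                 demo_dim=16, hidden_dim=128, num_stages=6):
        super().__init__()
        # Per-modality encoders project heterogeneous inputs to a shared space.
        self.encoders = nn.ModuleDict({
            "text": nn.Linear(text_dim, hidden_dim),
            "image": nn.Linear(image_dim, hidden_dim),
            "behavior": nn.Linear(behavior_dim, hidden_dim),
            "demo": nn.Linear(demo_dim, hidden_dim),
        })
        # Attention scores decide how much each modality contributes per event.
        self.attn = nn.Linear(hidden_dim, 1)
        # GRU captures the temporal order of journey events.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Two heads: next engagement stage (multi-class) and churn risk (binary).
        self.stage_head = nn.Linear(hidden_dim, num_stages)
        self.churn_head = nn.Linear(hidden_dim, 1)

    def forward(self, modalities):
        # modalities: dict name -> (batch, seq_len, dim), time-aligned per event.
        encoded = torch.stack(
            [torch.tanh(self.encoders[name](x)) for name, x in modalities.items()],
            dim=2)                                 # (batch, seq, n_mod, hidden)
        weights = torch.softmax(self.attn(encoded), dim=2)  # over modalities
        fused = (weights * encoded).sum(dim=2)     # (batch, seq, hidden)
        _, h = self.gru(fused)                     # final state: (1, batch, hidden)
        h = h.squeeze(0)
        return self.stage_head(h), torch.sigmoid(self.churn_head(h))

# Example: a batch of 4 members, each with 10 time-aligned journey events.
model = MultiModalJourneyModel()
batch = {"text": torch.randn(4, 10, 768), "image": torch.randn(4, 10, 512),
         "behavior": torch.randn(4, 10, 64), "demo": torch.randn(4, 10, 16)}
stage_logits, churn_prob = model(batch)
print(stage_logits.shape, churn_prob.shape)  # torch.Size([4, 6]) torch.Size([4, 1])
```

The softmax attention learns a per-event weighting over modalities, which is what makes this style of fusion interpretable: inspecting the learned weights shows which data stream drove each prediction, in the spirit of the cross-modal correlation analysis the abstract mentions.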

References

1. Mäenpää, Heikki, Andrei Lobov, and Jose L. Martinez Lastra. "Travel mode estimation for multi-modal journey planner." Transportation Research Part C: Emerging Technologies 82 (2017): 273-289.

2. Hammoudeh, Mohammad, et al. "Map as a service: a framework for visualising and maximising information return from multi-modal wireless sensor networks." Sensors 15.9 (2015): 22970-23003.

3. Zhou, Jiancun, et al. "Two-stage spatial mapping for multimodal data fusion in mobile crowd sensing." IEEE Access 8 (2020): 96727-96737.

4. You, Linlin, et al. "A generic future mobility sensing system for travel data collection, management, fusion, and visualization." IEEE Transactions on Intelligent Transportation Systems 21.10 (2019): 4149-4160.

5. Town, Christopher. "Multi-sensory and multi-modal fusion for sentient computing." International Journal of Computer Vision 71.2 (2007): 235-253.

6. Aditjandra, Paulus T., John D. Nelson, and Steve D. Wright. "A multi-modal international journey planning system: a case study of WISETRIP." 16th ITS world congress and exhibition on intelligent transport systems and services. 2009.

7. Yang, Ming-Hao, and Jian-Hua Tao. "Data fusion methods in multimodal human computer dialog." Virtual Reality & Intelligent Hardware 1.1 (2019): 21-38.

8. Ramey, Arnaud. "Local user mapping via multi-modal fusion for social robots." (2013).

9. Liu, Hao, et al. "Incorporating multi-source urban data for personalized and context-aware multi-modal transportation recommendation." IEEE Transactions on Knowledge and Data Engineering 34.2 (2020): 723-735.

10. Huang, Zhiyu, et al. "Multi-modal sensor fusion-based deep neural network for end-to-end autonomous driving with scene understanding." IEEE Sensors Journal 21.10 (2020): 11781-11790.

11. Kalajdjieski, Jovan, et al. "Air pollution prediction with multi-modal data and deep neural networks." Remote Sensing 12.24 (2020): 4142.

12. Dobrev, Yassen, Peter Gulden, and Martin Vossiek. "An indoor positioning system based on wireless range and angle measurements assisted by multi-modal sensor fusion for service robot applications." IEEE Access 6 (2018): 69036-69052.

13. Dennison Jr, Mark, et al. "Improving motion sickness severity classification through multi-modal data fusion." Artificial intelligence and machine learning for multi-domain operations applications. Vol. 11006. SPIE, 2019.

14. Chou, Chun-An, et al. MMDF 2018 Multimodal Data Fusion Workshop Report. Northeastern University (Boston, Mass.), 2018.

15. Lee, Garam, et al. "Predicting Alzheimer’s disease progression using multi-modal deep learning approach." Scientific Reports 9.1 (2019): 1952.

16. Chittoor, Krishna Chaitanaya. "Architecting Scalable AI Systems for Predictive Patient Risk." International Journal of Current Science 11.2 (2021): 86-94. https://rjpn.org/ijcspub/papers/IJCSP21B1012.pdf

Published

2021-09-30

Issue

Vol. 2 No. 3 (2021)

Section

Articles

How to Cite

Kumar Doodala AN, Thatraju S. Member Journey Mapping and Prediction Using Multi-Modal Data Fusion. IJAIBDCMS [Internet]. 2021 Sep. 30 [cited 2026 Apr. 16];2(3):110-21. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/515