Federated Learning Optimization for Edge Devices with Limited Resources

Authors

  • Shalli Rani, Professor, Chitkara University, Rajpura, Punjab, India
  • Yuvraj Powar, Technical Architect

DOI:

https://doi.org/10.63282/3050-9416.IJAIBDCMS-V5I4P112

Keywords:

Federated Learning, Edge Computing, Resource-Constrained Devices, Model Compression, Adaptive Communication, Distributed Machine Learning

Abstract

The rapid growth of the Internet of Things (IoT) and edge devices has driven the adoption of decentralized machine learning paradigms such as Federated Learning (FL). In FL, a model is trained across multiple clients while raw data remains local, thereby preserving data privacy. When deployed on edge devices, however, FL faces challenges of limited computational power, constrained memory, intermittent connectivity, and tight energy budgets. This paper therefore proposes an integrated optimization framework for the practical realization and scaling of federated learning on resource-constrained edge infrastructures.

To this end, we propose a multi-level approach involving:

  • Lightweight model architectures using quantization and pruning techniques
  • Adaptive client selection based on a device's capability or energy level
  • Communication-efficient aggregation protocols, such as periodic averaging, asynchronous updates, and sparsified gradients
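Two of the techniques listed above, gradient sparsification and quantization, can be sketched as follows. This is a minimal NumPy illustration under assumed settings (top-k magnitude selection and symmetric 8-bit quantization), not the paper's actual implementation:

```python
import numpy as np

def sparsify_top_k(grad, k_fraction=0.1):
    """Keep only the largest-magnitude fraction of gradient entries; zero the rest."""
    flat = grad.ravel()
    k = max(1, int(k_fraction * flat.size))
    keep = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k magnitudes
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

def quantize_int8(tensor):
    """Uniform symmetric 8-bit quantization with a per-tensor scale factor."""
    max_abs = float(np.max(np.abs(tensor)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    return np.round(tensor / scale).astype(np.int8), scale

def dequantize(q, scale):
    """Recover an approximate float tensor from its int8 representation."""
    return q.astype(np.float32) * scale
```

In a federated round, a client would sparsify and quantize its update before upload, and the server would dequantize before aggregation; only the nonzero int8 values and one float scale per tensor need to cross the network.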

Furthermore, the proposed system incorporates a real-time monitoring layer for load-aware scheduling of edge nodes. Empirical evaluation was conducted on public datasets (CIFAR-10, HAR, and Shakespeare) using Raspberry Pi 4 and NVIDIA Jetson Nano devices to emulate typical edge constraints. The results demonstrate a communication-cost reduction of up to 38.7%, 41.3% faster convergence, and 50.6% energy savings compared to the baseline federated setting. A comparative study sheds light on the trade-offs between model performance and resource utilization under different optimization schemes. This research contributes to the growing literature on practical federated learning by demonstrating that edge-native deployments are realizable through architectural adaptations combined with resource-aware mechanisms. The work has strong implications for smart healthcare, predictive maintenance, and autonomous sensing in edge AI applications.
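As one illustration of the capability- and load-aware client selection described above, a monitoring layer could rank devices by a weighted score over reported resources. The attribute names (`battery`, `free_mem`, `link`) and the weights are illustrative assumptions for this sketch, not the paper's actual scheduling policy:

```python
def select_clients(clients, num_select):
    """Pick the num_select clients with the best resource scores.

    Each client is a dict of normalized readings in [0, 1]; the 0.4/0.3/0.3
    weights are example values, not tuned constants from the paper.
    """
    def score(c):
        return 0.4 * c["battery"] + 0.3 * c["free_mem"] + 0.3 * c["link"]
    return sorted(clients, key=score, reverse=True)[:num_select]
```

In each round, the server would call such a function over the latest monitoring snapshot, so low-battery or poorly connected nodes are skipped rather than stalling aggregation.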



Published

2024-12-30

Issue

Section

Articles

How to Cite

Rani S, Powar Y. Federated Learning Optimization for Edge Devices with Limited Resources. IJAIBDCMS [Internet]. 2024 Dec. 30 [cited 2025 Sep. 13];5(4):115-23. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/224