AI-Enhanced Data Structures for High-Performance Computing

Authors

  • Prof. Liang Zhao, Zhejiang University, AI & Machine Perception Research Center, China

DOI:

https://doi.org/10.63282/3050-9416.IJAIBDCMS-V2I2P101

Keywords:

AI-enhanced data structures, adaptive hashing, predictive caching, self-organizing lists, hybrid hash-tree, high-performance computing, scalability, machine learning, big data analytics, reinforcement learning

Abstract

High-Performance Computing (HPC) is a critical domain that drives scientific discovery, engineering simulations, and large-scale data analysis. Traditional data structures, while efficient for many applications, often struggle to meet the demands of HPC due to the sheer volume and complexity of the data involved. This paper explores the integration of Artificial Intelligence (AI) techniques into data structures to enhance their performance, scalability, and adaptability. We discuss various AI-enhanced data structures, their theoretical foundations, and their practical applications in HPC. The paper also presents case studies and benchmarks that demonstrate the effectiveness of these structures in real-world scenarios. Finally, we outline future research directions and open challenges in this emerging field.
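Among the keywords, predictive caching is one of the AI-enhanced structures the paper covers. As a rough, hypothetical illustration of that general idea only, and not the paper's actual method or code, the following minimal Python sketch uses an exponentially weighted access-frequency score as a stand-in "learned" predictor and evicts the entry expected to be reused least; the class name PredictiveCache, the decay parameter, and all other identifiers are invented for this sketch.

    class PredictiveCache:
        """Toy cache that evicts the key with the lowest predicted reuse score."""

        def __init__(self, capacity, decay=0.9):
            self.capacity = capacity
            self.decay = decay   # how quickly older accesses are forgotten
            self.store = {}      # key -> value
            self.score = {}      # key -> predicted reuse score

        def _touch(self, key):
            # Decay every score, then boost the key that was just accessed.
            for k in self.score:
                self.score[k] *= self.decay
            self.score[key] = self.score.get(key, 0.0) + 1.0

        def get(self, key):
            if key not in self.store:
                return None
            self._touch(key)
            return self.store[key]

        def put(self, key, value):
            if key not in self.store and len(self.store) >= self.capacity:
                # Evict the entry predicted to be reused least.
                victim = min(self.score, key=self.score.get)
                self.store.pop(victim, None)
                self.score.pop(victim, None)
            self.store[key] = value
            self._touch(key)

    # Example: after "a" is re-read, inserting "c" evicts the colder key "b".
    cache = PredictiveCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    cache.get("a")
    cache.put("c", 3)
    print(sorted(cache.store))   # ['a', 'c']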

Published

2021-04-15

Issue

Vol. 2 No. 2 (2021)

Section

Articles

How to Cite

Zhao L. AI-Enhanced Data Structures for High-Performance Computing. IJAIBDCMS [Internet]. 2021 Apr. 15 [cited 2025 Oct. 29];2(2):1-9. Available from: https://ijaibdcms.org/index.php/ijaibdcms/article/view/27