- J. Bellavita, M. Rubino, N. Iyer, A. Chang, A. Devarakonda, F. Vella and G. Guidi, Communication-Avoiding Linear Algebraic Kernel K-Means on GPUs, arXiv:2601.17136, (Accepted, IPDPS'26)
- Y. Wang, Z. Shao, T. Jiang, and A. Devarakonda, Enhanced Cyclic Coordinate Descent Methods for Elastic Net Penalized Linear Models, arXiv:2510.19999, (Accepted, NeurIPS'25)
- S. Akkas, A. Devarakonda, and A. Azad, DistShap: Scalable GNN Explanations with Distributed Shapley Values, arXiv:2506.22668
- J. Pinheiro, A. Devarakonda, and G. Ballard, Parallel Rank-Adaptive Higher Order Orthogonal Iteration, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC'25), pp. 1800-1815. 2025.
- A. Devarakonda and R. Kannan, Communication-Efficient, 2D Parallel Stochastic Gradient Descent for Distributed-Memory Optimization, arXiv:2501.07526
- Z. Shao and A. Devarakonda, Scalable dual coordinate descent for kernel methods, Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region (HPC-Asia'25), pp. 52-63, 2025. Outstanding paper award.
- A. Devarakonda and G. Ballard, Sequential and Shared-Memory Parallel Algorithms for Partitioned Local Depths, Proceedings of the 2024 SIAM Conference on Parallel Processing for Scientific Computing (PP'24), pp. 53-64, 2024.
- A. Devarakonda and J. Demmel, Avoiding communication in logistic regression, IEEE 27th International Conference on High Performance Computing, Data, and Analytics (HiPC'20), pp. 91-100, 2020.
- A. Devarakonda, K. Fountoulakis, J. Demmel, and M. W. Mahoney, Avoiding communication in primal and dual block coordinate descent methods, SIAM Journal on Scientific Computing, 41 (1), pp. C1-C27, 2019.
- S. Soori, A. Devarakonda, Z. Blanco, J. Demmel, M. Gurbuzbalaban, M. M. Dehnavi, Reducing communication in proximal Newton methods for sparse least squares problems, Proceedings of the 47th International Conference on Parallel Processing (ICPP'18), pp. 1-10, 2018.
- A. Devarakonda, K. Fountoulakis, J. Demmel, M. W. Mahoney, Avoiding synchronization in first-order methods for sparse convex optimization, IEEE International Parallel and Distributed Processing Symposium (IPDPS'18), pp. 409-418, 2018.
- A. Devarakonda, M. Naumov, and M. Garland, AdaBatch: Adaptive batch sizes for training deep neural networks, arXiv:1712.02029.
- S. Soori, A. Devarakonda, Z. Blanco, J. Demmel, M. Gurbuzbalaban, M. M. Dehnavi, Avoiding communication in proximal methods for convex optimization problems, arXiv:1710.08883.
- A. Gittens, A. Devarakonda, E. Racah, M. Ringenburg, L. Gerhardt, J. Kottalam, J. Liu, K. Maschhoff, S. Canon, J. Chhugani, P. Sharma, J. Yang, J. Demmel, J. Harrell, V. Krishnamurthy, and M. W. Mahoney, Matrix factorizations at scale: A comparison of scientific data analytics in Spark and C+MPI using three case studies, IEEE International Conference on Big Data (BigData'16), pp. 204-213, 2016.
- R. Carbunescu, A. Devarakonda, J. Demmel, S. Gordon, J. Alameda, S. Mehringer, Architecting an autograder for parallel code, Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment (XSEDE'14), pp. 1-8, 2014.
- M. Parashar, M. AbdelBaky, I. Rodero, and A. Devarakonda, Cloud paradigms and practices for computational and data-enabled science and engineering, Computing in Science & Engineering 15 (4), pp. 10-18, 2013.
- D. Villegas, N. Bobroff, I. Rodero, J. Delgado, Y. Liu, A. Devarakonda, L. Fong, S. M. Sadjadi, and M. Parashar, Cloud federation in a layered service model, Journal of Computer and System Sciences 78 (5), pp. 1330-1344, 2012.