Research Article

Loop Detection and Parallelization for C++ Code using the OpenMP

by Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza
International Journal of Applied Information Systems
Foundation of Computer Science (FCS), NY, USA
Volume 13 - Number 2
Year of Publication: 2026
Authors: Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza
DOI: 10.5120/ijais2026452048

Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza. Loop Detection and Parallelization for C++ Code using the OpenMP. International Journal of Applied Information Systems 13, 2 (Mar 2026), 55-66. DOI=10.5120/ijais2026452048

@article{ 10.5120/ijais2026452048,
author = { Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza },
title = { Loop Detection and Parallelization for C++ Code using the OpenMP },
journal = { International Journal of Applied Information Systems },
issue_date = { Mar 2026 },
volume = { 13 },
number = { 2 },
month = { Mar },
year = { 2026 },
issn = { 2249-0868 },
pages = { 55-66 },
numpages = {9},
url = { https://www.ijais.org/archives/volume13/number2/loop-detection-and-parallelization-for-c-code-using-the-openmp/ },
doi = { 10.5120/ijais2026452048 },
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Shuruq Alsaedi
%A Amal AlMansour
%A Lama Al Khzayem
%A Fathy E. Eassa
%A Rsha Mirza
%T Loop Detection and Parallelization for C++ Code using the OpenMP
%J International Journal of Applied Information Systems
%@ 2249-0868
%V 13
%N 2
%P 55-66
%D 2026
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Finding ways to speed up code execution has become a necessity, especially in the current era of rapidly evolving processor technology. However, to benefit fully from modern processors, code must be written in a way that exploits their capabilities. Transforming sequential code into parallel code reduces execution time, yielding better performance and more efficient use of processor resources, especially for large problem sizes. Automating code parallelization saves programmers time and effort and helps avoid many programming errors, but the process requires careful handling of several important factors, such as dependency analysis and the identification of parallelizable regions in the code, which are challenging to manage manually. This study introduces a novel automatic code translation and optimization tool that converts sequential C++ code into parallel code using the OpenMP programming model. To validate the tool, four benchmark programs were tested: a dot product achieved a speedup of up to 5.39× on 12 threads; an array sum reduced execution time from 0.038 seconds (sequential) to 0.0052 seconds (parallel), a 9.03× speedup; a 1000×1000 matrix multiplication improved from 2.91 seconds to 0.39 seconds, a 6.80× speedup; and numerical integration using the trapezoidal rule reached a speedup of 9.66× at 14 threads. These results demonstrate that the proposed approach with the OpenMP programming model delivers consistent performance gains across diverse computational workloads.

Index Terms

Computer Science
Information Sciences

Keywords

Sequential code; Single programming model; Parallel code; Automatic code translation; C++; OpenMP