| International Journal of Applied Information Systems |
| Foundation of Computer Science (FCS), NY, USA |
| Volume 13 - Number 2 |
| Year of Publication: 2026 |
| Authors: Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza |
DOI: 10.5120/ijais2026452048
Shuruq Alsaedi, Amal AlMansour, Lama Al Khzayem, Fathy E. Eassa, Rsha Mirza. Loop Detection and Parallelization for C++ Code using the OpenMP. International Journal of Applied Information Systems 13(2), Mar 2026, 55-66. DOI=10.5120/ijais2026452048
Finding solutions that accelerate code execution has become a necessity, especially in the current era, which has witnessed rapid advances in processor technology. However, to fully benefit from such advanced processors, code must be written in a way that exploits their capabilities. Transforming sequential code into parallel code reduces execution time, yielding better performance and more efficient use of processor resources, especially for large problem sizes. Although automating the parallelization process saves programmers time and effort and helps avoid many programming errors, parallelization requires sound knowledge of several important factors, such as dependency analysis and the identification of parallelizable regions in the code, which can be challenging to handle manually. This study introduces a novel automatic code translation and optimization tool that converts sequential C++ code into parallel code using the OpenMP programming model. To validate the tool, four benchmark programs were tested: a dot product achieved a speedup of up to 5.39× on 12 threads; an array sum reduced execution time from 0.038 s (sequential) to 0.0052 s (parallel), a 9.03× speedup; a 1000×1000 matrix multiplication improved from 2.91 s (sequential) to 0.39 s (parallel), a 6.80× speedup; and numerical integration using the trapezoidal rule reached a 9.66× speedup on 14 threads. These results demonstrate that the proposed approach, combined with the OpenMP programming model, delivers consistent performance gains across diverse computational workloads.