Code Clean Up Redux #1087
Conversation
CodeAnt AI is reviewing your PR.
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.
Walkthrough: Removes legacy parallel-loop macros (ACC_PARALLEL_LOOP_OLD, OMP_PARALLEL_LOOP_OLD, GPU_PARALLEL_LOOP_OLD), updates call sites to use the new GPU_PARALLEL_LOOP/END markers in MPI packing/unpacking, and wraps MPI-specific startup/FFT code with #ifdef MFC_MPI guards.
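For orientation, here is a minimal sketch of that call-site migration. The loop body, bounds, and macro arguments are illustrative only, not copied from the PR:

```fortran
! Before (removed): block-style Fypp call wrapper
#:call GPU_PARALLEL_LOOP_OLD
    do l = 0, p
        do k = 0, n
            do j = 0, m
                ! ... pack or unpack one halo-buffer element ...
            end do
        end do
    end do
#:endcall GPU_PARALLEL_LOOP_OLD

! After: paired begin/end marker macros (argument shown is hypothetical)
$:GPU_PARALLEL_LOOP(collapse=3)
    do l = 0, p
        do k = 0, n
            do j = 0, m
                ! ... pack or unpack one halo-buffer element ...
            end do
        end do
    end do
$:END_GPU_PARALLEL_LOOP()
```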
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches: 🧪 Generate unit tests (beta)
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
```fortran
q_comm(i)%sf(j + unpack_offset, k, l) = real(buff_recv(r), kind=stp)
#if defined(__INTEL_COMPILER)
    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
        print *, "Error", j, k, l, i
        error stop "NaN(s) in recv"
    end if
    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
        print *, "Error", j, k, l, i
        error stop "NaN(s) in recv"
    end if
#endif
```
Suggestion: Fix the array index in the ieee_is_nan check to use j + unpack_offset to match the array element being assigned, ensuring correct NaN validation. [possible issue, importance: 8]
Existing:

```fortran
q_comm(i)%sf(j + unpack_offset, k, l) = real(buff_recv(r), kind=stp)
#if defined(__INTEL_COMPILER)
    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
        print *, "Error", j, k, l, i
        error stop "NaN(s) in recv"
    end if
    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
        print *, "Error", j, k, l, i
        error stop "NaN(s) in recv"
    end if
#endif
```

Suggested:

```fortran
q_comm(i)%sf(j + unpack_offset, k, l) = real(buff_recv(r), kind=stp)
#if defined(__INTEL_COMPILER)
    if (ieee_is_nan(q_comm(i)%sf(j + unpack_offset, k, l))) then
        print *, "Error", j, k, l, i
        error stop "NaN(s) in recv"
    end if
#endif
```
```fortran
impure subroutine s_initialize_mpi_domain

    num_dims = 1 + min(1, n) + min(1, p)

#ifdef MFC_MPI
    ! Initialization of the MPI environment
    call s_mpi_initialize()
```
Suggestion: Move the num_dims calculation back inside the #ifdef MFC_MPI block to prevent using uninitialized variables n and p when compiling without MPI. [possible issue, importance: 9]
Existing:

```fortran
impure subroutine s_initialize_mpi_domain

    num_dims = 1 + min(1, n) + min(1, p)

#ifdef MFC_MPI
    ! Initialization of the MPI environment
    call s_mpi_initialize()
```

Suggested:

```fortran
impure subroutine s_initialize_mpi_domain

#ifdef MFC_MPI
    ! Initialization of the MPI environment
    call s_mpi_initialize()

    ! Processor with rank 0 assigns default user input values prior to reading
    ! from the input file. This is necessary because the other processors do not
    ! have access to it.
    if (proc_rank == 0) call s_set_default_user_inputs()

    ! The root processor (rank = 0) broadcasts the user-defined parameters to
    ! all other processors. This must be done before the computational domain
    ! can be decomposed among all available processors.
    call s_mpi_bcast_user_inputs()

    num_dims = 1 + min(1, n) + min(1, p)
```
Nitpicks 🔍
CodeAnt AI finished reviewing your PR.
1 issue found across 5 files
Prompt for AI agents (1 issue)

```
Check if these issues are valid — if so, understand the root cause of each and fix them.

<file name="src/post_process/m_start_up.fpp">
<violation number="1" location="src/post_process/m_start_up.fpp:1141">
P1: `num_dims` is calculated before `n` and `p` are read from the input file. The `s_read_input_file()` call that populates `n` and `p` is inside the `#ifdef MFC_MPI` block below this line. This line should be moved inside the MPI block, after the input file is read.
</violation>
</file>
```
Reply to cubic to teach it or ask questions. Re-run a review with @cubic-dev-ai review this PR
```fortran
impure subroutine s_initialize_mpi_domain

    num_dims = 1 + min(1, n) + min(1, p)
```
P1: num_dims is calculated before n and p are read from the input file. The s_read_input_file() call that populates n and p is inside the #ifdef MFC_MPI block below this line. This line should be moved inside the MPI block, after the input file is read.
Prompt for AI agents

```
Check if this issue is valid — if so, understand the root cause and fix it. At src/post_process/m_start_up.fpp, line 1141:

<comment>`num_dims` is calculated before `n` and `p` are read from the input file. The `s_read_input_file()` call that populates `n` and `p` is inside the `#ifdef MFC_MPI` block below this line. This line should be moved inside the MPI block, after the input file is read.</comment>

<file context>
@@ -1118,9 +1132,15 @@ contains
 impure subroutine s_initialize_mpi_domain
+
+    num_dims = 1 + min(1, n) + min(1, p)
+
+#ifdef MFC_MPI
</file context>
```
this is interesting. i'm surprised it hasn't come up as an issue before w/ no-mpi
specifically, that it vaguely appears that num_dims is only set in this subroutine, so remains unset in --no-mpi configurations
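To make that concrete, here is a minimal sketch of one way to keep num_dims defined in --no-mpi builds. It assumes a serial path that still populates n and p; the non-MPI call shown is hypothetical, not the actual MFC code:

```fortran
impure subroutine s_initialize_mpi_domain
#ifdef MFC_MPI
    call s_mpi_initialize()
    if (proc_rank == 0) call s_set_default_user_inputs()
    call s_mpi_bcast_user_inputs()
#else
    ! Hypothetical serial path: defaults/inputs must still populate n and p
    call s_set_default_user_inputs()
#endif
    ! n and p are defined in either configuration at this point
    num_dims = 1 + min(1, n) + min(1, p)
end subroutine s_initialize_mpi_domain
```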
Thanks for the feedback! I've saved this as a new learning to improve future reviews.
Question:
specifically, that it vaguely appears that num_dims is only set in this subroutine, so remains unset in --no-mpi configurations
Answer:
You can steer Qodo Merge to reliably catch this kind of "no-MPI path leaves a variable unset / initialized too early" issue by tightening Improve/Review guidance and raising severity for build-time/runtime correctness over style:

1. Instruct Improve to look for guard-dependent init, early computations, and variables set only in one preprocessor branch:

```toml
[pr_code_suggestions]
extra_instructions = """
Focus on correctness over style:
- Flag variables initialized only inside conditional compile-time guards (e.g., #ifdef MFC_MPI) when those values are later used in both guarded and unguarded paths.
- Detect computations that occur before required configuration is loaded (e.g., num_dims computed before n/p are read).
- Prefer moving such computations after input parsing, or duplicating minimal initialization in the non-MPI path.
- When suggesting fixes, show both MPI and non-MPI branches and ensure num_dims is set in --no-mpi builds.
"""
```

2. Make Improve more thorough for these cases (optional):

```toml
[pr_code_suggestions]
suggestions_depth = "exhaustive"
focus_only_on_problems = true
```

3. Nudge Review to emphasize these checks and surface them prominently:

```toml
[pr_reviewer]
extra_instructions = """
Prioritize build/runtime correctness:
- Verify that critical initialization (reading inputs, defaults) happens in both MPI and non-MPI builds.
- Identify variables computed before their inputs are loaded.
- Call out any value set inside #ifdef that's later used outside the guard.
"""
```

Tip: If you want high-signal, fewer-but-critical findings, keep suggestions_depth = "regular" and set:

```toml
[pr_code_suggestions]
suggestions_score_threshold = 6
```

This configuration will guide Qodo Merge to propose moving the num_dims computation after input parsing and ensuring a non-MPI initialization path, directly addressing the concern.
Relevant Sources:
- https://qodo-merge-docs.qodo.ai//tools/improve#extra-instructions-and-best-practices
- https://qodo-merge-docs.qodo.ai//tools/improve#controlling-suggestions-depth
- https://qodo-merge-docs.qodo.ai//tools/review#usage-tips
- https://qodo-merge-docs.qodo.ai//usage-guide/additional_configurations#extra-instructions
CodeAnt AI is running an incremental review.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/common/m_mpi_common.fpp (1)
994-1000: Fix NaN check to validate the element that was just assigned (offset bug).

You assign into halo-shifted indices (e.g., `j + unpack_offset` / `k + unpack_offset` / `l + unpack_offset`) but the Intel-only `ieee_is_nan` checks read the unshifted indices, which can (a) miss NaNs in the received data and/or (b) falsely trip on unrelated/uninitialized halo cells. This was also flagged in prior review comments.

```diff
 #if defined(__INTEL_COMPILER)
-    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
+    if (ieee_is_nan(q_comm(i)%sf(j + unpack_offset, k, l))) then
         print *, "Error", j, k, l, i
         error stop "NaN(s) in recv"
     end if
 #endif
```

```diff
 #if defined(__INTEL_COMPILER)
-    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
+    if (ieee_is_nan(q_comm(i)%sf(j, k + unpack_offset, l))) then
         print *, "Error", j, k, l, i
         error stop "NaN(s) in recv"
     end if
 #endif
```

```diff
 #if defined(__INTEL_COMPILER)
-    if (ieee_is_nan(q_comm(i)%sf(j, k, l))) then
+    if (ieee_is_nan(q_comm(i)%sf(j, k, l + unpack_offset))) then
         print *, "Error", j, k, l, i
         error stop "NaN(s) in recv"
     end if
 #endif
```

Also applies to: 1049-1055, 1108-1114
🧹 Nitpick comments (1)
src/common/m_mpi_common.fpp (1)
995-999: Avoid `error stop` in these parallel-loop regions; prefer the project abort path (and verify compiler/offload behavior).

Project guidelines prohibit `stop`/`error stop` and also discourage such termination paths in GPU/device code. Since these checks sit inside the GPU-parallel wrapper, please verify this block is never emitted in accelerator-offload builds; otherwise, consider switching to a host-safe debug check or an abort routine consistent with the codebase. Based on coding guidelines/learnings, avoid `stop`/`error stop` and use the project's abort mechanism.

```diff
 #if defined(__INTEL_COMPILER)
     if (ieee_is_nan(q_comm(i)%sf(...))) then
         print *, "Error", j, k, l, i
-        error stop "NaN(s) in recv"
+        call s_mpi_abort('NaN(s) in recv')
     end if
 #endif
```

Also applies to: 1050-1054, 1109-1113
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/common/m_mpi_common.fpp (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{fpp,f90}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop
**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension/allocatable/pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...
Files:
src/common/m_mpi_common.fpp
src/**/*.fpp
📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)
src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate
Files:
src/common/m_mpi_common.fpp
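As a quick illustration of the documentation and purity conventions listed above, here is a sketch; the function name, argument, and formula are hypothetical:

```fortran
!> Computes an illustrative per-cell quantity.
!! @param rho  Local density
!! @return     Square root of the clamped density
pure elemental function f_compute_example(rho) result(val)
    real, intent(in) :: rho
    real :: val

    val = sqrt(max(rho, 0.0))  ! placeholder computation
end function f_compute_example
```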
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Declare loop-local variables with `private='[...]'` in GPU parallel loop macros
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Favor array operations over explicit loops when possible; use `collapse=N` directive to optimize nested loops
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Compile with Cray `ftn` or NVIDIA `nvfortran` for GPU offloading; also build CPU-only with GNU `gfortran` and Intel `ifx`/`ifort` for portability
📚 Applied learnings
The common learnings above were each re-applied to src/common/m_mpi_common.fpp, together with two additional entries from .cursor/rules/mfc-agent-rules.mdc (Timestamp: 2025-11-24T21:50:46.909Z) applied to the same file:
- Applies to **/*.{fpp,f90}: Use `s_mpi_abort(<msg>)` for error termination instead of `stop`
- Applies to src/simulation/**/*.{fpp,f90}: Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: Detect File Changes
- GitHub Check: Code Cleanliness Check
- GitHub Check: Georgia Tech | Phoenix (gpu-acc)
- GitHub Check: Oak Ridge | Frontier (cpu)
- GitHub Check: Georgia Tech | Phoenix (cpu)
- GitHub Check: Github (macos, mpi, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, no-debug, false)
- GitHub Check: Github (macos, mpi, debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, true)
- GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
- GitHub Check: Github (ubuntu, mpi, debug, false)
- GitHub Check: Oak Ridge | Frontier (gpu-omp)
- GitHub Check: Github (ubuntu, mpi, no-debug, true)
- GitHub Check: Georgia Tech | Phoenix (gpu-omp)
- GitHub Check: Oak Ridge | Frontier (gpu-acc)
- GitHub Check: Coverage Test on CodeCov
- GitHub Check: Build & Publish
🔇 Additional comments (1)
src/common/m_mpi_common.fpp (1)
987-1061: Loop-wrapper migration looks consistent; END markers are correctly paired.

The `$:GPU_PARALLEL_LOOP(...)` / `$:END_GPU_PARALLEL_LOOP()` blocks appear properly scoped around the intended nested loops, and the `r` index math still matches the pack-side layout (including the `+1` / `+buff_size` shifts for negative halo indices).

Also applies to: 1099-1119, 1008-1039, 1063-1096, 1122-1157
This is the kind of adversarial mentor-mentee relationship @henryleberre appreciates (prefers, arguably). Well, it worked the first time!
Codecov Report: ❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##           master    #1087    +/-  ##
=========================================
- Coverage   44.13%   44.08%    -0.06%
=========================================
  Files          71       71
  Lines       20279    20332      +53
  Branches     1981     1981
=========================================
+ Hits         8951     8963      +12
- Misses      10195    10236      +41
  Partials     1133     1133
```

☔ View full report in Codecov by Sentry.
…d macros (MFlowCode#1087) Co-authored-by: Spencer Bryngelson <sbryngelson@gmail.com>
User description
Description
I accidentally deleted this branch before it got merged. Alright @sbryngelson, get merging before I delete it again.
PR Type
Bug fix, Enhancement
Description
- Removed deprecated _OLD macro variants from GPU parallel loop macros
- Replaced all GPU_PARALLEL_LOOP_OLD calls with GPU_PARALLEL_LOOP in MPI communication code
- Added #ifdef MFC_MPI guards to post-processing FFT and MPI-specific functions
- Fixed compilation issues when post-processing is enabled without MPI support
Diagram Walkthrough
File Walkthrough

acc_macros.fpp (src/common/include/acc_macros.fpp): Remove deprecated ACC_PARALLEL_LOOP_OLD macro
- Removed ACC_PARALLEL_LOOP_OLD macro definition; ACC_PARALLEL_LOOP macro unchanged

omp_macros.fpp (src/common/include/omp_macros.fpp): Remove deprecated OMP_PARALLEL_LOOP_OLD macro
- Removed OMP_PARALLEL_LOOP_OLD macro definition; OMP_PARALLEL_LOOP macro unchanged

parallel_macros.fpp (src/common/include/parallel_macros.fpp): Remove deprecated GPU_PARALLEL_LOOP_OLD macro
- Removed GPU_PARALLEL_LOOP_OLD macro definition; GPU_PARALLEL_LOOP macro unchanged

m_mpi_common.fpp (src/common/m_mpi_common.fpp): Replace GPU_PARALLEL_LOOP_OLD with GPU_PARALLEL_LOOP calls
- Replaced #:call GPU_PARALLEL_LOOP_OLD with $:GPU_PARALLEL_LOOP and #:endcall GPU_PARALLEL_LOOP_OLD with $:END_GPU_PARALLEL_LOOP()

m_start_up.fpp (src/post_process/m_start_up.fpp): Add MFC_MPI conditional guards to post-processing functions
- Added #ifdef MFC_MPI guard around the MPI_ALLREDUCE call in the FFT energy computation
- Added #ifdef MFC_MPI guards around the s_mpi_transpose_x2y and s_mpi_transpose_y2z subroutine bodies (see the sketch below)
- Added #ifdef MFC_MPI guards around FFT initialization code in s_initialize_modules
- Added #ifdef MFC_MPI guards around the s_mpi_FFT_fwd subroutine body
- Added #ifdef MFC_MPI guards around MPI domain initialization in s_initialize_mpi_domain
- Added #ifdef MFC_MPI guards around MPI communicator cleanup in s_finalize_modules
- Moved the num_dims calculation outside the MPI conditional block
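A minimal sketch of the guard pattern referenced above, assuming the guarded subroutines compile to empty no-op bodies without MPI (the subroutine shown is a placeholder, not the actual m_start_up.fpp code):

```fortran
impure subroutine s_mpi_transpose_x2y
#ifdef MFC_MPI
    ! MPI-dependent transpose logic (communicators, all-to-all exchanges)
    ! is compiled only when MFC_MPI is defined.
#endif
    ! Without MFC_MPI the body is empty, so the call site is a safe no-op.
end subroutine s_mpi_transpose_x2y
```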
CodeAnt-AI Description
Ensure post-processing compiles and runs correctly when MPI or legacy macros are absent
What Changed
Impact
✅ Builds post-processing without MPI enabled
✅ Fewer compilation failures due to removed legacy macros
✅ Avoids calling MPI routines when MPI is not present

💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.
Summary by CodeRabbit
Refactor
Chores