
Conversation

gaborcsardi
Member

It was not prepared for receiving updates from multiple files concurrently and some results were lost.
@gaborcsardi gaborcsardi requested a review from hadley September 9, 2025 11:54
@gaborcsardi gaborcsardi changed the title from "Fix the lister when runnin in parallel" to "Fix the lister when running in parallel" on Sep 9, 2025
claude bot commented Sep 9, 2025

Claude finished @gaborcsardi's task


Code Review

Todo List

  • Read and analyze the changed files
  • Review the parallel reporter fix for potential bugs
  • Provide concise feedback

The fix correctly addresses parallel execution issues by:

  1. Per-file state tracking: Replaced global current_* fields with self$running environment keyed by filename
  2. Proper initialization: start_file() now creates per-file state objects if they don't exist
  3. Cleanup: end_file() removes state from the environment after processing
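The per-file bookkeeping described above can be sketched roughly as follows. This is an illustrative standalone sketch, not the actual testthat code: the `running` environment and the `start_file()`/`end_file()` names follow the summary, but the state fields (`n_ok`, `n_fail`) and `add_result()` helper are hypothetical.

```r
# Per-file state keyed by filename in an environment, so results
# arriving interleaved from multiple files do not clobber each other.
running <- new.env(parent = emptyenv())

start_file <- function(filename) {
  # start_file() may be called repeatedly for the same file when
  # results arrive interleaved; only create state on the first call.
  if (!exists(filename, envir = running, inherits = FALSE)) {
    assign(filename, list(n_ok = 0L, n_fail = 0L), envir = running)
  }
}

add_result <- function(filename, ok) {
  state <- get(filename, envir = running, inherits = FALSE)
  if (ok) state$n_ok <- state$n_ok + 1L else state$n_fail <- state$n_fail + 1L
  assign(filename, state, envir = running)
}

end_file <- function(filename) {
  # Drop the per-file state once the file is fully processed.
  rm(list = filename, envir = running)
}
```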

Potential issue: In end_file() at line 91, rm(list = self$current_file, envir = self$running) could fail if self$current_file is not a valid name in the environment. Consider adding an existence check before the removal.
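A minimal sketch of the suggested guard, with a hypothetical `safe_remove()` helper name: check that the name exists in the environment before removing it, so the cleanup is a no-op rather than an error when called for a name that was already removed.

```r
# Remove `name` from `envir` only if it is actually bound there.
# Returns TRUE if something was removed, FALSE otherwise.
safe_remove <- function(name, envir) {
  if (exists(name, envir = envir, inherits = FALSE)) {
    rm(list = name, envir = envir)
    TRUE
  } else {
    FALSE
  }
}
```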

Otherwise, the parallel safety implementation looks solid.

self$running <- new.env(parent = emptyenv())
},

start_test = function(context, test) {
Member

@hadley hadley Sep 9, 2025


I don't know if it's important, but to fully handle nested tests (which always existed via describe() and it() but are now fleshed out for test_that() too), start_test() might be called multiple times before end_test(); i.e. tests are a stack now.

Member Author


Yes, that is actually how the parallel reporting works. We call start_file() and start_test() every time we get a result from a subprocess.

But do you also call end_test() multiple times, or no?

Member


Right, each test always generates exactly one start_test() and one end_test(), but there might be multiple starts before you get to the matching ends. There's a tests/testthat/reporter/nested.R you can use to test if needed.
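The stack semantics described here can be sketched as follows. This is an illustrative standalone sketch, not testthat's reporter code: with nested tests, several start_test() calls may arrive before the innermost end_test(), and each end must match the most recent unmatched start.

```r
# Tests form a stack: start_test() pushes, end_test() pops the
# innermost test, which must match the most recent start.
test_stack <- character()

start_test <- function(test) {
  test_stack <<- c(test_stack, test)
}

end_test <- function(test) {
  stopifnot(length(test_stack) > 0, tail(test_stack, 1) == test)
  test_stack <<- head(test_stack, -1)
}
```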

Member Author


This is essentially what the new test case I added does as well, in addition to doing all of this "concurrently". So this should be fine. (Also, all tests pass, so that means it is fine, no?)

Member


Yeah, seems fine. I mostly wanted to make sure that you were aware of this change to the reporter API because I keep forgetting about it.

Member Author


TBH, I never really looked at how describe() tests are different, so I indeed wasn't aware of this.

But the logic should be the same as before for non-parallel runs, and parallel runs should now produce the same results as non-parallel runs. This was my reasoning for this PR being correct.
