What happened:
To prevent extreme situations, when the number of replicas in the resource template is zero, the scheduler keeps the number of replicas in each member cluster unchanged (this is implemented by a custom scheduler).
The problem is that when workload.spec.replicas = 0 while rb.spec.clusters still records positive replica counts, the replicas propagated to every member cluster all become zero.
Under normal circumstances the member cluster replicas should be consistent with rb.spec.clusters, which is clearly not the case here.
Work object:
Only when the resource template's replicas are greater than 0 will rb.spec.clusters be merged into the workload:
karmada/pkg/controllers/binding/common.go (lines 70 to 77 in c5365b2):

```go
	if needReviseReplicas(bindingSpec.Replicas, bindingSpec.Placement) {
		if resourceInterpreter.HookEnabled(clonedWorkload.GroupVersionKind(), configv1alpha1.InterpreterOperationReviseReplica) {
			clonedWorkload, err = resourceInterpreter.ReviseReplica(clonedWorkload, int64(targetCluster.Replicas))
			if err != nil {
				klog.Errorf("Failed to revise replica for %s/%s/%s in cluster %s, err is: %v",
					workload.GetKind(), workload.GetNamespace(), workload.GetName(), targetCluster.Name, err)
				errs = append(errs, err)
				continue
```
karmada/pkg/controllers/binding/common.go (lines 315 to 317 in c5365b2):

```go
func needReviseReplicas(replicas int32, placement *policyv1alpha1.Placement) bool {
	return replicas > 0 && placement != nil && placement.ReplicaSchedulingType() == policyv1alpha1.ReplicaSchedulingTypeDivided
}
```
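To make the failure mode explicit, here is a minimal, self-contained sketch of the logic above. The predicate is a simplified local copy of needReviseReplicas (the placement check is reduced to a plain boolean so the example runs on its own), and the replica values are hypothetical:

```go
package main

import "fmt"

// Simplified local copy of the predicate shown in common.go, for illustration only.
func needReviseReplicas(replicas int32, dividedScheduling bool) bool {
	return replicas > 0 && dividedScheduling
}

func main() {
	templateReplicas := int32(0)  // workload.spec.replicas in the resource template
	scheduledReplicas := int32(2) // replicas the scheduler recorded in rb.spec.clusters

	// By default the Work object inherits the template's replicas.
	workReplicas := templateReplicas
	if needReviseReplicas(templateReplicas, true) {
		// Never reached when templateReplicas == 0, so ReviseReplica is skipped
		// and the scheduled value is lost.
		workReplicas = scheduledReplicas
	}
	fmt.Printf("rb.spec.clusters: %d, Work object: %d\n", scheduledReplicas, workReplicas)
	// Prints: rb.spec.clusters: 2, Work object: 0
}
```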
What you expected to happen:
The Work object's replicas should be the same as the replica counts recorded in rb.spec.clusters.
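One possible direction, sketched here purely for illustration and not as the project's actual fix, would be to stop gating the revise step on the template's replica count, so that for divided scheduling the Work object always follows rb.spec.clusters:

```go
// Hypothetical variant of needReviseReplicas, illustration only: the
// `replicas > 0` guard is dropped so ReviseReplica also runs when the
// resource template declares zero replicas.
func needReviseReplicas(placement *policyv1alpha1.Placement) bool {
	return placement != nil && placement.ReplicaSchedulingType() == policyv1alpha1.ReplicaSchedulingTypeDivided
}
```

Whether dropping the guard is the right trade-off is for the maintainers to decide; the sketch is only meant to show where the behaviour originates.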
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Karmada version:
- kubectl-karmada or karmadactl version (the result of `kubectl-karmada version` or `karmadactl version`):
- Others: