Description
Expected Behavior
We expect the metric `tekton_pipelines_controller_pipelinerun_duration_seconds` to always (consistently, i.e. for every single scrape request) report a value for every single PipelineRun, as long as that PipelineRun exists in k8s, when using the `lastvalue` setting (see the config provided at the end).
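For illustration, this is roughly how we verify that expectation against the controller's /metrics endpoint (a sketch; the port-forward target and the placeholder PipelineRun name are ours, not taken from the Tekton docs):

```sh
# Port-forward the controller's metrics port (default 9090) in another terminal:
#   kubectl -n tekton-pipelines port-forward deploy/tekton-pipelines-controller 9090:9090

# Expected: for a PipelineRun that still exists in the cluster, every scrape
# contains a duration sample that carries its name as a label value.
curl -s http://localhost:9090/metrics \
  | grep tekton_pipelines_controller_pipelinerun_duration_seconds \
  | grep my-pipelinerun-name    # "my-pipelinerun-name" is a placeholder
```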
Actual Behavior
While the values are part of the initial scrapes, they disappear over time. For example, a PipelineRun that was started in the morning yields metrics for several hours, but after a certain point in time it yields no more metrics (verified by checking the /metrics endpoint of the pipelines-controller, default port 9090).
A picture says more than a thousand words:

[screenshot: Prometheus graph of the PipelineRun duration metric, showing an apparent gap of around 30 minutes]
When the metrics are visualized in Prometheus (picture above), you would believe that during the gap in the middle - a duration of around 30 minutes - there was no PipelineRun in the cluster. This is not true! There were plenty; they are just no longer contained in the metrics output.
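A way to confirm this without the graph is an instant query against Prometheus for one specific PipelineRun that still exists in the cluster (sketch only; the Prometheus address and the `pipelinerun` label name are assumptions, use whatever labels your scrape output actually shows):

```sh
# Instant query against Prometheus; an empty result for a PipelineRun that
# still exists in k8s demonstrates the problem.
curl -s 'http://localhost:9091/api/v1/query' \
  --data-urlencode 'query=tekton_pipelines_controller_pipelinerun_duration_seconds{pipelinerun="my-pipelinerun-name"}'
```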
Steps to Reproduce the Problem
- configure metrics to use the `lastvalue` setting (like in the example provided at the bottom of this post) - recommended: also set up Prometheus to scrape them, as that makes this easier to visualize
- produce plenty of PipelineRuns throughout the day
- do some cleanups of PipelineRuns throughout the day - but never go to zero. We do cleanups like this, but potentially this is also reproducible without cleanups!
- Find the ends of the time series.
If you set up Prometheus (see the recommendation in the first step), you essentially just need to look at the graph.
If you find a gap like in the picture above, the issue is reproduced. (Clarification: it looks like a gap, but it is not an actual gap - an actual gap would mean the same time series continues later, which it does not; those are new PipelineRuns, i.e. new time series!)
Otherwise (not using Prometheus), the procedure is: for each PipelineRun in k8s, check whether it is also part of the latest scrape.
If an instance is found that exists in k8s but not in the metrics output, the problem is reproduced. (A sketch of this check is shown below.)
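A minimal sketch of that non-Prometheus check (assuming the port-forward from the earlier sketch is still running; the plain substring match on the PipelineRun name is a simplification):

```sh
# Take one scrape and keep only the duration metric lines.
curl -s http://localhost:9090/metrics \
  | grep tekton_pipelines_controller_pipelinerun_duration_seconds > /tmp/scrape.txt

# For every PipelineRun that still exists in the cluster, check whether it is
# part of that scrape, and print the ones that are missing.
for pr in $(kubectl get pipelineruns -A -o jsonpath='{.items[*].metadata.name}'); do
  grep -q "$pr" /tmp/scrape.txt || echo "in k8s but missing from metrics: $pr"
done
```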
Additional Info
- Kubernetes version:
Output of `kubectl version`:
<pre>Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:53:42Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.10", GitCommit:"0fa26aea1d5c21516b0d96fea95a77d8d429912e", GitTreeState:"clean", BuildDate:"2024-01-17T13:38:41Z", GoVersion:"go1.20.13", Compiler:"gc", Platform:"linux/amd64"}
</pre>
- Tekton Pipeline version:
Output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`:
Client version: 0.36.0
Chains version: v0.20.0
Pipeline version: v0.56.1
Triggers version: v0.26.1
Dashboard version: v0.43.1
Operator version: v0.70.0
- Config info:
We used the following Tekton Operator settings on the `pipeline` section:
  - `metrics.count.enable-reason: false`
  - `metrics.pipelinerun.duration-type: lastvalue`
  - `metrics.pipelinerun.level: pipelinerun`
  - `metrics.taskrun.duration-type: lastvalue`
  - `metrics.taskrun.level: taskrun`
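For completeness, a quick way to check what the Operator actually applied, assuming the default TektonConfig resource name `config`:

```sh
# Print the pipeline-related settings from the TektonConfig custom resource
# (the resource is typically named "config"; adjust if your installation differs).
kubectl get tektonconfig config -o jsonpath='{.spec.pipeline}'
```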