
Add monitor() method for monitoring model performance in production #179

@pplonski

Description


The AutoML API should be extended with a monitor() method:

  • monitor() should track model performance on new data
  • it should compare the prediction distribution on new data with the distribution from training (out-of-folds predictions)
  • it should detect outliers in new data
  • it should detect data drift in new data (see the sketch after this list)
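A minimal sketch of the kinds of checks monitor() could run, assuming scikit-learn and SciPy are available and that feature data comes as a pandas DataFrame. These helper names are illustrative, not part of the AutoML API: a two-sample Kolmogorov-Smirnov test compares distributions, and IsolationForest flags outlier rows.

```python
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest


def check_prediction_drift(oof_predictions, new_predictions, alpha=0.05):
    """Compare the prediction distribution on new data with the out-of-folds predictions."""
    statistic, p_value = ks_2samp(oof_predictions, new_predictions)
    if p_value < alpha:
        return f"Prediction distribution drift detected (KS={statistic:.3f}, p={p_value:.3g})"
    return None


def check_outliers(X_train, X_new, contamination="auto"):
    """Flag rows in the new data that look unlike the training data."""
    detector = IsolationForest(contamination=contamination, random_state=42)
    detector.fit(X_train)
    n_outliers = int((detector.predict(X_new) == -1).sum())  # -1 marks outliers
    if n_outliers > 0:
        return f"{n_outliers} outlier rows detected in new data"
    return None


def check_feature_drift(X_train, X_new, alpha=0.05):
    """Per-column KS test between training and new data (numeric features)."""
    warnings = []
    for column in X_train.columns:
        _, p_value = ks_2samp(X_train[column], X_new[column])
        if p_value < alpha:
            warnings.append(f"Data drift detected in feature '{column}' (p={p_value:.3g})")
    return warnings
```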

I propose the following arguments for monitor():

  • X (new test data)
  • y (new test data targets)
  • y_predicted (predictions from the AutoML)

monitor() should return a report about incidents in the new data, for example a list of warnings with explanations of what the problem was, as in the sketch below.
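A rough sketch of how the method might be wired together, reusing the helper checks above. Here _oof_predictions and _X_train are placeholder attributes standing in for whatever training statistics AutoML would need to store internally; only automl.predict(X) is the existing API.

```python
def monitor(automl, X, y=None, y_predicted=None):
    """Hypothetical monitor(): return a list of warnings about incidents in new data."""
    if y_predicted is None:
        y_predicted = automl.predict(X)  # existing AutoML API
    warnings = []
    # _oof_predictions and _X_train are placeholders, not real AutoML attributes
    for message in (
        check_prediction_drift(automl._oof_predictions, y_predicted),
        check_outliers(automl._X_train, X),
    ):
        if message is not None:
            warnings.append(message)
    warnings.extend(check_feature_drift(automl._X_train, X))
    return warnings


# Example usage:
# report = monitor(automl, X_new, y_new)
# for warning in report:
#     print(warning)
```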


Labels

enhancement (New feature or request), help wanted (Extra attention is needed)
