Use case: As a cluster operator, I would like to run many frameworks (possibly multiple Chronos instances) on the same Mesos cluster. I'd like to mitigate offer starvation; that is, schedulers with work to do should receive offers from the Mesos master.
Per the SchedulerDriver documentation, Chronos can use the overloaded declineOffer to set filters:
/**
* Declines an offer in its entirety and applies the specified
* filters on the resources (see mesos.proto for a description of
* Filters). Note that this can be done at any time, it is not
* necessary to do this within the {@link Scheduler#resourceOffers}
* callback.
*
* @param offerId The ID of the offer to be declined.
* @param filters The filters to set for any remaining resources.
*
* @return The state of the driver after the call.
*
* @see OfferID
* @see Filters
* @see Status
*/
Status declineOffer(OfferID offerId, Filters filters);
and later clear them with reviveOffers:
/**
* Removes all filters, previously set by the framework (via {@link
* #launchTasks}). This enables the framework to receive offers
* from those filtered slaves.
*
* @return The state of the driver after the call.
*
* @see Status
*/
Status reviveOffers();
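For illustration, a minimal sketch (not Chronos code) of declining an offer with a filter via the Mesos Java bindings. refuse_seconds is the Filters field defined in mesos.proto (default 5.0 seconds); the one-hour value here is arbitrary:

```java
import org.apache.mesos.Protos.Filters;
import org.apache.mesos.Protos.OfferID;
import org.apache.mesos.SchedulerDriver;

class DeclineWithFilter {
    // refuse_seconds tells the master how long to withhold this
    // agent's resources from the framework after the decline.
    static final Filters ONE_HOUR = Filters.newBuilder()
            .setRefuseSeconds(3600.0)
            .build();

    static void decline(SchedulerDriver driver, OfferID offerId) {
        driver.declineOffer(offerId, ONE_HOUR);
    }
}
```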
Proposal: When the task queue becomes empty, decline subsequent offers with a long filter timeout (say, the maximum double value). When the task queue becomes non-empty again, invoke reviveOffers.
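A rough sketch of how the proposal could look inside a scheduler. The class and method names are hypothetical, not the actual Chronos scheduler, and a week-long finite timeout stands in for "max double value" on the assumption that a large finite refuse_seconds is safer for the master's time arithmetic:

```java
import java.util.List;
import org.apache.mesos.Protos.Filters;
import org.apache.mesos.Protos.Offer;
import org.apache.mesos.SchedulerDriver;

// Hypothetical sketch of the proposal; the caller decides when the
// task queue is empty or has just become non-empty.
class StarvationAwareOffers {
    // Long refusal used while the task queue is empty.
    private static final Filters LONG_REFUSAL = Filters.newBuilder()
            .setRefuseSeconds(7 * 24 * 3600.0)   // one week
            .build();

    private boolean suppressed = false;

    // Called from Scheduler#resourceOffers when there is nothing to
    // launch: decline each offer with the long filter timeout.
    void declineAll(SchedulerDriver driver, List<Offer> offers) {
        for (Offer offer : offers) {
            driver.declineOffer(offer.getId(), LONG_REFUSAL);
        }
        suppressed = true;
    }

    // Called when the task queue transitions from empty to non-empty:
    // reviveOffers clears all previously set filters, so the master
    // resumes sending offers from the filtered agents.
    void onTaskEnqueued(SchedulerDriver driver) {
        if (suppressed) {
            driver.reviveOffers();
            suppressed = false;
        }
    }
}
```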
NOTE: This issue is analogous to d2iq-archive/marathon#1931