Use case: As a cluster operator, I would like to run many frameworks (possibly multiple Marathon instances) on the same Mesos cluster. I'd like to mitigate offer starvation; that is, schedulers with work to do should receive offers from the Mesos master.
From the SchedulerDriver documentation, Marathon can use the overloaded declineOffer to set filters:
/**
 * Declines an offer in its entirety and applies the specified
 * filters on the resources (see mesos.proto for a description of
 * Filters). Note that this can be done at any time, it is not
 * necessary to do this within the {@link Scheduler#resourceOffers}
 * callback.
 *
 * @param offerId The ID of the offer to be declined.
 * @param filters The filters to set for any remaining resources.
 *
 * @return The state of the driver after the call.
 *
 * @see OfferID
 * @see Filters
 * @see Status
 */
Status declineOffer(OfferID offerId, Filters filters);
and clear them later on with reviveOffers:
/**
* Removes all filters, previously set by the framework (via {@link
* #launchTasks}). This enables the framework to receive offers
* from those filtered slaves.
*
* @return The state of the driver after the call.
*
* @see Status
*/
Status reviveOffers();
Proposal: When the task queue becomes empty, decline subsequent offers with a long timeout (say, the maximum double value). When the task queue becomes non-empty, invoke reviveOffers (per offline conversation with @kolloch, @aquamatthias, and @gkleiman).
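A minimal sketch of this proposal against the Mesos Java API, assuming a hypothetical task queue and launch helper standing in for Marathon's real launch-queue logic; only declineOffer, reviveOffers, and the Filters refuse_seconds field come from the actual API:

import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.mesos.Protos.Filters;
import org.apache.mesos.Protos.Offer;
import org.apache.mesos.SchedulerDriver;

class OfferSuppressingScheduler {
    // Decline filter with a (near) infinite refuse interval, used while there is no work.
    private static final Filters LONG_DECLINE =
        Filters.newBuilder().setRefuseSeconds(Double.MAX_VALUE).build();

    // Hypothetical stand-in for Marathon's internal launch queue.
    private final Queue<AppTask> taskQueue = new ConcurrentLinkedQueue<>();

    public void resourceOffers(SchedulerDriver driver, List<Offer> offers) {
        for (Offer offer : offers) {
            if (taskQueue.isEmpty()) {
                // Nothing to launch: decline with a long filter so the master
                // stops sending offers for these resources.
                driver.declineOffer(offer.getId(), LONG_DECLINE);
            } else {
                launch(driver, offer, taskQueue.poll());
            }
        }
    }

    public void queueTask(SchedulerDriver driver, AppTask task) {
        boolean wasEmpty = taskQueue.isEmpty();
        taskQueue.add(task);
        if (wasEmpty) {
            // Queue just became non-empty: clear all previously set filters
            // so that offers start flowing again. (Race handling omitted in
            // this sketch.)
            driver.reviveOffers();
        }
    }

    private void launch(SchedulerDriver driver, Offer offer, AppTask task) {
        // Marathon's actual offer-matching and launch logic would go here.
    }

    // Hypothetical placeholder for whatever Marathon queues internally.
    static final class AppTask {}
}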