
Refactoring the add_metrics_to_eval_loaders function to accept a list of metric names instead of a dictionary of metrics. #938

Merged
Changes from all commits (32 commits)
04dd334 Merge pull request #1 from mosaicml/main (ShashankMosaicML, Oct 9, 2023)
87b2fdc Merge pull request #8 from mosaicml/main (ShashankMosaicML, Oct 27, 2023)
c9a42e4 Merge pull request #12 from mosaicml/main (ShashankMosaicML, Nov 6, 2023)
ddea9ee Merge branch 'mosaicml:main' into main (ShashankMosaicML, Nov 6, 2023)
0bcd8ee Merge pull request #13 from mosaicml/main (ShashankMosaicML, Nov 8, 2023)
f209b58 Merge pull request #14 from mosaicml/main (ShashankMosaicML, Nov 14, 2023)
ec4378d Merge pull request #15 from mosaicml/main (ShashankMosaicML, Nov 15, 2023)
b436706 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Dec 2, 2023)
bcace03 .. (ShashankMosaicML, Dec 8, 2023)
cf4aa58 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Dec 11, 2023)
7c35ce8 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Dec 13, 2023)
0a8ebfb .. (ShashankMosaicML, Dec 15, 2023)
6f18a33 .. (ShashankMosaicML, Dec 15, 2023)
f42d585 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Dec 16, 2023)
2f3f53c Merge branch 'mosaicml:main' into main (ShashankMosaicML, Dec 19, 2023)
77b975f .. (ShashankMosaicML, Dec 20, 2023)
e28cfbe Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 1, 2024)
800c6f8 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 2, 2024)
922ef13 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 3, 2024)
d36f5f7 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 5, 2024)
d524531 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 17, 2024)
2b2f3d8 .. (ShashankMosaicML, Jan 17, 2024)
25795b5 undoing prev commit (ShashankMosaicML, Jan 17, 2024)
624a339 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 18, 2024)
1c25b98 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Jan 29, 2024)
d25cf2e Merge branch 'mosaicml:main' into main (ShashankMosaicML, Feb 1, 2024)
1cc4505 Merge branch 'mosaicml:main' into main (ShashankMosaicML, Feb 3, 2024)
b324d76 Refactoring the function to accept list of metric names instead of d… (ShashankMosaicML, Feb 3, 2024)
83b8bfc .. (ShashankMosaicML, Feb 3, 2024)
2e30040 .. (ShashankMosaicML, Feb 3, 2024)
f31aef5 .. (ShashankMosaicML, Feb 3, 2024)
3b323bd .. (ShashankMosaicML, Feb 3, 2024)
4 changes: 1 addition & 3 deletions llmfoundry/utils/builders.py
@@ -28,7 +28,6 @@
 from omegaconf import DictConfig, ListConfig
 from omegaconf import OmegaConf as om
 from torch.optim.optimizer import Optimizer
-from torchmetrics import Metric
 from transformers import AutoTokenizer, PreTrainedTokenizerBase
 
 from llmfoundry.callbacks import (AsyncEval, EvalGauntlet, FDiffMetrics,
@@ -108,9 +107,8 @@ def build_eval_loaders(
 
 def add_metrics_to_eval_loaders(
     evaluators: List[Evaluator],
-    metrics: Dict[str, Metric],
+    metric_names: List[str],
 ) -> List[Evaluator]:
-    metric_names = list(metrics.keys())
     eval_loaders, other_evaluators = [], []
     for evaluator in evaluators:
         if evaluator.metric_names == []:
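For readers without the full file open, the refactored function plausibly reads as below once the truncated hunk is filled in. Everything past the lines shown in the diff is a sketch inferred from the updated test, not verbatim source:

```python
from typing import List

from composer.core import Evaluator


def add_metrics_to_eval_loaders(
    evaluators: List[Evaluator],
    metric_names: List[str],
) -> List[Evaluator]:
    """Tag bare eval loaders with metric names (body below the hunk is a sketch).

    Evaluators that already declare metric_names are left untouched; loaders
    with an empty metric_names list receive the supplied names.
    """
    eval_loaders, other_evaluators = [], []
    for evaluator in evaluators:
        if evaluator.metric_names == []:
            # Bare eval loader: attach the training metrics by name.
            evaluator.metric_names = metric_names
            eval_loaders.append(evaluator)
        else:
            other_evaluators.append(evaluator)
    # Assumed ordering, consistent with the updated test below: evaluators
    # that were already configured come first, freshly tagged loaders last.
    return other_evaluators + eval_loaders
```

Passing plain names rather than Metric objects decouples this function from torchmetrics, which is what allows the `from torchmetrics import Metric` import above to be deleted.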
3 changes: 2 additions & 1 deletion scripts/eval/eval.py
@@ -184,7 +184,8 @@ def evaluate_model(
     # Now add the eval metrics
     if eval_loader_config is not None:
         train_metrics = composer_model.get_metrics(is_train=True)
-        evaluators = add_metrics_to_eval_loaders(evaluators, train_metrics)
+        evaluators = add_metrics_to_eval_loaders(evaluators,
+                                                 list(train_metrics.keys()))
 
     if eval_gauntlet_df is None and eval_gauntlet_callback is not None:
         eval_gauntlet_df = pd.DataFrame(
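For context, `composer_model.get_metrics(is_train=True)` returns a dictionary keyed by metric name, and the new signature only needs those keys. A minimal runnable sketch of the call-site pattern, with illustrative metric names that are an assumption rather than part of this diff:

```python
from typing import Dict, List

# Hypothetical stand-in for what composer_model.get_metrics(is_train=True)
# returns: a name -> Metric mapping (values are torchmetrics Metric
# instances in practice; plain objects here keep the sketch runnable).
train_metrics: Dict[str, object] = {
    'LanguageCrossEntropy': object(),
    'LanguagePerplexity': object(),
}

# The refactored API takes only the names, so the call site reduces the
# dict to its keys; the Metric objects never cross the function boundary.
metric_names: List[str] = list(train_metrics.keys())
assert metric_names == ['LanguageCrossEntropy', 'LanguagePerplexity']
```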
3 changes: 2 additions & 1 deletion scripts/train/train.py
@@ -544,7 +544,8 @@ def main(cfg: DictConfig) -> Trainer:
     # Now add the eval metrics
     if eval_loader_config is not None and not use_async_eval:
         train_metrics = model.get_metrics(is_train=True)
-        evaluators = add_metrics_to_eval_loaders(evaluators, train_metrics)
+        evaluators = add_metrics_to_eval_loaders(evaluators,
+                                                 list(train_metrics.keys()))
 
     # Build the Trainer
     log.info('Building trainer...')
8 changes: 1 addition & 7 deletions tests/utils/test_builders.py
@@ -335,13 +335,7 @@ def test_add_metrics_to_eval_loaders():
         )
     ]
 
-    new_evaluators = add_metrics_to_eval_loaders(
-        evaluators,
-        {
-            'new1': 'foo',
-            'new2': 'bar'
-        },  # type: ignore
-    )
+    new_evaluators = add_metrics_to_eval_loaders(evaluators, ['new1', 'new2'])
     assert len(new_evaluators) == 3
 
     assert new_evaluators[0].label == 'second'
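To make the behavior the updated test exercises runnable in isolation, here is a dependency-free sketch. `StubEvaluator` and `tag_bare_loaders` are hypothetical stand-ins for composer.core.Evaluator and the refactored builders function, mirroring the logic sketched earlier:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class StubEvaluator:
    """Hypothetical stand-in for composer.core.Evaluator (not the real class)."""
    label: str
    metric_names: List[str] = field(default_factory=list)


def tag_bare_loaders(evaluators: List[StubEvaluator],
                     metric_names: List[str]) -> List[StubEvaluator]:
    """Assumed logic of the refactored add_metrics_to_eval_loaders."""
    eval_loaders, other_evaluators = [], []
    for evaluator in evaluators:
        if evaluator.metric_names == []:
            evaluator.metric_names = metric_names
            eval_loaders.append(evaluator)
        else:
            other_evaluators.append(evaluator)
    return other_evaluators + eval_loaders


evaluators = [
    StubEvaluator('eval1'),                 # bare loader: gets tagged
    StubEvaluator('second', ['accuracy']),  # pre-configured: untouched
    StubEvaluator('eval2'),                 # bare loader: gets tagged
]
new_evaluators = tag_bare_loaders(evaluators, ['new1', 'new2'])

assert len(new_evaluators) == 3
assert new_evaluators[0].label == 'second'  # configured evaluator comes first
assert new_evaluators[1].metric_names == ['new1', 'new2']
assert new_evaluators[2].metric_names == ['new1', 'new2']
```

The diff above asserts exactly this ordering and tagging, now driven by a plain list of names instead of a placeholder dict that needed a `# type: ignore`.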