
Conversation

glenn-jocher (Member) commented Jul 31, 2022

Disable FP16 validation if AMP checks fail or amp=False.

May partially resolve #7908

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Validation during training now respects the AMP setting: FP16 inference is used only when AMP is enabled and its checks pass.

📊 Key Changes

  • The AMP flag is now passed through to the validation step during training, so FP16 validation runs only when amp=True and the AMP checks succeed (a sketch of the mechanism follows this summary).

🎯 Purpose & Impact

  • 🎯 Purpose: Speed up the model validation phase and cut its memory footprint by running it under automatic mixed precision where supported.
  • 💡 Impact: Faster validation and lower memory usage during training on compatible hardware; on GPUs that fail the AMP check (such as the GTX 16xx cards producing NaN tensors in #7908), validation now falls back to FP32 instead of returning NaN results.
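
For context, here is a minimal sketch of the mechanism. The validate helper below is illustrative, not the actual YOLOv5 val.py: the AMP flag computed at training start is forwarded to validation, which enables torch.cuda.amp.autocast only when that flag is true, so a failed AMP check degrades validation to FP32 rather than producing NaNs in FP16.

```python
import torch

def validate(model, dataloader, device, amp=True):
    """Hypothetical validation loop illustrating the gating in this PR.

    The function name and arguments are illustrative, not YOLOv5's API.
    When amp=False (AMP disabled or its checks failed), autocast is a
    no-op and inference runs in full FP32 precision.
    """
    model.eval()
    with torch.inference_mode():
        for imgs, targets in dataloader:
            imgs = imgs.to(device, non_blocking=True)
            # FP16 inference only when the AMP flag is true; otherwise FP32
            with torch.cuda.amp.autocast(enabled=amp):
                preds = model(imgs)
            # ... accumulate detection metrics from preds vs. targets ...
```

In the training script, previously hard-coded FP16 validation then becomes conditional on the AMP sanity check, e.g. amp = check_amp(model) followed by validate(..., amp=amp), where check_amp is YOLOv5's AMP health-check utility.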

glenn-jocher merged commit 59595c1 into master on Jul 31, 2022
glenn-jocher deleted the glenn-jocher-patch-1 branch on Jul 31, 2022 at 02:17
glenn-jocher self-assigned this on Jul 31, 2022
ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this pull request Sep 8, 2022

Development

Successfully merging this pull request may close these issues.

NaN tensor values problem for GTX16xx users (no problem on other devices)
