Separate trust_remote_code args #152
Conversation
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review.
This LGTM and seems to only impact the finetune pathway. One suggestion @kylesayrs is to check out a branch of https://github.com/neuralmagic/llm-compressor-testing/tree/main/.github/workflows and update it so that we check out this branch instead of main. We can then kick off the weekly and nightly testing to make sure this doesn't cause any other impacts on args. Let me know if this is unclear. The tests should finish in about an hour each.
Initial tests were using a stale cache. In order to clear the HF dataset caches, you must remove two directories.
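A minimal sketch of the cache clearing described above. The comment does not name the two directories; the paths below are an assumption based on Hugging Face's default cache layout (the datasets cache and the dynamically loaded dataset modules cache), so adjust them if `HF_HOME` or `HF_DATASETS_CACHE` point elsewhere.

```shell
# Assumed locations of the two HF dataset cache directories; not stated in
# the comment itself. Respects HF_HOME / HF_DATASETS_CACHE overrides.
rm -rf "${HF_DATASETS_CACHE:-$HOME/.cache/huggingface/datasets}"
rm -rf "${HF_HOME:-$HOME/.cache/huggingface}/modules/datasets_modules"
```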
LGTM, but agreed with Dipika: let's run the nightly and weekly tests on this change before merging, since we're changing argument names.
Purpose
- Fix `tests/llmcompressor/transformers/finetune/test_oneshot_then_finetune`

Changes
- Rename `ModelArgs.trust_remote_code` to `ModelArgs.trust_remote_code_model` to avoid a name conflict during arg parsing
- Rename the data-side argument to `DataTrainingArgs.trust_remote_code_data`
- Add the `trust_remote_code_data` arg to tests

Testing
- `python3 -m pytest tests/llmcompressor/transformers/finetune`
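The rename in Changes can be illustrated with a small sketch. This is not llm-compressor's actual code: the dataclasses below are simplified stand-ins, and plain `argparse` is used to mimic how a dataclass-driven parser (such as `HfArgumentParser`) registers one CLI flag per field. If both dataclasses kept a field named `trust_remote_code`, the second `add_argument("--trust_remote_code", ...)` call would raise `argparse.ArgumentError`; the suffixed names avoid the collision.

```python
import argparse
from dataclasses import dataclass, fields


# Illustrative stand-ins for the PR's argument dataclasses (names and
# fields simplified; the real classes live in the finetune pathway).
@dataclass
class ModelArgs:
    trust_remote_code_model: bool = False  # renamed from trust_remote_code


@dataclass
class DataTrainingArgs:
    trust_remote_code_data: bool = False  # renamed from trust_remote_code


def build_parser() -> argparse.ArgumentParser:
    # One flag per dataclass field, as a dataclass-driven parser would do.
    # Distinct field names mean distinct flags, so no ArgumentError.
    parser = argparse.ArgumentParser()
    for cls in (ModelArgs, DataTrainingArgs):
        for f in fields(cls):
            parser.add_argument(f"--{f.name}", action="store_true")
    return parser


if __name__ == "__main__":
    ns = build_parser().parse_args(["--trust_remote_code_data"])
    print(ns.trust_remote_code_model, ns.trust_remote_code_data)  # False True
```

Parsing `--trust_remote_code_data` now sets only the data-side flag, which is why the tests need the new argument name.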