PS D:\CodeFile\Python\fineTune> & C:/Users/29614/AppData/Local/Programs/Python/Python313/python.exe d:/CodeFile/Python/fineTune/ft.py
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth_zoo\gradient_checkpointing.py:339: UserWarning: expandable_segments not supported on this platform (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\pytorch\c10/cuda/CUDAAllocatorConfig.h:35.)
GPU_BUFFERS = tuple([torch.empty(22562048, dtype = dtype, device = f"{DEVICE_TYPE}:{i}") for i in range(n_gpus)])
==((====))== Unsloth 2025.8.6: Fast Qwen3 patching. Transformers: 4.55.2.
\ /| NVIDIA GeForce RTX 4070 SUPER. Num GPUs = 1. Max memory: 11.994 GB. Platform: Windows.
O^O/ _/ \ Torch: 2.8.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.4.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.32.post2. FA2 = False]
"--" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Making model.base_model.model.model require gradients
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 386.65 examples/s]
Unsloth: Tokenizing ["text"] (num_proc=2): 0%| | 0/5 [00:00<?, ? examples/s]
(the patching banner, the expandable_segments warning, and the model/dataset setup lines above repeat three more times here as child processes start up)
Traceback (most recent call last):
File "", line 1, in
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1028)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="mp_main")
File "", line 287, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in init
super().init(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in init
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in init
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
        if __name__ == '__main__':
            freeze_support()
            ...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
To fix this issue, refer to the "Safe importing of main module"
section in https://docs.python.org/3/library/multiprocessing.html
(the Map progress bar and an identical RuntimeError traceback then repeat for every additional spawned worker, interleaved with further copies of the Unsloth startup banner)
Code:
import unsloth
from unsloth import FastModel
from unsloth.chat_templates import get_chat_template
import torch
from trl import SFTTrainer, SFTConfig
from transformers import TrainingArguments
from datasets import Dataset
import json

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 1,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4, # Use GA to mimic batch size!
        warmup_steps = 5,
        # num_train_epochs = 1, # Set this for 1 full training run.
        max_steps = 60,
        learning_rate = 2e-4, # Reduce to 2e-5 for long training runs
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        report_to = "none", # Use this for WandB etc
    ),
)

trainer_stats = trainer.train()
model.save_pretrained("lora_model")  # Local saving
tokenizer.save_pretrained("lora_model")
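One detail worth noting: the log shows tokenization running with num_proc=2 even though dataset_num_proc = 1 is passed to SFTTrainer. In recent TRL versions the dataset-preparation options are read from SFTConfig, so (as an untested guess, not confirmed for this exact version combination) moving the setting there may keep dataset.map single-process:

```python
# Hypothetical tweak: put dataset_num_proc on SFTConfig, which is where
# recent TRL versions read dataset-preparation options from.
args = SFTConfig(
    dataset_text_field = "text",
    dataset_num_proc = 1,   # keep dataset.map single-process on Windows
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    max_steps = 60,
)
```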
The error message and code are posted above. It looks like a problem with multiprocess, but I can't find a solution. I'm new to this, so could someone more experienced explain why this is happening?
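For reference, the traceback itself names the standard fix: on Windows, multiprocessing starts workers with "spawn", which re-imports the main script in every worker, so any top-level code that (directly or indirectly) creates worker processes must sit behind a main guard. A minimal sketch of the idiom, using plain multiprocessing rather than Unsloth (in ft.py, the SFTTrainer construction and trainer.train() would go inside main()):

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Everything that starts worker processes (here a Pool, in ft.py the
    # SFTTrainer / dataset.map calls) belongs inside a function that only
    # the parent process executes.
    with mp.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    mp.freeze_support()  # only needed if frozen into an .exe; harmless otherwise
    print(main())        # spawned workers re-import this file but skip this block
```

When a worker re-imports the file, `__name__` is not `"__main__"`, so the guarded block is skipped and the RuntimeError from the bootstrapping check never fires.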
PS D:\CodeFile\Python\fineTune> & C:/Users/29614/AppData/Local/Programs/Python/Python313/python.exe d:/CodeFile/Python/fineTune/ft.py
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth_zoo\gradient_checkpointing.py:339: UserWarning: expandable_segments not supported on this platform (Triggered internally at C:\actions-runner_work\pytorch\pytorch\pytorch\c10/cuda/CUDAAllocatorConfig.h:35.)
GPU_BUFFERS = tuple([torch.empty(22562048, dtype = dtype, device = f"{DEVICE_TYPE}:{i}") for i in range(n_gpus)])
==((====))== Unsloth 2025.8.6: Fast Qwen3 patching. Transformers: 4.55.2.
\ /| NVIDIA GeForce RTX 4070 SUPER. Num GPUs = 1. Max memory: 11.994 GB. Platform: Windows.
O^O/ _/ \ Torch: 2.8.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.4.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.32.post2. FA2 = False]
"--" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Making model.base_model.model.model require gradients
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 386.65 examples/s]
Unsloth: Tokenizing ["text"] (num_proc=2):   0%|          | 0/5 [00:00<?, ? examples/s]
[Each of the spawned worker processes then re-prints the Unsloth startup banner, the expandable_segments warning, and the "Unsloth: Making model.base_model.model.model require gradients" message.]
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 406.45 examples/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1028)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="__mp_main__")
File "<frozen runpy>", line 287, in run_path
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in <module>
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in __init__
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in __init__
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 353.21 examples/s]
Traceback (most recent call last):
File "", line 1, in
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1036)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="mp_main")
File "", line 287, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in init
super().init(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in init
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in init
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 383.96 examples/s]
Traceback (most recent call last):
File "", line 1, in
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1464)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="mp_main")
File "", line 287, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in init
super().init(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in init
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in init
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
🦥 Unsloth Zoo will now patch everything to make training faster!
C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth_zoo\gradient_checkpointing.py:339: UserWarning: expandable_segments not supported on this platform (Triggered internally at C:\actions-runner_work\pytorch\pytorch\pytorch\c10/cuda/CUDAAllocatorConfig.h:35.)
GPU_BUFFERS = tuple([torch.empty(22562048, dtype = dtype, device = f"{DEVICE_TYPE}:{i}") for i in range(n_gpus)])
C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth_zoo\gradient_checkpointing.py:339: UserWarning: expandable_segments not supported on this platform (Triggered internally at C:\actions-runner_work\pytorch\pytorch\pytorch\c10/cuda/CUDAAllocatorConfig.h:35.)
GPU_BUFFERS = tuple([torch.empty(22562048, dtype = dtype, device = f"{DEVICE_TYPE}:{i}") for i in range(n_gpus)])
==((====))== Unsloth 2025.8.6: Fast Qwen3 patching. Transformers: 4.55.2.
\ /| NVIDIA GeForce RTX 4070 SUPER. Num GPUs = 1. Max memory: 11.994 GB. Platform: Windows.
O^O/ _/ \ Torch: 2.8.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.4.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.32.post2. FA2 = False]
"--" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
==((====))== Unsloth 2025.8.6: Fast Qwen3 patching. Transformers: 4.55.2.
\ /| NVIDIA GeForce RTX 4070 SUPER. Num GPUs = 1. Max memory: 11.994 GB. Platform: Windows.
O^O/ _/ \ Torch: 2.8.0+cu126. CUDA: 8.9. CUDA Toolkit: 12.6. Triton: 3.4.0
\ / Bfloat16 = TRUE. FA [Xformers = 0.0.32.post2. FA2 = False]
"--" Free license: http://github.com/unslothai/unsloth
Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!
Unsloth: Making
model.base_model.model.model
require gradientsUnsloth: Making
model.base_model.model.model
require gradientsMap: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 396.35 examples/s]
Traceback (most recent call last):
File "", line 1, in
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1028)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="mp_main")
File "", line 287, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in init
super().init(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in init
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in init
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
Map: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 390.51 examples/s]
Traceback (most recent call last):
File "", line 1, in
from multiprocess.spawn import spawn_main; spawn_main(parent_pid=31472, pipe_handle=1036)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 131, in _main
prepare(preparation_data)
~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 246, in prepare
_fixup_main_from_path(data['init_main_from_path'])
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 297, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
run_name="mp_main")
File "", line 287, in run_path
File "", line 98, in _run_module_code
File "", line 88, in _run_code
File "d:\CodeFile\Python\fineTune\ft.py", line 65, in
trainer = SFTTrainer(
model = model,
...<19 lines>...
),
)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\unsloth\trainer.py", line 209, in new_init
original_init(self, *args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 1292, in init
super().init(
~~~~~~~~~~~~~~~~^
model = model,
^^^^^^^^^^^^^^
...<10 lines>...
peft_config = peft_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^
formatting_func = formatting_func,**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 697, in init
train_dataset = self._prepare_dataset(
train_dataset, processing_class, args, args.packing, formatting_func, "train"
)
File "D:\CodeFile\Python\fineTune\unsloth_compiled_cache\UnslothSFTTrainer.py", line 948, in _prepare_dataset
dataset = dataset.map(_tokenize, batched = True, **map_kwargs)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\datasets\arrow_dataset.py", line 3163, in map
with Pool(len(kwargs_per_job)) as pool:
~~~~^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
context=self.get_context())
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 215, in init
self._repopulate_pool()
~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
self._processes,
^^^^^^^^^^^^^^^^
...<3 lines>...
self._maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^
self._wrap_exception)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\pool.py", line 329, in _repopulate_pool_static
w.start()
~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\process.py", line 121, in start
self._popen = self._Popen(self)
~~~~~~~~~~~^^^^^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\context.py", line 337, in _Popen
return Popen(process_obj)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\popen_spawn_win32.py", line 46, in init
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 164, in get_preparation_data
_check_not_importing_main()
~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "C:\Users\29614\AppData\Local\Programs\Python\Python313\Lib\site-packages\multiprocess\spawn.py", line 140, in _check_not_importing_main
raise RuntimeError('''
...<16 lines>...
''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
🦥 Unsloth Zoo will now patch everything to make training faster!
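The RuntimeError above is Python's standard spawn-start guard message: Windows has no fork, so each worker process re-imports the main module, and any code that starts new processes at import time fails (the repeated Unsloth banner lines are those workers re-importing the script). A minimal sketch of the guarded idiom, using plain multiprocessing with illustrative names:

```python
# Minimal demonstration of the spawn-safe idiom the RuntimeError refers
# to. Under the "spawn" start method every worker re-imports this file,
# so process creation must live behind the __main__ guard.
import multiprocessing as mp

def square(x):
    return x * x

def run_pool():
    # Only the parent process calls this (via the guard below);
    # re-importing workers merely define the functions and return.
    with mp.Pool(processes=2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(run_pool())  # [1, 4, 9]
```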
Code:
import unsloth
from unsloth import FastModel
from unsloth.chat_templates import get_chat_template
import torch
from trl import SFTTrainer, SFTConfig
from transformers import TrainingArguments
from datasets import Dataset
import json
max_seq_length = 2048
dtype = None
load_in_4bit = True
model, tokenizer = FastModel.from_pretrained(
    model_name = "F:/model/Qwen3-0.6B",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...",
)
model = FastModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0,
    bias = "none",
    use_gradient_checkpointing = "unsloth",
    random_state = 3407,
    use_rslora = False,
    loftq_config = None,
)
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "qwen-3",
)
def formatting_prompts_func(examples):
    convos = []
    for i in range(len(examples['question'])):
        convo = [
            {"role": "user", "content": examples['question'][i]},
            {"role": "assistant", "content": examples['answer'][i]}
        ]
        convos.append(convo)
    # Apply the chat template and return a "text" column; without this
    # return, map() adds no "text" field for the trainer to consume.
    texts = [tokenizer.apply_chat_template(c, tokenize=False, add_generation_prompt=False)
             for c in convos]
    return {"text": texts}
with open(r"./datasets/test_dataset.jsonl", "r", encoding="utf-8") as f:
    train_data = [json.loads(line) for line in f]
dataset = Dataset.from_list(train_data)
dataset = dataset.map(formatting_prompts_func, batched=True)
trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    dataset_num_proc = 1,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4, # Use GA to mimic batch size!
        warmup_steps = 5,
        # num_train_epochs = 1, # Set this for 1 full training run.
        max_steps = 60,
        learning_rate = 2e-4, # Reduce to 2e-5 for long training runs
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        report_to = "none", # Use this for WandB etc
    ),
)
trainer_stats = trainer.train()
model.save_pretrained("lora_model") # Local saving
tokenizer.save_pretrained("lora_model")
The error message and my code are posted above. It looks like a multiprocess problem, but I haven't been able to find a solution. I'm new to this, so could someone more experienced explain why this happens?
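For reference, the shape of the usual fix is to move all the heavyweight work behind a `__main__` guard, so that worker processes spawned during `dataset.map` can re-import the file without re-running it. A hypothetical restructuring of ft.py (`main()` and `build_conversations()` are my own names, and the body stands in for the model/trainer setup above):

```python
# Hypothetical skeleton for ft.py: everything that loads the model,
# maps the dataset, and trains moves into main(), which only the
# parent process executes.
def build_conversations(rows):
    # Same shape as formatting_prompts_func above, on plain dicts.
    return [
        [{"role": "user", "content": r["question"]},
         {"role": "assistant", "content": r["answer"]}]
        for r in rows
    ]

def main():
    # FastModel.from_pretrained(...), dataset.map(...), and
    # trainer.train() would all go here; spawned dataset workers that
    # re-import this file then only see function definitions.
    rows = [{"question": "2 + 2 = ?", "answer": "4"}]
    return build_conversations(rows)

if __name__ == "__main__":
    main()
```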