Error when trying to initialize MedCLIP model #47
Hi!
Thanks for the amazing work. I just have a small problem when trying to initialize a model to pretrain it. Here is the segment that throws the error:
```python
# Initialize model
model = MedCLIPModel()
model.cuda()
```

And the output is:
```
Traceback (most recent call last):
  File "E:\llmcxr\venv\Lib\site-packages\transformers\modeling_utils.py", line 3897, in from_pretrained
    ).start()
    ^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
                  ^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\context.py", line 337, in _Popen
    return Popen(process_obj)
           ^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 164, in get_preparation_data
    _check_not_importing_main()
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 140, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

        To fix this issue, refer to the "Safe importing of main module"
        section in https://docs.python.org/3/library/multiprocessing.html

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 122, in spawn_main
    exitcode = _main(fd, parent_sentinel)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 131, in _main
    prepare(preparation_data)
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 246, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Program Files\Python312\Lib\multiprocessing\spawn.py", line 297, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen runpy>", line 286, in run_path
  File "<frozen runpy>", line 98, in _run_module_code
  File "<frozen runpy>", line 88, in _run_code
  File "E:\llmcxr\src\train.py", line 36, in <module>
    model = MedCLIPModel()
            ^^^^^^^^^^^^^^
  File "E:\llmcxr\venv\Lib\site-packages\medclip\modeling_medclip.py", line 145, in __init__
    self.text_model = MedCLIPTextModel(proj_bias=False)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\llmcxr\venv\Lib\site-packages\medclip\modeling_medclip.py", line 27, in __init__
    self.model = AutoModel.from_pretrained(self.bert_type, output_hidden_states=True)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\llmcxr\venv\Lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\llmcxr\venv\Lib\site-packages\transformers\modeling_utils.py", line 3941, in from_pretrained
    raise EnvironmentError(
OSError: Can't load the model for 'emilyalsentzer/Bio_ClinicalBERT'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'emilyalsentzer/Bio_ClinicalBERT' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
```

This seems to be a generic error that transformers throws, so I don't have a lot to go on. I am running this on a Windows 11 machine, and I'm sure the pytorch_model.bin file is downloaded. Here is the file tree for the Hugging Face hub cache:
```
C:.
├───.no_exist
│   └───d5892b39a4adaed74b92212a44081509db72f87b
│           added_tokens.json
│           chat_template.jinja
│           model.safetensors
│           model.safetensors.index.json
│           special_tokens_map.json
│           tokenizer.json
│           tokenizer_config.json
│
├───blobs
│       2ea941cc79a6f3d7985ca6991ef4f67dad62af04
│       7803f5e6d2057cb1927d283bde1def0ea3862d48
│       a18c4c260fb5c0978b86658615106d5617050b5f14dac6ceb5e0d8beb2f9f719
│
├───refs
│       main
│
└───snapshots
    └───d5892b39a4adaed74b92212a44081509db72f87b
            config.json
            pytorch_model.bin
            vocab.txt
```

I would really appreciate the help. Thank you!