@amrzv amrzv commented Nov 24, 2025

Hello.
This PR fixes two errors in the beit codebase:

  1. `AttributeError: module 'numpy' has no attribute 'int'` on the line `bool_masked_pos = mask_generator()`:
AttributeError                            Traceback (most recent call last)
/tmp/ipython-input-3264900144.py in <cell line: 0>()
      2 
      3 # create input 2: bool_masked_pos
----> 4 bool_masked_pos = mask_generator()
      5 bool_masked_pos = torch.from_numpy(bool_masked_pos).unsqueeze(0)

1 frames
/content/unilm/beit/masking_generator.py in __call__(self)
     78 
     79     def __call__(self):
---> 80         mask = np.zeros(shape=self.get_shape(), dtype=np.int)
     81         mask_count = 0
     82         while mask_count < self.num_masking_patches:

/usr/local/lib/python3.12/dist-packages/numpy/__init__.py in __getattr__(attr)
    392 
    393         if attr in __former_attrs__:
--> 394             raise AttributeError(__former_attrs__[attr])
    395 
    396         if attr in __expired_attributes__:

AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
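The fix is the one NumPy's own message suggests: replace the removed `np.int` alias with the builtin `int` (or an explicit `np.int64`/`np.int32` to pin the precision). A minimal sketch of the corrected `__call__`, reconstructing only the class shape visible in the traceback (the masking loop is elided, so this sketch returns an all-zero mask):

```python
import numpy as np

class MaskingGenerator:
    """Sketch of the relevant parts of beit/masking_generator.py."""

    def __init__(self, input_size=(14, 14), num_masking_patches=75):
        self.height, self.width = input_size
        self.num_masking_patches = num_masking_patches

    def get_shape(self):
        return self.height, self.width

    def __call__(self):
        # np.int was deprecated in NumPy 1.20 and removed in 1.24;
        # the builtin int is a drop-in replacement with identical behavior.
        mask = np.zeros(shape=self.get_shape(), dtype=int)
        mask_count = 0
        # ... masking loop elided; it fills `mask` in place until
        # mask_count reaches self.num_masking_patches ...
        return mask
```

The default `input_size` and `num_masking_patches` above are illustrative placeholders, not the repository's actual defaults.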
  2. `UnpicklingError` on the line `encoder = load_model("https://cdn.openai.com/dall-e/encoder.pkl", device)`:
UnpicklingError                           Traceback (most recent call last)
/tmp/ipython-input-1047815512.py in <cell line: 0>()
      4 # step 2: get input_ids from OpenAI's DALL-E
      5 device = torch.device('cpu')
----> 6 encoder = load_model("https://cdn.openai.com/dall-e/encoder.pkl", device)

1 frames
/content/unilm/beit/dall_e/__init__.py in load_model(path, device)
     13 
     14         with io.BytesIO(resp.content) as buf:
---> 15             return torch.load(buf, map_location=device)
     16     else:
     17         with open(path, 'rb') as f:

/usr/local/lib/python3.12/dist-packages/torch/serialization.py in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
   1527                         )
   1528                     except pickle.UnpicklingError as e:
-> 1529                         raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
   1530                 return _load(
   1531                     opened_zipfile,

UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. 
	(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
	(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
	WeightsUnpickler error: Unsupported global: GLOBAL dall_e.encoder.Encoder was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dall_e.encoder.Encoder])` or the `torch.serialization.safe_globals([dall_e.encoder.Encoder])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
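PyTorch 2.6 flipped the default of `torch.load`'s `weights_only` argument to `True`, which rejects the pickled `dall_e.encoder.Encoder` class. One way to patch `load_model` in `dall_e/__init__.py` is to pass `weights_only=False` explicitly, which is safe here only because the checkpoint comes from OpenAI's own CDN; this is a sketch of that approach, not necessarily the exact diff in this PR:

```python
import io

import requests
import torch


def load_model(path, device):
    """Load a DALL-E checkpoint from a URL or a local file.

    weights_only=False restores the pre-2.6 torch.load behavior, which
    can execute arbitrary code during unpickling -- only use it for
    checkpoints from a trusted source.
    """
    if path.startswith("http://") or path.startswith("https://"):
        resp = requests.get(path)
        resp.raise_for_status()
        with io.BytesIO(resp.content) as buf:
            return torch.load(buf, map_location=device, weights_only=False)
    else:
        with open(path, "rb") as f:
            return torch.load(f, map_location=device, weights_only=False)
```

Alternatively, the error message's safer suggestion works too: allowlist the class with `torch.serialization.add_safe_globals([dall_e.encoder.Encoder])` and keep `weights_only=True`.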
