
feat: implement nanamin/nanamax #30

Merged
imos merged 3 commits into pfnet-research:master from Ryoya777:fix/implement-nanamin-nanamax
Mar 25, 2026

Conversation

@Ryoya777 Ryoya777 (Collaborator) commented Mar 3, 2026

Closes #28

Implement nanamax and nanamin — NaN-aware maximum/minimum functions that support multiple dimensions via torch.amax/torch.amin.

Motivation

nanmax/nanmin only accept a single int for dim because they rely on torch.max/torch.min (which return indices). nanamax/nanamin use torch.amax/torch.amin instead, allowing dim to be an int or a tuple of ints at the cost of not returning indices.
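As a rough sketch of how such a function can work (an illustration only, not the PR's actual code: the `-inf` masking step, the error message, and the exact structure are assumptions for this sketch):

```python
import math
import typing

import torch


def nanamax(
    x: torch.Tensor,
    dim: typing.Union[None, int, typing.Tuple[int, ...]] = None,
    keepdim: bool = False,
) -> torch.Tensor:
    """NaN-aware amax sketch: ignore NaNs, keep NaN for all-NaN slices."""
    if x.numel() == 0:
        raise RuntimeError("amax(): cannot reduce over an empty tensor")
    # Mask NaNs with -inf so they never win the maximum (sketch assumption).
    filled = torch.where(x.isnan(), torch.full_like(x, -math.inf), x)
    if dim is None:
        y = filled.amax()
        is_invalid = x.isnan().all()
    else:
        y = filled.amax(dim=dim, keepdim=keepdim)
        is_invalid = x.isnan().all(dim=dim, keepdim=keepdim)
    # Restore NaN where every element along the reduced dims was NaN.
    return torch.where(is_invalid, torch.as_tensor(math.nan).to(y), y)


def nanamin(
    x: torch.Tensor,
    dim: typing.Union[None, int, typing.Tuple[int, ...]] = None,
    keepdim: bool = False,
) -> torch.Tensor:
    # Sign-negation delegation, as described in this PR.
    return -nanamax(-x, dim=dim, keepdim=keepdim)
```

Because `torch.amax`/`torch.amin` do the reduction, `dim` may be an int or a tuple of ints, at the cost of not returning indices.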

Changes

  • qfeval_functions/functions/nanamax.py — new function following the nansum.py 3-line pattern
  • qfeval_functions/functions/nanamin.py — delegates to nanamax via sign negation (-nanamax(-x))
  • qfeval_functions/functions/__init__.py — register both functions
  • tests/functions/test_nanamax.py — 43 tests
  • tests/functions/test_nanamin.py — 44 tests

Tests

  • make test: 1390 passed
  • make format: clean
  • make lint: clean (black, isort, pflake8, mypy)

Comment thread qfeval_functions/functions/nanamax.py Outdated
Comment on lines +98 to +99
if x.numel() == 0:
    return torch.as_tensor(math.nan).to(x)
Member:

The dimension handling is off. Please look at dim and keepdim properly, and make the output shape for zero-element inputs match the behavior of sum and the like.

Collaborator (author):

For empty tensors, I changed the code to use torch.sum to build a tensor whose shape is correct for the given dim and keepdim, multiply it by NaN, and return that.
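The shape trick described here can be checked in isolation: torch.sum already implements the dim/keepdim shape rules, so reusing it and multiplying by NaN yields a correctly shaped all-NaN result.

```python
import math

import torch

# An empty tensor with shape (0, 3): reducing dim 0 leaves shape (3,).
x = torch.empty(0, 3)

# sum() handles dim/keepdim shape semantics; multiplying the zero result
# by NaN turns it into an all-NaN tensor of the same shape.
out = x.sum(dim=0) * math.nan
out_keep = x.sum(dim=0, keepdim=True) * math.nan
print(tuple(out.shape))       # (3,)
print(tuple(out_keep.shape))  # (1, 3)
```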

Comment thread qfeval_functions/functions/nanamax.py Outdated
return torch.as_tensor(math.nan).to(x)

# 2. Build a mask for slices with at least one valid (non-NaN) element.
is_valid = (~x.isnan()).sum(dim=dim, keepdim=keepdim) > 0
Member:

It might be better to use any instead of sum.

Collaborator (author):

Changed to any. However, unlike sum(dim=()), any(dim=()) does not reduce over all dimensions, so I added a branch that calls any() with no arguments when dim == ().

@@ -0,0 +1,577 @@
import math
Member:

Since these are amax/amin, I'd like tests for the gradient when multiple elements tie for the maximum/minimum.

Collaborator (author):

Added. The new tests verify that the gradient is distributed evenly, under three conditions: 1D, 2D, and with NaNs present.
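For context, this relies on documented torch.amax behavior: the gradient is distributed evenly among tied maxima (torch.max instead routes it through a single index). A minimal standalone check, separate from the PR's test files:

```python
import torch

# Two elements tie for the maximum; amax splits the gradient evenly,
# so each of the tied elements receives 0.5 and the rest receive 0.
x = torch.tensor([1.0, 3.0, 3.0], requires_grad=True)
x.amax().backward()
print(x.grad.tolist())  # [0.0, 0.5, 0.5]
```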

@Ryoya777 Ryoya777 requested a review from imos March 3, 2026 08:52
Comment thread qfeval_functions/functions/nanamax.py Outdated
"""
# 1. Handle empty tensor (amax raises RuntimeError for numel() == 0).
if x.numel() == 0:
    return x.sum(dim=dim, keepdim=keepdim) * math.nan
Member:

I tried this in torch and numpy; both raise an error, so I think raising a RuntimeError here is fine.

torch:
RuntimeError: max(): Expected reduction dim to be specified for input.numel() == 0. Specify the reduction dim with the 'dim' argument.
numpy:
ValueError: zero-size array to reduction operation maximum which has no identity

Collaborator (author):

Changed the code to raise a RuntimeError when x.numel() == 0.
The error message matches the one from torch.amax.
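For reference, a standalone check of the behavior being matched (only the exception type is asserted, since the exact message can vary across torch versions):

```python
import torch

# A full reduction over an empty tensor raises RuntimeError in torch.
x = torch.empty(0)
try:
    x.amax()
    raised = False
except RuntimeError:
    raised = True
print(raised)  # True
```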

Comment thread qfeval_functions/functions/nanamax.py Outdated

def nanamax(
    x: torch.Tensor,
    dim: typing.Union[int, typing.Tuple[int, ...]] = (),
Member:

I checked torch.amax: passing None means a full reduce, so the annotation below seems right. () means "do not reduce", which is a bit different.

Suggested change
dim: typing.Union[int, typing.Tuple[int, ...]] = (),
dim: typing.Union[None, int, typing.Tuple[int, ...]] = None,

Collaborator (author):

Fixed as suggested (nanamin.py as well).

I used type: ignore[arg-type] because changing the default of dim to None means None can now be passed through to amax(dim=dim), which triggers a mypy arg-type error.
If the ignore is undesirable, this would instead need an if branch on dim is None; I'm happy to change it if you prefer that.
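The if-branch alternative mentioned here could look roughly like the following; nanamax_branch is a hypothetical name, and the sketch only illustrates the typing aspect (it is not NaN-aware):

```python
import typing

import torch


def nanamax_branch(
    x: torch.Tensor,
    dim: typing.Union[None, int, typing.Tuple[int, ...]] = None,
    keepdim: bool = False,
) -> torch.Tensor:
    """Hypothetical variant: branch on dim instead of type: ignore[arg-type]."""
    if dim is None:
        # No-argument amax reduces over all dims; mypy never sees None
        # flowing into the dim parameter.
        return x.amax()
    return x.amax(dim=dim, keepdim=keepdim)
```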

Comment thread qfeval_functions/functions/nanamax.py Outdated
Comment on lines +103 to +106
not_nan = ~x.isnan()
is_valid = (
    not_nan.any() if dim == () else not_nan.any(dim=dim, keepdim=keepdim)
)
Member:

Suggested change
not_nan = ~x.isnan()
is_valid = (
    not_nan.any() if dim == () else not_nan.any(dim=dim, keepdim=keepdim)
)
is_invalid = x.isnan().all(dim=dim, keepdim=keepdim)

Collaborator (author):

Fixed as suggested.
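The suggestion works by De Morgan's law: "at least one non-NaN element" is the exact complement of "every element is NaN". A quick equivalence check:

```python
import torch

x = torch.tensor([[1.0, float("nan")], [float("nan"), float("nan")]])

is_valid = (~x.isnan()).any(dim=1)
is_invalid = x.isnan().all(dim=1)

# The two masks are exact complements of each other.
print(torch.equal(is_invalid, ~is_valid))  # True
```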

Comment thread qfeval_functions/functions/nanamax.py Outdated
)

# 4. Restore NaN for all-NaN slices.
return torch.where(is_valid, y, torch.as_tensor(math.nan).to(y))
Member:

Suggested change
return torch.where(is_valid, y, torch.as_tensor(math.nan).to(y))
return torch.where(is_invalid, torch.as_tensor(math.nan).to(y), y)

Collaborator (author):

Fixed as suggested.

@Ryoya777 Ryoya777 force-pushed the fix/implement-nanamin-nanamax branch from 0130d11 to eb5be04 on March 23, 2026 04:05
@imos imos (Member) left a comment:

LGTM

@imos imos merged commit 812249e into pfnet-research:master Mar 25, 2026
1 check failed


Development

Successfully merging this pull request may close these issues.

implement nanamin/nanamax

2 participants