
Create variables on correct devices and with correct types #292

Merged
denproc merged 4 commits into master from refactor/optimization
Feb 20, 2022

Conversation

@jzakirov (Collaborator) commented Feb 6, 2022

Related to #288

Proposed Changes

  • Propagate the input dtype through the whole computation pipeline and cast intermediate kernels / filters to it.
  • Add tests to verify this for float32 and float64. float16 is not included, because not all required operations are defined on CPU.
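A minimal sketch of the dtype-propagation pattern described above. The `gaussian_kernel` helper here is illustrative, not piq's actual API; the point is that the kernel is built directly in the caller-supplied dtype/device rather than defaulting to float32:

```python
import torch

def gaussian_kernel(kernel_size: int, sigma: float,
                    dtype: torch.dtype = torch.float32,
                    device: torch.device = None) -> torch.Tensor:
    # Build the coordinate grid directly in the target dtype/device,
    # so no implicit float32 intermediate is ever created.
    coords = torch.arange(kernel_size, dtype=dtype, device=device)
    coords = coords - (kernel_size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)  # separable 2D kernel

# The kernel inherits the metric input's dtype and device:
x = torch.rand(1, 1, 8, 8, dtype=torch.float64)
kernel = gaussian_kernel(5, 1.5, dtype=x.dtype, device=x.device)
```

With this pattern a float64 input never passes through a float32 kernel, which is exactly the precision loss the tests check for.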

@codecov codecov bot commented Feb 6, 2022

Codecov Report

Merging #292 (27f9cab) into master (19f0d92) will decrease coverage by 2.33%.
The diff coverage is 100.00%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #292      +/-   ##
==========================================
- Coverage   93.35%   91.02%   -2.34%     
==========================================
  Files          33       33              
  Lines        2289     2284       -5     
==========================================
- Hits         2137     2079      -58     
- Misses        152      205      +53     
Flag Coverage Δ
unittests 91.02% <100.00%> (-2.34%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
piq/brisque.py 99.01% <100.00%> (ø)
piq/dss.py 100.00% <100.00%> (+1.03%) ⬆️
piq/fsim.py 100.00% <100.00%> (ø)
piq/functional/filters.py 100.00% <100.00%> (ø)
piq/ms_ssim.py 100.00% <100.00%> (ø)
piq/ssim.py 100.00% <100.00%> (ø)
piq/vsi.py 100.00% <100.00%> (ø)
piq/gs.py 32.96% <0.00%> (-59.35%) ⬇️

@jzakirov jzakirov marked this pull request as ready for review February 6, 2022 20:56
@denproc (Collaborator) left a comment


Great fix!

  • For BRISQUE, '_natural_scene_statistics' uses a gaussian kernel (line 178), which is generated as 'torch.FloatTensor', losing precision for 'torch.DoubleTensor' inputs. I suggest propagating the type into the kernel generation or using 'torch.float64' as the default for generating kernels.
  • For DSS, I suggest a type assignment in line 90 to avoid an implicit type upgrade in the following loop. The functions '_dct_matrix' and 'gaussian_filter' produce 'torch.FloatTensor's, losing precision when 'torch.DoubleTensor's are used as input. Therefore, I suggest adding type propagation inside them.
  • For FSIM, '_lowpassfilter' and 'scharr_filter' return 'torch.FloatTensor's. Similar suggestion to the previous: propagate types, or generate 'torch.float64' by default with a subsequent reduction to 'torch.float32'. Same for '_construct_filters' in lines 154-158.
  • For MS-SSIM and SSIM, we use a gaussian filter generated as 'torch.FloatTensor' in line 73 (MS-SSIM) and line 64 (SSIM).
  • For VSI, 'scharr_filter' and '_log_gabor' are generated as 'torch.FloatTensor's.
  • Other functionality that wasn't changed by these commits could contain the same issues.

Concluding from the above, 'functional/filters.py' could be upgraded to 'torch.float64' as the default precision, which could be reduced afterwards.
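A sketch of this alternative (float64 as the default kernel precision, reduced afterwards). The `scharr_filter` body below is a stand-in re-implementation for illustration, not piq's actual code:

```python
import torch

def scharr_filter(dtype: torch.dtype = torch.float64) -> torch.Tensor:
    # Generate the kernel in double precision by default; callers
    # reduce the result to the input's dtype afterwards.
    return torch.tensor([[-3., 0., 3.],
                         [-10., 0., 10.],
                         [-3., 0., 3.]], dtype=dtype) / 16

x = torch.rand(4, 4)                   # float32 input
k = scharr_filter().to(dtype=x.dtype)  # reduce to float32 for use
```

The trade-off is that the common float32 path now always pays a downcast, which is the argument raised against this option below.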

@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

Bug A 0 Bugs
Vulnerability A 0 Vulnerabilities
Security Hotspot A 0 Security Hotspots
Code Smell A 0 Code Smells

No coverage information
0.0% Duplication

@jzakirov (Collaborator, Author)

As discussed with Denis and Sergei, we won't create kernels as torch.DoubleTensor by default.

Reasoning:
It's much more common for inputs to be single-precision float tensors, so keeping kernels in float64 would force a cast back to float32 every time a metric is computed.

This PR adds support for float64 inputs; the slight loss of precision due to the kernels' dtype is not considered important.
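The agreed approach can be sketched as follows (the `blur` function and box filter are illustrative, not piq code): kernels stay float32 and are cast to the input's dtype only at call time, so the common float32 path pays nothing.

```python
import torch
import torch.nn.functional as F

# Kernel is created once in float32 (the common case) ...
KERNEL = torch.ones(3, 3) / 9  # illustrative 3x3 box filter

def blur(x: torch.Tensor) -> torch.Tensor:
    # ... and cast to the input's dtype/device per call: a no-op for
    # float32 inputs, an upcast only for the rarer float64 inputs.
    c = x.size(1)
    weight = KERNEL.to(dtype=x.dtype, device=x.device)
    weight = weight[None, None].repeat(c, 1, 1, 1)  # (C, 1, 3, 3)
    return F.conv2d(x, weight, padding=1, groups=c)

y32 = blur(torch.rand(1, 3, 8, 8))
y64 = blur(torch.rand(1, 3, 8, 8, dtype=torch.float64))
```

The float64 path here upcasts the kernel, accepting the slight precision loss of its float32 origin, as described above.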

@denproc please review again and merge.

@denproc (Collaborator) left a comment


LGTM

@denproc denproc merged commit 19e3cb3 into master Feb 20, 2022
@snk4tr snk4tr deleted the refactor/optimization branch February 20, 2022 13:19