@lesserwhirls - the problem as far as I can tell exists in netcdf-java as well.
This is a repeat of this issue, which can probably be closed in favor of this one.
The problem: we're hosting netcdf-classic files in EFS and reading them from EC2, so we're charged per byte read. The charges have been unexpectedly high (an order of magnitude) for reads from our datasets with unlimited dimensions.
I've attached the results of some testing I did locally on my Mac using fs_usage: the input files, the output of those tests, and the Python script I used to count bytes read. The short summary is that the bytes read from a fixed-dimension dataset are about what we expect with both ncdump and ToolsUI, but way too high when reading from a record dataset.
test_disk_reads.zip
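For reference, the byte counting can be done by summing the `B=0x...` fields that fs_usage prints for read syscalls. This is a minimal sketch, not the script from the attachment; the exact fs_usage output format varies by macOS version, so the regex below is an assumption:

```python
import re

# Assumed fs_usage line shape: timestamp, syscall name, then fields like
# F=<fd> and B=0x<hex byte count>. Only read/pread lines are counted.
READ_RE = re.compile(r"\b(?:read|pread)\b.*?\bB=0x([0-9a-fA-F]+)")

def count_bytes_read(lines):
    """Sum the hex byte counts from matching fs_usage read lines."""
    total = 0
    for line in lines:
        m = READ_RE.search(line)
        if m:
            total += int(m.group(1), 16)
    return total

# Hypothetical sample lines in the assumed format:
sample = [
    "10:00:01  read   F=3  B=0x1000  my.nc",
    "10:00:01  pread  F=3  B=0x200   my.nc",
    "10:00:02  open   F=4            other.nc",
]
print(count_bytes_read(sample))  # 0x1000 + 0x200 = 4608
```

Comparing that total against the logical size of the variables requested is what surfaces the order-of-magnitude overshoot on record datasets.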