ci(parquet/pqarrow): integration tests for reading shredded variants #455
Conversation
@zeroshade I regenerated the files with the Variant logical type (apache/parquet-testing#91). Can you retest it?
I'm away for the rest of this week but I can retest on Tuesday. The regeneration of the tests wouldn't fix the rest of the issues I listed, right?
Both
Erroring out to enforce that it's required in the schema seems reasonable to me.
We will generate the schema first, which will have both.
This is the same as test case 43. My understanding is that if the writer writes wrong data, the reader may only read the
While the spec states that
The section you quoted states that the partially shredded object must never contain the fields and that a reader may assume that shredded fields aren't present in the
Correct, the spec states that if the
The spec says that's a valid thing to do, but it also says that this must never happen, and it doesn't definitively state what the behavior in this case should be; only that it may be inconsistent. As I said above, if the intent is that the data in the
@rdblue Erroring out to enforce that it's required in the schema seems reasonable to me. Do you remember why we had such a positive case?
My preference would be to relax the spec for this issue. It doesn't seem like there's much benefit to enforcing it on the read side, and it's easy to imagine a writer failing to enforce it in some cases where it usually adds a
+1. IIRC, these wordings are on purpose, to not complicate the reader-side implementation w.r.t. reading values and consuming column stats directly from
I think the spec tried to be careful with wording, but there is a lot of semantic overlap between required/missing/optional. Having a glossary for these terms and doing an audit of the spec to make sure they are used consistently would help.
Another approach is to update the wording in the spec so the reader doesn't need to do such a check.
I made a small change to the spec to clarify (apache/parquet-format#512). The rest, I feel, should be aligned with what the spec describes.
@zeroshade, sorry for the confusion here. You're right about a lot of those test cases. They are not allowed by the spec. The implementation I generated these cases from is defensive and tries to read if it can rather than producing errors. I'd recommend doing the same thing to handle outside-of-spec cases. For instance, most of the time if a column is missing, most implementations will allow you to project a column of nulls. Extending this idea to Variant, it's reasonable to assume that a missing
The most confusing one is where there is a field in the
The behavior in these cases was debated when we were working on the spec. We ultimately decided to disallow writers from producing them, but I think it is a best practice to ensure that the read behavior is predictable, accepts even slightly malformed cases, and has consistent behavior depending on the projected columns.
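To make the "defensive read" idea concrete, here is a minimal sketch of one deterministic policy for the conflicting-field case. The helper name and the flat `map[string]string` model are hypothetical, not the arrow-go API: fields reconstructed from shredded columns simply win over any stale copy that a non-conforming writer left in the `value` column.

```go
package main

import "fmt"

// mergePartiallyShredded is a sketch (not the arrow-go implementation) of one
// defensive policy for reading a partially shredded object: fields
// reconstructed from shredded typed_value columns take precedence over any
// conflicting field a non-conforming writer left in the value column.
func mergePartiallyShredded(valueFields, shreddedFields map[string]string) map[string]string {
	out := make(map[string]string, len(valueFields)+len(shreddedFields))
	for k, v := range valueFields {
		out[k] = v
	}
	for k, v := range shreddedFields {
		// The spec says this conflict must never be written; on read we
		// resolve it deterministically in favor of the shredded column.
		out[k] = v
	}
	return out
}

func main() {
	value := map[string]string{"a": "1", "b": "stale"}
	shredded := map[string]string{"b": "2"}
	fmt.Println(mergePartiallyShredded(value, shredded)["b"]) // the shredded "2" wins
}
```

Whichever precedence is chosen matters less than it being fixed and documented, so two readers of the same malformed file agree.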
@rdblue I think it would be worthwhile to add wording to the spec which defines the consistent behavior desired of readers in these malformed cases, rather than it becoming just a "de facto" standard based on convention. The wording can specify that, while it's invalid for a writer to produce these files, the spec still defines what the reader behavior should be in those cases.
Here is a proposal for how to make it clearer in the tests that this is an invalid file:
@rdblue @aihuaxu @emkornfield @wgtmac @cashmand I've been thinking about this a bit more over the past few days and had the following thoughts. Please let me know what you all think.
The problem I see here is multi-faceted. While I understand the idea and reasons for doing this, allowing outside-of-spec things on read but not on write causes a few problems:
This is true for compute engines, but not necessarily true for regular Parquet implementations. Generally, the idea of "projection" is an engine-level concept. Most Parquet implementations will simply error if you try to read a column that doesn't exist, letting the level above the file interactions decide on projection.
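As an illustration of the engine-level convention being discussed (a sketch with hypothetical names and a toy column model, not any particular engine's API), projecting a missing column as all-nulls might look like:

```go
package main

import "fmt"

// projectColumn sketches the engine-level convention: when a requested column
// is absent from the file, project a column of nulls (nil pointers) rather
// than failing, leaving strict missing-column errors to the file layer below.
func projectColumn(fileCols map[string][]int64, name string, numRows int) []*int64 {
	out := make([]*int64, numRows)
	vals, ok := fileCols[name]
	if !ok {
		return out // column missing: every row is null
	}
	for i := range vals {
		v := vals[i]
		out[i] = &v
	}
	return out
}

func main() {
	file := map[string][]int64{"a": {1, 2, 3}}
	missing := projectColumn(file, "b", 3)
	fmt.Println(missing[0] == nil, missing[1] == nil, missing[2] == nil) // all nulls
}
```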
I agree with this sentiment. We definitely want read behavior to be predictable and have consistent behavior. If the desired consistent behavior isn't actually written into the spec, how would one know what that behavior should be?
This touches on my ultimate point, honestly. My conclusion is that one of two things should happen:
@alamb's suggestion is also a possible idea: simply identify and mark which test cases are testing things that are outside the spec and invalid, so that implementations can either choose to support those cases despite the spec, or skip those cases.
Just chiming in as an arrow-go (but not parquet) contributor: I find the idea of test cases that are invalid and yet presented as real test cases rather unsettling; it feels like implementation- or vendor-specific behavior snuck its way into what is supposed to be the official spec.
I think this is the crux of the issue: if Parquet does want to allow slightly out-of-spec files to be read, that needs to be clearly defined in the spec.
I think at this point, where we are trying to finalize the Variant spec, the most expedient path would be to update the test cases rather than re-open a discussion that seems to have already happened at length during the initial spec design. Just my $0.02.
That's also my understanding: those cases were discussed during the design phase. We can update the test cases to indicate that engines can have different behavior. BTW @zeroshade: there is a request to create the test files from Go and then test them from the Parquet-Java side. Can you help generate the same set of test files?
@alamb I'm fine with that too, I just wanted to get some confirmation one way or the other. If @rdblue or @aihuaxu are willing to update the test cases (either by removing them or using your suggestion to flag tests that are invalid constructions), then that's fine with me. Mostly I just want to get this figured out, and we shouldn't have integration tests which test behavior that doesn't exist in the spec.
I can do so, except for the constructions which aren't valid according to the spec, as the Go implementation doesn't allow for them. This includes the confusion over required vs. optional vs. present, etc. At a minimum, we could take @emkornfield's suggestion here to add a glossary of terms, or otherwise simply clarify the language based on the intent. Is there a script that was used to generate the test cases, or do I need to parse the values in the JSON to figure out what should go in the files? In addition, where should I put the files?
We generated those test files from Iceberg unit tests (apache/iceberg#13654), not from a script. What I can think of is to expand this PR to read those files in tests and then rewrite them out through the Go Variant writer logic. We can put them in the https://github.com/apache/parquet-testing repository; if we can use the same layout as apache/parquet-testing#90, that will make testing from the Parquet-Java side easier. BTW: can we consider merging apache/parquet-testing#90? I will address updating the test cases in apache/parquet-testing#91 to add the variant logical type and update the test descriptions.
I'm updating the test descriptions for the ones mentioned in apache/parquet-testing#91. @alamb and @zeroshade, can you take a look?
@aihuaxu It appears all of the parquet files with Decimal values that you regenerated are inconsistent with the test cases. The values in the parquet files don't match the expected values in the variant (for example, containing
Also, some cases haven't been given the new notes/descriptions, such as the cases which test for a missing value column (the spec wording still does not allow for the
For the decimal values, the changes are expected in order to align with the spec: the original 1234567890 should be treated as decimal8 rather than decimal4, since only values with precision 1-9 are stored as decimal4.
Please take another look at apache/parquet-testing#91. I consolidated the files into one place. Sorry that I didn't notice that.
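The precision-to-width mapping behind that decimal fix can be sketched as a small helper. The function name is hypothetical, and it assumes the shredding spec's ranges of decimal4 for precision 1-9, decimal8 for 10-18, and decimal16 for 19-38:

```go
package main

import "fmt"

// variantDecimalType returns the shredded Variant decimal type for a given
// decimal precision, assuming the spec's ranges:
// precision 1-9 -> decimal4, 10-18 -> decimal8, 19-38 -> decimal16.
func variantDecimalType(precision int) (string, error) {
	switch {
	case precision >= 1 && precision <= 9:
		return "decimal4", nil
	case precision <= 18:
		return "decimal8", nil
	case precision <= 38:
		return "decimal16", nil
	default:
		return "", fmt.Errorf("unsupported decimal precision %d", precision)
	}
}

func main() {
	// 1234567890 has 10 significant digits, so it falls in the decimal8 range.
	t, _ := variantDecimalType(10)
	fmt.Println(t) // decimal8
}
```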
@zeroshade I'm clarifying the missing value field in apache/parquet-format#512. Can you also take a look?
@aihuaxu I've updated the files and this PR with the relevant changes. The files generated by Go are in apache/parquet-testing#94. Please take a look at both the files and the code and let me know what you think. Thanks! I'll wait until the relevant PRs for
@alamb @lidavidm @wgtmac @emkornfield Can I get a review from any of you? Once the
oops sorry, I missed this - I'll try to take a look |


Rationale for this change
Testing out the variant implementation here against parquet-java using the test cases generated in https://github.com/apache/parquet-testing/pull/90/files. Overall, it confirms that our implementation is generally compatible for reading parquet files written by parquet-java, with some caveats.
What changes are included in this PR?
New testing suite in `parquet/pqarrow/variant_test.go` which uses the test cases defined in parquet-testing, attempts to read the parquet files, and compares the resulting variants against the expected ones.

Some issues were found that I believe are issues with parquet-java and the test cases rather than issues with the Go implementation; as such, discussion is needed for the following:
- `value` column is missing. Based on my reading of the spec this seems to be an invalid scenario. The specific case is that the spec states the `typed_value` field may be omitted when not shredding elements as a specific type, but says nothing about allowing omission of the `value` field. Currently, the Go implementation will error if this field is missing, as per my reading of the spec, meaning those test cases fail.
- `testPartiallyShreddedObjectMissingFieldConflict` seems to have a conflict between what is expected and what is in the spec. The `b` field exists within the `value` field while also being a shredded field; the test appears to assume the data in the `value` field would be ignored, but the spec says that `value` must never contain fields represented by the shredded fields. This needs clarification on the desired behavior and result.
- `testShreddedObjectWithOptionalFieldStructs` tests the scenario where the shredded fields of an object are listed as `optional` in the schema, but the spec states that they must be `required`. Thus, the Go implementation errors on this test, as the spec says this is an error. Clarification is needed on whether this is a valid test case.
- `testShreddedObjectMissingTypedValue` tests the case where the `typed_value` field is missing. This is allowed by the spec, except that the spec states that in this scenario the `value` field must be `required`. The test case uses `optional` here, causing the Go implementation to fail. Clarification is needed here.
- `testPartiallyShreddedObjectFieldConflict` again tests the case of a field existing in both `value` and the shredded column, which the spec states is invalid and will lead to inconsistent results. Thus it is not valid for this test case to assert a specific result, according to the spec, unless the spec is amended to state that the shredded field takes precedence in this case.
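The schema expectations in the list above (a `value` column must be present, and shredded object fields must be `required`) can be sketched as a validation pass over a simplified schema model. The `field` type and function name here are hypothetical stand-ins, not the actual parquet-go schema nodes:

```go
package main

import (
	"errors"
	"fmt"
)

// field is a simplified stand-in for a Parquet schema node; the real reader
// would inspect actual parquet schema nodes and repetition levels instead.
type field struct {
	name     string
	required bool
	children []field
}

// validateShreddedVariant sketches the checks described above: a shredded
// variant group needs a value column, and when typed_value is a shredded
// object, each of its field groups must be required rather than optional.
func validateShreddedVariant(group field) error {
	var hasValue bool
	for _, f := range group.children {
		switch f.name {
		case "value":
			hasValue = true
		case "typed_value":
			for _, sub := range f.children {
				if !sub.required {
					return fmt.Errorf("shredded field %q must be required, not optional", sub.name)
				}
			}
		}
	}
	if !hasValue {
		return errors.New("missing required value column")
	}
	return nil
}

func main() {
	// Mirrors testShreddedObjectWithOptionalFieldStructs: optional shredded field.
	bad := field{name: "v", children: []field{
		{name: "typed_value", children: []field{{name: "b", required: false}}},
	}}
	fmt.Println(validateShreddedVariant(bad))
}
```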