PARQUET-1950: Define core features #164

Draft: wants to merge 6 commits into master
187 changes: 187 additions & 0 deletions CoreFeatures.md
@@ -0,0 +1,187 @@
<!--
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements. See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership. The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied. See the License for the
- specific language governing permissions and limitations
- under the License.
-->

# Parquet Core Features

This document lists the core features for each parquet-format release. This
list is a subset of the features which parquet-format makes available.

## Purpose

The list of core features for a given release defines a compliance level for
implementations. A writer implementation that claims a certain compliance level
must use only features from the *core feature list* of that parquet-format
release. A reader implementation that claims the same level must implement all
of the listed features. This makes it easier to ensure compatibility between
the different Parquet implementations.

We cannot and do not want to stop our clients from using features that are not
on this list, but it must be highlighted that using such features might make
the written Parquet files unreadable by other implementations. Features that
are available in a parquet-format release (and in at least one of its
implementations) but are not on the *core feature list* should be considered
experimental.

## Versioning

This document is versioned by the parquet-format releases, which follow the
scheme of semantic versioning. This means that no feature will be removed from
this document under the same major version. (We might deprecate some, though.)
Because of semantic versioning, if an implementation supports the core features
of parquet-format release `a.c.x`, it must be able to read any Parquet file
written by an implementation supporting release `a.b.y` where `c >= b`.

If a Parquet file is written according to a released version of this document
it might be a good idea to write this version into the field `compliance_level`
in the Thrift object `FileMetaData`.
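
For illustration, here is a minimal, self-contained Java sketch of the
compatibility rule above. The class and method names are hypothetical and not
part of any Parquet library.

```java
/**
 * Hypothetical helper illustrating the semantic-versioning rule above: a
 * reader supporting the core features of release a.c.x can read files
 * written against release a.b.y as long as the major versions match and
 * c >= b. Class and method names are illustrative only.
 */
public final class ComplianceLevel {

  public static boolean canRead(String readerLevel, String fileLevel) {
    int[] reader = parse(readerLevel);
    int[] file = parse(fileLevel);
    // Major versions must match; the reader's minor version must be at
    // least the file's minor version. Patch versions do not matter.
    return reader[0] == file[0] && reader[1] >= file[1];
  }

  private static int[] parse(String version) {
    String[] parts = version.split("\\.");
    return new int[] {
        Integer.parseInt(parts[0]),
        Integer.parseInt(parts[1]),
        Integer.parseInt(parts[2])
    };
  }

  public static void main(String[] args) {
    System.out.println(canRead("2.9.0", "2.7.1")); // true: same major, 9 >= 7
    System.out.println(canRead("2.7.0", "2.9.1")); // false: reader is older
    System.out.println(canRead("3.0.0", "2.9.0")); // false: different major
  }
}
```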

## Adding new features

The idea is to only include features which are specified correctly and proven
to be useful for everyone. Because of that we require at least two different
implementations that are released and widely tested. We also require
interoperability tests for the feature, proving that one implementation can
read the data written by the other and vice versa.

## Core feature list

This list is based on the [Parquet Thrift file](src/main/thrift/parquet.thrift)
where all the data structures we might use in a Parquet file are defined.

### File structure

All of the required fields in the structure (and sub-structures) of
`FileMetaData` must be set according to the specification.
The following page types are supported:
* Data page V1 (see `DataPageHeader`)
* Dictionary page (see `DictionaryPageHeader`)
Contributor: it might make sense to also call out the status of data page v2
here.

Contributor Author: You mean as a note? I am currently not sure whether data
page v2 is worth supporting at all. It was never widely used, and its benefits
are more about using some more advanced encodings than about the v2 page header
itself. So, I am not sure about the status of v2.

Member: Doesn't the data page v2 header allow selective compression of
individual pages?

Contributor Author: Based on parquet.thrift, V2 only allows selecting whether a
page is compressed or not. The compression codec is specified in ColumnMetaData,
so it can be set per column (and not per page) for both V1 and V2.

Member: Well, if some pages don't compress well you can skip the CPU cost of
compression for them (I suppose that was the rationale for adding this
feature?).

Member: I meant that they aren't widely supported, so if you use it you will
find implementations that can't read your files (per the chicken/egg problem
above).

Contributor Author: So, what would be the general approach here? Do we want to
add V2 to the first release of core features, so that we expect all
implementations to support it, or do we leave it for a later release?

Contributor: Is there a consensus about whether DataPageV2 is considered a
stable feature now? It would help if the spec clearly said whether or not
DataPageV2 is finalized and expected to be supported, rather than skipping
mention of it here.

Member: "Stable" is unfortunately underdefined here. It certainly seems stable
as far as the format is concerned. It's not necessarily widely implemented,
though both parquet-mr and Parquet C++ support it.

Contributor (@raunaqmorarka, May 31, 2023): Trino also supports reading V2 data
pages. However, when we attempted to use it in our parquet writer, we discovered
that other engines failed to read it (trinodb/trino#6377). There was confusion
about whether V2 is officially finalised (trinodb/trino#7953 (comment)). I
believe support for V2 pages in other engines has improved since then. If it is
simply a matter of adoption, then it would help to have some clarity about it in
the spec (for example, acknowledging it as a core feature here) so that someone
implementing a parquet writer can be assured that writing V2 pages is not some
experimental feature that can be deprecated from the format.

**TODO**: list optional fields that must be filled properly.

#### Column chunk file reference

The optional field `file_path` in the `ColumnChunk` object of the Parquet footer
(aka the Parquet Thrift file) makes it possible to reference an external file.
This option was used for different features such as _summary files_ or
_external column chunks_. These features were never specified properly and did
not spread across the different implementations. Because of that we do not
include them in this document, and therefore the field `file_path` is not
supported.
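
A compliance check on the reader side could simply reject footers that set this
field. The sketch below assumes the Thrift-generated Java bindings for
parquet.thrift; the `isSet`/`get` accessor names follow the usual Thrift naming
convention and should be verified against the generated code.

```java
import org.apache.parquet.format.ColumnChunk;

/**
 * Minimal sketch, assuming the Thrift-generated Java bindings for
 * parquet.thrift. A compliance check could reject footers that reference
 * external files, since file_path is not on the core feature list.
 */
public final class ColumnChunkChecks {

  public static void requireNoExternalReference(ColumnChunk chunk) {
    // Thrift generates an isSet accessor for every optional field; the
    // exact method name depends on the generator version.
    if (chunk.isSetFile_path()) {
      throw new IllegalArgumentException(
          "file_path is set ('" + chunk.getFile_path()
              + "'), which is not a core feature");
    }
  }
}
```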

### Types

#### Primitive types

The following [primitive types](README.md#types) are supported:
* `BOOLEAN`
* `INT32`
* `INT64`
* `FLOAT`
* `DOUBLE`
* `BYTE_ARRAY`
* `FIXED_LEN_BYTE_ARRAY`

NOTE: The primitive type `INT96` is deprecated, so it is intentionally not
listed here.
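
As an illustration, the sketch below declares a schema that uses only the
primitive types listed above, via parquet-mr's schema parser (assuming the
`org.apache.parquet.schema` API is available; a sketch, not a normative
example).

```java
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

/**
 * Minimal sketch using parquet-mr's schema parser to declare a schema that
 * sticks to the supported primitive types listed above.
 */
public final class CoreSchemaExample {

  public static MessageType coreSchema() {
    return MessageTypeParser.parseMessageType(
        "message core_example {\n"
            + "  required boolean flag;\n"
            + "  required int32 id;\n"
            + "  required int64 count;\n"
            + "  optional float ratio;\n"
            + "  optional double amount;\n"
            + "  optional binary name;\n"
            + "  optional fixed_len_byte_array(16) uuid_bytes;\n"
            + "}");
  }
}
```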

#### Logical types

[Logical types](LogicalTypes.md) are practically annotations that help
interpret the related primitive type (or structure). Originally we had the
`ConvertedType` enum in the Thrift file representing all the possible logical
types. After a while we realized it is hard to extend, so we introduced the
`LogicalType` union. For backward compatibility reasons we allow the old
`ConvertedType` values to be used according to the specified rules, but we
expect the logical types in the file schema to be defined with `LogicalType`
objects.

The following LogicalTypes are supported:
* `STRING`
* `MAP`
* `LIST`
* `ENUM`
* `DECIMAL` (for which primitives?)
* `DATE`
* `TIME`: **(Which unit, utc?)**
* `TIMESTAMP`: **(Which unit, utc?)**
* `INTEGER`: (all bitwidth 8, 16, 32, 64) **(unsigned?)**
Contributor: I think parquet-cpp and parquet-mr both support unsigned, right? I
checked Impala and it would just interpret it as a signed integer. I think it's
not the only engine consuming Parquet that only has signed types.

I guess including unsigned here seems fairly reasonable, but it would come with
the caveat that the behaviour is implementation-dependent regarding how the
cast to signed is handled (e.g. allowing overflow or not).

Member:
> I think parquet-cpp and parquet-mr both support unsigned right?

parquet-cpp definitely does, since C++ has native unsigned integers (i.e. no
cast to signed is involved).

Contributor Author: I am aware of only one implementation where the cast can be
a problem, and that is parquet-mr. There is no native support for unsigned
primitive values in Java. In parquet-mr we give back the original bit pattern in
the related primitive type (so a `long` for an unsigned int64) and it is up to
the client to deal with the value. If the client does not check the logical type
(and so realize that the value is unsigned), the value might be represented
incorrectly. I think this would be the case in Hive. Meanwhile, I am not aware
of any SQL engines that would write unsigned int values, so currently it does
not seem to be a problem. If we want to be on the safe side we should not add
unsigned values because of the Java limitations, but I am not sure about this
because the issue would be on the client side of parquet-mr and not in
parquet-mr itself. What do you think?

* `UNKNOWN` **(?)**
* `JSON` **(?)**
* `BSON` **(?)**
* `UUID` **(?)**
Member: The last three seem highly application-specific. For example, Arrow
doesn't have native types for them.

Contributor: This is a good point. I think this is true for logical types in
general: if the application doesn't support the type at all, then it's a
limitation of the application, not of the application's Parquet implementation.
It's reasonable for applications to not want to handle, say, timestamps.

Of course parquet-mr and parquet-cpp need to be able to at least pass through
the data to the application in all of these cases.

So maybe all of these need to be qualified:

* If the implementation is a general-purpose Parquet library, users of the
library must be able to access data with all logical types. It may just return
the data in the same format as the primitive type. E.g. if JSON is just
returned as a STRING, that seems fine.
* If the implementation is anything else (SQL engine, computation framework,
etc.), then if it supports a primitive or logical type, it must support all
core features associated with that type.

E.g. suppose you have an application that only processes DECIMAL values. It
might be a bad and useless application, but that's not Parquet's problem. If it
can read and write all DECIMAL encodings correctly, then it's compliant. If it
couldn't read decimals stored as int64, or if it wrote decimals with precision
< 10, it's not compliant (see
https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#decimal).

Member: That sounds fine to me.

Contributor Author: I think this topic is not that simple. Although parquet-mr
usually returns values based on the primitive types only (no conversion for the
logical type), there is some additional logic that depends on the logical type.

* For example, the sorting order matters when we are talking about statistics
(min/max for row groups or column indexes). The sorting order might depend on
the logical type (e.g. BINARY as a STRING or as a DECIMAL).
* In your example about JSON, you cannot return a String if you don't know that
the related BINARY primitive contains a UTF-8 encoded value.
* We have many problems with the different semantics used for timestamps in the
different projects. This problem cannot be solved entirely in Parquet, but
Parquet can be the base of the solution. That's why we introduced the new
timestamp logical type.
* Some parquet-mr bindings (e.g. parquet-avro) do depend on the logical types
when converting values.
* We already found some issues with the string representation. Impala did not
write the logical type UTF-8 for strings, only the primitive type BINARY.
Meanwhile, Iceberg required this logical type, otherwise it recognized the
value as binary and did not work with it.

To summarize, we might not add all of the logical types (or not all of the
primitive representations) to the core features, but I think we shall add at
least the ones that are already widely used. The whole core features idea is
about helping interoperability. Logical types are a big part of it.

Although, from an interoperability point of view, a read-only Parquet
implementation that cannot do anything with the concept of DECIMAL might not
implement reading it. But if one would like to write a UTF-8 string then we
shall require a STRING logical type in the schema for it.

Member:
> In your example about JSON you cannot return a String if you don't know that
> the related BINARY primitive contains a UTF-8 encoded value.

In practice that shouldn't be very problematic, as most JSON should be
UTF-8-encoded (note that the IETF standard requires either UTF-8, UTF-16 or
UTF-32, and states that UTF-8 is the default encoding).

> The whole core features idea is about helping interoperability. Logical
> types are a big part of it.

Agreed. The question is: what does supporting a logical type mean? For example,
if I accept JSON data and simply present it as regular string data, is that
enough to say that I support it? Otherwise, am I supposed to do something
specific?

Member (@pitrou, Mar 12, 2021): It's a bit difficult to read to me; what do you
call "related type" or "related version of core features"?

Contributor Author: Yeah, I guessed so. I am not very good at it, and that's
why I've tried to write something here instead of in the document in the first
place.

I would like to formalize the following idea. A writer writes a logical type if
it has a data type with similar semantics. It is allowed not to write some
logical types if there are no similar values to write (a kind of common sense).
A reader has to recognize all of the listed logical types and either convert
them (or read them as-is) to one of its own types, or, if no such internal type
exists, it may refuse to read such values but has to inform the user. We do not
want misinterpreted or silently skipped values.

Member: So it is not OK for a reader to ignore the logical type and still
interpret the data based on the physical type?

Contributor Author: It is not. Logical types can significantly modify the
meaning of a primitive type. Back to one of my examples: a BINARY primitive
type can be a STRING or a DECIMAL. Even the statistics (min/max) are different
for these two.

Member: That makes sense, thank you.


NOTE: The old ConvertedType `INTERVAL` has no representation in LogicalTypes.
This is because `INTERVAL` is deprecated, so we do not include it in this list.
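
Where a logical type applies, the schema should carry the `LogicalType`
annotation rather than relying on the bare primitive. Below is a minimal sketch
using parquet-mr's schema builder (assuming the `org.apache.parquet.schema` API;
names should be verified against the parquet-mr version in use).

```java
import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Types;

/**
 * Minimal sketch, assuming parquet-mr's schema builder API. Each field
 * annotates its primitive type with a LogicalType instead of relying on
 * the bare primitive.
 */
public final class LogicalTypeSchemaExample {

  public static MessageType annotatedSchema() {
    return Types.buildMessage()
        // BINARY annotated as STRING: readers must treat it as UTF-8 text.
        .required(PrimitiveTypeName.BINARY)
            .as(LogicalTypeAnnotation.stringType())
            .named("name")
        // INT32 annotated as DATE (days since the Unix epoch).
        .optional(PrimitiveTypeName.INT32)
            .as(LogicalTypeAnnotation.dateType())
            .named("event_date")
        // INT64 annotated as a UTC-adjusted microsecond TIMESTAMP.
        .optional(PrimitiveTypeName.INT64)
            .as(LogicalTypeAnnotation.timestampType(
                true, LogicalTypeAnnotation.TimeUnit.MICROS))
            .named("event_time")
        .named("logical_example");
  }
}
```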

### Encodings

The following encodings are supported:
* [PLAIN](Encodings.md#plain-plain--0)
parquet-mr: Basically all value types are written in this encoding in case of
V1 pages
* [PLAIN\_DICTIONARY](Encodings.md#dictionary-encoding-plain_dictionary--2-and-rle_dictionary--8)
**(?)**
parquet-mr: As per the spec this encoding is deprecated, but we still use it
for V1 page dictionaries.
* [RLE](Encodings.md#run-length-encoding--bit-packing-hybrid-rle--3)
parquet-mr: Used for both V1 and V2 pages to encode RL and DL, and for BOOLEAN
values in case of V2 pages

Member: This is the first time these abbreviations (RL and DL) are used.

Contributor Author: The parts describing how the parquet-mr implementation
works were not meant to be part of the final document. As I don't know much
about the other implementations, I felt it is good to note the current behavior
so others may understand the situation better. After a consensus on the related
topic is reached, I would remove this note.

* [DELTA\_BINARY\_PACKED](Encodings.md#delta-encoding-delta_binary_packed--5)
**(?)**
parquet-mr: Used for V2 pages to encode INT32 and INT64 values.
Contributor: While this notes the current status for parquet-mr, it would
greatly help to have a clarification in the spec about whether the delta
encodings are only allowed with V2 data pages or whether it is legal to use
delta encodings in V1 data pages as well. In other words, is the choice of
encodings linked in any way to the choice of V1 or V2 for the data page?

Member: AFAIU, the choice of data encodings is orthogonal to the choice of V1
or V2 data pages. But different default values may be applied based on which
version the user asks for.

Contributor:
> the choice of data encodings is orthogonal to the choice of V1 or V2 data
> pages

That's what I'm hoping for as well; however, the spec doesn't say explicitly
whether it is valid to write delta encodings in V1 pages. The parquet-mr
implementation does not use any delta encodings in V1 pages. Keeping this
ambiguous in the spec means that someone implementing a writer cannot be sure
whether the various projects implementing readers will consider this valid
input. It's also not clear to me whether parquet-mr has tests guaranteeing that
this works, given that parquet-mr itself will not do this.

Member:
> That's what I'm hoping for as well; however, the spec doesn't say explicitly
> whether it is valid to write delta encodings in V1 pages.

Can you open a separate issue about that?

Contributor: Sure, what's the appropriate place to open issues? GitHub issues
seem to be disabled on this repo.

Member: Yes, you need to open a JIRA issue. See here:
https://parquet.apache.org/community/

* [DELTA\_LENGTH\_BYTE\_ARRAY](Encodings.md#delta-length-byte-array-delta_length_byte_array--6)
**(?)**
parquet-mr: Not used directly
Contributor: While parquet-mr does not use this directly, the spec currently
says that "This encoding is always preferred over PLAIN for byte array columns"
at
https://github.com/apache/parquet-format/blob/master/Encodings.md#delta-length-byte-array-delta_length_byte_array--6.
So why doesn't parquet-mr use DELTA_LENGTH_BYTE_ARRAY instead of PLAIN for byte
arrays? As an implementor, it's not clear to me how the line "parquet-mr: Not
used directly" should be interpreted. Is this recommending doing exactly what
parquet-mr does regarding the usage of encodings, should implementors use their
own judgement, or should we follow the prescription of the spec regardless of
what parquet-mr is doing?

Member: Probably because DELTA_BYTE_ARRAY should be more efficient in general
than DELTA_LENGTH_BYTE_ARRAY, thanks to prefix compression?

Contributor: That might be true from the point of view of compression; however,
the decoder logic is more complicated with DELTA_BYTE_ARRAY and decoding speed
could be slower. If DELTA_BYTE_ARRAY is always the better choice, the spec
should instead recommend that DELTA_BYTE_ARRAY always be preferred over PLAIN
for byte array columns and explicitly discourage using DELTA_LENGTH_BYTE_ARRAY
directly as an encoding for byte arrays.

Member: I don't think the spec should recommend an encoding over another one.
It should clearly document the advantages and drawbacks of each encoding, so
that implementations and applications can make an informed choice.

Contributor: If the spec simply provided guidelines and was not opinionated
consistently in every case, that would be okay too. However, that doesn't
appear to be the case today given the above recommendation of DELTA_LENGTH over
PLAIN in the current spec. This PR is also highlighting the specific encodings
chosen by parquet-mr along with the choice of V1/V2 page format, which gives me
the impression that any choice other than what is mentioned here is not really
recommended even if it exists in the spec.

Member:
> However, that doesn't appear to be the case today given the above
> recommendation of DELTA_LENGTH over PLAIN in the current spec.

This may deserve a separate issue, then.

* [DELTA\_BYTE\_ARRAY](Encodings.md#delta-strings-delta_byte_array--7)
**(?)**
parquet-mr: Used for V2 pages to encode BYTE\_ARRAY and
FIXED\_LEN\_BYTE\_ARRAY values
Member: Looking at the source code, the three DELTA encodings are not yet
supported in Arrow parquet-cpp? (@emkornfield is that correct?)

Member: Parquet C++ should now support DELTA_BINARY_PACKED for reading.

Member: Update: Parquet C++ now supports all encodings on the read side (which
is sufficient here).

* [RLE\_DICTIONARY](Encodings.md#dictionary-encoding-plain_dictionary--2-and-rle_dictionary--8)
**(?)**
parquet-mr: Used for V2 page dictionaries
Member: I think in Arrow parquet-cpp you can use RLE_DICTIONARY (when asking
for version=2) without also using V2 data pages.

Contributor: Impala 3.4 and below wouldn't support this, FWIW.

I also recently implemented this in Impala, and on the read side it's
orthogonal to the data page version, but it does require additional code:
IMPALA-6434

Contributor Author: Another general choice here. Do we want to add encodings
that might not be widely supported yet in the first round, or add only the ones
we regularly use and leave the rest (maybe even some newer ones) for a later
release?

* [BYTE\_STREAM\_SPLIT](Encodings.md#byte-stream-split-byte_stream_split--9)
**(?)**
parquet-mr: Not used by default; can be used only via explicit configuration

NOTE: [BIT\_PACKED](Encodings.md#bit-packed-deprecated-bit_packed--4) is
deprecated and not used directly (boolean values are encoded with it under
PLAIN), so it is not included in this list.
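
Since the RLE / bit-packing hybrid encoding listed above carries the repetition
and definition levels of every column, a tiny sketch of its RLE run layout may
help. It follows the run header description in Encodings.md (a ULEB128 header
whose least significant bit distinguishes RLE from bit-packed runs); this is
only an illustration, not a full encoder.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

/**
 * Minimal sketch of a single RLE run from the RLE / bit-packing hybrid
 * encoding (see Encodings.md). It writes the ULEB128 run header
 * (count << 1, LSB = 0 marks an RLE run) followed by the repeated value
 * in ceil(bitWidth / 8) little-endian bytes. Bit-packed runs and the
 * optional length prefix are omitted for brevity.
 */
public final class RleRunWriter {

  public static byte[] writeRleRun(int value, int count, int bitWidth)
      throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writeUleb128(out, count << 1);            // LSB 0 => RLE run
    int valueBytes = (bitWidth + 7) / 8;
    for (int i = 0; i < valueBytes; i++) {
      out.write((value >>> (8 * i)) & 0xFF);  // little-endian fixed width
    }
    return out.toByteArray();
  }

  private static void writeUleb128(ByteArrayOutputStream out, int v) {
    while ((v & ~0x7F) != 0) {
      out.write((v & 0x7F) | 0x80);
      v >>>= 7;
    }
    out.write(v);
  }

  public static void main(String[] args) throws IOException {
    // 100 repetitions of the definition level 1 with bit width 1:
    // header 0xC8 0x01 (varint of 200), then one value byte 0x01.
    byte[] run = writeRleRun(1, 100, 1);
    for (byte b : run) System.out.printf("%02X ", b);
    System.out.println();
  }
}
```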

### Compression

The following compression algorithms are supported (in addition to
`UNCOMPRESSED`):
* `SNAPPY`
* `GZIP`
* `BROTLI` **(?)**
Member: Is LZO deprecated? If so, we should state that.

Contributor Author: I am not sure if we want to list every possibility based on
the format and state whether it is supported or not. LZO was removed after
these comments.

* `ZSTD` **(?)**
Contributor: I would love to use LZ4/ZSTD more because the performance/density
benefits can be significant; not sure which implementations have support yet,
though (Impala has both as of the 3.3 release).

Contributor Author: The big question here is whether the other implementations
can guarantee reading/writing these compressions. For example, parquet-mr uses
Hadoop for some of these, so it depends on the environment whether they are
supported or not. If we put these into the core features, parquet-mr cannot
rely on the environment to provide the related codecs. (According to
https://github.com/airlift/aircompressor it does not seem to be a big deal, it
just requires some effort and testing.)

Member: parquet-cpp has recently fixed its LZ4 support to be compatible with
the weird encoding used by parquet-mr.

Contributor: What does it mean that it uses Hadoop for the codec? Does that
mean it's difficult or impossible in some environments to enable the
compression codecs?

We ran into the same LZ4 issue too:
https://issues.apache.org/jira/browse/IMPALA-8617

Contributor Author: Parquet-mr does not ship all of the codec implementations
it supports. E.g. we do not unit test LZO, LZ4 and ZSTD because they are "not
distributed in the default version of Hadoop". It means it is up to the user to
install the related codecs in the Hadoop environment so parquet-mr can use
them. (It might mean that in some environments it is not possible to use some
codecs, but I don't have much experience with this.) This is the current
situation. That's why I've said that if we add these codecs to the core
features, then parquet-mr has to add the implementations of these codecs (as
direct dependencies) so it can guarantee the codecs are accessible in any
situation.

Member: For the record, we (meaning @chairmank) discovered an incompatibility
in the LZ4 compatibility code that was added to Parquet C++ to handle
Hadoop-produced files: see https://issues.apache.org/jira/browse/ARROW-11301

Member: As an update, while we (parquet-cpp) managed to fix the aforementioned
compatibility issue, another compatibility issue appeared in the other
direction (LZ4-compressed files produced by parquet-cpp cannot be read back by
parquet-java): https://issues.apache.org/jira/browse/PARQUET-1974

At this point, it seems the lack of specification around LZ4 compression makes
it basically impossible to implement in a compatible manner. Therefore it
shouldn't be in this spec.

Member: I've proposed a new LZ4_RAW codec in #168, where I also deprecate the
current LZ4 codec.

Contributor Author: I'll remove LZ4 from here. I guess we cannot add LZ4_RAW
here yet.
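
As a rough sketch of how a writer might restrict itself to the codecs listed
above, the snippet below configures SNAPPY through parquet-mr's example writer.
It assumes the `org.apache.parquet.hadoop` API; the exact builder methods
should be checked against the parquet-mr version in use.

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

/**
 * Minimal sketch, assuming parquet-mr's example writer API. It configures
 * one of the compression codecs listed above (SNAPPY); GZIP would work the
 * same way via CompressionCodecName.GZIP.
 */
public final class CoreCompressionExample {

  public static ParquetWriter<Group> openWriter(String file) throws Exception {
    MessageType schema = MessageTypeParser.parseMessageType(
        "message m { required int64 id; optional binary payload; }");
    return ExampleParquetWriter.builder(new Path(file))
        .withType(schema)
        .withCompressionCodec(CompressionCodecName.SNAPPY)
        .build();
  }
}
```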


### Statistics

Statistics are not required for reading data, but we still list these features
because incorrectly specified or implemented statistics can cause data loss to
go unnoticed.
The following features related to statistics are supported:
* Row group level min/max values: the fields `min_value` and `max_value`
shall be used in the `Statistics` object according to the specification
* [Column Index](PageIndex.md)

NOTE: Writing page level statistics to the data page headers is not required.

The list of `column_orders` in `FileMetaData` must be set according to the
notes. See the special handling required for floating-point numbers at
`ColumnOrder`.
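
On the consumer side, the row-group-level min/max statistics can be inspected
from the footer metadata. Below is a minimal sketch, assuming parquet-mr's
`org.apache.parquet.hadoop` API (method names should be verified against the
version in use).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.statistics.Statistics;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.util.HadoopInputFile;

/**
 * Minimal sketch, assuming parquet-mr's footer metadata API, that prints
 * the row-group-level min/max statistics for every column chunk. This is
 * the consumer side of the min_value / max_value fields listed above.
 */
public final class StatisticsExample {

  public static void printRowGroupStats(String file) throws Exception {
    Configuration conf = new Configuration();
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(new Path(file), conf))) {
      for (BlockMetaData rowGroup : reader.getFooter().getBlocks()) {
        for (ColumnChunkMetaData column : rowGroup.getColumns()) {
          Statistics<?> stats = column.getStatistics();
          if (stats != null && stats.hasNonNullValue()) {
            System.out.println(column.getPath() + ": min=" + stats.genericGetMin()
                + " max=" + stats.genericGetMax());
          }
        }
      }
    }
  }
}
```
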
7 changes: 7 additions & 0 deletions README.md
@@ -239,6 +239,13 @@ There are many places in the format for compatible extensions:
- Encodings: Encodings are specified by enum and more can be added in the future.
- Page types: Additional page types can be added and safely skipped.

## Compatibility
Because of the many features that have been added to the Parquet format, not
all of the implementations have been able to keep up. We introduced the list of
"core features". This document is versioned by the Parquet format releases and
defines a compliance level for the different implementations. See
[CoreFeatures.md](CoreFeatures.md) for more details.

## Contributing
Comment on the issue and/or contact [the parquet-dev mailing list](http://mail-archives.apache.org/mod_mbox/parquet-dev/) with your questions and ideas.
Changes to this core format definition are proposed and discussed in depth on the mailing list. You may also be interested in contributing to the Parquet-MR subproject, which contains all the Java-side implementation and APIs. See the "How To Contribute" section of the [Parquet-MR project](https://github.com/apache/parquet-mr#how-to-contribute)
7 changes: 7 additions & 0 deletions src/main/thrift/parquet.thrift
@@ -1041,6 +1041,13 @@ struct FileMetaData {
* Used only in encrypted files with plaintext footer.
*/
9: optional binary footer_signing_key_metadata

/**
* This field might be set to the version number of a parquet-format release
* if this file was created using only the features on the core feature list
* of that release. See CoreFeatures.md for details.
*/
10: optional string compliance_level
}

/** Crypto metadata for files with encrypted footer **/

Member: It might be useful to clarify how this differs from the version key
above?

Contributor Author: It's a good point, @jorisvandenbossche. However, I am not
sure what the other version is for. Maybe it would be better to specify that
one correctly, and then it would be clear that this one is a different thing.
Currently parquet-mr always writes 1 to that version field. I guess this is a
file format version that we never incremented, as it is backward compatible
with the first files we wrote. If that is the case, then I think we will never
increment it.

Member: In Arrow (parquet-cpp) this version field actually gets populated with
1 or 2, and depending on that, different features are used to write the file
(version 2 enables some additional features, like writing nanosecond
timestamps, or using RLE_DICTIONARY instead of PLAIN_DICTIONARY).

Contributor Author: That doesn't sound good. It means that we are using a
required field for different purposes. Because it is required, we cannot
deprecate it easily. parquet-mr does not read this field, it only ever writes
1. What does parquet-cpp do with this value on the read path?

Member: AFAIK, the field is not used when reading (but I am not super familiar
with the C++ code).

Contributor: FWIW, Impala reads this field and fails if it is 2
(https://issues.apache.org/jira/browse/IMPALA-11838?filter=-2).