Pivotal Greenplum 6.1 Release Notes

This document contains pertinent release information about Pivotal Greenplum Database 6.1 releases. For previous versions of the release notes for Greenplum Database, go to Pivotal Greenplum Database Documentation. For information about Greenplum Database end of life, see Pivotal Greenplum Database end of life policy.

Pivotal Greenplum 6 software is available for download from the Pivotal Greenplum page on Pivotal Network.

Pivotal Greenplum 6 is based on the open source Greenplum Database project code.

Important: Pivotal Support does not provide support for open source versions of Greenplum Database. Only Pivotal Greenplum Database is supported by Pivotal Support.

Release 6.1.0

Release Date: 2019-11-01

Pivotal Greenplum 6.1.0 is a minor release that includes new features and resolves several issues.

New Features

Greenplum Database 6.1.0 includes these new features:

  • Greenplum Stream Server 1.3 is included, which introduces new features and bug fixes. New GPSS features include:
    • GPSS now supports log rotation, utilizing a mechanism that you can easily integrate with the Linux logrotate system. See Managing GPSS Log Files for more information.
    • GPSS has added the new INPUT:FILTER load configuration property. This property enables you to specify a filter that GPSS applies to Kafka input data before loading it into Greenplum Database.
    • GPSS displays job progress by partition when you provide the --partition flag to the gpsscli progress command.
    • GPSS enables you to load Kafka data that was emitted since a specific timestamp into Greenplum Database. To use this feature, you provide the --force-reset-timestamp flag when you run gpsscli load, gpsscli start, or gpkafka load.
    • GPSS now supports update and merge operations on data stored in a Greenplum Database table. The load configuration file accepts MODE, MATCH_COLUMNS, UPDATE_COLUMNS, and UPDATE_CONDITION property values to direct these operations. Example: Merging Data from Kafka into Greenplum Using the Greenplum Stream Server provides an example merge scenario.
    • GPSS supports Kerberos authentication to both Kafka and Greenplum Database.
    • GPSS supports SSL encryption between GPSS and Kafka.
    • GPSS supports SSL encryption on the data channel between GPSS and Greenplum Database.
  • The DataDirect JDBC and ODBC drivers were updated to versions 5.1.4.000270 (F000450.U000214) and 07.16.0334 (B0510, U0363), respectively.

    The DataDirect JDBC driver introduces support for the prepareThreshold connection parameter, which specifies the number of prepared statement executions that can be performed before the driver switches to using server-side prepared statements. This parameter defaults to 0, which preserves the earlier driver behavior of always using server-side prepare for prepared statements. Set a value greater than 1 to specify the number of executions after which the driver switches to server-side prepare.

    Note: executeBatch() always uses server-side prepare for prepared statements. This matches the behavior of the open source Postgres driver.
    When the prepareThreshold value is greater than 1, parameterized operations do not issue SQL prepare calls through connection.prepareStatement(); the driver instead sends the query all at once, at execution time. Because of this limitation, the driver must determine the type of every column using the JDBC API before sending the query to the server. This determination works for many data types, but fails for the following types, which could be mapped to multiple Greenplum data types:
    • BIT VARYING
    • BOOLEAN
    • JSON
    • TIME WITH TIME ZONE
    • UUID

    You must set prepareThreshold to 0 before using parameterized operations with any of the above types. Examine the ResultSetMetaData object in advance to determine if any of the above types are used in a query. Also keep in mind that GPORCA does not support prepared statements that have parameterized values, and will fall back to using the Postgres Planner.
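
    For illustration, a minimal sketch of a server-side prepared statement with a parameterized value (the statement and table names are hypothetical). GPORCA falls back to the Postgres Planner for statements of this form:

        -- Hypothetical example: a prepared statement with a parameterized value.
        -- GPORCA falls back to the Postgres Planner for this statement.
        PREPARE get_events (int) AS
            SELECT * FROM events WHERE event_id = $1;
        EXECUTE get_events (42);
        DEALLOCATE get_events;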

Resolved Issues

Pivotal Greenplum 6.1.0 resolves these issues:

8804 - Server
In some cases, running the EXPLAIN ANALYZE command on a sorted query in utility mode would cause the segment to crash. This issue is fixed. Greenplum Database no longer crashes in this situation.
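A minimal sketch of the affected scenario (all object names are hypothetical), assuming a utility-mode session connected directly to a segment:
    -- Hypothetical example: run in a utility-mode session on a segment.
    -- EXPLAIN ANALYZE on a sorted query no longer crashes the segment.
    EXPLAIN ANALYZE
        SELECT id, payload FROM events ORDER BY id;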
8636 - Server
Some users encountered Error: unrecognized parameter "appendoptimized" while creating a partitioned table that specified the appendoptimized=true storage parameter. This issue is fixed; the Greenplum Database server now properly recognizes the appendoptimized parameter when it is specified during partitioned table creation.
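For example, a statement of the following form previously returned the error and now succeeds (all object names are hypothetical):
    -- Hypothetical example: the appendoptimized storage parameter is now
    -- recognized when creating a partitioned table.
    CREATE TABLE sales (id int, sale_date date, amount numeric)
        WITH (appendoptimized=true)
        DISTRIBUTED BY (id)
        PARTITION BY RANGE (sale_date)
        ( START (date '2019-01-01') INCLUSIVE
          END (date '2020-01-01') EXCLUSIVE
          EVERY (INTERVAL '1 month') );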
26225 - gpcheckcat
The gpcheckcat utility failed to generate a summary report if there was an orphan TOAST table entry in one of the segments. This is fixed. The string "N/A" is reported when there is no relation OID to report.
29580 - Management and Monitoring
During Greenplum Database startup, time-based log file rotation produced an extra, empty log file dated one day ahead of the current date. For example, if Greenplum Database started at midnight on September 2nd, two log files were generated: gpdb-2019-09-02_000000.csv and gpdb-2019-09-03_000000.csv. This issue has been fixed.
29984 - Server
During startup, idle query executor (QE) processes can commit up to 16MB of memory each, but they are not tracked by the Linux virtual memory tracker. In a worst-case scenario, these idle processes could trigger OOM errors that were difficult to diagnose. To prevent these situations, Greenplum now hard-codes a startup memory cost to account for untracked QE processes.
30112 - Query Optimizer
For some queries against partitioned tables that contain a large amount of data, GPORCA generated a sub-optimal query plan because of inaccurate cardinality estimation. This issue has been resolved. GPORCA cardinality estimation has been improved.
30183, 30184 - analyzedb
When running the analyzedb command with the --skip-root-partition option, the command could take a long time to finish when analyzing a partitioned table with many partitions, due to how root partition statistics were handled as the partitions were analyzed. This issue has been resolved; now only the leaf partition statistics are updated.
Note: GPORCA uses root partition statistics. If you use the --skip-root-partition option, ensure that root partition statistics are up to date so that GPORCA does not produce inferior query plans due to stale root partition statistics.
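For example, you can refresh root partition statistics with the ANALYZE ROOTPARTITION command (the table name is hypothetical):
    -- Hypothetical example: update only the root partition statistics of a
    -- partitioned table so that GPORCA plans against current statistics.
    ANALYZE ROOTPARTITION sales;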
30149 - Query Execution
A query might fail and return an error with the message invalid seek in sequential BufFile when the server configuration parameter gp_workfile_compression is on and the query spills to temporary workfiles. The error was caused by an issue in handling workfiles that contain compressed data. The issue has been resolved by correctly handling compressed workfile data.
30150 - Query Execution
A query might fail with the message AssignTransactionId() called by Segment Reader process when the server configuration parameter temp_tablespaces is set. The error was caused by an internal locking and transaction ID issue. This issue has been resolved by removing the requirement to acquire the lock.
30160 - Query Optimizer
GPORCA might return incorrect results when a query contains a join predicate where one side is distributed on a citext column and the other is not. GPORCA did not use the correct hash when generating a plan that redistributes the citext column. Greenplum Database now falls back to the Postgres Planner for this type of query.
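A minimal sketch of the affected query shape (all object names are hypothetical), assuming the citext extension is installed:
    -- Hypothetical example: one join side is distributed on a citext column,
    -- the other side is not. Greenplum Database now falls back to the
    -- Postgres Planner for this query.
    CREATE TABLE users (name citext, id int) DISTRIBUTED BY (name);
    CREATE TABLE logins (name text, ts timestamp) DISTRIBUTED RANDOMLY;
    SELECT u.id, l.ts
    FROM users u JOIN logins l ON u.name = l.name;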
30183 - analyzedb
The analyzedb command could take a long time to finish when analyzing a table with many partitions. The command's performance has been greatly improved by waiting to update the root partition statistics until all leaf partitions of a table have been analyzed.
164823612 - gpss
GPSS incorrectly treated Kafka jobs that specified the same Kafka topic and Greenplum output schema name and output table name, but different database names, as the same job. This issue has been resolved. GPSS now includes the Greenplum database name when constructing a job definition.
167997441 - gpss
GPSS did not save error data to the external table error log when it encountered an incorrectly formatted JSON or Avro message. This issue has been fixed; invoking gp_read_error_log() on the external table now displays the offending data.
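For example (the external table name is hypothetical):
    -- Hypothetical example: view the rejected rows, including incorrectly
    -- formatted JSON or Avro messages, for an external table.
    SELECT * FROM gp_read_error_log('my_kafka_ext_table');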
168130147 - gpss
In some situations, GPSS failed to read from the correct offset when the --force-reset-earliest flag was specified during a load operation. This problem has been fixed. (Using the --force-reset-xxx flags outside of an offset mismatch scenario is discouraged.)
168393571 - Query Optimizer
Certain queries with btree indexes on Append Optimized (AO) tables were unnecessarily slow due to GPORCA selecting a scan with high transformation and cost impact. This issue has been fixed by improving GPORCA handling of btree type indexes.
168393645 - Query Optimizer
In some situations, a query ran slowly because GPORCA did not produce an optimal plan when it encountered a null-rejecting predicate where an operand could be false or null, but not true. This issue is fixed; GPORCA now produces a better plan when evaluating null-rejecting predicates for AND and OR operands.
168705484 - Query Optimizer
For certain queries with a UNION operator over a large number of children, GPORCA query optimization required a long time. This issue has been addressed by adding the ability to derive scalar properties on demand.
168707515 - Query Optimizer
Some queries in GPORCA were consuming more memory than necessary due to suboptimal memory tracking. This has been fixed by optimizing memory accounting inside GPORCA.
169081574 - Interconnect
Greenplum Database might generate a PANIC when the server configuration parameter gp_interconnect_type is TCP due to an issue with memory management during interconnect setup. The issue has been resolved by properly managing the internal interconnect object memory.
169117536 - Execution
Greenplum Database might generate a PANIC when the server configuration parameter log_min_messages is set to debug5, because Greenplum Database did not handle a debug5 message correctly. This issue is resolved.
169198230 - Plan Cache
A prepared statement might run slowly because a cost model issue prevented Greenplum Database from generating a direct dispatch plan for the statement. This issue is fixed. Greenplum Database now introduces non-direct dispatch cost into the cost model only for cached plans, and tries to use direct dispatch for prepared statements when possible.
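A minimal sketch of the improved behavior (all object names are hypothetical): when a prepared statement filters on the distribution key, Greenplum Database now tries to dispatch each execution directly to the single segment that holds the matching rows:
    -- Hypothetical example: id is the distribution key, so Greenplum Database
    -- attempts direct dispatch to a single segment for each EXECUTE.
    CREATE TABLE accounts (id int, balance numeric) DISTRIBUTED BY (id);
    PREPARE get_balance (int) AS
        SELECT balance FROM accounts WHERE id = $1;
    EXECUTE get_balance (1001);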

Upgrading to Greenplum 6.1.0

Note: Greenplum 6 does not support direct upgrades from Greenplum 4 or Greenplum 5 releases, or from earlier Greenplum 6 Beta releases.

See Upgrading from an Earlier Greenplum 6 Release to upgrade your existing Greenplum 6.x software to Greenplum 6.1.0.

Migrating Data to Greenplum 6

Note: Greenplum 6 does not support direct upgrades from Greenplum 4 or Greenplum 5 releases, or from earlier Greenplum 6 Beta releases.

See Migrating Data from Greenplum 4.3 or 5 for guidelines and considerations for migrating existing Greenplum data to Greenplum 6, using standard backup and restore procedures.

Known Issues and Limitations

Pivotal Greenplum 6 has these limitations:

  • Upgrading a Greenplum Database 4 or 5 release, or Greenplum 6 Beta release, to Pivotal Greenplum 6 is not supported.
  • MADlib, GPText, and PostGIS are not yet provided for installation on Ubuntu systems.
  • gpcopy cannot yet copy data from Greenplum 4 or 5 to Greenplum 6.
  • Greenplum 6 is not supported for installation on DCA systems.
  • Greenplum for Kubernetes is not yet provided with this release.

The following list describes key known issues in Pivotal Greenplum 6.x.

169200795 - Greenplum Stream Server
When loading Kafka data into Greenplum Database in UPDATE and MERGE modes, GPSS requires that a MAPPING exist for each column name identified in the MATCH_COLUMNS and UPDATE_COLUMNS lists.
168548176 - gpbackup
When using gpbackup to back up a Greenplum Database 5.7.1 or earlier 5.x release with resource groups enabled, gpbackup returns a column not found error for t6.value AS memoryauditor.
164791118 - PL/R
PL/R cannot be installed using the deprecated createlang utility; the installation fails with the error:
createlang: language installation failed: ERROR: no schema has been selected to create in
Workaround: Use CREATE EXTENSION to install PL/R, as described in the documentation and shown in the example below.
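For example, assuming the PL/R package is installed on all Greenplum hosts:
    -- Install the PL/R extension in the current database.
    CREATE EXTENSION plr;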