Greenplum Database 4.3.x Resolved Issues
Consolidated list of resolved issues for the Greenplum Database 4.3.x releases.
|Issue Number||Category||Resolved in||Description|
|24677||Backup and Restore||188.8.131.52||In some cases after a successful backup operation, error messages about lock files were incorrectly displayed. This issue has been resolved.|
|24667||DDL and Utility Statements||184.108.40.206||When creating a temporary table with the ON COMMIT DELETE ROWS clause in a heavy workload environment, the prepared transaction that created the temporary table failed in some cases.|
|Monitoring: Command Center Alerting||220.127.116.11||In the Greenplum Command Center database gpperfmon, performance issues were caused by data skew in the log_alert_history table. The distribution key for the table has been changed to resolve the issue. See Changed Features.|
|24515||Replication: Segment Mirroring||18.104.22.168||In some cases under a heavy workload, logging onto the Greenplum Database segment host as a UNIX user was not possible. This was caused by a Greenplum Database filerep process that was incorrectly sending signals to the user after the process failed to create a sub-process.|
|23751||Monitoring: gpperfmon server||22.214.171.124||In some cases, a memory leak caused the gpmmon process to consume a large amount of memory and CPU resources.|
If a Greenplum Database segment failed during two-phase transaction processing, the transaction remained in an uncompleted state and was cleaned up only during a Greenplum Database restart. In many cases, this caused high disk consumption by the Greenplum Database xlog process.
|18509||Functions and Languages||126.96.36.199||In some cases, Greenplum Database did not handle data of type date properly and caused a segmentation fault.|
|24479||Backup and Restore||188.8.131.52||A table could not be restored (with the gpdbrestore -T option) from a backup that was on a Data Domain Boost system and that was created with the gpcrondump --ddboost options.|
|24478||Management Scripts: expansion||184.108.40.206||The Greenplum Database gpexpand utility failed when an error table for an external table was present in Greenplum Database. The utility displayed this message: DETAIL: ALTER TABLE is not allowed on error tables|
|24326||Query Execution, Storage Access Methods||220.127.116.11||If either a non-partitioned append-only table or an individual append-only part of a partitioned table had more than 127 million rows on a segment, a query that used an index to access the table data could return duplicate rows. This issue has been fixed.|
|24317||Security||18.104.22.168||Greenplum Database software has been updated to use OpenSSL 0.9.8zb in response to the OpenSSL Security Advisory [6 Aug 2014]. For information about the advisory, see https://www.openssl.org/news/secadv_20140806.txt.|
|24248||GPHDFS||22.214.171.124||The Greenplum Database external table protocol gphdfs supports the Cloudera 4.x and 5.x HDFS distributions. See Hadoop Distribution Compatibility.|
|24237||DDL and Utility Statements||126.96.36.199||Temporary tables were not cleaned up properly in the following situation: a user-defined function (UDF) was created as a SECURITY DEFINER function and included statements to create a temporary table, and the UDF was executed by a regular user who was granted EXECUTE permission on the function. This caused the temporary table to remain in the database after the session was disconnected.|
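As an illustrative sketch of the affected pattern (the function, table, and role names here are hypothetical, not taken from the release note):

```sql
-- Hypothetical example of the pattern described above: a SECURITY DEFINER
-- function that creates a temporary table, executed by a regular user.
CREATE OR REPLACE FUNCTION load_staging() RETURNS void AS $$
BEGIN
    CREATE TEMPORARY TABLE tmp_stage (id int, payload text);
    -- ... work with tmp_stage ...
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION load_staging() TO app_user;
-- Before the fix, tmp_stage could persist after app_user's session ended.
```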
|24182||Management Scripts: General||188.8.131.52||Greenplum Database timezone information has been updated to match world-wide timezones. For information about timezones, see http://www.iana.org/time-zones.|
|24168||Vacuum||184.108.40.206||For an append-optimized table that did not contain any data, the VACUUM command did not update the value of relfrozenxid in the catalog table pg_class.|
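The stale value described above can be inspected in the pg_class catalog table; for example (the table name is hypothetical):

```sql
-- Check the frozen transaction ID recorded for an append-optimized table.
SELECT relname, relfrozenxid
FROM pg_class
WHERE relname = 'my_ao_table';
```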
|24158||Upgrade / Downgrade||220.127.116.11||When upgrading Greenplum Database from a 4.2.x release to a 4.3.x release prior to 18.104.22.168, append-only tables were not correctly converted to append-optimized tables. In some cases, the incorrect conversion prevented the VACUUM command from reclaiming storage occupied by deleted tuples. For information about the upgrade issue, see Upgrading from 4.3.x to 4.3.3.x.|
|24119||Query Execution||22.214.171.124||In some cases, a segmentation fault occurred when a DECLARE CURSOR WITH HOLD command was run by an ODBC driver.|
|Loaders: gpfdist||126.96.36.199||The Greenplum Database gpfdist utility failed with a SIGSEGV error when the utility received an empty request with two consecutive return characters "\n\n".|
|24089||Loaders: Copy/External Tables||188.8.131.52||Multibyte characters were not handled properly when writing to an external table that uses the gb18030 encoding from a Greenplum database that was created with UTF8 encoding. In some cases, this error was encountered: ERROR: The size of the value cannot be bigger than the field size value|
|24079||GPHDFS||184.108.40.206||The Greenplum Database external table protocol gphdfs supports the Pivotal 2.0 distribution.|
|24068||Postgis||220.127.116.11||When using PostGIS, in some cases a closed curved polygon that was converted to a linear polygon was not closed due to a linear approximation precision issue with PostGIS 2.0.3.|
|24067||Loaders: gpfdist, Loaders: gpload||18.104.22.168||In some cases when network load was heavy, the Greenplum Database utility gpfdist intermittently failed with this error: gpfdist closed connection to server|
|24055||Vacuum||22.214.171.124||The VACUUM FULL command transaction processing has been enhanced to ensure proper operation with other concurrent operations.|
|24011||Catalog and Metadata, Vacuum||126.96.36.199||In some cases, when a VACUUM FULL command was cancelled, incorrect handling of the Greenplum Database transaction log caused a PANIC signal to be issued and prevented Greenplum Database from performing a crash recovery of a segment mirror.|
|24001||Backup and Restore||188.8.131.52||During a backup operation, the Greenplum Database utility gpcrondump held an EXCLUSIVE lock on the catalog table pg_class longer than required.|
|23955||Query Execution||184.108.40.206||In some query plans, where a window operator is under the right child of a nested loops join, wrong results could have been generated because of improper cleanup of the operator's internal state.|
|23925||Management Scripts: expansion, Management Scripts: General||220.127.116.11||The Greenplum Database utilities gpactivatestandby and gpexpand used SSH to connect to localhost (the Greenplum Database host where the utility was run). Using SSH was redundant as the command was already on the local host and has been eliminated.|
|23894||Backup and Restore||18.104.22.168||Performing a backup to a Data Domain system failed when the Greenplum Database gpcrondump utility specified the --ddboost options because gpcrondump performed a disk space check.|
|23864||Catalog and Metadata||22.214.171.124||Running the REINDEX command on a database while other workloads are concurrently running could create inconsistencies in the database catalog.|
|23850||Management Scripts||126.96.36.199||In some cases after expanding a Greenplum Database system, running gpinitstandby -n failed to resynchronize the data between the primary and standby master host.|
|23842||Replication: Segment Mirroring||184.108.40.206||In some rare cases, if a restart occurred while the gprecoverseg utility was running, some tables and a persistent table were detected as having less data on a mirror segment than on the corresponding primary segment.|
|23802||Query Execution||220.127.116.11||Greenplum Database did not manage temporary workfiles (spill files) properly. In some cases, this caused a query that required workfiles to fail with a message that stated that a Greenplum Database segment had reached the maximum configured workfile usage limit.|
|23753||Backup and Restore||18.104.22.168||The emails sent by the Greenplum Database gpcrondump utility could not be customized. Now the utility supports customized email notification for backup operations.|
|23730||Backup and Restore, Management Scripts: master mirroring||22.214.171.124||When configuring a Greenplum Database system with a standby master, the gpinitstandby utility did not correctly update the pg_hba.conf file on Greenplum Database segment hosts.|
|23729||Backup and Restore, DDL and Utility Statements||126.96.36.199||When the -b option was specified with the gpcrondump utility to disable a disk space check, a check was still performed.|
|23717||Locking, Signals, Processes||188.8.131.52||During Greenplum Database shutdown, a signal-unsafe function call was called from a signal handler function. The signal-unsafe function was replaced.|
|23699||Monitoring: gpperfmon server||184.108.40.206||Greenplum Database failed when the gpperfmon log files were not encoded in the expected character encoding. This issue has been resolved.|
|23637||Backup and Restore||220.127.116.11||When restoring a Greenplum database with the Greenplum Database gpcrondump utility, the utility performed an ANALYZE operation on the entire database. Now the gpcrondump utility analyzes only the restored tables.|
|23568||Backup and Restore||18.104.22.168||When backing up a Greenplum database with the Greenplum Database gpcrondump utility and specifying an NFS directory with the -u option, the gpcrondump utility created an empty db_dumps directory in the master and segment data directories.|
|23558||Backup and Restore||22.214.171.124||When restoring a backup from a Data Domain system using --ddboost options, the Greenplum Database gpdbrestore utility failed because it could not find C data and post data files.|
|23286||Dispatch||126.96.36.199||In some cases, Greenplum Database did not handle the processing of cancelled distributed queries properly. This issue has been resolved.|
|22974||Loaders: Copy/External Tables||188.8.131.52||When reading data from external sources, Greenplum Database stopped reading data if the first 1000 rows processed contained formatting errors. Now the limit is configurable.|
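Assuming the configurable limit is exposed as the gp_initial_bad_row_limit server configuration parameter (an assumption; consult the server configuration parameter reference for your release), the limit can be raised for a session before a load:

```sql
-- Allow up to 10000 formatting errors in the initial rows of an
-- external data load before Greenplum Database stops reading.
SET gp_initial_bad_row_limit = 10000;
```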
|20504||Query Execution||184.108.40.206||FOR loops in PL/pgSQL did not close the sequence generator if further access was still required.|
|18562||DDL and Utility Statements||220.127.116.11||A transaction lock did not block reader processes from proceeding when a writer process was holding the same lock. In some cases this caused a race condition to occur. Now, Greenplum Database blocks reader processes when a writer process holds the same lock to prevent race conditions from occurring.|
|17264||Replication: Segment Mirroring||18.104.22.168||In some cases, Greenplum Database continuously logged this message when sending file replication process statistics to the Greenplum Database perfmon process: Error when sending file rep stats to perfmon|
|16450||Backup and Restore||22.214.171.124||When running the Greenplum Database utility pg_dumpall with the option --resource-queues to create scripts that contain resource queue definitions, the utility generated incorrect scripts when the resource queue definition contained the memory_limit option.|
|16059||Resource Management||126.96.36.199||Some SQL statements that executed a PL/pgSQL function containing an insert, update, or delete operation did not allocate memory correctly, which caused several issues. This issue has been resolved.|
|Issue Number||Category||Resolved in||Description|
|24158||Upgrade / Downgrade||188.8.131.52||When upgrading Greenplum Database from a 4.2.x release to a 4.3.x release prior to 184.108.40.206, append-only tables were not correctly converted to append-optimized tables. In some cases, the incorrect conversion prevented the VACUUM command from reclaiming storage occupied by deleted tuples. For information about the upgrade issue, see Product Enhancements.|
|24326||Query Execution, Storage Access Methods||220.127.116.11||If either a non-partitioned append-only table or an individual append-only part of a partitioned table had more than 127 million rows on a segment, a query that used an index to access the table data could return duplicate rows. This issue has been fixed.|
|24037||Client Access Methods and Tools||4.3.2||In some cases, when the SQLCancel function was used with the Greenplum Database ODBC driver to cancel the execution of a query, a rollback of the transaction did not occur.|
|23838||Loaders: Copy/External Tables||4.3.2||When the COPY command copied data from a file and the file contained the character sequence '\r\r\n', a postmaster reset occurred.|
|23768||Query Execution||4.3.2||In some cases, the clean up of an aborted transaction was not handled correctly and caused a PANIC signal to be issued.|
|23751||Monitoring: gpperfmon server||4.3.2||A memory leak caused the gpmmon process to consume a large amount of memory and CPU resources.|
|23735||Languages: PL/Java||4.3.2||In some cases, Greenplum Database did not handle concurrent shared memory operations properly from PL/Java routines. This caused a PANIC signal to be issued.|
|23708||Backup and Restore||4.3.2||In some cases, running the Greenplum Database gpdbrestore utility with the -T or --table-file option failed with this error: ValueError: need more than 1 value to unpack|
|23706||Upgrade / Downgrade||4.3.2||The Greenplum Database installer did not support upgrading from a Greenplum Database hotfix.|
|23647||Vacuum||4.3.2||Performing a VACUUM operation on a partitioned append-optimized table did not correctly reduce the age of the parent table and child tables.|
|23631||Replication: Segment Mirroring||4.3.2||In some rare cases, the crash recovery of a segment mirror failed due to an inconsistent LSN.|
|23604||Interconnect||4.3.2||In some cases when a Greenplum Database process was cancelled on the Greenplum Database master, corresponding processes remained running on Greenplum Database segment instances.|
|23578||gphdfs||4.3.2||For Greenplum Database external tables, the gphdfs protocol that accesses data from files on HDFS now supports the CSV file format.|
|23546||Storage Access Methods||4.3.2||In some cases, a DELETE command that contained a join between an append-optimized table and a heap table returned this error: ERROR: tuple already updated by self|
|23485||Transaction Management||4.3.2||When a single Greenplum Database session ran transactions, temporary files were not removed after the transactions completed. If the session ran a large number of transactions, the temporary files required a large amount of disk space. This issue has been resolved.|
|23417||Transaction Management||4.3.2||Some queries against an append-optimized table with compression enabled that contained a column with an unknown data type caused a Greenplum Database SIGSEGV error.|
|23227||Client Access Methods and Tools||4.3.2||For Greenplum Database with GSS Authentication enabled, the database role attribute Valid Until was ignored. The Valid Until parameter is now respected when GSS authentication is enabled.|
|23222||Client Access Methods and Tools||4.3.2||When Greenplum Database received a SIGSEGV while running the COPY command, Greenplum Database hung and continuously logged this warning message: copy: unexpected response (3)|
|23204||Query Execution||4.3.2||In some cases, when a Greenplum Database segmentation fault occurred during the execution of a PL/R function, PL/R hung and continuously returned the same error message.|
|23202||Management Scripts: expansion||4.3.2||During the process of adding new hosts, the Greenplum Database expand utility gpexpand did not update the pg_hba.conf files on Greenplum Database hosts with the correct host information.|
|23174||Languages: R, PLR||4.3.2||In Greenplum Database, a signal handling issue in the R programming language caused a potential for postgres processes to hang when running PL/R functions.|
|23138||Replication: Segment Mirroring||4.3.2||The gprecoverseg utility failed to recover a Greenplum Database segment that was marked as down when the data directory location for the segment was a symbolic link, and a postgres process was running with the same PID as the PID associated with the down segment.|
|23067||Loaders: Copy/External Tables||4.3.2||In some cases, when an INSERT FROM SELECT command was run that selected from a readable external table and inserted into a writable external table, this warning was generated: WARNING select failed on curl_multi_fdset (maxfd 10) (4 - Interrupted system call)|
|23038||Query Execution||4.3.2||When a query was run that contained a polymorphic, user-defined aggregate function, and Greenplum Database was required to create spill files on disk, the query failed with this error: ERROR: could not determine actual argument type for polymorphic function. This issue has been fixed.|
|23008||Dispatch||4.3.2||In some cases when temporary tables were used, Greenplum Database did not perform the clean up of temporary namespaces properly after a transaction completed and caused a SIGSEGV.|
|22914||Loaders: Copy/External Tables||4.3.2||When a query joined a heap table with an external table that used the gpfdist protocol, an incorrect plan that returned no results might have been chosen.|
|22787||Monitoring: gpperfmon server||4.3.2||In some cases, the Greenplum Database gpmmon process failed. The gpmmon process is used for Greenplum Database performance monitoring.|
|22784||Storage Access Methods||4.3.2||After a database expansion, some tables created with APPENDONLY=TRUE and compression enabled consumed much more disk space than before the expansion. To reduce disk space in this situation, the Greenplum Database gpreload utility reloads table data with column data sorted.|
|22706||Management Scripts: master mirroring||4.3.2||The Greenplum Database gpinitstandby utility completed successfully but returned an error when the $GPHOME/share directory was not writable. Now, the utility returns this warning: Please run gppkg --clean after successful standby initialization.|
|22592||Backup and Restore||4.3.2||When the Greenplum Database gpdbrestore utility could not find files on the Greenplum Database master segment that are used to perform a restore operation, the utility did not return the correct error message.|
|22413||Query Planner||4.3.2||In some cases, an SQL query that contains the following returned incorrect results: a combination of a median function with other aggregates where the GROUP BY columns are a subset of the table's distribution columns.|
|22328||Management Scripts||4.3.2||When a Greenplum Database extension
package was updated with the Greenplum Database gppkg utility option -u, gppkg did
not warn the user that updating a package includes removing all previous versions of
the system objects related to the package.
Now, the gppkg utility warns the user and lets the user cancel the operation.
|22265||Locking, Signals, Processes||4.3.2||Greenplum Database hung due to incorrect lock handling that caused a race condition. The lock handling issue was caused by a compiler optimization.|
|22205||Replication: Segment Mirroring||4.3.2||In some cases, running the Greenplum Database command gprecoverseg -r to rebalance segment instances failed and caused database catalog corruption.|
|21916||Interconnect||4.3.2||In some cases when the Greenplum Database query dispatcher encountered connection errors, a postmaster reset occurred.|
|21867||DDL and Utility Statements||4.3.2||The performance of Greenplum Database truncate operations degraded between restarts of Greenplum Database.|
|21103||Query Execution||4.3.2||In Greenplum Database, support of subnormal double-precision (float8) numbers differed between Red Hat Enterprise Linux 5 and Red Hat Enterprise Linux 6. For example, the value 5e-309 was not handled consistently by Greenplum Database on RHEL 5 and RHEL 6. This issue has been resolved.|
|20600||Query Planner||4.3.2||For some SQL queries that contained a subquery, this error message was returned: ERROR: no parameter found for initplan subquery.|
|20268||Loaders: Copy/External Tables||4.3.2||In some cases when a COPY command was run, improper memory handling caused a PANIC signal to be issued.|
|19949||Backup and Restore||4.3.2||If a Greenplum database was backed up and the database name contained upper-case characters, the Greenplum Database gpdbrestore utility did not restore the database with the correct name.|
|19660||Authentication||4.3.2||Greenplum Database supports LDAP authentication. Previously, an issue in Greenplum Database prevented LDAPS (LDAP over SSL) from functioning. This issue has been resolved.|
|19246||Backup and Restore||4.3.2||When performing a selective restore of a partitioned table from a full backup with the Greenplum Database utility gpdbrestore, the data from leaf partitions are now restored. Previously, when performing a selective restore of a partitioned table, you needed to specify all the individual leaf partitions.|
|18774||Loaders||4.3.2||External web tables that use IPv6 addresses no longer require a port number when using the default port. In previous releases, a port number was required when using an IPv6 address.|
|13282||Backup and Restore||4.3.2||The database objects in the gp_toolkit schema were not restored after a database was re-created and then restored with the Greenplum Database gpdbrestore utility. The gp_toolkit objects are now restored when a database is re-created and restored.|
|Issue Number||Category||Resolved in||Description|
|23757||Security||4.3.1||Greenplum Database software has been updated to use OpenSSL 0.9.8za in response to the OpenSSL Security Advisory [05 Jun 2014]. For information about the advisory, see http://www.openssl.org/news/secadv_20140605.txt.|
|22301||Replication: Master Mirroring||4.3.1||DCA customers who wished to use Greenplum Database 4.3 could not use the utility dca_setup. This issue has been resolved in Greenplum Database 4.3.1.|
|22281||Backup and Restore||4.3.1||For partitioned append-optimized tables, a partition was backed up even though it was not modified.|
|21591||Management Scripts Suite||4.3.1||The Greenplum Database utilities gpstart and gprecoverseg hung when checking the process ID in the postmaster.pid file and the ID matched a non-postgres running process.|
|23421||Locking, Signals, Processes||4.3.1||In some cases, concurrent CREATE TABLE and DROP TABLE operations caused Greenplum Database to hang due to incorrect lock handling.|
|13825||Functions and Languages, Transaction Management||4.3.1||In PL/pgSQL functions, exception blocks were not handled properly. Depending on where the exception was encountered during function execution, the improper block handling resulted in either catalog inconsistency between the master and segments, or Greenplum Database issuing the following message: The distributed transaction 'Prepare' broadcast failed to one or more segments.|
|22655||Locking, Signals, Processes||4.3.1||Greenplum Database hung due to incorrect lock handling that caused a race condition. The lock handling issue was caused by a compiler optimization.|
|20924||Dispatch||4.3.1||For some queries that contained a window function and that executed on both the master and segments, the query would hang when executed from an ODBC/JDBC client.|
|21899||Backup and Restore||4.3.1||When performing an incremental backup, the gpcrondump utility backed up temporary tables that existed during the time of the backup. This caused a failure when performing a restore with the gpdbrestore utility that used the incremental backup.|
|22293||Backup and Restore||4.3.1||Greenplum Database supports Data Domain DDOS 5.4. See Supported Platforms for information about supported versions of Data Domain Boost.|
|22442||Loaders: gpfdist||4.3.1||The Greenplum Database Load Tools for Windows installation did not include the gssapi and auth libraries. This issue has been resolved.|
|19476||Client Access Methods and Tools||4.3.1||Running multiple gpload sessions simultaneously that loaded data into the same table resulted in inconsistent data in the table. See the gpload information in Product Enhancements.|
|22863||DDL and Utility Statements||4.3.1||When > (greater than) was used as an operator name in the CREATE OPERATOR CLASS command, this error was returned: operator > is not a valid ordering operator when using operator classes|
|22219||Query Planner||4.3.1||In certain queries that contain the median function and a GROUP BY clause, the query planner produced an incorrect plan in which some necessary columns were not projected in the operator nodes. This caused an error when trying to look up the missing columns.|
|22084||OS Abstraction||4.3.1||Improved handling of situations where Greenplum Database encounters segmentation violation errors.|
|17995||DDL and Utility Statements||4.3.1||In some cases, the functions pg_cancel_backend() and pg_terminate_backend() did not terminate sessions.|
|17773||DDL and Utility Statements||4.3.1||Greenplum Database did not properly check privileges during certain RESET ALL operations.|
|17481||Catalog and Metadata, DDL and Utility Statements||4.3.1||Queries on the system view pg_partitions could fail to return when DDL statements on partitioned tables were running concurrently.|
|15834||Loaders: Copy/External Tables||4.3.1||A COPY command cancel request (Ctrl+c) followed by another COPY command and a cancel request caused the Greenplum Database session to hang. When the cancel request was attempted again, a SIGSEGV error occurred.|
|14367||DDL and Utility Statements||4.3.1||ALTER TABLE ADD COLUMN with default NULL was not supported for append-optimized tables. This syntax is now supported.|
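The now-supported syntax looks like the following (the table and column names are hypothetical):

```sql
-- Add a nullable column with a NULL default to an append-optimized table;
-- previously this failed on tables created with APPENDONLY=TRUE.
ALTER TABLE my_ao_table ADD COLUMN note text DEFAULT NULL;
```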
|21522||Backup and Restore||4.3||The Greenplum Database utility pg_dump printed information-level messages (messages with the label [INFO]) to stderr that were not printed in previous releases. These messages were printed even when pg_dump completes without errors.|