Greenplum Database 4.3.12.0 Release Notes

Rev: A01

Updated: February 2017

Welcome to Pivotal Greenplum Database 4.3.12.0

Greenplum Database is a massively parallel processing (MPP) database server that supports next generation data warehousing and large-scale analytics processing. By automatically partitioning data and running parallel queries, it allows a cluster of servers to operate as a single database supercomputer performing tens or hundreds of times faster than a traditional database. It supports SQL, MapReduce parallel processing, and data volumes ranging from hundreds of gigabytes to hundreds of terabytes.

Note: This document contains pertinent release information about Greenplum Database 4.3.12.0. For previous versions of the release notes for Greenplum Database, go to Pivotal Documentation. For information about Greenplum Database end of life, see Greenplum Database end of life policy.
Important: Pivotal Global Support Services (GSS) does not provide support for open source versions of Greenplum Database. Only Pivotal Greenplum Database is supported by Pivotal GSS.

About Greenplum Database 4.3.12.0

Product Enhancements

s3 Protocol Enhancements

Greenplum Database 4.3.12.0 includes these s3 protocol enhancements.

  • The external table s3 protocol can access an Amazon S3 compatible service that is supported by Greenplum Database as an external data store. Greenplum Database supports the Amazon S3 compatible service Dell EMC Elastic Cloud Storage (ECS).

    To configure the s3 protocol to access an S3 compatible service:

    • In the s3 configuration file, set the version parameter to 2. The parameter controls whether you can specify an S3 compatible service in the CREATE EXTERNAL TABLE command.
    • Specify the location and region of the S3 compatible service in the LOCATION clause of the CREATE EXTERNAL TABLE command. When you define the service in the LOCATION clause, you specify the service location in the URL and specify the service region in the region parameter.
    This is an example LOCATION clause that contains a region parameter and specifies an S3 compatible service.
    LOCATION ('s3://test.company.com/s3test.company.io/ds1/normal/ region=us-west
          config=/home/gpadmin/aws_s3/s3.conf')
    Note: If the s3 configuration file version parameter is 2, you can also specify an Amazon S3 location. This example specifies an Amazon S3 location.
    LOCATION ('s3://s3-us-west-2.amazonaws.com/s3test.pivotal.io/ds1/normal/ region=us-west-2
          config=/home/gpadmin/aws_s3/s3.conf')
  • The s3 configuration file supports these new parameters that control connections to S3 data sources.
    • verifycert - Controls how authentication is performed when establishing encrypted communication between a client and an S3 data store over HTTPS.
      • verifycert=false enables the use of a self-signed SSL certificate.
      • verifycert=true requires an SSL certificate signed by a certificate authority (CA).
    • version - Enables support for the region parameter in the LOCATION clause of the CREATE EXTERNAL TABLE command. If the value is 1, the LOCATION clause supports an Amazon S3 URL and does not contain the region parameter. If the value is 2, the LOCATION clause supports S3 compatible services that are supported by Greenplum Database, and the region parameter is required in the LOCATION clause. The region parameter specifies the S3 data source region.

For information about the CREATE EXTERNAL TABLE command, see the Greenplum Database Reference Guide. For information about the s3 protocol, see the Greenplum Database Administrator Guide.
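The relationship between the version parameter and the region requirement can be sketched in Python. This is a minimal model only; the [default] section name and the pared-down parameter set are assumptions based on an ini-style s3 configuration file (the real file also carries credentials such as secret and accessid).

```python
from configparser import ConfigParser
from io import StringIO

# Hypothetical s3 configuration fragment for illustration only.
S3_CONF = """
[default]
version = 2
verifycert = true
"""

def region_required(conf_text):
    """Return True when the config's version parameter is 2, meaning the
    LOCATION clause must carry a region parameter."""
    parser = ConfigParser()
    parser.read_file(StringIO(conf_text))
    return parser.get("default", "version", fallback="1") == "2"

print(region_required(S3_CONF))  # version = 2, so region is required
```

With version = 1 (the default in this sketch), the same check returns False and the LOCATION clause would use a plain Amazon S3 URL without region.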

gpconfig Utility Displays Values from postgresql.conf

For Greenplum Database 4.3.12.0, the Greenplum Database utility gpconfig includes the --file option.

If you specify the --file option with the -s option to display the value of a server configuration parameter, the utility shows the value from the postgresql.conf file on all instances (master and segments) in the Greenplum Database system. If there is a difference in a parameter value among the instances, the utility displays a message.

For example, the server configuration parameter statement_mem is set to 64MB for a user with the ALTER ROLE command, and the value in the postgresql.conf file is 128MB. Running the command gpconfig -s statement_mem --file displays 128MB. The command gpconfig -s statement_mem run by the user displays 64MB.
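The precedence described above (a role-level setting from ALTER ROLE overriding the postgresql.conf value for that user's sessions) can be modeled with a toy sketch; the function and dictionary names are illustrative and not part of gpconfig:

```python
def effective_value(param, role_settings, conf_file):
    # Role-level settings (ALTER ROLE ... SET) take precedence over the
    # value in postgresql.conf for that user's sessions.
    return role_settings.get(param, conf_file.get(param))

conf_file = {"statement_mem": "128MB"}      # what gpconfig -s --file reports
role_settings = {"statement_mem": "64MB"}   # set via ALTER ROLE

effective_value("statement_mem", role_settings, conf_file)  # '64MB'
```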

Enhanced PL/Java Environment for Development

For Greenplum Database 4.3.12.0, the new server configuration parameter pljava_classpath_insecure controls the ability of normal database users to set the server configuration parameter pljava_classpath. Greenplum Database uses the list of jar files or directories containing jar files specified by pljava_classpath when running PL/Java functions. When pljava_classpath_insecure is enabled, Greenplum Database developers who are working on PL/Java functions do not have to be database superusers to change pljava_classpath. In previous releases, only database superusers could change pljava_classpath.

Warning: Enabling pljava_classpath_insecure exposes a security risk by giving non-administrator database users the ability to run unauthorized Java methods.

For information about the pljava_classpath_insecure parameter, see New Parameter. For information about the PL/Java procedural language, see the Greenplum Database Reference Guide.

Changed Features

Greenplum Database 4.3.12.0 includes these feature changes to Greenplum Database.

  • For the ALTER TABLE RENAME command, renaming a relation acquires an Access Exclusive lock on the relation.

    In previous releases, a lock was not acquired when the ALTER TABLE RENAME command renamed a table. This violated the isolation level required when concurrent transactions occurred on the table. For example, without a lock, renaming a table while concurrently inserting data into it could cause the insert to fail or to place data in the wrong table.

    For information about locks and the ALTER TABLE command, see the Greenplum Database Reference Guide.

  • For the ALTER TABLE ... CLUSTER ON command, executing the command on an append-optimized table returns an error.

    In previous releases, the command completed successfully on an append-optimized table. Clustering table data on an append-optimized table is not supported in Greenplum Database, so executing the CLUSTER command on an append-optimized table also returns an error.

    For information about the CLUSTER and ALTER TABLE commands, see the Greenplum Database Reference Guide.

  • For backup files created by the Greenplum Database gpcrondump utility, the file names specify both content id and db id. For Greenplum Database 4.3.12.0 and later releases, this is the format of the backup file names created by gpcrondump.
    prefix_gp_dump_content_dbid_timestamp
    The content and dbid are identifiers for the Greenplum Database segment instances that are assigned by Greenplum Database. For information about the identifiers, see the Greenplum Database system catalog table gp_id in the Greenplum Database Reference Guide.
    For Greenplum Database 4.3.11.3 and earlier releases, this is the format.
    prefix_gp_dump_[0 or 1]_dbid_timestamp

    Where the value 0 is for segment instances and the value 1 is for the master instance.

    The Greenplum Database 4.3.12.0 gpdbrestore utility recognizes both backup file name formats.

    For information about the utilities, see the Greenplum Database Utility Guide.

  • The Greenplum Database external table protocol gphdfs now supports Hortonworks HDP 2.4 and 2.5.

    With Greenplum Database external tables created with the CREATE EXTERNAL TABLE command, you can specify the gphdfs protocol to access external files on a Hadoop file system (HDFS) as if they were regular database tables.

    For information about supported Hadoop distributions, see Hadoop Distribution Compatibility. For information about external tables, see "Loading and Unloading Data" in the Greenplum Database Administrator Guide.
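The old and new gpcrondump backup file name formats described above share the same underscore-delimited shape, which is how a utility like gpdbrestore can accept both. A hypothetical parser sketch (the regular expression and field names are illustrative, not part of the utilities):

```python
import re

# Matches both the 4.3.12.0 (content id) and pre-4.3.12.0 (0 or 1 flag)
# formats: the first numeric field is the content id in new names and the
# segment/master flag in old names; dbid and a 14-digit timestamp follow.
BACKUP_RE = re.compile(
    r"^(?P<prefix>.*gp_dump)_(?P<first>-?\d+)_(?P<dbid>\d+)_(?P<ts>\d{14})$"
)

def parse_backup_name(name):
    m = BACKUP_RE.match(name)
    if not m:
        raise ValueError("not a gp_dump backup file name: %s" % name)
    return m.groupdict()

# New-style name: the first numeric field is the segment content id.
parse_backup_name("gp_dump_0_2_20170101010101")
```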

New and Changed Parameters

New Parameter

Greenplum Database 4.3.12.0 includes a new server configuration parameter.

For information about Greenplum Database server configuration parameters, see the Greenplum Database Reference Guide.

pljava_classpath_insecure

Controls whether the server configuration parameter pljava_classpath can be set by a user without Greenplum Database superuser privileges. When true, pljava_classpath can be set by a regular user. Otherwise, pljava_classpath can be set only by a superuser.

The default is false.

Warning: Enabling this parameter exposes a security risk by giving non-administrator database users the ability to run unauthorized Java methods.
Value Range: Boolean
Default: false
Set Classifications: master, session, reload, superuser

Changed Parameter

The description of the pljava_classpath server configuration parameter includes information about the pljava_classpath_insecure server configuration parameter.

For information about Greenplum Database server configuration parameters, see the Greenplum Database Reference Guide.

pljava_classpath

A colon (:) separated list of jar files or directories containing jar files needed for PL/Java functions. The full path to the jar file or directory must be specified, except the path can be omitted for jar files in the $GPHOME/lib/postgresql/java directory. The jar files must be installed in the same locations on all Greenplum hosts and readable by the gpadmin user.

The pljava_classpath parameter is used to assemble the PL/Java classpath at the beginning of each user session. Jar files added after a session has started are not available to that session.

If the full path to a jar file is specified in pljava_classpath it is added to the PL/Java classpath. When a directory is specified, any jar files the directory contains are added to the PL/Java classpath. The search does not descend into subdirectories of the specified directories. If the name of a jar file is included in pljava_classpath with no path, the jar file must be in the $GPHOME/lib/postgresql/java directory.
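The assembly rules above can be sketched as a toy model. The function and the listdir mapping are illustrative stand-ins; PL/Java performs the real search on the Greenplum hosts.

```python
import os

DEFAULT_JAR_DIR = "$GPHOME/lib/postgresql/java"

def assemble_classpath(pljava_classpath, listdir):
    """Toy model of PL/Java classpath assembly. `listdir` maps a directory
    path to the file names it contains (standing in for the file system)."""
    jars = []
    for entry in pljava_classpath.split(":"):
        if entry.endswith(".jar"):
            # A bare jar name is resolved under the default directory.
            if os.path.dirname(entry) == "":
                entry = DEFAULT_JAR_DIR + "/" + entry
            jars.append(entry)
        else:
            # A directory contributes the jar files it contains; the
            # search does not descend into subdirectories.
            jars.extend(entry + "/" + f
                        for f in listdir.get(entry, ())
                        if f.endswith(".jar"))
    return jars

assemble_classpath("examples.jar:/opt/jars",
                   {"/opt/jars": ["a.jar", "notes.txt"]})
# ['$GPHOME/lib/postgresql/java/examples.jar', '/opt/jars/a.jar']
```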

Note: Performance can be affected if there are many directories to search or a large number of jar files.

If pljava_classpath_insecure is false, setting the pljava_classpath parameter requires superuser privilege. Setting the classpath in SQL code will fail when the code is executed by a user without superuser privilege. The pljava_classpath parameter must have been set previously by a superuser or in the postgresql.conf file. Changing the classpath in the postgresql.conf file requires a reload (gpstop -u).

Value Range: string
Default: (none)
Set Classifications: master, session, reload, superuser

Deprecated Features

In Greenplum Database 4.3.12.0, these features are deprecated and will be removed in a future release.

These advanced analytic functions and advanced aggregate functions are deprecated and will be removed from Greenplum Database in a future release. Equivalent functionality as well as other mathematical, statistical, and machine-learning methods are available in the Greenplum Database MADlib extension.
Table 1. Deprecated Greenplum Database Analytic and Aggregate Functions
  • matrix_add() - MADlib equivalent: array_add
  • matrix_multiply() - MADlib equivalents: array_mult, array_scalar_mult
  • matrix_transpose() - MADlib equivalent: matrix_trans. The MADlib function syntax and format differ from matrix_transpose().
  • pinv() - MADlib equivalents: matrix_pinv, __matrix_pinv_final. The MADlib function syntax and format differ from pinv().
  • mregr_coef(), mregr_r2(), mregr_pvalues(), mregr_tstats() - MADlib equivalent: linregr_train. The MADlib function returns an object of type linregr_result that contains the results of the aggregate functions.
  • nb_classify(), nb_probabilities() - MADlib equivalents: create_nb_classify_view, create_nb_probs_view. The MADlib functions provide similar functionality.

For information about installing and using the MADlib extension package, see "Greenplum MADlib Extension for Analytics" in the Greenplum Database Reference Guide.

Documentation Change

Greenplum Database 4.3.12.0 includes this documentation change.
  • In the Greenplum Database documentation, the name Pivotal Query Optimizer has been changed to GPORCA.

    In Greenplum Database, GPORCA co-exists with the legacy query optimizer. For information about the Greenplum Database query optimizers, see the Greenplum Database Administrator Guide.

Downloading Greenplum Database

These are the locations of the Greenplum Database software and documentation:

Supported Platforms

Greenplum Database 4.3.12.0 runs on the following platforms:

  • Red Hat Enterprise Linux 64-bit 7.x (See the following Note)
  • Red Hat Enterprise Linux 64-bit 6.x
  • Red Hat Enterprise Linux 64-bit 5.x
  • SuSE Linux Enterprise Server 64-bit 11 SP1, 11 SP2, 11 SP4
  • Oracle Unbreakable Linux 64-bit 5.5
  • CentOS 64-bit 7.x
  • CentOS 64-bit 6.x
  • CentOS 64-bit 5.x
Note: For Greenplum Database that is installed on Red Hat Enterprise Linux 7.x or CentOS 7.x prior to 7.3, an operating system issue might cause Greenplum Database to hang while running large workloads. The Greenplum Database issue is caused by Linux kernel bugs.

RHEL 7.3 and CentOS 7.3 resolve the issue.

Note: Support for SuSE Linux Enterprise Server 64-bit 10 SP4 has been dropped for Greenplum Database 4.3.9.0 and later releases.
Greenplum Database 4.3.x supports these Java versions:
  • 8.xxx
  • 7.xxx
  • 6.xxx
The Greenplum Database s3 external table protocol supports these data sources:
  • Amazon Simple Storage Service (Amazon S3)
  • Amazon S3 compatible services that are supported by Greenplum Database, such as Dell EMC Elastic Cloud Storage (ECS)

Greenplum Database 4.3.x supports Data Domain Boost on Red Hat Enterprise Linux.

This table lists the versions of Data Domain Boost SDK and DDOS supported by Greenplum Database 4.3.x.

Table 2. Data Domain Boost Compatibility
Greenplum Database        Data Domain Boost   DDOS
4.3.12.0                  3.0.0.3             5.7, 5.6, 5.5, 5.4, 5.3 (all versions)
4.3.11.1 - 4.3.11.3       3.0.0.3             5.7, 5.6, 5.5, 5.4, 5.3 (all versions)
4.3.10.0                  3.0.0.3             5.7, 5.6, 5.5, 5.4, 5.3 (all versions)
4.3.9.0 - 4.3.9.1         3.0.0.3             5.7, 5.6, 5.5, 5.4, 5.3 (all versions)
4.3.8.0 - 4.3.8.1         3.0.0.3             5.6, 5.5, 5.4, 5.3 (all versions)
4.3.7.0 - 4.3.7.3         3.0.0.3             5.6, 5.5, 5.4, 5.3 (all versions)
4.3.6.0 - 4.3.6.2         3.0.0.3             5.6 (all versions), 5.5.0.x, 5.4 (all versions), 5.3 (all versions)
4.3.5.0 - 4.3.5.3         3.0.0.3             5.5.0.x, 5.4 (all versions), 5.3 (all versions)
4.3.4.0 - 4.3.4.2         3.0.0.3             5.5.0.x, 5.4 (all versions), 5.3 (all versions)
4.3.3.0                   2.6.2.0             5.2, 5.3, and 5.4
4.3.2.0                   2.6.2.0             5.2, 5.3, and 5.4
4.3.1.0                   2.6.2.0             5.2, 5.3, and 5.4
4.3.0.0                   2.4.2.2             5.0.1.0, 5.1, and 5.2
Note: In addition to the DDOS versions listed in the previous table, Greenplum Database 4.3.4.0 and later supports all minor patch releases (fourth digit releases) later than the certified version.
Greenplum Database support on DCA:
  • Greenplum Database 4.3.x, all versions, is supported on DCA V3.
  • Greenplum Database 4.3.x, all versions, is supported on DCA V2, and requires DCA software version 2.1.0.0 or greater due to known DCA software issues in older DCA software versions.
  • Greenplum Database 4.3.x, all versions, is supported on DCA V1, and requires DCA software version 1.2.2.2 or greater due to known DCA software issues in older DCA software versions.
Note: Greenplum Database 4.3.12.0 does not support the ODBC driver for Cognos Analytics V11.

In the next major release of Greenplum Database, connecting to IBM Cognos software with an ODBC driver will not be supported. Greenplum Database supports connecting to IBM Cognos software with a JDBC driver.

Pivotal recommends that you migrate to a version of IBM Cognos software that supports connectivity to Greenplum Database with a JDBC driver.

Supported Platform Notes

The following notes describe platform support for Greenplum Database. Please send any questions or comments to Pivotal Support at https://support.pivotal.io.

  • The only file system supported for running Greenplum Database is the XFS file system. All other file systems are explicitly not supported by Pivotal.
  • Greenplum Database is supported on all 1U and 2U commodity servers with local storage. Special purpose hardware that is not commodity may be supported at the full discretion of Pivotal Product Management based on the general similarity of the hardware to commodity servers.
  • Greenplum Database is supported on network or shared storage if the shared storage is presented as a block device to the servers running Greenplum Database and the XFS file system is mounted on the block device. Network file systems are not supported. When using network or shared storage, Greenplum Database mirroring must be used in the same way as with local storage, and no modifications may be made to the mirroring scheme or the recovery scheme of the segments. Other features of the shared storage such as de-duplication and/or replication are not directly supported by Pivotal Greenplum Database, but may be used with support of the storage vendor as long as they do not interfere with the expected operation of Greenplum Database at the discretion of Pivotal.
  • Greenplum Database is supported when running on virtualized systems, as long as the storage is presented as block devices and the XFS file system is mounted for the storage of the segment directories.
  • A minimum 10-gigabit network is required for a system configuration to be supported by Pivotal.
  • Greenplum Database is supported on Amazon Web Services (AWS) servers using either Amazon instance store (Amazon uses the volume names ephemeral[0-20]) or Amazon Elastic Block Store (Amazon EBS) storage. If using Amazon EBS storage the storage should be RAID of Amazon EBS volumes and mounted with the XFS file system for it to be a supported configuration.
  • For Red Hat Enterprise Linux 7.2 or CentOS 7.2, the default systemd setting RemoveIPC=yes removes IPC connections when non-system users logout. This causes the Greenplum Database utility gpinitsystem to fail with semaphore errors. To avoid this issue, see "Setting the Greenplum Recommended OS Parameters" in the Greenplum Database Installation Guide.

Resolved Issues in Greenplum Database 4.3.12.0

The table below lists issues that are now resolved in Pivotal Greenplum Database 4.3.12.0.

For issues resolved in prior 4.3 releases, refer to the corresponding release notes. Release notes are available from Pivotal Network or on the Pivotal Greenplum Database documentation site at Release Notes. A consolidated list of resolved issues for all 4.3 releases is also available on the documentation site.

Table 3. Resolved Issues in 4.3.12.0
Issue Number Category Resolved In Description
26798 Dispatch 4.3.12.0 In some cases, when executing a query that contains an exception block in a function, Greenplum Database might have caused a SIGBUS or SIGSEGV error. The error occurred because Greenplum Database incorrectly dispatched a command to a busy gang process on a segment when executing the exception block.

This issue has been resolved. Now, Greenplum Database aborts the query and prints an error message in the specified situation.

26790 Scripts: gpcheckcat 4.3.12.0 When the Greenplum Database gpcheckcat utility completed, the completion message displayed the incorrect time.

This issue has been resolved. Now, the utility displays the correct time.

26787 Dispatch 4.3.12.0 In some cases, a Greenplum Database PANIC occurred when a query that accessed an external table failed. Greenplum Database did not correctly handle an error that occurred when a scan of an external table failed.

The issue has been resolved. Now, the specified error is handled properly.

26782 Query Optimizer 4.3.12.0 For queries that require a full join on append-optimized partitioned tables, GPORCA did not handle non-visible system columns on the tables correctly. This generated an incorrect plan that caused an error in the query executor.

This issue has been resolved. Now, GPORCA handles non-visible system columns correctly in the specified situation.

26760 Query Optimizer 4.3.12.0 For queries with a projection list that contains multiple set returning functions, GPORCA incorrectly collapsed the projected nodes. This caused GPORCA to return incorrect results.

This issue has been resolved. GPORCA correctly handles the specified type of queries.

26753 Query Planner 4.3.12.0 In some cases, the legacy query optimizer generated a Greenplum Database PANIC when it encountered corrupted statistics during query optimization.

This issue has been resolved. Now, when the legacy query optimizer detects corrupted statistics, the corrupted statistics are skipped and a warning is issued.

26719 DML, Storage: Access Methods 4.3.12.0 For a column-oriented table, in some rare cases, data might not be stored correctly in a column when the column is defined with data type BIGINT and with RLE compression. The error occurred due to incorrect handling of data during the compression process.

This issue has been resolved. The data compression process has been improved.

26706 Query Execution 4.3.12.0 For some queries, the query plan generated by GPORCA did not correctly handle bitmap index information during query execution. This caused a Greenplum Database PANIC.

The issue has been resolved. Handling of bitmap index information during query execution has been improved.

26573 Storage: Vacuum/Reindex/Truncate 4.3.12.0 In some cases, when a lazy vacuum operation was performed by the VACUUM command, Greenplum Database did not perform a compaction of the database segment files when the ratio of hidden rows to total rows in a segment file was greater than the Greenplum Database server configuration parameter gp_appendonly_compaction_threshold. Compaction was not performed because the ratio was not calculated correctly.

This issue has been resolved. The ratio calculation has been improved. For information about the parameter, see the Greenplum Database Reference Guide.

26440 Scripts: gpconfig 4.3.12.0 The server configuration parameter value displayed by the Greenplum Database gpconfig utility with the -s option is the value from the database, not the value from the master and segment instance postgresql.conf files.

The parameter value from the postgresql.conf files can be displayed by specifying the gpconfig option --file with the -s option. See Product Enhancements.

20620 DDL 4.3.12.0 The ALTER TABLE RENAME command did not acquire an Access Exclusive lock on the table.

This issue has been resolved. See Changed Features.

136986803 Query Execution 4.3.12.0 Greenplum Database returned incorrect results for a query when the legacy query planner generated a query plan that contained a Merge Full Join and both children of the join were Motion operators. Incorrect results were returned because not all data was returned by a Motion operator.

This issue has been resolved. Now Motion operators return all the data in the specified situations.

n/a Server Configuration 4.3.12.0 The server configuration parameter pljava_classpath_insecure, described in earlier versions of the 4.3.11.x documentation, is not available in this Greenplum Database release.

This issue has been resolved. See New Parameter.

Known Issues in Greenplum Database 4.3.12.0

This section lists the known issues in Greenplum Database 4.3.12.0. A workaround is provided where applicable.

For known issues discovered in previous 4.3.x releases, see the release notes available on Pivotal Network or on the Pivotal Greenplum Database documentation site at Release Notes. For known issues discovered in other previous releases, including patch releases to Greenplum Database 4.2.x, 4.1, or 4.0.x, see the corresponding release notes, available from Dell EMC Support Zone.

Table 4. All Known Issues in 4.3.12.0
Issue Category Description
26591 Query Execution For the Greenplum Database function get_ao_compression_ratio(), specifying a null value or the name of a table that contains no rows causes a Greenplum Database PANIC.

Workaround: Specify a non-null value or a table that contains rows.

115746399 Operating System For Greenplum Database that is installed on Red Hat Enterprise Linux 7.x or CentOS 7.x prior to 7.3, an operating system issue might cause Greenplum Database to hang while running large workloads. The Greenplum Database issue is caused by Linux kernel bugs.

Workaround: RHEL 7.3 and CentOS 7.3 resolve the issue.

26626 GPHDFS For Greenplum Database external tables, the gphdfs protocol supports Avro files that contain a single top-level schema. Avro files that contain multiple top-level schemas are not supported.
25584 Query Execution In some situations, a running Greenplum Database query cannot be terminated with the functions pg_cancel_backend or pg_terminate_backend.

The functions cannot terminate the query due to a blocking fopen of a FIFO file for write.

26249 GPHDFS When reading data from an Avro file, the gphdfs protocol does not support the double quote character (") within string data. The gphdfs protocol uses the double quote as the column delimiter.

Workaround: Before reading data from an Avro file, either remove double quotes that are in string data or replace the character with a different character.

26292 Loaders: gpload The Greenplum Database gpload utility fails on Mac OS X El Capitan. The utility script is included with the Greenplum Database Load Tools installer package for Apple OS X greenplum-loaders-version-OSX-i386.bin.
Workaround: Run the python script gpload.py directly. For example, this command displays the gpload help information on the command line.
python gpload.py -h
26128 Loaders: gpload When the YAML control file for the Greenplum Database gpload utility specifies the key LOG_ERRORS: true without the key REUSE TABLES: true, the gpload operation returns only summary information about formatting errors, and the formatting errors are deleted from the Greenplum Database error logs. When REUSE TABLES: true is not specified, the temporary tables that gpload uses are dropped after the gpload operation, and the formatting errors are deleted along with them.

Workaround: Specify the YAML control file key REUSE TABLES: true to retain the temporary tables that are used to load the data. The log information is also retained. You can delete the formatting errors in the Greenplum Database logs with the Greenplum Database function gp_truncate_error_log().

For information about the gpload utility, see the Greenplum Database Utility Guide.

25934, 25936 Query Optimizer, Query Planner For queries that compare data from columns of different character types, for example a join comparing columns of data types CHAR(n) and VARCHAR(m), the returned results might not be as expected depending on the padding added to the data (space characters added after the last non-space character).
For example, this comparison returns false.
select 'A '::char(2) ='A '::text ;
This comparison returns true.
select 'A'::char(2) ='A '::varchar(5) ; 

Workaround: Pivotal recommends specifying character column types to be of data type VARCHAR or TEXT so that comparisons include padding added to the data.

For information about how the character data types CHAR, VARCHAR, and TEXT handle padding added to the data see the CREATE TABLE command in the Greenplum Database Reference Guide.
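A toy Python model of the padding behavior described above (the helper names are illustrative; the actual cast and comparison rules are defined by the database):

```python
def char_eq(a, b):
    # CHAR(n) comparison semantics: trailing spaces are not significant.
    return a.rstrip(" ") == b.rstrip(" ")

def text_eq(a, b):
    # TEXT/VARCHAR comparison semantics: trailing spaces are significant.
    return a == b

# 'A '::char(2) = 'A '::text -- the char value is cast to text, which
# strips its trailing padding, so the comparison is 'A' = 'A ' and false.
text_eq("A ".rstrip(" "), "A ")   # False

# 'A'::char(2) = 'A '::varchar(5) -- this comparison keeps the
# space-insensitive semantics, so it is true.
char_eq("A ", "A ")               # True
```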

25737 Catalog and Metadata Greenplum Database does not support the FILTER clause within aggregate expressions.
25754 Management Scripts: expansion The Greenplum Database gpexpand utility fails to create an input file for system expansion if the Greenplum Database system defines different TCP/IP port numbers on different hosts for Greenplum Database internal communication.

Workaround: Create the input file manually.

25833 Management Scripts: gpexpand The Greenplum Database utility gpexpand fails when expanding a Greenplum Database system in which a database table column name contains a tab character. The utility does not support database names, table names, or column names that contain a tab character.
15835 DDL and Utility Statements For multi-level partitioned tables that have these characteristics:
  • The top level partition is partitioned by range.
  • The lowest level partitions (the leaf child partitions) are partitioned by list.
Splitting a subpartition with the ALTER TABLE SPLIT PARTITION command returns an error and rolls back the transaction.
12019 Management Scripts: checkperf When the Greenplum Database gpcheckperf utility is run with the option -f host_file and the host that is running gpcheckperf is listed in host_file, processes that were started by gpcheckperf might not be cleaned up after the utility completes.

Workaround: Manually stop the processes that were started by gpcheckperf.

24870 Query Optimizer GPORCA might terminate all sessions if a query attempts to cast a date with a year greater than 200,000 to a timestamp.
23571 Query Optimizer For queries that contain inequality conditions such as !=, <, and >, GPORCA does not consider table indexes when generating a query plan. For those queries, indexes are not used and the query might run slower than expected.
21508 Query Optimizer GPORCA does not support GiST indexes.
20030 Query Optimizer GPORCA does not support partition elimination when the query contains functions that are applied to the partition key.
20360 Query Execution GPORCA does not enforce different access rights in different parts of a partition table. Pivotal recommends that you set the same access privileges for the partitioned table and all its parts (child tables).
20241 Query Optimizer GPORCA does not consider indexes when querying the parts (child tables) of partitioned tables directly.
25326 Interconnect Setting the Greenplum Database server configuration parameter log_hostname to on on Greenplum Database segment hosts causes an interconnect error that states that the listener address name or service is not known.

The parameter should be set to on only on the Greenplum Database master.

25280 Management Scripts: gpstart/gpstop The Greenplum Database utility gpstop returns an error if it is run when the system environment variable LANG is set, for example, export LANG=ja_JP.UTF-8.
Workaround: Unset the environment variable LANG before running the gpstop utility. For example:
$ unset LANG
25246 Management Scripts: gpconfig When you set the server configuration parameters gp_email_to and gp_email_from with the Greenplum Database utility gpconfig, the utility removes the single quotes from the values.
$ gpconfig -c gp_email_to -v 'test@example.com'
The improperly set parameter causes Greenplum Database to fail when it is restarted.
Workaround: Enclose the value for gp_email_to or gp_email_from with double quotes.
$ gpconfig -c gp_email_to -v "'test@example.com'"
25168 Locking, Signals, Processes When the server configuration parameter client_min_messages is set to either PANIC or FATAL and a PANIC or FATAL level message is encountered, Greenplum Database hangs.

The client_min_messages parameter should not be set to a value higher than ERROR.

24588 Management Scripts: gpconfig The Greenplum Database gpconfig utility does not display the correct information for the server configuration parameter gp_enable_gpperfmon. The parameter displays the state of the Greenplum Command Center data collection agents (gpperfmon).

Workaround: The SQL command SHOW displays the correct gp_enable_gpperfmon value.

24031 gphdfs If a readable external table is created with FORMAT 'CSV' and uses the gphdfs protocol, reading a record fails if the record spans multiple lines and the record is stored in multiple HDFS blocks.

Workaround: Remove line separators from within the record so that the record does not span multiple lines.

23824 Authentication In some cases, LDAP client utility tools cannot be used after running the source command:

source $GPHOME/greenplum_path.sh

because the LDAP libraries included with Greenplum Database are not compatible with the LDAP client utility tools that are installed with operating system.

Workaround: The LDAP tools can be used without running the source command in the environment.

23366 Resource Management In Greenplum Database 4.2.7.0 and later, the priority of some running queries cannot be dynamically adjusted with the gp_adjust_priority() function. The attempt to execute this request might silently fail. The return value of the gp_adjust_priority() call indicates success or failure: if 1 is returned, the request was not successfully executed; if a number greater than 1 is returned, the request was successful. If the request fails, the priorities of all running queries are unchanged; they remain as they were before the gp_adjust_priority() call.
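Because a failed adjustment is silent, always check the function's return value. A hedged sketch (the session ID, command count, and priority value below are placeholders, and the exact argument list is the one shipped with your release):

```sql
-- Returns 1 if the request was NOT applied; a value greater than 1 on success.
SELECT gp_adjust_priority(1234, 1, 'HIGH');
```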
23492 Backup and Restore A backup from a Greenplum Database 4.3.x system that is created with a Greenplum Database backup utility, for example gpcrondump, cannot be restored to a Greenplum Database 4.2.x system with the psql utility or the corresponding restore utility, for example gpdbrestore.
23521 Client Access Methods and Tools Hadoop YARN based on Hadoop 2.2 or later does not work with Greenplum Database.

Workaround: For Hadoop distributions based on Hadoop 2.2 or later that are supported by Greenplum Database, the classpath environment variable and other directory paths defined in $GPHOME/lib/hadoop/hadoop_env.sh must be modified so that the paths point to the appropriate JAR files.

20453 Query Planner For SQL queries of either of the following forms:
SELECT columns FROM table WHERE table.column NOT IN subquery;
SELECT columns FROM table WHERE table.column = ALL subquery;
tuples that satisfy both of the following conditions are not included in the result set:
  • table.column is NULL.
  • subquery returns the empty result.
21838 Backup and Restore When restoring sets of tables with the Greenplum Database utility gpdbrestore, the table schemas must be defined in the database. If a table’s schema is not defined in the database, the table is not restored. When performing a full restore, the database schemas are created when the tables are restored.

Workaround: Before restoring a set of tables, create the schemas for the tables in the database.
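For example, a minimal sketch of the workaround (the schema name is a placeholder for whatever schemas the restored tables belong to):

```sql
-- Create the target schema first; gpdbrestore can then restore tables into it.
CREATE SCHEMA myschema;
```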

21129 DDL and Utility Statements SSL is only supported on the master host. It is not supported on segment hosts.
20822 Backup and Restore Special characters such as !, $, #, and @ cannot be used in the password for the Data Domain Boost user when specifying the Data Domain Boost credentials with the gpcrondump options --ddboost-host and --ddboost-user.
18247 DDL and Utility Statements TRUNCATE command does not remove rows from a sub-table of a partitioned table. If you specify a sub-table of a partitioned table with the TRUNCATE command, the command does not remove rows from the sub-table and its child tables.

Workaround: Use the ALTER TABLE command with the TRUNCATE PARTITION clause to remove rows from the sub-table and its child tables.
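As an illustration, assuming a partitioned table sales with a sub-partition named jan2017 (both names are hypothetical):

```sql
-- Removes rows from the jan2017 sub-table and its child tables,
-- which plain TRUNCATE on the sub-table does not do.
ALTER TABLE sales TRUNCATE PARTITION jan2017;
```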

19705 Loaders: gpload gpload fails on Windows XP with Python 2.6.

Workaround: Install Python 2.5 on the system where gpload is installed.

19493

19464

19426

Backup and Restore The gpcrondump and gpdbrestore utilities do not handle errors returned by DD Boost or Data Domain correctly.
These are two examples:
  • If invalid Data Domain credentials are specified when setting the Data Domain Boost credentials with the gpcrondump utility, the error message does not indicate that invalid credentials were specified.
  • Restoring a Greenplum database from a Data Domain system with gpdbrestore and the --ddboost option indicates success even though segment failures occurred during the restore.

Workaround: The errors are logged in the master and segment server backup or restore status and report files. Scan the status and report files to check for error messages.

15692

17192

Backup and Restore Greenplum Database’s implementation of RSA lock box for Data Domain Boost changes backup and restore requirements for customers running SuSE.

The current implementation of the RSA lock box for Data Domain Boost login credential encryption only supports customers running on Red Hat Enterprise Linux.

Workaround: If you run Greenplum Database on SuSE, use NFS as your backup solution. See the Greenplum Database Administrator Guide for information on setting up an NFS backup.

18850 Backup and Restore Data Domain Boost credentials cannot be set up in some environments due to the absence of certain libraries (for example, libstdc++) expected to reside on the platform.

Workaround: Install the missing libraries manually on the system.

18851 Backup and Restore When performing a data-only restore of a particular table, it is possible to introduce data into Greenplum Database that contradicts the distribution policy of that table. In such cases, subsequent queries may return unexpected and incorrect results. To avoid this scenario, we suggest you carefully consider the table schema when performing a restore.
18713 Catalog and Metadata DROP LANGUAGE plpgsql CASCADE results in a loss of gp_toolkit functionality.

Workaround: Reinstall gp_toolkit.

18710 Management Scripts Suite Greenplum Management utilities cannot parse IPv6 IP addresses.

Workaround: Always specify IPv6 hostnames rather than IP addresses.

18703 Loaders When using gpfdist with data in text format, the bytenum field (byte offset in the load file where the error occurred) in the error log is not populated, making it difficult to find the location of an error in the source file.
12468 Management Scripts Suite gpexpand --rollback fails if an error occurs during expansion such that it leaves the database down.

gpstart also fails because it detects that expansion is in progress and suggests running gpexpand --rollback, which will not work because the database is down.

Workaround: Run gpstart -m to start the master and then run rollback.

18785 Loaders Running gpload with the --ssl option and the relative path of the source file results in an error that states the source file is missing.

Workaround: Provide the full path in the yaml file or add the loaded data file to the certificate folder.

18414 Loaders Unable to define external tables with fixed width format and empty line delimiter when the file size is larger than the gpfdist chunk size (by default, 32K).
17285 Backup and Restore NFS backup with gpcrondump -c can fail.

In circumstances where you haven't backed up to a local disk before, backups to NFS using gpcrondump with the -c option can fail. On fresh systems where a backup has not been previously invoked, there are no dump files to clean up and the -c flag has no effect.

Workaround: Do not run gpcrondump with the -c option the first time a backup is invoked from a system.

17837 Upgrade/ Downgrade Major version upgrades internally depend on the gp_toolkit system schema. The alteration or absence of this schema may cause upgrades to error out during preliminary checks.

Workaround: To enable the upgrade process to proceed, you need to reinstall the gp_toolkit schema in all affected databases by applying the SQL file found here: $GPHOME/share/postgresql/gp_toolkit.sql.

17513 Management Scripts Suite Running more than one gpfilespace command concurrently with itself to move either temporary files (--movetempfilespace) or transaction files (--movetransfilespace) to a new filespace can in some circumstances cause OID inconsistencies.

Workaround: Do not run more than one gpfilespace command concurrently with itself. If an OID inconsistency is introduced gpfilespace --movetempfilespace or gpfilespace --movetransfilespace can be used to revert to the default filespace.

17780 DDL/DML: Partitioning ALTER TABLE ADD PARTITION inheritance issue

When performing an ALTER TABLE ADD PARTITION operation, the resulting parts may not correctly inherit the storage properties of the parent table in cases such as adding a default partition or more complex subpartitioning. This issue can be avoided by explicitly dictating the storage properties during the ADD PARTITION invocation. For leaf partitions that are already afflicted, the issue can be rectified through use of EXCHANGE PARTITION.

17795 Management Scripts Suite Under some circumstances, gppkg on SuSE is unable to correctly interpret error messages returned by rpm.

On SuSE, gppkg is unable to operate correctly under circumstances that require a non-trivial interpretation of underlying rpm commands. This includes scenarios that result from overlapping packages, partial installs, and partial uninstalls.

17604 Security A Red Hat Enterprise Linux (RHEL) 6.x security configuration file limits the number of processes that can run on gpadmin.

RHEL 6.x contains a security file (/etc/security/limits.d/90-nproc.conf) that limits available processes running on gpadmin to 1064.

Workaround: Remove this file or increase the processes to 131072.
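As a sketch of the second option, the replacement file might look like the following; the limit value comes from the workaround above, and the file is written to /tmp here for illustration (copy it to /etc/security/limits.d/ as root on the actual host):

```shell
# Hypothetical contents for /etc/security/limits.d/90-nproc.conf with a raised limit.
cat > /tmp/90-nproc.conf <<'EOF'
# Raise the default per-user process limit (workaround value).
*          soft    nproc     131072
root       soft    nproc     unlimited
EOF
# Show the resulting limit lines.
grep nproc /tmp/90-nproc.conf
```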

17334 Management Scripts Suite You may see warning messages that interfere with the operation of management scripts when logging in.

Greenplum recommends that you edit the /etc/motd file and add the warning message to it. This redirects the messages to stdout rather than stderr. You must encode these warning messages in UTF-8 format.

17221 Resource Management Resource queue deadlocks may be encountered if a cursor is associated with a query invoking a function within another function.
17113 Management Scripts Suite Filespaces are inconsistent when the Greenplum database is down.

Filespaces become inconsistent in case of a network failure. Greenplum recommends that processes such as moving a filespace be done in an environment with an uninterrupted power supply.

17189 Loaders: gpfdist gpfdist shows the error “Address already in use” after successfully binding to socket IPv6.

Greenplum supports IPv4 and IPv6. However, gpfdist fails to bind to the IPv4 socket and shows the message “Address already in use”, but binds successfully to the IPv6 socket.

16064 Backup and Restore Restoring a compressed dump with the --ddboost option displays incorrect dump parameter information.

When using gpdbrestore --ddboost to restore a compressed dump, the restore parameters incorrectly show “Restore compressed dump = Off”. This error occurs even if gpdbrestore passes the --gp-c option to use gunzip for in-line de-compression.

15899 Backup and Restore When running gpdbrestore with the list (-L) option, external tables do not appear; this has no functional impact on the restore job.

Upgrading to Greenplum Database 4.3.12.0

The upgrade path supported for this release is Greenplum Database 4.2.x.x to Greenplum Database 4.3.12.0. The minimum recommended upgrade path for this release is from Greenplum Database version 4.2.x.x. If you have an earlier major version of the database, you must first upgrade to version 4.2.x.x.

Prerequisites

Before starting the upgrade process, Pivotal recommends performing the following checks.

  • Verify the health of the Greenplum Database host hardware, and verify that the hosts meet the requirements for running Greenplum Database. The Greenplum Database gpcheckperf utility can assist you in confirming the host requirements.
  • If upgrading from Greenplum Database 4.2.x.x, Pivotal recommends running the gpcheckcat utility to check for Greenplum Database catalog inconsistencies.
    Note: If you need to run the gpcheckcat utility, Pivotal recommends running it a few weeks before the upgrade and that you run gpcheckcat during a maintenance period. If necessary, you can resolve any issues found by the utility before the scheduled upgrade.

    The utility is in $GPHOME/bin. Pivotal recommends that Greenplum Database be in restricted mode when you run the gpcheckcat utility. See the Greenplum Database Utility Guide for information about the gpcheckcat utility.

    If gpcheckcat reports catalog inconsistencies, you can run gpcheckcat with the -g option to generate SQL scripts to fix the inconsistencies.

    After you run the SQL scripts, run gpcheckcat again. You might need to repeat the process of running gpcheckcat and creating SQL scripts to ensure that there are no inconsistencies. Pivotal recommends that the SQL scripts generated by gpcheckcat be run on a quiescent system. The utility might report false alerts if there is activity on the system.

    Important: If the gpcheckcat utility reports errors, but does not generate a SQL script to fix the errors, contact Pivotal support. Information for contacting Pivotal Support is at https://support.pivotal.io.
  • Ensure that the Linux sed utility is installed on the Greenplum Database hosts; the gpinitsystem utility requires it. In Greenplum Database releases prior to 4.3.10.0, the Linux ed utility is required on Greenplum Database hosts.
  • During the migration process from Greenplum Database 4.2.x.x, a backup is made of some files and directories in $MASTER_DATA_DIRECTORY. Pivotal recommends that files and directories that are not used by Greenplum Database be backed up, if necessary, and removed from the $MASTER_DATA_DIRECTORY before migration. For information about the Greenplum Database migration utilities, see the Greenplum Database Utility Guide.
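The sed prerequisite can be verified with a quick check like the following minimal sketch; run it on each host (for example, via gpssh) to cover the whole cluster:

```shell
# Verify that sed is on the PATH; gpinitsystem depends on it.
if command -v sed >/dev/null 2>&1; then
    echo "sed: OK"
else
    echo "sed: MISSING" >&2
fi
```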
Important: If you intend to use an extension package with Greenplum Database 4.3.12.0, you must install and use Greenplum Database extension packages (gppkg files and contrib modules) that are built for Greenplum Database 4.3.5.0 or later. For custom modules that were used with Greenplum Database 4.3.4.x and earlier, you must rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.5.0 or later.

If you use the Greenplum Database MADlib extension, upgrade to MADlib 1.10 on Greenplum Database 4.3.12.0. If you do not upgrade to MADlib 1.10, the MADlib madpack utility will not function. The MADlib analytics functionality will continue to work. If you upgrade to MADlib 1.9.1, see "Greenplum MADlib Extension for Analytics", in the Greenplum Database Reference Guide.

For detailed upgrade procedures and information, see the following sections:

If you are utilizing Data Domain Boost, you have to re-enter your DD Boost credentials after upgrading from Greenplum Database 4.2.x.x to 4.3.x.x as follows:

gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user
  --ddboost-backupdir backup_directory
Note: If you do not reenter your login credentials after an upgrade, your backup will never start because the Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.

Upgrading from 4.3.x to 4.3.12.0

An upgrade from 4.3.x to 4.3.12.0 involves stopping Greenplum Database, updating the Greenplum Database software binaries, and restarting Greenplum Database. If you are using Greenplum Extension packages, you must install and use Greenplum Database 4.3.5.0 or later extension packages. If you are using custom modules with the extensions, you must also use modules that were built for use with Greenplum Database 4.3.5.0 or later.

Important: If you are upgrading from Greenplum Database 4.3.x on a Pivotal DCA system, see Upgrading from 4.3.x to 4.3.12.0 on Pivotal DCA Systems. This section is for upgrading to Greenplum Database 4.3.12.0 on non-DCA systems.
Note: If you are upgrading from Greenplum Database between 4.3.0 and 4.3.2, run the fix_ao_upgrade.py utility to check Greenplum Database for the upgrade issue and fix the upgrade issue (See step 11). The utility is in this Greenplum Database directory: $GPHOME/share/postgresql/upgrade

For information about the utility, see fix_ao_upgrade.py Utility.

Note: If your database contains append-optimized tables that were converted from Greenplum Database 4.2.x append-only tables, and you are upgrading from a 4.3.x release earlier than 4.3.6.0, run the fix_visimap_owner.sql script to fix a Greenplum Database append-optimized table issue (See step 12). The utility is in this Greenplum Database directory: $GPHOME/share/postgresql/upgrade

For information about the script, see fix_visimap_owner.sql Script.

Note: If the Greenplum Command Center database gpperfmon is installed in your Greenplum Database system, the migration process changes the distribution key of the Greenplum Database log_alert_* tables to the logtime column. The redistribution of the table data might take some time the first time you start Greenplum Database after migration. The change occurs only the first time you start Greenplum Database after a migration.
  1. Log in to your Greenplum Database master host as the Greenplum administrative user:
    $ su - gpadmin
  2. Uninstall the Greenplum Database gNet extension package if it is installed.

    The gNet extension package contains the software for the gphdfs protocol. For Greenplum Database 4.3.1 and later releases, the extension is bundled with Greenplum Database. The files for gphdfs are installed in $GPHOME/lib/hadoop.

  3. Perform a smart shutdown of your current Greenplum Database 4.3.x system (there can be no active connections to the database). This example uses the -a option to disable confirmation prompts:
    $ gpstop -a
  4. Run the installer for 4.3.12.0 on the Greenplum Database master host.
    When prompted, choose an installation location in the same base directory as your current installation. For example:
    /usr/local/greenplum-db-4.3.12.0
  5. If your Greenplum Database deployment uses LDAP authentication, manually edit the /usr/local/greenplum-db/greenplum_path.sh file to add the line:
    export LDAPCONF=/etc/openldap/ldap.conf
  6. Edit the environment of the Greenplum Database superuser (gpadmin) and make sure you are sourcing the greenplum_path.sh file for the new installation. For example change the following line in .bashrc or your chosen profile file:
    source /usr/local/greenplum-db-4.3.0.0/greenplum_path.sh

    to:

    source /usr/local/greenplum-db-4.3.12.0/greenplum_path.sh

    Or if you are sourcing a symbolic link (/usr/local/greenplum-db) in your profile files, update the link to point to the newly installed version. For example:

    $ rm /usr/local/greenplum-db
    $ ln -s /usr/local/greenplum-db-4.3.12.0 /usr/local/greenplum-db
  7. Source the environment file you just edited. For example:
    $ source ~/.bashrc
  8. Run the gpseginstall utility to install the 4.3.12.0 binaries on all the segment hosts specified in the hostfile. For example:
    $ gpseginstall -f hostfile
  9. Rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.5.0 or later (for example, any shared library files for user-defined functions in $GPHOME/lib). See your operating system documentation and your system administrator for information about rebuilding and compiling modules such as shared libraries.
  10. Use the Greenplum Database gppkg utility to install Greenplum Database extensions. If you were previously using any Greenplum Database extensions such as pgcrypto, PL/R, PL/Java, PL/Perl, and PostGIS, download the corresponding packages from Pivotal Network, and install using this utility. See the Greenplum Database 4.3 Utility Guide for gppkg usage details.
  11. After all segment hosts have been upgraded, you can log in as the gpadmin user and restart your Greenplum Database system:
    # su - gpadmin
    $ gpstart
  12. If you are upgrading a version of Greenplum Database between 4.3.0 and 4.3.2, check your Greenplum Database for inconsistencies due to an incorrect conversion of 4.2.x append-only tables to 4.3.x append-optimized tables.
    Important: The Greenplum Database system must be started but should not be running any SQL commands while the utility is running.
    1. Run the fix_ao_upgrade.py utility with the option --report. The following is an example.
      $ $GPHOME/share/postgresql/upgrade/fix_ao_upgrade.py --host=mdw --port=5432 --report
    2. If the utility displays a list of inconsistencies, fix them by running the fix_ao_upgrade.py utility without the --report option.
      $ $GPHOME/share/postgresql/upgrade/fix_ao_upgrade.py --host=mdw --port=5432
    3. (optional) Run the fix_ao_upgrade.py utility with the option --report again. No inconsistencies should be reported.
  13. For databases that contain append-optimized tables that were created from Greenplum Database 4.2.x append-only tables, run the fix_visimap_owner.sql script. The script resolves an ownership issue with visimap relations that are associated with append-optimized tables. For example, this command runs the script on the database testdb1.
    $ psql -d testdb1 -f $GPHOME/share/postgresql/upgrade/fix_visimap_owner.sql

    The script displays this prompt that allows you to display changes to the affected relations without performing the operation.

    Dry run, without making any modifications (y/n)?
    • Enter y to list the ownership changes that would be made. The owners of the relations are not changed.
    • Enter n to make the ownership changes and display the changes to relation ownership.
    Note: Pivotal recommends that you run the script during a low-activity period. Heavy workloads do not affect database functionality but might affect performance.
  14. If you are utilizing Data Domain Boost, you have to re-enter your DD Boost credentials after upgrading from Greenplum Database 4.3.x to 4.3.12.0 as follows:
    gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user
      --ddboost-backupdir backup_directory
Note: If you do not reenter your login credentials after an upgrade, your backup will never start because the Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.

fix_visimap_owner.sql Script

The SQL script fix_visimap_owner.sql resolves ownership issues related to visimap relations that are associated with append-optimized tables.

When upgrading from Greenplum Database 4.2.x to 4.3.x, the 4.2.x append-only tables are converted to 4.3 append-optimized tables. When upgrading from 4.2.x to Greenplum Database 4.3.x earlier than 4.3.6.0, the upgrade process incorrectly assigned the owner of visimap relations to gpadmin, not the owner of the associated append-optimized table.

If you are migrating to this release of Greenplum Database from a 4.3.x release earlier than 4.3.6.0, run this SQL script as the gpadmin superuser to fix the incorrect assignment issue for a database.

$GPHOME/share/postgresql/upgrade/fix_visimap_owner.sql

When you run the script, it temporarily creates two functions that update the visimap relations ownership and displays this message that lets you perform a test run without changing ownership.

Dry run, without making any modifications (y/n)?

If you enter y, the script displays the changes that would have been made. The owner of the relation is not changed.

If you enter n, the script changes the owner of the relations and displays the changes that are made.

Before exiting, the script deletes the functions it created.

Note: If you are migrating from Greenplum Database 4.2.x directly to Greenplum Database 4.3.12.0, you do not need to run the fix_visimap_owner.sql script. Also, you can run this script on Greenplum Database 4.3.x releases earlier than 4.3.6.0 to fix the incorrect ownership assignment of visimap relations.

fix_ao_upgrade.py Utility

The fix_ao_upgrade.py utility checks Greenplum Database for an upgrade issue that is caused when upgrading Greenplum Database 4.2.x to a version of Greenplum Database between 4.3.0 and 4.3.2.

The upgrade process incorrectly converted append-only tables that were in the 4.2.x database to append-optimized tables during an upgrade from Greenplum Database 4.2.x to a Greenplum Database 4.3.x release prior to 4.3.2.1. The incorrect conversion causes append-optimized table inconsistencies in the upgraded Greenplum Database system.

Syntax
fix_ao_upgrade.py {-h master_host | --host=master_host}
     {-p master_port | --port=master_port}
     [-u user | --user=user ]
     [--report] [-v | --verbose] [--help]
Options
-r | --report
Report inconsistencies without making any changes.
-h master_host | --host=master_host
Greenplum Database master hostname or IP address.
-p master_port | --port=master_port
Greenplum Database master port.
-u user | --user=user
User name to connect to Greenplum Database. The user must be a Greenplum Database superuser. Default is gpadmin.
-v | --verbose
Verbose output that includes table names.
--help
Show the help message and exit.

If you specify the optional --report option, the utility displays a report of inconsistencies in the Greenplum Database system. No changes to the Greenplum Database system are made. If you specify the --verbose option with --report, the table names that are affected by the inconsistencies are included in the output.

Dropping Orphan Tables on Greenplum Database Segments

If you upgraded to Greenplum Database 4.3.6.0 and a user dropped a table, in some cases, the table would be dropped only on the Greenplum Database master, not on the Greenplum Database segments. This created orphan tables on Greenplum Database segments. This issue occurs only with Greenplum Database 4.3.6.0. However, the orphan tables remain in Greenplum Database after upgrading to 4.3.12.0.

For Greenplum Database 4.3.6.2 and later, the installation contains this Python script to check for and drop orphan tables on segments.
$GPHOME/share/postgresql/upgrade/fix_orphan_segment_tables.py
You can run this script on Greenplum Database 4.3.12.0 to check for and drop orphan tables.
The script performs these operations:
  • Checks for orphan tables on segments and generates a text file that contains a list of the orphan tables.
  • Deletes orphan tables specified in a text file.

You run the script as a Greenplum Database administrator. The script attempts to log in to Greenplum Database as the user who runs the script.

To check all databases in the Greenplum Database instance, run this command on the Greenplum Database master. Specify the port to connect to Greenplum Database.
$GPHOME/share/postgresql/upgrade/fix_orphan_segment_tables.py -p port

To check a single database, specify the option -d database.

The command generates a list of orphan tables in the text file orphan_tables_file_timestamp. You can review the list and, if needed, modify it.

To delete orphan tables on the Greenplum Database segments, run this command on the Greenplum Database master. Specify the port to connect to Greenplum Database and the file containing the orphan tables to delete.
$GPHOME/share/postgresql/upgrade/fix_orphan_segment_tables.py -p port -f orphan_tables_file_timestamp 

The script connects only to the databases required to drop orphan tables.

Note: Pivotal recommends that you run the script during a period of low activity to prevent any issues that might occur due to concurrent drop operations.

Upgrading from 4.3.x to 4.3.12.0 on Pivotal DCA Systems

Upgrading Greenplum Database from 4.3.x to 4.3.12.0 on a Pivotal DCA system involves stopping Greenplum Database, updating the Greenplum Database software binaries, and restarting Greenplum Database. If you are using Greenplum Extension packages, you must install and use Greenplum Database 4.3.5.0 or later extension packages. If you are using custom modules with the extensions, you must also use modules that were built for use with Greenplum Database 4.3.5.0 or later.

Important: Skip this section if you are not installing Greenplum Database 4.3.12.0 on DCA systems. This section is only for installing Greenplum Database 4.3.12.0 on DCA systems.
Note: If you are upgrading from Greenplum Database between 4.3.0 and 4.3.2, run the fix_ao_upgrade.py utility to check Greenplum Database for the upgrade issue and fix the upgrade issue (See step 8). The utility is in this Greenplum Database directory: $GPHOME/share/postgresql/upgrade

For information about the utility, see fix_ao_upgrade.py Utility.

  1. Log in to your Greenplum Database master host as the Greenplum administrative user (gpadmin):
    # su - gpadmin
  2. Download or copy the installer file to the Greenplum Database master host.
  3. Uninstall the Greenplum Database gNet extension package if it is installed. For information about uninstalling a Greenplum Database extension package, see gppkg in the Greenplum Database Utility Guide.

    The gNet extension package contains the software for the gphdfs protocol. For Greenplum Database 4.3.1 and later releases, the extension is bundled with Greenplum Database. The files for gphdfs are installed in $GPHOME/lib/hadoop.

  4. Perform a smart shutdown of your current Greenplum Database 4.3.x system (there can be no active connections to the database). This example uses the -a option to disable confirmation prompts:
    $ gpstop -a
  5. As root, run the Pivotal DCA installer for 4.3.12.0 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster. If necessary, copy hostfile to the directory containing the installer before running the installer.

    This example command runs the installer for Greenplum Database 4.3.12.0 for Red Hat Enterprise Linux 5.x.

    # ./greenplum-db-appliance-4.3.12.0-build-1-RHEL5-x86_64.bin hostfile

    The file hostfile is a text file that lists all hosts in the cluster, one host name per line.

  6. Install Greenplum Database extension packages. For information about installing a Greenplum Database extension package, see gppkg in the Greenplum Database Utility Guide.
    Important: Rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.5.0 or later (for example, any shared library files for user-defined functions in $GPHOME/lib). See your operating system documentation and your system administrator for information about rebuilding and compiling modules such as shared libraries.
  7. After all segment hosts have been upgraded, you can log in as the gpadmin user and restart your Greenplum Database system:
    # su - gpadmin
    $ gpstart
  8. If you are upgrading a version of Greenplum Database between 4.3.0 and 4.3.2, check your Greenplum Database for inconsistencies due to an incorrect conversion of 4.2.x append-only tables to 4.3.x append-optimized tables.
    Important: The Greenplum Database system must be started but should not be running any SQL commands while the utility is running.
    1. Run the fix_ao_upgrade.py utility with the option --report. The following is an example.
      $ $GPHOME/share/postgresql/upgrade/fix_ao_upgrade.py --host=mdw --port=5432 --report
    2. If the utility displays a list of inconsistencies, fix them by running the fix_ao_upgrade.py utility without the --report option.
      $ $GPHOME/share/postgresql/upgrade/fix_ao_upgrade.py --host=mdw --port=5432
    3. (optional) Run the fix_ao_upgrade.py utility with the option --report again. No inconsistencies should be reported.
  9. If you are utilizing Data Domain Boost, you have to re-enter your DD Boost credentials after upgrading from Greenplum Database 4.3.x to 4.3.12.0 as follows:
    gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user
      --ddboost-backupdir backup_directory
Note: If you do not reenter your login credentials after an upgrade, your backup will never start because the Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.

Upgrading from 4.2.x.x to 4.3.12.0

This section describes how to upgrade from Greenplum Database 4.2.x.x to Greenplum Database 4.3.12.0. If you are running a Greenplum Database version prior to 4.2.x.x, see the sections For Users Running Greenplum Database 4.1.x.x, For Users Running Greenplum Database 4.0.x.x, and For Users Running Greenplum Database 3.3.x.x later in these release notes.

Planning Your Upgrade

Before you begin your upgrade, make sure the master and all segments (data directories and filespace) have at least 2GB of free space.
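The free-space requirement above can be verified with standard tools before you begin. This is a minimal sketch, assuming a placeholder path: /tmp stands in for each master and segment data directory and filespace location, and on a multi-host cluster you might wrap the same command in gpssh to run it on every host.

```shell
# Minimal sketch: check that a filesystem has at least 2GB free.
# /tmp is a placeholder; substitute your actual data directory paths.
need_kb=$((2 * 1024 * 1024))                       # 2GB expressed in KB
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')   # available KB on that filesystem
if [ "$avail_kb" -ge "$need_kb" ]; then
  echo "OK: ${avail_kb}KB available"
else
  echo "insufficient space: ${avail_kb}KB available"
fi
```

Repeat the check for every directory involved in the upgrade; the 2GB threshold is the minimum stated above, not a recommendation.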

Prior to upgrading your database, Pivotal recommends that you run a pre-upgrade check to verify your database is healthy.

You can perform a pre-upgrade check by executing the gpmigrator (or gpmigrator_mirror) utility with the --check-only option.

For example:

source $new_gphome/greenplum_path.sh
gpmigrator_mirror --check-only $old_gphome $new_gphome
Note: Performing a pre-upgrade check of your database with the gpmigrator (or gpmigrator_mirror) utility should be done during a database maintenance period. When the utility checks the database catalog, users cannot access the database.
Important: If you intend to use extension packages with Greenplum Database 4.3.5.0 or later, you must install and use Greenplum Database extension packages (gppkg files and contrib modules) that are built for Greenplum Database 4.3.5.0 or later. For custom modules that were used with Greenplum Database 4.3.4.x and earlier, you must rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.5.0 or later.

Migrating a Greenplum Database That Contains Append-Only Tables

The migration process converts append-only tables that are in a Greenplum Database to append-optimized tables. For a database that contains a large number of append-only tables, the conversion to append-optimized tables might take a considerable amount of time. Pivotal supplies a user-defined function that can help estimate the time required to migrate from Greenplum Database 4.2.x to 4.3.x. For information about the user-defined function, see estimate_42_to_43_migrate_time.pdf.

Append-optimized tables were introduced in Greenplum Database 4.3.0. For information about append-optimized tables, see the release notes for Greenplum Database 4.3.0.

Upgrade Procedure

This section divides the upgrade into the following phases: pre-upgrade preparation, software installation, upgrade execution, and post-upgrade tasks.

We have also provided you with an Upgrade Checklist that summarizes this procedure.

Important: Carefully evaluate each section and perform all required and conditional steps. Failing to perform any of these steps can result in an aborted upgrade, placing your system in an unusable or even unrecoverable state.
Pre-Upgrade Preparation (on your 4.2.x system)

Perform these steps on your current 4.2.x Greenplum Database system. This procedure is performed from your Greenplum master host and should be executed by the Greenplum superuser (gpadmin).

  1. Log in to the Greenplum Database master as the gpadmin user:
    # su - gpadmin
  2. (optional) Vacuum all databases prior to upgrade. For example:
    $ vacuumdb database_name
  3. (optional) Clean out old server log files from your master and segment data directories. For example, to remove log files from 2011 from your segment hosts:
    $ gpssh -f seg_host_file -e 'rm /gpdata/*/gp*/pg_log/gpdb-2011-*.csv'

    Running VACUUM and cleaning out old log files is not required, but doing so reduces the size of the Greenplum Database files to be backed up and migrated.

  4. Run gpstate to check for failed segments.
    $ gpstate
  5. If you have failed segments, you must recover them using gprecoverseg before you can upgrade.
    $ gprecoverseg

    Note: It might be necessary to restart the database if the preferred role does not match the current role; for example, if a primary segment is acting as a mirror segment or a mirror segment is acting as a primary segment.

  6. Copy or preserve any additional folders or files (such as backup folders) that you have added in the Greenplum data directories or $GPHOME directory. Only files or folders strictly related to Greenplum Database operations are preserved by the migration utility.
Install the Greenplum Database 4.3 Software Binaries (non-DCA)
Important: If you are installing Greenplum Database 4.3 on a Pivotal DCA system, see Install the Greenplum Database 4.3 Software Binaries on DCA Systems. This section is for installing Greenplum Database 4.3 on non-DCA systems.
  1. Download or copy the installer file to the Greenplum Database master host.
  2. Unzip the installer file. For example:
    # unzip greenplum-db-4.3.12.0-PLATFORM.zip
  3. Launch the installer using bash. For example:
    # /bin/bash greenplum-db-4.3.12.0-PLATFORM.bin
  4. The installer will prompt you to accept the Greenplum Database license agreement. Type yes to accept the license agreement.
  5. The installer will prompt you to provide an installation path. Press ENTER to accept the default install path (for example: /usr/local/greenplum-db-4.3.12.0), or enter an absolute path to an install location. You must have write permissions to the location you specify.
  6. The installer installs the Greenplum Database software and creates a greenplum-db symbolic link one directory level above your version-specific Greenplum installation directory. The symbolic link is used to facilitate patch maintenance and upgrades between versions. The installed location is referred to as $GPHOME.
  7. Source the path file from your new 4.3.12.0 installation. This example changes to the gpadmin user before sourcing the file:
    # su - gpadmin
    $ source /usr/local/greenplum-db-4.3.12.0/greenplum_path.sh
  8. Run the gpseginstall utility to install the 4.3.12.0 binaries on all the segment hosts specified in the hostfile. For example:
    $ gpseginstall -f hostfile
Install the Greenplum Database 4.3 Software Binaries on DCA Systems
Important: Skip this section if you are not installing Greenplum Database 4.3 on DCA systems. This section is only for installing Greenplum Database 4.3 on DCA systems.
  1. Download or copy the installer file to the Greenplum Database master host.
  2. As root, run the Pivotal DCA installer for 4.3.12.0 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster. If necessary, copy hostfile to the directory containing the installer before running the installer.

    This example command runs the installer for Greenplum Database 4.3.12.0.

    # ./greenplum-db-appliance-4.3.12.0-build-1-RHEL5-x86_64.bin hostfile

    The file hostfile is a text file that lists all hosts in the cluster, one host name per line.

Upgrade Execution

During upgrade, all client connections to the master will be locked out. Inform all database users of the upgrade and lockout time frame. From this point onward, users should not be allowed on the system until the upgrade is complete.

  1. As gpadmin, source the path file from your old 4.2.x.x installation. For example:
    $ source /usr/local/greenplum-db-4.2.8.1/greenplum_path.sh

    On a DCA system, the path to the file might be similar to /usr/local/GP-4.2.8.1/greenplum_path.sh depending on the installed version.

  2. (optional but strongly recommended) Back up all databases in your Greenplum Database system using gpcrondump. See the Greenplum Database Administrator Guide for more information on how to do backups using gpcrondump. Make sure to secure your backup files in a location outside of your Greenplum data directories.
  3. If your system has a standby master host configured, remove the standby master from your system configuration. For example:
    $ gpinitstandby -r
  4. Perform a clean shutdown of your current Greenplum Database 4.2.x.x system. This example uses the -a option to disable confirmation prompts:
    $ gpstop -a
  5. Source the path file from your new 4.3.12.0 installation. For example:
    $ source /usr/local/greenplum-db-4.3.12.0/greenplum_path.sh

    On a DCA system, the path to the file would be similar to /usr/local/GP-4.3.12.0/greenplum_path.sh.

  6. Update the Greenplum Database environment so it is referencing your new 4.3.12.0 installation.
    1. Update the greenplum-db symbolic link on the master and standby master to point to the new 4.3.12.0 installation directory. For example (as root):
      # rm -rf /usr/local/greenplum-db
      # ln -s /usr/local/greenplum-db-4.3.12.0 /usr/local/greenplum-db
      # chown -R gpadmin /usr/local/greenplum-db
      On a DCA system, the ln command would specify the install directory created by the DCA installer. For example:
      # ln -s /usr/local/GP-4.3.12.0 /usr/local/greenplum-db
    2. Using gpssh, also update the greenplum-db symbolic link on all of your segment hosts. For example (as root):
      # gpssh -f segment_hosts_file
      => rm -rf /usr/local/greenplum-db
      => ln -s /usr/local/greenplum-db-4.3.12.0 /usr/local/greenplum-db
      => chown -R gpadmin /usr/local/greenplum-db
      => exit
      On a DCA system, the ln command would specify the install directory created by the DCA installer. For example:
      => ln -s /usr/local/GP-4.3.12.0 /usr/local/greenplum-db
  7. (optional but strongly recommended) Prior to running the migration, perform a pre-upgrade check to verify that your database is healthy by executing the 4.3.12.0 version of the migration utility with the --check-only option. Run the command as gpadmin. This example runs the gpmigrator_mirror utility:
    $ gpmigrator_mirror --check-only 
       /usr/local/greenplum-db-4.2.6.3 
       /usr/local/greenplum-db-4.3.12.0

    On a DCA system, the old GPHOME location might be similar to /usr/local/GP-4.2.8.1 (depending on the old installed version) and the new GPHOME location would be similar to /usr/local/GP-4.3.12.0.

  8. As gpadmin, run the 4.3.12.0 version of the migration utility specifying your old and new GPHOME locations. If your system has mirrors, use gpmigrator_mirror. If your system does not have mirrors, use gpmigrator. For example on a system with mirrors:
    $ gpmigrator_mirror /usr/local/greenplum-db-4.2.6.3 
       /usr/local/greenplum-db-4.3.12.0

    On a DCA system, the old GPHOME location might be similar to /usr/local/GP-4.2.8.1 (depending on the old installed version) and the new GPHOME location would be similar to /usr/local/GP-4.3.12.0.

    Note: If the migration does not complete successfully, contact Customer Support (see Troubleshooting a Failed Upgrade).
  9. The migration can take a while to complete. After the migration utility has completed successfully, the Greenplum Database 4.3.12.0 system will be running and accepting connections.
    Note: After the migration utility has completed, the resynchronization of the mirror segments with the primary segments continues. Even though the system is running, the mirrors are not active until the resynchronization is complete.
Post-Upgrade (on your 4.3.12.0 system)
  1. If your system had a standby master host configured, reinitialize your standby master using gpinitstandby:
    $ gpinitstandby -s standby_hostname
  2. If your system uses external tables with gpfdist, stop all gpfdist processes on your ETL servers and reinstall gpfdist using the compatible Greenplum Database 4.3.x Load Tools package. Application Packages are available at Pivotal Network. For information about gpfdist, see the Greenplum Database 4.3 Administrator Guide.
  3. Rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.5.0 or later (for example, any shared library files for user-defined functions in $GPHOME/lib). See your operating system documentation and your system administrator for information about rebuilding and compiling modules such as shared libraries.
  4. Use the Greenplum Database gppkg utility to install Greenplum Database extensions. If you were previously using any Greenplum Database extensions such as pgcrypto, PL/R, PL/Java, PL/Perl, and PostGIS, download the corresponding packages from Pivotal Network, and install using this utility. See the Greenplum Database Utility Guide for gppkg usage details.
  5. If you want to utilize the Greenplum Command Center management tool, install the latest Command Center Console and update your environment variable to point to the latest Command Center binaries (source the gpperfmon_path.sh file from your new installation). See the Greenplum Command Center documentation for information about installing and configuring Greenplum Command Center.
    Note: The Greenplum Command Center management tool replaces Greenplum Performance Monitor.

    Command Center Console packages are available from Pivotal Network.

  6. (optional) Check the status of Greenplum Database. For example, you can run the Greenplum Database utility gpstate to display status information of a running Greenplum Database.
    $ gpstate
  7. Inform all database users of the completed upgrade. Tell users to update their environment to source the Greenplum Database 4.3.12.0 installation (if necessary).

Upgrade Checklist

This checklist provides a quick overview of all the steps required for an upgrade from 4.2.x.x to 4.3.12.0. Detailed upgrade instructions are provided in Upgrading from 4.2.x.x to 4.3.12.0.

Pre-Upgrade Preparation (on your current system)

* 4.2.x.x system is up and available
Log in to your master host as the gpadmin user (your Greenplum superuser).
(Optional) Run VACUUM on all databases.
(Optional) Remove old server log files from pg_log in your master and segment data directories.
Check for and recover any failed segments (gpstate, gprecoverseg).
Copy or preserve any additional folders or files (such as backup folders).
Install the Greenplum Database 4.3 binaries on all Greenplum hosts.
Inform all database users of the upgrade and lockout time frame.

Upgrade Execution

* The system will be locked down to all user activity during the upgrade process
Back up your current databases.
Remove the standby master (gpinitstandby -r).
Do a clean shutdown of your current system (gpstop).
Update your environment to source the new Greenplum Database 4.3.x installation.
Run the upgrade utility (gpmigrator_mirror if you have mirrors, gpmigrator if you do not).
After the upgrade process finishes successfully, your 4.3.x system will be up and running.

Post-Upgrade (on your 4.3 system)

* The 4.3.x.x system is up
Reinitialize your standby master host (gpinitstandby).
Upgrade gpfdist on all of your ETL hosts.
Rebuild any custom modules against your 4.3.x installation.
Download and install any Greenplum Database extensions.
(Optional) Install the latest Greenplum Command Center and update your environment to point to the latest Command Center binaries.
Inform all database users of the completed upgrade.

For Users Running Greenplum Database 4.1.x.x

Users on a release prior to 4.2.x.x cannot upgrade directly to 4.3.12.0.

  1. Upgrade from your current release to 4.2.x.x (follow the upgrade instructions in the latest Greenplum Database 4.2.x.x release notes available at Pivotal Documentation).
  2. Follow the upgrade instructions in these release notes for Upgrading from 4.2.x.x to 4.3.12.0.

For Users Running Greenplum Database 4.0.x.x

Users on a release prior to 4.1.x.x cannot upgrade directly to 4.3.12.0.

  1. Upgrade from your current release to 4.1.x.x (follow the upgrade instructions in the latest Greenplum Database 4.1.x.x release notes available on Dell EMC Support Zone).
  2. Upgrade from the current release to 4.2.x.x (follow the upgrade instructions in the latest Greenplum Database 4.2.x.x release notes available at Pivotal Documentation).
  3. Follow the upgrade instructions in these release notes for Upgrading from 4.2.x.x to 4.3.12.0.

For Users Running Greenplum Database 3.3.x.x

Users on a release prior to 4.0.x.x cannot upgrade directly to 4.3.12.0.

  1. Upgrade from your current release to the latest 4.0.x.x release (follow the upgrade instructions in the latest Greenplum Database 4.0.x.x release notes available on Dell EMC Support Zone).
  2. Upgrade the 4.0.x.x release to the latest 4.1.x.x release (follow the upgrade instructions in the latest Greenplum Database 4.1.x.x release notes available on Dell EMC Support Zone).
  3. Upgrade from the 4.1.x.x release to the latest 4.2.x.x release (follow the upgrade instructions in the latest Greenplum Database 4.2.x.x release notes available at Pivotal Documentation).
  4. Follow the upgrade instructions in these release notes for Upgrading from 4.2.x.x to 4.3.12.0.

Troubleshooting a Failed Upgrade

If you experience issues during the migration process and have active entitlements for Greenplum Database that were purchased through Pivotal, contact Pivotal Support. Information for contacting Pivotal Support is at https://support.pivotal.io.

Be prepared to provide the following information:

  • A completed Upgrade Procedure.
  • Log output from gpmigrator_mirror and gpcheckcat (located in ~/gpAdminLogs).

Greenplum Database Tools Compatibility

Client Tools

Greenplum releases a number of client tool packages on various platforms that can be used to connect to Greenplum Database and the Greenplum Command Center management tool. The following table describes the compatibility of these packages with this Greenplum Database release.

Tool packages are available from Pivotal Network.

Table 5. Greenplum Database Tools Compatibility
Client Package | Description of Contents | Client Version | Server Versions
Greenplum Clients | Greenplum Database Command-Line Interface (psql) | 4.3 | 4.3
Greenplum Connectivity | Standard PostgreSQL Database Drivers (ODBC, JDBC - see note 1); PostgreSQL Client C API (libpq) | 4.3 | 4.3
Greenplum Loaders | Greenplum Database Parallel Data Loading Tools (gpfdist, gpload) | 4.3 | 4.3
Greenplum Command Center | Greenplum Database management tool | 1.3.0.2 | 4.3
Note 1: The JDBC drivers that are shipped with the Greenplum Connectivity Tools are official PostgreSQL JDBC drivers built by the PostgreSQL JDBC Driver team (https://jdbc.postgresql.org).

The Greenplum Database Client Tools, Load Tools, and Connectivity Tools are supported on the following platforms:

  • AIX 5.3L (32-bit)
  • AIX 5.3L and AIX 6.1 (64-bit)
  • Apple OS X on Intel processors (32-bit)
  • HP-UX 11i v3 (B.11.31) Intel Itanium (Client and Load Tools only)
  • Red Hat Enterprise Linux i386 (RHEL 5)
  • Red Hat Enterprise Linux x86_64 6.x (RHEL 6)
  • Red Hat Enterprise Linux x86_64 (RHEL 5)
  • SuSE Linux Enterprise Server x86_64 SLES 11
  • Solaris 10 SPARC32
  • Solaris 10 SPARC64
  • Solaris 10 i386
  • Solaris 10 x86_64
  • Windows 7 (32-bit and 64-bit)
  • Windows Server 2003 R2 (32-bit and 64-bit)
  • Windows Server 2008 R2 (64-bit)
  • Windows XP (32-bit and 64-bit)
Important: Support for SuSE Linux Enterprise Server 64-bit 10 SP4 has been dropped for Greenplum Database 4.3.12.0.

Greenplum Database Extensions Compatibility

Greenplum Database delivers an agile, extensible platform for in-database analytics, leveraging the system’s massively parallel architecture. Greenplum Database enables turn-key in-database analytics with Greenplum extensions.

You can download Greenplum extension packages from Pivotal Network and install them using the Greenplum Package Manager (gppkg). See the Greenplum Database Utility Guide for details.

Note that Greenplum Package Manager installation files for extension packages may be released outside of standard Greenplum Database release cycles.

The following table provides information about the compatibility of the Greenplum Database Extensions and their components with this Greenplum Database release.

Note: The PL/Python database extension is already included with the standard Greenplum Database distribution.

Pivotal supplies separate PL/Perl extension packages for Red Hat Enterprise Linux 7.x, 6.x and 5.x. Ensure you install the correct package for your operating system.

Table 6. Greenplum Database Extensions Compatibility
Greenplum Database Extension | Component Name | Component Version
PostGIS 2.0.1 for Greenplum Database 4.3.x.x | PostGIS | 2.0.3
 | Proj | 4.8.0
 | Geos | 3.3.8
PL/Java 1.3 for Greenplum Database 4.3.x.x | PL/Java | Based on 1.4.0
 | Java JDK | 1.6.0_26 Update 31
PL/R 2.2 for Greenplum Database 4.3.x.x | PL/R | 8.3.0.16
 | R | 3.1.1
PL/R 2.1 for Greenplum Database 4.3.x.x | PL/R | 8.3.0.15
 | R | 3.1.0
PL/R 1.0 for Greenplum Database 4.3.x.x | PL/R | 8.3.0.12
 | R | 2.13.0
PL/Perl 1.2 for Greenplum Database 4.3.x.x | PL/Perl | Based on PostgreSQL 9.1
 | Perl | 5.16.3 on RHEL 7.x; 5.12.4 on RHEL 6.x; 5.8.8 on RHEL 5.x
PL/Perl 1.1 for Greenplum Database | PL/Perl | Based on PostgreSQL 9.1
 | Perl | 5.12.4 on RHEL 5.x
PL/Perl 1.0 for Greenplum Database | PL/Perl | Based on PostgreSQL 9.1
 | Perl | 5.12.4 on RHEL 5.x
Pgcrypto 1.2 for Greenplum Database 4.3.x.x | Pgcrypto | Based on PostgreSQL 8.3
MADlib 1.x for Greenplum Database 4.3.x.x | MADlib | Based on MADlib version 1.x (1.10, 1.9.1, 1.9, 1.8)
Note: Greenplum Database 4.3.12.0 does not support the PostGIS 1.0 extension package.

Pivotal recommends that you upgrade to MADlib 1.10 on Greenplum Database 4.3.10.0 and later releases. If you do not upgrade MADlib, the MADlib madpack utility will not function on Greenplum Database. The MADlib analytics functionality will continue to work. See "Greenplum MADlib Extension for Analytics", in the Greenplum Database Reference Guide.

Greenplum Database 4.3.12.0 supports these minimum Greenplum Database extension package versions.

Table 7. Greenplum Database 4.3.12.0 Package Version
Greenplum Database Extension | Minimum Package Version
PostGIS | 2.0.1 and release gpdb4.3orca
PL/Java | 1.3 and release gpdb4.3orca
PL/Perl | 1.2 and release gpdb4.3orca
PL/R | 2.1 and release gpdb4.3orca
Pgcrypto | 1.2 and release gpdb4.3orca
MADlib | 1.8 and release gpdb4.3orca
Note: Extension packages for Greenplum Database 4.3.4.x and earlier are not compatible with Greenplum Database 4.3.5.0 and later due to the introduction of GPORCA. Also, extension packages for Greenplum Database 4.3.5.0 and later are not compatible with Greenplum Database 4.3.4.x and earlier.

To use extension packages with Greenplum Database 4.3.12.0, you must install and use Greenplum Database extension packages (gppkg files and contrib modules) that are built for Greenplum Database 4.3.5.0 or later. For custom modules that were used with Greenplum Database 4.3.4.x and earlier, you must rebuild any modules that were built against the provided C language header files for use with Greenplum Database 4.3.12.0.
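As a sketch of that workflow, the commands below install a rebuilt package with gppkg and then list the installed packages. This is an administrative fragment that requires a running Greenplum Database cluster; the package file name is illustrative only, and the exact options should be verified against the gppkg reference in the Greenplum Database Utility Guide.

```shell
# Administrative sketch; requires a running Greenplum Database 4.3.12.0 system.
# Install an extension package built for Greenplum Database 4.3.5.0 or later
# (the file name below is an illustrative example, not a real download):
gppkg -i pgcrypto-1.2_gpdb4.3orca-rhel5-x86_64.gppkg

# List the packages installed on the cluster:
gppkg -q --all
```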

Package File Naming Convention

For Greenplum Database 4.3, this is the package file naming format.

pkgname-ver_pvpkg-version_gpdbrel-OS-version-arch.gppkg

This example is the package name for a postGIS package.

postgis-ossv2.0.3_pv2.0.1_gpdb4.3-rhel5-x86_64.gppkg

pkgname-ver - The package name and optional version of the software that was used to create the package extension. If the package is based on open source software, the version has the format ossvversion, where version is the version of the open source software that the package is based on. For the postGIS package, ossv2.0.3 specifies that the package is based on postGIS version 2.0.3.

pvpkg-version - The version of the Greenplum Database package. For the postGIS package, pv2.0.1 specifies that the Greenplum Database package version is 2.0.1.

gpdbrel-OS-version-arch - The compatible Greenplum Database release. For the postGIS package, gpdb4.3-rhel5-x86_64 specifies that the package is compatible with Greenplum Database 4.3 on Red Hat Enterprise Linux version 5.x, x86 64-bit architecture.
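The naming convention above can be decomposed mechanically. This shell sketch splits the example package name from this section into its three parts, using the _pv separator and the following underscore:

```shell
# Split a gppkg file name into pkgname-ver, pvpkg-version, and gpdbrel-OS-version-arch.
pkg="postgis-ossv2.0.3_pv2.0.1_gpdb4.3-rhel5-x86_64.gppkg"
base="${pkg%.gppkg}"            # strip the .gppkg extension
name_ver="${base%%_pv*}"        # postgis-ossv2.0.3
pv_rest="${base#*_pv}"          # 2.0.1_gpdb4.3-rhel5-x86_64
pkg_version="${pv_rest%%_*}"    # 2.0.1
gpdbrel="${pv_rest#*_}"         # gpdb4.3-rhel5-x86_64
echo "$name_ver $pkg_version $gpdbrel"
```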

Hadoop Distribution Compatibility

This table lists the supported Hadoop distributions:

Table 8. Supported Hadoop Distributions
Hadoop Distribution | Version | gp_hadoop_target_version
Pivotal HD | Pivotal HD 3.0, 3.0.1 | gphd-3.0
Pivotal HD | Pivotal HD 2.0, 2.1; Pivotal HD 1.0 (see note 1) | gphd-2.0
Greenplum HD | Greenplum HD 1.2 | gphd-1.2
Greenplum HD | Greenplum HD 1.1 | gphd-1.1 (default)
Cloudera | CDH 5.2, 5.3, 5.4.x - 5.8.x | cdh5
Cloudera | CDH 5.0, 5.1 | cdh4.1
Cloudera | CDH 4.1 (see note 2) - CDH 4.7 | cdh4.1
Hortonworks Data Platform | HDP 2.1, 2.2, 2.3, 2.4, 2.5 | hdp2
MapR (see note 3) | MapR 4.x, MapR 5.x | gpmr-1.2
MapR | MapR 1.x, 2.x, 3.x | gpmr-1.0
Apache Hadoop | 2.x | hadoop2
Notes:
  1. Pivotal HD 1.0 is a distribution of Hadoop 2.0
  2. For CDH 4.1, only CDH4 with MRv1 is supported
  3. MapR requires the MapR client. For MapR 5.x, only TEXT and CSV are supported in the FORMAT clause of the CREATE EXTERNAL TABLE command.
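The gp_hadoop_target_version value from the table above is set as a server configuration parameter. The following is a hedged sketch, not a definitive procedure: the value cdh5 is one entry from Table 8, the commands require a running cluster, and the exact gpconfig syntax should be verified in the Greenplum Database Utility Guide for your release.

```shell
# Administrative sketch; requires a running Greenplum Database cluster.
# Set the Hadoop target version used for Hadoop external tables:
gpconfig -c gp_hadoop_target_version -v "'cdh5'"

# Reload the server configuration so the change takes effect:
gpstop -u
```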

Greenplum Database 4.3.12.0 Documentation

For the latest Greenplum Database documentation, go to Pivotal Documentation. Greenplum Database documentation is provided in HTML and PDF formats.

Table 9. Greenplum Database Documentation
Title | Revision
Greenplum Database 4.3.12.0 Release Notes | A01
Greenplum Database 4.3 Installation Guide | A18
Greenplum Database 4.3 Administrator Guide | A25
Greenplum Database 4.3 Reference Guide | A26
Greenplum Database 4.3 Utility Guide | A25
Greenplum Database 4.3 Client Tools for UNIX | A09
Greenplum Database 4.3 Client Tools for Windows | A07
Greenplum Database 4.3 Connectivity Tools for UNIX | A08
Greenplum Database 4.3 Connectivity Tools for Windows | A07
Greenplum Database 4.3 Load Tools for UNIX | A11
Greenplum Database 4.3 Load Tools for Windows | A11
Greenplum Command Center Administrator Guide | *
Greenplum Workload Manager User Guide | *


Note: * HTML format only. Documentation is at gpcc.docs.pivotal.io.