Sets the amount of memory (in MB) that all postgres processes of an active segment instance can consume. If a query causes this limit to be exceeded, memory is not allocated and the query fails.

Note that this is a local parameter and must be set for every segment in the system (primary and mirrors).

To prevent over-allocation of memory, this calculation estimates a safe gp_vmem_protect_limit value:

(SWAP + (RAM * vm.overcommit_ratio / 100)) / max_number_segments_per_server_with_mirror_failure

The max_number_segments_per_server_with_mirror_failure factor is the maximum number of primary segments that can be running on a host when mirror segments are activated due to a failure. With mirrors arranged in a 4-host block with 8 primary segments per host, for example, a single segment host failure would activate two or three mirror segments on each remaining host in the failed host's block. The max_number_segments_per_server_with_mirror_failure value for this configuration is 11 (8 primary segments plus 3 mirrors activated on failure).
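The worst-case count described above can be sketched as follows, assuming the failed host's mirrors are spread as evenly as possible across the other hosts in its block; the function name and arguments are illustrative, not Greenplum settings:

```python
import math

def max_segments_with_failure(primaries_per_host, hosts_per_block):
    # A failed host's mirrors are distributed across the remaining hosts
    # in its block, so the worst-loaded host activates
    # ceil(primaries / (hosts_in_block - 1)) mirror segments.
    activated = math.ceil(primaries_per_host / (hosts_per_block - 1))
    return primaries_per_host + activated

# 4-host block, 8 primary segments per host:
print(max_segments_with_failure(8, 4))  # 11 (8 primaries + 3 activated mirrors)
```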

For example, on a segment host with 256GB physical RAM, 32GB of swap space, vm.overcommit_ratio set at 75, and mirrors arranged in a 4-host block configuration with 8 primary segments and 8 mirror segments per host, the calculation is:

(32 + (256 * 75 / 100)) / 11 = 20.36GB = 20360MB
Note: The default value for the kernel parameter vm.overcommit_ratio in Red Hat Enterprise Linux is 50.
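The calculation above can be sketched in a few lines; the function and variable names are illustrative, and 1GB is taken as 1000MB, as in the worked example:

```python
# Hedged sketch of the safe gp_vmem_protect_limit estimate:
# (SWAP + RAM * overcommit_ratio/100) / max segments, converted to MB.
def safe_gp_vmem_protect_limit_mb(ram_gb, swap_gb, overcommit_ratio,
                                  max_segments_with_failure):
    usable_gb = swap_gb + ram_gb * overcommit_ratio / 100
    return round(usable_gb / max_segments_with_failure * 1000)

# Worked example: 256GB RAM, 32GB swap, vm.overcommit_ratio = 75,
# up to 11 segments per host (8 primaries + 3 activated mirrors).
print(safe_gp_vmem_protect_limit_mb(256, 32, 75, 11))  # ≈ 20364
```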
Value Range    Default    Set Classifications
-----------    -------    -------------------
integer        8192       local