gp_vmem_protect_limit

Sets the amount of memory (in MB) that all postgres processes of an active segment instance can consume. If a query causes this limit to be exceeded, memory is not allocated and the query fails. Note that this is a local parameter and must be set for every segment in the system (primaries and mirrors). When setting the parameter value, specify only the numeric value; for example, to specify 4096MB, use the value 4096. Do not add the units MB to the value.
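One common way to apply the value on every segment is the gpconfig utility, for example gpconfig -c gp_vmem_protect_limit -v 4096, followed by a restart, since the parameter's set classification is restart.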

To prevent over-allocation of memory, the following calculations estimate a safe gp_vmem_protect_limit value.

First, calculate gp_vmem, the Greenplum Database memory available on a host:

gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7

where SWAP is the host's swap space and RAM is the host's RAM, both in GB.
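As a worked example, this Python sketch applies the formula to a hypothetical host with 256GB of RAM and 64GB of swap; both sizes are assumptions chosen only for illustration.

RAM_GB = 256.0    # assumed host RAM, in GB
SWAP_GB = 64.0    # assumed host swap space, in GB

# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
gp_vmem_gb = ((SWAP_GB + RAM_GB) - (7.5 + 0.05 * RAM_GB)) / 1.7
print(f"gp_vmem = {gp_vmem_gb:.1f} GB")    # gp_vmem = 176.3 GB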

Next, calculate the max_acting_primary_segments. This is the maximum number of primary segments that can be running on a host when mirror segments are activated due to a failure. With mirrors arranged in a 4-host block with 8 primary segments per host, for example, a single segment host failure would activate two or three mirror segments on each remaining host in the failed host's block. The max_acting_primary_segments value for this configuration is 11 (8 primary segments plus 3 mirrors activated on failure).
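The worst case for a configuration like this can be derived with a short calculation, assuming the failed host's mirrors are spread evenly across the remaining hosts in its block (an illustration-only assumption; actual placement depends on the mirroring configuration):

import math

hosts_per_block = 4        # from the example above
primaries_per_host = 8     # from the example above

# If one host fails, its primaries' mirrors activate on the block's
# remaining hosts; the busiest host picks up ceil(8 / 3) = 3 of them.
activated_mirrors = math.ceil(primaries_per_host / (hosts_per_block - 1))
max_acting_primary_segments = primaries_per_host + activated_mirrors
print(max_acting_primary_segments)    # 11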

This is the calculation for gp_vmem_protect_limit. The value should be converted to MB.

gp_vmem_protect_limit = gp_vmem / max_acting_primary_segments
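Continuing the hypothetical example (gp_vmem of roughly 176.3GB and max_acting_primary_segments of 11), the conversion to MB looks like this:

gp_vmem_gb = 176.3                  # from the earlier sketch (assumed host)
max_acting_primary_segments = 11    # from the 4-host block example

# Divide, then convert GB to MB, since the parameter takes a number of MBs.
gp_vmem_protect_limit_mb = gp_vmem_gb / max_acting_primary_segments * 1024
print(round(gp_vmem_protect_limit_mb))    # 16412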

For scenarios where a large number of workfiles are generated, this is the calculation for gp_vmem that accounts for the workfiles.

gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM - (300KB * total_#_workfiles))) / 1.7
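This sketch transcribes the formula above verbatim for the same hypothetical host, with an assumed count of 100,000 workfiles; the workfile count is an illustration-only assumption.

RAM_GB = 256.0             # assumed host RAM, in GB
SWAP_GB = 64.0             # assumed host swap space, in GB
total_workfiles = 100_000  # assumed total_#_workfiles, for illustration

# Express 300KB per workfile in GB so the units match the other terms.
workfiles_gb = 300.0 * total_workfiles / (1024 * 1024)

gp_vmem_gb = ((SWAP_GB + RAM_GB) - (7.5 + 0.05 * RAM_GB - workfiles_gb)) / 1.7
print(f"gp_vmem = {gp_vmem_gb:.1f} GB")    # gp_vmem = 193.1 GB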

For information about monitoring and managing workfile usage, see the Greenplum Database Administrator Guide.

Based on the gp_vmem value, you can calculate the value for the vm.overcommit_ratio operating system kernel parameter. This parameter is set when you configure each Greenplum Database host.

vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM

Note: The default value for the kernel parameter vm.overcommit_ratio in Red Hat Enterprise Linux is 50.
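For the same hypothetical host, the calculation looks like this; since the kernel parameter is expressed as a percentage (the Red Hat default of 50 means 50%), the resulting fraction is scaled by 100 here:

RAM_GB = 256.0       # assumed host RAM, in GB
gp_vmem_gb = 176.3   # from the earlier sketch (assumed host)

overcommit_fraction = (RAM_GB - (0.026 * gp_vmem_gb)) / RAM_GB
print(round(overcommit_fraction * 100))    # 98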

For information about the kernel parameter, see the Greenplum Database Installation Guide.

Value Range  Default  Set Classifications
integer      8192     local, system, restart