Sets the amount of memory (in MB) that all postgres processes of an active segment instance can consume. If a query causes this limit to be exceeded, memory is not allocated and the query fails. Note that this is a local parameter and must be set for every segment in the system (primaries and mirrors).
To prevent over-allocation of memory, use the following calculations to estimate a safe gp_vmem_protect_limit value. First, calculate gp_vmem, the memory available to Greenplum Database on a host:
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM)) / 1.7
where SWAP is the host's swap space in GB and RAM is the host's RAM in GB.
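As a sketch, the gp_vmem formula can be evaluated directly. The SWAP and RAM values below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical host sizing: 64 GB RAM and 64 GB swap (illustrative values only).
swap_gb = 64.0  # host swap space in GB
ram_gb = 64.0   # host RAM in GB

# gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
gp_vmem_gb = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7
print(round(gp_vmem_gb, 2))  # 69.0 GB for this example host
```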
Next, calculate the max_acting_primary_segments. This is the maximum number of primary segments that can be running on a host when mirror segments are activated due to a failure. With mirrors arranged in a 4-host block with 8 primary segments per host, for example, a single segment host failure would activate two or three mirror segments on each remaining host in the failed host's block. The max_acting_primary_segments value for this configuration is 11 (8 primary segments plus 3 mirrors activated on failure).
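As a sketch of the worst-case count above, the calculation can assume the failed host's mirrors are spread evenly across the surviving hosts in its block (an assumption; actual placement depends on the mirror configuration):

```python
import math

hosts_per_block = 4     # hosts in the mirroring block
primaries_per_host = 8  # primary segments per host
failed_hosts = 1        # a single host failure within the block

# Mirrors of the failed host's primaries activate on the block's surviving hosts;
# the worst-hit host picks up the ceiling of an even split (assumption).
extra_mirrors = math.ceil(primaries_per_host * failed_hosts / (hosts_per_block - failed_hosts))
max_acting_primary_segments = primaries_per_host + extra_mirrors
print(max_acting_primary_segments)  # 11, matching the example
```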
This is the calculation for gp_vmem_protect_limit. Convert the value to MB.
gp_vmem_protect_limit = gp_vmem / max_acting_primary_segments
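Putting the steps together, a small sketch (using the hypothetical 64 GB RAM / 64 GB swap host and the max_acting_primary_segments value of 11 from the example above) computes the limit in MB:

```python
swap_gb = 64.0  # hypothetical host swap space in GB
ram_gb = 64.0   # hypothetical host RAM in GB
max_acting_primary_segments = 11  # 8 primaries + 3 activated mirrors (example above)

gp_vmem_gb = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb)) / 1.7
# Convert GB to MB before dividing among the acting primary segments.
gp_vmem_protect_limit_mb = int(gp_vmem_gb * 1024 / max_acting_primary_segments)
print(gp_vmem_protect_limit_mb)  # 6423 MB for this example
```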
For scenarios where a large number of workfiles are generated, this is the calculation for gp_vmem that accounts for the workfiles.
gp_vmem = ((SWAP + RAM) – (7.5GB + 0.05 * RAM - (300KB * total_#_workfiles))) / 1.7
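The workfile-adjusted formula can be evaluated the same way; the host sizing and workfile count below are hypothetical, and the 300KB term is converted to GB so all terms share one unit:

```python
swap_gb = 64.0            # hypothetical host swap space in GB
ram_gb = 64.0             # hypothetical host RAM in GB
total_workfiles = 10_000  # hypothetical workfile count

workfile_gb = (300 * total_workfiles) / (1024 ** 2)  # 300KB per workfile, in GB
gp_vmem_gb = ((swap_gb + ram_gb) - (7.5 + 0.05 * ram_gb - workfile_gb)) / 1.7
print(round(gp_vmem_gb, 2))  # 70.68 GB for this example
```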
For information about monitoring and managing workfile usage, see the Greenplum Database Administrator Guide.
Based on the gp_vmem value, you can calculate the value for the vm.overcommit_ratio operating system kernel parameter:
vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM
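The formula above yields a fraction of RAM; the vm.overcommit_ratio sysctl itself is an integer percentage, so the result is multiplied by 100 in this sketch. The gp_vmem value is carried over from the hypothetical host used earlier:

```python
ram_gb = 64.0      # hypothetical host RAM in GB
gp_vmem_gb = 69.0  # gp_vmem computed for the hypothetical host above

# vm.overcommit_ratio = (RAM - (0.026 * gp_vmem)) / RAM
ratio = (ram_gb - (0.026 * gp_vmem_gb)) / ram_gb
vm_overcommit_ratio = int(round(ratio * 100))  # the sysctl takes a percentage
print(vm_overcommit_ratio)  # 97 for this example
```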
For information about the kernel parameter, see the Greenplum Database Installation Guide.
| Value Range | Default | Set Classifications |