How will JMX metrics from Spark applications help configure driver/executor memory correctly without wastage (taking the guesswork out of the picture)?
What are JMX metrics? → Java Management Extensions (JMX) is a specification for monitoring and managing Java applications.
The JMX metrics we are generally interested in are the following:
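As a minimal setup sketch (configuration names per Spark's monitoring guide): Spark can publish its internal metrics over JMX by registering the built-in JmxSink, either in `metrics.properties` or directly on the command line.

```properties
# metrics.properties — register the JMX sink for all Spark instances
# (driver, executors, master, worker)
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
```

The same can be passed at submit time with `--conf "spark.metrics.conf.*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink"`; the metrics then become visible to any JMX client (e.g. JConsole) attached to the driver or executor JVMs.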
- Spark driver JVM metrics such as heap and non-heap used, usage, and committed stats
- Spark driver JVM metrics such as total JVM used, committed, and max
- Spark executor (summarized average) JVMHeapMemory, JVMOffHeapMemory, On/OffHeapExecutionMemory, On/OffHeapStorageMemory, On/OffHeapUnifiedMemory
- GC and process-tree related stats, etc.
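The executor metrics above are also exposed per executor under `peakMemoryMetrics` in Spark's REST API (`/api/v1/applications/<app-id>/executors`). Below is a small sketch of how one might compute the "summarized average" across executors; the payload is illustrative sample data, not real output, and `summarize` is a hypothetical helper.

```python
def summarize(executors, keys):
    """Average each peak-memory metric (in bytes) across executors."""
    return {
        k: sum(e["peakMemoryMetrics"][k] for e in executors) / len(executors)
        for k in keys
    }

# Illustrative sample shaped like the REST API's executor entries
sample = [
    {"id": "1", "peakMemoryMetrics": {"JVMHeapMemory": 2_000_000_000,
                                      "OnHeapExecutionMemory": 500_000_000}},
    {"id": "2", "peakMemoryMetrics": {"JVMHeapMemory": 1_600_000_000,
                                      "OnHeapExecutionMemory": 300_000_000}},
]

avg = summarize(sample, ["JVMHeapMemory", "OnHeapExecutionMemory"])
print(avg)  # averaged bytes per metric across the two executors
```

In practice the `sample` list would come from an HTTP GET against the application's REST endpoint while the job is running.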
From a tuning perspective, the metrics above give insight into the overall memory usage of the driver and executors, which in turn helps to size driver or executor memory correctly.
Example metrics to look at while tuning →
DRIVER
- Suppose <application_id/name>.driver.jvm.heap.usage is, say, 0.2; that means the application is using only 20 percent of the heap, so we can decrease the driver memory until usage shoots up…
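That right-sizing arithmetic can be sketched as a small helper. This is a hypothetical function, not a Spark API; the 30% headroom is an assumption, chosen so a usage spike does not immediately hit the new ceiling.

```python
import math

def suggest_driver_memory(current_gb, heap_usage, headroom=0.3):
    """Suggest a driver memory size (GB) from observed heap usage.

    current_gb: currently configured spark.driver.memory, in GB
    heap_usage: observed <app>.driver.jvm.heap.usage, in [0.0, 1.0]
    headroom:   safety margin kept above the observed usage
    """
    used_gb = current_gb * heap_usage
    # keep headroom above observed usage, round up to a whole GB
    return max(1, math.ceil(used_gb * (1 + headroom)))

# With 10 GB configured but only 20% heap usage, ~3 GB would suffice:
print(suggest_driver_memory(10, 0.2))  # 3
```

The suggested value would then go back into `--conf spark.driver.memory=3g` (or the equivalent `spark-defaults.conf` entry), after which the heap-usage metric should be re-checked under a representative workload.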