Install a Java Development Kit from Oracle or OpenJDK.
Install the latest available Maven.
Install JDK Mission Control from an Oracle mirror.
Install the standalone version of Visual VM.
Install the standalone version of Eclipse MAT.
Install the Docker container runtime environment.
In this section we’re going to spin up the services needed for the workshop.
The web service is a simple Dropwizard server, which exposes a single API to search for talks from kcdc.info and returns results about those talks.
Prerequisites:
Assemble the service:
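A minimal sketch, assuming Maven is on your PATH and you are in the cloned java-perf-workshop directory:

$ mvn clean package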
💡 If you have issues with building it locally due to your setup, you can download the server assembly here.
Start the workshop service:
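A sketch based on the jar path shown later in this guide (the version may differ in your checkout):

$ java -jar java-perf-workshop-server/target/java-perf-workshop-server-1.1.0-SNAPSHOT.jar server server.yml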
To simulate responses of kcdc.info (as the service may change over time), we will first run a mock instance of this service using WireMock.
Go ahead and start another terminal session where we will run another service to mock a remote dependency of the workshop service. Navigate to the same directory where you cloned this repository, then execute the following commands:
Run the mock service, which will provide the essential endpoints to support the service we will be testing:
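A sketch based on the WireMock invocation visible in the jcmd output later in this guide (jar name and paths may differ):

$ java -jar wiremock-standalone.jar --port 9090 --root-dir java-perf-workshop-server/src/test/resources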
Alternatively, you can run the mockservice.sh script, which will run both commands, i.e.: sh mockservice.sh
Example configuration of service:

server:
  applicationConnectors:
    - type: http
      port: 80
  adminConnectors:
    - type: http
      port: 8071
  requestLog:
    timeZone: UTC
    appenders:
      - type: file
        currentLogFilename: /var/log/java-perf-workshop-server-access.log
        threshold: ALL
        archive: true
        archivedLogFilenamePattern: /var/log/java-perf-workshop-server-access.%d.log.gz
        archivedFileCount: 5
logging:
  level: INFO
  appenders:
    - type: file
      currentLogFilename: /var/log/java-perf-workshop-server.log
      threshold: ALL
      archive: true
      archivedLogFilenamePattern: /var/log/java-perf-workshop-server-%d.log
      archivedFileCount: 5
      timeZone: UTC
The service returns results from the KCDC website for available sessions whose title, abstract, or tags contain the search term. Example:
http://localhost:8080/search?q=clojure
Example results:
If you get a 500 error message when trying to test the service, verify that the WireMock server is running.
{
"code" : 500,
"message" : "There was an error processing your request. It has been logged (ID d8998189f8d4ee8c)."
}
In this first part of the workshop, we will begin to learn about looking at the JVM from its native perspective, and begin obtaining JVM-based information to correlate with its native representation.
top is a very common utility for monitoring processes on Linux. For our case, we will want to monitor our JVM process. When just executing top, you will see all processes on the host:
top - 05:32:51 up 8 min, 1 user, load average: 0.05, 0.12, 0.06
Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
Cpu(s): 11.2%us, 0.7%sy, 0.0%ni, 87.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.3%st
Mem: 1020184k total, 466284k used, 553900k free, 17740k buffers
Swap: 0k total, 0k used, 0k free, 289500k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6730 ec2-user 20 0 2246m 108m 13m S 10.3 10.9 0:14.11 java
1 root 20 0 19596 1616 1292 S 0.0 0.2 0:00.56 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/0
4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.04 kworker/u30:0
7 root 20 0 0 0 0 S 0.0 0.0 0:00.27 rcu_sched
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
In this view, you can also show the native threads by hitting Shift+H.
top - 05:34:15 up 9 min, 1 user, load average: 0.01, 0.09, 0.05
Tasks: 101 total, 1 running, 100 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.0%us, 0.3%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1020184k total, 491176k used, 529008k free, 17804k buffers
Swap: 0k total, 0k used, 0k free, 312648k cached
Show threads On
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6736 ec2-user 20 0 2246m 109m 13m S 2.0 11.0 0:05.72 java
6737 ec2-user 20 0 2246m 109m 13m S 0.3 11.0 0:01.69 java
6740 ec2-user 20 0 2246m 109m 13m S 0.3 11.0 0:01.67 java
6746 ec2-user 20 0 2246m 109m 13m S 0.3 11.0 0:00.31 java
6768 ec2-user 20 0 2246m 109m 13m S 0.3 11.0 0:00.03 java
6778 ec2-user 20 0 2246m 109m 13m S 0.3 11.0 0:00.08 java
6777 ec2-user 20 0 15224 1332 1032 R 0.3 0.1 0:00.04 top
1 root 20 0 19596 1616 1292 S 0.0 0.2 0:00.56 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.02 ksoftirqd/0
4 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0
With these native threads, you can now get better context of what resources are being utilized. To simplify this, you can target top to watch just one parent process and then show its threads via top -H -p <pid>.
In our earlier example, our PID was 6730. Therefore, this would produce:
$ top -H -p 6730
top - 05:38:05 up 13 min, 1 user, load average: 0.00, 0.05, 0.05
Tasks: 32 total, 0 running, 32 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1020184k total, 515940k used, 504244k free, 17932k buffers
Swap: 0k total, 0k used, 0k free, 336520k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6739 ec2-user 20 0 2246m 110m 13m S 0.3 11.1 0:00.24 java
6730 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:00.00 java
6731 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:01.50 java
6732 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:01.07 java
6733 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:00.00 java
6734 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:00.00 java
6735 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:00.00 java
6736 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:07.33 java
6737 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:01.69 java
6738 ec2-user 20 0 2246m 110m 13m S 0.0 11.1 0:00.00 java
To make a simple standard capture of native threads for diagnostic purposes, you can do the following:
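A sketch using top's batch mode (-b) with a fixed number of iterations (-n):

$ top -b -H -p <pid> -n 5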
This will capture several iterations of top's thread view for the parent process ID. You can then pipe the output to a file.
Example:
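Using our earlier PID and piping to a file (the iteration count is arbitrary):

$ top -b -H -p 6730 -n 5 > top_threads_6730.txt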
There are a couple of Java utilities that can also be used to find the PIDs of your running processes.
The Java Virtual Machine Process Status Tool (jps) can be used to see Java processes.
You can get more insight by using some of its flags:
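For example (a sketch of standard jps flags):

$ jps -l   # report the full package name of the main class, or the path to the jar
$ jps -v   # include the JVM arguments
$ jps -m   # include the arguments passed to the main method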
The jcmd utility is an attempt to consolidate other utilities into a single interface.
Running the jcmd command should display some of the same information that jps showed us (similar to jps -m):
$ jcmd
6059 java-perf-workshop-server/target/java-perf-workshop-server-1.1.0-SNAPSHOT.jar server server.yml
6925 wiremock-standalone.jar --port 9090 --root-dir java-perf-workshop-server/src/test/resources
7838 sun.tools.jcmd.JCmd
To capture a thread dump of the JVM, there are many tools that can achieve this. You can do this remotely over JMX, by sending an OS signal (kill -3), or just from the command line (jstack or jcmd). To keep it simple, we will just use jcmd, as we will use it later for other diagnostic commands.
💡 You need to invoke the command as the same OS user as the target JVM.
So, using our earlier example, this would be:
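A sketch using jcmd's Thread.print diagnostic command:

$ jcmd 6730 Thread.print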
However, correlating a thread dump to the native thread state requires a minor translation. In your thread dump, the native thread identifier is in hexadecimal, while the native thread identifiers from top are in decimal. Therefore, if you notice a thread spiking in the top view, you would then want to understand what code it correlates to. You can do this simply from the command line with printf.
Example:
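A sketch converting the decimal thread ID from top (here 6736, the busiest java thread above) to hex:

$ printf "0x%x\n" 6736
0x1a50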
I can then cross reference this thread to my thread dump via the nid (native thread ID) field:
"C2 CompilerThread0" #5 daemon prio=9 os_prio=0 tid=0x00007efda40d4000 nid=0x1a50 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
An alternative tool, fastthread.io, also has a blog post on other ways to capture a thread dump.
A very simple and quick set of stats on a JVM can be collected via:
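A sketch using jcmd's PerfCounter.print diagnostic command, redirected to a file:

$ jcmd 6730 PerfCounter.print > perfcounters.txt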
Generally, the information in this file may or may not indicate the actual problem, but it can provide more context, in a concise text file, of what has occurred in the life of that JVM. One simple set of counters related to threads:
java.threads.daemon=20
java.threads.live=26
java.threads.livePeak=38
java.threads.started=697
In some rare cases, you may need to figure out what system calls your JVM is doing. One way to do this on several flavors of Linux is to use strace.
Example:
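A sketch attaching strace to the JVM process and following its threads (-f); expect very verbose output:

$ strace -f -p 6730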
Example output:
[pid 6741] futex(0x7efda4587228, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
[pid 6740] write(1, "DEBUG [2015-06-24 05:50:34,727] "..., 168 <unfinished ...>
[pid 6830] gettimeofday( <unfinished ...>
From the write call, we can see it is tied to the thread ID 6740. If we translate it to hex (0x1a54), it is then tied to this logging thread in the JVM:
"AsyncAppender-Worker-Thread-1" #9 daemon prio=5 os_prio=0 tid=0x00007efda44d1800 nid=0x1a54 waiting on condition [0x00007efd941f0000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000f5d38d20> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
You will notice that the system calls are quite noisy with the JVM, but this type of troubleshooting can greatly help when the JVM is getting “hung” or hitting limits imposed by the OS and it isn't clear from the exception you are seeing what is being restricted.
In this workshop, we will be getting more familiar with using Java Mission Control and the Flight Recorder. As we have been running our service, we want to gather more insight into the code in the service by running a series of simple tests and measuring the service.
The Java Flight Recorder (JFR) is a very low overhead profiling and diagnostics tool. It was inherited from the JRockit JVM and later offered as part of the HotSpot JVM. It is designed to be a “black box” data recorder of the runtime, which can be used in production environments, making it an attractive tool for profiling code since it has low overhead on the JVM.
Newer versions of JMC are developed as part of the OpenJDK Mission Control project.
To enable the Flight Recorder on the JVM, the following options need to be included on the JVM:
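On Oracle JDK 8, these are the two flags in question (a sketch; note the ordering caveat below):

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder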
❗ Notice that you have to include -XX:+UnlockCommercialFeatures first in the options listing. This is because Flight Recorder is a feature that requires additional licensing when used in production environments. One of the following licenses is required in order to utilize this in a production environment.
However, you may use this in non-production environments without additional licensing.
To get better fidelity on method profiling, include the following options which will enable the compiler to include additional metadata on non-safe code points. This is helpful, as sometimes the metadata will not fully resolve to the correct line in the code.
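A sketch of the flags typically used for this (a diagnostic option, so it must be unlocked first):

-XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints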
We will be using Java Mission Control to monitor and evaluate the Java flight recordings. To start up Java Mission Control, simply execute the following in your console:
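Assuming $JAVA_HOME/bin is on your $PATH (see the note below):

$ jmc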
💡 In order to be able to invoke jmc (Java Mission Control) from your console, it assumes $JAVA_HOME/bin is on your $PATH. If it is not included, go ahead and update your profile to include this so you can easily invoke jmc from your terminal.
Let’s start profiling our service. Start the service up by enabling JFR:
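A sketch combining the JFR flags above with the server jar (the jar version may differ in your checkout):

$ java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -jar java-perf-workshop-server/target/java-perf-workshop-server-1.1.0-SNAPSHOT.jar server server.yml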
💡 Starting with Java Mission Control 5.5 (included in Oracle Java 8u40), you no longer have to enable JFR prior to capturing the recording (it will dynamically enable it, after prompting about it).
This will open a window where you apply some settings for the recording. First select that you want this to be a Continuous recording, and for Event settings we will import a template to get some consistent settings for profiling. Within the Template Manager, select Import Files… and import the open_jdk_9_.jfc included in the content/docs/jmc folder. It should appear as Java Performance Workshop JDK9+ Profile. Select this as the Event Settings and then click on Finish.
For reference, these are the options for the template.
First select that you want this to be a Continuous recording and for Event settings we will use Profiling on Server.
Once your flight recording is being captured in a Continuous recording, it will show a ∞.
💡 You can see the JFR templates (continuous / profile) which are shipped as part of the JRE in $JAVA_HOME/jre/lib/jfr. These become helpful if you want to compare your settings to some of the standard ones.
We will want to generate some traffic on the service to measure some general characteristics of the service:
By generating traffic on the service, this gives us baseline activity in the JVM to start evaluating the flight recording.
This service under test is a simple web service which provides results based on a search API. When interacting with the service, you can simply supply an HTTP GET on the search resource with a query parameter (‘q’) for the term that you are searching for. It will then return KCDC's 2015 sessions that contain that term. Example:
We can utilize Apache Benchmark to generate traffic on the service. From the console, we will execute the following to generate traffic against our service. Note, we will use a very basic search of just “a”, since this will generate more results.
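A sketch using standard ab options (-n total requests, -c concurrency; tune the numbers as you like):

$ ab -n 10000 -c 4 "http://localhost:8080/search?q=a"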
An alternative to Apache Benchmark is loadtest (a Node.js equivalent). To install:
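loadtest installs globally through npm:

$ npm install -g loadtest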
Then you can execute similarly:
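A sketch mirroring the ab run above (loadtest uses the same -n/-c option names):

$ loadtest -n 10000 -c 4 "http://localhost:8080/search?q=a"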
Alternatively, you can use gatling (a performance library with a Scala DSL).
This should launch the WorkshopSimulation.
Sample output while running:
After you have played traffic through the service, you can then stop your flight recording from JMC.
Then dump the whole recording.
From a Java Flight Recording, there are several categories of information (pages) you can view from JMC:
In the next section, we'll find out more about what our methods are doing.
Method Profiling

Our first look at the recording will start with finding out where we might be spending time with our code.
We’ll start by taking a look at the detailed Method Profiling Page. This page displays how often a specific method is run, based on sampling.
By default, this view groups threads by method, but you can also include the line number for each method. This can be accomplished by selecting the View Menu option and going to Distinguish Frames By and selecting Line Number.
💡 Generally, you don’t need this, as it can be quite apparent from the base method being invoked where the cost is. Though, it may be helpful to include in some contexts.
Navigate to the Java Application page. This page provides an overview of the state of the JVM and collates information from a couple of the dedicated pages.
Using the checkboxes on the right of the Graph, select only Method Profiling to narrow down the set of events on the graph:
The Stack Trace view from this page can give you a better count of sampled methods (not just the ones from a specific Method Profiling event).
This information is the equivalent of the Hot Methods tab in JMC 5, see forum discussion.
We can then use this information to correlate what is happening in our code:
In the next section, we'll look at the Exceptions thrown by our application.
Exceptions

In this section we’ll dive into the Exceptions page and find out what information we can learn from it. Start by selecting the Exceptions page from the Outline.
This page offers the following sections:
You can select an exception from the table to filter the information to that particular exception. You could also filter by a particular exception message.
NullPointerExceptions are a familiar exception you deal with in Java, and oftentimes indicate a programming error. Other exception types often point to a common symptom.
In the next section, we'll look at the Threads used by our application.
Threads

The Threads page provides a snapshot of all threads in our application, and we can use it to acquire information about computation and latency events.
We can use the Thread Chart view to select a region of time and view the state of threads and the most common stack frames for that particular time slice:
We can also edit the types of events shown by the chart. Right click on the chart, and select Edit Thread Activity Lanes.
We can then narrow down on specific events.
For the purpose of this section, filter on only the Sleep events. Then, select a period of time and look at what has caused our threads to go to sleep for that particular period:
Once you’re done with the filtered view, right click on the Thread page and Reset the page (to return to the default state).
We can select an individual thread from the Thread Table to further filter the data just for that thread. From this filtered view, we can inspect what the commonly executed methods are for a particular thread.
We can then use the stacktrace view to correlate what piece of code executed the most.
There are a couple of threads specifically created by our application; these are the wkshp-conf-loader named threads. The dw-* named threads are threads created by Dropwizard to handle specific requests (you might have noticed this based on the name containing a REST Resource). Explore the state of some of these threads while thinking about the follow-up questions.
In the next section, we'll look at the Memory state of our application.
Memory

The memory page shows us the overall memory usage and garbage collection events for our recording.
The Class Table lets us view the total memory allocation dedicated to particular instances of that class. Selecting a particular class will allow us to view what code paths are responsible for creating those objects.
We can also filter this view to a particular range in time, for example a range where we went from low to high heap usage:
We can then use this filtered view and drill down through the stack trace to find methods of interest:
Narrow down through some periods of time where our heap rose quickly and then was collected. Look at the memory reclaimed after each collection.
In the next section, we'll look at the I/O used by our application.
I/O Operations

The I/O pages can provide context of what external resources you are utilizing from your code base, and relate it to other latency events.
For the purpose of our workshop service, we are doing network I/O, and therefore will look at the Socket I/O page.
From this page, we can gain context of what host(s) we are calling, with what frequency, and the cost associated with those calls:
We can then filter on a particular host and port and drill down into a particular piece of code responsible for that call:
Filter through the total I/O time and explore a couple of events with high I/O.
In the following sections, we are going to do some analysis of the JVM’s heap dump.
First, we’ll need to capture a heap dump:
We will then go through three common tools in which we can do some analysis of a heap dump. All of which are free. 😃
We will then tie all our knowledge together and attempt to solve a quick challenge:
In the next section, we'll learn how to acquire a Heap Dump.
Capturing Heap Dumps

As you may be aware by now, there are many different ways of getting data from the JVM. Here are just a few ways to get a heap dump of the JVM.
With jcmd, you have the ability to invoke the GC.heap_dump command. This requires that you are running as the same OS user as the target JVM process.
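A sketch against the PID from our earlier examples (the argument is the file to write):

$ jcmd 6730 GC.heap_dump /tmp/workshop-heap.hprof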
Note: If a full path is not specified, the heap_dump will be created relative to the location from where the process was started (when generated with jcmd)
A more traditional approach is using jmap and invoking the command on the target process.
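A sketch of the classic jmap invocation (live restricts the dump to reachable objects; format=b writes the binary hprof format):

$ jmap -dump:live,format=b,file=/tmp/workshop-heap.hprof 6730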
You can also use jmap to extract a heap dump from a core dump on the process:
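A sketch; jmap is pointed at the java binary and the core file (paths are illustrative):

$ jmap -dump:format=b,file=/tmp/workshop-heap.hprof $JAVA_HOME/bin/java core.6730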
💡 Capturing a heap dump from a core dump on Java 8, you may run into this issue: JDK-8073606.
You can capture a heap dump by invoking a diagnostic MBean which is included in the Oracle JDK. You can do so by opening your favorite MBean browser (ex. jconsole, jmc) and invoking the dumpHeap operation on the com.sun.management.HotSpotDiagnostic MBean. The arguments are:
In the next section, we'll start analyzing our heap dump with JOverflow.
JOverflow

Included in JDK Mission Control is a plugin that used to be a separate tool, the JOverflow Heap Analyzer. This plugin now adds a context menu selection to Dump Heap on JVMs from the JVM Browser.
JOverflow is a great tool to start our analysis with since it is tailored to identifying common anti-patterns present in most heap dumps. This should give us an idea of the kinds of problems we can see in our JVMs.
Here is a listing of some of the patterns that JOverflow identifies:

- Empty collections that have never been used (count == 0).
- Empty collections that have been used (count != 0).
- Duplicate strings, where string1.equals(string2) and string1 != string2.

Reference: JOverflow: Advanced Heap Analysis in Java Mission Control
As you play with JOverflow, once you click on something, it will begin filtering down your content. If you wish to reduce down to code that you own, you can apply a package filter on the ancestor section.
In this example, I want to see duplicate strings for code under cchesser. In this case, I can see that our workshop service has several duplicate strings (as this content is replicated on each object):
While the number of instances / overhead is extremely low (since this is a small service), it gives you visibility into areas that you could optimize.
Since JOverflow’s mechanism of drilling down is by applying filters, it doesn’t have an easy way to undo most operations, and instead relies on resetting the state to the unfiltered state. You can undo all operations by using the backwards arrow button:
We can also use the plugin to drill down a bit into particular instances.
We can enable the instances view by navigating to:
In the following example, we’re going to find:
This should filter the Instances view down to that single object.
We can use this same mechanism to also explore values of some of the classes we own, in the next example we’re going to:
In the next section, we'll continue our analysis in Eclipse MAT.
Eclipse MAT

Eclipse Memory Analyzer is a fairly mature tool for analyzing heap dumps. It is a rich tool that includes the ability to run OQL queries on the heap, general reporting on anti-patterns, and helpful common views (like the dominator tree). The Eclipse Memory Analyzer tool is a separate install, which can be standalone or run as a plugin within Eclipse.
After you have it installed, go to File and then Open Heap Dump… and specify the hprof file that you wish to analyze.
💡 When Eclipse Memory Analyzer loads your hprof file, it will create many other index files to assist in optimally analyzing and navigating through your heap dump. It may make it easier to isolate your hprof file in its own directory prior to opening it, given the secondary files which are created. You can always just delete these files within Eclipse Memory Analyzer by opening the Heap Dump History window, right-clicking the hprof which was previously loaded, and selecting Delete Index Files. Example listing of index files which get generated, by suffix:
.domIn.index
.domOut.index
.index
.o2ret.index
.a2s.index
.idx.index
.inbound.index
.o2c.index
.o2hprof.index
.outbound.index
.threads
In the following section, we’ll go through a couple of the views and what sort of data we can gather from them.
A common first area to look at your heap within Eclipse Memory Analyzer is the dominator tree. From the dominator tree, you can organize by the retained heap size, and then begin drilling down the tree to see the contributors to the largest GC roots.
The histogram gives you a quick listing of all the top consumers by type. Typically this is going to provide context of the large consumers based on their “lower-level” types. For example, char[] is a common large contributor, which will then be followed by String, a type that is predominately weighted by the size of its char[].
From the histogram, we can drill down into particular instances of an object and see what’s referencing them by right clicking on a row and selecting List Objects > with Incoming References.
This should let us drill down into an instance and find what exactly is holding on to that specific value:
Since this view shows references by incoming link, we would read the results as: “angularjs” is pointed to by the [0] index of the tags of a specific ConferenceSession instance, which is pointed to by the LocalCache. Notice that this specific example is the inverse of what we found with the dominator tree; if we explored this table enough, we would end up at a ConferenceSessionLoader reference.
The thread overview is a helpful view when you are looking at contributors based on the execution of code. For example, if you have a JVM which has thrown an OutOfMemoryError, you can tell if some type of request caused a massive spike which caused the exhaustion of memory, or if the requests were not abnormal and there just wasn’t sufficient space to support the request.
It may be the case that the thread that has the OutOfMemoryError in its stack is not the root cause. Consider the following example:
- An initial request allocates a large amount of memory in int[] buffers and holds on to it.
- A second request then attempts to allocate a 5MB int[].

The JVM would potentially throw the OutOfMemoryError on that second request, since there is not enough memory to allocate 5MB; however, it might be the case that the code path for the initial request is allocating memory unnecessarily.
Another strong feature of Eclipse Memory Analyzer is the OQL studio, where you can execute queries to look up objects in your heap.
Look up our ConferenceSession class type:
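A sketch of the basic form, using the class name from this workshop's code:

SELECT * FROM cchesser.javaperf.workshop.data.ConferenceSession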
💡 To execute the query, you click on the red exclamation mark (❗). Alternatively, you can press CTRL+Enter on Windows or Command+Enter on macOS.
By returning the raw type back, you can expand each one to further look at the contents of the object. If you wanted to just report the object with its title attribute and the object’s retained size, you could do this:
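A sketch using MAT's @retainedHeapSize attribute (title is the field name from this workshop's code):

SELECT s.title, s.@retainedHeapSize FROM cchesser.javaperf.workshop.data.ConferenceSession s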
Now, you could then filter this by including a WHERE clause, where you can filter by the title or the abstract:
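A sketch filtering on the title; toString() is a MAT OQL built-in, and LIKE takes a regular expression:

SELECT s.title, s.@retainedHeapSize FROM cchesser.javaperf.workshop.data.ConferenceSession s WHERE toString(s.title) LIKE ".*Java.*"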
Reference: Eclipse Memory Analyzer OQL Syntax
The auto-completion will also let you invoke fields and methods from Eclipse MAT's own internal model for the heap dump. You can view other methods available by exploring the API Reference.
The Calcite MAT Plugin adds sort, group by and join behavior (among other things) and an additional SQL query engine.
We can use this plugin to do some more advanced analysis, for example:
In the next section, we'll continue our analysis in VisualVM.
Visual VM

VisualVM is a tool which used to be part of Oracle's JDK and is now a standalone tool. It was spawned from the NetBeans Profiler, and it has some helpful tools to inspect a heap dump.
To load the heap dump in VisualVM, you will just go to File -> Load… and specify the File Format to be Heap Dumps….
VisualVM offers the following sections:
For the purpose of this section we are only going to utilize the OQL Console to construct queries on the heap. There are other options in VisualVM (briefly discussed here), but we feel that the other tools presented before this offer much nicer views/data.
The Summary view gives you a quick overview of the state of the heap:
This view also lets you quickly gather the dominators for the heap, similar to the Dominator Tree in Eclipse MAT.
The Objects view groups instances by class.
Within this view, you can inspect individual instances of an object and see both the fields and the incoming references to that instance:
The Threads view visualizes the state of threads.
Within this view, you can drill down into certain threads and view what was specifically referenced in a stack frame:
One of the attractive features of VisualVM is that its OQL Console allows you to write JavaScript code to query objects, and you can then apply other functions to that data.
From the OQL Console, we will issue queries and then allow you to further play around with the language.
Reference: VisualVM OQL Help
Query for Thread objects which are daemons (you can get context of Threads, since they are objects in your heap too):
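A sketch (daemon is a field on java.lang.Thread):

select t from java.lang.Thread t where t.daemon == true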
Previous versions of VisualVM used to return the results as hyperlinks, which you could then click and begin diving into the instance state:
The newer versions of VisualVM will populate the results in a view similar to the Objects view:
Query for Thread objects (similar to last query), but return multiple attributes as JSON:
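A sketch returning a JavaScript object literal per thread (tid and daemon are fields on java.lang.Thread):

select { id: t.tid, daemon: t.daemon } from java.lang.Thread t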
VisualVM offers a helper object that represents the heap, the heap object, which offers a few useful functions. During our OQL searching, we’ll be using heap.objects to gather the set of objects we want.
We’ll start by limiting our objects to instances of cchesser.javaperf.workshop.data.ConferenceSession and assigning them to a variable called sessions (which we’ll reference later).
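A sketch (heap.objects enumerates instances of the given class):

var sessions = heap.objects("cchesser.javaperf.workshop.data.ConferenceSession");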
We’re going to find ConferenceSessions which have tags; we’ll do that by writing a JavaScript function to filter results which have tags. We’ll apply this filter to our sessions.
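A sketch using the filter() built-in, assuming the tags field (seen earlier in Eclipse MAT) is array-like with a length property:

var tagged = filter(sessions, function (s) { return s.tags != null && s.tags.length > 0; });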
Take that previous query and further expand it to list it in JSON with the number of tags. It is often times convenient to have a reference to the object we’re interacting with, to more easily see the available fields.
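Continuing the sketch with the map() built-in, keeping a reference to the session object alongside its tag count:

map(tagged, function (s) { return { session: s, tagCount: s.tags.length }; });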
We can further expand this to sort the results, descending by number of tags, and list the tags in the results:
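A sketch using the sort() built-in (the comparator is given as an expression over lhs/rhs), then mapping out the tags:

var sorted = sort(tagged, 'rhs.tags.length - lhs.tags.length');
map(sorted, function (s) { return { tagCount: s.tags.length, tags: s.tags }; });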
We could also inline most of this using the inline syntax:
This plugin adds some auto-complete and syntax highlighting to the OQL Console.
Apply what you’ve learned about OQL in this section and write some JavaScript that will show the distribution of tag counts in our conference sessions (similar to what we did with the Calcite MAT plugin).
This solution will return the result as a JSON object.
This solution will return the result as an HTML <ul>.
In the next section, we'll test our knowledge.
Memory Challenge

The provided WorkshopService has an intentional memory problem; use the previous sections as reference to try to pinpoint it.
Once you’ve found the large object, look through its properties and determine why this object holds on to so many other objects.
A first glance at the dominator tree will point us to an io.dropwizard.jetty.MutableServletContextHandler, which is something controlled by the Dropwizard framework. Let’s dig through its references until we find some classes in the cchesser package.
The first instance of a class we “own” is cchesser.javaperf.workshop.resources.WorkshopResource. Let’s look at its properties a bit, and we’ll find a cchesser.javaperf.workshop.cache.CleverCache reference with about 70% of the memory consumption.
We’ll look at its properties in a bit more detail in the following sections.
A first glance at the Histogram View (sorted by retained heap) shows us about 2,000,000 instances of cchesser.javaperf.workshop.data.Searcher$SearchResultElement. Looking through the references to an instance of this type, we end up being referenced by the data field in cchesser.javaperf.workshop.cache.CleverCache$CacheEntry.
We can further inspect the Incoming References for a particular entry, and see what is pointing to it, tracing it back to something we own:
Once again, we’ve ended up at cchesser.javaperf.workshop.resources.WorkshopResource, specifically a field called resultsByContext.
We’ll take a closer look at this cchesser.javaperf.workshop.cache.CleverCache object.
First, let’s determine how many instances of cchesser.javaperf.workshop.cache.CleverCache we have. We’ve found a single instance that is about 70MB (in our sample dump), so let’s double check if there are more of these.
We can do that in a couple of ways, for example by filtering the Histogram on CleverCache, or with an OQL query.
We can find the fields and references for an instance using either Eclipse MAT or VisualVM.
In Eclipse MAT, the fields for an instance display in the Attributes tab.
In VisualVM, we can load the Objects view and apply a filter on class name to then inspect the fields and references
Looking at the properties, we find that there’s a cacheLimit of 250. Let’s find out exactly how many entries are in the cache (the innerCache field).
Let’s write a query that will show us the count of entries in the cache: select the cacheLimit and the size of the innerCache map. We can do this a few ways; for example, with the following query:
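A sketch in Eclipse MAT OQL, assuming innerCache is a standard map with a size field:

SELECT c.cacheLimit, c.innerCache.size FROM cchesser.javaperf.workshop.cache.CleverCache c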
That doesn’t seem right at all: we want at most 250 items, but we have about 10,000. Let’s find out why that is the case.
The interaction between the WorkshopResource and CleverCache happens through the /search resource.
cchesser.javaperf.workshop.resources.WorkshopResource
If a context is sent, the previously cached values for the search are returned, otherwise we cache those result values in case they need to be referenced again.
cchesser.javaperf.workshop.cache.CleverCache
This interaction is NOT thread safe. Consider this scenario:
1. Thread A calls store when the innerCache.size() is 249; isFull() evaluates to false, so it proceeds to innerCache.put.
2. Thread B also calls store while the innerCache.size() is still 249; isFull() evaluates to false, so it also proceeds to innerCache.put.
3. After Thread A's innerCache.put, innerCache.size() is now 250.
4. After Thread B's innerCache.put, innerCache.size() is now 251.
5. Since the size has now passed the limit, isFull() will no longer return true, and the cache can keep growing.
Fortunately, the Dropwizard documentation calls out the need to consider thread safety:
Since the WorkshopResource has a resultsByContext field that will get shared across threads, that type should be thread safe.
The naive solution would be to just change isFull() to consider >=, though that might lead to ConcurrentModificationExceptions. The proper solution would be to make the class thread safe.
In the next section, we'll continue by learning about Garbage Collection logs.
Garbage Collections

A key piece to understanding what is happening in terms of GC cycles within your service is enabling GC logs. In this section, we are going to do some analysis of the JVM in regards to its garbage collection (GC) cycles.
Here is a set of JVM options you can enable on your service and how to interpret those logs.
- -XX:+PrintGCDetails: Includes more details within your GC log.
- -XX:+PrintGCDateStamps: Includes a readable date/time string to correlate events in your log. Without this option, your GC log will have elapsed time since the JVM was started. This format, which is reported in seconds (with millisecond precision), is not easy for someone to quickly correlate to when the event was logged (as you have to infer the time based on when the JVM was started).
- -Xloggc: Specifies the log file for the garbage collection logs (otherwise they will go to stdout). 💡 Note: -verbose:gc is NOT necessary when you set -Xloggc (it is implied).
- -XX:+UseGCLogFileRotation: Supports rotating your GC log file (you don't want to let this get too big).
- -XX:GCLogFileSize: Size of the file for rotation (ex. 10M).
- -XX:NumberOfGCLogFiles: Number of GC log files to maintain (ex. 3). 💡 Note: if you are monitoring your log file with another solution (like splunk or logstash), you typically don't need to be keeping an inventory of rolled files around, unless you are concerned about log forwarding failing and want to ensure a given amount is still capable of being captured from a host.

With different garbage collectors in the JVM, you will get slightly different GC log formats.
2015-09-30T10:57:20.215+0600: 0.847: [GC (Allocation Failure) [PSYoungGen: 65536K->10748K(76288K)] 65536K->12607K(251392K), 0.0118637 secs] [Times: user=0.03 sys=0.01, real=0.01 secs]
2015-09-30T10:57:20.556+0600: 1.188: [GC (Metadata GC Threshold) [PSYoungGen: 44312K->10748K(141824K)] 46171K->12786K(316928K), 0.0077755 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2015-09-30T10:57:20.564+0600: 1.196: [Full GC (Metadata GC Threshold) [PSYoungGen: 10748K->0K(141824K)] [ParOldGen: 2038K->10444K(116736K)] 12786K->10444K(258560K), [Metaspace: 20976K->20976K(1067008K)], 0.0286381 secs] [Times: user=0.14 sys=0.01, real=0.03 secs]
2015-09-30T11:11:35.994+0600: 0.838: [GC (Allocation Failure) 0.838: [ParNew: 69952K->8703K(78656K), 0.0128204 secs] 69952K->12781K(253440K), 0.0128848 secs] [Times: user=0.04 sys=0.01, real=0.01 secs]
2015-09-30T11:11:38.009+0600: 2.853: [GC (CMS Initial Mark) [1 CMS-initial-mark: 4077K(174784K)] 67493K(253440K), 0.0088311 secs] [Times: user=0.04 sys=0.00, real=0.01 secs]
2015-09-30T11:11:38.018+0600: 2.862: [CMS-concurrent-mark-start]
2015-09-30T11:11:38.018+0600: 2.862: [CMS-concurrent-mark: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2015-09-30T11:11:38.018+0600: 2.862: [CMS-concurrent-preclean-start]
2015-09-30T11:11:38.019+0600: 2.863: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
2015-09-30T11:11:38.019+0600: 2.863: [CMS-concurrent-abortable-preclean-start]
CMS: abort preclean due to time 2015-09-30T11:11:43.074+0600: 7.918: [CMS-concurrent-abortable-preclean: 1.233/5.055 secs] [Times: user=1.23 sys=0.01, real=5.06 secs]
2015-09-30T11:11:43.074+0600: 7.918: [GC (CMS Final Remark) [YG occupancy: 63415 K (78656 K)]7.918: [Rescan (parallel) , 0.0052614 secs]7.924: [weak refs processing, 0.0000337 secs]7.924: [class unloading, 0.0029068 secs]7.927: [scrub symbol table, 0.0025781 secs]7.929: [scrub string table, 0.0002699 secs][1 CMS-remark: 4077K(174784K)] 67493K(253440K), 0.0117740 secs] [Times: user=0.04 sys=0.01, real=0.01 secs]
2015-09-30T11:11:43.086+0600: 7.930: [CMS-concurrent-sweep-start]
2015-09-30T11:11:43.086+0600: 7.930: [CMS-concurrent-sweep: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2015-09-30T11:11:43.087+0600: 7.930: [CMS-concurrent-reset-start]
2015-09-30T11:11:43.110+0600: 7.954: [CMS-concurrent-reset: 0.023/0.023 secs] [Times: user=0.01 sys=0.01, real=0.03 secs]
2015-09-30T11:13:03.870+0600: 0.395: [GC pause (G1 Evacuation Pause) (young), 0.0052206 secs]
[Parallel Time: 1.9 ms, GC Workers: 8]
[GC Worker Start (ms): Min: 395.4, Avg: 395.5, Max: 395.8, Diff: 0.3]
[Ext Root Scanning (ms): Min: 0.0, Avg: 0.3, Max: 1.1, Diff: 1.1, Sum: 2.0]
[Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[Processed Buffers: Min: 0, Avg: 0.0, Max: 0, Diff: 0, Sum: 0]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.1, Max: 0.4, Diff: 0.4, Sum: 0.5]
[Object Copy (ms): Min: 0.7, Avg: 1.4, Max: 1.6, Diff: 0.9, Sum: 11.4]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[GC Worker Total (ms): Min: 1.5, Avg: 1.8, Max: 1.8, Diff: 0.3, Sum: 14.2]
[GC Worker End (ms): Min: 397.3, Avg: 397.3, Max: 397.3, Diff: 0.0]
[Code Root Fixup: 0.1 ms]
[Code Root Purge: 0.1 ms]
[Clear CT: 0.1 ms]
[Other: 3.0 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 2.7 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.2 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.0 ms]
[Eden: 24.0M(24.0M)->0.0B(39.0M) Survivors: 0.0B->3072.0K Heap: 24.0M(256.0M)->5754.5K(256.0M)]
[Times: user=0.02 sys=0.01, real=0.00 secs]
2015-09-30T11:13:04.343+0600: 0.868: [GC pause (G1 Evacuation Pause) (young), 0.0082908 secs]
[Parallel Time: 5.1 ms, GC Workers: 8]
[GC Worker Start (ms): Min: 868.3, Avg: 868.4, Max: 868.8, Diff: 0.5]
[Ext Root Scanning (ms): Min: 0.0, Avg: 0.3, Max: 1.7, Diff: 1.7, Sum: 2.7]
[Update RS (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.5]
[Processed Buffers: Min: 0, Avg: 0.4, Max: 1, Diff: 1, Sum: 3]
[Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.4, Max: 1.4, Diff: 1.4, Sum: 3.5]
[Object Copy (ms): Min: 3.2, Avg: 4.0, Max: 4.6, Diff: 1.4, Sum: 32.1]
[Termination (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.6]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1]
[GC Worker Total (ms): Min: 4.6, Avg: 4.9, Max: 5.0, Diff: 0.5, Sum: 39.5]
[GC Worker End (ms): Min: 873.3, Avg: 873.3, Max: 873.4, Diff: 0.0]
[Code Root Fixup: 0.3 ms]
[Code Root Purge: 0.1 ms]
[Clear CT: 0.1 ms]
[Other: 2.7 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 2.4 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.1 ms]
[Eden: 39.0M(39.0M)->0.0B(147.0M) Survivors: 3072.0K->6144.0K Heap: 44.6M(256.0M)->13.9M(256.0M)]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2015-09-30T11:13:04.650+0600: 1.176: [GC pause (Metadata GC Threshold) (young) (initial-mark), 0.0090083 secs]
[Parallel Time: 5.5 ms, GC Workers: 8]
[GC Worker Start (ms): Min: 1175.9, Avg: 1176.0, Max: 1176.0, Diff: 0.1]
[Ext Root Scanning (ms): Min: 1.1, Avg: 1.2, Max: 1.4, Diff: 0.3, Sum: 9.4]
[Update RS (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 1.2]
[Processed Buffers: Min: 0, Avg: 1.1, Max: 2, Diff: 2, Sum: 9]
[Scan RS (ms): Min: 0.1, Avg: 0.1, Max: 0.3, Diff: 0.2, Sum: 1.1]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.2, Max: 0.5, Diff: 0.5, Sum: 2.0]
[Object Copy (ms): Min: 3.4, Avg: 3.7, Max: 3.8, Diff: 0.4, Sum: 29.3]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
[GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.2]
[GC Worker Total (ms): Min: 5.4, Avg: 5.4, Max: 5.5, Diff: 0.1, Sum: 43.3]
[GC Worker End (ms): Min: 1181.4, Avg: 1181.4, Max: 1181.4, Diff: 0.0]
[Code Root Fixup: 0.3 ms]
[Code Root Purge: 0.0 ms]
[Clear CT: 0.1 ms]
[Other: 3.0 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 2.7 ms]
[Ref Enq: 0.0 ms]
[Redirty Cards: 0.1 ms]
[Humongous Reclaim: 0.0 ms]
[Free CSet: 0.1 ms]
[Eden: 33.0M(147.0M)->0.0B(140.0M) Survivors: 6144.0K->13.0M Heap: 46.9M(256.0M)->20.9M(256.0M)]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2015-09-30T11:13:04.660+0600: 1.185: [GC concurrent-root-region-scan-start]
2015-09-30T11:13:04.664+0600: 1.190: [GC concurrent-root-region-scan-end, 0.0046509 secs]
2015-09-30T11:13:04.664+0600: 1.190: [GC concurrent-mark-start]
2015-09-30T11:13:04.665+0600: 1.190: [GC concurrent-mark-end, 0.0007287 secs]
2015-09-30T11:13:04.665+0600: 1.190: [GC remark 1.190: [Finalize Marking, 0.0001736 secs] 1.191: [GC ref-proc, 0.0000411 secs] 1.191: [Unloading, 0.0016740 secs], 0.0020377 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
2015-09-30T11:13:04.667+0600: 1.193: [GC cleanup 21M->14M(256M), 0.0004254 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
A simple rule to watch for on your logs is the prefix of either:
[GC ... <- Minor GC cycle (young gen)
[Full GC ... <- Full GC cycle
(❗) Explicit GCs can also be identified, which is when something is invoking the System.gc() API. Note, this is not a good thing, as something is forcing a GC cycle to occur, rather than letting the JVM trigger this on its own (as should naturally occur).
2015-09-30T12:23:44.425+0600: 195.699: [GC (System.gc()) [PSYoungGen: 39223K->3562K(76288K)] 49685K->14032K(190464K), 0.0047880 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
2015-09-30T12:23:44.430+0600: 195.704: [Full GC (System.gc()) [PSYoungGen: 3562K->0K(76288K)] [ParOldGen: 10469K->9174K(114176K)] 14032K->9174K(190464K), [Metaspace: 25137K->25137K(1071104K)], 0.0724521 secs] [Times: user=0.38 sys=0.01, real=0.07 secs]
It is generally recommended to use the -XX:+DisableExplicitGC JVM option to disable forceful GC events. This will allow the JVM to still have garbage collections, but it disables them from being triggered explicitly. The description of this option:
By default calls to System.gc() are enabled (-XX:-DisableExplicitGC). Use -XX:+DisableExplicitGC to disable calls to System.gc(). Note that the JVM still performs garbage collection when necessary.
With our service, let’s go ahead and start it up with GC logging enabled:
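A sketch combining the options above with the server jar (file name and sizes are illustrative):

$ java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log -XX:+UseGCLogFileRotation -XX:GCLogFileSize=10M -XX:NumberOfGCLogFiles=3 -jar java-perf-workshop-server/target/java-perf-workshop-server-1.1.0-SNAPSHOT.jar server server.yml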
In higher versions of the JDK (9+), you can enable GC logging without having to restart the JVM (reference: Thomas Darimont):
See: Introduction to Troubleshooting in JDK 9: Serviceability Tools Are Your Friends [CON3733] Slide 39 - 43
jcmd 999 VM.log output="file=gc.log" output_options="filecount=5,filesize=10m" what="gc=debug" decorators="time,level"
For parsing the logs, we are just going to show a simple approach using R to parse the gc.log that we created. To do this, we will use the stringr package.
Example output:
HeapUsedBeforeGC_KB HeapUsedAfterGC_KB HeapCapacity_KB GCPauseTime_Sec GCUserTime_Sec GCSysTime_Sec GCRealTime_Sec
1 65536 12554 251392 0.0091645 0.03 0.01 0.01
2 45883 12707 316928 0.0092271 0.03 0.01 0.01
Reminder on the types of time being collected:

- GCUserTime_Sec: User space time.
- GCSysTime_Sec: Kernel space time (operating system).
- GCRealTime_Sec: Complete time taken.

Notice that the GCRealTime_Sec should closely align with the GCPauseTime_Sec value (just rounded up). If you notice that the GCUserTime_Sec is much larger than the GCRealTime_Sec, you can conclude that multiple threads are executing garbage collection, as GCUserTime_Sec is just the sum time of all the threads.
There’s an existing tool GC Easy which will do GC log parsing and analysis as well.
In this section we’re going to spin up the containers needed for the workshop.
The maven project uses the fabric8.io/docker-maven-plugin to create two images:

- workshop-server - the docker image for the workshop service
- workshop-wiremock - the docker image for the wiremock service

Run mvn clean package -Pdocker; the docker profile enables the docker-maven-plugin.
You can view the generated images with docker image ls | grep workshop:
Since our Workshop Service depends on the Wiremock Service, we’re going to use docker-compose to create a docker environment with both our services ready to go:
Within the java-perf-workshop directory, run docker-compose up:
In another terminal, you can check the status of the containers by running docker ps:
Our workshop service container is exposing port 8080 and mapping it into the container's 8080. Verify that your setup is working by visiting: http://localhost:8080/search?q=docker.
In the next section, we'll learn how to enable Java Monitoring Tooling to work with containers.
Java Monitoring Tooling

In this section we're going to configure our container to enable remote tooling.
In previous sections of the workshop, we ran all our tooling without any configuration. In previous versions of Java you would have had to configure things even for local monitoring; this is no longer the case with Java 6+:
Any application that is started on the Java SE 6 platform will support the Attach API, and so will automatically be made available for local monitoring and management when needed.
Since the docker-compose network and containers run separately from our host (consider them a different machine), we need to enable remote monitoring and management.
We’ll set the following properties as a JAVA_OPTS environment variable when we start our workshop server container:

- Set the JMX remote port to 8082: -Dcom.sun.management.jmxremote.port=8082
- Set the RMI port to 8082: -Dcom.sun.management.jmxremote.rmi.port=8082
- Disable SSL: -Dcom.sun.management.jmxremote.registry.ssl=false and -Dcom.sun.management.jmxremote.ssl=false
- Allow more than local-only connections: -Dcom.sun.management.jmxremote.local.only=false
- Set the RMI server hostname to 127.0.0.1. The default value for this will be the container's IP address, which we are overriding.

Set these values as an environment property on your workshop container:
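A sketch of what this could look like in the docker-compose service definition; java.rmi.server.hostname is our assumption for how the RMI server hostname above is set:

environment:
  JAVA_OPTS: >-
    -Dcom.sun.management.jmxremote.port=8082
    -Dcom.sun.management.jmxremote.rmi.port=8082
    -Dcom.sun.management.jmxremote.registry.ssl=false
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.local.only=false
    -Djava.rmi.server.hostname=127.0.0.1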
Spin up your services again with docker-compose up. Once the services are started, use docker ps to check the open ports on the workshop server. Notice that 8082 is now mapped as well.
We’ll use JDK Mission Control to create a JMX Connection.
Open JDK Mission Control. Notice that the JVM Browser no longer shows the two services (since they no longer are running on the local host):
Create a new JMX Connection using 0.0.0.0 and 8082 as the host and port:
With our setup, we can connect with other addresses as well:

- 127.0.0.1
- localhost (since we are exposing port 8082 on the container as port 8082 on the local host)
- the address of the en0 interface, found using ifconfig/ipconfig:
In the next section, we'll learn about some Docker tooling.
Docker Tooling
References