
Memory limitation when using tMatchIndexPredict for Apache Spark Streaming

About this task

When the tMatchIndexPredict component is used in a Spark Streaming Job, it repeatedly exchanges data with the Elasticsearch server. The request and response data are held in direct buffer caches. When the direct memory available to the Job execution is exhausted, the following error occurs:
Exception in thread "I/O dispatcher 1329" java.lang.OutOfMemoryError: Direct buffer memory
	at java.nio.Bits.reserveMemory(Bits.java:694)
	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
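
This error means that off-heap (direct) buffer memory, not Java heap memory, is exhausted. The following standalone class is a minimal sketch, unrelated to the Job itself, that reproduces the same type of error by allocating direct buffers without releasing them; run it with a small direct memory limit such as -XX:MaxDirectMemorySize=16m to see the failure quickly.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferExhaustion {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        while (true) {
            // Each call reserves 1 MB outside the Java heap; keeping the
            // references prevents the memory from being reclaimed, so the
            // JVM eventually throws "OutOfMemoryError: Direct buffer memory".
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
        }
    }
}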

Procedure

  1. Go to the Run job tab > Advanced settings.
  2. Select the Use specific JVM arguments check box.
    The Argument table is enabled.
  3. Click New.
    The Set the VM Argument dialog box is displayed.
  4. Enter -Djdk.nio.maxCachedBufferSize=1048576.
    This property limits the size of the temporary direct buffers kept in each thread's buffer cache: buffers larger than 1 MB (1048576 bytes) are no longer cached and can be released by the garbage collector.
  5. Click OK.
  6. Save the Job and run it.
    To confirm that the argument is applied at runtime, see the sketch after this procedure.
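
To verify that the argument is taken into account at runtime, the property can be read back from within the Job, for example in a tJava component. The following lines are a minimal sketch, assuming a tJava component is available in the Job; they are not required for the fix itself.

// Prints "1048576" when the Job was started with
// -Djdk.nio.maxCachedBufferSize=1048576, or "null" otherwise.
String maxCachedBufferSize = System.getProperty("jdk.nio.maxCachedBufferSize");
System.out.println("jdk.nio.maxCachedBufferSize = " + maxCachedBufferSize);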
