I have a Spring Boot application running on an Ubuntu 20 EC2 instance, where I create around 200,000 threads to write data into Kafka. However, it fails repeatedly with the following error:
```
[138.470s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f828d055000-0x00007f828d059000).
[138.470s][warning][os,thread] Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: [138.472s][warning][os,thread] Failed to start thread - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
INFO: os::commit_memory(0x00007f828cf54000, 16384, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 16384 bytes for committing reserved memory.
```
I have tried increasing the memory of my EC2 instance to 64 GB, which has been of no use. I am using `docker stats` and `htop` to monitor the memory footprint of the process, and when it reaches around 10 GB it fails with the error above.
I have also tried increasing the heap size and the maximum memory for the process:

```
docker run --rm --name test -e JAVA_OPTS=-Xmx64g -v /workspace/logs/test:/logs -t test:master
```
Below is my code:

```java
final int LIMIT = 200000;
ExecutorService executorService = Executors.newFixedThreadPool(LIMIT);
final CountDownLatch latch = new CountDownLatch(LIMIT);
for (int i = 1; i <= LIMIT; i++) {
    final int counter = i;
    executorService.execute(() -> {
        try {
            kafkaTemplate.send("rf-data", Integer.toString(123), "asdsadsd");
            kafkaTemplate.send("rf-data", Integer.toString(123), "zczxczxczxc");
            latch.countDown();
        } catch (Exception e) {
            logger.error("Error sending data: ", e);
        }
    });
}
try {
    latch.await();
} catch (InterruptedException e) {
    logger.error("error ltach", e);
}
```
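For reference, this is a sketch of the direction I could take instead: the same sends, but submitted to a much smaller fixed pool. It reuses `kafkaTemplate` and `logger` from the snippet above, and the pool size of 100 is just an arbitrary placeholder I have not tuned:

```java
final int LIMIT = 200000;
final int POOL_SIZE = 100; // arbitrary placeholder, not tuned
ExecutorService executorService = Executors.newFixedThreadPool(POOL_SIZE);
final CountDownLatch latch = new CountDownLatch(LIMIT);
for (int i = 1; i <= LIMIT; i++) {
    executorService.execute(() -> {
        try {
            kafkaTemplate.send("rf-data", Integer.toString(123), "asdsadsd");
            kafkaTemplate.send("rf-data", Integer.toString(123), "zczxczxczxc");
        } catch (Exception e) {
            logger.error("Error sending data: ", e);
        } finally {
            latch.countDown(); // count down even on failure so await() cannot hang
        }
    });
}
try {
    latch.await();
} catch (InterruptedException e) {
    logger.error("error latch", e);
}
executorService.shutdown();
```

Aside from the pool size, the only change in this sketch is that `countDown()` moved into a `finally` block so a failed send cannot leave `latch.await()` waiting forever. I would still like to understand why the original version runs out of memory even with 64 GB available.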