Can you please advise on the preferred way of configuring microservice resources? My assumption was that it always has to go through the manifest file (cumulocity.json), for example:
"resources": {
"cpu": "1",
"memory": "1G"
}
Why would anyone prefer heap configuration over the manifest resources configuration? AFAIK, heap configuration only partially influences the memory consumption of the whole microservice (k8s pod).
The JVM size is configured dynamically based on the microservice manifest (cumulocity.json), so there is usually no need to define it in the Maven plugin.
You can also see in the log output how the JVM is configured; in this example, 4G is configured in the microservice manifest:
MEMORY_LIMIT: 3814MB
381MB is left for system
3433MB is left for application
Using JDK8+ memory settings
Java Memory Settings: -Xmx3090m -XX:MaxMetaspaceSize=343m, memory limit: 4G
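For reference, a resources block along these lines in cumulocity.json would produce the 4G limit shown in that log (sketch only: the remaining manifest fields are omitted, and only the memory entry drives the JVM sizing):
"resources": {
    "memory": "4G"
}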
Thank you! Complementary to your answer:
Building the microservice with the microservice-package-maven-plugin creates an entrypoint script where this configuration is applied. It can be inspected under target/docker-work/resources/entrypoint.sh:
#!/bin/sh
if [ -n "$MEMORY_LIMIT" ];
then
    value=$(numfmt --from=auto --grouping $MEMORY_LIMIT)
    value=$(($value/1048576)) # convert to MB
    echo "MEMORY_LIMIT: ${value}MB"
    memory_left=$(awk "BEGIN { memory = int($value * 0.1); if (memory <50) {memory = 50} print memory} ")
    echo "${memory_left}MB is left for system"
    value=$(awk "BEGIN { print(int($value - $memory_left))}") # leave memory space for system
    echo "${value}MB is left for application"
    if [ $value -lt "128" ]; # if less than 128MB, fail
    then
        echo "Memory left for application is to small must be at lest 128MB"
        exit 1;
    else
        metaspace=$(awk "BEGIN { memory= int($value * 0.1); if (memory >1024) {memory = 1024} else if ( memory < 64 ){ memory = 64 } print memory} ") # take 10% of available memory for metaspace
        heap=$(($value - $metaspace))
    fi
    jvm_heap=""
    jvm_metaspace=""
    jvm_variable_heap="-Xmx${heap}m"
    echo "Using JDK8+ memory settings"
    jvm_variable_metaspace="-XX:MaxMetaspaceSize=${metaspace}m"
    export JAVA_MEM="${jvm_heap:-`echo $jvm_variable_heap`} ${jvm_metaspace:-`echo $jvm_variable_metaspace`}"
    echo "Java Memory Settings: $JAVA_MEM, memory limit: $MEMORY_LIMIT"
fi
jvm_gc=${JAVA_GC:-"-XX:+UseG1GC -XX:+UseStringDeduplication -XX:MinHeapFreeRatio=25 -XX:MaxHeapFreeRatio=75"}
jvm_mem=${JAVA_MEM:-" "}
jvm_opts=${JAVA_OPTS:-"-server -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/unify/heap-dump-%p.hprof"}
arguments=${ARGUMENTS:-" --package.name=unify --package.directory=unify"}
proxy_params=""
if [ -n "$PROXY_HTTP_HOST" ]; then proxy_params="-Dhttp.proxyHost=${PROXY_HTTP_HOST} -DproxyHost=${PROXY_HTTP_HOST}"; fi
if [ -n "$PROXY_HTTP_PORT" ]; then proxy_params="${proxy_params} -Dhttp.proxyPort=${PROXY_HTTP_PORT} -DproxyPort=${PROXY_HTTP_PORT}"; fi
if [ -n "$PROXY_HTTP_NON_PROXY_HOSTS" ]; then proxy_params="${proxy_params} -Dhttp.nonProxyHosts=\"${PROXY_HTTP_NON_PROXY_HOSTS}\""; fi
if [ -n "$PROXY_HTTPS_HOST" ]; then proxy_params="${proxy_params} -Dhttps.proxyHost=${PROXY_HTTPS_HOST}"; fi
if [ -n "$PROXY_HTTPS_PORT" ]; then proxy_params="${proxy_params} -Dhttps.proxyPort=${PROXY_HTTPS_PORT}"; fi
if [ -n "$PROXY_SOCKS_HOST" ]; then proxy_params="${proxy_params} -DsocksProxyHost=${PROXY_SOCKS_HOST}"; fi
if [ -n "$PROXY_SOCKS_PORT" ]; then proxy_params="${proxy_params} -DsocksProxyPort=${PROXY_SOCKS_PORT}"; fi
mkdir -p /var/log/unify; echo "heap dumps /var/log/unify/heap-dump-<pid>.hprof"
java ${jvm_opts} ${jvm_gc} ${jvm_mem} ${proxy_params} -jar /data/unify.jar ${arguments}
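Tracing the 4G example from the log above through this script (numfmt --from=auto interprets "4G" as an SI value, i.e. 4,000,000,000 bytes):
4,000,000,000 / 1048576  -> 3814 MB  (MEMORY_LIMIT)
int(3814 * 0.1)          -> 381 MB   reserved for the system
3814 - 381               -> 3433 MB  left for the application
int(3433 * 0.1)          -> 343 MB   metaspace (-XX:MaxMetaspaceSize=343m)
3433 - 343               -> 3090 MB  heap (-Xmx3090m)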
One more finding on this topic:
There is a way to set requestedResources in the cumulocity.json manifest file. This guarantees a minimum pod size in case your JVM process needs more memory than the default (128Mi).
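A sketch of how that could look alongside the resources block from the question (the requestedResources values here are illustrative assumptions in Kubernetes-style quantities, and the remaining manifest fields are omitted):
"requestedResources": {
    "cpu": "250m",
    "memory": "512Mi"
},
"resources": {
    "cpu": "1",
    "memory": "1G"
}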