Overview
The new dynamic mapper release 5.1.0 is here, with a number of new features and fixes.
Check it out here:
New Features
- Reliable Messaging for MQTT connectors - QoS is now handled correctly in mappings and messages. For mappings with QoS 1 or 2, messages are only acknowledged at the broker once processing in Cumulocity has succeeded. On reconnect of the connector, unacknowledged messages are retransmitted and processed again if cleanSession is set to false. This applies to both inbound and outbound mappings.
- Max Failure Count Error handling - It is now possible to specify how many mapping errors should lead to a mapping being deactivated. This is useful if you have faulty mappings that are not working at all: they are deactivated automatically once they reach the max failure count threshold.
- MQTT v5 support - It is now possible to select either v3.1.1 or v5 for MQTT broker connections.
- Performance benchmark & script - We created a performance script for end-to-end testing, using MQTT Service as the broker.
- Robustness Service - Added a configuration setting for the maximum CPU time (in milliseconds) of JavaScript code, to prevent malicious code from starving the CPU.
- Robustness WebUI - When testing code-based mappings containing JavaScript in the browser, the code is executed separately in a WebWorker and execution is terminated after 250 ms. This increases the stability of the browser.
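The max failure count behavior described above can be sketched as a per-mapping error counter. This is a minimal illustration with hypothetical names (`FailureTracker`, `recordFailure`), not the mapper's actual classes:

```java
// Sketch of the max-failure-count idea: a mapping is deactivated once
// its processing errors reach a configured threshold.
// Hypothetical names, not the dynamic mapper's real implementation.
public class FailureTracker {
    private final int maxFailureCount;
    private int failures = 0;
    private boolean active = true;

    public FailureTracker(int maxFailureCount) {
        this.maxFailureCount = maxFailureCount;
    }

    /** Record one failed processing attempt; deactivate at the threshold. */
    public void recordFailure() {
        if (!active) return;
        failures++;
        if (failures >= maxFailureCount) {
            active = false; // mapping stops processing messages
        }
    }

    public boolean isActive() {
        return active;
    }

    public static void main(String[] args) {
        FailureTracker mapping = new FailureTracker(3);
        mapping.recordFailure();
        mapping.recordFailure();
        System.out.println("active after 2 errors: " + mapping.isActive());
        mapping.recordFailure();
        System.out.println("active after 3 errors: " + mapping.isActive());
    }
}
```

In the real feature the threshold is configured per mapping; a deactivated mapping can then be inspected, fixed, and re-enabled by the user.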
Performance Test results
We achieved 2500 msg/s with 2 CPUs and 4 GB RAM configured in the microservice manifest, for graphical mappings using JSONata. Messages were sent with QoS 0. Assuming linear scaling, 16 CPUs would allow up to 20k msg/s to be processed with a single instance of the dynamic mapper.
Code-based mappings are currently 50% slower, achieving 1250 msg/s (10k msg/s for 16 CPUs) with the same setup (2 CPUs, 4 GB RAM), as they use more resources. We have identified things to optimize here and are currently planning improvements.
QoS 1 messages could not be tested due to MQTT Service issues; the test will be repeated once those issues are resolved. We expect only a slight performance impact on the mapper side. One thread must block until processing is completed, but this is a virtual thread rather than a platform thread, so millions of them can exist at the same time without impacting the receiving threads, which are released immediately after a message is received.
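The virtual-thread argument can be illustrated with a small JDK-only sketch (requires Java 21+; names and numbers are illustrative, not the mapper's code). Each simulated in-flight message blocks its own virtual thread until "processing" finishes, and thousands of them complete without tying up OS threads:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: blocking a virtual thread until processing completes is cheap,
// so many QoS 1 messages can wait for their acknowledgement concurrently.
public class VirtualThreadDemo {

    /** Simulate n in-flight messages, each blocking until processing ends. */
    public static boolean processAll(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                vt.submit(() -> {
                    try {
                        Thread.sleep(50); // blocks the virtual thread only
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.countDown();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("10000 blocked messages completed: " + processAll(10_000));
    }
}
```

With platform threads, 10,000 concurrently blocked threads would be prohibitively expensive; with virtual threads the same workload finishes in roughly the sleep duration.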
The main limitation we saw in all cases is the C8Y HTTP client connection pool of the Microservice SDK, which is currently limited to 50 concurrent connections. There is room for improvement here in later releases: currently we block virtual threads using a semaphore of 50. If we increased the number of concurrent connections, we might also achieve higher throughput.
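The semaphore gating can be sketched as follows (requires Java 21+; `PlatformGate` and `sendToPlatform` are hypothetical names standing in for the mapper's calls into the Microservice SDK). The semaphore caps how many virtual threads hold an HTTP connection at once; the rest block cheaply until a permit is free:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cap concurrent platform calls with a semaphore of 50, mirroring
// the 50-connection limit of the SDK's HTTP client pool.
public class PlatformGate {
    private static final Semaphore HTTP_PERMITS = new Semaphore(50);
    private static final AtomicInteger inFlight = new AtomicInteger();
    private static final AtomicInteger peak = new AtomicInteger();

    static void sendToPlatform() throws InterruptedException {
        HTTP_PERMITS.acquire(); // blocks the virtual thread, not an OS thread
        try {
            int now = inFlight.incrementAndGet();
            peak.accumulateAndGet(now, Math::max);
            Thread.sleep(5); // simulate the HTTP round trip
            inFlight.decrementAndGet();
        } finally {
            HTTP_PERMITS.release();
        }
    }

    /** Run the given number of calls and report the peak concurrency seen. */
    public static int peakConcurrency(int calls) {
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < calls; i++) {
                vt.submit(() -> {
                    try {
                        sendToPlatform();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for all calls to drain
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("peak concurrent platform calls (cap 50): "
                + peakConcurrency(500));
    }
}
```

Raising the connection pool size and the semaphore together would be the knob to turn for higher throughput in a later release.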