We are delighted to announce release v3.1 of TorQ, the latest instalment of our kdb+ framework.
The first of our great new additions is kafka.q, which provides q language bindings for Apache Kafka, the ‘distributed streaming platform’: a real-time messaging system with persistent storage in message logs. An application architecture built around Kafka could dispense with the tickerplant component entirely: RDBs and other real-time clients would query Kafka for their offsets on startup and replay the data they need.
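To give a flavour of what this looks like, here is a minimal consumer sketch. The function names below (`.kfk.Consumer`, `.kfk.Sub` and the callback) are assumptions based on the style of KX's kfk interface to librdkafka; the exact names and signatures exposed by kafka.q may differ, so treat this as illustrative rather than a definitive usage of the script.

```q
/ illustrative sketch only - kafka.q's actual API may differ
cfg:`metadata.broker.list`group.id!`localhost:9092`0   / broker and consumer group config
client:.kfk.Consumer[cfg]                              / hypothetical: create a consumer client
.kfk.Sub[client;`trade;enlist -1i]                     / hypothetical: subscribe to the trade topic
/ a callback upserting each consumed message into a local table,
/ playing the role the tickerplant's upd would normally fill
updcb:{[msg]`trade upsert msg`data}
```

The key architectural point is that, because Kafka persists the message log, a recovering RDB can replay from its last committed offset rather than relying on a tickerplant log file.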
The next new offering, datareplay.q, was mentioned in a previous blog post. It generates tickerplant-style function calls from historical data, which can then be executed against subscriber functions - useful for testing or debugging a subscriber against a known dataset. It can load the data from the current TorQ session, or from a remote HDB given its connection handle. It can chunk the data by time increments (as if the tickerplant were running in batch mode), and can also generate calls to a custom timer function (defaulting to .z.ts) for the same increments. The functions provided by this utility are made available in the .datareplay namespace.
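A rough sketch of how such a replay might look follows. The parameter dictionary keys and the stream-building function name are illustrative assumptions rather than the documented .datareplay API, so please check the TorQ documentation for the real names:

```q
/ sketch; dictionary keys and function name are assumptions, not the documented API
p:`tabs`sts`ets`interval!                       / tables, start/end time, chunk size
  (`trade`quote;2019.01.01D09:00;2019.01.01D17:00;0D00:01)
msgs:.datareplay.tablesToDataStream p           / build a list of (upd;tablename;data) calls
value each msgs                                 / replay each call against the local upd handler
```

Because the generated messages mirror what a tickerplant would publish, the subscriber under test needs no modification: its normal upd and timer functions are exercised exactly as in production.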
The final new feature is the subscribercutoff.q script, which provides functionality for cutting off slow subscribers on any TorQ process. Rather than disconnecting a client the first time it falls behind, the script only cuts off subscribers that remain slow across successive checks - giving clients a chance to tidy up their behaviour, and avoiding disconnections caused by a momentary spike just before a check was performed.
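Conceptually the configuration might look something like the following. The variable names below are assumptions chosen for illustration, not the confirmed settings of subscribercutoff.q; the underlying mechanism in kdb+ is inspecting per-handle output queue sizes via .z.W on a timer:

```q
/ illustrative configuration; actual variable names may differ
.subcut.enabled:1b          / turn on slow subscriber checks
.subcut.maxsize:100000000   / byte limit on a subscriber's output queue
.subcut.breachlimit:3       / consecutive breaches allowed before disconnect
.subcut.checkfreq:0D00:01   / how often the timer inspects queue sizes (.z.W)
```

Requiring several consecutive breaches is what gives a client with a brief spike time to drain its queue before any action is taken.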
As always, for any questions on TorQ or how we can help, please get in touch.