TorQ 2018 Roadmap

Blog | Data Analytics & Engineering | 15 Nov 2017

Data Intellect

Below is our current TorQ roadmap for 2018. We welcome any additional suggestions or prioritisation requests.

Data Manipulation Library

The aim here is to produce a set of utilities for analysing and manipulating datasets. The target audience is new kdb+ developers and business users. Visualisation products (e.g. Panopticon) are likely target applications, as it is useful to pivot, rack, align and fill data for visualisation purposes. The aim is not to re-implement kdb+ syntax in a less obvious way, but to provide utilities for complex operations. For example, aligning asynchronous timeseries datasets can be done in multiple ways depending on the required output; experienced kdb+ programmers know the options and the optimal approaches, novices do not.

The work will use some contributed code on code.kx (e.g. general pivot functions) and new code will be developed. Effort will be put into making it as usable as possible, with descriptive errors that point users in the right direction, and full documentation.
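As an illustration of the kind of operation the library would wrap, the sketch below aligns a small trade table with the prevailing quote and then snaps it onto a regular time grid, using plain kdb+ (the tables and column names are invented for the example):

    / two small asynchronous timeseries, invented for illustration
    trade:([]time:09:30:00.100 09:30:00.400 09:30:01.200;sym:3#`AAPL;price:170.10 170.20 170.15)
    quote:([]time:09:30:00.000 09:30:00.300 09:30:01.000;sym:3#`AAPL;bid:170.00 170.10 170.10;ask:170.20 170.30 170.20)

    / align each trade with the prevailing quote (as-of join)
    aj[`sym`time;trade;quote]

    / alternative alignment: snap trades onto a regular one-second grid, filling forward
    grid:([]time:09:30:00 09:30:01 09:30:02;sym:3#`AAPL)
    update fills price from aj[`sym`time;grid;trade]

A utility library can put choices like these (as-of join, window join, racking onto a grid with fills) behind a single descriptive function with sensible defaults.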

Kerberos Integration

The main use case for Kerberos is that it avoids passing passwords around the network. You log on once from your local machine (this might be logging on to your Windows desktop in the morning). As part of authentication you get a ticket from the Kerberos server, which you can use to prove who you are to other services. From the kdb+ point of view, if Kerberos is enabled on the kdb+ side, the flow will be:

  • front end app (IDE, kdb+ enabled web app, other custom front end) grabs a Kerberos ticket from the local machine and sends it to kdb+
  • q takes the ticket and sends it to the Kerberos server: “this person claims to be Dave, can you confirm?”
  • the Kerberos server replies: “yes, valid ticket from a valid session, it is Dave”

From a user experience point of view, you don’t need to keep entering your password. From a kdb+ point of view, the backend never sees an actual password, so audits covering the security of data in transit etc. are easier to pass.
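kdb+ exposes the .z.pw callback for validating incoming connections, so a Kerberos integration would most likely hook in there, with the client sending its ticket in place of a password. A minimal sketch, assuming a hypothetical .krb.validate binding to an external GSSAPI/Kerberos library (no such function ships with TorQ or kdb+ today):

    / .z.pw is called on each incoming connection with the username and "password";
    / here the client sends its Kerberos ticket in the password slot
    .z.pw:{[user;ticket]
      / hypothetical external validation: returns the authenticated principal,
      / or an empty symbol if the ticket is invalid
      principal:.krb.validate ticket;
      / accept only if the ticket is valid and matches the claimed user
      (principal<>`) and principal=user }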

Improved Dependency Management

Currently there is no dependency management between versions of TorQ and kdb+, or between TorQ and any application built on top of it. We would like to change that, and make the mechanism available to clients building their own packages on top of TorQ.
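As a rough illustration of the shape this could take (the deps table, the .dep namespace and .proc.version are assumptions for the sketch, not an existing TorQ API), each package would declare minimum required versions and a check would run at startup:

    / illustrative dependency declaration for a package built on top of TorQ
    deps:([]component:`kdb`torq;minversion:3.4 3.0)

    / hypothetical startup check comparing declared minimums against what is running
    .dep.check:{[d]
      running:`kdb`torq!(.z.K;.proc.version);   / .z.K is the kdb+ version; the TorQ version name is assumed
      issues:select from d where minversion>running component;
      if[count issues;'"unmet dependencies: ",", " sv string exec component from issues]
      }

    .dep.check deps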

Integration with Monitoring Utilities

In the past we’ve integrated TorQ with several monitoring packages including Geneos, Supervisord, (M)Monit, Nagios, LogStash and Splunk. It makes more sense to integrate with existing tools than to build monitoring into TorQ itself. The proposal is to formalise some of these integrations and make them freely available and more easily accessible.

Kx On-Demand Mode

Kx announced the licensing for their on-demand mode at their NYC Meetup last week. The licensing is essentially per-minute, with reduced rates where the core is idle. TorQ in its current form sits reasonably well with an on-demand model as it has dynamic process discovery and load balancing, so processes can be replicated on demand to serve short term needs. The goal here is to tweak TorQ to fit neatly with this licensing model, including perhaps a “low cost” mode: the TorQ framework relies on regular timers for various purposes, which would increase the cost (as the core is never idle).
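As a crude illustration of the idea, using only the built-in kdb+ timer rather than TorQ’s own timer wrapper (and with the lowcost function invented for the sketch), a low-cost mode could simply switch the timer off, or back it right off, when nothing needs the process:

    / normal mode: tick every second so scheduled jobs (heartbeats, log flushes etc.) run promptly
    \t 1000

    / hypothetical low-cost switch: stop the timer so an unused process can sit idle
    lowcost:{[on] system "t ",string $[on;0;1000]}

    lowcost 1b   / enter low-cost mode: timer off, core can idle
    lowcost 0b   / resume normal scheduling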

Application: TRTH REST API Downloader

Thomson Reuters have a REST API. There are several steps for downloading datasets and persisting them in kdb+, including:

  • authentication
  • data request
  • polling for results
  • download
  • load into kdb+

The focus is to create an application pack, similar to TorQ-FX, allowing templated requests for different datasets to be run against the TRTH API.
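The kdb+ building blocks for such a pack are .Q.hp/.Q.hg for HTTP, .j.j/.j.k for JSON, the timer for polling and 0: for loading the result. The sketch below wires those together; the URLs, payloads and response field names are placeholders rather than the real TRTH API, and since .Q.hp cannot set custom headers, a production pack may need to shell out to curl (or use a dedicated HTTP library) for authenticated calls:

    / placeholders throughout: URLs, payloads and field names are not the real TRTH API

    / 1. authenticate: POST JSON credentials, parse the JSON response, keep the token
    resp:.j.k .Q.hp["https://example.com/api/auth";.h.ty`json;.j.j `user`pass!("me";"secret")]
    token:resp`token

    / 2. submit a templated data request; the response identifies the extraction job
    job:.j.k .Q.hp["https://example.com/api/extract";.h.ty`json;.j.j `dataset`date!("trades";"2017.11.15")]

    / 4+5. download the completed extract and load it into kdb+
    download:{csv:"\n" vs .Q.hg "https://example.com/api/result?id=",job`id;
      `trades set ("SPFJ";enlist",") 0: csv}

    / 3. poll on the kdb+ timer; when the job reports done, stop polling and download
    .z.ts:{status:.j.k .Q.hg "https://example.com/api/status?id=",job`id;
      if["done"~status`state;system"t 0";download[]]}
    \t 5000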

Application: IEX Data Capture

IEX have recently released a full data feed. The plan is to build an application pack to capture and store this feed, potentially leveraging the work of Himanshu Gupta.
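As a starting point, IEX also publish a free web API over the same data. A minimal sketch of polling its TOPS endpoint from q and appending to an in-memory table is below; the endpoint and field names reflect the API as publicised at the time and should be checked against current IEX documentation, and the full feed itself would need a proper feed handler rather than polling:

    / in-memory capture table
    tops:([]time:`timestamp$();sym:`$();bid:`float$();ask:`float$();lastprice:`float$())

    / poll the TOPS endpoint for a few symbols and append the snapshot
    poll:{[syms]
      j:.j.k .Q.hg "https://api.iextrading.com/1.0/tops?symbols=","," sv string syms;
      `tops insert (count[j]#.z.p;`$j@\:`symbol;j@\:`bidPrice;j@\:`askPrice;j@\:`lastSalePrice)}

    .z.ts:{poll `AAPL`IEX}   / capture on the kdb+ timer
    \t 1000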
