At the core of everything we do is the drive to maximise mission impact for our end users.
We deliver scalable operational advantage through the application of high-grade engineering and deep scientific rigour. This is embodied by our RED methodology: everything we do should be Repeatable, Explainable, and Deployable.
Modern science suffers from a crisis of repeatability: papers and models are routinely published with little to no thought given to future replication of the work.
At every stage of bringing a solution into operational use, we believe that the decisions, data, and processing brought to bear should be deterministic and reproducible. Key design decisions, feature processing, and data engineering should all be part of a rigorously repeatable pipeline for rapid experimentation that produces trusted and replicable results every time.
We leverage world-class engineering to fool-proof the scientific development process, ensuring that every experiment can be independently replicated with minimum fuss and at maximum pace.
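As a minimal sketch of what that looks like in practice (the names ExperimentConfig and run_experiment are purely illustrative, not a reference to any specific internal tooling), every run can be driven by a single versioned configuration that pins its data, its feature code, and its random seeds:

```python
# Illustrative sketch: pin every input to an experiment so a re-run with the
# same config reproduces the same result. Not a reference to any specific tool.
import hashlib
import json
import random
from dataclasses import dataclass, asdict

import numpy as np


@dataclass(frozen=True)
class ExperimentConfig:
    dataset_version: str    # e.g. a content hash or version tag of the input data
    feature_pipeline: str   # identifier of the versioned feature-processing code
    model: str
    seed: int


def run_experiment(cfg: ExperimentConfig) -> dict:
    # Seed every source of randomness from the single config value.
    random.seed(cfg.seed)
    np.random.seed(cfg.seed)

    # Hash the full config so every result can be traced back to the exact
    # inputs that produced it.
    run_id = hashlib.sha256(
        json.dumps(asdict(cfg), sort_keys=True).encode()
    ).hexdigest()[:12]

    # ... deterministic training / evaluation would happen here ...
    return {"run_id": run_id, "config": asdict(cfg)}


print(run_experiment(ExperimentConfig("data-v3", "features-v2", "cnn-small", seed=42)))
```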
AI models are typically trained as "black boxes", with no input from a designer as to how they should learn to make decisions; the only measured objective is how well the model performs on the given data. Many data scientists consider this "good enough", labouring under the impression that understanding the decision-making processes of an AI model is out of reach, or at best an afterthought.
For any system to deliver reliable and actionable operational intelligence, users must have faith that the way a model makes its decisions reflects the realities of their operating environment.
That trust can only come from designing for real explainability throughout the development lifecycle, and from ensuring that the insights produced for real-world operators contain enough detail for them to understand each individual judgement as well as the overall limits of the system's performance.
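As a deliberately tiny illustration of the kind of output we mean (the linear scorer, feature names, and weights below are invented for the example), each decision can be handed to the operator together with a confidence and the factors that drove it:

```python
# Illustrative sketch: return each decision together with a confidence and the
# features that drove it. The linear scorer and weights are invented examples.
import numpy as np

feature_names = ["feature_a", "feature_b", "feature_c"]
weights = np.array([0.8, 1.5, -0.4])   # "learned" weights (illustrative only)
bias = -0.2


def predict_with_explanation(x: np.ndarray) -> dict:
    contributions = weights * x                    # per-feature push on the score
    score = float(contributions.sum() + bias)
    confidence = 1.0 / (1.0 + np.exp(-score))      # sigmoid score -> confidence
    drivers = sorted(zip(feature_names, contributions),
                     key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "decision": bool(confidence > 0.5),
        "confidence": float(confidence),
        # The operator sees which inputs pushed the decision, and by how much.
        "drivers": [(name, float(c)) for name, c in drivers],
    }


print(predict_with_explanation(np.array([1.2, 0.9, 0.3])))
```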
Model deployment can cover a vast range of use cases and targets, from extreme low-power edge devices to massive-scale cloud environments. Low-friction model deployments are critically dependent on making the right decisions throughout a development programme, with careful consideration of the target environment.
With expertise in everything from optimising CUDA access patterns in multi-node, multi-core secure cloud environments to building AXI UDMA streaming IP blocks for executing quantised AI models on FPGAs, our fundamental commitment to repeatability and explainability ensures that, whatever the target runtime environment, we can reliably reproduce high-accuracy, high-throughput, highly efficient models coupled with detailed performance insights.
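As one small, generic illustration of the trade-offs involved (this is textbook symmetric post-training quantisation, not a description of any particular toolchain), weights destined for fixed-point edge hardware can be mapped to int8 alongside a recorded scale factor:

```python
# Illustrative sketch: symmetric post-training quantisation of a weight tensor
# to int8, as needed before running on fixed-point edge/FPGA hardware.
import numpy as np


def quantise_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    scale = float(np.abs(w).max()) / 127.0              # map largest weight to int8 range
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale


def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale


w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, scale = quantise_int8(w)
print("max abs quantisation error:", float(np.abs(w - dequantise(q, scale)).max()))
```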
We harness a broad spectrum of expertise and capabilities to solve your most pressing operational problems.
For some examples of how we can leverage these skills for real operational impact, see the example projects below.
Don't see what you're looking for? Have a wicked-hard problem you'd like to chat about?
In electronic warfare, targeting CEMA effects against uncrewed aerial, ground, and maritime systems (collectively, "drones") depends upon the ability to rapidly and accurately classify the type of drone(s) in the immediate threat area. For drones controlled over RF channels (i.e., non-fibre-optic drones), this information can be gleaned through automated detection and classification of signals in brief broadband detector sweeps from a software-defined radio.
There are two key challenges to overcome in solving this classification problem. The first is the production of a sufficiently diverse dataset describing different drone behaviours in different environments, which can be achieved through classical signal processing and high-performance software engineering.
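A minimal sketch of what that data synthesis can look like (all parameters are invented for illustration and not drawn from any real emitter): classical DSP generates labelled complex-IQ recordings, such as the frequency-hopping control-link burst in noise below.

```python
# Illustrative sketch: synthesise a labelled training example with classical DSP,
# here a frequency-hopping burst in complex IQ plus noise. Parameters are invented.
import numpy as np


def synth_hopping_burst(fs=2e6, n_samples=200_000, hop_ms=2.0, snr_db=10.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    hop_len = int(hop_ms * 1e-3 * fs)                       # samples per hop dwell
    n_hops = -(-n_samples // hop_len)                       # ceiling division
    hop_freqs = rng.uniform(-fs / 4, fs / 4, size=n_hops)   # random carrier per dwell
    freq = np.repeat(hop_freqs, hop_len)[:n_samples]
    signal = np.exp(2j * np.pi * freq * t)
    noise = (rng.standard_normal(n_samples)
             + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    return 10 ** (snr_db / 20) * signal + noise             # labelled "drone-like" IQ


iq = synth_hopping_burst()
```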
The second challenge is one of deployability: this type of solution must be able to run on low-power embedded devices with a TDP budget of a couple of watts. Within that power budget, the system must process raw IQ through FPGA DSP blocks (e.g., producing a spectrogram of the sweep for classification), run the AI model, and interpret and integrate its outputs (including confidence explanations) for consumption by the wider EW/CEMA system of interest.
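A minimal sketch of that processing chain (the scipy STFT and the stub classifier below stand in for the FPGA DSP and quantised-model stages; the class labels are assumptions for illustration only):

```python
# Illustrative sketch: broadband IQ sweep -> spectrogram -> classification.
import numpy as np
from scipy.signal import stft


def iq_to_spectrogram(iq: np.ndarray, fs: float, nperseg: int = 256) -> np.ndarray:
    # Short-time Fourier transform of the complex baseband sweep -> power in dB.
    _, _, Z = stft(iq, fs=fs, nperseg=nperseg, return_onesided=False)
    return 20 * np.log10(np.abs(Z) + 1e-12)


def classify(spectrogram: np.ndarray) -> dict:
    # Stand-in for the quantised on-device model: a real system would run int8
    # inference here and return calibrated, explainable confidences.
    classes = ["fpv_analogue", "cots_quad", "fixed_wing", "background"]
    scores = np.random.default_rng(0).dirichlet(np.ones(len(classes)))
    best = int(np.argmax(scores))
    return {"label": classes[best], "confidence": float(scores[best])}


rng = np.random.default_rng(1)
iq = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)   # noise-only sweep
print(classify(iq_to_spectrogram(iq, fs=2e6)))
```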
All of this has to be done with a watchful eye on the real end users of such a system: how much cognitive budget they have to spare for it (and therefore how automated it must be), and what level of performance they would consider useful on the battlefield (conversations with end users suggest that anything that improves on the Mk1 Human Eyeball and artillery-barraged Human Ear would be welcome!).
When asylum seekers are interviewed at the border, their caseworker must navigate a complex array of policy guidance specific to the applicant's country of origin. Not only must they accurately determine the validity of the claim being made, they are also responsible for spotting references to potential threats and human rights abuses prevalent in the asylum seeker's home country; they have a dual responsibility to both assess and protect.
Real-time transcription, translation, topic modelling, and retrieval-augmented generation can provide instantaneous advice to such a caseworker, surfacing and accurately citing the most relevant snippets of policy as the interview progresses. An LLM reasons over these snippets, explaining why each policy item has been surfaced, and links back to the specific policy in question, meaning that nuance in the policy wording cannot be lost and the LLM cannot hallucinate harmful policy.
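A minimal sketch of the retrieval-and-cite step (the policy snippet IDs, the placeholder embedding function, and the prompt wording are all assumptions for illustration; no specific provider or policy corpus is implied):

```python
# Illustrative sketch: embed policy snippets, match the live transcript against
# them, and build a prompt that forces the LLM to cite the snippet it relied on.
import hashlib

import numpy as np

policy_snippets = [
    {"id": "POLICY-4.2", "text": "Guidance on assessing credibility of trafficking claims..."},
    {"id": "POLICY-7.1", "text": "Indicators of risk on return for specific groups..."},
]


def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would use a sentence-embedding model.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)


snippet_vectors = np.stack([embed(s["text"]) for s in policy_snippets])


def retrieve(transcript_chunk: str, k: int = 2) -> list[dict]:
    scores = snippet_vectors @ embed(transcript_chunk)      # cosine similarity (unit vectors)
    return [policy_snippets[i] for i in np.argsort(scores)[::-1][:k]]


def build_prompt(transcript_chunk: str) -> str:
    cited = "\n".join(f"[{h['id']}] {h['text']}" for h in retrieve(transcript_chunk))
    # Constrain the LLM to the retrieved extracts and require a citation per point,
    # so every piece of advice links back to the underlying policy wording.
    return ("Using ONLY the policy extracts below, advise the caseworker and cite the "
            f"extract ID for every point made.\n\n{cited}\n\nInterview so far:\n{transcript_chunk}")


print(build_prompt("The applicant described threats received after refusing recruitment."))
```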
Ultimate responsibility for the case's outcomes remains in the hands of the expert caseworker, with a human-machine team proving significantly more efficient and effective than either a human or an AI agent could be when operating alone.
Towed sonar arrays consist of a string of hydrophones deployed behind a maritime vessel for active or passive monitoring of the underwater acoustic environment for potential threats. For additional complexity, they can be part of a collaborative team of vessels (often a mixture of crewed and autonomous), with some vessels providing active sonar "pings" whilst others monitor for returns on their towed arrays.
Detection of contacts on these arrays requires monitoring for unusual acoustic signatures. AI anomaly detection techniques can be applied here: training on what "normal" subsurface activity looks like, learning to classify a number of "known" signatures, and then flagging signatures from novel, previously unseen vessels. Much as with detecting patterns of RF behaviour for drone detection, a dataset of these acoustic signatures and background noise can be used to train models that predict how probable a given set of observations is, and therefore the probability that an anomalous signature is present.
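A minimal sketch of that scoring idea (the independent-Gaussian model and the synthetic band-energy data below are simplifications for illustration; a real system would use a richer density model):

```python
# Illustrative sketch: fit a simple model of "normal" per-band acoustic energy,
# then score new observations by how improbable they are under that model.
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: per-frequency-band energy vectors from benign recordings.
normal_obs = rng.normal(loc=1.0, scale=0.1, size=(5_000, 32))
mu, sigma = normal_obs.mean(axis=0), normal_obs.std(axis=0) + 1e-9


def anomaly_score(x: np.ndarray) -> float:
    # Negative log-likelihood under the independent-Gaussian "normal" model:
    # the less probable the observation, the higher the score.
    z = (x - mu) / sigma
    return float(0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2)))


# Alert threshold taken from the scores seen on the training data itself.
threshold = max(anomaly_score(x) for x in normal_obs)

background = rng.normal(1.0, 0.1, size=32)
contact = background + np.where(np.arange(32) == 7, 1.5, 0.0)   # strong tone in one band

for name, obs in [("background", background), ("unknown contact", contact)]:
    s = anomaly_score(obs)
    print(f"{name}: score={s:.1f} anomalous={s > threshold}")
```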
Not only does this approach enable richer situational awareness for the sonar operator (whether in a crewed subsurface vessel, or an operator of a maritime drone swarm), it can be used to automatically gather novel signatures for after-action analysis, and to tip/alert other systems to the presence of signatures worthy of exploration. This could mean active sonar examination by a semi-autonomous vehicle, or the invocation of more computationally intensive tracking algorithms to localise and track a potential threat vector.
Ready to talk about how we can help solve your most pressing operational problems?