Explore our Anomaly Detection Services
Developing Anomaly Detection services has been a priority for Tremend in recent years. The expertise we have gained in separating outliers from regular behavior applies to many scenarios: preventing machine shutdowns by analyzing a system's data, prioritizing unique events for review by a human expert, or detecting fraud in a bank's transactional data.
Traditional approaches rely on hand-crafted rules in a decision tree that becomes burdensome to maintain, or on reviewing large numbers of alerts in a repetitive, tedious cycle in order to find the true anomalies and irregularities that conceal suspicious behavior.
Our expertise in detecting anomalies involves examining time-series data points from different technical perspectives: decomposing the signal into trend and seasonality, treating individual points as members of larger clusters, or modeling the data as time-delimited graphs with intrinsic structure. This data is then analyzed with specialized statistical and ML models, either supervised, where we find anomalies similar to previously labeled ones, or unsupervised, where no labels are involved and we try to detect novel scenarios of interest.
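As a simplified illustration of the first perspective (a sketch with illustrative names and thresholds, not our production pipeline), the function below decomposes a periodic series into trend and seasonality with a centered moving average and flags points whose residual lies more than k standard deviations from the mean:

```python
from collections import defaultdict

def detect_anomalies(series, season=7, k=3.0):
    """Decompose a series into trend + seasonality and flag points whose
    residual lies more than k standard deviations from the mean."""
    n, half = len(series), season // 2
    # Trend: centered moving average over one full season (interior points
    # only, so truncated edge windows do not produce spurious flags)
    trend = {i: sum(series[i - half:i + half + 1]) / season
             for i in range(half, n - half)}
    detrended = {i: series[i] - t for i, t in trend.items()}
    # Seasonality: average detrended value at each position in the cycle
    by_pos = defaultdict(list)
    for i, d in detrended.items():
        by_pos[i % season].append(d)
    seasonal = {p: sum(v) / len(v) for p, v in by_pos.items()}
    # Residual: what trend and seasonality together fail to explain
    residual = {i: d - seasonal[i % season] for i, d in detrended.items()}
    mu = sum(residual.values()) / len(residual)
    sigma = (sum((r - mu) ** 2 for r in residual.values())
             / len(residual)) ** 0.5
    return sorted(i for i, r in residual.items() if abs(r - mu) > k * sigma)
```

On a weekly-periodic series with one injected spike, only the spiked index stands out in the residual, so only that index is returned.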
As this problem involves vast amounts of data, we often employ big data tools (Elasticsearch, Kibana, Logstash) and cloud technologies (Azure, AWS, Databricks) for faster and more scalable processing.
Cases where we have successfully implemented Anomaly Detection systems
1. Anomaly Detection in System Monitoring
Logging and alerting systems such as Nagios are designed to help engineers set up notifications that prevent, or allow rapid handling of, scenarios in which potential threats arise. Our experience shows that when these alerts arrive in large numbers, the ones that really matter tend to be difficult to isolate from the rest. The result is a failure to identify a problematic situation and settle it before it escalates.
Another impediment in handling large decision trees of hand-crafted rules is the complexity of building them. The specialist needs to look at several metrics (often out of hundreds) and extract the patterns that may signal unwanted behavior. Even when relevant scenarios are identified, they must be translated into one or more branches of intricate conditions such that, on the one hand, they do not interfere with the existing ones and, on the other, they do not raise alerts for similar but legitimate behavior.
Machine learning models are excellent at finding patterns, which makes them a great fit for this problem. By looking at previously detected incidents, a model can derive the specific set of conditions that distinguishes them from regular events. There is no need for a human specialist to express those conditions programmatically, as the algorithm will learn a much better version of them. The algorithm does, however, need access to all the data sources a specialist would use to isolate the incident.
By using AI to detect anomalies, we can analyze high-dimensional data from multiple sources much faster than manual review allows. This way, we reduced the number of false-positive alerts by more than 22% and detected previously unseen patterns that looked normal according to the hand-crafted rules but, in the given context, were threatening the performance of the system.
2. Anomaly Detection in Financial Transactional Data
In the Graphomaly project, we focused on adapting several anomaly detection methods, both supervised and unsupervised, modeling the transactional data as time series or graphs in order to detect unusual patterns that may conceal a fraud scheme. The goal is to ease the identification of fraud patterns similar to those seen in the past, but also to draw human operators' attention to potentially harmful new transaction patterns.
Although the supervised approach tends to perform better on such problems when we have knowledge and labels of what anomalous data looks like, our experience shows that the most sought-after models are the unsupervised ones. Even though they are harder to transfer from one case to another and harder to explain, they can capture, from the data alone, the set of points that exhibit previously unseen behavior.
By processing and examining the data from different points of view, supervised or unsupervised, as time-series signals, as individual transactions, or starting from known static sub-graphs, we aim to detect suspicious behavior stemming from previously encountered fraud schemes as well as entirely new, unseen patterns, since deceitful techniques transform and evolve as well.
Given that in financial systems only a small percentage of transactions are fraudulent (usually less than 0.1% of data points), there is a strong imbalance between the classes, making the problem even more difficult to solve, especially when the anomalies come from various scenarios. In real-world settings, the imbalance may be even more severe, with an anomaly ratio below 0.2 per mille. To counteract the disproportion, we employ techniques such as identifying strongly legitimate behavior in transactions, thus reducing the number of regular transactions, or oversampling instances from the target class.
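As a simplified illustration of the oversampling idea (function name and ratio are illustrative, and random duplication here stands in for more refined schemes such as SMOTE), the sketch below duplicates minority-class rows until they reach a chosen fraction of the majority count:

```python
import random

def oversample_minority(X, y, target_label=1, ratio=0.1, seed=0):
    """Duplicate minority-class rows (with replacement) until they amount
    to `ratio` times the majority count, leaving the inputs untouched."""
    rng = random.Random(seed)
    minority = [x for x, label in zip(X, y) if label == target_label]
    majority_count = sum(1 for label in y if label != target_label)
    needed = int(majority_count * ratio) - len(minority)
    X_out, y_out = list(X), list(y)  # copies, so X and y stay unchanged
    for _ in range(max(0, needed)):
        X_out.append(rng.choice(minority))
        y_out.append(target_label)
    return X_out, y_out
```

With 2 fraudulent rows among 1,000 and `ratio=0.1`, the resampled set contains 99 fraudulent rows against 998 legitimate ones, a far less extreme imbalance for a model to learn from.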
Machine learning algorithms such as neural networks can process large amounts of data and bring forward suspicious patterns that would otherwise require a large manual analysis effort. In many cases, the fraudulent activity is spread across several entities, making it impossible to detect by looking at one transaction at a time; a graph model is much better suited to this setting.
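As a toy example of why graph modeling helps (a deliberately simple heuristic with made-up names and thresholds, not the Graphomaly models themselves), the sketch below flags accounts that receive funds from many distinct senders and forward almost all of them onward, a pattern invisible when transactions are inspected one at a time:

```python
from collections import defaultdict

def flag_collector_accounts(transactions, min_senders=5, passthrough=0.9):
    """Given (source, destination, amount) edges, flag accounts that
    collect funds from many distinct senders and pass nearly all of the
    total onward -- a simple 'collector account' heuristic."""
    inflow = defaultdict(float)
    outflow = defaultdict(float)
    senders = defaultdict(set)
    for src, dst, amount in transactions:
        inflow[dst] += amount
        senders[dst].add(src)
        outflow[src] += amount
    flagged = []
    for acct, total_in in inflow.items():
        if (len(senders[acct]) >= min_senders
                and outflow[acct] >= passthrough * total_in):
            flagged.append(acct)
    return flagged
```

Each individual transfer into such an account can look perfectly ordinary; only the aggregated in/out structure of the graph reveals the suspicious role the account plays.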
Regardless of the source of the proposed anomalies, we found that having a human specialist examine and validate the results, combined with automatically reducing the false positives the system yields, is the best approach to increasing performance and reducing costs for financial institutions.
Why choose Tremend’s Anomaly Detection Services
Tremend stands out in the industry by bringing together state-of-the-art models and methods, insights extracted from data through data mining techniques, and domain-expert acumen. Because anomaly detection is tied to a particular business process, and in many cases some form of identification or prediction is already provided through hand-designed rules, it is vital that we capture domain knowledge as we move forward. Leveraging experience and technical expertise from both the AI and the target domains, we bring value from the MVP to the production-ready system.
Contact us for any type of Anomaly Detection or AI implementation you need and capitalize on our promise of providing top-notch AI services for your specific requirements.
Get in touch
We are always happy to talk
165 Splaiul Unirii, Timpuri Noi Square,
TN Office 2 building, 4th floor,
District 3, Bucharest, Romania, 030134