Deep learning methods for anomaly detection have shown a lot of promise lately on complex datasets. We tend to frame anomaly detection as an unsupervised learning problem, since most of the samples in our dataset are unlabeled. In many real-world cases, though, the researcher can provide at least a small number of labeled samples to add to the dataset.
Deep SAD takes advantage of this through semi-supervised learning. Specifically, it builds an information-theoretic framework for deep anomaly detection around the idea that the entropy of the latent distribution of "normal" data should be lower than the entropy of the anomalous distribution, which serves as a theoretical interpretation of the method. Here is a link to the original paper: Deep SAD
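To make the idea concrete, here is a minimal sketch of what the Deep SAD objective looks like on learned embeddings. This is an illustrative numpy version, not the authors' implementation: `deep_sad_loss`, the label convention (`0` = unlabeled, `+1` = labeled normal, `-1` = labeled anomaly), and the `eta` weight are simplified assumptions. Unlabeled and labeled-normal points are pulled toward a fixed center `c` (low entropy, compact latent distribution), while labeled anomalies are pushed away from it via an inverse-distance term.

```python
import numpy as np

def deep_sad_loss(z, y, c, eta=1.0, eps=1e-6):
    """Illustrative sketch of a Deep-SAD-style objective.

    z   : (n, d) array of network embeddings
    y   : (n,) labels; 0 = unlabeled, +1 = labeled normal, -1 = labeled anomaly
    c   : (d,) hypersphere center
    eta : weight on the labeled term (hypothetical hyperparameter name)
    """
    # Squared distance of each embedding to the center.
    d = np.sum((z - c) ** 2, axis=1)

    loss = 0.0
    unlabeled = d[y == 0]
    if unlabeled.size:
        # Unlabeled points are treated as normal: minimize distance to c.
        loss += unlabeled.mean()

    labeled = d[y != 0]
    if labeled.size:
        # Exponent +1 pulls labeled normals in; exponent -1 pushes
        # labeled anomalies out (their loss shrinks as distance grows).
        signs = y[y != 0]
        loss += eta * np.mean((labeled + eps) ** signs)
    return loss
```

Note how a labeled anomaly far from the center contributes almost nothing to the loss, while one sitting near the center is penalized heavily, which is exactly the "anomalies should have high-entropy, spread-out latents" intuition from the paper.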
The original paper focused on computer vision benchmarks, but I'll be using Deep SAD to find malicious events in web server logs.
Make sure to check out my other posts on Deep Learning.