Research Article
Revised

Graph neural network-based anomaly detection for river network systems

[version 2; peer review: 1 approved, 2 approved with reservations]
PUBLISHED 28 Mar 2024

This article is included in the Ecology and Global Change gateway.

Abstract

Background

Water is the lifeblood of river networks, and its quality plays a crucial role in sustaining both aquatic ecosystems and human societies. Real-time monitoring of water quality is increasingly reliant on in-situ sensor technology.

Anomaly detection is crucial for identifying erroneous patterns in sensor data, but it is a challenging task due to the complexity and variability of the data, even under typical conditions. This paper presents a solution to anomaly detection for river network sensor data, which is essential for accurate and continuous monitoring.

Methods

We use a graph neural network model, the recently proposed Graph Deviation Network (GDN), which employs graph attention-based forecasting to capture the complex spatio-temporal relationships between sensors.

We propose an alternative anomaly threshold criterion for the model, GDN+, based on the learned graph. To evaluate the model’s efficacy, we introduce new benchmarking simulation experiments with highly-sophisticated dependency structures and subsequence anomalies of various types. We also introduce software called gnnad.

Results

We further examine the strengths and weaknesses of this baseline approach, GDN, in comparison to other benchmarking methods on complex real-world river network data.

Conclusions

Findings suggest that GDN+ outperforms the baseline approach in high-dimensional data, while also providing improved interpretability.

Keywords

Anomaly Detection, Graph Deviation Network, Graph Neural Network, Multivariate Time Series, Graph Attention Forecasting, Spatio-temporal Data, Complex Systems

Revised Amendments from Version 1

Minor revisions based on reviewer feedback, notably the inclusion of an ablation study surrounding the choice of percentile used for the threshold calculation for GDN+ across various datasets.

See the authors' detailed response to the review by Maurício Araújo Dias
See the authors' detailed response to the review by Ailin Deng

Introduction

River network systems play a vital role as freshwater habitats for aquatic life, and as support for terrestrial ecosystems in riparian zones, but are particularly sensitive to the anthropogenic impacts of climate change, water pollution and over-exploitation, among other factors. As a United Nations Sustainable Development Goal,1 water quality is a major environmental concern worldwide. The use of in-situ [1] sensors for data collection on river networks is increasingly prevalent,2,3 generating large amounts of data that allow for the identification of fine-scale spatial and temporal patterns, trends, and extremes, as well as potential sources of pollutants and their downstream impacts. However, such sensors are susceptible to technical errors relating to the equipment, herein defined as anomalies, for example due to miscalibration, biofouling, electrical interference and battery failure. In contrast, extreme events in rivers occur as a result of heavy rain and floods. Technical anomalies must be identified before the data are considered for further analysis, as they can introduce bias in model parameters and affect the validity of statistical inferences, confounding the identification of true changes in water variables. Trustworthy data are needed to produce reliable and accurate assessments of water quality, for enhanced environmental monitoring, and for guiding management decisions in the prioritisation of ecosystem health.

Anomaly detection in river networks is challenging due to the highly dynamic nature of river water even under typical conditions,4 as well as the complex spatial relationships between sensors. The unique spatial relationships between neighbouring sensors on a river network are characterised by a branching network topology with flow direction and connectivity, embedded within the 3-D terrestrial landscape. Common anomalies in data obtained from in-situ sensors are generally characterised by multiple consecutive observations (subsequence or persistent5), including sensor drift and periods of unusually high or low variability, which may indicate the necessity for sensor maintenance or calibration.6 Such anomalies are difficult to detect and often associated with high false negative rates.7 The taxonomy employed to categorise anomalies into distinct types follows Ref. 7.

Earlier statistical studies have focused on developing autocovariance models based on within-river relationships to capture the unique spatial characteristics of rivers.8 Although these methods are adaptable to different climate zones, and have recently been extended to take temporal dependencies into account,9 data sets generated by in-situ sensors still pose significant computational challenges with such prediction methods due to the sheer volume of data.10 Autocovariance matrices must be inverted when fitting spatio-temporal models and making predictions, and the distances between sites must be known. Previous work11 aimed to detect drift and high-variability anomalies in water quality variables by studying a range of neural networks calibrated using a Bayesian multi-objective optimisation procedure. However, the study was limited to analysing univariate time series data, and the supervised methods required a significant amount of labelled data for training, which are not always available.

There are few unsupervised anomaly detection methods for subsequence anomalies (of variable length), and even fewer for multivariate time series.5 One such method uses dynamic clustering on learned segmented windows to identify global and local anomalies.12 However, this algorithm requires the time series to be well-aligned, and is not suitable for the lagged temporal relationships observed with river flow. Another method to detect variable-length subsequence anomalies in multivariate time series data uses dimensionality reduction to construct a one-dimensional feature representing the density of a local region in the recurrence representation, indicating the recurrence of patterns obtained by a sliding window.13 A similarity measure is used to classify subsequences as either non-anomalous or anomalous. Only two summary statistics were used in this similarity measure, and the results were limited to a low-dimensional simulation study. In contrast, the technique introduced by Ref. 14, DeepAnT, uses a deep convolutional neural network (CNN) to predict one step ahead. This approach uses the Euclidean distance of the forecast errors as the anomaly score. However, an anomaly threshold must be provided.

Despite the above initial advances, challenges still remain in detecting persistent variable-length anomalies within high-dimensional data exhibiting noisy and complex spatial and temporal dependencies. With the aim of addressing such challenges, we explore the application of the recently-proposed Graph Deviation Network (GDN),15 and investigate refinements with respect to anomaly scoring that address the needs of environmental monitoring. The GDN approach15 is a state-of-the-art model that uses sensor embeddings to capture inter-sensor relationships as a learned graph, and employs graph attention-based forecasting to predict future sensor behaviour. Anomalies are flagged when the error scores are above a calculated threshold value. By learning the interdependencies among variables and predicting based on the typical patterns of the system in a semi-supervised manner, this approach is able to detect deviations when the expected spatial dependencies are disrupted. As such, GDN offers the ability to detect even the small-deviation anomalies generally overlooked by other distance-based and density-based anomaly detection methods for time series,16,17 while offering robustness to lagged variable relationships. Unlike the commonly-used statistical methods that explicitly model covariance as a function of distance, GDN is flexible in capturing complex variable relationships independent of distance. GDN is also a semi-supervised approach, eliminating the need to label large amounts of data, and offers a computationally efficient solution to handle the ever-increasing supply of sensor data. Despite the existing suite of methods developed for anomaly detection, only a limited number of corresponding software packages are available to practitioners. In summary, we have identified the following gaps in the current literature and research on this topic:

  • 1. An urgent need exists for a flexible approach that can effectively capture complex spatial relationships in river networks without the specification of an autocovariance model, with the ability to learn from limited labelled data, in a computationally efficient manner.

  • 2. A lack of benchmark data and anomaly generation schemes that exhibit the complex spatial and temporal dependencies observed across river networks.

  • 3. Limited open-source software for anomaly detection, which hinders the accessibility and reproducibility of research in this field, and limits the ability for individuals and organisations to implement effective anomaly detection strategies.

Our work makes four primary contributions to the field:

  • 1. An improvement of the GDN approach via the threshold calculation based on the learned graph is presented, and shown to detect anomalies more accurately than GDN while improving the ability to locate anomalies across a network.

  • 2. Methods for simulating new benchmark data with highly-sophisticated spatio-temporal structures are provided, contaminated with various types of persistent anomalies.

  • 3. Numerical studies are conducted, featuring a suite of benchmarking data sets, as well as real-world river network data, to explore the strengths and limitations of GDN (and its variants) in increasingly challenging settings.

  • 4. User-friendly, free open-source software for the GDN/GDN+ approach is made available on the pip repository as gnnad, with data and anomaly generation modules, as well as the publication of a novel real-world data set.

The structure of the remainder of the paper is as follows: The next section details the methods of GDN and the model extension GDN+, and describes the methodology of the simulated data and anomaly generation. In the Results section we present an extensive simulation study on the benchmarking data, as well as a real-world case study. The performance of GDN/GDN+ is assessed against other state-of-the-art anomaly detection models. Further details and example code for the newly-released software are also provided. The paper concludes with a discussion of the findings, and the strengths and weaknesses of the considered models.

Methods

Consider multivariate time series data $Y = (y_1, \ldots, y_T)$, obtained from $n$ sensors over $T$ time ticks. The (univariate) data collected from sensor $i = 1, \ldots, n$ at time $t = 1, \ldots, T$ are denoted as $y_{it}$. Following the standard semi-supervised anomaly detection approach,18,19 non-anomalous data are used for training, while the test set may contain anomalous data. That is, we aim to learn the sensor behaviour using data obtained under standard operational conditions throughout the training phase and identify anomalous sensor readings during testing, as those which deviate substantially from the learned behaviour. As the algorithm output, each test point $y_t$ is assigned a binary label $a_t \in \{0, 1\}$, where $a_t = 1$ indicates an anomaly at time $t$, anywhere across the full sensor network.

The GDN approach15 for anomaly detection is composed of two aspects:

  • 1. Forecasting-based time series model: a non-linear autoregressive multivariate time series model that involves graph neural networks is trained, and

  • 2. Threshold-based anomaly detection: transformations of the individual forecasting errors are used to determine if an anomaly has occurred, if such errors exceed a calculated threshold.

The above components are described in more detail below.

Forecasting-based Time Series Model. To predict $y_t$, the model takes as input $w$ lags of the multivariate series,

$$X_t \equiv \left[y_{t-1}, \ldots, y_{t-w}\right].$$

The $i$-th row (containing sensor $i$’s measurements for the previous $w$ lags) of the above input matrix is represented by the column vector $x_{it} = (y_{i,t-1}, \ldots, y_{i,t-w})$. Prior to training, the practitioner specifies acceptable candidate relationships via the sets $\mathcal{C}_1, \ldots, \mathcal{C}_n$, where each $\mathcal{C}_i \subseteq \{1, 2, \ldots, n\}$ and does not contain $i$. These sets specify which nodes are allowed to be considered to be connected from node $i$ (noting that the adjacency graph connections are not necessarily symmetric).

The model implicitly learns a graph structure via training sensor embedding parameters $v_i \in \mathbb{R}^d$ for $i = 1, \ldots, n$, which are used to construct a graph. The intuition is that the embedding vectors capture the inherent characteristics of each sensor, and that sensors which are “similar” in terms of the angle between their vector embeddings are considered connected. Formally, the quantity $e_{ji}$ is defined as the cosine similarity between the vector embeddings of sensors $i$ and $j$:

$$e_{ji} = \frac{v_i^{\top} v_j}{\|v_i\|\,\|v_j\|}\,\mathbb{I}\{j \in \mathcal{C}_i\}, \qquad i, j \in \{1, \ldots, n\},$$

with $\|\cdot\|$ denoting the Euclidean norm, and indicator function $\mathbb{I}$ which equals 1 when node $j$ belongs to set $\mathcal{C}_i$, and 0 otherwise. Note that the similarity is forced to be zero if a connecting node is not in the permissible candidate set. Next, let $e_{(k),i}$ denote the $k$-th largest value in $\{e_{1i}, \ldots, e_{ni}\}$. A graph-adjacency matrix (and in turn a graph itself) is then constructed from the sensor similarities via
$$A_{ji} = \mathbb{I}\left\{e_{ji} \geq e_{(K),i}\right\},$$
for user-specified $K \in \{1, \ldots, n\}$, which determines the maximum number of edges from a node, referred to as the “Top-K” hyperparameter.
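As a minimal illustration of the graph construction above (a NumPy sketch under our own variable names such as embeddings and candidates; this is not the gnnad implementation), the adjacency matrix can be built from the embeddings as follows:

import numpy as np

def build_adjacency(embeddings, candidates, K):
    # embeddings: (n, d) array of sensor embeddings v_i
    # candidates: list of sets, candidates[i] = permissible neighbours C_i (excluding i)
    # K: Top-K hyperparameter, the maximum number of edges into a node
    n = embeddings.shape[0]
    norms = np.linalg.norm(embeddings, axis=1)
    e = np.zeros((n, n))
    for i in range(n):
        for j in candidates[i]:  # similarity is forced to zero outside C_i
            e[j, i] = embeddings[i] @ embeddings[j] / (norms[i] * norms[j])
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        # keep edges from the K candidates most similar to node i
        top = sorted(candidates[i], key=lambda j: e[j, i], reverse=True)[:K]
        A[top, i] = 1
    return A

# toy usage with random embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 3))
cand = [set(range(5)) - {i} for i in range(5)]
print(build_adjacency(emb, cand, K=2))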

The above describes how the trainable parameters $\{v_k\}_{k=1}^{n}$ yield a graph. Next, the lagged series are fed individually through a shallow Graph Attention Network20 that uses the previously constructed graph. Here, each node corresponds to a sensor, and the node features for node $i$ are the lagged (univariate) time-series values, $x_{it} \in \mathbb{R}^{w}$. Allow a parameter weight matrix $W \in \mathbb{R}^{d \times w}$ to apply a shared linear transform to each node. Then, the output of the network is given by

$$z_{it} = \max\left(0, \sum_{j : A_{ji} > 0} \alpha_{ij} W x_{jt}\right),$$

where $z_{it}$ is called the node representation, and coefficients $\alpha_{ij}$ are the attention paid to node $j$ when computing the representation for node $i$, with:
(1)
$$\pi_{ij} = \mathrm{LeakyReLU}\left(a^{\top}\left(\left(v_i \oplus W x_{it}\right) + \left(v_j \oplus W x_{jt}\right)\right)\right), \quad \text{where} \quad \alpha_{ij} = \frac{\exp\left(\pi_{ij}\right)}{\sum_{k : A_{ki} > 0} \exp\left(\pi_{ik}\right)},$$
with learnable parameters $a \in \mathbb{R}^{2d}$, where $\oplus$ denotes concatenation, and $\mathrm{LeakyReLU}(x) \equiv \max(\delta x, x)$ for $\delta > 0$, with the maximum operation applied elementwise. Note the addition [2] in Equation 1 and that $\sum_{j=1}^{n} \alpha_{ij} = 1$. Intuitively, the above is an automated mechanism to aggregate information from a node itself and neighbouring nodes (whilst simultaneously assigning a weight of how much information to take from each neighbour) to compute a vector representing extracted information about node $i$ itself and its neighbours’ interaction with it. The final model output (prediction) is given by
$$\hat{y}_t = f_{\eta}\left(\left[v_1 \circ z_{1t}, \ldots, v_n \circ z_{nt}\right]\right),$$
where $f_{\eta} : \mathbb{R}^{d \times n} \to \mathbb{R}^{n}$ is a feedforward neural network with parameters $\eta$, and $\circ$ denotes element-wise multiplication. The model is trained by optimising the parameters $\{v_i\}_{i=1}^{n}$, $W$, $a$, and $\eta$ to minimise the mean squared error loss function
$$L = \frac{1}{T - w}\sum_{t = w + 1}^{T}\left\|\hat{y}_t - y_t\right\|_2^2.$$
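The following is a small NumPy sketch of the attention-based aggregation in Equation 1 for a single time step (our own illustrative code, not the gnnad implementation; the function and variable names are assumptions):

import numpy as np

def leaky_relu(x, delta=0.2):
    return np.maximum(delta * x, x)

def node_representations(X_lags, V, W, a, A):
    # X_lags: (n, w) lagged values, row i is x_it
    # V: (n, d) sensor embeddings v_i; W: (d, w) shared transform; a: (2d,) attention vector
    # A: (n, n) adjacency matrix, A[j, i] = 1 if j is a neighbour of i
    n = X_lags.shape[0]
    Wx = X_lags @ W.T                                  # rows are W x_it
    Z = np.zeros_like(Wx)
    for i in range(n):
        nbrs = np.where(A[:, i] > 0)[0]
        if nbrs.size == 0:
            continue
        g_i = np.concatenate([V[i], Wx[i]])            # v_i concatenated with W x_it
        pi = np.array([leaky_relu(a @ (g_i + np.concatenate([V[j], Wx[j]]))) for j in nbrs])
        alpha = np.exp(pi - pi.max())
        alpha /= alpha.sum()                           # softmax over the neighbourhood
        Z[i] = np.maximum(0.0, (alpha[:, None] * Wx[nbrs]).sum(axis=0))
    return Z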

Threshold-based Anomaly Detection. Given the learned inter-sensor and temporal relationships, we are able to detect anomalies as those which deviate from these interdependencies. An anomalousness score is computed for each time point in the test data. For each sensor $i$, we denote the prediction error at time $t$ as

$$\varepsilon_{i,t} = \left|y_{it} - \hat{y}_{it}\right|,$$

with $|\cdot|$ denoting the absolute value, and the vector of prediction errors for sensor $i$ is $\varepsilon_i \in \mathbb{R}^{T - w}$. Since the error values of different sensors may vary substantially, we perform a robust normalisation of each sensor’s errors to prevent any one sensor from overly dominating the others, that is,
$$\tilde{\varepsilon}_i = \frac{\varepsilon_i - \mathrm{Median}\left(\varepsilon_i\right)}{\mathrm{IQR}\left(\varepsilon_i\right)},$$
where IQR denotes the inter-quartile range. In the original work by Ref. 15, a time point $t$ is flagged as anomalous if
$$A_t = \mathbb{I}\left\{\max_i \tilde{\varepsilon}_{i,t} > \kappa\right\},$$
using the notation $\tilde{\varepsilon}_{i,t}$ for the error value at the $t$-th index. Alternatively, the authors recommend using a simple moving average of $\tilde{\varepsilon}_i$ and flagging time $t$ as anomalous if the maximum of that moving average exceeds $\kappa$. The authors specify $\kappa$ as the maximum of the normalised errors, $\tilde{\varepsilon}$, observed on some (non-anomalous) validation data. However, this is applied to all sensors as a fixed threshold.
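A minimal NumPy sketch of the robust normalisation and the original fixed-threshold rule (illustrative only; the array shapes and function names are our assumptions):

import numpy as np

def normalised_errors(errors):
    # errors: (n, T) array of absolute prediction errors eps_{i,t}
    med = np.median(errors, axis=1, keepdims=True)
    q75, q25 = np.percentile(errors, [75, 25], axis=1, keepdims=True)
    return (errors - med) / (q75 - q25)

def gdn_flags(test_errors, val_errors):
    # original GDN rule: flag time t if the maximum normalised error over sensors
    # exceeds the maximum normalised error observed on (non-anomalous) validation data
    kappa = normalised_errors(val_errors).max()
    return (normalised_errors(test_errors).max(axis=0) > kappa).astype(int)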

Sensor-based anomaly threshold: GDN+

The behaviour of water quality variables may differ across space; for example, water level at high-altitude river network branches is generally characterised by rainfall patterns, whereas water level downstream near a river outlet can also be influenced by tidal patterns. A fixed threshold across the network does not allow for any local adaptation in error sensitivity. For this reason, this work also considers the novel sensor-specific threshold calculation,

$$A_{it} = \mathbb{I}\left\{\tilde{\varepsilon}_{i,t} > \kappa_i\right\},$$
where $\kappa_i$ is chosen such that
$$\frac{1}{\left|\left\{\tilde{\varepsilon}_{j,t} : A_{ji} > 0,\ j \neq i\right\}\right|} \sum_{\left\{(j, t)\,:\, A_{ji} > 0,\ j \neq i\right\}} \mathbb{I}\left\{\tilde{\varepsilon}_{j,t} < \kappa_i\right\} = \frac{\tau}{100},$$
for some user-specified percentile $\tau \in (0, 100)$, and where $|\cdot|$ denotes cardinality. In other words, the threshold for each sensor, $\kappa_i$, is set as the $\tau$-th percentile of the normalised error scores across the neighbourhood of $i$, on the validation data set. Unless otherwise stated, we set $\tau = 99$. In this way, the sensor threshold is based only on its direct neighbourhood, as opposed to the original method which uses the global maximum, and is thus in tune with the local behaviour of the system, and more robust as a percentile. We refer to the GDN model using this variant of the threshold-based anomaly detection as GDN+.
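The per-sensor thresholds can be sketched as follows (our own illustrative NumPy code with assumed names; not the gnnad implementation behind sensor_threshold_preds):

import numpy as np

def sensor_thresholds(val_errors_norm, A, tau=99):
    # val_errors_norm: (n, T_val) normalised validation errors; A: (n, n) learned adjacency
    n = val_errors_norm.shape[0]
    kappa = np.zeros(n)
    for i in range(n):
        nbrs = [j for j in range(n) if A[j, i] > 0 and j != i]
        if not nbrs:
            nbrs = [i]  # fall back to the sensor's own errors if it has no neighbours
        pooled = val_errors_norm[nbrs].ravel()       # errors across the neighbourhood of i
        kappa[i] = np.percentile(pooled, tau)        # tau-th percentile as the threshold
    return kappa

# flag sensor i at time t when its normalised test error exceeds kappa[i], e.g.
# flags = (test_errors_norm > kappa[:, None]).astype(int)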

New class of benchmarking data

The following is a method for simulating synthetic datasets with persistent anomalies, inspired by the statistical models recently used to model river network data.9 Let $\mathcal{S}$ denote an arbitrary set of individual spatial locations, with locations $s_1, s_2, \ldots, s_n \in \mathcal{S}$ chosen by experimental design21 or otherwise. Consider a linear mixed model with $n \times 1$ response $Y$, and $n \times m$ design matrix $X$ of explanatory variables spatially indexed at locations $s_1, s_2, \ldots, s_n$,

(2)
$$Y_t = \beta_0 + X_t \beta + Z + \varepsilon_0, \qquad t = 1, \ldots, T,$$
with time-homogeneous spatially-correlated random effects $Z \sim N(0, \Sigma_Z)$ and vector of independent noise terms $\varepsilon_0 \sim N(0, \sigma_0^2 I)$, yielding $\mathrm{Cov}(Y_t \mid X_t = x_t) = \Sigma_Z + \sigma_0^2 I$, where $I$ denotes the $n \times n$ identity matrix. The covariates $X_t$ for $t = 1, \ldots, T$ are simulated according to an autoregressive process based on an underlying sequence of independent random fields,
(3)
$$X_t = \sum_{i=0}^{p} \varphi_i \tilde{X}_{t-i},$$
where $p$ is the order of the autoregressive process, and
$$\tilde{X}_t \overset{\mathrm{iid}}{\sim} N\left(0, \Sigma_X\right), \qquad t = 1, \ldots, T.$$

Note that other distributions may be used. Above,

$$\left[\Sigma_X\right]_{ij} = k\left(s_i, s_j\right)$$
for some covariance kernel $k$. For example,
(4)
$$k\left(s, s'; \sigma, \alpha\right) = \sigma^2 \exp\left(-\frac{\left\|s - s'\right\|^2}{\alpha}\right),$$
where $\|\cdot\|$ denotes the Euclidean norm, $\sigma^2 > 0$ is the covariance-scaling parameter, and $\alpha$ is the range parameter that controls the rate of decay of correlation between points over distance. Figure 1 illustrates an example of a generated Gaussian random field evolving over time.


Figure 1. Smooth random Gaussian field used to generate covariate values, X, in the simulation studies.

Examples of the field are shown for t=1,2,3. Sensor locations are shown as white dots.
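To illustrate, the covariate process in Equation 3 can be simulated roughly as follows (a sketch under our own assumptions: the kernel of Equation 4, illustrative parameter values, and autoregressive weights phi; this is not code from the gnnad data-generation module):

import numpy as np

def sq_exp_kernel(coords, sigma2=1.0, alpha=5.0):
    # covariance kernel of Equation 4 over pairwise Euclidean distances
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / alpha)

def simulate_covariates(coords, T, phi=(0.5, 0.3), sigma2=1.0, alpha=5.0, seed=0):
    # draw independent Gaussian random fields and combine the most recent ones
    # with autoregressive weights phi, as in Equation 3
    rng = np.random.default_rng(seed)
    n = coords.shape[0]
    L = np.linalg.cholesky(sq_exp_kernel(coords, sigma2, alpha) + 1e-8 * np.eye(n))
    p = len(phi)
    fields = [L @ rng.normal(size=n) for _ in range(p)]
    X = np.zeros((T, n))
    for t in range(T):
        fields.append(L @ rng.normal(size=n))          # new random field at time t
        X[t] = sum(phi[i] * fields[-1 - i] for i in range(p))
        fields.pop(0)
    return X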

We consider two scenarios in generating simulated data: 1) spatial relationships characterised by Euclidean distance only, where the covariance matrix, $\Sigma_Z$, is constructed via the kernel function given in Equation 4, and 2) data simulated on a river network, where $\Sigma_Z$ is constructed via the kernel function given in Equation 5. Together, this approach provides a variety of simulated data with highly-sophisticated dependency structure, see Figure 2.


Figure 2. SimEuc site locations across space (left) and SimRiver site locations along a river network (right).

Both simulations use the same (x, y) coordinates for sensor locations. The direction of flow is from top to bottom for SimRiver. Red dots indicate sites for which time series have been shown in Figure 3 and Figure 4.

Two points $s_i$, $s_j$ on a river network are said to be flow-connected if they share water flow, and flow-unconnected otherwise. We define stream distance, $h_{\mathrm{Riv}}(s_i, s_j)$, as the shortest distance separating $s_i$ and $s_j$ when travelling along a given river network. Tail-up covariance models for river networks, introduced in Ref. 8, effectively represent spatial relationships when variables are dominated by flow (e.g. pollutants enter a stream and only impact downstream locations). By construction, the tail-up covariance function only allows for correlation between flow-connected sites:

(5)
$$k_{\mathrm{TU}}\left(s_i, s_j; \sigma, \alpha\right) = \omega_{ij}\, \sigma^2 \exp\left(-\frac{h_{\mathrm{Riv}}\left(s_i, s_j\right)}{\alpha}\right) F_{ij},$$
where $F_{ij}$ is equal to one if $s_i$ and $s_j$ are flow-connected, and zero otherwise, and $\omega_{ij}$ is a weighting attributed to each stream segment to account for the upstream branching network structure and ensure stationarity in variance (for full details, see Refs. 22, 23). The weightings corresponding to each segment may incorporate flow volume, or the area of the catchment, or a proxy such as stream order.24 Note that there are various choices of covariance models. Tail-down models allow correlation between both flow-connected and flow-unconnected locations, and may be more suitable for water variables such as temperature, or organisms that can move both upstream and downstream.25 Here we use the exponential function for decay; for further covariance model examples see Ref. 8.

Once the base multivariate time series is constructed as above, it is modified to include persistent anomalies as follows. The hyperparameters are $n_{\mathrm{anomaly}}$, a non-negative integer giving the number of subsequence anomalies, and $\lambda_{\mathrm{anomaly}} > 0$, the average length of an anomalous subsequence, for each anomaly type. In this example, we consider two types of anomalies, high-variability and drift; see Algorithm 1.

Algorithm 1. Two-type Anomaly Generation.

Input: time series data $Y = (y_1, \ldots, y_T)$, number of locations $n$, expected length of each anomaly type $\lambda_{\mathrm{drift}}$ and $\lambda_{\mathrm{var}}$, number of anomalies $n_{\mathrm{drift}}$ and $n_{\mathrm{var}}$, variability anomaly scale $\zeta$, and drift anomaly parameter $\delta$.

for anomaly in {drift, var} do
    for i = 1, ..., n_anomaly do
        Draw location S ~ Uniform{1, 2, ..., n}
        Draw time t ~ Uniform{1, ..., T}
        Draw length L ~ Poisson(λ_anomaly)
        if anomaly = drift then
            v ← (δ, 2δ, ...)                    // drift
        else
            v ~ N(0, ζ² I_{L×L})                // variability
        y_{S, t:t+L} ← y_{S, t:t+L} + v
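The gnnad package provides a GenerateAnomaly module for this step (see the example in the next section). For illustration only, a rough NumPy sketch of Algorithm 1, in which the linearly increasing drift vector and the parameter defaults are our own assumptions:

import numpy as np

def inject_anomalies(Y, n_drift=5, lam_drift=11, n_var=24, lam_var=3,
                     delta=0.5, zeta=2.0, seed=0):
    # Y: (T, n) multivariate time series; returns a contaminated copy and binary labels
    rng = np.random.default_rng(seed)
    Y = Y.copy()
    T, n = Y.shape
    labels = np.zeros_like(Y, dtype=int)
    for kind, n_anom, lam in [("drift", n_drift, lam_drift), ("var", n_var, lam_var)]:
        for _ in range(n_anom):
            s = rng.integers(n)                    # sensor location
            t = rng.integers(T)                    # start time
            L = max(1, rng.poisson(lam))           # subsequence length
            end = min(T, t + L)
            if kind == "drift":
                v = delta * np.arange(1, end - t + 1)    # linearly increasing offset (assumed form)
            else:
                v = rng.normal(0.0, zeta, size=end - t)  # added high-variability noise
            Y[t:end, s] += v
            labels[t:end, s] = 1
    return Y, labels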

Python Package: Graph-Based Neural Network Anomaly Detection (gnnad)

The Python package gnnad introduced in this paper extends and generalises the research code originally implemented by Ref. 15, which is incompatible with newer package dependencies and offers only a command line interface. The code is refactored to be modular and user-friendly, with a scikit-learn-inspired interface, and extended to include visualisation, data and anomaly generation modules, as well as the GDN+ model extension. A continuous integration/continuous deployment (CI/CD) pipeline is established with unit testing to ensure that changes to the code are tested and deployed efficiently. Comprehensive documentation now accompanying the codebase enhances readability for future developers, facilitating maintenance, reuse, and modification. Furthermore, rigorous error handling is implemented to improve the software experience. The software developments have resulted in a more robust, user-friendly and easily distributable package that is available via https://github.com/KatieBuc/gnnad and the pip repository, gnnad. See below for example code that shows the process of fitting the GDN+ model.

from sklearn.model_selection import train_test_split
from gnnad.graphanomaly import GNNAD
from gnnad.generate import GenerateAnomaly

# split train test for input data frame, X
X_train, X_test = train_test_split(X, shuffle=False)

# generate anomalies on test set
anoms = GenerateAnomaly(X_test)
X_test = anoms.generate(anoms.variability, lam=3, prop_anom=0.07, seed=45)
X_test = anoms.generate(anoms.drift, lam=11, prop_anom=0.07, seed=234)
y_test = anoms.get_labels()

# instantiate and fit GDN model object
model = GNNAD(threshold_type="max_validation", topk=6, slide_win=200)
fitted_model = model.fit(X_train, X_test, y_test)

# sensor based threshold based on GDN+
pred_label = fitted_model.sensor_threshold_preds(tau=99)

# print evaluation metrics
fitted_model.print_eval_metrics(pred_label)

Results

This section presents a summary of the main findings for anomaly detection using GDN/GDN+ on both simulated and real-world data. We define the following metrics in terms of true positive (TP), true negative (TN), false positive (FP) and false negative (FN) classifications. To ensure quality of data, the aim for practitioners is to maximise the ability to identify anomalies of different types while minimising false detection rates; in other words, the main priority is to minimise FN while maintaining a reasonable number of FP, such that checking the total number of positive flags is not an operational burden. Accordingly, we use recall, defined by TP/(TP+FN), that is, the proportion of actual positive cases correctly identified by the model(s), to evaluate performance on the test set and to select hyperparameters. We also report performance using precision TP/(TP+FP), accuracy (TP+TN)/(TP+TN+FP+FN), and specificity TN/(TN+FP).
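For completeness, a small helper computing these four metrics from confusion counts (a sketch, not the gnnad print_eval_metrics implementation; the example counts are arbitrary):

def evaluation_metrics(tp, tn, fp, fn):
    # recall, precision, accuracy and specificity from confusion-matrix counts
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    return recall, precision, accuracy, specificity

# e.g. evaluation_metrics(tp=83, tn=800, fp=67, fn=17)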

To evaluate model performance, three existing anomaly detection models are used as benchmarks: 1. The naive (random walk) Autoregressive Integrated Moving Average Model (ARIMA) prediction model from Refs. 7, 26; 2. HDoutliers,17 an unsupervised algorithm designed to identify anomalies in high-dimensional data, based on a distributional model that allows for probability assignment to an anomaly; and 3. DeepAnT,14 an unsupervised, deep learning-based approach to detecting anomalies in time series data.

Simulation study: Benchmark data

Data are generated using the linear-mixed model described in Equation 2, with differing spatial dynamics: SimEuc, where the random effect, $Z$, is characterised by Euclidean distance only, and SimRiver, where $Z$ simulates complex river network dynamics,27 using the same site locations and covariate values, $X$. Detecting anomalies that involve multiple consecutive observations is a difficult task that often requires user intervention, and is the focus of this study. We consider scenarios with drift and high-variability anomaly types, which together contaminate 13.2% of the test data, given $n_{\mathrm{drift}} = 5$, $\lambda_{\mathrm{drift}} = 11$, $n_{\mathrm{var}} = 24$, $\lambda_{\mathrm{var}} = 3$; see Table 1.

Table 1. Details of the three data sets used in the case studies.

Dataset     #Train    #Test    %Anomalies
SimEuc       3,000    1,000          13.2
SimRiver     3,000    1,000          13.2
Herbert     12,745    3,499          58.0

Figure 3 visualises aspects of the SimEuc dataset, where sensor 37 and sensor 8 are in close (Euclidean) proximity, resulting in a high correlation between the locations (0.65), as anticipated. Note that sensor 23 exhibits anomalous behavior, high-variability and drift, consecutively, over time. Compared to the SimRiver dataset, shown in Figure 4, we note how the time series from sensor 37 and sensor 8 are no longer strongly correlated (0.07), despite their close proximity, as they are not flow-connected in the simulated river network.

Figure 3. A selection of time series from the SimEuc data set; sensor 8 and sensor 37, separated by a short Euclidean distance, share high correlation (Pearson's coefficient of 0.65).

A Savitzky-Golay filter is used to smooth the time series (purple line). On the bottom, sensor 23 illustrates high-variability and drift anomalies, consecutively (red dots).

Figure 4. A selection of series from the SimRiver data set; sensor 8 and sensor 37 are not flow-connected sites, and share low correlation (Pearson's coefficient of 0.07).

Drift anomalies (red dots) are shown in data from sensor 4. A Savitzky-Golay filter is used to smooth the time series for visual representation (purple line).

Table 2 shows the performance of the different anomaly detection methods. The best performance in terms of recall is highlighted in bold, while the second-best performance is underlined. We observe that GDN/GDN+ outperforms most other models in terms of recall, which is the fraction of true positives among all actual positive instances. Specifically, GDN has a recall score of 83.3% on SimEuc and 72.7% on SimRiv, while GDN+ has the second highest recall score of 85.6% on SimEuc and 78.0% on SimRiv. Although ARIMA performed best in terms of recall, the high percentage of points flagged as anomalous, 81.6% and 85.6%, makes it impractical to use (discussed below) and results in low accuracy scores of 28.6% and 25.3%, respectively. HDoutliers did not flag any time point as anomalous in any test case. DeepAnT tends to classify most samples as negative, resulting in a low recall score. These results suggest that GDN/GDN+ are best able to detect a high proportion of actual anomalies in the datasets.

Table 2. Anomaly detection performance in terms of recall (%), precision (%), accuracy (%), and specificity (%) of GDN and its variants and baseline methods for the simulation study.

Data      Model         Rec     Prec     Acc     Spec
SimEuc    HDoutliers     0.0      0.0    86.8    100.0
          ARIMA         88.6     14.4    28.6     19.4
          DeepAnT        3.8     11.9    83.5     95.7
          GDN           83.3     55.3    88.9     89.7
          GDN+          85.6     48.1    85.9     85.9
SimRiv    HDoutliers     0.0      0.0    86.8    100.0
          ARIMA         91.7     14.2    25.3     15.1
          DeepAnT        0.8      9.1    85.9     98.8
          GDN           72.7     54.2    88.3     90.6
          GDN+          78.0     43.1    83.5     84.3

Figure 5 shows the classifications of anomalies in the simulation study. For reasons mentioned, recall is the performance metric of interest, but we also consider the trade-off between recall and precision. Lower precision means that the model may also identify some normal instances as anomalies, leading to false positives. In the context of river network anomaly detection, FP may be manually filtered, but it is critical to minimise FN. Note that GDN+ outperforms GDN in minimising the FN count, but at the cost of increasing FP, in both data sets. Such a trade-off is acceptable and considered an improvement in this context. Conversely, while ARIMA has the highest recall score, the number of FP classifications is impractical for practitioners to deal with (>70% of the test data). We also note that drift anomalies are harder to detect than high-variability anomalies, with drift as the majority of FN counts, in all cases.


Figure 5. Anomaly detection performance of ARIMA, GDN, and GDN+ for the simulation study, in terms of true positive (TP), true negative (TN), false positive (FP) and false negative (FN).

The authors of the GDN model demonstrated its efficacy in detecting anywhere-within-system failures at time $t$ by applying a threshold to all sensors within a system. However, the use of sensor-based thresholds in GDN+ has the advantage of indicating anomalies at the individual sensor level. In the context of monitoring river networks, it is crucial to identify the anomalous sensor, $i$, at a given time $t$. The percentage of true positives detected at the correct sensor, $i$, using the sensor-based anomaly threshold, $A_{it}$, in GDN+, was 92% and 89% for SimEuc and SimRiver, respectively. Similarly, the rate of true positives detected in the neighbourhood of the correct sensor $i$ was 96% and 91%, respectively. This granularity of information is essential for large networks consisting of independent sensors that are separated by significant spatial distances, where the cost of time and travel for sensor replacement or maintenance is substantial.
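One plausible way to compute such sensor-attribution rates from per-sensor flags is sketched below (our own interpretation and code; the paper does not specify the exact procedure):

import numpy as np

def localisation_rates(true_labels, pred_labels, A):
    # true_labels, pred_labels: (n, T) binary arrays of actual and flagged anomalies
    # A: (n, n) learned adjacency matrix
    n, T = true_labels.shape
    correct = in_nbhd = total = 0
    for t in range(T):
        true_s = np.where(true_labels[:, t] > 0)[0]
        pred_s = np.where(pred_labels[:, t] > 0)[0]
        if true_s.size == 0 or pred_s.size == 0:
            continue                                  # not a true-positive time point
        total += 1
        if np.intersect1d(true_s, pred_s).size > 0:
            correct += 1
            in_nbhd += 1
        else:
            nbrs = np.unique(np.concatenate([np.where(A[:, i] > 0)[0] for i in true_s]))
            if np.intersect1d(nbrs, pred_s).size > 0:
                in_nbhd += 1                          # flagged within the neighbourhood
    return correct / total, in_nbhd / total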

Replication study

This section explores the anomaly detection performance of GDN/GDN+ across multiple simulated data sets. The approach is as follows. First, ten new sets of spatial sampling locations are created, and for each set a Gaussian random field evolving over time is simulated, as per Equation 3. For each set of locations, we again consider both the Euclidean spatial characterisation (SimEuc) and the river network spatial characterisation (SimRiver), yielding a total of 20 benchmark data sets. In the first case, we use the Euclidean covariance model in Equation 4, parameterised by $\sigma^2 \in (1, 5)$ and $\alpha \in (5, 15)$, with independent noise parameter $\sigma_0^2 \in (0, 1)$, and regression parameters $\beta_0 \in (1, 10)$ and $\beta_1 \in (1, 10)$ for the linear model in Equation 2. The values of the parameters are chosen uniformly at random. The tail-up covariance model in Equation 5 is used in the second case, parameterised as above.

Then, anomalies are generated with the following parameters: drift $\delta \in (3, 6)$, variability $\zeta \in (12, 15)$, length of anomalies $\lambda_{\mathrm{drift}} \in (5, 10)$ and $\lambda_{\mathrm{var}} \in (2, 10)$, and number of anomalies $n_{\mathrm{drift}} \in (50, 100)$ and $n_{\mathrm{var}} \in (50, 100)$ (see Algorithm 1). Across all simulations, the size of the data set is fixed to have length $T = 4000$ and number of sensors $n = 40$.

Figure 6 illustrates the anomaly detection performance for GDN and GDN+, run on each data set with sliding window length $w = 3$ and Top-K hyperparameter $K = 5$. Note that the total number of anomalies can be seen as the bar height of TP + FN. In every scenario, GDN+ improves the FN count (red line), but at the cost of an increased FP count (orange line). Whether such a trade-off is tolerable depends on how critical it is in practical scenarios that true anomalies are successfully detected. Note that performance varies from one scenario to the next. Nevertheless, despite the simulated datasets being extremely noisy and complex, GDN and GDN+ appear to succeed in anomaly detection where other methods cannot.


Figure 6. Anomaly detection performance of GDN, and GDN+ across the twenty simulated benchmarks in the replication study.

Note that GDN+ consistently decreases false negatives (red line) in every case, but also increases false positives (orange line).

Case study: Herbert river

This case study examines water-level data collected from eight sites located across the Herbert river, a major river system located in the tropical region of Australia, as shown in Figure 7. The time series data is highly non-stationary, characterised by river events caused by abnormal rainfall patterns, with some coastal sites exhibiting shorter periodicity trends which can be attributed to tidal patterns, see Figure 8. The spatial relationships are complex, and depend on the surrounding water catchment areas, spatial rainfall patterns, dams, and other impediments. In-situ sensors are prone to various anomalies such as battery failure, biofouling (accumulation of microorganisms, plants, algae, or small animals), and damage. In some cases, anomalies can manifest as the absence of a water event (i.e., flatlining) rather than the presence of abnormal time series patterns (i.e., spikes, variability, drift). In real-world scenarios, anomalies can persist for extended periods, and resolving them may require traveling to remote locations to inspect and repair sensors. As seen in Figure 8, anomalies at time t are largely attributed to Sensor 4, which was out of water for long periods of time.


Figure 7. The Herbert river system and sensor locations.


Figure 8. Test data of water level collected from sensors across the Herbert river (light blue), and the corresponding predictions from the GDN model (dark blue).

Actual anomalies (red dots) are shown, along with the predicted anomalies (orange line) on the bottom.

The Herbert river is a challenging data set for all of the anomaly detection models, due to the sparse placement of sensors across the network (fewer sensors at greater distances apart, resulting in weaker spatial relationships), and because the test set contains a high proportion of anomalies (58%; see Table 1). GDN applied to the real-world dataset yields a recall of 29.2%, with GDN+ improving recall to 34.8%, see Table 3. Model performance suffers primarily due to the failure to detect the large anomaly spanning 2022-01-10 in Figure 8. This may be attributed to the learned graph relationships being characterised by river events in the training data; without such events, it is difficult to identify when a sensor is flat-lining. However, the model successfully identified anomalies spanning 2021-12-27 and 2022-01-03, which coincided with river events. See also the extended data36 (see Data availability) for an ablation study focusing on the effectiveness of anomaly detection across different thresholds, $\tau$.

Table 3. Anomaly detection performance in terms of recall (%), precision (%), accuracy (%), and specificity (%) of GDN and its variants and baseline methods for the Herbert river case study.

Data      Model         Rec     Prec     Acc     Spec
Herbert   HDoutliers     0.0      0.0    42.0    100.0
          ARIMA         30.5     62.9    51.3     77.5
          DeepAnT        1.6     39.7    44.0     97.0
          GDN           29.2     60.6    50.1     76.2
          GDN+          34.8     59.2    50.4     70.0
          GDN++        100.0     80.7    86.7     70.0

Figure 9 shows the learned graph adjacency matrix, A. Sensor 1, separated geographically by a large distance from the other sensors, has weaker relationships with the other nodes. Interestingly, the attention weights indicate that sensor 3 is strongly influenced by sensor 6, with tidal patterns from sensor 6 being evident in the predictions of sensor 3. Large error scores indicating an anomaly spanning across 2021-12-27 are primarily observed in sensor 2, impacted by anomalous sensors 3 and 4, where the predicted values are low. However, due to the small network of sensors in this case study, it is difficult to determine which sensor had the anomaly.


Figure 9. Graph with learned adjacency matrix, A.

The edge weightings are determined by $\alpha_{ij}$, which indicate attention and depend on the model input, $X_t$.

Adjusting the threshold in the GDN+ model can only enhance performance to a limited extent, since some anomalies may have insignificant error scores due to the underlying time series model. For instance, the anomaly spanning 2022-01-10 has negligible error scores because the prediction model was performing well and no river events had occurred, making it challenging to detect the flat-lining type of anomaly in this particular scenario. Therefore, relying solely on the model may not be sufficient in practice, and we recommend implementing some basic expert rules. We introduce another model variant, GDN++, by applying a simple filter to the time series ensuring that all values are positive, used in conjunction with the adjusted threshold calculation (GDN+). That is, an anomaly at time $t$ on sensor $i$ is flagged if $\max\left(\mathbb{I}\{y_{it} < 0\}, A_{it}\right) = 1$. GDN++ successfully detects all anomalies.
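Expressed as code, the GDN++ rule is a one-liner (illustrative sketch; the array names are our own):

import numpy as np

def gdn_plus_plus_flags(Y_test, gdn_plus_flags):
    # Y_test: (n, T) test observations y_{it}; gdn_plus_flags: (n, T) binary GDN+ flags A_{it}
    # a sensor is flagged if GDN+ flags it or its reading is negative (water level cannot be negative)
    return np.maximum((Y_test < 0).astype(int), gdn_plus_flags)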

Discussion

Multivariate anomaly detection and prediction models for spatio-temporal sensor data have the potential to transform water quality observation, modelling, and management.7,8 The provision of trustworthy sensor data has four major benefits: 1. It enables the production of finer-scale, reliable and more accurate estimates of sediment and nutrient loads, 2. It provides real-time feedback to landholders and managers, 3. It guides compliance with water quality guidelines, and 4. It allows for the assessment of ecosystem health and the prioritisation of management actions for sustaining aquatic ecosystems. However, technical anomalies in the data provided by the sensors can occur due to factors such as low battery power, biofouling of the probes, and sensor miscalibration. As noted by Ref. 7, there are a wide variety of anomaly types present within in-situ sensor data (e.g., high-variability, drift, spikes, shifts). Most anomaly detection methods used for water quality applications tend to target specific anomalies, such as sudden spikes or shifts.28-30 Detecting persistent anomalies, such as sensor drift and periods of abnormally high variability, remains very challenging for statistical and machine learning research. Such anomalies are often overlooked by distance- and kernel-based methods, yet must be detected before the data can be used, because they confound the assessment of status and trends in water quality. Understanding the relationships among, and typical behaviours of, water quality variables and how these differ among climate zones is thus an essential step in distinguishing anomalies from real water quality events.

We investigated the graph-based neural network model, GDN,15 for its ability to capture complex interdependencies between different variables in a semi-supervised manner. As such, GDN offered the ability to capture deviations from expected behaviour within a high-dimensional setting. We developed novel benchmarking data sets for subsequence anomaly detection (of variable length and type), with a range of spatio-temporal complexities, inspired by the statistical models recently used for river network data.9 Results showed that GDN tended to outperform the benchmarks in anomaly detection. We developed a model extension, GDN+, by adjusting the threshold calculation. GDN+ was shown to further improve performance. A replication study with multiple benchmarking data sets demonstrated consistency in these results. Sensor-based thresholds also proved useful in terms of identifying which neighbourhood the anomaly originated from in the simulation study.

We used a real-world case study of water level in the Herbert river, with non-stationary time series characterised by multiple river events caused by abnormal rainfall patterns. In this case, most of the anomalies appeared as flat-lining, due to the river drying out. Considering an individual time series, such anomalies may not appear obvious, as it is the failure to detect river events (in a multivariate setting) that is indicative of an anomalous sensor. Despite the challenges in the data, GDN+ was shown to successfully detect technical anomalies when river events occurred, and combined with a simple expert-based rule, GDN++, all anomalies were successfully detected.

There are two recent methodological extensions to the GDN approach. The first, the Fused Sparse Autoencoder and Graph Net,31 extends the GDN approach by augmenting the prediction-based loss with a reconstruction-based term arising from the output of a sparse autoencoder, and includes a further extension that allows the individual sensors to have multivariate data. The second is a probabilistic (latent variable) extension trained using variational inference.32 Since these were published contemporaneously with the present research, and only the latter provided accompanying research code, these approaches were not considered in this paper. Future extensions of this work could incorporate the above methodological developments, as well as build on the existing software package.

Other applications could consider the separation of river events from technical anomalies. In terms of interpretability, since the prediction model aggregates feature data from neighbouring sensors, anomalous data can affect the prediction of any other sensor within the neighbourhood. Therefore, large error scores originating from one sensor anomaly can infiltrate through to the entire neighbourhood, impairing the ability to attribute an anomaly to a sensor. The extensive body of literature on signal processing for source identification has the potential to inspire future solutions in addressing this issue.33,34

In summary, this work extends and examines the practicality of the GDN approach when applied to an environmental monitoring application of major international concern, with complex spatial and temporal interdependencies. Successfully addressing the challenge of anomaly detection in such settings can facilitate the wider adoption of in-situ sensors and could revolutionise the monitoring and management of air, soil, and water.

Conclusions

This work studied the application of Graph Deviation Network (GDN)-based approaches for anomaly detection in the challenging setting of river network data, which often features sensors that generate high-dimensional data with complex spatio-temporal relationships. We introduced alternative detection criteria for the model (GDN+/GDN++), and their practicality was explored on both real and simulated benchmark data. The findings indicated that GDN and its variants were effective in correctly (and conservatively) identifying anomalies. Benchmark data were generated via an approach that was also introduced in this paper, along with open-source software, and may prove useful in the development and testing of other anomaly-detection methods. In short, we found that graph neural network-based approaches to anomaly detection offer a flexible framework, able to capture and model non-standard, highly dynamic, complex relationships over space and time, with the ability to flag a variety of anomaly types. However, the task of anomaly detection on river network sensor data remains a considerable challenge.

Author contributions

KB; Investigation, Formal Analysis, Methodology, Software, Validation, Visualisation, Writing – Original Draft Preparation. RS; Methodology, Supervision, Writing – Review & Editing. KM; Funding Acquisition, Conceptualisation, Supervision, Writing – Review & Editing. ES; Data Curation, Visualisation, Writing – Review & Editing.

