Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically up to a log-log factor when the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real-data examples validate our theory.

Consider change-point detection with unknown post-change parameters. A commonly used change-point detection method is the so-called CUSUM procedure, which can be derived from likelihood ratios. Assume that before the change the samples $X_i$ follow a distribution $f_{\theta_0}$, and after the change the samples $X_i$ follow another distribution $f_{\theta_1}$. The CUSUM procedure has a recursive structure: initialized with $W_0 = 0$, the likelihood-ratio statistic can be computed according to $W_{t+1} = \max\{W_t + \log(f_{\theta_1}(X_{t+1})/f_{\theta_0}(X_{t+1})),\ 0\}$.

Given a hypothesized change location $k$, the post-change samples are $X_{k+1}, \dots, X_t$; using these samples, one can form the MLE, denoted $\hat{\theta}_{k+1,t}$. Without knowing beforehand whether the change occurs and where it occurs, when forming the GLR statistic we have to maximize over all possible change locations $k$. The GLR statistic is given by $\max_{k < t} \sum_{i=k+1}^{t} \log\big(f_{\hat{\theta}_{k+1,t}}(X_i)/f_{\theta_0}(X_i)\big)$, and a change is announced whenever it exceeds a pre-specified threshold. The GLR statistic is more robust than CUSUM, and it is particularly useful when the post-change parameter may vary from one situation to another.

In simple cases, the MLE $\hat{\theta}_{k+1,t}$ may have a closed-form expression and may be evaluated recursively. For instance, when the post-change distribution is Gaussian with mean $\theta$, $\hat{\theta}_{k+1,t} = \big(\sum_{i=k+1}^{t} X_i\big)/(t-k)$, and $\hat{\theta}_{k+1,t+1} = \frac{t-k}{t-k+1}\,\hat{\theta}_{k+1,t} + \frac{1}{t-k+1}\,X_{t+1}$. However, in more complex situations, the MLE $\hat{\theta}_{k+1,t}$ in general does not have a recursive form and cannot be evaluated using simple summary statistics. One such instance is given in Section 1.2. Another instance is when there is a constraint on the MLE, such as sparsity. In these cases, one has to store historical data and recompute the MLE $\hat{\theta}_{k+1,t}$ whenever new data arrive, which is neither memory efficient nor computationally efficient. As a remedy for these cases, the window-limited GLR is usually considered, where only the past $w$ samples are stored and the maximization is restricted to $k \in (t-w, t]$.
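To make the recursions above concrete, here is a minimal Python (NumPy) sketch of the classical CUSUM recursion with a known post-change mean, together with a window-limited GLR for a Gaussian mean shift, where the MLE $\hat{\theta}_{k+1,t}$ is simply the average of the samples after the hypothesized change point. The function names, the unit-variance default, and the window size are illustrative assumptions, not part of the original text.

```python
import numpy as np


def cusum(x, theta0, theta1, threshold, sigma=1.0):
    """Classical CUSUM for a known Gaussian mean shift theta0 -> theta1.

    Recursion: W_0 = 0, W_{t+1} = max(W_t + log[f_{theta1}(x_{t+1}) / f_{theta0}(x_{t+1})], 0).
    Returns the first (1-based) time the statistic exceeds the threshold, or None.
    """
    w = 0.0
    for t, xt in enumerate(x, start=1):
        llr = ((xt - theta0) ** 2 - (xt - theta1) ** 2) / (2.0 * sigma ** 2)
        w = max(w + llr, 0.0)
        if w > threshold:
            return t
    return None


def window_limited_glr(x, theta0, threshold, window=100, sigma=1.0):
    """Window-limited GLR for a Gaussian mean shift with unknown post-change mean.

    For each candidate change point k in the window, the MLE of the post-change
    mean is the average of x_{k+1}, ..., x_t (computed directly here for clarity),
    and the GLR term sum_{i=k+1}^t log[f_{theta_hat}(x_i) / f_{theta0}(x_i)]
    collapses to (t - k) * (theta_hat - theta0)^2 / (2 * sigma^2).
    """
    for t in range(1, len(x) + 1):
        k_min = max(0, t - window)
        stats = []
        for k in range(k_min, t):
            theta_hat = np.mean(x[k:t])  # MLE from the (t - k) hypothesized post-change samples
            stats.append((t - k) * (theta_hat - theta0) ** 2 / (2.0 * sigma ** 2))
        if max(stats) > threshold:
            return t
    return None
```

Because the Gaussian MLE here is just a running average, each candidate's statistic could also be maintained recursively; the direct computation above is kept to mirror the formulas in the text.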
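The approach summarized in the abstract replaces the MLE inside the likelihood ratio with a non-anticipating estimate that is refreshed by an online convex optimization step as each sample arrives, so per-candidate memory stays constant. The sketch below is for the Gaussian mean model with a Euclidean mirror map, where online mirror descent reduces to a decreasing-step online gradient update; the step-size rule, window size, initialization at $\theta_0$, and the toy data are hypothetical choices for illustration, not the paper's prescription.

```python
import numpy as np


def adaptive_lr_detector(x, theta0, threshold, window=100, sigma=1.0):
    """Sketch: likelihood-ratio detection with non-anticipating estimators.

    For each candidate change point k kept in the window, x_t is scored against
    the post-change mean estimate built from x_{k+1}, ..., x_{t-1} only (it never
    uses x_t), and the estimate is then refreshed by an online gradient step on
    the loss -log f_theta(x_t) -- the Euclidean special case of online mirror
    descent for this model. Memory is O(window), independent of t.
    """
    stats = {}       # k -> accumulated log-likelihood ratio over i > k
    estimates = {}   # k -> current non-anticipating estimate of the post-change mean
    counts = {}      # k -> number of post-change samples seen so far
    for t, xt in enumerate(x, start=1):
        # open a new candidate change point k = t - 1; its estimate starts at theta0
        stats[t - 1], estimates[t - 1], counts[t - 1] = 0.0, float(theta0), 0
        # drop candidates that fall outside the window
        for k in [k for k in stats if k < t - window]:
            del stats[k], estimates[k], counts[k]
        for k in stats:
            est = estimates[k]
            # score x_t with the estimate formed before x_t arrived
            stats[k] += ((xt - theta0) ** 2 - (xt - est) ** 2) / (2.0 * sigma ** 2)
            # online gradient step: gradient of -log f_theta(x_t) is -(x_t - theta)/sigma^2,
            # step size sigma^2 / (n + 1) with n the number of post-change samples seen
            counts[k] += 1
            estimates[k] = est + (xt - est) / (counts[k] + 1.0)
        if max(stats.values()) > threshold:
            return t
    return None


if __name__ == "__main__":
    # Toy example: synthetic data whose mean shifts from 0 to 0.8 at time 500.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.8, 1.0, 500)])
    print("detected at t =", adaptive_lr_detector(x, theta0=0.0, threshold=25.0))
```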