This vignette is intended as an introduction to the usage of the kDGLM package, which offers routines for Bayesian analysis of Dynamic Generalized Linear Models, including fitting (filtering and smoothing), forecasting, sampling, intervention and automated monitoring, following the theory developed and/or explored in Kalman (1960), West and Harrison (1997) and Alves et al. (2024).
In this document we will focus exclusively on the usage of the package, touching on the theory behind these models only briefly and only to establish notation. We highly recommend that all users read the theoretical work (dos Santos et al., 2024) on which this package is based.
This document is organized in the following order:
In this section, we assume the user’s interest lies in analyzing a time series $\{\vec{Y}_t\}_{t=1}^T$ which adheres to the model described below:
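Concretely, the general structure can be sketched as follows (the notation is consistent with the component definitions given next; the evolution noise is taken here to have mean zero):

$$
\begin{aligned}
\vec{Y}_t | \eta_t &\sim \mathcal{F}(\eta_t),\\
g(\eta_t) &= \lambda_t = F_t'\theta_t,\\
\theta_t &= G_t\theta_{t-1} + \omega_t, \qquad \omega_t \sim \mathcal{N}_n(0, W_t).
\end{aligned}
$$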
The model comprises: the outcome Y⃗t, an r-dimensional vector of observations; the observational parameters ηt, an l-dimensional vector indexing the distribution ℱ; the linear predictors λt, a k-dimensional vector; the latent states θt, an n-dimensional vector; the known n × k design matrices Ft; the known n × n evolution matrices Gt; and the evolution noise ωt, with covariance matrix Wt.
We define 𝒟t as the cumulative information after observing the first t data points, with 𝒟0 denoting the knowledge about the process $\{\vec{Y}_t\}_{t=1}^T$ prior to any observation.
The specification of Wt follows West and Harrison (1997), section 6.3, where Wt = Var[Gtθt − 1|𝒟t − 1] ⊙ (1 − Dt) ⊘ Dt + Ht. Here, Dt (the discount matrix) is an n × n matrix with values between 0 and 1, ⊙ denotes the Hadamard (element-wise) product, and ⊘ denotes Hadamard division. Ht is another known n × n matrix specified by the user. This formulation implies that if all entries of Dt are equal to 1 and all entries of Ht are equal to 0, the model reduces to a Generalized Linear Model.
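As a quick illustration of the Hadamard operations above, the snippet below computes Wt for a small example in base R. The matrices G, C, D and H are made-up values chosen only for this sketch, not package defaults.

```r
# Minimal base-R illustration of the discount specification of W_t.
# All numeric values below are made up for the example.
G <- matrix(c(1, 1,
              0, 1), 2, 2, byrow = TRUE)  # evolution matrix (linear growth)
C <- matrix(c(0.50, 0.10,
              0.10, 0.20), 2, 2)          # Var[theta_{t-1} | D_{t-1}]
P <- G %*% C %*% t(G)                     # Var[G_t theta_{t-1} | D_{t-1}]
D <- matrix(0.95, 2, 2)                   # discount matrix (all entries 0.95)
H <- matrix(0, 2, 2)                      # extra known variance (none here)
W <- P * (1 - D) / D + H                  # '*' and '/' are element-wise in R
W
# With D filled with 1's and H filled with 0's, W would be a matrix of zeros,
# recovering a static (GLM-like) specification.
```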
A prototypical example within the general model framework is the Poisson model augmented with a dynamic level featuring linear growth and a single covariate X:
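Written out explicitly (a sketch consistent with the matrices Ft and Gt given next; the ω terms denote the evolution noise):

$$
\begin{aligned}
Y_t | \eta_t &\sim \mathrm{Poisson}(\eta_t),\\
\ln(\eta_t) &= \lambda_t = \mu_t + \beta_t X_t,\\
\mu_t &= \mu_{t-1} + \nu_{t-1} + \omega_{\mu,t},\\
\nu_t &= \nu_{t-1} + \omega_{\nu,t},\\
\beta_t &= \beta_{t-1} + \omega_{\beta,t}.
\end{aligned}
$$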
In this model, ℱ denotes the Poisson distribution; the model dimensions are r = k = l = 1; the state vector θt is (μt, νt, βt)′ with dimension n = 3; the link function g is the natural logarithm; and the matrices Ft and Gt are defined as:
$$ F_t=\begin{bmatrix} 1 \\ 0 \\ X_t \end{bmatrix} \quad G_t=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
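To make the roles of Ft and Gt concrete, the snippet below simulates one path from this Poisson model in base R. The initial states, evolution variances and covariate are made-up values, not package defaults.

```r
# Base-R simulation of the Poisson example: dynamic level with linear growth
# plus one covariate. All numeric values are made up for illustration.
set.seed(1)
n.obs <- 50
X <- rnorm(n.obs)                                # covariate
G <- matrix(c(1, 1, 0,
              0, 1, 0,
              0, 0, 1), 3, 3, byrow = TRUE)      # evolution matrix G_t
theta <- c(mu = 1, nu = 0.02, beta = 0.5)        # initial states (mu, nu, beta)
w.sd <- c(0.05, 0.01, 0)                         # evolution standard deviations
y <- numeric(n.obs)
for (t in 1:n.obs) {
  theta <- as.numeric(G %*% theta) + rnorm(3, 0, w.sd)  # state evolution
  F.t <- c(1, 0, X[t])                                  # design vector F_t
  lambda <- sum(F.t * theta)                            # linear predictor
  y[t] <- rpois(1, exp(lambda))                         # log link: rate = exp(lambda)
}
```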
Consider now a Normal model with unknown mean η1, t and unknown precision η2, t:
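Explicitly (again a sketch, consistent with the link function and matrices given next):

$$
\begin{aligned}
Y_t | \eta_{1,t}, \eta_{2,t} &\sim \mathcal{N}\left(\eta_{1,t}, \eta_{2,t}^{-1}\right),\\
\eta_{1,t} &= \lambda_{1,t} = \mu_{1,t} + \beta_t X_t,\\
\ln(\eta_{2,t}) &= \lambda_{2,t} = \mu_{2,t},\\
\mu_{1,t} &= \mu_{1,t-1} + \nu_{t-1} + \omega_{\mu_1,t}, \quad \nu_t = \nu_{t-1} + \omega_{\nu,t}, \quad \beta_t = \beta_{t-1} + \omega_{\beta,t},\\
\mu_{2,t} &= \phi_{t-1}\mu_{2,t-1} + \omega_{\mu_2,t}, \quad \phi_t = \phi_{t-1} + \omega_{\phi,t}.
\end{aligned}
$$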
For this case, ℱ represents the Normal distribution; the model dimensions are r = 1 and k = l = 2; the state vector θt is (μ1, t, νt, βt, μ2, t, ϕt)′ with dimension n = 5; the link function g and matrices Ft, Gt are:
$$ g\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)= \begin{bmatrix} x_1 \\ \ln(x_2) \end{bmatrix}\quad F_t=\begin{bmatrix} 1 & 0 \\ 0 & 0\\ X_t & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \quad G_t=\begin{bmatrix} 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & \phi & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$
This configuration introduces l = 2 observational parameters, necessitating k = 2 linear predictors. The first linear predictor pertains to the location parameter of the Normal distribution and includes a linear growth model and the covariate Xt. The second linear predictor, associated with the precision parameter, models log precision as an autoregressive (AR) process. We express this model in terms of an Extended Kalman Filter (Kalman, 1960; West and Harrison, 1997). This formulation aligns with the concept of a traditional Stochastic Volatility model, as highlighted by Alves et al. (2024).
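For concreteness, the sketch below builds Ft and Gt for this Normal specification in base R and evaluates the two linear predictors. The covariate, AR coefficient and state values are made up for illustration.

```r
# Base-R sketch of the design and evolution matrices of the Normal example.
# All numeric values are made up for illustration.
X.t <- 0.3
phi <- 0.9
theta <- c(mu1 = 1.5, nu = 0.01, beta = 0.4, mu2 = 2.0, phi = phi)
F.t <- cbind(c(1, 0, X.t, 0, 0),   # column 1: location predictor
             c(0, 0, 0, 1, 0))     # column 2: log-precision predictor
G.t <- diag(5)
G.t[1, 2] <- 1                     # linear growth on the level
G.t[4, 4] <- phi                   # AR(1) evolution of the log precision
lambda <- drop(t(F.t) %*% theta)   # the k = 2 linear predictors
mean.t <- lambda[1]                # eta_1: the mean
prec.t <- exp(lambda[2])           # eta_2: the precision (log link)
```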
Both the Normal and Poisson models illustrate univariate cases. However, the general model also accommodates multivariate outcomes, such as in the multinomial case. Consider a vector of counts Y⃗t = (Y1, t, Y2, t, Y3, t, Y4, t, Y5, t)′, where each Yi, t is a non-negative integer and $N_t=\sum_{i=1}^{5}Y_{i,t}$. The model is:
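Explicitly (a sketch consistent with the link function and matrices described next, using a random walk for each log odds):

$$
\begin{aligned}
\vec{Y}_t | N_t, \eta_t &\sim \mathrm{Multinomial}\left(N_t, \eta_{1,t}, \ldots, \eta_{5,t}\right),\\
\ln\left(\frac{\eta_{i,t}}{\eta_{5,t}}\right) &= \lambda_{i,t} = \mu_{i,t}, \qquad i = 1, \ldots, 4,\\
\mu_{i,t} &= \mu_{i,t-1} + \omega_{i,t}, \qquad i = 1, \ldots, 4.
\end{aligned}
$$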
In this multinomial model, ℱ is the Multinomial distribution; the model dimensions are r = 5, l = 5 and k = 4; the state vector θt is (μ1, t, μ2, t, μ3, t, μ4, t)′ with dimension n = 4; Ft and Gt are identity matrices of size 4 × 4; and the link function g maps ℝ5 to ℝ4 as:
$$ g\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}\right)= \begin{bmatrix} \ln\left(\frac{x_1}{x_5}\right) \\ \ln\left(\frac{x_2}{x_5}\right) \\ \ln\left(\frac{x_3}{x_5}\right) \\ \ln\left(\frac{x_4}{x_5}\right) \end{bmatrix} $$
Note that in the Multinomial distribution, ηi ≥ 0, ∀i and $\sum_{i=1}^{5} \eta_i=1$. Thus, only k = l − 1 linear predictors are necessary to describe this model.
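The snippet below implements this link function and its inverse in base R, using category 5 as the reference; the probability vector is a made-up example.

```r
# Base-R sketch of the multinomial link g and its inverse, with category 5
# as the reference. The probability vector eta is a made-up example.
g <- function(eta) log(eta[1:4] / eta[5])
g.inv <- function(lambda) {
  eta <- c(exp(lambda), 1)
  eta / sum(eta)                  # normalize so the probabilities sum to 1
}
eta <- c(0.10, 0.20, 0.30, 0.25, 0.15)
lambda <- g(eta)                  # the k = 4 linear predictors (log odds)
g.inv(lambda)                     # recovers the original probabilities
```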
It is important to emphasize that, although we have chosen to illustrate simple model structures, such as a random walk in the log odds of each outcome, neither the general model framework nor the kDGLM package is restricted to these configurations. Analysts have the flexibility to tailor models to their specific contexts, including the incorporation of additional latent states to better explain the outcome.
Lastly, this general model framework can be extended to encompass multiple outcome models. For further details, see Handling multiple outcomes.
Given the complexity of manually specifying all model components, the kDGLM package includes a range of auxiliary functions to simplify this process. The subsequent section delves into these tools.