Introduction

This vignette is intended as an introduction to the usage of the kDGLM package, which offers routines for Bayesian analysis of Dynamic Generalized Linear Models, including fitting (filtering and smoothing), forecasting, sampling, intervention and automated monitoring, following the theory developed and/or explored in Kalman (1960), West and Harrison (1997) and Alves et al. (2024).

In this document we focus exclusively on the usage of the package; the theory behind these models is mentioned only briefly, and only to establish notation. We highly recommend that all users read the theoretical work (dos Santos et al., 2024) on which this package is based.

This document is organized in the following order:

  1. First we introduce the notations and the class of models we will be dealing with;
  2. Next we present the details about the specification of the model structure, offering tools that allow for an easy, fast and (hopefully) intuitive way of defining models;
  3. In the following section we discuss details about how the user can specify the observational model;
  4. Then we present some basic examples of model fitting, also showing the auxiliary functions that help the user to analyse the fitted model. We also show tools for easy model selection;
  5. Lastly, we present a variety of advanced examples, combining the basic features shown in previous sections to create more complex models.

Notation

In this section, we assume the user’s interest lies in analyzing a time series $\{\vec{Y}_t\}_{t=1}^T$, which adheres to the model:

$$
\begin{aligned}
\vec{Y}_t|\vec{\eta}_t &\sim \mathcal{F}\left(\vec{\eta}_t\right),\\
g(\vec{\eta}_t) &= \vec{\lambda}_t = F_t'\vec{\theta}_t,\\
\vec{\theta}_t &= G_t\vec{\theta}_{t-1} + \vec{h}_t + \vec{\omega}_t, \qquad \vec{\omega}_t \sim \mathcal{N}_n(\vec{0}, W_t).
\end{aligned}
$$

The model comprises:

  • $\vec{Y}_t = (Y_{1,t}, \dots, Y_{r,t})'$, the outcome, is an $r$-dimensional vector of observed variables.
  • $\vec{\theta}_t = (\theta_{1,t}, \dots, \theta_{n,t})'$, the latent states, is an $n$-dimensional vector of unknown parameters, consistently dimensioned across observations.
  • $\vec{\lambda}_t = (\lambda_{1,t}, \dots, \lambda_{k,t})'$, the linear predictors, is a $k$-dimensional vector given by a linear transformation of the latent states. Per the model equations above, $\vec{\lambda}_t$ is assumed to be (approximately) Normally distributed at all times and corresponds directly to the observational parameters $\vec{\eta}_t$ through a one-to-one mapping $g$.
  • $\vec{\eta}_t = (\eta_{1,t}, \dots, \eta_{l,t})'$, the observational parameters, is an $l$-dimensional vector defining the model’s observational aspects. Typically $l = k$, but this may not hold in some special cases, such as the Multinomial model, where $k = l - 1$.
  • $\mathcal{F}$, a distribution from the Exponential Family indexed by $\vec{\eta}_t$, which pre-determines the values of $k$ and $l$, along with the link function $g$.
  • $g$, the link function, establishes a one-to-one correspondence between $\vec{\lambda}_t$ and $\vec{\eta}_t$.
  • $F_t$, the design matrix, is a user-defined, mostly known, matrix of size $n \times k$.
  • $G_t$, the evolution matrix, is a user-defined, mostly known, matrix of size $n \times n$.
  • $\vec{h}_t = (h_{1,t}, \dots, h_{n,t})'$, the drift, is a known $n$-dimensional vector, typically set to $\vec{0}$ except for model interventions (see the section on interventions).
  • $W_t$, a known covariance matrix of size $n \times n$, is specified by the user.

We define $\mathcal{D}_t$ as the cumulative information after observing the first $t$ data points, with $\mathcal{D}_0$ denoting the pre-observation knowledge of the process $\{\vec{Y}_t\}_{t=1}^T$.

The specification of $W_t$ follows West and Harrison (1997), section 6.3, where $W_t = Var[G_t\vec{\theta}_{t-1}|\mathcal{D}_{t-1}] \odot (1 - D_t) \oslash D_t + H_t$. Here, $D_t$ (the discount matrix) is an $n \times n$ matrix with values between $0$ and $1$, $\odot$ represents the Hadamard (elementwise) product, and $\oslash$ signifies Hadamard (elementwise) division. $H_t$ is another known $n \times n$ matrix specified by the user. This formulation implies that if all entries of $D_t$ are equal to $1$ and all entries of $H_t$ are equal to $0$, then $W_t = 0$, the latent states are static, and the model reduces to a Generalized Linear Model.
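To make the discount construction concrete, the snippet below computes $W_t$ in plain R for a hypothetical three-dimensional state. It is a minimal sketch of the formula above, not the package's internal implementation; `G`, `C_prev`, `D` and `H` are illustrative values, not kDGLM objects.

```r
# Minimal sketch of the discount-based specification of W_t (plain R, not kDGLM internals).
# C_prev stands in for Var[theta_{t-1} | D_{t-1}]; G, D and H are illustrative choices.
G      <- matrix(c(1, 1, 0,
                   0, 1, 0,
                   0, 0, 1), nrow = 3, byrow = TRUE)  # linear growth + static coefficient
C_prev <- diag(c(0.50, 0.10, 0.20))                   # posterior covariance at time t-1
D      <- matrix(0.95, nrow = 3, ncol = 3)            # discount factors (all 0.95)
H      <- matrix(0, nrow = 3, ncol = 3)               # extra evolution noise (none)

P <- G %*% C_prev %*% t(G)   # Var[G_t theta_{t-1} | D_{t-1}]
W <- P * (1 - D) / D + H     # Hadamard product and division are elementwise in R
W
```

With all discounts set to 0.95, each entry of $W_t$ is roughly 5% of the corresponding entry of $Var[G_t\vec{\theta}_{t-1}|\mathcal{D}_{t-1}]$, which is the usual way of letting the latent states drift slowly over time.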

A prototypical example within the general model framework is the Poisson model augmented with a dynamic level featuring linear growth and a single covariate $X$:

$$
\begin{aligned}
Y_t|\eta_t &\sim \text{Poisson}\left(\eta_t\right),\\
\ln(\eta_t) &= \lambda_t = \mu_t + \beta_t X_t,\\
\mu_t &= \mu_{t-1} + \nu_{t-1} + \omega_{\mu,t},\\
\nu_t &= \nu_{t-1} + \omega_{\nu,t},\\
\beta_t &= \beta_{t-1} + \omega_{\beta,t},
\end{aligned}
$$

where $\vec{\omega}_t = (\omega_{\mu,t}, \omega_{\nu,t}, \omega_{\beta,t})' \sim \mathcal{N}_3(\vec{0}, W_t)$ is the evolution noise.

In this model, $\mathcal{F}$ denotes the Poisson distribution; the model dimensions are $r = k = l = 1$; the state vector is $\vec{\theta}_t = (\mu_t, \nu_t, \beta_t)'$ with dimension $n = 3$; the link function $g$ is the natural logarithm; and the matrices $F_t$ and $G_t$ are defined as:

$$ F_t=\begin{bmatrix} 1 \\ 0 \\ X_t \end{bmatrix} \quad G_t=\begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
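As an illustration of how these components fit together, the following plain-R sketch builds $F_t$ and $G_t$ for this Poisson example and maps a hypothetical state vector into the linear predictor and the Poisson mean. The numerical values are arbitrary, and this is not how kDGLM stores these objects internally.

```r
# Illustrative construction of F_t and G_t for the Poisson example (plain R, not kDGLM code).
# theta_t = (mu_t, nu_t, beta_t)'; X_t and the state values below are arbitrary.
X_t <- 1.5
F_t <- matrix(c(1, 0, X_t), ncol = 1)              # n x k = 3 x 1 design matrix
G_t <- matrix(c(1, 1, 0,
                0, 1, 0,
                0, 0, 1), nrow = 3, byrow = TRUE)  # level/growth block + static coefficient

theta_prev <- c(mu = 0.20, nu = 0.05, beta = 0.30) # hypothetical state at time t-1
theta_t    <- G_t %*% theta_prev                   # one deterministic evolution step (omega_t = 0)
lambda_t   <- t(F_t) %*% theta_t                   # linear predictor lambda_t = F_t' theta_t
eta_t      <- exp(lambda_t)                        # inverse link: the Poisson mean
```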

Consider now a Normal model with unknown mean $\eta_{1,t}$ and unknown precision $\eta_{2,t}$:

$$
\begin{aligned}
Y_t|\eta_{1,t}, \eta_{2,t} &\sim \mathcal{N}\left(\eta_{1,t}, \eta_{2,t}^{-1}\right),\\
\eta_{1,t} &= \lambda_{1,t} = \mu_{1,t} + \beta_t X_t,\\
\ln(\eta_{2,t}) &= \lambda_{2,t} = \mu_{2,t},\\
\vec{\theta}_t &= G_t\vec{\theta}_{t-1} + \vec{\omega}_t, \qquad \vec{\omega}_t \sim \mathcal{N}_5(\vec{0}, W_t).
\end{aligned}
$$

For this case, $\mathcal{F}$ represents the Normal distribution; the model dimensions are $r = 1$ and $k = l = 2$; the state vector is $\vec{\theta}_t = (\mu_{1,t}, \nu_t, \beta_t, \mu_{2,t}, \phi_t)'$ with dimension $n = 5$; and the link function $g$ and the matrices $F_t$, $G_t$ are:

$$ g\left(\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\right)= \begin{bmatrix} x_1 \\ \ln(x_2) \end{bmatrix}\quad F_t=\begin{bmatrix} 1 & 0 \\ 0 & 0\\ X_t & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \quad G_t=\begin{bmatrix} 1 & 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & \phi & 0\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} $$

This configuration introduces $l = 2$ observational parameters, necessitating $k = 2$ linear predictors. The first linear predictor pertains to the location parameter of the Normal distribution and comprises a linear growth model plus the covariate $X_t$. The second linear predictor, associated with the precision parameter, models the log precision as an autoregressive (AR) process. Because the AR coefficient $\phi_t$ is itself a latent state appearing in $G_t$, this model is expressed in terms of an Extended Kalman Filter (Kalman, 1960; West and Harrison, 1997). This formulation aligns with the concept of a traditional Stochastic Volatility model, as highlighted by Alves et al. (2024).
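The same mechanics apply here: the sketch below (plain R, with arbitrary values, not kDGLM internals) shows how $F_t$ maps the five latent states into the two linear predictors, and how $g^{-1}$ recovers the mean and precision of the Normal outcome.

```r
# Sketch of the theta -> lambda -> eta mapping for the Normal example (plain R, not kDGLM code).
# theta_t = (mu1, nu, beta, mu2, phi)'; X_t and the state values are arbitrary.
X_t <- 1.5
F_t <- matrix(c(1, 0, X_t, 0, 0,    # column 1: linear predictor of the mean
                0, 0, 0,   1, 0),   # column 2: linear predictor of the log precision
              ncol = 2)             # n x k = 5 x 2 design matrix
theta_t  <- c(mu1 = 1.0, nu = 0.1, beta = 0.5, mu2 = -0.7, phi = 0.9)
lambda_t <- drop(t(F_t) %*% theta_t)                             # (eta_1, ln eta_2)
eta_t    <- c(mean = lambda_t[1], precision = exp(lambda_t[2]))  # inverse link g^{-1}
```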

Both the Normal and Poisson models illustrate univariate cases. However, the general model also accommodates multivariate outcomes, such as the multinomial case. Consider a vector of counts $\vec{Y}_t = (Y_{1,t}, Y_{2,t}, Y_{3,t}, Y_{4,t}, Y_{5,t})'$, with $Y_{i,t} \in \mathbb{Z}$ and $N_t=\sum_{i=1}^{5}Y_{i,t}$. The model is:

$$
\begin{aligned}
\vec{Y}_t|N_t, \vec{\eta}_t &\sim \text{Multinomial}\left(N_t, \vec{\eta}_t\right),\\
\ln\left(\frac{\eta_{i,t}}{\eta_{5,t}}\right) &= \lambda_{i,t} = \mu_{i,t}, \quad i = 1, \dots, 4,\\
\mu_{i,t} &= \mu_{i,t-1} + \omega_{i,t}, \quad i = 1, \dots, 4.
\end{aligned}
$$

In this multinomial model, $\mathcal{F}$ is the Multinomial distribution; the model dimensions are $r = 5$, $l = 5$ and $k = 4$; the state vector is $\vec{\theta}_t = (\mu_{1,t}, \mu_{2,t}, \mu_{3,t}, \mu_{4,t})'$; $F_t$ and $G_t$ are identity matrices of size $4 \times 4$; and the link function $g$ maps the $l = 5$ observational parameters into the $k = 4$ linear predictors as:

$$ g\left(\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}\right)= \begin{bmatrix} \ln\left(\frac{x_1}{x_5}\right) \\ \ln\left(\frac{x_2}{x_5}\right) \\ \ln\left(\frac{x_3}{x_5}\right) \\ \ln\left(\frac{x_4}{x_5}\right) \end{bmatrix} $$

Note that in the Multinomial distribution, $\eta_i \ge 0, \forall i$ and $\sum_{i=1}^{5} \eta_i=1$. Thus, only $k = l - 1$ linear predictors are necessary to fully describe this model.
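To see the link in action, here is a direct plain-R transcription of $g$ and its inverse for this five-category case (illustrative only; these are not the routines kDGLM uses internally).

```r
# The multinomial link g and its inverse, transcribed from the formula above
# (plain R illustration, not kDGLM internals).
g <- function(eta) log(eta[1:4] / eta[5])   # maps 5 probabilities to 4 log odds (5th as reference)
g_inv <- function(lambda) {
  e <- c(exp(lambda), 1)                    # reattach the reference category
  e / sum(e)                                # normalize back to probabilities summing to 1
}

eta    <- c(0.10, 0.20, 0.30, 0.25, 0.15)   # arbitrary probabilities
lambda <- g(eta)
all.equal(g_inv(lambda), eta)               # TRUE: g is one-to-one
```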

It is important to emphasize that while we have chosen to illustrate simple model structures, such as a random walk on the log odds of each outcome, neither the general model framework nor the kDGLM package is restricted to these configurations. Analysts have the flexibility to tailor models to their specific contexts, including the incorporation of additional latent states to enhance outcome explanation.

Lastly, this general model framework can be extended to encompass multiple outcome models. For further details, see Handling multiple outcomes.

Given the complexity of manually specifying all model components, the kDGLM package includes a range of auxiliary functions to simplify this process. The subsequent section delves into these tools.

References

Alves, M. B., Migon, H. S., Marotta, R., and dos Santos, S. V., Junior. (2024). K-parametric dynamic generalized linear models: A sequential approach via information geometry.
dos Santos, S. V., Junior, Alves, M. B., and Migon, H. S. (2024). kDGLM: An R package for Bayesian analysis of dynamic generalized linear models.
Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D), 35–45.
West, M., and Harrison, J. (1997). Bayesian forecasting and dynamic models (Springer Series in Statistics). Springer-Verlag.