Movie!
Data collected by Date C. Van der Veen, in collaboration with Harriette Riese and Renske Kroeze.
Feeling worthless interacts with feeling helpless
Feeling stressed interacts with feeling the need to do things
Central node: Feeling sad
Cycle of enjoyment, feeling sad, feeling worthless and being active
Having to do things leads to letting important things pass
Let \(\pmb{y}_t\) represent the length-\(P\) random vector of responses at time point \(t\) in the ESM design. We assume that \(\pmb{y}_t\) follows a stationary centered normal distribution: \[ \pmb{y}_t \sim N\left( \pmb{0}, \pmb{\Sigma} \right) \quad \forall t. \]
We use a lag-1 vector autoregression model: \[ \begin{aligned} \pmb{y}_{t} &= \pmb{B} \pmb{y}_{t-1} + \pmb{\varepsilon}_{t} \\ \pmb{\varepsilon}_{t} &\sim N\left( \pmb{0}, \pmb{K}^{-1} \right) \end{aligned} \] \(\pmb{B}\) is an asymmetric matrix encoding a directed network, and \(\pmb{K}\) is a symmetric matrix encoding an undirected network.
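As a concrete illustration, the lag-1 model can be simulated directly. The matrices below are arbitrary stable choices made up for this sketch, not estimates from the data:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 3  # number of variables in the ESM design

# Illustrative directed network: B need not be symmetric.
# Its spectral radius is below 1, so the process is stationary.
B = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])
K = np.eye(P)                  # precision matrix of the innovations
innov_cov = np.linalg.inv(K)   # K^{-1}, the innovation covariance

T = 1000
y = np.zeros((T, P))
for t in range(1, T):
    eps = rng.multivariate_normal(np.zeros(P), innov_cov)
    y[t] = B @ y[t - 1] + eps  # y_t = B y_{t-1} + eps_t
```

With these choices the simulated series stays bounded, consistent with stationarity.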
The stationary covariance matrix \(\pmb{\Sigma}\) can be obtained after jointly estimating \(\pmb{K}\) and \(\pmb{B}\):
\[ \mathrm{Vec}\left(\pmb{\Sigma}\right) = \left( \pmb{I} - \pmb{B} \otimes \pmb{B}\right)^{-1} \mathrm{Vec}\left(\pmb{\Theta}\right), \] in which \(\pmb{\Theta} = \pmb{K}^{-1}\) is the innovation covariance matrix.
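A minimal numpy sketch of this computation, using the column-stacking convention for \(\mathrm{Vec}\) and an arbitrary stable \(\pmb{B}\) with identity innovation covariance chosen purely for illustration:

```python
import numpy as np

P = 3
B = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])  # illustrative stable B
Theta = np.eye(P)                # innovation covariance K^{-1}, identity here

# vec(Sigma) = (I - B kron B)^{-1} vec(Theta), with column-stacking vec
vec_sigma = np.linalg.solve(np.eye(P * P) - np.kron(B, B),
                            Theta.flatten(order="F"))
Sigma = vec_sigma.reshape(P, P, order="F")

# Sigma solves the discrete Lyapunov equation Sigma = B Sigma B^T + Theta
assert np.allclose(Sigma, B @ Sigma @ B.T + Theta)
```

The final assertion verifies the stationarity equation that the vec identity rearranges; solving the linear system directly avoids forming the matrix inverse.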
The joint distribution of \(\pmb{y}_{t}\) and \(\pmb{y}_{t+1}\) can now be formulated as follows: \[ \begin{bmatrix} \pmb{y}_{t} \\ \pmb{y}_{t+1} \end{bmatrix} \sim N \left( \pmb{0}, \pmb{\Sigma}_{\mathrm{TP}} \right), \] in which \(\pmb{\Sigma}_{\mathrm{TP}}\) is the block Toeplitz matrix: \[ \pmb{\Sigma}_{\mathrm{TP}} = \begin{bmatrix} \pmb{\Sigma} & \pmb{\Sigma}\pmb{B}^\top\\ \pmb{B}\pmb{\Sigma} & \pmb{\Sigma} \end{bmatrix}. \]
The stationary differential entropy of the system is: \[ h\left( \pmb{y}_t \right) = \frac{1}{2} \log_2 \left( (2\pi e)^P \mid \pmb{\Sigma} \mid \right) \quad \forall t. \]
The joint differential entropy of \(\pmb{y}_t\) and \(\pmb{y}_{t+1}\) is identical except that it uses the Toeplitz matrix: \[ \begin{aligned} h\left( \pmb{y}_t , \pmb{y}_{t+1} \right) &= \frac{1}{2} \log_2 \left( (2\pi e)^{2P} \mid \pmb{\Sigma}_{\mathrm{TP}} \mid \right) \end{aligned} \]
Using these expressions, we can obtain the mutual information between \(\pmb{y}_t\) and \(\pmb{y}_{t+1}\), making use of \(h\left( \pmb{y}_t \right) = h\left( \pmb{y}_{t+1} \right)\): \[ \begin{aligned} I\left(\pmb{y}_t; \pmb{y}_{t+1} \right) &= 2 h\left( \pmb{y}_t \right) - h\left( \pmb{y}_t , \pmb{y}_{t+1} \right) \\ &= \frac{1}{2} \log_2 \left( \frac{ \mid \pmb{\Sigma} \mid^2 }{ \mid \pmb{\Sigma}_{\mathrm{TP}} \mid } \right) \end{aligned} \]
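This stability measure can be computed with log-determinants; `slogdet` avoids numerical under- or overflow for larger \(P\). The parameters below are illustrative, not estimates:

```python
import numpy as np

P = 3
B = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])  # illustrative stable B
Theta = np.eye(P)                # innovation covariance K^{-1}

# Stationary covariance from the vec identity
Sigma = np.linalg.solve(np.eye(P * P) - np.kron(B, B),
                        Theta.flatten(order="F")).reshape(P, P, order="F")

# Block Toeplitz covariance of (y_t, y_{t+1})
Sigma_TP = np.block([[Sigma,     Sigma @ B.T],
                     [B @ Sigma, Sigma      ]])

_, logdet_S = np.linalg.slogdet(Sigma)
_, logdet_TP = np.linalg.slogdet(Sigma_TP)

# I(y_t; y_{t+1}) = 0.5 * log2(|Sigma|^2 / |Sigma_TP|), in bits
stability = 0.5 * (2 * logdet_S - logdet_TP) / np.log(2)
```

Since mutual information is non-negative, `stability` is positive whenever \(\pmb{B}\) is non-zero.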
This measure encodes the stability of the system: the amount of information retained from one time point to the next.
The differential entropy of variable \(j\) at time point \(t\) equals: \[ h\left( y_{t,j} \right) = \frac{1}{2} \log_2\left( 2 \pi e \sigma_{jj} \right). \] To compute the joint differential entropy between \(y_{t,j}\) and \(\pmb{y}_{t+1}\), we need to form the block matrix \(\pmb{\Sigma}^{(j)}_{\mathrm{TP}}\): \[ \pmb{\Sigma}^{(j)}_{\mathrm{TP}} = \begin{bmatrix} \sigma_{jj} & \left(\pmb{\Sigma}\pmb{B}^\top\right)_{j,+} \\ \left(\pmb{B}\pmb{\Sigma}\right)_{+,j} & \pmb{\Sigma} \end{bmatrix}, \] in which \(\left(\pmb{\Sigma}\pmb{B}^\top\right)_{j,+}\) indicates the \(j\)th row-vector of \(\pmb{\Sigma}\pmb{B}^\top\) and \(\left(\pmb{B}\pmb{\Sigma}\right)_{+,j}\) the \(j\)th column-vector of \(\pmb{B}\pmb{\Sigma}\).
The mutual information between \(y_{t,j}\) and \(\pmb{y}_{t+1}\) becomes: \[ \begin{aligned} I\left(y_{t,j}; \pmb{y}_{t+1} \right) &= h\left( y_{t,j} \right) + h\left( \pmb{y}_{t+1} \right) - h\left( y_{t,j} , \pmb{y}_{t+1} \right) \\ &= \frac{1}{2} \log_2\left( \frac{ \sigma_{jj} \mid \pmb{\Sigma} \mid }{ \mid \pmb{\Sigma}^{(j)}_{\mathrm{TP}} \mid } \right). \end{aligned} \]
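A sketch of the per-variable computation, again with an arbitrary stable \(\pmb{B}\) chosen for illustration and \(j = 0\) picked arbitrarily:

```python
import numpy as np

P, j = 3, 0
B = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])  # illustrative stable B
Theta = np.eye(P)                # innovation covariance K^{-1}
Sigma = np.linalg.solve(np.eye(P * P) - np.kron(B, B),
                        Theta.flatten(order="F")).reshape(P, P, order="F")

# Block covariance of (y_{t,j}, y_{t+1}):
# top-left sigma_jj, top-right row j of Sigma B^T, bottom-left column j of B Sigma
Sigma_TP_j = np.block([[Sigma[j:j + 1, j:j + 1], (Sigma @ B.T)[j:j + 1, :]],
                       [(B @ Sigma)[:, j:j + 1], Sigma]])

_, logdet_S = np.linalg.slogdet(Sigma)
_, logdet_j = np.linalg.slogdet(Sigma_TP_j)

# I(y_{t,j}; y_{t+1}) = 0.5 * log2(sigma_jj |Sigma| / |Sigma_TP^(j)|), in bits
mi_j = 0.5 * (np.log(Sigma[j, j]) + logdet_S - logdet_j) / np.log(2)
```

Looping `j` over all variables gives the per-variable influence profile described above.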
This yields a measure that is comparable across variables and indicates the relative influence of each variable on the rest of the system at the next time point.
To investigate the unique influence of a variable on the system, we can investigate the mutual information between one variable and all other variables at the next time point, conditioned on all other variables at the current time point; \(I\left(y_{t,j}; \pmb{y}_{t+1} \mid \pmb{y}_{t}^{-(j)} \right)\):
\[ \begin{aligned} I\left(y_{t,j} ; \pmb{y}_{t+1} \mid \pmb{y}^{-(j)}_{t} \right) &= I\left(\pmb{y}_{t}; \pmb{y}_{t+1} \right) - I\left(\pmb{y}^{-(j)}_{t} ; \pmb{y}_{t+1} \right) \\ &= \frac{1}{2} \log_2\left( \frac{ \mid \pmb{\Sigma} \mid \mid \pmb{\Sigma}^{-(j)}_{\mathrm{TP}} \mid}{ \mid \pmb{\Sigma}_{\mathrm{TP}} \mid \mid \pmb{\Sigma}^{-(j)} \mid} \right) \end{aligned} \] in which \(\pmb{\Sigma}^{-(j)}\) denotes \(\pmb{\Sigma}\) with the \(j\)th row and column removed, and \(\pmb{\Sigma}^{-(j)}_{\mathrm{TP}}\) the corresponding block covariance matrix of \(\left(\pmb{y}^{-(j)}_{t}, \pmb{y}_{t+1}\right)\).
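This conditional mutual information can be sketched the same way; the parameters are illustrative, and the code cross-checks the determinant formula against the difference of the two unconditional quantities:

```python
import numpy as np

P, j = 3, 0
B = np.array([[0.4, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.1, 0.0, 0.5]])  # illustrative stable B
Theta = np.eye(P)                # innovation covariance K^{-1}
Sigma = np.linalg.solve(np.eye(P * P) - np.kron(B, B),
                        Theta.flatten(order="F")).reshape(P, P, order="F")

idx = [i for i in range(P) if i != j]       # indices with variable j removed
Sigma_mj = Sigma[np.ix_(idx, idx)]          # Sigma^{-(j)}
Sigma_TP = np.block([[Sigma, Sigma @ B.T],
                     [B @ Sigma, Sigma]])
Sigma_TP_mj = np.block([[Sigma_mj, (Sigma @ B.T)[idx, :]],
                        [(B @ Sigma)[:, idx], Sigma]])

ld = lambda M: np.linalg.slogdet(M)[1]      # log |M| for positive definite M

# 0.5 * log2( |Sigma| |Sigma_TP^{-(j)}| / (|Sigma_TP| |Sigma^{-(j)}|) )
cond_mi = 0.5 * (ld(Sigma) + ld(Sigma_TP_mj)
                 - ld(Sigma_TP) - ld(Sigma_mj)) / np.log(2)

# Cross-check against I(y_t; y_{t+1}) - I(y_t^{-(j)}; y_{t+1})
mi_full = 0.5 * (2 * ld(Sigma) - ld(Sigma_TP)) / np.log(2)
mi_minus = 0.5 * (ld(Sigma_mj) + ld(Sigma) - ld(Sigma_TP_mj)) / np.log(2)
assert np.isclose(cond_mi, mi_full - mi_minus)
```

Conditional mutual information is non-negative, so `cond_mi` should never fall meaningfully below zero.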
Level 1: \[ \pmb{y}_t^{(p)} = \pmb{B}^{(p)} \pmb{y}_ {t-1}^{(p)} + \pmb{\varepsilon}_t^{(p)} \]
Level 2: \[ \begin{aligned} \beta_{ij}^{(p)} &= b_{ij} + u^{(p)}_{ij} \\ u^{(p)}_{ij} &\sim N(0, \sigma_{ij}) \end{aligned} \] in which \(\beta^{(p)}_{ij}\) is entry \((i,j)\) of \(\pmb{B}^{(p)}\).
Level 1: \[ \pmb{y}_t^{(p)} = \pmb{B}_1^{(p)} \pmb{y}_ {t-1}^{(p)} + \pmb{B}_2^{(p)} \pmb{y}_ {t-2}^{(p)} + \pmb{\varepsilon}_t^{(p)} \]
Level 2: \[ \begin{aligned} \beta_{lij}^{(p)} &= b_{lij} + u^{(p)}_{lij} \\ u^{(p)}_{lij} &\sim N(0, \sigma_{lij}) \end{aligned} \] in which \(\beta^{(p)}_{lij}\) is entry \((i,j)\) of \(\pmb{B}_l^{(p)}\), \(l \in \{1, 2\}\).
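A hedged simulation sketch of the multilevel lag-1 model: each person's \(\pmb{B}^{(p)}\) is drawn around the fixed effects \(b_{ij}\). For simplicity, a single random-effect standard deviation is assumed for all entries (the model allows a separate \(\sigma_{ij}\) per entry), and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
P, n_persons, T = 3, 5, 200

# Fixed effects b_ij (population-level directed network), illustrative values
b = np.array([[0.3, 0.1, 0.0],
              [0.0, 0.2, 0.1],
              [0.1, 0.0, 0.3]])
sigma_u = 0.05  # shared random-effect SD; the model allows one sigma_ij per entry

data = []
for p in range(n_persons):
    B_p = b + rng.normal(0.0, sigma_u, size=(P, P))  # B^(p) = b + u^(p)
    y = np.zeros((T, P))
    for t in range(1, T):
        # unit-variance innovations for simplicity
        y[t] = B_p @ y[t - 1] + rng.normal(0.0, 1.0, size=P)
    data.append(y)
```

The lag-2 model extends this by drawing a second matrix \(\pmb{B}_2^{(p)}\) the same way and regressing on \(\pmb{y}^{(p)}_{t-2}\) as well.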
C = cheerful, E = pleasant event, W = worry, F = fearful, S = sad, R = relaxed
Bringmann, L. F., Vissers, N., Wichers, M., Geschwind, N., Kuppens, P., Peeters, F., … & Tuerlinckx, F. (2013). A network approach to psychopathology: New insights into clinical longitudinal data. PLoS ONE, 8(4), e60188.
Abegaz, F., & Wit, E. (2013). Sparse time series chain graphical models for reconstructing genetic networks. Biostatistics, kxt005.
Rothman, A. J., Levina, E., & Zhu, J. (2010). Sparse multivariate regression with covariance estimation. Journal of Computational and Graphical Statistics, 19(4), 947–962.
Wild, B., Eichler, M., Friederich, H.-C., Hartmann, M., Zipfel, S., & Herzog, W. (2010). A graphical vector autoregressive modelling approach to the analysis of electronic diary data. BMC Medical Research Methodology, 10(1), 28.