IOPS 2015 Winter Conference

  • Both models imply a variance-covariance matrix \(\hat{\pmb{\Sigma}}\) that aims to closely resemble the sample variance-covariance matrix \(\pmb{S}\) while retaining positive degrees of freedom.

Structural Equation Modeling

\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]

  • \(\hat{\pmb{\Sigma}}\): model implied variance-covariance matrix
  • \(\pmb{\Lambda}\): matrix containing factor loadings
  • \(\pmb{B}\): matrix containing structural relationships between latents
  • \(\pmb{\Psi}\): variance-covariance matrix of latents or latent residuals
  • \(\pmb{\Theta}\): matrix containing residual variances and covariances
    • Usually diagonal!

\[ \begin{aligned} \hat{\pmb{\Sigma}} &= \pmb{\Lambda} \pmb{\Psi} \pmb{\Lambda}^{\top} + \pmb{\Theta} \\ \begin{bmatrix} 2 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 1 & 2 \end{bmatrix} &= \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{aligned} \]

  • Degrees of freedom: 2 (the 10 unique elements of \(\pmb{S}\) minus 8 free parameters)
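The worked example above can be checked with a few lines of base R matrix algebra (a minimal sketch; the matrices are the ones shown in the equation):

# One-factor example: four loadings of 1, latent variance 1, residual variances 1
Lambda <- matrix(1, 4, 1)
Psi    <- matrix(1, 1, 1)
Theta  <- diag(4)
Lambda %*% Psi %*% t(Lambda) + Theta
##      [,1] [,2] [,3] [,4]
## [1,]    2    1    1    1
## [2,]    1    2    1    1
## [3,]    1    1    2    1
## [4,]    1    1    1    2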

  • Inspiration and headache are independent after conditioning on attending IOPS

Gaussian Graphical Model

In network analysis, multivariate Gaussian data is modeled with the Gaussian Graphical Model (GGM): \[ \hat{\pmb{\Sigma}} = \pmb{\Delta} \left( \pmb{I} - \pmb{\Omega} \right)^{-1}\pmb{\Delta} \]

  • \(\pmb{\Delta}\) is a diagonal scaling matrix
  • \(\pmb{\Omega}\) is a symmetric matrix with \(0\) on the diagonal and partial correlation coefficients as off-diagonal elements
    • \(\omega_{ij} = \omega_{ji} = \mathrm{Cor}\left( Y_i, Y_j \mid \pmb{Y}^{-(i,j)} \right)\)
    • Encodes a network; there is no edge between node \(Y_i\) and \(Y_j\) if \(\omega_{ij}=0\)
    • A GGM is saturated if all off-diagonal elements in \(\pmb{\Omega}\) are non-zero

\[ \boldsymbol{\Omega} = \begin{bmatrix} 0 & \omega_{12} & 0\\ \omega_{12} & 0 & \omega_{23}\\ 0 & \omega_{23} & 0\\ \end{bmatrix} \]

Sparse configurations of \(\pmb{\Omega}\) can often lead to dense configurations of \(\hat{\pmb{\Sigma}}\)

\[ \boldsymbol{\Omega} = \begin{bmatrix} 0 & 0.5 & 0\\ 0.5 & 0 & 0.5\\ 0 & 0.5 & 0\\ \end{bmatrix}, \pmb{\Delta} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

Results in:

\[ \hat{\boldsymbol{\Sigma}} = \begin{bmatrix} 1.5 & 1 & 0.5 \\ 1 & 2 & 1\\ 0.5 & 1 & 1.5\\ \end{bmatrix} \]

  • If all nodes are connected, \(\hat{\pmb{\Sigma}}\) will be dense
  • Degrees of freedom in this example: 1 (6 unique elements of \(\pmb{S}\) minus 5 free parameters)
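This example can likewise be verified in base R; the last lines recover \(\pmb{\Omega}\) from \(\hat{\pmb{\Sigma}}\) as the sign-flipped, standardized inverse covariance matrix (a minimal sketch using only base functions):

# GGM example from above:
Omega <- matrix(c(  0, 0.5,   0,
                  0.5,   0, 0.5,
                    0, 0.5,   0), 3, 3, byrow = TRUE)
Delta <- diag(3)
Sigma <- Delta %*% solve(diag(3) - Omega) %*% Delta
Sigma
##      [,1] [,2] [,3]
## [1,]  1.5    1  0.5
## [2,]  1.0    2  1.0
## [3,]  0.5    1  1.5

# Recover the partial correlations: -cov2cor(solve(Sigma)); adding diag(3)
# zeroes the diagonal, round() removes floating point noise
round(-cov2cor(solve(Sigma)) + diag(3), 10)
##      [,1] [,2] [,3]
## [1,]  0.0  0.5  0.0
## [2,]  0.5  0.0  0.5
## [3,]  0.0  0.5  0.0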

SEM problem 1: local independence

  • Local independence is not plausible; psychological variables interact with each other
  • Thus, theoretically, we should expect a saturated model

SEM problem 2: exploratory search

Equivalent model

MacCallum, R. C., Wegener, D. T., Uchino, B. N., & Fabrigar, L. R. (1993). The problem of equivalent models in applications of covariance structure analysis. Psychological Bulletin, 114(1), 185.

More equivalent models


Networks problem 1: latent variables

  • If we cannot condition on \(\eta\), all of its indicators become connected in the network

Networks problem 2: Measurement error

Generalized Network Modeling

Augment Structural Equation Models (SEM) by modeling either the residual or the latent variance-covariance matrix as a Gaussian Graphical Model (GGM).

Implementing networks in SEM

The variance-covariance matrices in a SEM can be modeled as networks.

\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]

Residual networks: \[ \pmb{\Theta} = \pmb{\Delta}_{\pmb{\Theta}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Theta}} \right)^{-1} \pmb{\Delta}_{\pmb{\Theta}} \]

Latent networks: \[ \pmb{\Psi} = \pmb{\Delta}_{\pmb{\Psi}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Psi}} \right)^{-1} \pmb{\Delta}_{\pmb{\Psi}} \]
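For concreteness, the sketch below (base R; the function name impliedSigmaRNM is hypothetical and not part of lvnet) shows how a residual network plugs into the implied covariance matrix:

# Hypothetical helper: implied covariance matrix of a SEM in which the
# residual covariance matrix Theta is modeled as a GGM
impliedSigmaRNM <- function(Lambda, B, Psi, Delta_Theta, Omega_Theta) {
  I_lat <- diag(nrow(B))             # identity matrix for the latent variables
  I_obs <- diag(nrow(Omega_Theta))   # identity matrix for the observed variables
  Theta <- Delta_Theta %*% solve(I_obs - Omega_Theta) %*% Delta_Theta
  Binv  <- solve(I_lat - B)
  Lambda %*% Binv %*% Psi %*% t(Binv) %*% t(Lambda) + Theta
}

Modeling a latent network instead works the same way, with \(\pmb{\Psi}\) replaced by \(\pmb{\Delta}_{\pmb{\Psi}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Psi}} \right)^{-1} \pmb{\Delta}_{\pmb{\Psi}}\).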

Residual Network Modeling (RNM)

\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Delta}_{\pmb{\Theta}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Theta}} \right)^{-1} \pmb{\Delta}_{\pmb{\Theta}} \]

  • A network is formed on the residuals of the SEM
  • Model a network without having to assume that there are no unobserved common causes
  • Model a latent variable structure without the assumption of local independence

Exploratory estimation of network structure:

  • Start with empty residual network
  • Add and remove edges as long as it improves a criterion
    • AIC / BIC / \(\chi^2\) model comparison
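Schematically, this greedy search can be sketched as below (a simplified illustration, not the actual lvnetSearch implementation; criterion stands for any fit measure for which lower is better, such as BIC):

# Simplified sketch of a greedy stepwise search over network structures.
# 'criterion' fits the model for a given edge pattern and returns a value
# for which lower is better (e.g. BIC).
greedySearch <- function(nVar, criterion) {
  omega <- matrix(0, nVar, nVar)               # start with an empty network
  best  <- criterion(omega)
  repeat {
    improved <- FALSE
    for (i in 1:(nVar - 1)) for (j in (i + 1):nVar) {
      proposal <- omega
      proposal[i, j] <- proposal[j, i] <- 1 - proposal[i, j]  # toggle edge i--j
      value <- criterion(proposal)
      if (value < best) {                      # keep the change if fit improves
        omega    <- proposal
        best     <- value
        improved <- TRUE
      }
    }
    if (!improved) break                       # stop when no single change helps
  }
  omega                                        # edge pattern of the best model
}

The latent network search described below proceeds analogously, but starts from a saturated rather than an empty network.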

Latent Network Modeling (LNM)

\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \pmb{\Delta}_{\pmb{\Psi}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Psi}} \right)^{-1} \pmb{\Delta}_{\pmb{\Psi}} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]

  • Models conditional independence relations between latent variables as a network
  • Model networks between latent variables
  • Exploratory search for conditional independence relationships between latents

Exploratory estimation of network structure:

  • Start with saturated network (CFA)
  • Remove and re-add edges as long as it improves a criterion
    • AIC / BIC / \(\chi^2\) model comparison

lvnet package

  • Package website: http://github.com/SachaEpskamp/lvnet
  • Uses OpenMx for estimating confirmatory latent variable network models
    • lvnet function
  • Includes exploratory search algorithms
    • lvnetSearch function
  • Also includes earlier work on lvglasso: the lvglasso and EBIClvglasso functions

Installation:

library("devtools")
install_github("sachaepskamp/lvnet")

Load dataset:

# Load package:
library("lvnet")

# Load dataset:
library("lavaan")
data(HolzingerSwineford1939)
Data <- HolzingerSwineford1939[,7:15]

Setup lambda:

# Measurement model (NA entries are freely estimated factor loadings):
Lambda <- matrix(0, 9, 3)
Lambda[1:3,1] <- NA
Lambda[4:6,2] <- NA
Lambda[7:9,3] <- NA

Lambda
##       [,1] [,2] [,3]
##  [1,]   NA    0    0
##  [2,]   NA    0    0
##  [3,]   NA    0    0
##  [4,]    0   NA    0
##  [5,]    0   NA    0
##  [6,]    0   NA    0
##  [7,]    0    0   NA
##  [8,]    0    0   NA
##  [9,]    0    0   NA

Fit model:

# Fit CFA model:
CFA <- lvnet(Data, lambda = Lambda)
CFA
## ========== lvnet ANALYSIS RESULTS ========== 
## 
## Input: 
##  Model:           
##  Number of manifests:     9 
##  Number of latents:   3 
##  Number of parameters:    21 
##  Number of observations   301
## 
## Test for exact fit: 
##  Chi-square:      85.02211 
##  DF:          24 
##  p-value:         9.454934e-09
## 
## Information criteria: 
##  AIC:             7517.49 
##  BIC:             7595.339 
##  Adjusted BIC:        7528.739
## 
## Fit indices: 
##  CFI:             0.9306408 
##  NFI:             0.9071607 
##  TLI:             0.8959613 
##  RFI:             0.8607411 
##  IFI:             0.9315741 
##  RNI:             0.9306408 
##  RMR:             0.07502371 
##  SRMR:            0.05952381
## 
## RMSEA: 
##  RMSEA:           0.09206136 
##  90% CI lower bound:  0.07119651 
##  90% CI upper bound:  0.1134724 
##  p-value:         0.0007014875
## 
## Parameter estimates:
##    matrix row col  Estimate  Std.Error
## 1  lambda   1   1 0.8996194 0.08337841
## 2  lambda   2   1 0.4979396 0.08092451
## 3  lambda   3   1 0.6561561 0.07771175
## 4  lambda   4   2 0.9896926 0.05678272
## 5  lambda   5   2 1.1016034 0.06269117
## 6  lambda   6   2 0.9166001 0.05384385
## 7  lambda   7   3 0.6194740 0.07443683
## 8  lambda   8   3 0.7309486 0.07562726
## 9  lambda   9   3 0.6699795 0.07765860
## 10  theta   1   1 0.5490545 0.11926030
## 11  theta   2   2 1.1338393 0.10444129
## 12  theta   3   3 0.8443242 0.09524080
## 13  theta   4   4 0.3711728 0.04804356
## 14  theta   5   5 0.4462551 0.05803133
## 15  theta   6   6 0.3562025 0.04351329
## 16  theta   7   7 0.7993915 0.08771062
## 17  theta   8   8 0.4876971 0.09182149
## 18  theta   9   9 0.5661311 0.09074102
## 19    psi   1   2 0.4585095 0.06356880
## 20    psi   1   3 0.4705346 0.08637672
## 21    psi   2   3 0.2829850 0.07158951

CFA fit is comparable to lavaan:

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

cfa(HS.model, data=HolzingerSwineford1939)
## lavaan (0.5-18) converged normally after  35 iterations
## 
##   Number of observations                           301
## 
##   Estimator                                         ML
##   Minimum Function Test Statistic               85.306
##   Degrees of freedom                                24
##   P-value (Chi-square)                           0.000

Latent network:

# Latent network (NA = freely estimated edge, 0 = no edge):
Omega_psi <- matrix(c(
  0,NA,NA,
  NA,0,0,
  NA,0,0
),3,3,byrow=TRUE)
Omega_psi
##      [,1] [,2] [,3]
## [1,]    0   NA   NA
## [2,]   NA    0    0
## [3,]   NA    0    0

# Fit model:
LNM <- lvnet(Data, lambda = Lambda, omega_psi=Omega_psi)

# Compare fit:
lvnetCompare(cfa=CFA,lnm=LNM)
##           Df      AIC      BIC    Chisq Chisq diff Df diff   Pr(>Chisq)
## Saturated  0       NA       NA  0.00000         NA      NA           NA
## cfa       24 7517.490 7595.339 85.02211  85.022115      24 9.454934e-09
## lnm       25 7516.494 7590.637 86.02334   1.001227       1 3.170137e-01

Exploratory search for latent network:

Res <- lvnetSearch(Data, lambda = Lambda, 
            matrix = "omega_psi", verbose = FALSE)
Res$best$matrices$omega_psi
##           [,1]      [,2]      [,3]
## [1,] 0.0000000 0.4222517 0.4420686
## [2,] 0.4222517 0.0000000 0.0000000
## [3,] 0.4420686 0.0000000 0.0000000

  • This work is currently under review

Thank you for your attention!