- Published in Psychometrika
- http://rdcu.be/p3Vj (pre-print also on my website)
SEM 2 Symposium 2017
\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]
\[ \begin{aligned} \hat{\pmb{\Sigma}} &= \pmb{\Lambda} \pmb{\Psi} \pmb{\Lambda}^{\top} + \pmb{\Theta} \\ \begin{bmatrix} 2 & 1 & 1 & 1 \\ 1 & 2 & 1 & 1 \\ 1 & 1 & 2 & 1 \\ 1 & 1 & 1 & 2 \end{bmatrix} &= \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{aligned} \]
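The worked one-factor example above can be checked numerically. A minimal sketch in NumPy (rather than the R used elsewhere in these slides), with all four loadings fixed to 1, factor variance 1, and unit residual variances:

```python
import numpy as np

# One-factor model with four indicators:
# Sigma = Lambda Psi Lambda' + Theta
Lambda = np.ones((4, 1))   # factor loadings, all fixed to 1
Psi = np.array([[1.0]])    # factor variance
Theta = np.eye(4)          # residual variances

Sigma = Lambda @ Psi @ Lambda.T + Theta
print(Sigma)
# 2 on the diagonal, 1 everywhere else, as in the matrix above
```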
In network analysis, multivariate Gaussian data is modeled with the Gaussian Graphical Model (GGM): \[ \hat{\pmb{\Sigma}} = \pmb{\Delta} \left( \pmb{I} - \pmb{\Omega} \right)^{-1}\pmb{\Delta} \]
\[ \boldsymbol{\Omega} = \begin{bmatrix} 0 & \omega_{21} & 0 \\ \omega_{21} & 0 & \omega_{32} \\ 0 & \omega_{32} & 0 \end{bmatrix} \]
Sparse configurations of \(\pmb{\Omega}\) can often lead to dense configurations of \(\hat{\pmb{\Sigma}}\)
\[ \boldsymbol{\Omega} = \begin{bmatrix} 0 & 0.5 & 0\\ 0.5 & 0 & 0.5\\ 0 & 0.5 & 0\\ \end{bmatrix}, \pmb{\Delta} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]
Results in:
\[ \hat{\boldsymbol{\Sigma}} = \begin{bmatrix} 1.5 & 1 & 0.5 \\ 1 & 2 & 1\\ 0.5 & 1 & 1.5\\ \end{bmatrix} \]
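This sparse-to-dense behavior is easy to verify numerically. A quick check in NumPy (the slides otherwise use R) of \(\hat{\pmb{\Sigma}} = \pmb{\Delta} \left( \pmb{I} - \pmb{\Omega} \right)^{-1}\pmb{\Delta}\) with the chain network above:

```python
import numpy as np

# Sparse chain network Omega with Delta = I:
# Sigma = Delta (I - Omega)^{-1} Delta
Omega = np.array([[0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.5],
                  [0.0, 0.5, 0.0]])
Delta = np.eye(3)

Sigma = Delta @ np.linalg.inv(np.eye(3) - Omega) @ Delta
print(Sigma.round(2))
# [[1.5 1.  0.5]
#  [1.  2.  1. ]
#  [0.5 1.  1.5]]
```

Variables 1 and 3 are conditionally independent in \(\pmb{\Omega}\), yet marginally correlated in \(\hat{\pmb{\Sigma}}\).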
Augment Structural Equation Models (SEM) by modeling either the residual or latent covariances as a GGM: the variance-covariance matrices in a SEM can themselves be modeled as networks.
\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]
Residual networks: \[ \pmb{\Theta} = \pmb{\Delta}_{\pmb{\Theta}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Theta}} \right)^{-1} \pmb{\Delta}_{\pmb{\Theta}} \]
Latent networks: \[ \pmb{\Psi} = \pmb{\Delta}_{\pmb{\Psi}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Psi}} \right)^{-1} \pmb{\Delta}_{\pmb{\Psi}} \]
\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \left( \pmb{I} - \pmb{B} \right)^{-1} \pmb{\Psi} \left( \pmb{I} - \pmb{B} \right)^{-1\top} \pmb{\Lambda}^{\top} + \pmb{\Delta}_{\pmb{\Theta}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Theta}} \right)^{-1} \pmb{\Delta}_{\pmb{\Theta}} \]
\[ \hat{\pmb{\Sigma}} = \pmb{\Lambda} \pmb{\Delta}_{\pmb{\Psi}} \left( \pmb{I} - \pmb{\Omega}_{\pmb{\Psi}} \right)^{-1} \pmb{\Delta}_{\pmb{\Psi}} \pmb{\Lambda}^{\top} + \pmb{\Theta} \]
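To make the latent-network substitution concrete, here is a small numeric sketch in NumPy (the matrices are illustrative choices, not taken from the paper), with \(\pmb{B} = \pmb{0}\) so the \(\left( \pmb{I} - \pmb{B} \right)^{-1}\) terms drop out:

```python
import numpy as np

# Latent network model (LNM) implied covariance with B = 0:
# Sigma = Lambda Delta_psi (I - Omega_psi)^{-1} Delta_psi Lambda' + Theta
# Illustrative two-factor, four-indicator setup:
Lambda = np.array([[1.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0],
                   [0.0, 1.0]])
Omega_psi = np.array([[0.0, 0.3],
                      [0.3, 0.0]])   # one latent partial correlation
Delta_psi = np.eye(2)
Theta = 0.5 * np.eye(4)

# Latent covariance matrix implied by the network:
Psi = Delta_psi @ np.linalg.inv(np.eye(2) - Omega_psi) @ Delta_psi
Sigma = Lambda @ Psi @ Lambda.T + Theta
```

The single latent edge in \(\pmb{\Omega}_{\pmb{\Psi}}\) induces covariance between all indicators of the two factors.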
Two methods for exploratory estimation:

- step-wise search
- penalized maximum likelihood (lvglasso with EBIC)

Simulation studies (included in the paper) assess both methods.

The lvnet package implements these in the lvnetSearch and lvnetLasso functions, alongside the lvnet and lvglasso functions.

Installation:
library("devtools")
install_github("sachaepskamp/lvnet")
Load dataset:
# Load package:
library("lvnet")

# Load dataset:
library("lavaan")
data(HolzingerSwineford1939)
Data <- HolzingerSwineford1939[,7:15]
Setup lambda:
# Measurement model:
Lambda <- matrix(0, 9, 3)
Lambda[1:3,1] <- NA
Lambda[4:6,2] <- NA
Lambda[7:9,3] <- NA
Lambda
##      [,1] [,2] [,3]
## [1,]   NA    0    0
## [2,]   NA    0    0
## [3,]   NA    0    0
## [4,]    0   NA    0
## [5,]    0   NA    0
## [6,]    0   NA    0
## [7,]    0    0   NA
## [8,]    0    0   NA
## [9,]    0    0   NA
Fit model:
# Fit CFA model:
CFA <- lvnet(Data, lambda = Lambda)
CFA
##
## lvnet estimation completed:
##  - Chi-square (24) = 85.31, p = 0
##  - RMSEA = 0.09 (95% CI: 0.07 - 0.11)
##
## Use summary(object) to inspect more fitmeasures and parameter estimates (see ?summary.lvnet)
## Use plot(object) to plot estimated networks and factor structures (see ?plot.lvnet)
## Use lvnetCompare(object1, object2) to compare lvnet models (see ?lvnetCompare)
CFA fit is comparable to lavaan:
HS.model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
cfa(HS.model, data = HolzingerSwineford1939)
## lavaan (0.5-23.1097) converged normally after 35 iterations
##
##   Number of observations                           301
##
##   Estimator                                         ML
##   Minimum Function Test Statistic               85.306
##   Degrees of freedom                                24
##   P-value (Chi-square)                           0.000
Latent network:
# Latent network:
Omega_psi <- matrix(c(
  0, NA, NA,
  NA, 0, 0,
  NA, 0, 0
), 3, 3, byrow = TRUE)
Omega_psi
##      [,1] [,2] [,3]
## [1,]    0   NA   NA
## [2,]   NA    0    0
## [3,]   NA    0    0
# Fit model:
LNM <- lvnet(Data, lambda = Lambda, omega_psi = Omega_psi)

# Compare fit:
lvnetCompare(cfa = CFA, lnm = LNM)
##           Df      AIC      BIC     EBIC    Chisq Chisq diff Df diff
## Saturated  0       NA       NA       NA  0.00000         NA      NA
## cfa       24 7517.490 7595.339 7835.038 85.30552  85.305522      24
## lnm       25 7516.494 7590.637 7818.921 86.31009   1.004565       1
##             Pr(>Chisq)
## Saturated           NA
## cfa       8.502553e-09
## lnm       3.162085e-01
Exploratory search for latent network:
Res <- lvnetSearch(Data, lambda = Lambda, matrix = "omega_psi", verbose = FALSE)
Res$best$matrices$omega_psi
##           [,1]      [,2]      [,3]
## [1,] 0.0000000 0.4222516 0.4420685
## [2,] 0.4222516 0.0000000 0.0000000
## [3,] 0.4420685 0.0000000 0.0000000
Dataset from the psych package on the Big 5 personality traits. This dataset consists of 2800 observations of 25 items designed to measure the 5 central personality traits with 5 items per trait:
library("psych")

# Load BFI data:
data(bfi)
bfi <- bfi[,1:25]

# Lavaan model:
Mod <- '
  A =~ A1 + A2 + A3 + A4 + A5
  E =~ E1 + E2 + E3 + E4 + E5
  C =~ C1 + C2 + C3 + C4 + C5
  N =~ N1 + N2 + N3 + N4 + N5
  O =~ O1 + O2 + O3 + O4 + O5
'
# Transform lavaan model to lvnet model matrices:
mod <- lav2lvnet(Mod, bfi)

# Best RNM model:
RNM <- lvnetLasso(bfi, lambda = mod$lambda, lassoMatrix = "omega_theta",
                  nCores = 8, tuning.max = 0.5, nTuning = 100,
                  criterion = "ebic")
RNM_best <- RNM$best