lpmec is an R package that provides tools for analyzing latent variable models with measurement error correction, using bootstrapping techniques for inference.
Measurement error in latent predictors (e.g., ideology scores from survey responses, ability measures from test items) causes attenuation bias in regression coefficients—systematically biasing estimates toward zero.
lpmec implements split-sample instrumental variables and OLS correction methods to obtain consistent estimates when your predictor is measured with error.
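A quick base-R simulation (purely illustrative; the variable names are hypothetical) shows the attenuation: with a true coefficient of 1 and a predictor measured with reliability 0.5, OLS on the noisy predictor recovers roughly 0.5.

```r
set.seed(42)
n <- 5000
x_true <- rnorm(n)                    # latent predictor
x_obs  <- x_true + rnorm(n, sd = 1)   # observed with error (reliability = 0.5)
y      <- 1 * x_true + rnorm(n)       # true coefficient is 1

b_true <- unname(coef(lm(y ~ x_true))["x_true"])
b_obs  <- unname(coef(lm(y ~ x_obs))["x_obs"])
b_true  # close to 1
b_obs   # close to 0.5: attenuated toward zero
```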
Key features:
- Split-sample IV estimation for latent predictors
- Multiple estimation methods: EM (emIRT), PCA, MCMC (pscl/NumPyro)
- Bootstrap inference with confidence intervals
- Sensitivity analysis via sensemakr integration
Within an R session, you can install the development version of lpmec from GitHub with:
```r
# Install from GitHub
# install.packages("devtools")
devtools::install_github("cjerzak/lpmec-software", subdir = "lpmec")
```
```r
library(lpmec)

# Load the included example dataset
data(KnowledgeVoteDuty)

# Run correction with bootstrap
results <- lpmec(Y = KnowledgeVoteDuty$voteduty,
                 observables = as.matrix(KnowledgeVoteDuty[, 2:5]),
                 n_boot = 50)
print(results)
```
```r
# Latent Predictor Measurement Error Correction (LPMEC) Model Results
# -------------------------------------------------------------------
# Uncorrected Coefficient (OLS): X.XXX (SE: X.XXX)
# Corrected Coefficient: X.XXX (SE: X.XXX)
```

The correlation-corrected estimates computed by lpmec closely track the performance of the computationally intensive full Bayesian joint estimation, performing well across sample sizes (N) and numbers of indicators (M).
For advanced MCMC estimation, lpmec supports the NumPyro backend via Python. NumPyro leverages JAX’s automatic differentiation and JIT/XLA compilation (including optional GPU/TPU execution), which can make HMC/NUTS sampling faster and easier to parallelize than CPU-only MCMC backends. This is optional; the default pscl backend works without any Python setup.
To use the NumPyro backend:
- Install Python dependencies using the built-in setup function:

```r
library(lpmec)
build_backend()  # Creates a conda environment with JAX and NumPyro
```

- Alternative: manual installation, if you prefer to manage your own Python environment:

```bash
# Create and activate a conda environment
conda create -n lpmec python=3.10
conda activate lpmec

# Install JAX and NumPyro
pip install jax jaxlib numpyro
```

- Use the NumPyro backend in your analysis:
```r
results <- lpmec_onerun(
  Y = Yobs,
  observables = ObservablesMat,
  estimation_method = "mcmc",
  mcmc_control = list(backend = "numpyro")
)
```

Note: The NumPyro backend requires a working Python installation accessible via the reticulate package.
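Before running MCMC, you can check that R can see a usable Python and the required modules via reticulate directly (a sketch using standard reticulate functions; results depend on your environment):

```r
library(reticulate)

# Does reticulate find a usable Python installation?
py_available(initialize = TRUE)

# Are the MCMC dependencies importable from that Python?
py_module_available("jax")
py_module_available("numpyro")
```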
`lpmec_onerun` performs a single run of latent variable analysis with measurement error correction (no bootstrapping; a single split-sample partition):
```r
# Generate synthetic data
Yobs <- rnorm(1000)
ObservablesMat <- matrix(sample(c(0, 1), 1000 * 10, replace = TRUE), ncol = 10)

# One run of the latent error correction method
lpmec::lpmec_onerun(Y = Yobs,
                    observables = ObservablesMat)
```
The main function, `lpmec`, implements bootstrap inference for latent variable models with measurement error correction, averaging over `n_partition` split-sample partitions:
```r
# Generate synthetic data
Yobs <- rnorm(1000)
ObservablesMat <- matrix(sample(c(0, 1), 1000 * 10, replace = TRUE), ncol = 10)

# Latent error correction method, with partitioning and bootstrap
results <- lpmec::lpmec(
  Y = Yobs,
  observables = ObservablesMat,
  n_boot = 32L,
  n_partition = 10L
)

# View the corrected IV coefficient and its standard error
print(results)
```
lpmec ships with a small example dataset, KnowledgeVoteDuty, drawn from the 2024 American National Election Study. Load it with data(KnowledgeVoteDuty) to try the package on real survey responses.
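After installation, you can inspect the dataset before modeling (a sketch using standard base-R inspection functions; the exact column names are as shipped with the package):

```r
library(lpmec)
data(KnowledgeVoteDuty)

# Dimensions, column names, and types
str(KnowledgeVoteDuty)
head(KnowledgeVoteDuty)
```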
| Method | Description | Backend |
|---|---|---|
| `em` (default) | EM algorithm | emIRT |
| `pca` | Principal components | base R |
| `averaging` | Simple row means | base R |
| `mcmc` | Full Bayesian MCMC | pscl or NumPyro |
| `custom` | User-provided function | - |
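The method is selected via the `estimation_method` argument (shown for `"mcmc"` earlier). A sketch switching to PCA-based latent scores, assuming lpmec is installed:

```r
# Synthetic data
Yobs <- rnorm(1000)
ObservablesMat <- matrix(sample(c(0, 1), 1000 * 10, replace = TRUE), ncol = 10)

# Single run using principal-component scores for the latent predictor
res_pca <- lpmec::lpmec_onerun(
  Y = Yobs,
  observables = ObservablesMat,
  estimation_method = "pca"
)
```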
- Introduction Vignette: a complete walkthrough with real data
- Run `?lpmec` after installation for function documentation
Contributions to lpmec are welcome! Feel free to submit a pull request or open an issue.
We thank Guilherme Duarte, Jeff Lewis, Umberto Mignozzetti, Aaron Pancost, Erik Snowberg, Chris Tausanovitch, and participants of an MPSA panel for very helpful comments. We thank Major Valls for excellent research assistance.
Connor T. Jerzak and Stephen A. Jessee. Attenuation Bias with Latent Predictors. arXiv:2507.22218, 2025.
```bibtex
@misc{jerzak2025attenuationbiaslatentpredictors,
  title = {Attenuation Bias with Latent Predictors},
  author = {Connor T. Jerzak and Stephen A. Jessee},
  year = {2025},
  eprint = {2507.22218},
  archivePrefix = {arXiv},
  primaryClass = {stat.AP},
  url = {https://arxiv.org/abs/2507.22218},
}
```