Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis

*This article is an updated version of: Meng X, Obuchi T and Kabashima Y 2021 Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis, Advances in Neural Information Processing Systems vol 34, ed M Ranzato, A Beygelzimer, Y Dauphin, P S Liang and J Wortman Vaughan (New York: Curran Associates) pp 6290–303. (1 November 2022)
Record Type:
Journal Article
Title:
Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis
Main Title:
Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis
Abstract: We theoretically analyze the typical learning performance of ℓ1-regularized linear regression (ℓ1-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of ℓ1-LinR is obtained. Remarkably, despite the model misspecification, ℓ1-LinR is model-selection consistent with the same order of sample complexity as ℓ1-regularized logistic regression (ℓ1-LogR), i.e. M = O(log N), where N is the number of variables of the Ising model. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of ℓ1-LinR for moderate M, N, such as precision and recall. Simulations show fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on ℓ1-LinR, our method is readily applicable for precisely characterizing the typical learning performances of a wide class of ℓ1-regularized M-estimators, including ℓ1-LogR and interaction screening.
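The estimator the abstract analyzes can be illustrated with a minimal sketch, not the paper's implementation: each node's couplings are estimated by an ℓ1-penalized linear regression of its spin on all other spins, and edges are read off by thresholding the coefficients. All names and parameter values below (N, K, LAM, M, THR, the ring graph, the Gibbs-sampler settings) are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of l1-LinR for Ising model selection on a toy ring graph.
# Spins are sampled by single-site Gibbs sampling; one Lasso regression per
# node estimates its neighborhood. All parameter choices are illustrative.
import math
import random

random.seed(0)

N = 6        # number of Ising spins (toy size, illustrative)
K = 0.4      # coupling strength on true edges (paramagnetic regime here)
LAM = 0.08   # l1 regularization strength (illustrative choice)
M = 1000     # number of samples
THR = 0.1    # coefficient threshold for declaring an edge

# Ground-truth graph: a ring, so each node has exactly two neighbors.
true_edges = {tuple(sorted((i, (i + 1) % N))) for i in range(N)}
J = [[0.0] * N for _ in range(N)]
for i, j in true_edges:
    J[i][j] = J[j][i] = K

def gibbs_samples(n_samples, burn=500, thin=30):
    """Single-site Gibbs sampler for P(s) proportional to exp(sum J_ij s_i s_j)."""
    s = [random.choice([-1, 1]) for _ in range(N)]
    out = []
    for t in range(burn + n_samples * thin):
        i = random.randrange(N)
        h = sum(J[i][j] * s[j] for j in range(N))  # local field on spin i
        s[i] = 1 if random.random() < 1.0 / (1.0 + math.exp(-2.0 * h)) else -1
        if t >= burn and (t - burn) % thin == 0:
            out.append(list(s))
    return out

def lasso_cd(X, y, lam, n_iter=20):
    """Lasso by cyclic coordinate descent; spin columns satisfy X[r][k]**2 == 1,
    so each update is a plain soft-thresholding step."""
    m, d = len(y), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for k in range(d):
            # correlation of column k with the residual that excludes w[k]
            rho = sum(
                X[r][k] * (y[r] - sum(w[l] * X[r][l] for l in range(d) if l != k))
                for r in range(m)
            ) / m
            w[k] = math.copysign(max(abs(rho) - lam, 0.0), rho)
    return w

samples = gibbs_samples(M)

# Neighborhood estimation: one l1-LinR per node, then threshold coefficients.
est_edges = set()
for i in range(N):
    others = [j for j in range(N) if j != i]
    X = [[s[j] for j in others] for s in samples]
    y = [s[i] for s in samples]
    w = lasso_cd(X, y, LAM)
    for k, j in enumerate(others):
        if abs(w[k]) > THR:
            est_edges.add(tuple(sorted((i, j))))
```

This is the "misspecified" estimator the abstract refers to: the data come from an Ising model, yet a squared-loss linear regression is used instead of the correct logistic likelihood. On a ring the two neighbors of a node screen it from the rest of the graph, so thresholded coefficients recover the edge set; performance metrics such as precision and recall can then be computed by comparing `est_edges` with `true_edges`.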