The glmbayes package produces independent and
identically distributed (iid) posterior samples for Bayesian
generalized linear models. When the posterior is not available in closed
form (e.g., Poisson or binomial regression with Normal priors, Gaussian
regression with independent Normal-Gamma priors), the package uses
accept-reject sampling based on tight
enveloping functions. These envelopes are constructed
using the likelihood subgradient approach of Nygren and Nygren (2006).
This chapter provides a central overview of the envelope-related functions in the package. It consolidates the theoretical foundations, the function map, and the workflow so that users and developers can see how the pieces fit together. For implementation details of the full simulation pipeline, see Chapter A05. For envelope theory in specialized settings, see Chapter A06 (dispersion envelopes) and Chapter A07 (joint envelopes for independent Normal-Gamma priors).
Accept-reject sampling requires a proposal density \(q(\theta)\) and a constant \(a\) such that \(a \cdot q(\theta) \geq \pi(\theta \mid y)\) for all \(\theta\). The posterior \(\pi(\theta \mid y)\) is then sampled by: draw \(\theta \sim q\), accept with probability \(\pi(\theta \mid y) / (a \cdot q(\theta))\). The constant \(a\) controls efficiency: smaller \(a\) means fewer candidates per acceptance. The likelihood subgradient approach constructs tight envelopes, with \(a\) bounded above by \((2/\sqrt{\pi})^k\) for \(k\)-dimensional models when the likelihood is approximately normal (Nygren & Nygren 2006, Theorems 2 and 3). This ensures efficient iid sampling even in high dimensions.
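As a concrete illustration of the accept-reject step, the following standalone sketch (illustrative Python, not package code) samples a toy one-dimensional Poisson-style posterior with a \(\mathcal{N}(0,1)\) prior. The envelope here is deliberately crude — a single inflated normal proposal with \(\log a\) taken as the largest log-ratio on a grid — so the constant \(a\) is much larger than the subgradient envelopes the package constructs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D target (unnormalized posterior): Poisson-style likelihood for
# n = 3 observations with mean count y = 2, plus a N(0, 1) prior.
y, n = 2.0, 3.0
def log_post(t):
    return n * (y * t - np.exp(t)) - 0.5 * t**2

# Deliberately crude envelope: a normal proposal centered near the mode with
# an inflated scale, and log(a) taken as the largest log-ratio on a grid.
grid = np.linspace(-4.0, 4.0, 4001)
mode, scale = grid[np.argmax(log_post(grid))], 1.5
def log_q(t):
    return -0.5 * ((t - mode) / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi))

log_a = np.max(log_post(grid) - log_q(grid))

# Accept-reject: draw theta ~ q, accept with probability pi(theta|y) / (a q(theta)).
draws, proposals = [], 0
while len(draws) < 5000:
    t = rng.normal(mode, scale)
    proposals += 1
    if np.log(rng.uniform()) < log_post(t) - log_a - log_q(t):
        draws.append(t)
draws = np.asarray(draws)

print("acceptance rate:", len(draws) / proposals)
```

The acceptance rate here is well below what a tight envelope achieves; the point of the subgradient construction is to keep the expected candidates per accepted draw near the \((2/\sqrt{\pi})^k\) bound even as dimension grows.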
The envelope-based pipeline follows this sequence:
1. Reparameterize the model into standard form (glmb_Standardize_Model).
2. Determine the grid structure (EnvelopeSize, EnvelopeOpt).
3. Build the envelope (EnvelopeBuild).
4. Sort the envelope components (EnvelopeSort).
5. Sample via accept-reject (rNormalGLM_std or C++ equivalents).

For models with unknown dispersion, an additional dispersion envelope step (EnvelopeDispersionBuild) extends the coefficient envelope to a joint envelope over \((\beta, \phi)\). This is orchestrated by EnvelopeOrchestrator for the independent Normal-Gamma case.
The envelope construction is grounded in Nygren & Nygren (2006), “Likelihood Subgradient Densities,” Journal of the American Statistical Association. This section records the definitions and structural results from the paper that justify mixture envelopes and the \(3^p\) partition. For how these are implemented in glmbayes, see Chapter A05.
Let the posterior be \[ \pi(\theta \mid y) \propto f(y \mid \theta)\,\pi(\theta), \] where \(f(y \mid \theta)\) is the likelihood and \(\pi(\theta)\) is the prior density.
Definition 2. A density \(q(\cdot)\) is a generalized likelihood-subgradient density for \(\pi(\cdot\mid y)\) with prior \(\pi(\cdot)\) and likelihood \(f(y\mid\cdot)\) at \(\bar{\theta}\in\Theta\) if there exist a function \(g\) and a subgradient \(c(\bar{\theta})\) of \(-\log g\) at \(\bar{\theta}\) such that \[ q(\theta) = \frac{ \pi(\theta)\,\exp\!\big(-c(\bar{\theta})^{T}\theta\big) }{ \mathrm{MGF}\!\big(-c(\bar{\theta})\big) }, \] where \(\mathrm{MGF}\) denotes the moment-generating function of the prior.
Special cases (paper). If \(g\) is the likelihood, \(q\) is a likelihood-subgradient density; log-concave likelihoods (e.g., Poisson and logistic regression) are the main examples. If \(g\) is a constant upper bound on the likelihood, one obtains the Bayesian construction where the prior itself is viewed as a generalized likelihood-subgradient density.
Existence (paper Appendix). Likelihood-subgradient densities exist at every \(\bar{\theta}\) when the prior is a finite mixture of multivariate normals and the likelihood is log-concave. Mixture priors yield particularly simple algebraic forms (Claim 1 below).
Suppose \(\pi(\theta) = \sum_{i=1}^{k} p_i\,\pi_i(\theta\mid \mu_i,\Sigma_i)\) and \(c(\bar{\theta})\) is a subgradient for \(-\log g\) at \(\bar{\theta}\). Then \[ \mathrm{MGF}\!\big(-c(\bar{\theta})\big) = \sum_{i=1}^{k} p_i\, \exp\!\Big( -c(\bar{\theta})^T\mu_i + \tfrac12\,c(\bar{\theta})^T\Sigma_i\,c(\bar{\theta}) \Big), \] and the generalized likelihood-subgradient density is again a mixture of multivariate normals: \[ q(\theta) = \sum_{i=1}^{k} \tilde{p}_i\,\pi_i(\theta\mid \tilde{\mu}_i,\Sigma_i), \] with \[ \tilde{p}_i = \frac{ p_i\,\exp\!\big(-c(\bar{\theta})^T\mu_i + \tfrac12\,c(\bar{\theta})^T\Sigma_i\,c(\bar{\theta})\big) }{ \mathrm{MGF}\!\big(-c(\bar{\theta})\big) }, \qquad \tilde{\mu}_i = \mu_i - \Sigma_i\,c(\bar{\theta}). \]
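Claim 1 is easy to verify numerically. The sketch below (illustrative values for a one-dimensional two-component mixture prior and a subgradient value \(c\)) builds the tilted weights \(\tilde{p}_i\) and shifted means \(\tilde{\mu}_i\), then checks pointwise that the resulting mixture equals \(\pi(\theta)\,\exp(-c\theta)/\mathrm{MGF}(-c)\):

```python
import numpy as np

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Two-component normal mixture prior (illustrative numbers).
p  = np.array([0.3, 0.7])          # mixture weights p_i
mu = np.array([-1.0, 2.0])         # component means mu_i
s2 = np.array([0.5, 1.5])          # component variances Sigma_i (1-D)
c  = 0.8                           # a subgradient value c(theta_bar)

# Claim 1: MGF(-c), tilted weights, and shifted means.
mgf_terms = p * np.exp(-c * mu + 0.5 * s2 * c**2)
mgf = mgf_terms.sum()
p_tilde = mgf_terms / mgf
mu_tilde = mu - s2 * c

x = np.linspace(-6, 6, 9)
q = sum(pt * norm_pdf(x, mt, np.sqrt(v)) for pt, mt, v in zip(p_tilde, mu_tilde, s2))
prior = sum(pi_ * norm_pdf(x, m, np.sqrt(v)) for pi_, m, v in zip(p, mu, s2))

# The tilted mixture equals pi(theta) * exp(-c theta) / MGF(-c) pointwise.
print(np.max(np.abs(q - prior * np.exp(-c * x) / mgf)))
```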
Define \[ a(\bar{\theta}) = \frac{ g(\bar{\theta})\,\mathrm{MGF}\!\big(-c(\bar{\theta})\big) }{ f(y)\,\exp\!\big(-c(\bar{\theta})^T\bar{\theta}\big) }, \] and \[ h_{\bar{\theta}}(\theta) = \frac{ \exp\!\big(-c(\bar{\theta})^T\bar{\theta}\big)\,f(y\mid\theta) }{ \exp\!\big(-c(\bar{\theta})^T\theta\big)\,g(\bar{\theta}) }. \]
Then Theorem 1 states \[ a(\bar{\theta})\,q_{\bar{\theta}}(\theta) \ge a(\bar{\theta})\,h_{\bar{\theta}}(\theta)\,q_{\bar{\theta}}(\theta) = \pi(\theta\mid y), \] with \[ 0 \le h_{\bar{\theta}}(\theta) \le 1 \quad \forall \theta\in\Theta. \]
If \(f(y\mid\bar{\theta}) = g(\bar{\theta})\), then \(h_{\bar{\theta}}(\bar{\theta})=1\), so the envelope touches the posterior at tangency.
Operationally, \(a(\bar{\theta})\) governs acceptance efficiency: the acceptance probability is \(1/a(\bar{\theta})\).
Following Nygren & Nygren (2006, Section 3.3), the envelope construction assumes the model has been reparameterized into standard form. In this form:
Example 2 in the paper illustrates the special case of a zero-mean normal prior with identity covariance. In this setting:
The generalized likelihood-subgradient density at a tangency point \(\bar{\theta}\) has mean vector \(-c(\bar{\theta})\) and covariance \(I\).
The normalizing integrals over restricted sets factorize across dimensions. Specifically, for a rectangular set \(A = \{\theta : l_L \le \theta \le l_U\}\), we have
\[ \int_{\theta \in A} q^{\bar{\theta}}(\theta)\, d\theta \;=\; \prod_{r=1}^{p} \Big[ \Phi\!\big(l_{U,r} + c_{r}(\bar{\theta})\big) - \Phi\!\big(l_{L,r} + c_{r}(\bar{\theta})\big) \Big], \]
and the truncated expectation in coordinate \(r\) is
\[ \mathbb{E}_{\tilde{q}^{\bar{\theta}}}[\theta_r \mid \theta \in A] = -\,c_{r}(\bar{\theta}) + \frac{ \phi\!\big(l_{L,r} + c_{r}(\bar{\theta})\big) - \phi\!\big(l_{U,r} + c_{r}(\bar{\theta})\big) }{ \Phi\!\big(l_{U,r} + c_{r}(\bar{\theta})\big) - \Phi\!\big(l_{L,r} + c_{r}(\bar{\theta})\big) }. \]
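These closed forms are straightforward to verify numerically. The sketch below (illustrative values for \(c\), \(l_L\), \(l_U\) in one coordinate, where the unrestricted density is \(\mathcal{N}(-c, 1)\)) checks the restricted mass and truncated mean against midpoint-rule integration:

```python
from math import erf, exp, pi, sqrt

def Phi(z): return 0.5 * (1.0 + erf(z / sqrt(2.0)))
def phi(z): return exp(-0.5 * z * z) / sqrt(2.0 * pi)

c, lL, lU = 0.7, -0.5, 1.2        # one coordinate: q is N(-c, 1) on [lL, lU]

# Closed forms from the text.
mass_cf = Phi(lU + c) - Phi(lL + c)
mean_cf = -c + (phi(lL + c) - phi(lU + c)) / mass_cf

# Brute-force midpoint-rule check.
n = 20000
xs = [lL + (lU - lL) * (i + 0.5) / n for i in range(n)]
w = (lU - lL) / n
dens = [phi(x + c) for x in xs]    # N(-c, 1) density at x is phi(x + c)
mass_num = sum(dens) * w
mean_num = sum(x * d for x, d in zip(xs, dens)) * w / mass_num

print(mass_cf, mass_num)          # should agree
print(mean_cf, mean_num)
```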
These closed-form expressions explain why EnvelopeBuild
evaluates logU, loglt, and logrt
using univariate normal CDFs and densities, rather than numerical
integration. The gradients (cbars) directly determine the
shifted means of the restricted densities, and the separability across
dimensions makes the grid-based construction computationally
tractable.
Models with Zellner’s \(g\)-priors are essentially in standard form, since in the whitened design space both the prior and likelihood precisions are diagonal. Each dimension still needs to be scaled so that the prior precision is exactly the identity matrix. For other models, standard form can be achieved by reparameterization (e.g. via Cholesky of the posterior precision) or by shifting part of the prior quadratic form into the likelihood.
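One ingredient of that reparameterization — whitening the prior — can be sketched numerically. With prior precision \(P = LL^{T}\) (a hypothetical \(2\times 2\) matrix below; this is not the package's glmb_Standardize_Model code), the change of variables \(\tilde{\theta} = L^{T}\theta\) turns a \(\mathcal{N}(0, P^{-1})\) prior into \(\mathcal{N}(0, I)\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior precision (illustrative, not from the package API).
P = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Cholesky factor: P = L @ L.T. Reparameterize theta_tilde = L.T @ theta.
L = np.linalg.cholesky(P)

# A N(0, P^{-1}) draw maps to a N(0, I) draw in the new coordinates,
# since Cov(L.T theta) = L.T P^{-1} L = I.
theta = rng.multivariate_normal(np.zeros(2), np.linalg.inv(P), size=50000)
theta_tilde = theta @ L            # rows are (L.T @ theta_i).T

print(np.cov(theta_tilde.T))      # approximately the identity
```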
Following Remark 5 in Nygren & Nygren (2006), each unrestricted likelihood-subgradient density \(q_{\bar{\theta}}(\cdot)\) can be restricted to a subset \(A \subset \Theta\). The restricted density is defined as
\[ \tilde{q}_{\bar{\theta}}(\theta) = \frac{ q_{\bar{\theta}}(\theta)\,\mathbf{1}_{\{\theta \in A\}} } { \int_{\theta' \in A} q_{\bar{\theta}}(\theta')\, d\theta' }, \] and the corresponding constant is
\[ \tilde{a}(\bar{\theta}) = a(\bar{\theta})\, \int_{\theta \in A} q_{\bar{\theta}}(\theta)\, d\theta, \] where \(a(\bar{\theta})\) is the global normalizing constant from Theorem 1. For every \(\theta \in A\), the identity
\[ \tilde{a}(\bar{\theta})\, h_{\bar{\theta}}(\theta)\, \tilde{q}_{\bar{\theta}}(\theta) = \pi(\theta \mid y) \] holds, ensuring that the restricted densities reproduce the posterior when combined.
The constant \(a(\bar{\theta})\) is defined in Theorem 1 as
\[ a(\bar{\theta}) = \frac{g(\bar{\theta})\,\mathrm{MGF}\!\big(-c(\bar{\theta})\big)} {f(y)\,\exp\!\big(-c(\bar{\theta})^{T}\bar{\theta}\big)}, \] where \(g(\bar{\theta})\) is the reference density at the tangency point, \(c(\bar{\theta})\) is a subgradient of \(-\log g\) at \(\bar{\theta}\), and \(\mathrm{MGF}(-c(\bar{\theta}))\) is the moment-generating function of the prior evaluated at \(-c(\bar{\theta})\).
The envelope function \(h_{\bar{\theta}}(\theta)\) is defined as
\[ h_{\bar{\theta}}(\theta) = \frac{\exp\!\big(-c(\bar{\theta})^{T}\bar{\theta}\big)\,f(y\mid\theta)} {\exp\!\big(-c(\bar{\theta})^{T}\theta\big)\,g(\bar{\theta})}, \] and satisfies
\[ 0 \le h_{\bar{\theta}}(\theta) \le 1 \quad \forall\,\theta \in \Theta, \qquad h_{\bar{\theta}}(\bar{\theta}) = 1 \quad \text{if } f(y\mid\bar{\theta}) = g(\bar{\theta}). \]
In the standardized model (zero-mean normal prior with identity covariance and diagonal posterior precision at the mode), the restricted integral \(\int_{A} q_{\bar{\theta}}(\theta)\, d\theta\) factorizes across dimensions and can be evaluated in closed form using normal CDFs:
\[ \int_{\theta \in A} q_{\bar{\theta}}(\theta)\, d\theta = \prod_{r=1}^{p} \Big[ \Phi\!\big(l_{U,r} + c_{r}(\bar{\theta})\big) - \Phi\!\big(l_{L,r} + c_{r}(\bar{\theta})\big) \Big]. \]
In standardized models, where the prior is \(\mathcal{N}(0, I)\), the moment-generating function simplifies to
\[ \mathrm{MGF}\big(-c(\bar{\theta})\big) = \exp\left(\tfrac{1}{2} c(\bar{\theta})^{T} c(\bar{\theta})\right), \] so the constant \(a(\bar{\theta})\) from Theorem 1 becomes
\[ a(\bar{\theta}) = \frac{g(\bar{\theta})\,\exp\left(\tfrac{1}{2} c(\bar{\theta})^{T} c(\bar{\theta})\right)} {f(y)\,\exp\!\big(-c(\bar{\theta})^{T} \bar{\theta}\big)}. \]
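The MGF simplification can be sanity-checked by Monte Carlo: for a \(\mathcal{N}(0, I)\) prior, the empirical mean of \(\exp(-c^{T}z)\) over prior draws \(z\) should match \(\exp(\tfrac12 c^{T}c)\) (illustrative sketch with an arbitrary subgradient vector \(c\)):

```python
import numpy as np

rng = np.random.default_rng(2)

c = np.array([0.6, -0.4, 0.2])               # an arbitrary subgradient vector
z = rng.standard_normal((400000, c.size))    # draws from the N(0, I) prior

mgf_mc = np.exp(z @ (-c)).mean()             # Monte Carlo estimate of MGF(-c)
mgf_cf = np.exp(0.5 * c @ c)                 # closed form exp(c'c / 2)

print(mgf_mc, mgf_cf)
```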
This is why EnvelopeBuild computes and stores
logU, loglt, and logrt using
univariate normal CDF evaluations. The constants \(\tilde{a}(\bar{\theta})\) are then obtained
by scaling the global constant with these integrals, and the mixture
weights (PLSD) are normalized accordingly. In practice:

- EnvelopeSet_Grid_C2_pointwise evaluates restricted densities at each grid point.
- LLconst stores the log of the restricted integrals.
- EnvelopeSet_LogP_C2 computes \(\tilde{a}(\bar{\theta})\) and normalizes mixture weights.

Claim 2 in Nygren & Nygren (2006) shows that the posterior density \(\pi(\theta \mid y)\) can be expressed as a mixture of restricted likelihood-subgradient densities. Let \(A_1, \dots, A_m\) be a partition of the parameter space \(\Theta\), and define
\[ \tilde{q}(\theta) = \sum_{i=1}^{m} \tilde{p}_i \, q^{\bar{\theta}_i}_{A_i}(\theta), \qquad \tilde{p}_i = \frac{ \tilde{a}_i }{ \sum_{j=1}^{m} \tilde{a}_j }. \]
In Remark 6 of Nygren & Nygren (2006), the mixture weights \(\tilde{p}_i\) for each restricted likelihood-subgradient density \(q^{\bar{\theta}_i}_{A_i}\) are defined as
\[ \tilde{p}_i = \frac{ g(\bar{\theta}_i)\,\mathrm{MGF}\!\left(-c(\bar{\theta}_i)\right)\, \dfrac{ \int_{\theta \in A_i} q^{{\bar{\theta}_i}}(\theta)\, d\theta }{ \exp\!\left(-c(\bar{\theta}_i)^{\top}\bar{\theta}_i\right) } }{ \displaystyle\sum_{j=1}^{m} g(\bar{\theta}_j)\,\mathrm{MGF}\!\left(-c(\bar{\theta}_j)\right)\, \dfrac{ \int_{\theta \in A_j} q^{{\bar{\theta}_j}}(\theta)\, d\theta }{ \exp\!\left(-c(\bar{\theta}_j)^{\top}\bar{\theta}_j\right) } }. \]
This expression reflects the full normalization of the mixture, where each \(\tilde{p}_i\) is proportional to the restricted constant \(\tilde{a}_i(\bar{\theta}_i)\) from Remark 5, and the denominator sums over all such constants across the partition. The resulting mixture \(\tilde{q}^{\bar{\theta}}(\theta)\) is a valid approximation to the posterior density \(\pi(\theta \mid y)\).
Remark 6 emphasizes that these mixture weights are tractable to compute in standardized models. When the prior is \(\mathcal{N}(0, I)\) and the posterior precision is diagonal at the mode, each integral \(\int_{A_i} q^{\bar{\theta}_i}(\theta)\, d\theta\) factorizes across dimensions and can be evaluated using normal CDFs:
\[ \int_{\theta\in A_i} q^{\bar{\theta}_i}(\theta)\, d\theta = \prod_{r=1}^{p} \Bigl[ \Phi\!\bigl(l_{U,r}^{(i)} + c_{r}(\bar{\theta}_i)\bigr) - \Phi\!\bigl(l_{L,r}^{(i)} + c_{r}(\bar{\theta}_i)\bigr) \Bigr], \]
where \(l_{L}^{(i)}\) and \(l_{U}^{(i)}\) are the bounds defining region \(A_i\).
This tractability is central to the envelope construction. It allows
EnvelopeBuild to compute:
- LLconst, which stores the log of each restricted integral \(\log \int_{A_i} q^{\bar{\theta}_i}(\theta)\, d\theta\).
- PLSD, which stores the normalized mixture weights \(\tilde{p}_i\).

These quantities are used to construct the combined density \(\tilde{q}_{\bar{\theta}}(\theta)\) and to evaluate the envelope approximation to the posterior. Because all components are normalized and tractable, the mixture is both valid and computationally efficient.
The envelope function \(h_{\bar{\theta}}(\theta)\) is defined in Theorem 1 as
\[ h_{\bar{\theta}}(\theta) = \frac{\exp\!\big(-c(\bar{\theta})^{T} \bar{\theta}\big)\,f(y \mid \theta)} {\exp\!\big(-c(\bar{\theta})^{T} \theta\big)\,g(\bar{\theta})}. \]
When \(g(\bar{\theta}) = f(y \mid \bar{\theta})\), this simplifies to
\[ h_{\bar{\theta}}(\theta) = \exp\!\big( c(\bar{\theta})^{T}(\theta - \bar{\theta}) \big) \cdot \frac{f(y \mid \theta)}{f(y \mid \bar{\theta})}. \]
Taking logarithms yields
\[ \log h_{\bar{\theta}}(\theta) = c(\bar{\theta})^{T}(\theta - \bar{\theta}) + \log f(y \mid \theta) - \log f(y \mid \bar{\theta}), \]
which is tractable as long as the log-likelihood \(\log f(y \mid \theta)\) is. In particular, if the log-likelihood is concave or piecewise affine, then \(\log h_{\bar{\theta}}(\theta)\) inherits that structure and can be efficiently evaluated across grid regions.
This tractability is central to the envelope construction: it allows
EnvelopeBuild to evaluate the envelope function pointwise
using EnvelopeSet_Grid_C2_pointwise, and ensures that the
resulting approximation remains bounded between 0 and 1. At the tangency
point, we recover
\[ h_{\bar{\theta}}(\bar{\theta}) = 1, \] confirming that the envelope touches the posterior density exactly.
The key inequality that ensures envelope dominance follows from the subgradient inequality for convex functions. If \(\log f(y \mid \theta)\) is concave and \(c(\bar{\theta})\) is a subgradient of the convex function \(-\log f(y \mid \theta)\) at \(\bar{\theta}\), then
\[ \log f(y \mid \theta) \le \log f(y \mid \bar{\theta}) - c(\bar{\theta})^{T}(\theta - \bar{\theta}), \]
which implies
\[ \log h_{\bar{\theta}}(\theta) \le 0, \qquad h_{\bar{\theta}}(\theta) \le 1. \]
This inequality guarantees that the envelope function dominates the posterior density pointwise, as required by Theorem 1. Equality holds at the tangency point \(\theta = \bar{\theta}\), where
\[ h_{\bar{\theta}}(\bar{\theta}) = 1. \]
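A quick numerical check of this dominance, following Definition 2's convention that \(c(\bar{\theta})\) is a subgradient of the negative log-likelihood (illustrative one-observation Poisson likelihood, hypothetical values for \(y\) and \(\bar{\theta}\)):

```python
import numpy as np

y, theta_bar = 3.0, 0.5

def loglik(t):                      # Poisson log-likelihood for one count
    return y * t - np.exp(t)        # (up to an additive constant)

# c(theta_bar) is a subgradient of -log f, i.e. minus the gradient of log f.
c = -(y - np.exp(theta_bar))

theta = np.linspace(-5, 5, 2001)
log_h = c * (theta - theta_bar) + loglik(theta) - loglik(theta_bar)

print(log_h.max())                  # 0 at theta = theta_bar, negative elsewhere
```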
Remark 11. If the log-likelihood is concave and twice continuously differentiable, a Cholesky reparameterization from the posterior precision at the unique mode achieves diagonal posterior Hessian at the mode.
Remark 12. For any positive definite prior precision \(P\), there exists diagonal \(D\succ 0\) such that \(P-D\succ 0\).
Remark 13. With \(P\) and \(D\) as in Remark 12, shifting \((P-D)\) from the prior quadratic form into the likelihood leaves the posterior unchanged: the same \(\pi(\theta\mid y)\) arises from prior precision \(D\) and an augmented log-likelihood.
Remark 14. If the prior covariance is diagonal and the posterior Hessian at the mode is diagonal, the model can be reparameterized into standard form.
Remark 15. Any model with Normal prior and log-concave, twice differentiable log-likelihood can be reparameterized into standard form.
Remark 7. Sampling from restricted Normal laws can be done by inverse transform (Fishman 1999).
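The inverse-transform approach of Remark 7 can be sketched as follows (illustrative Python using the standard library's NormalDist, not the package's pnorm_ct/rnorm_ct routines; interval and mean are hypothetical):

```python
from statistics import NormalDist
import random

nd = NormalDist()                  # standard normal
random.seed(7)

mu, lo, hi = -0.8, 0.0, 1.5        # N(mu, 1) restricted to [lo, hi]

Fa, Fb = nd.cdf(lo - mu), nd.cdf(hi - mu)

def rnorm_trunc():
    # Inverse transform: map U(0, 1) into the CDF mass of the interval.
    u = random.random()
    return mu + nd.inv_cdf(Fa + u * (Fb - Fa))

draws = [rnorm_trunc() for _ in range(20000)]
print(min(draws), max(draws))      # all draws lie inside [lo, hi]
```

Note that this naive version loses accuracy when the interval sits in an extreme tail (where Fa and Fb are nearly equal or underflow), which is exactly the regime the package's numerically careful restricted-CDF routines are designed for.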
Remark 8. When intervals lie in extreme Gaussian tails, accurate evaluation of normal CDFs (or their logs) with uniformly small relative error matters; standard approximation theory and tables are discussed in Hart et al. (1968), and normal distribution properties relevant to tail behavior appear in Bryc (2002).
Empirically (paper), both a single optimally placed likelihood-subgradient component and two-interval partitions can become loose as sample size grows; the three-interval construction does not exhibit the same deterioration.
Let \(\theta^\ast\) be the posterior mode. Define (univariate case) \[ \omega = \frac{ \sqrt{2} - \exp\!\big(-1.20491 - 0.7321\,(0.5 - \partial^2\log f(\theta^\ast\mid y)/\partial\theta^2)\big) }{ 1 - \partial^2\log f(\theta^\ast\mid y)/\partial\theta^2 }, \] \[ \ell_1=\theta^\ast-0.5\,\omega,\quad \ell_2=\theta^\ast+0.5\,\omega, \] regions \(A_1=(-\infty,\ell_1)\), \(A_2=[\ell_1,\ell_2]\), \(A_3=(\ell_2,\infty)\), and tangencies \(\bar{\theta}_1=\theta^\ast-\omega\), \(\bar{\theta}_2=\theta^\ast\), \(\bar{\theta}_3=\theta^\ast+\omega\).
For \(N\) i.i.d. Normal\((\theta,1)\) observations, \(-\partial^2\log f(\theta^\ast\mid y)/\partial\theta^2 = N\).
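The univariate construction can be written directly from the formulas above. The sketch below evaluates \(\omega\), the interval bounds, and the tangency points for the \(N\) i.i.d. Normal\((\theta,1)\) case, where \(\partial^2\log f(\theta^\ast\mid y)/\partial\theta^2 = -N\) (hypothetical values \(\theta^\ast = 1\), \(N = 10\)):

```python
from math import sqrt, exp

def three_point_grid(theta_star, neg_hess):
    """Univariate three-interval construction; neg_hess is
    -d^2 log f(theta*|y) / d theta^2 (= N for N iid Normal(theta, 1) obs)."""
    d2 = -neg_hess                      # the second derivative itself
    omega = (sqrt(2.0) - exp(-1.20491 - 0.7321 * (0.5 - d2))) / (1.0 - d2)
    l1, l2 = theta_star - 0.5 * omega, theta_star + 0.5 * omega
    tangents = (theta_star - omega, theta_star, theta_star + omega)
    return omega, (l1, l2), tangents

omega, (l1, l2), tang = three_point_grid(theta_star=1.0, neg_hess=10.0)
print(omega, (l1, l2), tang)
```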
Theorem 2. With \(\tilde{a}^\ast(N)\) the combined constant at sample size \(N\), \[ \lim_{N\to\infty} \tilde{a}^\ast(N) = \frac{2}{\sqrt{\pi}}. \] The paper gives a proof sketch using symmetry \(\tilde{a}_1(\bar{\theta}_1)=\tilde{a}_3(\bar{\theta}_3)\) and appendix claims.
Let \(\theta^\ast\) be the unique posterior mode. For each coordinate \(i\), define \[ \omega_i = \frac{ \sqrt{2} - \exp\!\big(-1.20491 - 0.7321\,(0.5 - \partial^2\log f(\theta^\ast\mid y)/\partial\theta_i^2)\big) }{ 1 - \partial^2\log f(\theta^\ast\mid y)/\partial\theta_i^2 }, \] then \(\ell_{i,1}=\theta^\ast_i-0.5\,\omega_i\), \(\ell_{i,2}=\theta^\ast_i+0.5\,\omega_i\), and \[ A_{i,1}=(-\infty,\ell_{i,1}),\quad A_{i,2}=[\ell_{i,1},\ell_{i,2}],\quad A_{i,3}=(\ell_{i,2},\infty). \] Let \(j=(j_1,\ldots,j_p)\in\{1,2,3\}^p\) and \(A^\ast_j=\prod_{i=1}^p A_{i,j_i}\). The \(A^\ast_j\) partition \(\mathbb{R}^p\).
With \(C_{j1}=\{i:j_i=1\}\), \(C_{j2}=\{i:j_i=2\}\), \(C_{j3}=\{i:j_i=3\}\), tangency \(\bar{\theta}_j\) has coordinates \(\theta^\ast_i-\omega_i\), \(\theta^\ast_i\), or \(\theta^\ast_i+\omega_i\) according as \(i\) lies in \(C_{j1}\), \(C_{j2}\), or \(C_{j3}\).
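Enumerating the \(3^p\) cells and their tangency points is only a few lines; the sketch below uses illustrative values for \(\theta^\ast\) and the per-coordinate widths \(\omega_i\) in \(p = 2\) dimensions:

```python
from itertools import product

# Hypothetical 2-D setup: mode theta_star and per-coordinate widths omega_i.
theta_star = [0.5, -1.0]
omega = [0.3, 0.4]

cells = []
for j in product((1, 2, 3), repeat=len(theta_star)):
    bounds, tangent = [], []
    for i, ji in enumerate(j):
        l1 = theta_star[i] - 0.5 * omega[i]
        l2 = theta_star[i] + 0.5 * omega[i]
        if ji == 1:       # left tail in coordinate i
            bounds.append((float("-inf"), l1)); tangent.append(theta_star[i] - omega[i])
        elif ji == 2:     # central interval
            bounds.append((l1, l2));            tangent.append(theta_star[i])
        else:             # right tail
            bounds.append((l2, float("inf")));  tangent.append(theta_star[i] + omega[i])
    cells.append((j, bounds, tangent))

print(len(cells))   # 3^p = 9 rectangular regions, each with its tangency point
```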
Remark 16. The resulting mixture generalized likelihood-subgradient density is a mixture of restricted multivariate Normal laws for which a direct sampling procedure exists.
For \(k\)-dimensional Normal regression in standard form, the three-interval construction yields \[ \tilde{a}\le \left(\frac{2}{\sqrt{\pi}}\right)^k. \]
For other log-concave models, posterior asymptotic normality suggests analogous practical tightness as the posterior concentrates.
| Simfunction | When | Envelope role |
|---|---|---|
| rNormal_reg | Non-Gaussian families (Poisson, binomial, Gamma, etc.) with Normal prior | EnvelopeBuild for coefficients |
| rindepNormalGamma_reg | Gaussian with independent Normal-Gamma prior | EnvelopeOrchestrator -> EnvelopeBuild + EnvelopeDispersionBuild |
| rGamma_reg | Gamma family with Gamma prior on dispersion | Dispersion envelope (see Chapter A06) |
For conjugate cases (e.g., Gaussian likelihood with Normal or Normal-Gamma prior), no envelope is constructed; direct sampling is used.
The following functions are invoked in sequence during envelope-based sampling:
1. glmb_Standardize_Model - Transforms the model into standard form. Required before EnvelopeBuild.
2. EnvelopeSize - Determines the grid structure (number of tangency points per dimension). Calls EnvelopeOpt when Gridtype = 2.
3. EnvelopeBuild - Constructs the envelope: tangency points, gradients, mixture weights, and sampling constants. This is the main envelope construction function.
4. EnvelopeSort - Reorders envelope components by probability mass to improve sampling efficiency.
5. EnvelopeSetGrid, EnvelopeSetLogP - Low-level helpers that compute log-densities and constants for the envelope. Typically called internally by EnvelopeBuild or the C++ routines.
6. rNormalGLM_std - Samples from the posterior using the constructed envelope via accept-reject. Used for non-Gaussian GLMs.
7. EnvelopeDispersionBuild - For Gaussian regression with independent Normal-Gamma priors, refines the coefficient envelope to a joint envelope over \((\beta, \phi)\). Called by EnvelopeOrchestrator.
8. EnvelopeOrchestrator - High-level coordinator that runs EnvelopeBuild, EnvelopeDispersionBuild, and EnvelopeSort for the independent Normal-Gamma case. Used inside the C++ sampler.
| Function | Purpose | Typically called by | Theory chapter |
|---|---|---|---|
| glmb_Standardize_Model | Reparameterize to standard form | .rNormalGLM_cpp, C++ routines | A05 |
| EnvelopeSize | Choose grid size per dimension | EnvelopeBuild (or C++ equivalent) | A05, A08 |
| EnvelopeOpt | Adaptive grid optimization (Gridtype 2) | EnvelopeSize | A05 |
| EnvelopeBuild | Build coefficient envelope | rNormal_reg, EnvelopeOrchestrator | A05, A08 |
| EnvelopeSort | Sort envelope components | EnvelopeBuild, EnvelopeOrchestrator | A08 |
| EnvelopeSetGrid | Compute grid log-densities | EnvelopeBuild (internal) | - |
| EnvelopeSetLogP | Compute mixture constants | EnvelopeBuild (internal) | - |
| EnvelopeDispersionBuild | Joint envelope for \((\beta, \phi)\) | EnvelopeOrchestrator | A07 |
| EnvelopeOrchestrator | Coordinate build + dispersion + sort | rindepNormalGamma_reg C++ | A07, A08 |
| rNormalGLM_std | Sample from standardized model | rNormal_reg (non-Gaussian path) | A05 |
| EnvelopeEval | Evaluate envelope at points | Diagnostics, validation | - |
Users typically do not call these functions directly; they are
invoked by glmb(), lmb(),
rglmb(), or rlmb() through the simfunctions.
The Envelope component of the returned object (when
applicable) contains the constructed envelope and can be inspected for
diagnostics.
The Gridtype argument (passed via glmb(),
rglmb(), etc.) controls how many tangency points are used
per dimension. This trades off envelope build cost
against sampling efficiency.
| Gridtype | Behavior | Expected candidates per draw |
|---|---|---|
| 1 | Static threshold: use single-point envelope if \(\sqrt{1+a_i} \leq 2/\sqrt{\pi}\) | \(\prod_i \sqrt{1+a_i}\) or \((2/\sqrt{\pi})^k\) |
| 2 | Adaptive via EnvelopeOpt: choose per dimension | Optimized blend |
| 3 | Always three-point envelope in every dimension | \((2/\sqrt{\pi})^k\) |
| 4 | Always single-point at posterior mode | \(\prod_i \sqrt{1+a_i}\) |
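The tradeoff between the two bounds can be compared directly for hypothetical per-dimension constants \(a_i\); this sketch only illustrates the arithmetic in the table, not the package's exact decision rule:

```python
from math import sqrt, pi

bound3 = 2.0 / sqrt(pi)                # per-dimension three-point bound, ~1.128

# Hypothetical per-dimension constants a_i (k = 5 dimensions).
a = [0.05, 0.10, 0.30, 0.60, 1.20]

single = 1.0
for ai in a:                           # Gridtype 4: one tangent in every dim
    single *= sqrt(1.0 + ai)
three = bound3 ** len(a)               # Gridtype 3: three tangents in every dim

# Per dimension, the single-point bound sqrt(1 + a_i) wins only for small a_i.
for ai in a:
    print(ai, "single" if sqrt(1.0 + ai) <= bound3 else "three")

print(single, three)                   # expected candidates per accepted draw
```

For these values the three-point envelope needs fewer candidates overall, even though the single-point envelope is tighter in the two best-conditioned dimensions; this is the kind of per-dimension comparison Gridtype 2 automates.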
Gridtype 2 is the default. It uses
EnvelopeOpt to balance build time (more tangents = more
cost) against acceptance rate (more tangents = tighter envelope = fewer
candidates). The n_envopt argument influences the effective
sample size used in this optimization; larger values encourage tighter
envelopes.
For large models, use_opencl = TRUE can accelerate
EnvelopeBuild by evaluating gradients on the GPU. See
Chapter A10 for OpenCL usage.
The envelope-based samplers rely on restricted (truncated) distributions. Candidates are drawn from normals restricted to intervals (e.g., left tail, center, right tail) and from truncated Gamma distributions.
- Normal_ct (pnorm_ct, rnorm_ct): Restricted Normal CDF and random generation. Used when sampling from mixture components that correspond to intervals on each coordinate. Critical when the truncation interval is narrow, where naive evaluation can underflow.
- Gamma_ct (pgamma_ct, rgamma_ct, ctrgamma): Restricted Gamma distribution. Used in rGamma_reg and in dispersion sampling. Provides numerically stable evaluation and sampling under truncation.
- InvGamma_ct (pinvgamma_ct, qinvgamma_ct, rinvgamma_ct): Restricted inverse-Gamma on the dispersion scale. Used when dispersion is bounded for envelope construction.
These functions support the envelope construction and accept-reject steps; see their help pages for details.
For help on individual functions, use ?EnvelopeBuild, ?EnvelopeSize, ?EnvelopeOrchestrator, etc.