---
title: "Classical Benchmarks: Lalonde and RHC"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Classical Benchmarks: Lalonde and RHC}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

This vignette demonstrates the **causaldef** framework on two famous datasets in causal inference:

1.  **Lalonde's Job Training Data**: Validating deficiency estimates against an experimental benchmark.
2.  **Right Heart Catheterization (RHC)**: Handling high-dimensional confounding and quantifying uncertainty.

```{r setup}
library(causaldef)
library(stats)

# Backport of deparse1() for R versions older than 4.0.0
if (!exists("deparse1", envir = baseenv())) {
  deparse1 <- function(expr, collapse = " ", width.cutoff = 500L, ...) {
    paste(deparse(expr, width.cutoff, ...), collapse = collapse)
  }
}
```

## 1. Lalonde's NSW Benchmark

The Lalonde dataset allows us to verify our methods because it contains both an experimental control group and observational comparison groups.

We define the "True" experiment ($E_{target}$) using the randomized data, and the "Observational" experiment ($E_{obs}$) using the CPS controls.

### Data Preparation
```{r lalonde-data}
data("nsw_benchmark")
head(nsw_benchmark)

# 1. The Experimental Benchmark (Gold Standard)
nsw_exp <- subset(nsw_benchmark, sample_id %in% c("nsw_treated", "nsw_control"))

# True Experimental Estimate (Unadjusted difference in means is valid due to randomization)
true_att <- mean(nsw_exp$re78[nsw_exp$treat == 1]) - mean(nsw_exp$re78[nsw_exp$treat == 0])
cat("True Experimental ATT:", round(true_att, 2), "\n")

# 2. The Observational Challenge (NSW Treated + CPS Control)
nsw_obs <- subset(nsw_benchmark, sample_id %in% c("nsw_treated", "cps_control"))

# Naive Observational Estimate (Unadjusted)
naive_est <- mean(nsw_obs$re78[nsw_obs$treat == 1]) - mean(nsw_obs$re78[nsw_obs$treat == 0])
cat("Naive Observational Estimate:", round(naive_est, 2), "\n")
cat("Bias:", round(naive_est - true_att, 2), "\n")
```
The large bias (a negative estimate where the true effect is positive) confirms the difficulty of this problem.

### Deficiency Estimation
We now calculate a deficiency proxy $\delta(E_{obs}, E_{target})$ using the available covariates.
```{r lalonde-def}
covariates <- c("age", "education", "black", "hispanic", "married", "nodegree", "re74", "re75")

# Specification
spec_lalonde <- causal_spec(
  data = nsw_obs,
  treatment = "treat",
  outcome = "re78",
  covariates = covariates
)

# Estimate deficiency using Propensity Score Weighting (IPTW)
res_lalonde <- estimate_deficiency(
  spec_lalonde,
  methods = c("unadjusted", "iptw"),
  n_boot = 50 # Kept low for vignette speed
)

print(res_lalonde)
plot(res_lalonde)
```

The proxy $\delta$ summarizes the distance between the reweighted observational distribution and the target randomized experiment. A lower $\delta$ indicates that the reweighting more closely reconstructed the experimental conditions under the PS-TV diagnostic.
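
The package's internal diagnostic is not reproduced here, but the idea can be sketched by hand: fit a propensity model, form ATT weights, and compare the treated and reweighted-control propensity-score distributions with a total-variation distance. The logistic model and 0.05-wide binning below are illustrative assumptions, not **causaldef** internals.

```{r ps-tv-sketch}
# Hand-rolled PS-TV proxy (illustration only, not the package's implementation)
ps_fit <- glm(treat ~ age + education + black + hispanic + married +
                nodegree + re74 + re75,
              data = nsw_obs, family = binomial())
ps <- fitted(ps_fit)
w  <- ifelse(nsw_obs$treat == 1, 1, ps / (1 - ps))  # ATT odds weights

# Discretize the propensity score and compare the two (weighted) distributions
bins <- cut(ps, breaks = seq(0, 1, by = 0.05), include.lowest = TRUE)
idx1 <- nsw_obs$treat == 1
p1 <- prop.table(table(bins[idx1]))        # treated PS distribution
p0 <- tapply(w[!idx1], bins[!idx1], sum)   # weighted control PS distribution
p0[is.na(p0)] <- 0
p0 <- p0 / sum(p0)

tv_proxy <- 0.5 * sum(abs(as.numeric(p1) - as.numeric(p0)))
cat("PS-TV proxy:", round(tv_proxy, 3), "\n")
```

A small value of this proxy means the weighted control group occupies the same propensity-score bins, in the same proportions, as the treated group.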

## 2. Right Heart Catheterization (RHC)

This dataset involves high-dimensional confounding (50+ covariates). We use it to demonstrate the **Confounding Frontier** and **Policy Regret Bounds**.

### Data Setup
```{r rhc-data}
data("rhc")

# Convert treatment to binary (0/1) for causaldef.
# In the standard RHC dataset, 'swang1' is a factor with levels "No RHC" / "RHC".
if (is.factor(rhc$swang1)) {
  rhc$treat_bin <- as.integer(rhc$swang1 == "RHC")
} else {
  rhc$treat_bin <- rhc$swang1
}

# Outcome: death within 30 days ('dth30'), coded 1 = died, 0 = survived.
if (is.factor(rhc$dth30)) {
  rhc$outcome_bin <- as.integer(rhc$dth30 == "Yes")  # assumes "No"/"Yes" coding
} else {
  rhc$outcome_bin <- rhc$dth30
}

# Select a subset of covariates for demonstration (to keep it fast);
# a real analysis would use all 50+.
rhc_covars <- c("age", "sex", "race", "aps1", "cat1")  # 'aps1' is the APACHE III score

spec_rhc <- causal_spec(
  data = rhc,
  treatment = "treat_bin",
  outcome = "outcome_bin",
  covariates = rhc_covars
)
```

### Quantifying the Information Gap
```{r rhc-def}
res_rhc <- estimate_deficiency(spec_rhc, methods = "iptw", n_boot = 0) # bootstrap skipped for speed
print(res_rhc)
```

### Policy Regret Bounds
The "Safety Floor" tells us the minimum regret we risk by making a decision based on this imperfect observational evidence.
```{r rhc-policy}
# Utility scale: survival has utility 1, death has utility 0.
# Since the outcome is coded death (1) vs. survival (0), minimizing the
# expected outcome is equivalent to maximizing utility over the range [0, 1].

bounds_rhc <- policy_regret_bound(res_rhc, utility_range = c(0, 1), method = "iptw")
print(bounds_rhc)
```
The result typically shows a low safety floor (e.g., < 0.05), suggesting that the observational findings are actionable unless the decision hinges on a very small utility difference.
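
As an arithmetic illustration of how such a floor arises (the formula and the value of `delta_hat` here are assumptions, not the package's exact computation), a Le Cam-style argument bounds the extra regret by the deficiency times the utility range:

```{r regret-sketch}
# Hypothetical numbers: delta_hat is an assumed deficiency estimate.
delta_hat <- 0.04
u_range   <- c(0, 1)

# Illustrative Le Cam-style bound: extra regret <= delta * (u_max - u_min)
regret_floor <- delta_hat * diff(u_range)
cat("Illustrative regret floor:", regret_floor, "\n")
```

On a [0, 1] utility scale the bound is just the deficiency itself, which is why small $\delta$ translates directly into a small safety floor.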

### Confounding Frontier
Sensitivity analysis: if we missed a confounder $U$ with association $\alpha$ with treatment and $\gamma$ with outcome, how much would our deficiency increase?
```{r rhc-frontier, fig.height=5, fig.width=6}
frontier <- confounding_frontier(spec_rhc, grid_size = 30)
plot(frontier)
```
The plot shows the "safe" region (low confounding) versus "unsafe" region. If we suspect unmeasured confounders (like specific genetic factors) have strength $|\alpha \gamma| > 0.1$, the yellow/red zones indicate high deficiency.
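
The frontier can be mimicked by hand with a coarse grid. The bilinear proxy $|\alpha \gamma|$ below is an illustrative assumption about how the two associations combine, not the package's actual sensitivity model:

```{r frontier-sketch, fig.height=5, fig.width=6}
# Hand-rolled sensitivity grid (illustration only)
alpha <- seq(-0.5, 0.5, length.out = 30)
gamma <- seq(-0.5, 0.5, length.out = 30)
added_def <- outer(alpha, gamma, function(a, g) abs(a * g))

image(alpha, gamma, added_def,
      xlab = expression(alpha), ylab = expression(gamma),
      main = "Proxy for added deficiency: |alpha * gamma|")
contour(alpha, gamma, added_def, levels = 0.1, add = TRUE, lwd = 2)
```

Points inside the contour ($|\alpha \gamma| < 0.1$) correspond to the "safe" region described above.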
