---
title: "The causaldef Methodology: Theory and Practice"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{The causaldef Methodology: Theory and Practice}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r, include = FALSE}
eval_surv <- getRversion() >= "4.0.0" && requireNamespace("survival", quietly = TRUE)
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.width = 6, 
  fig.height = 4,
  warning = FALSE, 
  message = FALSE
)
```

# Introduction: The Safety Manifesto

**How do we calculate the price of acting on imperfect data?**

Causal inference is typically framed as a geometry problem: "How close is our observational experiment to the target interventional experiment?" While true, this abstraction often hides the stakes.

The `causaldef` package is built on a different premise: **If you rely on observational data, you are accepting a safety risk. You need to calculate the cost of being wrong.**

### Flash Forward: The Destination

Before we dive into the math, here is where we are going. This is the **Safety Floor**: the minimum unavoidable risk you accept when you choose not to run a Randomized Controlled Trial (RCT).

```{r flash_forward, echo=FALSE, results='hide', fig.width=6, fig.height=4}
library(causaldef)
library(ggplot2)

set.seed(2026)
n <- 500
W <- rnorm(n)
prob_A <- plogis(0.5 * W)
A <- rbinom(n, 1, prob_A)
Y <- 1 + 2 * A + 1 * W + rnorm(n)
Y_nc <- 0.5 * W + rnorm(n)
df <- data.frame(W = W, A = A, Y = Y, Y_nc = Y_nc)

spec <- causal_spec(
  data = df,
  treatment = "A",
  outcome = "Y",
  covariates = "W",
  negative_control = "Y_nc"
)

results <- estimate_deficiency(
  spec,
  methods = c("unadjusted", "iptw", "aipw"),
  n_boot = 50
)

bounds <- policy_regret_bound(results, utility_range = c(0, 10), method = "aipw")
plot(bounds, type = "safety_curve")
```

Figure 1: Regret Bounds vs Deficiency. The transfer penalty and minimax floor scale linearly in $\delta$:

$$ \text{Regret}_{do}(\pi) \leq \text{Regret}_{obs}(\pi) + M\delta $$

*   **Regret**: The cost of making a suboptimal decision based on your data.
*   **$\text{Regret}_{obs}$**: The regret you can evaluate under the observational regime.
*   **$M\delta$**: The transfer penalty (additive worst-case regret inflation term).
    The minimax safety floor is $(M/2)\delta$.

This is not just about bias—this is calculating the safety price for your decision. You are about to learn how to calculate, bound, and minimize this deficiency ($\delta$).
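Both quantities are plain arithmetic once the stakes $M$ and the deficiency $\delta$ are known. A minimal base-R sketch (the helper names are ours, not `causaldef` functions):

```{r penalty_sketch}
# Worst-case regret inflation and the minimax floor, as functions of stakes and deficiency
transfer_penalty <- function(M, delta) M * delta
safety_floor     <- function(M, delta) (M / 2) * delta

M     <- 10e6  # stakes: $10M gap between best and worst outcome
delta <- 0.01  # deficiency: distance from the RCT

transfer_penalty(M, delta)  # $100,000 of worst-case regret inflation
safety_floor(M, delta)      # $50,000 minimax floor
```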

### The Core Theory: Markov Kernels in Plain English

Formally, we use **Le Cam's theory of statistical experiments** (Akdemir 2026). We seek a Markov Kernel $K$ such that:

$$ P_{do} \approx K P_{obs} $$

**In plain English:**

*   $P_{obs}$ is the messy data you **have**.
*   $P_{do}$ is the clean RCT data you **want**.
*   $K$ (the kernel) is the **transformation** (like matching or weighting) that tries to turn what you have into what you want.
*   $\delta$ (deficiency) is **how much you failed**.

If you get $\delta$ wrong, you need to know how much regret you're signing up for.
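For finite sample spaces this is concrete: the two experiments are probability vectors, $K$ is a column-stochastic matrix, and the residual mismatch after applying $K$ can be measured in total variation (TV) distance. A toy illustration (not the package's estimator; the deficiency proper minimizes this mismatch over all kernels $K$):

```{r kernel_toy}
P_obs <- c(0.5, 0.3, 0.2)  # the experiment you have
P_do  <- c(0.4, 0.4, 0.2)  # the experiment you want

# A Markov kernel: each column is a conditional distribution (columns sum to 1)
K <- matrix(c(0.8, 0.2, 0.0,
              0.1, 0.8, 0.1,
              0.0, 0.1, 0.9),
            nrow = 3)

P_mapped <- as.vector(K %*% P_obs)  # what the kernel turns P_obs into

# TV distance = half the L1 distance; this is "how much you failed"
tv <- 0.5 * sum(abs(P_mapped - P_do))
tv
```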

---

# Theory Overview: The Car and Destination Analogy

To bridge the gap between Le Cam's abstract theory and your practical decision, use this analogy:

| Concept | Analogy |
|---------|---------|
| Observational experiment | Your car (parked where it is) |
| Interventional experiment (RCT) | Your destination (where you want to be) |
| Deficiency ($\delta$) | **Physical distance** your car is parked from the destination |
| IPW, AIPW estimators | **Drivers** trying to drive the car closer to the destination |
| Safety floor | Risk you accept by **walking** those remaining miles through unknown territory |

When we calculate $\delta$, we are simply asking: **"How far away do I still have to walk?"**

---

# Workflow

We will demonstrate the package workflow using a simulated medical decision problem.

```{r setup}
library(causaldef)
library(ggplot2)

set.seed(2026)

# DAG Helper (using ggplot2 only)
plot_dag <- function(coords, edges, title = NULL) {
  edges_df <- merge(edges, coords, by.x = "from", by.y = "name")
  colnames(edges_df)[c(3,4)] <- c("x_start", "y_start")
  edges_df <- merge(edges_df, coords, by.x = "to", by.y = "name")
  colnames(edges_df)[c(5,6)] <- c("x_end", "y_end")
  
  ggplot2::ggplot(coords, ggplot2::aes(x = x, y = y)) +
    ggplot2::geom_segment(data = edges_df, ggplot2::aes(x = x_start, y = y_start, xend = x_end, yend = y_end),
                 arrow = ggplot2::arrow(length = ggplot2::unit(0.3, "cm"), type = "closed"), 
                 color = "gray40", linewidth = 1, alpha = 0.8) +
    ggplot2::geom_point(size = 14, color = "white", fill = "#4682B4", shape = 21, stroke = 1.5) +
    ggplot2::geom_text(ggplot2::aes(label = name), fontface = "bold", size = 4, color = "white") +
    ggplot2::ggtitle(title) + 
    ggplot2::theme_void(base_size = 14) +
    ggplot2::theme(plot.title = ggplot2::element_text(hjust = 0.5, face = "bold", margin = ggplot2::margin(b = 10))) +
    ggplot2::coord_fixed()
}
```

## Step 1: Data Generation

Consider a scenario where we want to estimate the effect of a new treatment `A` on patient recovery `Y`, but assignment is confounded by severity `W`.

```{r data_gen}
# Simulate data
n <- 500
W <- rnorm(n) # Severity (Confounder)

# Treatment assignment biased by severity
prob_A <- plogis(0.5 * W) 
A <- rbinom(n, 1, prob_A)

# Outcome: Treatment effect = 2, Severity effect = 1
Y <- 1 + 2 * A + 1 * W + rnorm(n)

# Negative Control: Affected by W but not A
Y_nc <- 0.5 * W + rnorm(n)

df <- data.frame(W = W, A = A, Y = Y, Y_nc = Y_nc)

# Visualize the Causal Structure
coords <- data.frame(
  name = c("W", "A", "Y", "Y_nc"),
  x = c(0, -1, 1, 0.5),
  y = c(1, 0, 0, 1) # Y_nc placed beside W, off the A -> Y path
)
edges <- data.frame(
  from = c("W", "W", "W", "A"),
  to =   c("A", "Y", "Y_nc", "Y")
)
plot_dag(coords, edges, title = "Scenario: Confounding + Negative Control")
```

## Step 2: Specification

We define the causal problem using `causal_spec`.

```{r spec}
spec <- causal_spec(
  data = df,
  treatment = "A",
  outcome = "Y",
  covariates = "W",
  negative_control = "Y_nc"
)
print(spec)
```

## Step 3: Deficiency Estimation

We wish to know whether we can "transport" our observational experiment to the ideal randomized experiment. We compare multiple adjustment strategies (our "drivers").

```{r deficiency}
# Compare Baseline (Unadjusted) vs IPTW vs AIPW
results <- estimate_deficiency(
  spec,
  methods = c("unadjusted", "iptw", "aipw"),
  n_boot = 50 
)

print(results)
plot(results, type = "bar")
```

### Interpretation of Deficiency Results: The Distance to Walk

We evaluate our drivers based on how close they got us to the RCT:

*   **Unadjusted** ($\delta \approx 0.26$): Only drove us part of the way, leaving us with a large distributional mismatch (TV distance $\approx$ 0.26).
*   **IPTW / AIPW** ($\delta \approx 0.005$): Drove us **98% of the way** to the RCT; the remaining TV distance is negligible.
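To build intuition for what those TV numbers measure, here is a hedged sketch that compares the treated group's covariate distribution with the full population's, before and after inverse-probability weighting. The binning helper is ours, and this is not how `estimate_deficiency()` computes $\delta$:

```{r tv_intuition}
set.seed(2026)
n <- 5000
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * W))

# Binned TV distance between two (optionally weighted) samples
tv_binned <- function(x1, x2, w1 = rep(1, length(x1)), w2 = rep(1, length(x2)),
                      breaks = seq(-4, 4, by = 0.25)) {
  p1 <- tapply(w1, cut(x1, breaks), sum, default = 0)
  p2 <- tapply(w2, cut(x2, breaks), sum, default = 0)
  0.5 * sum(abs(p1 / sum(p1) - p2 / sum(p2)))
}

ps <- glm(A ~ W, family = binomial)$fitted.values  # estimated propensity scores

tv_before <- tv_binned(W[A == 1], W)                       # raw mismatch
tv_after  <- tv_binned(W[A == 1], W, w1 = 1 / ps[A == 1])  # after IPW
c(before = tv_before, after = tv_after)  # weighting shrinks the distance
```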

We use a Traffic Light system to interpret $\delta$:

| Zone | $\delta$ Value | Interpretation | Color |
|------|---------|----------------|-------|
| 🟢 Green | < 0.05 | Excellent—very close to RCT quality | Green |
| 🟡 Yellow | 0.05 – 0.15 | Good approximation, but some mismatch | Yellow/Amber |
| 🔴 Red | > 0.15 | Adjustment insufficient (TV distance > 0.15) | Red |
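The traffic-light cutoffs translate directly into a small helper (ours, not exported by `causaldef`):

```{r zone_helper}
# Classify a deficiency estimate into the traffic-light zones above
delta_zone <- function(delta) {
  ifelse(delta < 0.05, "green",
         ifelse(delta <= 0.15, "yellow", "red"))
}

delta_zone(c(0.005, 0.10, 0.26))  # "green" "yellow" "red"
```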

## Step 4: Diagnostics (The "Negative Control Trap")

**Warning**: Do not fall into the trap of thinking a high p-value means "safe".

We use the Negative Control outcome `Y_nc` to test our assumptions. `Y_nc` shares confounders with `Y` but is causally unrelated to `A`.

```{r diagnostics}
diag_res <- nc_diagnostic(spec, method = "iptw")
print(diag_res)
```

### Critical Interpretation: The Deductible

*   **delta_bound**: This is your **Noise Floor** (or minimum deductible). Even with a high p-value, you cannot be more precise than this. If your estimated effect is smaller than this noise floor, you have a problem.
*   **kappa ($\kappa$)**: This is the **Skepticism Dial**. It is the multiplier linking the negative control to the actual outcome. $\kappa=1$ assumes confounding strength is identical.
*   **P-value**: A high p-value simply means **"We failed to catch an error"**, not that an error isn't there.

**Interpretation**: "This is your noise floor. Your effect estimate must exceed this to be meaningful."
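To make the deductible concrete, here is one way a $\kappa$-scaled noise floor could be computed from a negative-control contrast. This is an illustrative sketch of the idea, not `nc_diagnostic()`'s implementation:

```{r noise_floor_sketch}
set.seed(2026)
n <- 500
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * W))
Y_nc <- 0.5 * W + rnorm(n)  # moved by the confounder W, but not by A

# Apparent "effect" of A on the negative control: pure confounding signal
nc_gap <- mean(Y_nc[A == 1]) - mean(Y_nc[A == 0])

# kappa rescales confounding seen in Y_nc to confounding expected in Y
kappa <- 1  # kappa = 1: assume identical confounding strength
delta_bound <- kappa * abs(nc_gap)
delta_bound  # the noise floor: effects below this are indistinguishable from bias
```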

## Step 5: Effect Estimation

Having validated our strategy (checked the car's position), we estimate the causal effect.

```{r estimation}
# Estimate ATE
effect <- estimate_effect(results, target_method = "iptw")
print(effect)
```

## Step 6: Policy Regret Bounds (Transfer Penalty and Safety Floor)

This is the most critical step. We convert $\delta$ into decision-theoretic quantities.

$$ \text{Transfer Penalty} = M\delta,\qquad \text{Minimax Safety Floor} = (M/2)\delta $$

Where:

*   $M$ = Stakes (difference between best and worst financial/clinical outcome).
*   $\delta$ = Le Cam deficiency (distance to RCT).

```{r regret}
# Utility range: [0, 10] outcome units
bounds <- policy_regret_bound(results, utility_range = c(0, 10), method = "aipw")
print(bounds)
plot(bounds, type = "safety_curve")
```

### Decision Framework

*   **Example**: If Stakes ($M$) = $10M and $\delta$ = 0.01, then:
    - Transfer penalty = $100,000
    - Minimax safety floor = $50,000
*   **Decision**:
    *   If expected profit ($5M) >> transfer penalty ($100K) $\rightarrow$ **Signal is Clear**.
    *   If expected profit ($120K) $\approx$ transfer penalty ($100K) $\rightarrow$ **That's Gambling**.
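The same comparison as a sketch; the helper and the 5x clearance margin are our choices, not package defaults:

```{r decision_sketch}
# Go/no-go: is expected profit comfortably above the transfer penalty?
decide <- function(expected_profit, M, delta, margin = 5) {
  penalty <- M * delta
  if (expected_profit >= margin * penalty) "Signal is Clear" else "That's Gambling"
}

decide(expected_profit = 5e6,   M = 10e6, delta = 0.01)  # "Signal is Clear"
decide(expected_profit = 1.2e5, M = 10e6, delta = 0.01)  # "That's Gambling"
```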

## Step 7: Sensitivity Analysis & Decision

Finally, if we cannot rule out unobserved confounding, we map the **Confounding Frontier**. We must overlay our Safety Floor to see if we are in "Unsafe Territory".

```{r frontier, fig.width=6, fig.height=5}
# What if there were an unobserved confounder U?
frontier <- confounding_frontier(spec, grid_size = 30)

# Calculate the delta corresponding to an acceptable transfer-penalty limit
# Suppose our max acceptable transfer penalty is 0.2 units, and M=10
# Transfer penalty = M * delta  =>  delta = transfer penalty / M
acceptable_transfer_penalty <- 0.2
M <- 10
delta_threshold <- acceptable_transfer_penalty / M

plot(frontier, type = "heatmap", threshold = c(delta_threshold)) 
```

### Go/No-Go Decision

The contour line marks the boundary of an acceptable **transfer penalty**.

**Ask yourself:**
> "Does plausible unobserved confounding push $\delta$ beyond that contour, into unsafe territory?"

*   If **Yes**: The analysis has failed. **Do not publish.** Run the RCT.
*   If **No**: The result is reliable within these bounds.

---

# Advanced: Survival Analysis

`causaldef` supports survival specifications and proxy-based diagnostics for time-to-event settings, with effect estimation available on compatible survival runtimes.

The survival examples below are evaluated only when a compatible `survival`
runtime is available. In the current support matrix, that means `R >= 4.0`.

```{r survival, eval = eval_surv}
data("hct_outcomes")

# Event: Relapse (1) vs Censored (0)
hct_outcomes$relapse <- ifelse(hct_outcomes$event_status == "Relapse", 1, 0)

spec_surv <- causal_spec_survival(
  data = hct_outcomes,
  treatment = "conditioning_intensity",
  time = "time_to_event",
  event = "relapse",
  covariates = c("age", "disease_status", "kps"),
  estimand = "RMST",
  horizon = 24
)

# Estimate deficiency (Distance between Obs and Target Survival Expts)
surv_results <- estimate_deficiency(spec_surv, methods = c("unadjusted", "iptw"))

# Estimate Effect (RMST Difference + Survival Curves)
surv_effect <- estimate_effect(surv_results, target_method = "iptw")

print(surv_effect)
plot(surv_effect)
```

# Conclusion

The `causaldef` package shifts the focus from "p-values" to **information distance** ($\delta$) and **regret**. By quantifying how far our data is from the ideal experiment, we can calculate the **price of safety** and make rigorous decisions.
