You have a problem, \(\mathcal{P}\), whose perfect heuristic \(h^*\) you wish to estimate.
You define a simpler problem, \(\mathcal{P}^{\prime}\), whose perfect heuristic \(h^{\prime *}\) can be used to estimate \(h^*\).
You define a transformation, \(r\), that simplifies instances of \(\mathcal{P}\) into instances of \(\mathcal{P}^{\prime}\).
Given \(\Pi \in \mathcal{P}\), you estimate heuristic \(h^*(\Pi)\) by computing heuristic \(h^{\prime *}(r(\Pi))\).
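The four steps above can be sketched in code. Everything here is a hypothetical placeholder: `r`, `h_prime_star`, and the dict-based instance encoding are illustrative assumptions, not part of any particular planner.

```python
def r(instance):
    # Step 2/3 (assumed simplification): drop the "constraints" field,
    # leaving an instance of the simpler problem class P'.
    simplified = dict(instance)
    simplified["constraints"] = []
    return simplified

def h_prime_star(instance):
    # Assumed perfect heuristic for the simpler class: with no constraints,
    # pretend each open goal costs exactly one action.
    return len(instance["open_goals"])

def h_R(instance):
    # Step 4: estimate h*(Pi) by computing h'*(r(Pi)).
    return h_prime_star(r(instance))

pi = {"open_goals": ["at(B)", "v(B)"], "constraints": ["roads only"]}
print(h_R(pi))  # -> 2
```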
Relaxation
Relaxation means simplifying a problem, and taking the solution cost of the simpler problem as the heuristic estimate for the solution cost of the actual problem.
(Figure: table of straight-line distances from Bucharest.)
How do we derive straight-line distances in route-finding by relaxation?:
A perfect heuristic \(h^*\) for \(\mathcal{P}\) (the sliding-tile puzzle). Actions: a tile can move from square \(X\) to square \(Y\) if \(X\) is adjacent to \(Y\) and \(Y\) is blank.
How do we derive a heuristic which specifies Manhattan (city-block) distance?:
How do we derive a heuristic which counts the number of misplaced tiles?:
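Both questions are answered by relaxing the movement rule. A minimal sketch for the 8-puzzle, under an assumed encoding (states are tuples of length 9, 0 denotes the blank, and the goal places tile \(i\) at index \(i\)):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def misplaced_tiles(state):
    # Relaxation: a tile may move to any square in one step.
    # Optimal cost of the relaxed problem = number of tiles (blank
    # excluded) not already on their goal square.
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    # Relaxation: a tile may move to any adjacent square, occupied or not.
    # Optimal cost = sum of city-block distances to each tile's goal square.
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = GOAL.index(t)  # goal position of tile t
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total

s = (8, 1, 2, 3, 4, 5, 6, 7, 0)  # tile 8 in the top-left corner
print(misplaced_tiles(s), manhattan(s))  # -> 1 4
```

Note that the more generous relaxation (teleporting tiles) gives the weaker estimate: \(1 \le 4\) here, both below the true cost.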

Propositions \(P\): \(\mathit{at}(x)\) for \(x \in \{\mathit{Sy}, \mathit{Ad}, \mathit{Br}, \mathit{Pe}, \mathit{Da}\}\); \(\mathit{v}(x)\) for \(x \in \{\mathit{Sy}, \mathit{Ad}, \mathit{Br}, \mathit{Pe}, \mathit{Da}\}\).
Actions \(a \in A\): \(\mathit{drive}(x,y)\) where \(x,y\) have a road; \(\mathit{pre}_a = \{\mathit{at}(x)\}\), \(\mathit{add}_a = \{\mathit{at}(y), \mathit{v}(y)\}\), \(\mathit{del}_a = \{\mathit{at}(x)\}\).
Initial state \(I\): \(\mathit{at}(\mathit{Sy}), \mathit{v}(\mathit{Sy})\).
Goal \(G\): \(\mathit{at}(\mathit{Sy}), \mathit{v}(x)\) for all \(x\).
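The task above can be written down directly as a STRIPS encoding. The dict representation is an assumed convention, and the road list is made up for illustration (the actual road map is not given here):

```python
CITIES = ["Sy", "Ad", "Br", "Pe", "Da"]
ROADS = [("Sy", "Br"), ("Sy", "Ad"), ("Ad", "Pe"), ("Ad", "Da")]  # assumed map

def drive(x, y):
    # drive(x, y): pre = {at(x)}, add = {at(y), v(y)}, del = {at(x)}
    return {"name": f"drive({x},{y})",
            "pre": {f"at({x})"},
            "add": {f"at({y})", f"v({y})"},
            "del": {f"at({x})"}}

ACTIONS = [drive(x, y) for x, y in ROADS] + [drive(y, x) for x, y in ROADS]
INIT = {"at(Sy)", "v(Sy)"}
GOAL = {"at(Sy)"} | {f"v({c})" for c in CITIES}

def apply_action(state, a):
    # Standard STRIPS semantics: check precondition, remove deletes, add adds.
    assert a["pre"] <= state, "precondition not satisfied"
    return (state - a["del"]) | a["add"]

s = apply_action(INIT, drive("Sy", "Ad"))
print(sorted(s))  # now at Adelaide, with Sydney and Adelaide visited
```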
Let’s act as if we can achieve each goal directly:
Problem \(\mathcal{P}\): All STRIPS planning tasks; Simpler problem \(\mathcal{P}^{\prime}\): All STRIPS planning tasks with empty preconditions and deletes; Perfect heuristic \(h^{\prime *}\) for \(\mathcal{P}^{\prime}\): Optimal plan cost \((= h^*)\).
Transformation \(r\) ?:
Heuristic value here?:
Notes:
Optimal STRIPS planning with empty preconditions and deletes is still \(\mathit{NP}\)-hard! (Reduction from MINIMUM COVER, of goal set by add lists.)
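A tiny brute-force sketch of this correspondence: with empty preconditions and deletes, the optimal plan cost is exactly the size of a minimum cover of the open goals by the actions' add lists. The goal facts and add lists below are made up for illustration; brute force is only feasible on tiny instances, which is the point of the NP-hardness remark.

```python
from itertools import combinations

def optimal_cover(goals, add_lists):
    # Smallest number of add lists whose union covers all goals,
    # i.e. the optimal plan cost with empty preconditions and deletes.
    for k in range(len(add_lists) + 1):
        for combo in combinations(add_lists, k):
            if goals <= set().union(*combo):
                return k
    return None  # goals not coverable at all

goals = {"g1", "g2", "g3"}
add_lists = [{"g1"}, {"g2"}, {"g1", "g2"}, {"g3"}]
print(optimal_cover(goals, add_lists))  # -> 2: pick {g1, g2} and {g3}
```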
Goal counting approximates the perfect heuristic \(h^{\prime *}\) for \(\mathcal{P}^{\prime}\).
Definition: Relaxation
Let \(\mathcal{P}\) be a class of planning problems and \(h^*(\Pi)\) denote the optimal plan cost of problem \(\Pi \in \mathcal{P}\).
A relaxation is a triple \(\mathcal{R} = (\mathcal{P}', r, h'^*)\) where \(\mathcal{P}'\) is a class of problems, \(r\) maps each \(\Pi \in \mathcal{P}\) to a problem \(r(\Pi) \in \mathcal{P}'\), and \(h'^*\) satisfies, for all \(\Pi \in \mathcal{P}\), \({\color{blue}h^{\prime *}(r(\Pi)) \le h^*(\Pi)}\). The heuristic induced by \(\mathcal{R}\) is \({\color{blue}\,h^{\mathcal R}(\Pi) = h^{\prime *}(r(\Pi))}\).
The relaxation is:
Native if \(\mathcal P' \subseteq \mathcal P\) and \(h^{\prime *}(\Pi') = h^*(\Pi')\) for all \(\Pi' \in \mathcal P'\);
Efficiently constructible if there exists a polynomial-time algorithm that, given \(\Pi \in \mathcal{P}\), computes \(r(\Pi)\);
Efficiently computable if there exists a polynomial-time algorithm that, given \(\Pi' \in \mathcal{P}^{\prime}\), computes \(h^{\prime *}(\Pi')\).
The steps involved:
You have a problem, \(\mathcal{P}\), whose perfect heuristic \(h^*\) you wish to estimate.
You define a simpler problem, \(\mathcal{P}^{\prime}\), whose perfect heuristic \(h^{\prime *}\) can be used to admissibly estimate \(h^*\).
You define a transformation, \(r\), from \(\mathcal{P}\) into \(\mathcal{P}^{\prime}\).
Given \(\Pi \in \mathcal{P}\), you estimate \(h^*(\Pi)\) by \(h^{\prime *}(r(\Pi))\).
Hence goal counting just approximates \(h^{\prime *}\) by the number of currently false goals.
Problem \(\mathcal{P}\): Route finding
Simpler problem \(\mathcal{P}^{\prime}\):
Perfect heuristic \(h^{\prime *}\) for \(\mathcal{P}^{\prime}\):
Transformation \(r\)?:
Problem \(\mathcal{P}\): All STRIPS planning tasks.
Simpler problem \(\mathcal{P}^{\prime}\): All STRIPS planning tasks with empty preconditions and deletes
Perfect heuristic \(h^{\prime *}\) for \(\mathcal{P}^{\prime}\): Optimal plan cost \(= h^{*}\)
Transformation \(r\)?:
(Figure: table of straight-line distances from Bucharest.)
Assume relaxation \(\mathcal{R} = (\mathcal{P}^{\prime}, r, h^{\prime *})\): You are pretending to be a bird!
Native?:
Efficiently constructible?:
Efficiently computable?:
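The bird's-eye relaxation in code, with made-up city coordinates (the real map's coordinates are not given here):

```python
import math

# Assumed coordinate table for two illustrative cities.
COORDS = {"A": (0.0, 0.0), "B": (3.0, 4.0)}

def h_straight_line(city, goal):
    # Relaxed problem: ignore the roads and fly in a straight line.
    # Its optimal cost is just the Euclidean distance.
    (x1, y1), (x2, y2) = COORDS[city], COORDS[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(h_straight_line("A", "B"))  # -> 5.0
```

Both the transformation (look up coordinates) and the heuristic (one square root) are trivially polynomial, matching the "efficiently constructible/computable" answers one expects here.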
Relaxation \(\mathcal{R} = (\mathcal{P}^{\prime}, r, h^{\prime *})\): Use a more generous action rule to obtain Manhattan distance.
Native?:
Efficiently constructible?:
Efficiently computable?:
What if \(\mathcal{R}\) is not efficiently constructible?
Either (a) approximate \(r\), (b) design \(r\) so that it will typically be feasible, or (c) just live with it and hope for the best.
The vast majority of known relaxations (in planning) are efficiently constructible.
What if \(\mathcal{R}\) is not efficiently computable?
Either (a) approximate \(h^{\prime *}\), (b) design \(h^{\prime *}\) so that it will typically be feasible, or (c) just live with it and hope for the best.
Many known relaxations (in planning) are efficiently computable, some aren’t. The latter use (a); (b) and (c) are not used anywhere right now.

Propositions \(P\): \(\mathit{at}(x)\) for \(x \in \{\mathit{Sy}, \mathit{Ad}, \mathit{Br}, \mathit{Pe}, \mathit{Da}\}\); \(\mathit{v}(x)\) for \(x \in \{\mathit{Sy}, \mathit{Ad}, \mathit{Br}, \mathit{Pe}, \mathit{Da}\}\).
Actions \(a \in A\): \(\mathit{drive}(x,y)\) where \(x,y\) have a road; \(\mathit{pre}_a = \{\mathit{at}(x)\}\), \(\mathit{add}_a = \{\mathit{at}(y), \mathit{v}(y)\}\), \(\mathit{del}_a = \{\mathit{at}(x)\}\).
Initial state \(I\): \(\mathit{at}(\mathit{Sy}), \mathit{v}(\mathit{Sy})\).
Goal \(G\): \(\mathit{at}(\mathit{Sy}), \mathit{v}(x)\) for all \(x\).
Relaxation \(\mathcal{R} = (\mathcal{P}^{\prime}, r, h^{\prime *})\): Remove preconditions and deletes, then use \(h^*\).
Native?:
Efficiently constructible?:
Efficiently computable?:
What approach, (a), (b), or (c), do we take if \(\mathcal{R}\) is not efficiently constructible and/or computable?:
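The transformation \(r\) itself is a one-liner over the action set; the dict-based STRIPS encoding below is an assumed representation:

```python
def relax(actions):
    # r: empty out every precondition and delete list; add lists stay.
    return [{"name": a["name"], "pre": set(),
             "add": set(a["add"]), "del": set()}
            for a in actions]

drive_sy_ad = {"name": "drive(Sy,Ad)", "pre": {"at(Sy)"},
               "add": {"at(Ad)", "v(Ad)"}, "del": {"at(Sy)"}}
relaxed = relax([drive_sy_ad])
print(relaxed[0]["pre"], relaxed[0]["del"])  # -> set() set()
```

This makes \(r\) clearly efficiently constructible; the hard part, per the NP-hardness note earlier, is computing \(h^{\prime *}\) on the result.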
Using a relaxation \(\mathcal{R} = (\mathcal{P}^{\prime}, r,h^{\prime *})\) during search:
\(\Pi_s\): problem \(\Pi\) with the initial state replaced by \(s\)
This is the task of finding a plan for search state \(s\)
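A sketch of this usage: a greedy best-first search that evaluates the heuristic once per generated state, i.e. conceptually builds \(\Pi_s\) and computes \(h^{\prime *}(r(\Pi_s))\) for each \(s\). Here `successors` and `h` are hypothetical problem-specific callbacks; states must be hashable and comparable.

```python
import heapq

def greedy_best_first(init, is_goal, successors, h):
    # Frontier ordered purely by heuristic value h(s) = h'*(r(Pi_s)).
    frontier = [(h(init), init)]
    parent, seen = {init: None}, {init}
    while frontier:
        _, s = heapq.heappop(frontier)
        if is_goal(s):
            path = []
            while s is not None:  # reconstruct path via parent pointers
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                parent[t] = s
                heapq.heappush(frontier, (h(t), t))
    return None  # no plan exists from init

# Toy usage: states 0..5 on a line, goal is 5, h = remaining distance.
path = greedy_best_first(0, lambda s: s == 5,
                         lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 5],
                         lambda s: 5 - s)
print(path)  # -> [0, 1, 2, 3, 4, 5]
```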

Question
Say we have a robot with one gripper, two rooms \(A\) and \(B\), and \(n\) balls we must transport.
(A): No
(B): Yes, just drop the deletes
(C): Sure, every admissible \(h\) can be derived via a relaxation.
(D): I’d rather relax at the beach
Relaxation is a method to compute heuristic functions.
Given a problem \(\mathcal P\) we want to solve, we define a relaxed problem \(\mathcal P'\).
Relaxations can be native, efficiently constructible, and/or efficiently computable.
During search, the relaxation is used only inside heuristic computation for each state.
The goal-counting approximation, which is essentially \(h =\) “count the number of goals currently not true”, is a very uninformative heuristic function:
The range of heuristic values is small \((0 \dots |G|)\).
We can transform any planning task into an equivalent one where \(h(s) = 1\) for all non-goal states \(s\).
How?:
Ignores almost all structure: heuristic value does not depend on the actions at all!
Is \(h\) safe/goal-aware/admissible/consistent?:
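Two sketches for the bullets above, with made-up tasks: goal counting itself, and the goal-collapsing transformation that forces \(h(s) = 1\) on every non-goal state (introduce a fresh goal fact \(g\) and one new action with precondition \(G\) that adds it). The comment at the end also illustrates why goal counting is goal-aware but not admissible in general.

```python
def goal_count(state, goal):
    # h(s) = number of goal facts currently false in s.
    return len(goal - set(state))

def collapse_goal(task):
    # Replace goal set G by the single fresh fact "g", achievable via one
    # new action with precondition G. Afterwards every non-goal state has
    # exactly one false goal fact, so goal counting returns 1 everywhere.
    # `task` is an assumed dict encoding with "actions"/"init"/"goal".
    finish = {"name": "finish", "pre": set(task["goal"]),
              "add": {"g"}, "del": set()}
    return {"actions": task["actions"] + [finish],
            "init": set(task["init"]), "goal": {"g"}}

task = {"actions": [], "init": {"a"}, "goal": {"a", "b"}}
print(goal_count(task["init"], task["goal"]))           # -> 1
print(goal_count(task["init"], collapse_goal(task)["goal"]))  # -> 1

# Goal-aware: h = 0 exactly on goal states. Not admissible: if one action
# adds both "a" and "b" from the empty state, h* = 1 but goal_count = 2.
print(goal_count(set(), {"a", "b"}))  # -> 2
```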
We will see how to compute better heuristic functions in the next Module.