
Reasoning with equalities

There are multiple notions of equality in Coq:

  • Leibniz equality is the standard way to define equality in Coq and the Calculus of Inductive Constructions. It is defined in terms of a binary relation, i.e. a binary function that returns a Prop. The standard library defines eq like this:

    Inductive eq {A : Type} (x : A) : A -> Prop := eq_refl : eq x x.

    The notation x = y represents the term eq x y. The notation x = y :> A gives the type of x and y explicitly.

  • Setoid equality defines equality in terms of an equivalence relation. A setoid is a set equipped with an equivalence relation (see https://en.wikipedia.org/wiki/Setoid). Setoids are needed to form a quotient set or quotient (see https://en.wikipedia.org/wiki/Equivalence_class). Coq has no specific support for quotients, so users generally work with setoids rather than constructing quotients.

  • Definitional equality is equality based on the conversion rules, which Coq can determine automatically. When two terms are definitionally equal, Coq knows it can replace one with the other, for example with change X with Y, among many other advantages. "Convertible" is another way of saying that two terms are definitionally equal. A short example follows this list.
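
Example

A small illustrative script (the goal is arbitrary): the proof term is just eq_refl, because 1 + 1 and 2 are convertible, and change uses that same convertibility without adding anything to the proof term.

Check (eq_refl 2 : 1 + 1 = 2).   (* accepted: 1 + 1 and 2 are definitionally equal *)

Goal 1 + 1 = 2.
Proof.
  change (1 + 1) with 2.         (* definitional equality: no proof step is recorded *)
  reflexivity.
Qed.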

Tactics for dealing with equality of inductive types such as injection and inversion are described here.

Tactics for simple equalities

Tactic reflexivity

For a goal with the form forall open_binders ,? t = u, the tactic verifies that t and u are definitionally equal and, if so, solves the goal by applying eq_refl. Otherwise it fails.

The tactic may also be applied to goals with the form forall open_binders ,? R term1 term2 where R is a reflexive relation registered with the Equivalence or Reflexive typeclasses. See Class and Instance.
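
Example

A minimal illustrative use (the goal is arbitrary):

Goal 2 + 3 = 5.
Proof.
  reflexivity.   (* 2 + 3 and 5 are convertible, so eq_refl applies *)
Qed.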

Error The relation ident is not a declared reflexive relation. Maybe you need to require the Coq.Classes.RelationClasses library
Tactic symmetry simple_occurrences?

Changes a goal that has the form forall open_binders ,? t = u into u = t. simple_occurrences may be used to apply the change in the selected hypotheses and/or the conclusion.

The tactic may also be applied to goals with the form forall open_binders ,? R term1 term2 where R is a symmetric relation registered with the Equivalence or Symmetric typeclasses. See Class and Instance.

Error The relation ident is not a declared symmetric relation. Maybe you need to require the Coq.Classes.RelationClasses library
Tactic transitivity one_term

Changes a goal that has the form forall open_binders ,? t = u into the two subgoals t = one_term and one_term = u.

The tactic may also be applied to goals with the form forall open_binders ,? R term1 term2 where R is a transitive relation registered with the Equivalence or Transitive typeclasses. See Class and Instance.

Tactic etransitivity

This tactic behaves like transitivity, using a fresh evar instead of a concrete one_term.
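
Example

A small sketch (the statement and hypothesis names are arbitrary) showing etransitivity deferring the choice of the middle term to an existential variable:

Goal forall x y z : nat, x = y -> y = z -> x = z.
Proof.
  intros x y z Hxy Hyz.
  etransitivity.          (* like transitivity with an evar: subgoals x = ?e and ?e = z *)
  - exact Hxy.            (* instantiates the evar with y *)
  - exact Hyz.
Qed.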

Error The relation ident is not a declared transitive relation. Maybe you need to require the Coq.Classes.RelationClasses library
Tactic f_equal

For a goal with the form f a1 ... an = g b1 ... bn, creates subgoals f = g and ai = bi for the n arguments. Subgoals that can be proven by reflexivity or congruence are solved automatically.
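
Example

A minimal sketch (the function and hypothesis names are arbitrary):

Goal forall (f : nat -> nat) (x y : nat), x = y -> f x = f y.
Proof.
  intros f x y H.
  f_equal.        (* the subgoal f = f is closed automatically; x = y remains *)
  exact H.
Qed.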

Rewriting with Leibniz and setoid equality

Tactic rewrite oriented_rewriter+, occurrences? by ltac_expr3?
oriented_rewriter ::= (-> | <-)? natural? (? | !)? one_term_with_bindings

Replaces subterms with other subterms that have been proven to be equal. The type of one_term must have the form:

forall (x1 : A1) … (xn : An), EQ term1 term2

where EQ is the Leibniz equality eq or a registered setoid equality. Note that eq term1 term2 is typically written with the infix notation term1 = term2. You must Require Setoid to use the tactic with a setoid equality or with setoid rewriting.

rewrite one_term finds subterms matching term1 in the goal, and replaces them with term2 (or the reverse if <- is given). Some of the variables xi are solved by unification, and some of the types A1, …, An may become new subgoals. rewrite won't find occurrences inside forall that refer to variables bound by the forall; use the more advanced setoid_rewrite if you want to find such occurrences.

oriented_rewriter+,

The oriented_rewriters are applied sequentially to the first goal generated by the previous oriented_rewriter. If any of them fail, the tactic fails.

(-> | <-)?

For -> (the default), term1 is rewritten into term2. For <-, term2 is rewritten into term1.

natural? (? | !)?

natural is the number of rewrites to perform. If ? is given, natural is the maximum number of rewrites to perform; otherwise natural is the exact number of rewrites to perform.

? (without natural) performs the rewrite as many times as possible (possibly zero times). This form never fails. ! (without natural) performs the rewrite as many times as possible and at least once. The tactic fails if the requested number of rewrites can't be performed. natural ! is equivalent to natural.

occurrences

If occurrences specifies multiple occurrences, the tactic succeeds if any of them can be rewritten. If not specified, only the first occurrence in the conclusion is replaced.

Note

If at occs_nums is specified, rewriting is always done with setoid rewriting, even for Leibniz equality, which means that you must Require Setoid to use that form. However, note that rewrite (even when using setoid rewriting) and setoid_rewrite don't behave identically (as is noted above and below).

by ltac_expr3

If specified, is used to resolve all side conditions generated by the tactic.
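
Example

Two small sketches (the statements are arbitrary; Nat.add_0_r and Nat.sub_add are standard Arith lemmas) illustrating the ! multiplicity marker and the by clause described above:

Require Import Arith.

Goal forall x y : nat, x + (y + 0) + 0 = x + y.
Proof.
  intros x y.
  rewrite !Nat.add_0_r.            (* "!": rewrite as many times as possible, at least once *)
  reflexivity.
Qed.

Goal forall m : nat, 2 <= m -> m - 2 + 2 = m.
Proof.
  intros m Hm.
  rewrite Nat.sub_add by exact Hm. (* "by" discharges the side condition 2 <= m *)
  reflexivity.
Qed.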

Note

For each selected hypothesis and/or the conclusion, rewrite finds the first matching subterm in depth-first search order. Only subterms identical to that first matched subterm are rewritten. If the at clause is specified, only these subterms are considered when counting occurrences. To select a different set of matching subterms, you can specify how some or all of the free variables are bound by using a with clause (see one_term_with_bindings).

For instance, if we want to rewrite the right-hand side in the following goal, this will not work:

Require Import Arith.
[Loading ML file ring_plugin.cmxs (using legacy method) ... done]
Lemma example x y : x + y = y + x.
1 goal x, y : nat ============================ x + y = y + x
rewrite Nat.add_comm at 2.
Toplevel input, characters 0-25: > rewrite Nat.add_comm at 2. > ^^^^^^^^^^^^^^^^^^^^^^^^^ Error: Invalid occurrence number: 2.

One can explicitly specify how some variables are bound to match a different subterm:

rewrite Nat.add_comm with (m := x).
1 goal x, y : nat ============================ x + y = x + y

Note that the more advanced setoid_rewrite tactic behaves differently, and thus the number of occurrences available to rewrite may differ between the two tactics.

Error Tactic failure: Setoid library not loaded.
Error Cannot find a relation to rewrite.
Error Tactic generated a subgoal identical to the original goal.
Error Found no subterm matching term in ident.
Error Found no subterm matching term in the current goal.

This happens if term does not occur in, respectively, the named hypothesis or the goal.

Tactic erewrite oriented_rewriter+, occurrences? by ltac_expr3?

Works like rewrite, but turns unresolved bindings, if any, into existential variables instead of failing. It has the same parameters as rewrite.
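
Example

A small sketch (mul_0_l' is an ad-hoc helper lemma defined here, not a standard library name): rewriting right-to-left with it cannot determine n from the goal, so rewrite would fail, while erewrite introduces an existential variable that is instantiated explicitly before Qed.

Lemma mul_0_l' (n : nat) : 0 * n = 0.
Proof. reflexivity. Qed.

Goal 0 = 0.
Proof.
  erewrite <- mul_0_l'.   (* goal becomes 0 * ?n = 0 * ?n, with ?n an evar *)
  reflexivity.
  Unshelve.               (* the evar for n is still unresolved *)
  exact 7.                (* any natural number will do *)
Qed.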

Flag Keyed Unification

This flag makes higher-order unification used by rewrite rely on a set of keys to drive unification. The subterms, considered as rewriting candidates, must start with the same key as the left- or right-hand side of the lemma given to rewrite, and the arguments are then unified up to full reduction.

Command Declare Equivalent Keys one_term one_term
Command Print Equivalent Keys
Tactic rewrite * (-> | <-)? one_term in ident? at rewrite_occs? by ltac_expr3?
Tactic rewrite * (-> | <-)? one_term at rewrite_occs in ident by ltac_expr3?
Tactic replace one_termfrom with one_termto occurrences? by ltac_expr3?
Tactic replace (-> | <-)? one_termfrom occurrences?

The first form replaces all free occurrences of one_termfrom in the current goal with one_termto and generates an equality one_termto = one_termfrom as a subgoal. (Note the generated equality is reversed with respect to the order of the two terms in the tactic syntax; see issue #13480.) When by ltac_expr3 is not present, this equality is automatically solved if it occurs among the hypotheses, or if its symmetric form occurs.

The second form, with -> or no arrow, replaces one_termfrom with term using the first hypothesis whose type has the form one_termfrom = term. If <- is given, the tactic uses the first hypothesis with the reverse form, i.e. term = one_termfrom.

occurrences

The "type of" and "value of" forms of occurrences are not supported. Note that you must Require Setoid to use the at clause in occurrences.

by ltac_expr3

Applies the ltac_expr3 to solve the generated equality.
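
Example

A minimal sketch (the goal is arbitrary; Nat.add_comm is the standard Arith lemma also used in the rewrite example above):

Require Import Arith.

Goal forall x y : nat, x + y = y + x.
Proof.
  intros x y.
  replace (x + y) with (y + x) by apply Nat.add_comm.
  (* the main goal is now y + x = y + x; the generated equality y + x = x + y
     was solved by the "by" clause *)
  reflexivity.
Qed.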

Error Terms do not have convertible types.
Tactic cutrewrite (-> | <-)? one_type in ident?

Where one_type is an equality.

Deprecated since version 8.5: Use replace instead.

Tactic substitute (-> | <-)? one_term_with_bindings
Tactic subst ident*

For each ident, in order, for which there is a hypothesis in the form ident = term or term = ident, replaces ident with term everywhere in the hypotheses and the conclusion and clears ident and the hypothesis from the context. If there are multiple hypotheses that match the ident, the first one is used. If no ident is given, replacement is done for all hypotheses in the appropriate form in top to bottom order.

If ident is a local definition of the form ident := term, it is also unfolded and cleared.

If ident is a section variable it must have no indirect occurrences in the goal, i.e. no global declarations implicitly depending on the section variable may be present in the goal.
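
Example

A small sketch (the statement and hypothesis names are arbitrary):

Goal forall x y z : nat, x = y -> z = y -> x = z.
Proof.
  intros x y z Hxy Hzy.
  subst y.       (* uses Hxy : x = y to replace y with x; Hzy becomes z = x *)
  subst z.       (* uses Hzy : z = x to replace z with x *)
  reflexivity.
Qed.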

Note

If the hypothesis is itself dependent in the goal, it is replaced by the proof of reflexivity of equality.

Flag Regular Subst Tactic

This flag controls the behavior of subst. When it is activated (it is by default), subst also deals with the following corner cases:

  • A context with ordered hypotheses ident1 = ident2 and ident1 = t, or t′ = ident1 with t′ not a variable, and no other hypotheses of the form ident2 = u or u = ident2; without the flag, a second call to subst would be necessary to replace ident2 by t or t′ respectively.

  • The presence of a recursive equation which without the flag would be a cause of failure of subst.

  • A context with cyclic dependencies as with hypotheses ident1 = f ident2 and ident2 = g ident1 which without the flag would be a cause of failure of subst.

Additionally, it prevents a local definition such as ident := t from being unfolded which otherwise would exceptionally unfold in configurations containing hypotheses of the form ident = u, or u′ = ident with u′ not a variable. Finally, it preserves the initial order of hypotheses, which without the flag it may break.

Error Cannot find any non-recursive equality over ident.
Error Section variable ident occurs implicitly in global declaration qualid present in hypothesis ident.
Error Section variable ident occurs implicitly in global declaration qualid present in the conclusion.

Raised when the variable is a section variable with indirect dependencies in the goal. If ident is a section variable, it must not have any indirect occurrences in the goal, i.e. no global declarations implicitly depending on the section variable may be present in the goal.

Tactic simple subst
Tactic stepl one_term by ltac_expr?

For chaining rewriting steps. It assumes a goal in the form R term1 term2 where R is a binary relation and relies on a database of lemmas of the form forall x y z, R x y -> eq x z -> R z y where eq is typically a setoid equality. The application of stepl one_term then replaces the goal by R one_term term2 and adds a new goal stating eq one_term term1.

If ltac_expr is specified, it is applied to the side condition.

Command Declare Left Step one_term

Adds one_term to the database used by stepl.

This tactic is especially useful for parametric setoids which are not accepted as regular setoids for rewrite and setoid_replace (see Generalized rewriting).

Tactic stepr one_term by ltac_expr?

This behaves like stepl but on the right hand side of the binary relation. Lemmas are expected to be in the form forall x y z, R x y -> eq y z -> R x z.

Command Declare Right Step one_term

Adds one_term to the database used by stepr.

Rewriting with definitional equality

Tactic change (one_termfrom at occs_nums? with)? one_termto occurrences?

Replaces terms with other convertible terms. If one_termfrom is not specified, then one_termto replaces the conclusion and/or the specified hypotheses. If one_termfrom is specified, the tactic replaces occurrences of one_termfrom within the conclusion and/or the specified hypotheses with one_termto.

one_termfrom at occs_nums? with

Replaces the occurrences of one_termfrom specified by occs_nums with one_termto, provided that the two one_terms are convertible. one_termfrom may contain pattern variables such as ?x, whose value will be substituted for x in one_termto, as in change (f ?x ?y) with (g (x, y)) or change (fun x => ?f x) with f.

The "at … with" form is deprecated since 8.14; use "with … at" instead. For "at … with … in H |-", use "with … in H at … |-".

occurrences

If with is not specified, occurrences must only specify entire hypotheses and/or the goal; it must not include any at occs_nums clauses.
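
Example

A small sketch (the goal is arbitrary) showing both the form with "with" and the form that restates the whole conclusion:

Goal 2 + 2 = 4.
Proof.
  change 4 with (2 + 2).   (* replace the convertible subterm 4 with 2 + 2 *)
  change (4 = 4).          (* no "with": replace the conclusion by a convertible statement *)
  reflexivity.
Qed.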

Error Not convertible.
Error Found an "at" clause without "with" clause
Tactic now_show one_type

A synonym for change one_type. It can be used to make some proof steps explicit when refactoring a proof script to make it readable.

Tactic change_no_check (one_termfrom at occs_nums? with)? one_termto occurrences?

For advanced usage. Similar to change, but as an optimization, it skips checking that one_termto is convertible with the goal or one_termfrom.

Recall that the Coq kernel typechecks proofs again when they are concluded to ensure correctness. Hence, using change checks convertibility twice overall, while change_no_check can produce ill-typed terms, but checks convertibility only once. Hence, change_no_check can be useful to speed up certain proof scripts, especially if one knows by construction that the argument is indeed convertible to the goal.

In the following example, change_no_check replaces False with True, but Qed then rejects the proof, ensuring consistency.

Example

Goal False.
1 goal ============================ False
  change_no_check True.
1 goal ============================ True
  exact I.
No more goals.
Qed.
Toplevel input, characters 0-4: > Qed. > ^^^^ Error: The term "I" has type "True" while it is expected to have type "False".

Example

Goal True -> False.
1 goal ============================ True -> False
  intro H.
1 goal H : True ============================ False
  change_no_check False in H.
1 goal H : False ============================ False
  exact H.
No more goals.
Qed.
Toplevel input, characters 0-4: > Qed. > ^^^^ Error: The term "fun H : True => H" has type "True -> True" while it is expected to have type "True -> False".

Applying conversion rules

These tactics apply reductions and expansions, replacing convertible subterms with others that are equal by definition in CIC. They implement different specialized uses of the change tactic. Other ways to apply these reductions are through the Eval command, the Eval clause in the Definition/Example command and the eval tactic.

Tactics described in this section include:

  • lazy and cbv, which allow precise selection of which reduction rules to apply

  • simpl and cbn, which are "clever" tactics meant to give the most readable result

  • hnf and red, which apply reduction rules only to the head of the term

  • vm_compute and native_compute, which are performance-oriented.

Conversion tactics, with two exceptions, only change the types and contexts of existential variables and leave the proof term unchanged. (The vm_compute and native_compute tactics change existential variables in a way similar to other conversions while also adding a single explicit cast to the proof term to tell the kernel which reduction engine to use. See Type cast.) For example:

Goal 3 + 4 = 7.
1 goal ============================ 3 + 4 = 7
Show Proof.
?Goal
Show Existentials.
Existential 1 = ?Goal : [ |- 3 + 4 = 7]
cbv.
1 goal ============================ 7 = 7
Show Proof.
(?Goal : 3 + 4 = 7)
Show Existentials.
Existential 1 = ?Goal : [ |- 7 = 7]
Abort.
Tactic lazy reductions? simple_occurrences
Tactic cbv reductions? simple_occurrences
reductions ::= reduction+ | head? delta_reductions
reduction ::= head | beta | delta delta_reductions? | match | fix | cofix | iota | zeta
delta_reductions ::= -? [ reference+ ]

Normalize the goal as specified by reductions. If no reductions are specified by name, all reductions are applied. If any reductions are specified by name, then only the named reductions are applied. The reductions include:

head

Do only head reduction, without going under binders. Supported by simpl, cbv, cbn and lazy. If this is the only specified reduction, all other reductions are applied.

beta

beta-reduction of functional application

delta delta_reductions?

delta-reduction: unfolding of transparent constants, see Controlling reduction strategies and the conversion algorithm. The form in reductions without the keyword delta includes beta, iota and zeta reductions in addition to delta using the given delta_reductions.

-? [ reference+ ]

Without the -, delta unfolding is limited to the listed constants. If the - is present, unfolding is applied to all constants that are not listed. Notice that delta-reduction doesn't apply to variables bound by a let-in construction inside the term itself (use zeta to inline these). Opaque constants are never unfolded except by vm_compute and native_compute (see #4476 and Controlling reduction strategies and the conversion algorithm).

iota

iota-reduction of pattern matching (match) over a constructed term and reduction of fix and cofix expressions. Shorthand for match fix cofix.

zeta

zeta-reduction: reduction of let-in definitions

Normalization is done by first evaluating the head of the expression into weak-head normal form, i.e. until evaluation is blocked by a variable, an opaque constant or an axiom (as in x u1 … un, match x with … end or (fix f x {struct x} := …) x), by a constructed form (a \(\lambda\)-expression, constructor, cofixpoint, inductive type, product type or sort), or by a redex that the flags prevent from being reduced. Once a weak-head normal form is obtained, subterms are recursively reduced using the same strategy.

There are two strategies for reduction to weak-head normal form: lazy (the lazy tactic), or call-by-value (the cbv tactic). The lazy strategy is a call by need strategy, with sharing of reductions: the arguments of a function call are weakly evaluated only when necessary, and if an argument is used several times then it is weakly computed only once. This reduction is efficient for reducing expressions with dead code. For instance, the proofs of a proposition exists x. P(x) reduce to a pair of a witness t and a proof that t satisfies the predicate P. Most of the time, t may be computed without computing the proof of P(t), thanks to the lazy strategy.

Flag Kernel Term Sharing

Turning this flag off disables the sharing of computations in lazy, making it a call-by-name reduction. This also affects the reduction procedure used by the kernel when typechecking. By default sharing is activated.

The call-by-value strategy is the one used in ML languages: the arguments of a function call are systematically weakly evaluated first. The lazy strategy is similar to how Haskell reduces terms. Although the lazy strategy always does fewer reductions than the call-by-value strategy, the latter is generally more efficient for evaluating purely computational expressions (i.e. with little dead code).
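
Example

A small sketch (the constant double is an ad-hoc definition) showing how restricting the reduction flags controls what cbv does:

Definition double (n : nat) : nat := n + n.

Goal double 2 = 4.
Proof.
  cbv delta [double].   (* only unfold double: the goal shows (fun n : nat => n + n) 2 = 4 *)
  cbv beta.             (* apply the function: 2 + 2 = 4 *)
  cbv.                  (* full normalization: 4 = 4 *)
  reflexivity.
Qed.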

Tactic compute delta_reductions? simple_occurrences

A variant form of cbv.

Setting Debug "Cbv" makes cbv (and its derivative compute) print information about the constants it encounters and the unfolding decisions it makes.

Tactic simpl head? delta_reductions? (reference_occs | pattern_occs)? simple_occurrences
reference_occs ::= reference at occs_nums?
pattern_occs ::= one_term at occs_nums?

Reduces a term to something still readable instead of fully normalizing it. It performs a sort of strong normalization with two key differences:

  • It unfolds constants only if they lead to an ι-reduction, i.e. reducing a match or unfolding a fixpoint.

  • When reducing a constant unfolding to (co)fixpoints, the tactic uses the name of the constant the (co)fixpoint comes from instead of the (co)fixpoint definition in recursive calls.

occs_nums

Selects which occurrences of one_term to process (counting from left to right on the expression printed using the Printing All flag)

simple_occurrences

Permits selecting whether to reduce the conclusion and/or one or more hypotheses. While the at option of occurrences is not allowed here, reference_occs and pattern_occs have a somewhat less flexible at option for selecting specific occurrences.

simpl can unfold transparent constants whose name can be reused in recursive calls as well as those designated by Arguments reference / commands. For instance, a constant plus' := plus may be unfolded and reused in recursive calls, but a constant such as succ := plus (S O) is not unfolded unless it was specifically designated in an Arguments command such as Arguments succ /.

A reference_occs or pattern_occs argument limits the application of simpl to the subterms matching the given reference or pattern, optionally restricted to the occurrences selected with at.

Tactic cbn reductions? simple_occurrences

cbn was intended to be a more principled, faster and more predictable replacement for simpl.

The main difference between cbn and simpl is that cbn may unfold constants even when they cannot be reused in recursive calls: in the previous example, succ t is reduced to S t.

Setting Debug "RAKAM" makes cbn print various debugging information. RAKAM is the Refolding Algebraic Krivine Abstract Machine.
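
Example

A small sketch of the succ constant discussed above (written here with Nat.add, which is what plus denotes):

Definition succ := Nat.add (S O).   (* the constant succ := plus (S O) from the simpl description *)

Goal forall t : nat, succ t = S t.
Proof.
  intros t.
  simpl.          (* succ is not unfolded: the goal is unchanged, per the description of simpl *)
  cbn.            (* cbn does unfold succ: the goal becomes S t = S t *)
  reflexivity.
Qed.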

Tactic hnf simple_occurrences

Replaces the current goal with its weak-head normal form according to the βδιζ-reduction rules, i.e. it reduces the head of the goal until it becomes a product or an irreducible term. All inner βι-redexes are also reduced. While hnf behaves similarly to simpl and cbn, unlike them, it does not recurse into subterms. The behavior of hnf can be tuned using the Arguments command.

Example: The term fun n : nat => S n + S n is not reduced by hnf.

Note

The δ rule only applies to transparent constants (see Controlling reduction strategies and the conversion algorithm on transparency and opacity).

Tactic red simple_occurrences

βιζ-reduces T, if possible, in the selected hypotheses and/or the goal, which have the form:

forall open_binders ,? T

(where T does not begin with a forall), until T has the form c t1 … tn with c a constant. If c is transparent, then c is replaced with its definition and reduction continues until no further reduction is possible.

In the term forall open_binders ,? t1 ... tn, where t1 is not a term_application, t1 is the head of the term. In a term with the form forall open_binders ,? c t1 ... tn, where c is a constant, c is the head constant.

Error No head constant to reduce.
Tactic unfold reference_occs+, occurrences?

Applies delta-reduction to the constants specified by each reference_occs. The selected hypotheses and/or goals are then reduced to βιζ-normal form. Use the general reduction tactics if you want to only apply the δ rule, for example cbv delta [ reference ].

reference_occs

If reference is a qualid, it must be a defined transparent constant or local definition (see Top-level definitions and Controlling reduction strategies and the conversion algorithm).

If reference is a string scope_key?, where the string is the discriminating symbol of a notation (e.g. "+") or an expression defining a notation (e.g. "_ + _"), and the notation is an application whose head symbol is an unfoldable constant, then the tactic unfolds that constant.

occurrences

If occurrences is specified, the specified occurrences will be replaced in the selected hypotheses and/or goal. Otherwise every occurrence of the constants in the goal is replaced. If multiple reference_occs are given, any at clauses must be in the reference_occs rather than in occurrences.
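
Example

A minimal sketch unfolding the standard definition not:

Goal 0 <> 1.
Proof.
  unfold not.      (* the goal becomes 0 = 1 -> False *)
  intro H.
  discriminate H.
Qed.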

Error Cannot turn inductive | constructor into an evaluable reference.

Occurs when trying to unfold something that is defined as an inductive type (or constructor) and not as a definition.

Example

Goal 0 <= 1.
1 goal ============================ 0 <= 1
unfold le.
Toplevel input, characters 7-9: > unfold le. > ^^ Error: Cannot turn inductive le into an evaluable reference.
Error ident is opaque.

Raised if you are trying to unfold a definition that has been marked opaque.

Example

Opaque Nat.add.
Goal 1 + 0 = 1.
1 goal ============================ 1 + 0 = 1
unfold Nat.add.
Toplevel input, characters 0-14: > unfold Nat.add. > ^^^^^^^^^^^^^^ Error: Nat.add is opaque.
Error Bad occurrence number of qualid.
Error qualid does not occur.
Tactic fold one_term+ simple_occurrences

First, this tactic reduces each one_term using the red tactic. Then, every occurrence of the resulting terms in the selected hypotheses and/or goal will be replaced by its associated one_term. This tactic is particularly useful for reversing undesired unfoldings, which may make the goal very hard to read. The undesired unfoldings may be due to the limited capabilities of other reduction tactics. On the other hand, when an unfolded function applied to its argument has been reduced, the fold tactic doesn't do anything.

fold one_term1 one_term2 is equivalent to fold one_term1; fold one_term2.

Example: fold doesn't always undo unfold

Goal ~0=0.
1 goal ============================ 0 <> 0
unfold not.
1 goal ============================ 0 = 0 -> False

This fold doesn't undo the preceding unfold (it makes no change):

fold not.
1 goal ============================ 0 = 0 -> False

However, this pattern followed by fold does:

pattern (0 = 0).
1 goal ============================ (fun P : Prop => P -> False) (0 = 0)
fold not.
1 goal ============================ 0 <> 0

Example: Use fold to reverse unfolding of fold_right

Require Import Coq.Lists.List.
Local Open Scope list_scope.
Goal forall x xs, fold_right and True (x::xs).
1 goal ============================ forall (x : Prop) (xs : list Prop), fold_right and True (x :: xs)
red.
1 goal ============================ forall (x : Prop) (xs : list Prop), x /\ (fix fold_right (l : list Prop) : Prop := match l with | nil => True | b :: t => b /\ fold_right t end) xs
fold (fold_right and True).
1 goal ============================ forall (x : Prop) (xs : list Prop), x /\ fold_right and True xs
Tactic pattern pattern_occs+, occurrences?

Performs beta-expansion (the inverse of beta-reduction) for the selected hypotheses and/or goals. The one_terms in pattern_occs must be free subterms in the selected items. The expansion is done for each selected item T for a set of one_terms in the pattern_occs by:

  • replacing all selected occurrences of the one_terms in T with fresh variables

  • abstracting these variables

  • applying the abstracted goal to the one_terms

For instance, if the current goal T is expressible as φ(t1 … tn) where the notation captures all the instances of the ti in φ, then pattern t1, …, tn generates the equivalent goal (fun (x1:A1) … (xn:An) => φ(x1 … xn)) t1 … tn. If ti occurs in one of the generated types Aj (for j > i), occurrences will also be considered and possibly abstracted.

This tactic can be used, for instance, when the tactic apply fails on matching or to better control the behavior of rewrite.

See the example here.
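
Example

A small additional sketch (the goal is arbitrary; the printed name of the bound variable may differ):

Goal 2 + 2 = 4.
Proof.
  pattern (2 + 2).   (* the goal becomes (fun n : nat => n = 4) (2 + 2) *)
Abort.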

Fast reduction tactics: vm_compute and native_compute

vm_compute is a brute-force but efficient tactic that first normalizes the terms before comparing them. It is based on a bytecode representation of terms similar to the bytecode representation used in the ZINC virtual machine [Ler90]. It is especially useful for intensive computation of algebraic values, such as numbers, and for reflection-based tactics.

native_compute is based on converting the Coq code to OCaml.

Note that both of these tactics ignore Opaque markings (see issue #4776), and neither applies unfolding strategies such as those set with Strategy.

native_compute is typically two to five times faster than vm_compute at applying conversion rules when Coq is running native code, but native_compute requires considerably more overhead. We recommend using native_compute when all of the following are true (otherwise use vm_compute):

  • the running time with vm_compute is at least 5-10 seconds

  • the size of the input term is small (e.g. hand-generated code rather than automatically-generated code that may have nested destructs on inductives with dozens or hundreds of constructors)

  • the output is small (e.g. you're returning a boolean, a natural number or an integer rather than a large abstract syntax tree)

These tactics change existential variables in a way similar to other conversions while also adding a single explicit cast (see Type cast) to the proof term to tell the kernel which reduction engine to use.

Tactic vm_compute (reference_occs | pattern_occs)? occurrences?

Evaluates the goal using the optimized call-by-value evaluation bytecode-based virtual machine described in [GregoireL02]. This algorithm is dramatically more efficient than the algorithm used for the cbv tactic, but it cannot be fine-tuned. It is especially useful for full evaluation of algebraic objects. This includes the case of reflection-based tactics.
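
Example

A small sketch (the arithmetic goal is arbitrary):

Goal 100 * 100 = 10000.
Proof.
  vm_compute.     (* both sides are fully evaluated: the goal becomes 10000 = 10000 *)
  reflexivity.
Qed.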

Tactic native_compute (reference_occs | pattern_occs)? occurrences?

Evaluates the goal by compilation to OCaml as described in [BDenesGregoire11]. Depending on the configuration, this tactic can either default to vm_compute, recompile dependencies or fail due to some missing precompiled dependencies, see the native-compiler option for details.

Flag NativeCompute Timing

This flag causes all calls to the native compiler to print timing information for the conversion to native code, compilation, execution, and reification phases of native compilation. Timing is printed in units of seconds of wall-clock time.

Flag NativeCompute Profiling

On Linux, if you have the perf profiler installed, this flag makes it possible to profile native_compute evaluations.

Option NativeCompute Profile Filename string

This option specifies the profile output; the default is native_compute_profile.data. The actual filename used will contain extra characters to avoid overwriting an existing file; that filename is reported to the user. That means you can individually profile multiple uses of native_compute in a script. From the Linux command line, run perf report on the profile file to see the results. Consult the perf documentation for more details.

Computing in a term: eval and Eval

Evaluation of a term can be performed with:

Tactic eval red_expr in term
red_expr ::= lazy reductions?
  | cbv reductions?
  | compute delta_reductions?
  | vm_compute (reference_occs | pattern_occs)?
  | native_compute (reference_occs | pattern_occs)?
  | red
  | hnf
  | simpl head? delta_reductions? (reference_occs | pattern_occs)?
  | cbn reductions?
  | unfold reference_occs+,
  | fold one_term+
  | pattern pattern_occs+,
  | ident

eval is a value_tactic. It returns the result of applying the conversion rules specified by red_expr. It does not change the proof state.

The red_expr alternatives that begin with a keyword correspond to the tactic with the same name, though in several cases with simpler syntax than the tactic. ident is a named reduction expression created with Declare Reduction.

See also

Section Applying conversion rules.

Command Eval red_expr in term

Performs the specified reduction on term and displays the resulting term with its type. If a proof is open, term may reference hypotheses of the selected goal. Eval is a query_command, so it may be prefixed with a goal selector.

Command Compute term

Evaluates term using the bytecode-based virtual machine. It is a shortcut for Eval vm_compute in term. Compute is a query_command, so it may be prefixed with a goal selector.

Command Declare Reduction ident := red_expr

Declares a short name for the reduction expression red_expr, for instance lazy beta delta [foo bar]. This short name can then be used in Eval ident in or eval constructs. This command accepts the local attribute, which indicates that the reduction will be discarded at the end of the file or module. The name is not qualified. In particular declaring the same name in several modules or in several functor applications will be rejected if these declarations are not local. The name ident cannot be used directly as an Ltac tactic, but nothing prevents the user from also performing a Ltac ident := red_expr.
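
Example

A small sketch (unfold_not is an ad-hoc name declared here):

Declare Reduction unfold_not := cbv beta delta [not].

Eval unfold_not in (0 <> 1).   (* prints something like: = 0 = 1 -> False : Prop *)

Compute 29 * 31.               (* shortcut for Eval vm_compute in 29 * 31; prints = 899 : nat *)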

Controlling reduction strategies and the conversion algorithm

The commands to fine-tune the reduction strategies and the lazy conversion algorithm are described in this section. Also see Effects of Arguments on unfolding, which supports additional fine-tuning.

Command Opaque reference+

Marks the specified constants as opaque so tactics won't unfold them with delta-reduction. "Constants" are items defined by commands such as Definition, Let (with an explicit body), Fixpoint, CoFixpoint and Function.

This command accepts the global attribute. By default, the scope of Opaque is limited to the current section or module.

Opaque also affects Coq's conversion algorithm, causing it to delay unfolding the specified constants as much as possible when it has to check that two distinct applied constants are convertible. See Section Conversion rules.

Command Transparent reference+

The opposite of Opaque, it marks the specified constants as transparent so that tactics may unfold them. See Opaque above.

This command accepts the global attribute. By default, the scope of Transparent is limited to the current section or module.

Note that constants defined by proofs ending with Qed are irreversibly opaque; Transparent will not make them transparent. This is consistent with the usual mathematical practice of proof irrelevance: what matters in a mathematical development is the sequence of lemma statements, not their actual proofs. This distinguishes lemmas from the usual defined constants, whose actual values are of course relevant in general.

Error The reference qualid was not found in the current environment.

There is no constant named qualid in the environment.

Command Strategy strategy_level [ reference+ ]+
strategy_level::=opaque|integer|expand|transparent

Generalizes the behavior of the Opaque and Transparent commands. It is used to fine-tune the strategy for unfolding constants, both at the tactic level and at the kernel level. This command associates a strategy_level with the qualified names in the reference sequence. Whenever two expressions with two distinct head constants are compared (for example, typechecking f x where f : A -> B and x : C will result in converting A and C), the one with lower level is expanded first. In case of a tie, the second one (appearing in the cast type) is expanded.

This command accepts the local attribute, which limits its effect to the current section or module, in which case the section and module behavior is the same as Opaque and Transparent (without global).

Levels can be one of the following (higher to lower):

  • opaque : level of opaque constants. They cannot be expanded by tactics (behaves like +∞, see next item).

  • integer : levels indexed by an integer. Level 0 corresponds to the default behavior, which corresponds to transparent constants. This level can also be referred to as transparent. Negative levels correspond to constants to be expanded before normal transparent constants, while positive levels correspond to constants to be expanded after normal transparent constants.

  • expand : level of constants that should be expanded first (behaves like −∞)

  • transparent : Equivalent to level 0

Command Print Strategy reference

This command prints the strategy currently associated with reference. It fails if reference is not an unfoldable reference, that is, neither a variable nor a constant.

Error The reference is not unfoldable.
Command Print Strategies

Print all the currently non-transparent strategies.

Tactic with_strategy strategy_level_or_var [ reference+ ] ltac_expr3
strategy_level_or_var::=strategy_level|ident

Executes ltac_expr3, applying the alternate unfolding behavior that the Strategy command controls, but only for ltac_expr3. This can be useful for guarding calls to reduction in tactic automation to ensure that certain constants are never unfolded by tactics like simpl and cbn or to ensure that unfolding does not fail.

Example

Opaque id.
Goal id 10 = 10.
1 goal ============================ id 10 = 10
Fail unfold id.
The command has indeed failed with message: id is opaque.
with_strategy transparent [id] unfold id.
1 goal ============================ 10 = 10

Warning

Use this tactic with care, as effects do not persist past the end of the proof script. Notably, this fine-tuning of the conversion strategy is not in effect during Qed nor Defined, so this tactic is most useful either in combination with abstract, which will check the proof early while the fine-tuning is still in effect, or to guard calls to conversion in tactic automation to ensure that, e.g., unfold does not fail just because the user made a constant Opaque.

This can be illustrated with the following example involving the factorial function.

Fixpoint fact (n : nat) : nat :=
  match n with
  | 0 => 1
  | S n' => n * fact n'
  end.
fact is defined fact is recursively defined (guarded on 1st argument)

Suppose now that, for whatever reason, we want in general to unfold the id function very late during conversion:

Strategy 1000 [id].

If we try to prove id (fact n) = fact n by reflexivity, it will now take time proportional to \(n!\), because Coq will keep unfolding fact and * and + before it unfolds id, resulting in a full computation of fact n (in unary, because we are using nat), which takes time \(n!\). We can see this cross the relevant threshold at around \(n = 9\):

Goal True.
1 goal ============================ True
Time assert (id (fact 8) = fact 8) by reflexivity.
Finished transaction in 0.181 secs (0.157u,0.024s) (successful) 1 goal H : id (fact 8) = fact 8 ============================ True
Time assert (id (fact 9) = fact 9) by reflexivity.
Finished transaction in 1.338 secs (1.336u,0.002s) (successful) 1 goal H : id (fact 8) = fact 8 H0 : id (fact 9) = fact 9 ============================ True

Note that behavior will be the same if you mark id as Opaque because while most reduction tactics refuse to unfold Opaque constants, conversion treats Opaque as merely a hint to unfold this constant last.

We can get around this issue by using with_strategy:

Goal True.
1 goal ============================ True
Fail Timeout 1 assert (id (fact 100) = fact 100) by reflexivity.
The command has indeed failed with message: Timeout!
Time assert (id (fact 100) = fact 100) by with_strategy -1 [id] reflexivity.
Finished transaction in 0.001 secs (0.001u,0.s) (successful) 1 goal H : id (fact 100) = fact 100 ============================ True

However, when we go to close the proof, we will run into trouble, because the reduction strategy changes are local to the tactic passed to with_strategy.

exact I.
No more goals.
Timeout 1 Defined.
Toplevel input, characters 0-18: > Timeout 1 Defined. > ^^^^^^^^^^^^^^^^^^ Error: Timeout!

We can fix this issue by using abstract:

Goal True.
1 goal ============================ True
Time assert (id (fact 100) = fact 100) by with_strategy -1 [id] abstract reflexivity.
Finished transaction in 0.004 secs (0.004u,0.s) (successful) 1 goal H : id (fact 100) = fact 100 ============================ True
exact I.
No more goals.
Time Defined.
Finished transaction in 0.001 secs (0.001u,0.s) (successful)

On small examples this sort of behavior doesn't matter, but because Coq's performance is super-linear in so many places, unless great care is taken, tactic automation using with_strategy may not be robustly performant as the size of the input grows.

Warning

In much the same way this tactic does not play well with Qed and Defined without using abstract as an intermediary, this tactic does not play well with coqchk, even when used with abstract, due to the inability of tactics to persist information about conversion hints in the proof term. See #12200 for more details.