If you want some more examples, here are course notes about program correctness written by Vašek Chvátal at Concordia University to complement the Rosen text.

**What does it mean for a program to be correct?**

When we are first learning to
program, we are often ready to call it correct when it compiles without errors
or warnings. The next step is to check that it gets the correct output for the
test cases.

Does that mean the program will always produce the correct output? **NO!**
Claiming so would be like claiming that an existence proof can be used to
prove a statement with a universal quantifier (to be precise, a non-negated one).

We call a program "correct" when it produces the correct output for every possible input. Thus, testing is an important and time-consuming step in programming. The problem
is that the set of all possible inputs to a program is, except in the most trivial cases,
extremely large. So whenever possible, we would prefer to **prove** that
a program is correct. This is particularly true when a program will be executed
a huge number of times (think about the kernel of an operating system or the code to fetch search responses at Google) or the cost of code failure is high (think about
code to auto-pilot a jet or to manage stock trading on the New York Stock Exchange).

Proving correctness therefore has two parts:

- Prove that the correct output is produced if the program successfully completes.
- Prove that the program will always successfully complete (in theory, that it "halts").

In particular, we assume that a program P is surrounded by two predicates: a `pre` condition and a `post`
condition, notation: **pre {P} post**. This is called a Hoare
triple after the English computer scientist Tony Hoare (the creator of
quicksort). The `pre` condition describes what is expected to hold true at the start of the program; the `post` condition is what is expected to hold true at the completion of the program. Note: we use the word "program" here to mean any chunk of code or algorithm.

A program is called **partially correct** if, when `pre`
holds before the program executes and the program terminates, `post`
holds after the program has executed. A program is **fully
correct** if we can also prove that it terminates. We will
concentrate on partial correctness.

We can also put `pre` and `post`
conditions around program fragments. A good style is to do this
at least for each method in our program.
We can use comments to do this.

*Example 1:*

```
// pre: x = 1
y = 2*x + 1;
// post: x=1 and y=3
```

This says: "if x=1 before this code fragment is
executed, then x=1 and y=3 after it is executed." The `pre` condition is
rather constrained here, but programmers often write code assuming strong
constraints on what the input is likely to be at certain points. In this example,
x might be entered through a pull-down menu where the only choices were 0 and 1, and the prior statement to this was an if that checked the value of x.

*Example 2:*

```
int Multiply(int left, int right){
    // pre: left >= 0
    int a = left, b = right, product = 0;
    while(a > 0){
        product += b;
        a--;
    }
    // post: product = left * right
    return product;
}
```

This says: "if left >= 0 before the method Multiply is called, then product = left*right after the while loop is executed", and hence Multiply returns the product of its input parameters when its left argument is non-negative.
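A runnable sketch of this example, with the pre condition, loop invariant, and post condition checked via `assert` (the class name and the invariant comment are my own additions; run with `java -ea` so assertions are enabled):

```java
public class MultiplyDemo {
    // pre: left >= 0
    // post: returns left * right
    static int multiply(int left, int right) {
        assert left >= 0;  // pre condition
        int a = left, b = right, product = 0;
        while (a > 0) {
            // loop invariant: product + a*b == left*right
            product += b;
            a--;
        }
        assert product == left * right;  // post condition
        return product;
    }

    public static void main(String[] args) {
        System.out.println(multiply(4, 7));  // prints 28
        System.out.println(multiply(0, 9));  // prints 0
    }
}
```

Note that the asserts only spot-check particular runs; the Hoare-style proof covers every input with left >= 0 at once.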

Remember, `pre`{P}`post` says: "if `pre` before P then executing P makes `post`
true after P". It says **nothing** about "if not `pre`
before P". This is yet another manifestation
of the implies operator: false implies anything.
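A quick way to see this vacuous-truth behavior is to run Example 1's fragment starting from a state that violates the pre condition (the class name and the starting value x = 5 are my own choices):

```java
public class VacuousPre {
    public static void main(String[] args) {
        int x = 5;             // pre condition "x = 1" is violated
        int y = 2 * x + 1;     // the fragment from Example 1
        // the post condition "x=1 and y=3" is false here (y is 11) ...
        System.out.println(x == 1 && y == 3);  // prints false
        // ... yet the triple  x=1 {y=2*x+1} x=1 and y=3  is still valid:
        // it promises nothing about runs that start with x != 1.
    }
}
```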

*Example 3:*

```
// pre: x=1
y = 2;
z = x+y;
// post: what should we put here?
```

We are going to insert more predicates into our program so that we can
understand each step and connect the steps.
In order to know what predicates to put, we need to know what we want
to prove. For instance, "y=2" is a correct `post` condition for the first
line in the example above:

```
// pre: x=1
y = 2;
// post: y=2
```

but if we want to prove that z = 3 after "z=x+y;" we need a stronger predicate:

```
// pre: x=1
y = 2;
// post: x=1 and y=2
```

It appears that we are repeating ourselves in the post condition, but the assignment to y does not change x, so we may carry x=1 forward.

So that we now can conclude:

```
// pre: x=1 and y=2
z = x+y;
// post: x=1, y=2 and z=3
```

The second predicate is the `post` condition of the first assignment and
the `pre` condition of the second assignment and needs to be strong
enough to allow us to draw a desired conclusion.

```
y=3; z=x+y;
```

We will be informal about the inference rule for assignment. We state

```
// the values of the variables in the expression are given
lhs = expression;
// evaluate the expression given the values of the variables and
// the laws of arithmetic to get the value of the expression: Vexpr
// lhs = Vexpr
```

We do this informally because there are a lot of technical complications here (having to do with aliasing, meaning that there can be multiple ways to access a certain variable). We will leave this issue alone. If you are interested in this, take a class in compiler analysis or program analysis or program semantics later in your career.

```
a = z^2 + (z * 5) + 10
```
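Applying the informal assignment rule to this statement, with an assumed starting value z = 2 (my choice, purely for illustration):

```java
public class AssignRule {
    public static void main(String[] args) {
        // pre: z = 2  (an assumed value, chosen only to illustrate the rule)
        int z = 2;
        // the value of the variable in the expression is given (z is 2)
        int a = z * z + (z * 5) + 10;
        // evaluate: 2*2 + 2*5 + 10 = 4 + 10 + 10 = 24, so Vexpr = 24
        // post: z = 2 and a = 24
        assert z == 2 && a == 24;
        System.out.println(a);  // prints 24
    }
}
```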

Prove that the program segment

```
y = 1;
z = x + y;
```

is correct with respect to the initial assertion x=0 and the final assertion z=1.

(Note: this is exercise 1 from Section 4.5 (6th edition) or Section 5.5 (7th edition) in Rosen text.)
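One way to check the claimed triple mechanically (the class name and the annotation comments are my own; this spot-checks the single run the exercise describes, it does not replace the proof):

```java
public class Exercise1 {
    public static void main(String[] args) {
        // initial assertion (pre): x = 0
        int x = 0;
        int y = 1;
        // after the first assignment: x = 0 and y = 1
        int z = x + y;
        // final assertion (post): z = 1, by the assignment rule (0 + 1 = 1)
        assert z == 1;
        System.out.println(z);  // prints 1
    }
}
```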

Once we have assignment out of the way, we can be more precise about the other basic programming control structures:

- **composition**: `S1;S2`
- **conditionals**: `if(t)S` and `if(t) S1; else S2;`
- **loop**: `while(t)S`

Here is the proof rule for composition:

p{S1}q

__q{S2}r__

∴ p{S1;S2}r

The predicate q in the middle is the `post` condition for S1 and the `pre`
condition for S2, just as in our earlier example.
It says: "if S1 takes p to q and S2 takes q to r,
then S1;S2 takes p to r", which is completely obvious!
The rule for composition is very similar to the logic inference rule:

p→q

__q→r__

∴ p→r

*Example 4:* Swap

```
// pre: x=v1 and y=v2
{ t = x; x = y; y = t; }
// ???
```

We need to know what we want to conclude at the end of the three assignments. So we need to have a proof strategy. Here we want to establish a predicate that shows that the code fragment actually swaps values: that in the end we have (at least) "x=v2 and y = v1". Let us use stepwise composition.

First assignment:

```
// x=v1 and y=v2
{ t = x; }
```

We need to be complete enough to infer "x=v2 and y=v1" at the end of the three statements.

Second assignment:

```
// x=v1 and y=v2
{ t = x; }
// t=v1 and x=v1 and y=v2
{ x = y; }
```

Again, we need to know what we want to prove at the end. We actually do not need the y=v2 clause for the next step, but we do need the t=v1 clause. Now we can finish our inferences:

```
// x=v1 and y=v2
{ t = x; }
// t=v1 and x=v1 and y=v2
{ x = y; }
// t=v1 and x=v2 and y=v2
{ y = t; }
// t=v1 and x=v2 and y=v1
```

Note: if we wish to make it more concise, we can apply the composition rule as stated above to produce:

```
// pre: x=v1 and y=v2
{ t = x; } { x = y; } { y = t; }
// post: t=v1 and x=v2 and y=v1
```
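The stepwise composition above can be exercised as runnable Java, with each intermediate predicate checked by an `assert` (the class name and the sample values v1 = 3, v2 = 8 are my own; run with `java -ea`):

```java
public class SwapCheck {
    public static void main(String[] args) {
        // pre: x = v1 and y = v2 (sample values, chosen for the check)
        final int v1 = 3, v2 = 8;
        int x = v1, y = v2;
        int t = x;
        assert t == v1 && x == v1 && y == v2;  // after the first assignment
        x = y;
        assert t == v1 && x == v2 && y == v2;  // after the second assignment
        y = t;
        assert t == v1 && x == v2 && y == v1;  // post: values swapped
        System.out.println(x + " " + y);       // prints "8 3"
    }
}
```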

Here is the proof rule for a conditional without an else clause:

p∧t {S} q

__p∧¬t → q__

∴ p{if(t)S}q

It says:

if (p and t) holds before S and executing S will make q true, and if (p and not t) implies q, then: whenever p is true before {if(t)S}, q is true after {if(t)S}.

*Example 5:*

Prove

```
// pre: T
if (x > max) max = x;
// post: max >= x
```

T (true) before the statement means: I know nothing before the statement gets executed. Our job is to show the two clauses above the line of the proof rule, so that we can conclude the clause below the line:

1. T ∧ x>max {max=x} max ≥ x

This says: if x>max then the assignment max=x will make max≥x true

2. T ∧ x≤max → max ≥ x

So we can conclude that max≥x after the if statement is executed.
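Both clauses of the proof rule can be exercised in runnable form (the class name, method name, and sample arguments are my own; the `assert` checks the post condition on whichever path is taken):

```java
public class MaxCheck {
    // pre: T (no assumption about x or max)
    // post: returns a value >= x
    static int maxAfterIf(int x, int max) {
        if (x > max) max = x;
        assert max >= x;  // post condition holds on both paths
        return max;
    }

    public static void main(String[] args) {
        System.out.println(maxAfterIf(7, 3));  // x > max path: prints 7
        System.out.println(maxAfterIf(2, 9));  // x <= max path: prints 9
    }
}
```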

```
// pre: i=5, x=3, y=5
if x > y then i=i+1;
```

Given the code fragment:

```
// pre: T
if x > y then i=i+1;
```

```
// pre: x=2 and y=1
if x > y then {t = x; x=y; y=t;}
```

Another variant:

```
// pre: x=v1 and y=v2
if x > y then {t = x; x=y; y=t;}
```

Here is the proof rule for a conditional with an else clause:

p∧t {S1} q

__p∧¬t {S2} q__

∴ p{if(t)S1 else S2}q

It says:

if (p and t) holds and executing S1 will make q true, and if (p and not t) holds and executing S2 will also make q true, then: whenever p is true before {if(t)S1 else S2}, q is true afterwards.

*Example 6:*

Prove

```
// pre: T
if(x<0) absX = -x; else absX = x;
// post: absX = |x|
```

Again, our job is to prove the two clauses above the line, which allows us to conclude the clause below the line.

1. T ∧ x<0 {absX=-x} absX = |x|

2. T ∧ x≥0 {absX=x} absX = |x|

So we conclude that absX=|x| after the conditional, no matter whether the then branch or the else branch is executed.
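A runnable version of Example 6 (the class name, method name, and the test inputs are my own; note that x = 0 takes the else-branch, which is why the rule needs x≥0 rather than x>0 there):

```java
public class AbsCheck {
    // pre: T
    // post: returns |x|
    static int abs(int x) {
        int absX;
        if (x < 0) absX = -x; else absX = x;
        return absX;
    }

    public static void main(String[] args) {
        assert abs(-5) == 5;  // then-branch (x < 0)
        assert abs(4) == 4;   // else-branch (x >= 0)
        assert abs(0) == 0;   // boundary case, handled by the else-branch
        System.out.println(abs(-5));  // prints 5
    }
}
```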

```
if x < y then z=x else z=y
```

Verify that the program segment

```
x = 2
z = x + y
if y > 0 then z = z + 1
else z = 0
```

is correct with respect to the initial assertion y=3 and the final assertion z=6.

(Note: this is exercise 3 in section 4.5 (6th edition) or section 5.5 (7th edition) in the Rosen text.)

Here is the proof rule for a while loop:

__p∧t {S} p__

∴ p{while(t)S} ¬t ∧ p

In this proof rule, p is called the **loop invariant**. Invariant means that
it stays the same throughout. Notice
that the clause above the line says: if p and t hold before S, then p still holds after S,
so p is kept invariant through the loop body S. The whole rule says: if
S keeps p invariant, then the while loop keeps p invariant,
and upon termination ¬t holds,
because the only way the while loop terminates is for t to become false.

*Example 7:*

Prove

```
// pre: n>0
int i = 0;
while(i < n){
    i++;
}
// post: i=n
```

This is obvious, but can we prove it? Sure. We can place a predicate after the declaration of i:

```
// pre: n>0
int i = 0;
// i=0 and n>0
while(i < n){
    i++;
}
// post: i=n
```

Note: the predicate in the middle is the post condition of the declaration of i and, at the same time, the pre condition of the while loop.

We know *t* (the variable in the loop rule above) is: "i<n". Now we need to create a loop invariant *p*.

The statement "i++" increments i, and therefore i=0 is not kept invariant by the loop body. Note: at this stage, we are looking for a statement broad enough to encompass the state before the loop as well as the state after each iteration of the loop body.

The loop body increments i, but never past n. So the loop invariant p must be: i ≤ n

Can we prove that i≤n holds before the while loop? Yes: i=0 and n>0 → i≤n. So we have

```
// n>0
int i = 0;
// i=0 and n>0 implies i<=n
// i <= n (the loop invariant)
while(i < n){
    // i < n (loop test) and i <= n (loop invariant)
    i++;
    // i <= n (loop invariant)
}
// not (i<n) (negation of loop test) and (i<=n) (loop invariant)
// is equivalent to (i>=n) and (i<=n)
// which is equivalent to i = n    QED
```

Again, to find out what the loop invariant should be, we need to know what we want to prove at the end of the loop. At the end of the loop we have the invariant and the negation of the loop test. These together must be strong enough to reach the desired conclusion.

Sometimes we need to analyze what *S* is doing within the loop as well. But then we just use the other rules, such as composition and conditionals, to do so and then put everything together.
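The annotated proof of Example 7 translates directly into runnable asserts (the class name and the sample value n = 5 are my own; run with `java -ea`):

```java
public class InvariantCheck {
    public static void main(String[] args) {
        int n = 5;                   // pre: n > 0 (a sample value)
        int i = 0;
        assert i <= n;               // the invariant holds before the loop
        while (i < n) {
            assert i < n && i <= n;  // loop test and invariant
            i++;
            assert i <= n;           // the body preserves the invariant
        }
        assert !(i < n) && i <= n;   // at exit: not(test) and invariant
        assert i == n;               // post: i = n
        System.out.println(i);       // prints 5
    }
}
```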

Use a loop invariant to prove that the following program segment for computing the nth power, where n is a positive integer, of a real number x is correct.

```
power = 1;
i = 1;
while i <= n {
    power = power * x;
    i = i + 1;
}
```

(Note: this is exercise 7 in Section 4.5 (6th edition) or Section 5.5 (7th edition) in the Rosen text.)

**Program Verification Last Comments**

Ideally, when we test our code we check that every `pre` {P} `post` triple
holds. The problem is that we can only test with a small number of
particular input set(s). Therefore, testing does not guarantee that
our program is correct. Proofs of correctness are ideal. However, testing is still
a major component of program development for several reasons. First, it is useful
in finding bugs, i.e., in establishing that our program is incorrect and identifying in
what ways it is incorrect. Second, it is not always possible to prove correctness;
the algorithm might be too complicated or include steps such as generating a random
number that cannot be adequately constrained. Third, just because the precondition
is written correctly does not guarantee that the accompanying code is. Whenever
possible add `pre` and `post` conditions to your code, prove that the `pre` conditions
imply the `post` conditions and make sure that the assertions match the code
implementation!

The `assert` statement in Java can be used to check your assumptions within your code. See the Sun Java learning exercise for a tutorial on using assert.

Here is an example from the CS161 class. Look at how many times and how carefully `assert` statements have been embedded to check what is going on.

```
public int egyptianMultiply(int left, int right){
    if (left < 0 || right < 0)
        return -1;
    else {
        assert left >= 0 && right >= 0;              // 1
        int a = left, b = right, p = 0;
        assert p + a*b == left * right;              // 2
        while (a != 0){
            assert a != 0 && p + a*b == left*right;  // 3
            p += (a % 2) * b;
            a /= 2;
            b *= 2;
            assert p + (a*b) == left*right;          // 4
        }
        assert a == 0;                               // 5
        assert p + a*b == left*right;                // 6
        assert p == left * right;                    // 7
        return p;
    }
}
```

assert will confirm that the boolean expressions evaluate to true. Assertion 1 is the precondition for the else block -- the code cannot get to that point, given the conditional, unless it is true. Assertions 3 and 4 check the loop invariant. The last set of asserts (5, 6 and 7) are separated out so that Java will check each one and indicate which did not hold as expected, if one does evaluate to false.

This shows one way in which program verification can become part of your standard practice of programming.