
Selected Homework Solutions – Unit 1

PSU CMPSC 465, Spring 2013

Exercise 2.1-3

Here, we work with the linear search, specified as follows:

LINEAR-SEARCH(A, v)
Input: A = <a1, a2, …, an> and a value v.
Output: an index i if there exists an i in 1..n s.t. v = A[i]; NIL, otherwise.

We can write pseudocode as follows:

LINEAR-SEARCH(A, v)
  i = 1
  while i ≤ A.length and A[i] ≠ v   // check elements of array until end or we find key
    i = i + 1
  if i == A.length + 1              // case that we searched to end of array, didn't find key
    return NIL
  else                              // case that we found key
    return i

Here is a loop invariant for the loop above:

At the start of the ith iteration of the while loop, A[1..i-1] doesn't contain value v.

Now we use the loop invariant to do a proof of correctness:

Initialization:

Before the first iteration of the loop, i = 1. The subarray A[1..i-1] is empty, so the loop invariant vacuously holds.

Maintenance:

For i ∈ ℤ s.t. 1 ≤ i ≤ A.length, consider iteration i. By the loop invariant, at the start of iteration i, A[1..i-1] doesn't contain v. The loop body is only executed when A[i] is not v and we have not exceeded A.length. So, when the ith iteration ends, A[1..i] will not contain value v. Put differently, at the start of the (i+1)st iteration, A[1..i-1] will once again not contain value v.

Termination:

There are two possible ways the loop terminates:

  • If there exists an index i such that A[i] == v, then the while loop will terminate at the end of the ith iteration. The loop invariant says A[1..i-1] doesn't contain v, which is true. And, in this case, i will not reach A.length + 1, so the algorithm returns i s.t. A[i] = v, which is correct.

  • Otherwise, the loop terminates when i = n + 1 (where n = A.length), which implies n = i - 1. By the loop invariant, A[1..i-1] is the entire array A[1..n], and it doesn't contain value v, so NIL will correctly be returned by the algorithm.

Note: Remember a few things from intro programming and from Epp:

  • Remember to think about which kind of loop to use for a problem. We don't know how many iterations the linear search loop will run until it's done, so we should use an indeterminate loop structure. (If we do, the proof is cleaner.)

  • As noted in Epp, the only way to get out of a loop should be by having the loop test fail (or, in the for case, the counter reach the end). Don't return or break out of a loop; proving the maintenance step becomes very tricky if you do.
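
To make the pseudocode concrete, here is a minimal Python transcription (a sketch: 0-based storage with a 1-based return index to match the specification; the sample arrays are made up for illustration):

    def linear_search(A, v):
        # Return the 1-based index i with v = A[i] (as in the spec), or None for NIL.
        i = 1
        while i <= len(A) and A[i - 1] != v:   # same loop test as the pseudocode
            i = i + 1
        if i == len(A) + 1:                    # searched to the end: key absent
            return None
        return i

    assert linear_search([31, 41, 59, 26], 59) == 3
    assert linear_search([31, 41, 59, 26], 7) is None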

Exercise 2.3-1

The figure below illustrates the operation of the bottom-up merge sort procedure on the array A = {3, 41, 52, 26, 38, 57, 9, 49}:

[Figure: pass 1 merges 1-item runs into (3, 41), (26, 52), (38, 57), (9, 49); pass 2 yields (3, 26, 41, 52) and (9, 38, 49, 57); pass 3 yields the sorted array (3, 9, 26, 38, 41, 49, 52, 57).]

The algorithm consists of merging pairs of 1-item sequences to form sorted sequences of length 2, merging pairs of sequences of length 2 to form sorted sequences of length 4, and so on, until two sequences of length n/2 are merged to form the final sorted sequence of length n.
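
Here is a small Python sketch of the bottom-up pass structure (list-based merging rather than the textbook's in-place MERGE; the input array is the one from the exercise):

    def merge(left, right):
        # Merge two sorted lists into one sorted list.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def bottom_up_merge_sort(A):
        # Start with n runs of length 1; each pass merges adjacent pairs of runs.
        runs = [[x] for x in A]
        while len(runs) > 1:
            runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                    for k in range(0, len(runs), 2)]
        return runs[0] if runs else []

    print(bottom_up_merge_sort([3, 41, 52, 26, 38, 57, 9, 49]))
    # [3, 9, 26, 38, 41, 49, 52, 57]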

Exercise 4.4-1

The recurrence is T(n) = 3T(⌊n/2⌋) + n. We use a recursion tree to determine an asymptotic upper bound on this recurrence. Because we know that floors and ceilings usually do not matter when solving recurrences, we create a recursion tree for the recurrence T(n) = 3T(n/2) + n. For convenience, we assume that n is an exact power of 2 so that all subproblem sizes are integers.

Because subproblem sizes decrease by a factor of 2 each time we go down one level, we eventually must reach a boundary condition T(1). To determine the depth of the tree, we find that the subproblem size for a node at depth i is n/2^i. Thus, the subproblem size hits n = 1 when n/2^i = 1 or, equivalently, when i = lg n. Thus, the tree has lg n + 1 levels (at depths 0, 1, 2, 3, …, lg n).

Next we determine the cost at each level of the tree. Each level has 3 times more nodes than the level above, so the number of nodes at depth i is 3^i. Because subproblem sizes reduce by a factor of 2 for each level we go down from the root, each node at depth i, for i = 0, 1, 2, 3, …, lg n − 1, has a cost of n/2^i. Multiplying, we see that the total cost over all nodes at depth i, for i = 0, 1, 2, 3, …, lg n − 1, is 3^i · (n/2^i) = (3/2)^i · n. The bottom level, at depth lg n, has 3^(lg n) = n^(lg 3) nodes, each contributing cost T(1), for a total cost of n^(lg 3) · T(1), which is Θ(n^(lg 3)), since we assume that T(1) is a constant.

Summing over all levels, the total cost is at most

  Σ_{i=0}^{lg n − 1} (3/2)^i · n + Θ(n^(lg 3)) = n · ((3/2)^(lg n) − 1)/((3/2) − 1) + Θ(n^(lg 3)) = O(n^(lg 3)),

since (3/2)^(lg n) · n = 3^(lg n) = n^(lg 3).

[Recursion tree (figure): the root costs n; its 3 children each cost n/2; the 9 nodes at depth 2 each cost n/4; and so on for lg n + 1 levels. The per-level costs are n, (3/2)n, (3/2)²n, …, and the bottom level consists of n^(lg 3) leaves, each costing T(1). Total: O(n^(lg 3)).]
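
As a numeric sanity check of the Θ(n^(lg 3)) bound, the sketch below evaluates the recurrence directly, assuming a base cost T(1) = 1 (any constant gives the same growth rate). The ratio T(n)/n^(lg 3) settles toward a constant (3 with this base case, since then T(n) = 3·n^(lg 3) − 2n exactly):

    from functools import lru_cache
    from math import log2

    @lru_cache(maxsize=None)
    def T(n):
        # T(n) = 3T(n/2) + n with T(1) = 1, for n an exact power of 2.
        return 1 if n == 1 else 3 * T(n // 2) + n

    for k in (10, 14, 18):
        n = 2 ** k
        print(n, T(n) / n ** log2(3))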

Exercise 4.5-1

a) Use the master method to give tight asymptotic bounds for the recurrence T(n) = 2T(n/4) + 1.

Solution:

For this recurrence, we have a = 2, b = 4, f(n) = 1, and thus n^(log_b a) = n^(log_4 2) = n^(1/2). Since f(n) = 1 = O(n^(1/2 − ε)), where ε = 0.2, we can apply case 1 of the master theorem and conclude that the solution is T(n) = Θ(n^(log_4 2)) = Θ(n^(1/2)) = Θ(√n).

b) Use the master method to give tight asymptotic bounds for the recurrence T(n) = 2T(n/4) + √n.

Solution:

For this recurrence, we have a = 2, b = 4, f(n) = √n, and thus n^(log_b a) = n^(log_4 2) = n^(1/2) = √n. Since f(n) = Θ(n^(log_4 2)), we can apply case 2 of the master theorem and conclude that the solution is T(n) = Θ(√n · lg n).

c) Use the master method to give tight asymptotic bounds for the recurrence T(n) = 2T(n/4) + n.

Solution:

For this recurrence, we have a = 2, b = 4, f(n) = n, and thus n^(log_b a) = n^(1/2). Since f(n) = Ω(n^(1/2 + ε)), where ε = 0.5, we can apply case 3 of the master theorem if we can show that the regularity condition holds for f(n). To show the regularity condition, we need to prove that a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n. If we can prove this, we can conclude that T(n) = Θ(f(n)) by case 3 of the master theorem.

Proof of the regularity condition:

a·f(n/b) = 2(n/4) = n/2 ≤ c·f(n) for c = 0.7 and n ≥ 2.

So, we can conclude that the solution is T(n) = Θ(f(n)) = Θ(n).

d) Use the master method to give tight asymptotic bounds for the recurrence T(n) = 2T(n/4) + n².

Solution:

For this recurrence, we have a = 2, b = 4, f(n) = n², and thus n^(log_b a) = n^(1/2). Since f(n) = Ω(n^(1/2 + ε)), where ε = 1.5, we can apply case 3 of the master theorem if we can show that the regularity condition holds for f(n).

Proof of the regularity condition:

a·f(n/b) = 2(n/4)² = (1/8)n² ≤ c·f(n) for c = 0.5 and n ≥ 4.

So, we can conclude that the solution is T(n) = Θ(n²).
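
The same style of numeric check works for all four parts. The sketch below assumes T(1) = 1 and n an exact power of b = 4, and prints T(n) divided by the claimed bound; each ratio should settle near a constant:

    from functools import lru_cache
    from math import sqrt, log2

    def make_T(a, b, f):
        # T(n) = a*T(n//b) + f(n), with T(1) = 1.
        @lru_cache(maxsize=None)
        def T(n):
            return 1 if n <= 1 else a * T(n // b) + f(n)
        return T

    n = 4 ** 10
    cases = [("a: Theta(sqrt(n))",      lambda m: 1.0,          sqrt(n)),
             ("b: Theta(sqrt(n) lg n)", lambda m: sqrt(m),      sqrt(n) * log2(n)),
             ("c: Theta(n)",            lambda m: float(m),     float(n)),
             ("d: Theta(n^2)",          lambda m: float(m)**2,  float(n)**2)]
    for label, f, bound in cases:
        print(label, make_T(2, 4, f)(n) / bound)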

Selected Homework Solutions – Unit 2

CMPSC 465

Exercise 6.1-1

Problem: What are the minimum and maximum numbers of elements in a heap of height h?

Since a heap is an almost-complete binary tree (complete at all levels except possibly the lowest), it has at most 1 + 2 + 4 + … + 2^h = 2^(h+1) − 1 elements (if it is complete) and at least 2^h − 1 + 1 = 2^h elements (if the lowest level has just 1 element and the other levels are complete).
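
A two-line check tabulates both bounds for small h:

    # Heap of height h: minimum 2^h elements, maximum 2^(h+1) - 1 elements.
    for h in range(6):
        print(h, 2 ** h, 2 ** (h + 1) - 1)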

Exercise 6.1-3

Problem: Show that in any subtree of a max-heap, the root of the subtree contains the largest value occurring anywhere in that subtree.

To prove:

For any subtree rooted at node k of a max-heap A[1, 2, …, n], the property P(k):

The node k of the subtree rooted at k contains the largest value occurring anywhere in that subtree.

Proof:

Base Case:

When k ∈ {⌊n/2⌋ + 1, …, n}, k is a leaf node of the max-heap, since ⌊n/2⌋ is the index of the last parent, and the subtree rooted at k contains just one node. Thus, node k contains the largest value in that subtree.

Inductive Step:

Let k be an internal node of the max-heap, and assume that for all i s.t. k < i ≤ n, P(i) is true, i.e.,

The node i of the subtree rooted at i contains the largest value occurring anywhere in that subtree. [inductive hypothesis]

Now let us consider node k:

1. k's left child 2k and right child 2k + 1 contain the largest values of k's left and right subtrees, respectively (by the inductive hypothesis, since P(i) is true for all i s.t. k < i ≤ n).

2. k's value is larger than the values of its left child 2k and right child 2k + 1 (by the max-heap property).

So, we can conclude that node k contains the largest value in the subtree rooted at k.

Thus, by the principle of strong mathematical induction, P(k) is true for all nodes in a max-heap.
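
A recursive checker makes the claim concrete. This sketch verifies the property on a sample max-heap, using the same 1-based indexing (children of k at 2k and 2k + 1) as the proof:

    def subtree_max(A, k, n):
        # Largest value in the subtree of the 1-indexed heap A[1..n] rooted at k.
        if k > n:
            return float("-inf")
        return max(A[k - 1], subtree_max(A, 2 * k, n), subtree_max(A, 2 * k + 1, n))

    A = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]   # a valid max-heap
    assert all(subtree_max(A, k, len(A)) == A[k - 1] for k in range(1, len(A) + 1))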

Exercise 6.1-4

Problem: Where in a max-heap might the smallest element reside, assuming that all elements are distinct?

The smallest element can only be one of the leaf nodes. If it were an internal node, it would be the root of a subtree containing other elements and, by the max-heap property, would be larger than every element in that subtree, which contradicts the fact that it is the smallest element.

Exercise 6.4-3

Problem: What is the running time of HEAPSORT on an array A of length n that is already sorted in increasing order? What about decreasing order?

The running time of HEAPSORT on an array of length n that is already sorted in increasing order is Θ(n lg n), because even though it is already sorted, it will be transformed back into a heap and sorted again from scratch.

The running time of HEAPSORT on an array of length n that is sorted in decreasing order will also be Θ(n lg n). This occurs because even though the heap will be built in linear time, every time the maximum element is removed and HEAPIFY is called, the fixup could traverse the full height of the tree.

Exercise 6.5-1

Problem: Illustrate the operation of HEAP-EXTRACT-MAX on the heap A = <15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1>.

[Figure omitted.] HEAP-EXTRACT-MAX returns 15; the last element, 1, replaces the root and MAX-HEAPIFY sifts it down, leaving the heap <13, 12, 9, 5, 6, 8, 7, 4, 0, 1, 2>.

Exercise 6.5-2

Problem: Illustrate the operation of MAX-HEAP-INSERT(A, 10) on the heap A = <15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1>.

[Figure omitted.] The new key 10 is appended at the end and floated up past 8 and 9, giving <15, 13, 10, 5, 12, 9, 7, 4, 0, 6, 2, 1, 8>. A runnable check follows.
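
The original solutions are figures. As a stand-in, the sketch below reproduces the resulting arrays with Python's heapq module, negating keys to simulate a max-heap (heapq follows the same parent/child index arithmetic, and for this input it produces the same arrays as the CLRS procedures):

    import heapq

    def as_max_heap(A):
        h = [-x for x in A]      # negate keys: heapq is a min-heap
        heapq.heapify(h)
        return h

    A = [15, 13, 9, 5, 12, 8, 7, 4, 0, 6, 2, 1]

    h = as_max_heap(A)
    print(-heapq.heappop(h), [-x for x in h])   # 15, then the re-heapified array

    h = as_max_heap(A)
    heapq.heappush(h, -10)                      # MAX-HEAP-INSERT(A, 10)
    print([-x for x in h])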

Exercise 7.1-3

Problem: Give a brief argument that the running time of PARTITION on a subarray of size n is Θ(n).

Since each iteration of the for loop involves a constant number of operations, and there are n − 1 iterations in total (plus constant work outside the loop), the running time is Θ(n).
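
For reference, here is the Lomuto PARTITION in Python (0-indexed; the sample array is the classic CLRS example). The single for loop does constant work per iteration and runs once per element of the subarray other than the pivot:

    def partition(A, p, r):
        # Partition A[p..r] around pivot x = A[r]; return the pivot's final index.
        x = A[r]
        i = p - 1
        for j in range(p, r):          # r - p iterations of constant work
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]
        return i + 1

    A = [2, 8, 7, 1, 3, 5, 6, 4]
    print(partition(A, 0, len(A) - 1), A)   # 3 [2, 1, 3, 4, 7, 5, 6, 8]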

Exercise 7.2-4

Problem: Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check number. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of converting time-of-transaction ordering to check-number ordering is therefore the problem of sorting almost-sorted input. Argue that the procedure INSERTION-SORT would tend to beat the procedure QUICKSORT on this problem.

INSERTION-SORT runs in Θ(n) time on perfectly sorted input, so it takes close to Θ(n) time to sort almost-sorted input. QUICKSORT, however, requires close to Θ(n²) time, recalling that it takes Θ(n²) time to sort perfectly sorted input: when we pick the last element as the pivot, it is usually the largest one, so PARTITION produces one subproblem with close to n − 1 elements and one with 0 elements. Since the cost of the PARTITION procedure of QUICKSORT is Θ(n), the running time of QUICKSORT satisfies the recurrence T(n) = T(n − 1) + Θ(n). In another problem, we use the substitution method to prove that the recurrence T(n) = T(n − 1) + Θ(n) has the solution T(n) = Θ(n²). So we use INSERTION-SORT rather than QUICKSORT in this situation, when the input is almost sorted. A comparison-counting sketch follows.
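
This sketch counts key comparisons under two assumptions: quicksort picks the last element as pivot, and the almost-sorted input is a sorted list with one pair of elements swapped:

    import sys
    sys.setrecursionlimit(10000)    # quicksort recurses ~n deep on sorted input

    def insertion_sort_comparisons(A):
        A, cmps = A[:], 0
        for j in range(1, len(A)):
            key, i = A[j], j - 1
            while i >= 0:
                cmps += 1
                if A[i] <= key:
                    break
                A[i + 1] = A[i]; i -= 1
            A[i + 1] = key
        return cmps

    def quicksort_comparisons(A):
        A, cmps = A[:], 0
        def qs(p, r):
            nonlocal cmps
            if p >= r:
                return
            x, i = A[r], p - 1          # last element as pivot
            for j in range(p, r):
                cmps += 1
                if A[j] <= x:
                    i += 1
                    A[i], A[j] = A[j], A[i]
            A[i + 1], A[r] = A[r], A[i + 1]
            qs(p, i); qs(i + 2, r)
        qs(0, len(A) - 1)
        return cmps

    nearly_sorted = list(range(2000))
    nearly_sorted[100], nearly_sorted[1900] = nearly_sorted[1900], nearly_sorted[100]
    print(insertion_sort_comparisons(nearly_sorted))   # close to n
    print(quicksort_comparisons(nearly_sorted))        # close to n^2/2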

Exercise 8.4-2

Problem: Explain why the worst-case running time for bucket sort is Θ(n²). What simple change to the algorithm preserves its linear average-case running time and makes its worst-case running time O(n lg n)?

The worst case for bucket sort occurs when all inputs fall into a single bucket, for example. Since we use INSERTION-SORT for sorting buckets and INSERTION-SORT has a worst case of Θ(n²), the worst-case running time for bucket sort is Θ(n²). By using an algorithm like MERGE-SORT, with a worst-case running time of O(n lg n), instead of INSERTION-SORT for sorting buckets, we can ensure that the worst case of bucket sort is O(n lg n) without affecting the average-case running time.
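
A sketch of the modified algorithm: bucket sort over inputs in [0, 1), where each bucket is sorted by Python's built-in sorted (Timsort, worst case O(n lg n)) standing in for MERGE-SORT; the input values are illustrative:

    def bucket_sort(A, bucket_sorter=sorted):
        # n buckets for values in [0, 1); a worst-case O(n lg n) bucket_sorter
        # keeps the whole sort O(n lg n) even if every key lands in one bucket.
        n = len(A)
        buckets = [[] for _ in range(n)]
        for x in A:
            buckets[int(n * x)].append(x)
        out = []
        for b in buckets:
            out.extend(bucket_sorter(b))
        return out

    print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))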

Exercise 8.4-3

Problem: Let X be a random variable that is equal to the number of heads in two flips of a fair coin. What is E[X²]? What is E²[X]?

Write X = X₁ + X₂, where Xᵢ is the indicator random variable that flip i comes up heads. Then E[Xᵢ] = E[Xᵢ²] = 1² · P(head in one flip) + 0² · P(tail in one flip) = 1/2, and E[X₁X₂] = E[X₁] · E[X₂] = 1/2 · 1/2 = 1/4, as the two flips are independent. So

E[X²] = E[X₁²] + 2E[X₁X₂] + E[X₂²] = 1/2 + 2(1/4) + 1/2 = 3/2, and

E²[X] = (E[X])² = (E[X₁] + E[X₂])² = (1/2 + 1/2)² = 1.
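
A brute-force enumeration over the four equally likely outcomes confirms both values:

    from itertools import product

    outcomes = list(product([0, 1], repeat=2))        # two fair flips: 4 outcomes
    EX  = sum(a + b for a, b in outcomes) / 4         # E[X] = 1
    EX2 = sum((a + b) ** 2 for a, b in outcomes) / 4  # E[X^2] = 3/2
    print(EX2, EX ** 2)                               # 1.5 1.0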

Homework Solutions – Unit 3, Chapter 11

CMPSC 465 Spring 2013

Exercise 11.1-1

Suppose that a dynamic set S is represented by a direct-address table T of length m. Describe a procedure that finds the maximum element of S. What is the worst-case performance of your procedure?

Solution:

We can do a linear search to find the maximum element in S as follows:

Pre-condition: table T is not empty; m ∈ ℤ, m ≥ 1.
Post-condition: FCTVAL == maximum value of the dynamic set stored in T.

FindMax(T, m)
  max = −∞
  for i = 1 to m
    if T[i] != NIL && max < T[i]
      max = T[i]
  return max

In the worst case, we must search the entire table, so the procedure takes O(m) time.

Exercise 11.2-3

Professor Marley hypothesizes that he can obtain substantial performance gains by modifying the chaining scheme to keep each list in sorted order. How does the professor's modification affect the running time for successful searches, unsuccessful searches, insertions, and deletions?

Solution:

  • Successful searches: Θ(1 + α), which is identical to the original running time. The element we search for is equally likely to be any of the elements in the hash table, and the proof of the running time for successful searches is similar to what we did in lecture.

  • Unsuccessful searches: about 1/2 of the original running time, but still Θ(1 + α), if we assume that the value of the element we search for is equally likely to fall between any two consecutive elements of the sorted chain. Once we find a larger value, we can stop searching, so on average we scan half the chain. The proof is similar to what we did in lecture.

  • Insertions: Θ(1 + α), compared to the original running time of Θ(1). This is because we need to find the right location, instead of the head, to insert the element so that the list remains sorted. An insertion behaves like an unsuccessful search in this case.

  • Deletions: Θ(1 + α), same as successful searches.

(A runnable sketch of the scheme follows.)
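
A small sketch of the professor's scheme (h(k) = k mod m is assumed for concreteness). Insertion and unsuccessful search both walk a chain only until the first value ≥ k, which is where the halved expected scan length comes from:

    class SortedChainTable:
        def __init__(self, m):
            self.m, self.slots = m, [[] for _ in range(m)]

        def insert(self, k):
            chain = self.slots[k % self.m]
            i = 0
            while i < len(chain) and chain[i] < k:   # like an unsuccessful search
                i += 1
            chain.insert(i, k)

        def search(self, k):
            for x in self.slots[k % self.m]:
                if x >= k:                 # sortedness lets us stop early
                    return x == k
            return False

    t = SortedChainTable(9)
    for k in (5, 28, 19, 15, 20, 33, 12, 17, 10):
        t.insert(k)
    print(t.search(28), t.search(27))      # True False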

Exercise 11.3-3

Consider a version of the division method in which h(k) = k mod m, where m = 2^p − 1 and k is a character string interpreted in radix 2^p. Show that if we can derive string x from string y by permuting its characters, then x and y hash to the same value. Give an example of an application in which this property would be undesirable in a hash function.

Solution:

First, we observe that we can generate any permutation by a sequence of interchanges of pairs of characters. One can prove this property formally, but informally, consider that both heapsort and quicksort work by interchanging pairs of elements and that they have to be able to produce any permutation of their input array. Thus, it suffices to show that if string x can be derived from string y by interchanging a single pair of characters, then x and y hash to the same value.

Let x_i be the ith character in x, and similarly y_i. We can interpret x in radix 2^p as Σ_{i=0}^{n−1} x_i·2^(ip), and interpret y as Σ_{i=0}^{n−1} y_i·2^(ip). So

h(x) = (Σ_{i=0}^{n−1} x_i·2^(ip)) mod (2^p − 1), and similarly h(y) = (Σ_{i=0}^{n−1} y_i·2^(ip)) mod (2^p − 1).

Suppose that x and y are identical strings of n characters except that the characters in positions a and b are interchanged:

x_a = y_b and y_a = x_b.  (1)

Without loss of generality, let a > b. We have:

h(x) − h(y) = (Σ_{i=0}^{n−1} x_i·2^(ip)) mod (2^p − 1) − (Σ_{i=0}^{n−1} y_i·2^(ip)) mod (2^p − 1).  (2)

Since 0 ≤ h(x), h(y) < 2^p − 1, we have that −(2^p − 1) < h(x) − h(y) < 2^p − 1. If we show that (h(x) − h(y)) mod (2^p − 1) = 0, then h(x) = h(y). To prove (h(x) − h(y)) mod (2^p − 1) = 0, we have:

(h(x) − h(y)) mod (2^p − 1)
= ((Σ x_i·2^(ip)) mod (2^p − 1) − (Σ y_i·2^(ip)) mod (2^p − 1)) mod (2^p − 1)   [by (2)]
= (Σ x_i·2^(ip) − Σ y_i·2^(ip)) mod (2^p − 1)   [by the congruence relation in footnote 1]
= ((x_a·2^(ap) + x_b·2^(bp)) − (y_a·2^(ap) + y_b·2^(bp))) mod (2^p − 1)   [as x and y are identical strings of n characters except that the chars. in positions a and b are interchanged]
= ((x_a·2^(ap) + x_b·2^(bp)) − (x_b·2^(ap) + x_a·2^(bp))) mod (2^p − 1)   [as x_a = y_b and x_b = y_a; see (1)]
= ((x_a − x_b)·2^(ap) + (x_b − x_a)·2^(bp)) mod (2^p − 1)   [by combining like terms]
= ((x_a − x_b)·2^(ap) − (x_a − x_b)·2^(bp)) mod (2^p − 1)   [as (x_b − x_a) = −(x_a − x_b)]
= ((x_a − x_b)(2^(ap) − 2^(bp))) mod (2^p − 1)   [by factoring out (x_a − x_b)]
= ((x_a − x_b)(2^(ap)·(2^(bp)/2^(bp)) − 2^(bp))) mod (2^p − 1)   [by multiplying by 2^(bp)/2^(bp) = 1]
= ((x_a − x_b)·2^(bp)·(2^((a−b)p) − 1)) mod (2^p − 1)   [by factoring out 2^(bp)]
= ((x_a − x_b)·2^(bp)·(Σ_{j=0}^{a−b−1} 2^(jp))·(2^p − 1)) mod (2^p − 1)   [by substituting 2^((a−b)p) − 1 = (Σ_{j=0}^{a−b−1} 2^(jp))·(2^p − 1); see footnote 2]
= 0   [since one factor is 2^p − 1]

Footnote 1: Consider the congruence relation (m₁ ∘ m₂) mod n = ((m₁ mod n) ∘ (m₂ mod n)) mod n, where ∘ is +, −, or ×.

Footnote 2: Consider the geometric series Σ_{j=0}^{k−1} 2^(jp) = (2^(kp) − 1)/(2^p − 1) and multiply both sides by 2^p − 1 to get 2^((a−b)p) − 1 = (Σ_{j=0}^{a−b−1} 2^(jp))·(2^p − 1).

As an example of an application in which this property would be undesirable, consider a hash table of English words: anagrams such as "stop", "pots", and "tops" would all hash to the same slot, causing needless collisions.
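
A quick numeric check: with m = 2^p − 1, every positional weight 2^(ip) ≡ 1 (mod m), so the hash reduces to the sum of the character codes mod m, and any permutation of a string collides with it (the anagram strings are illustrative):

    def radix_hash(s, p, m):
        # Interpret string s as a number in radix 2**p, then reduce mod m.
        k = 0
        for ch in s:
            k = (k << p) + ord(ch)
        return k % m

    p = 8
    m = 2 ** p - 1
    print(radix_hash("stop", p, m),
          radix_hash("pots", p, m),
          radix_hash("tops", p, m))   # all three values are equal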

Exercise 11.4-1

Consider inserting the keys 10, 22, 31, 4, 15, 28, 17, 88, 59 into a hash table of length m = 11 using open addressing with the auxiliary hash function h'(k) = k. Illustrate the result of inserting these keys using linear probing, using quadratic probing with c1 = 1 and c2 = 3, and using double hashing with h'(k) = k and h2'(k) = 1 + (k mod (m − 1)).

Solution:

Linear Probing

With linear probing, we use the hash function h(k, i) = (h'(k) + i) mod m = (k + i) mod m. Consider hashing each of the keys in turn:

1. Hashing 10: h(10, 0) = (10 + 0) mod 11 = 10. Thus we have T[10] = 10.
2. Hashing 22: h(22, 0) = (22 + 0) mod 11 = 0. Thus we have T[0] = 22.
3. Hashing 31: h(31, 0) = (31 + 0) mod 11 = 9. Thus we have T[9] = 31.
4. Hashing 4: h(4, 0) = (4 + 0) mod 11 = 4. Thus we have T[4] = 4.
5. Hashing 15: h(15, 0) = (15 + 0) mod 11 = 4, collision!
   h(15, 1) = (15 + 1) mod 11 = 5. Thus we have T[5] = 15.
6. Hashing 28: h(28, 0) = (28 + 0) mod 11 = 6. Thus we have T[6] = 28.
7. Hashing 17: h(17, 0) = (17 + 0) mod 11 = 6, collision!
   h(17, 1) = (17 + 1) mod 11 = 7. Thus we have T[7] = 17.
8. Hashing 88: h(88, 0) = (88 + 0) mod 11 = 0, collision!
   h(88, 1) = (88 + 1) mod 11 = 1. Thus we have T[1] = 88.
9. Hashing 59: h(59, 0) = (59 + 0) mod 11 = 4, collision!
   h(59, 1) = (59 + 1) mod 11 = 5, collision!
   h(59, 2) = (59 + 2) mod 11 = 6, collision!
   h(59, 3) = (59 + 3) mod 11 = 7, collision!
   h(59, 4) = (59 + 4) mod 11 = 8. Thus we have T[8] = 59.

The final hash table is:

index:  0   1   2   3   4   5   6   7   8   9   10
key:    22  88  -   -   4   15  28  17  59  31  10

Quadratic Probing

With quadratic probing and c1 = 1, c2 = 3, we use the hash function h(k, i) = (h'(k) + i + 3i²) mod m = (k + i + 3i²) mod m. Consider hashing each of the keys in turn:

1. Hashing 10: h(10, 0) = (10 + 0 + 0) mod 11 = 10. Thus we have T[10] = 10.
2. Hashing 22: h(22, 0) = (22 + 0 + 0) mod 11 = 0. Thus we have T[0] = 22.
3. Hashing 31: h(31, 0) = (31 + 0 + 0) mod 11 = 9. Thus we have T[9] = 31.
4. Hashing 4: h(4, 0) = (4 + 0 + 0) mod 11 = 4. Thus we have T[4] = 4.
5. Hashing 15: h(15, 0) = (15 + 0 + 0) mod 11 = 4, collision!
   h(15, 1) = (15 + 1 + 3) mod 11 = 8. Thus we have T[8] = 15.
6. Hashing 28: h(28, 0) = (28 + 0 + 0) mod 11 = 6. Thus we have T[6] = 28.
7. Hashing 17: h(17, 0) = (17 + 0 + 0) mod 11 = 6, collision!
   h(17, 1) = (17 + 1 + 3) mod 11 = 10, collision!
   h(17, 2) = (17 + 2 + 12) mod 11 = 9, collision!
   h(17, 3) = (17 + 3 + 27) mod 11 = 3. Thus we have T[3] = 17.
8. Hashing 88: h(88, 0) = (88 + 0 + 0) mod 11 = 0, collision!
   h(88, 1) = (88 + 1 + 3) mod 11 = 4, collision!
   h(88, 2) = (88 + 2 + 12) mod 11 = 3, collision!
   h(88, 3) = (88 + 3 + 27) mod 11 = 8, collision!
   h(88, 4) = (88 + 4 + 48) mod 11 = 8, collision!
   h(88, 5) = (88 + 5 + 75) mod 11 = 3, collision!
   h(88, 6) = (88 + 6 + 108) mod 11 = 4, collision!
   h(88, 7) = (88 + 7 + 147) mod 11 = 0, collision!
   h(88, 8) = (88 + 8 + 192) mod 11 = 2. Thus we have T[2] = 88.
9. Hashing 59: h(59, 0) = (59 + 0 + 0) mod 11 = 4, collision!
   h(59, 1) = (59 + 1 + 3) mod 11 = 8, collision!
   h(59, 2) = (59 + 2 + 12) mod 11 = 7. Thus we have T[7] = 59.

The final hash table is:

index:  0   1   2   3   4   5   6   7   8   9   10
key:    22  -   88  17  4   -   28  59  15  31  10

Double Hashing

With double hashing, we use the hash function h(k, i) = (h'(k) + i·h2'(k)) mod m = (k + i·(1 + (k mod (m − 1)))) mod m. Consider hashing each of the keys in turn:

1. Hashing 10: h(10, 0) = (10 + 0) mod 11 = 10. Thus we have T[10] = 10.
2. Hashing 22: h(22, 0) = (22 + 0) mod 11 = 0. Thus we have T[0] = 22.
3. Hashing 31: h(31, 0) = (31 + 0) mod 11 = 9. Thus we have T[9] = 31.
4. Hashing 4: h(4, 0) = (4 + 0) mod 11 = 4. Thus we have T[4] = 4.
5. Hashing 15: h(15, 0) = (15 + 0) mod 11 = 4, collision!
   h(15, 1) = (15 + 1·h2'(15)) mod 11 = 10, collision!
   h(15, 2) = (15 + 2·h2'(15)) mod 11 = 5. Thus we have T[5] = 15.
6. Hashing 28: h(28, 0) = (28 + 0) mod 11 = 6. Thus we have T[6] = 28.
7. Hashing 17: h(17, 0) = (17 + 0) mod 11 = 6, collision!
   h(17, 1) = (17 + 1·h2'(17)) mod 11 = 3. Thus we have T[3] = 17.
8. Hashing 88: h(88, 0) = (88 + 0) mod 11 = 0, collision!
   h(88, 1) = (88 + 1·h2'(88)) mod 11 = 9, collision!
   h(88, 2) = (88 + 2·h2'(88)) mod 11 = 7. Thus we have T[7] = 88.
9. Hashing 59: h(59, 0) = (59 + 0) mod 11 = 4, collision!
   h(59, 1) = (59 + 1·h2'(59)) mod 11 = 3, collision!
   h(59, 2) = (59 + 2·h2'(59)) mod 11 = 2. Thus we have T[2] = 59.

The final hash table is:

index:  0   1   2   3   4   5   6   7   8   9   10
key:    22  -   59  17  4   15  28  88  -   31  10
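
All three tables can be reproduced with a short simulation (a sketch; probe(k, i) returns the unreduced probe value, and None marks an empty slot):

    def insert_all(keys, m, probe):
        # Open addressing: for each key, try the probe sequence i = 0, 1, 2, ...
        T = [None] * m
        for k in keys:
            for i in range(m):
                j = probe(k, i) % m
                if T[j] is None:
                    T[j] = k
                    break
        return T

    keys, m = [10, 22, 31, 4, 15, 28, 17, 88, 59], 11
    h2 = lambda k: 1 + (k % (m - 1))
    print(insert_all(keys, m, lambda k, i: k + i))              # linear probing
    print(insert_all(keys, m, lambda k, i: k + i + 3 * i * i))  # quadratic, c1=1, c2=3
    print(insert_all(keys, m, lambda k, i: k + i * h2(k)))      # double hashing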

Exercise 13.1-2

Draw the red-black tree that results after TREE-INSERT is called on the tree shown in the figure below with key 36. If the inserted node is colored red, is the resulting tree a red-black tree? What if it is colored black?

Solution:

If the node with key 36 is inserted and colored red, the resulting tree violates the following red-black property:

A red node in a red-black tree cannot have a red node as its child.

So the resulting tree is not a red-black tree.

If the node with key 36 is inserted and colored black, the resulting tree violates the following red-black property:

For each node, all paths from the node to descendant leaves contain the same number of black nodes (e.g., consider the node with key 30).

So the resulting tree is not a red-black tree either.

[Figures omitted: the original tree and the trees after inserting key 36 colored red and colored black.]

Exercise 13.1-5

Show that the longest simple path from a node x in a red-black tree to a descendant leaf has length at most twice that of the shortest simple path from node x to a descendant leaf.

Proof:

On the longest path, at least every other node is black; on the shortest path, at most every node is black. Since the two paths contain equal numbers of black nodes, the length of the longest path is at most twice the length of the shortest path.

We can say this more precisely, as follows:

Since every path contains bh(x) black nodes, even the shortest path from x to a descendant leaf has length at least bh(x). By definition, the longest path from x to a descendant leaf has length height(x). Since the longest path has bh(x) black nodes and at least half the nodes on the longest path are black (by property 4 in CLRS), bh(x) ≥ height(x)/2, so

length of longest path = height(x) ≤ 2·bh(x) ≤ twice the length of the shortest path.

Exercise 13.2-1

Write pseudocode for RIGHT-ROTATE.

Solution:

The pseudocode for RIGHT-ROTATE is shown below:

RIGHT-ROTATE(T, x)
  y = x.left               // set y
  x.left = y.right         // turn y's right subtree into x's left subtree
  if y.right ≠ NIL
    y.right.p = x
  y.p = x.p                // link x's parent to y
  if x.p == NIL
    T.root = y
  else if x == x.p.right
    x.p.right = y
  else
    x.p.left = y
  y.right = x              // put x on y's right
  x.p = y
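
A direct Python transcription of the pseudocode (the Node/Tree layout here is assumed for illustration; in CLRS, NIL would be the sentinel T.nil rather than None):

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right, self.p = key, left, right, None
            for c in (left, right):
                if c:
                    c.p = self

    class Tree:
        def __init__(self, root):
            self.root = root

    def right_rotate(T, x):
        y = x.left                   # set y
        x.left = y.right             # turn y's right subtree into x's left subtree
        if y.right is not None:
            y.right.p = x
        y.p = x.p                    # link x's parent to y
        if x.p is None:
            T.root = y
        elif x is x.p.right:
            x.p.right = y
        else:
            x.p.left = y
        y.right = x                  # put x on y's right
        x.p = y

    y = Node(5, Node(3), Node(7))
    x = Node(10, y, Node(12))
    T = Tree(x)
    right_rotate(T, x)
    print(T.root.key, T.root.right.key)   # 5 10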