Title: CS 220: Data Structures and Algorithms.
Description: A data structure is a named location that can be used to store and organize data, and an algorithm is a collection of steps to solve a particular problem. Learning data structures and algorithms allows us to write efficient and optimized computer programs.
Kalinga Institute of Industrial Technology
Data structures are the programmatic way of storing data so that it can be
used efficiently
...
This tutorial will give you a good
understanding of data structures, which is needed to understand the complexity of
enterprise-level applications and the need for algorithms and data structures
...
For this, a computer program may need to store data, retrieve data, and
perform computations on the data
...
And, an algorithm is a collection of steps to solve a particular problem
...
Applications of Data Structure and Algorithms
An algorithm is a step-by-step procedure that defines a set of instructions to
be executed in a certain order to get the desired output
...
...
From the data structure point of view, the following are some important categories
of algorithms −
Search − Algorithm to search an item in a data structure
...
Insert − Algorithm to insert item in a data structure
...
Delete − Algorithm to delete an existing item from a data structure
...
Atomic − Definition should define a single concept
...
Accurate − Definition should be unambiguous
...
Data Object
A Data Object represents an object that has data
...
which determines the values that can be used with the corresponding type of
data, and the type of operations that can be performed on the corresponding type
of data
...
For example, most of the languages provide the following
built-in data types
...
These
data types are normally built by the combination of primary or built-in data
types and associated operations on them
...
The
particular data structure chosen largely depends on the frequency of the
operation that needs to be performed on the data structure
...
Most of the data structures make use of arrays to
implement their algorithms
...
Element − Each item stored in an array is called an element
...
Array Representation
Arrays can be declared in various ways in different languages
...
As per the above illustration, following are the important points to be
considered
...
Array length is 10 which means it can store 10 elements
...
For example, we can
fetch the element at index 6, whose value is 9
...
Traverse − Prints all the array elements one by one
...
Deletion − Deletes an element at the given index
Search − Searches an element using the given index or by the value
Update − Updates an element at the given index
...
Data Type   Default Value
bool        false
char        0
int         0
float       0.0f
void        −
wchar_t     0
Traverse Operation
This operation is to traverse through the elements of an array
Traverse Operation
This operation is to traverse through the elements of an array
...
#include <stdio.h>
int main() {
   int LA[] = {1,3,5,7,8};
   int item = 10, k = 3, n = 5;
   int i = 0, j = n;
   printf("The original array elements are :\n");
   for(i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   return 0;
}
When we compile and execute the above program, it produces the following
result −
Output
The original array elements are :
LA[0] = 1
LA[1] = 3
LA[2] = 5
LA[3] = 7
LA[4] = 8
Insertion Operation
Insert operation is to insert one or more data elements into an array
...
Here, we see a practical implementation of insertion operation, where we add
data at the end of the array −
Example
Following is the implementation of the above algorithm −
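The insertion code itself is elided from this extract. A minimal sketch of appending an item at the end of an array, in the spirit of the surrounding examples, could look like this (the function name insertAtEnd and its signature are illustrative, not from the notes):

```c
/* Append item at the end of array LA of current length n and return the
   new length. Assumes LA has capacity for at least n + 1 elements. */
int insertAtEnd(int LA[], int n, int item) {
   LA[n] = item;    /* write into the first unused slot */
   return n + 1;    /* the array now holds one more element */
}
```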
Algorithm
Consider LA is a linear array with N elements and K is a positive integer such
that K <= N. Following is the algorithm to delete the element available at the
K-th position of LA.
1. Start
2. Set J = K
3. Repeat steps 4 and 5 while J < N
4. Set LA[J] = LA[J + 1]
5. Set J = J + 1
6. Set N = N - 1
7. Stop
#include <stdio.h>
int main() {
   int LA[] = {1,3,5,7,8};
   int k = 3, n = 5;
   int i, j;
   printf("The original array elements are :\n");
   for(i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   j = k;
   while(j < n) {
      LA[j-1] = LA[j];
      j = j + 1;
   }
   n = n - 1;
   printf("The array elements after deletion :\n");
   for(i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   return 0;
}
When we compile and execute the above program, it produces the following
result −
Output
The original array elements are :
LA[0] = 1
LA[1] = 3
LA[2] = 5
LA[3] = 7
LA[4] = 8
The array elements after deletion :
LA[0] = 1
LA[1] = 3
LA[2] = 7
LA[3] = 8
Search Operation
You can perform a search for an array element based on its value or its index
...
Following is the algorithm to find an element with a value of ITEM
using sequential search
...
1. Start
2. Set J = 0
3. Repeat steps 4 and 5 while J < N
4. If LA[J] is equal to ITEM, then go to step 6
5. Set J = J + 1
6. Print J, ITEM
7. Stop
Example
Following is the implementation of the above algorithm −
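The search code itself is elided from this extract. A minimal sketch of the sequential search described above could look like this (linearSearch is an illustrative name; it returns the 0-based index, or -1 when the item is absent):

```c
/* Sequential search: scan LA[0..n-1] and return the index of item,
   or -1 if it is not present. */
int linearSearch(int LA[], int n, int item) {
   int j;
   for (j = 0; j < n; j++) {
      if (LA[j] == item)
         return j;   /* found at index j */
   }
   return -1;         /* reached the end without a match */
}
```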
Algorithm
Consider LA is a linear array with N elements and K is a positive integer such
that K <= N. Following is the algorithm to update the element available at the
K-th position of LA.
1. Start
2. Set LA[K-1] = ITEM
3. Stop
#include <stdio.h>
int main() {
   int LA[] = {1,3,5,7,8};
   int k = 3, n = 5, item = 10;
   int i;
   printf("The original array elements are :\n");
   for(i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   LA[k-1] = item;
   printf("The array elements after updation :\n");
   for(i = 0; i < n; i++) {
      printf("LA[%d] = %d \n", i, LA[i]);
   }
   return 0;
}
When we compile and execute the above program, it produces the following
result −
Output
The original array elements are :
LA[0] = 1
LA[1] = 3
LA[2] = 5
LA[3] = 7
LA[4] = 8
The array elements after updation :
LA[0] = 1
LA[1] = 3
LA[2] = 10
LA[3] = 7
LA[4] = 8
Linked List
A linked list is a sequence of data structures, which are connected together
via links
...
Each link contains a
connection to another link
...
Following are the important terms to understand the concept of
Linked List
...
Next − Each link of a linked list contains a link to the next link called
Next
...
Linked List Representation
Linked list can be visualized as a chain of nodes, where every node points to
the next node
...
Linked List contains a link element called first
...
Each link is linked with its next link using its next pointer
...
Types of Linked List
Following are the various types of linked list
...
Doubly Linked List − Items can be navigated forward and backward
...
Basic Operations
Following are the basic operations supported by a list
Insertion − Adds an element at the beginning of the list
...
Display − Displays the complete list
...
Insertion Operation
Adding a new node to a linked list is a multi-step activity
First, create a node using the same structure
and find the location where it has to be inserted
...
Then point the new node B to the right node −
NewNode.next −> RightNode;
It should look like this −
Now, the next node at the left should point to the new node −
LeftNode.next −> NewNode;
This will put the new node in the middle of the two
...
While inserting it at the end, the second last node of the list should
point to the new node and the new node will point to NULL
...
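The two pointer assignments above can be sketched in C. The node structure and the function name insertAfter are illustrative; they are not taken verbatim from the notes:

```c
#include <stdlib.h>

struct node {
   int data;
   struct node *next;
};

/* Insert a new node holding data between LeftNode and the node that
   currently follows it. */
void insertAfter(struct node *LeftNode, int data) {
   struct node *NewNode = (struct node*) malloc(sizeof(struct node));
   NewNode->data = data;
   NewNode->next = LeftNode->next;   /* NewNode.next -> RightNode */
   LeftNode->next = NewNode;         /* LeftNode.next -> NewNode  */
}
```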
We shall learn with pictorial
representation
...
The left (previous) node of the target node now should point to the next node
of the target node −
LeftNode.next −> TargetNode.next;
This will remove the link that was pointing to the target node
...
TargetNode.next −> NULL;
We can keep that in memory; otherwise, we can
simply deallocate memory and wipe off the target node completely
...
We need to make the last node to be
pointed by the head node and reverse the whole linked list
...
It should be pointing to NULL
...
So we'll have
some temp node, which looks like the head node pointing to the last node
...
Except the node (first node) pointed by the head node, all nodes should point
to their predecessor, making them their new successor
...
We'll make the head node point to the new first node by using the temp node
...
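The reversal described above can be written as a short C sketch (reverseList is an illustrative name; prev plays the role of the temp pointer in the description):

```c
#include <stddef.h>

struct node {
   int data;
   struct node *next;
};

/* Reverse the list in place and return the new head: every node is made
   to point to its predecessor, and the old first node ends up at NULL. */
struct node* reverseList(struct node *head) {
   struct node *prev = NULL;
   struct node *current = head;
   while (current != NULL) {
      struct node *next = current->next;  /* remember the rest of the list */
      current->next = prev;               /* point back to the predecessor */
      prev = current;
      current = next;
   }
   return prev;   /* the old last node is the new head */
}
```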
Following are the important terms to understand the concept of doubly
linked list
...
Next − Each link of a linked list contains a link to the next link called
Next
...
LinkedList − A Linked List contains the connection link to the first link
called First and to the last link called Last
...
Doubly Linked List contains a link element called first and last
...
Each link is linked with its next link using its next pointer, and with its previous link using its previous pointer
...
The last link carries a link as null to mark the end of the list
...
Insertion − Adds an element at the beginning of the list
...
Insert Last − Adds an element at the end of the list
...
Insert After − Adds an element after an item of the list
...
Display forward − Displays the complete list in a forward manner
...
Insertion Operation
Following code demonstrates the insertion operation at the beginning of a
doubly linked list
...
Example
//delete first item
struct node* deleteFirst() {
   //save reference to first link
   struct node *tempLink = head;
   //if only one link
   if(head->next == NULL) {
      last = NULL;
   } else {
      head->next->prev = NULL;
   }
   head = head->next;
   //return the deleted link
   return tempLink;
}
Insertion at the End Operation
Following code demonstrates the insertion operation at the last position of a
doubly linked list
...
Both Singly
Linked List and Doubly Linked List can be made into a circular linked list
...
Doubly Linked List as Circular
In a doubly linked list, the next pointer of the last node points to the first node
and the previous pointer of the first node points to the last node, making it
circular in both directions
...
The last link's next points to the first link of the list in both cases of
singly as well as doubly linked list
...
Basic Operations
Following are the important operations supported by a circular list
...
delete − Deletes an element from the start of the list
...
Insertion Operation
Following code demonstrates the insertion operation in a circular linked list
based on a singly linked list
...
deleteFirst():
Begin
   if head is null, then
      it is Underflow and return
   else if next of head = head, then
      deallocate head
      head := null
   else
      ptr := head
      while next of ptr is not head, do
         ptr := next of ptr
      end while
      next of ptr := next of head
      deallocate head
      head := next of ptr
   end if
End
Display List Operation
Following code demonstrates the display list operation in a circular linked list
...
It is named stack as it behaves like a real-world
stack, for example – a deck of cards or a pile of plates, etc
...
For example, we can
place or remove a card or plate from the top of the stack only
...
At any given time, we can
only access the top element of a stack
...
LIFO stands for Last-in-first-out
...
In stack terminology, insertion operation is called PUSH operation and
removal operation is called POP operation
...
A stack can either be of a fixed size, or it may support
dynamic resizing
...
Basic Operations
Stack operations may involve initializing the stack, using it and then
de-initializing it
...
pop() − Removing (accessing) an element from the stack
...
To use a stack efficiently, we need to check the status of stack as well
...
isFull() − check if stack is full
...
At all times, we maintain a pointer to the last PUSHed data on the stack
...
The top pointer provides top value of the stack without actually removing it
...
We initialize top at -1, as the index in array starts from 0
...
Here's
the code −
Example
bool isempty() {
   if(top == -1)
      return true;
   else
      return false;
}
Push Operation
The process of putting a new data element onto stack is known as a Push
Operation
...
Step 2 − If the stack is full, produce an error and exit
...
Step 4 − Adds data element to the stack location, where top is
pointing
...
If the linked list is used to implement the stack, then in step 3, we need to
allocate space dynamically
...
See the following code −
Example
void push(int data) {
   if(!isFull()) {
      top = top + 1;
      stack[top] = data;
   } else {
      printf("Could not insert data, Stack is full.\n");
   }
}
In an array implementation of pop() operation, the data element is
not actually removed, instead top is decremented to a lower position in the
stack to point to the next value
...
A Pop operation may involve the following steps −
Step 1 − Checks if the stack is empty
...
Step 3 − If the stack is not empty, accesses the data element at
which top is pointing
...
Step 5 − Returns success
...
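Following the steps above, a pop() for the array implementation might look like the sketch below (stack, top and MAXSIZE mirror the globals assumed by the push() example; the -1 returned on underflow is my own sentinel choice):

```c
#include <stdio.h>

#define MAXSIZE 8
int stack[MAXSIZE];
int top = -1;

int isempty() {
   return top == -1;
}

/* Return the topmost element; the data is not erased, top is simply
   decremented to point at the next value down. */
int pop() {
   if (!isempty()) {
      int data = stack[top];
      top = top - 1;
      return data;
   } else {
      printf("Could not retrieve data, Stack is empty.\n");
      return -1;
   }
}
```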
Expression Parsing
The way to write arithmetic expression is known as a notation
...
These notations are
−
Infix Notation
Prefix (Polish) Notation
Postfix (Reverse-Polish) Notation
These notations are named after how the operator is placed in the expression
...
Infix Notation
We write expression in infix notation, e.g., a - b + c, where operators are
used in-between operands
...
An
algorithm to process infix notation could be difficult and costly in terms of time
and space consumption
...
For example, +ab
...
Prefix notation is also known as Polish Notation
...
In this notation
style, the operator is postfixed to the operands, i.e., the operator is written
after the operands
...
This is equivalent to its infix notation a
+ b
...
Instead, these infix notations are first
converted into either postfix or prefix notations and then computed
...
Precedence
When an operand is in between two different operators, which operator will
take the operand first, is decided by the precedence of an operator over
others
...
A table of operator precedence is provided later
...
For example, in expression a + b − c, both + and –
have the same precedence, then which part of the expression will be
evaluated first, is determined by associativity of those operators
...
Precedence and associativity determines the order of evaluation of an
expression
...
At any point of time
in expression evaluation, the order can be altered by using parenthesis
...
We here use parenthesis for a + b to be evaluated
first, like (a + b)*c
...
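Once an expression is in postfix form, it can be computed with a single stack: push operands, and on every operator pop two operands, apply it, and push the result. A small sketch for single-digit operands (evalPostfix is an illustrative name, with no error handling for malformed input):

```c
#include <ctype.h>

/* Evaluate a postfix expression of single-digit operands,
   e.g. "23+4*" for (2 + 3) * 4. */
int evalPostfix(const char *expr) {
   int stack[64];
   int top = -1;
   const char *p;
   for (p = expr; *p != '\0'; p++) {
      if (isdigit((unsigned char)*p)) {
         stack[++top] = *p - '0';   /* push the operand */
      } else {
         int b = stack[top--];      /* right operand */
         int a = stack[top--];      /* left operand */
         switch (*p) {
            case '+': stack[++top] = a + b; break;
            case '-': stack[++top] = a - b; break;
            case '*': stack[++top] = a * b; break;
            case '/': stack[++top] = a / b; break;
         }
      }
   }
   return stack[top];   /* the final value left on the stack */
}
```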
Unlike
stacks, a queue is open at both its ends
...
Queue follows First-In-First-Out methodology, i.e., the data item stored first
will be accessed first
...
More real-world examples can be seen as
queues at the ticket windows and bus-stops
...
The following diagram tries to explain queue
representation as a data structure −
As in stacks, a queue can also be implemented using Arrays, Linked-lists,
Pointers and Structures
...
Basic Operations
Queue operations may involve initializing or defining the queue, utilizing it,
and then completely erasing it from the memory
...
dequeue() − remove (access) an item from the queue
...
These are −
peek() − Gets the element at the front of the queue without removing it
...
isempty() − Checks if the queue is empty
...
Let's first learn about supportive functions of a queue −
peek()
This function helps to see the data at the front of the queue
...
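A minimal array-based peek() could look like this sketch (queue and front as assumed globals, consistent with the array-based queue code in this chapter):

```c
int queue[8];
int front = 0;

/* See the data at the front of the queue without removing it. */
int peek() {
   return queue[front];
}
```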
In
case we maintain the queue in a circular linked-list, the algorithm will differ
...
Here's the C programming code −
Example
bool isempty() {
   if(front < 0 || front > rear)
      return true;
   else
      return false;
}
Enqueue Operation
Queues maintain two data pointers, front and rear
...
The following steps should be taken to enqueue (insert) data into a queue −
Step 1 − Check if the queue is full
...
Step 3 − If the queue is not full, increment rear pointer to point to the next
empty space
...
Step 5 − return success
...
Algorithm for enqueue operation
procedure enqueue(data)
if queue is full
return overflow
endif
rear ← rear + 1
queue[rear] ← data
return true
end procedure
Implementation of enqueue() in C programming language −
Example
int enqueue(int data) {
   if(isfull())
      return 0;
   rear = rear + 1;
   queue[rear] = data;
   return 1;
}
Dequeue Operation
Accessing data from the queue is a process of two tasks − access the data
where front is pointing and remove the data after access
...
Step 2 − If the queue is empty, produce underflow error and exit
...
Step 4 − Increment front pointer to point to the next available data
element
...
Algorithm for dequeue operation
procedure dequeue
if queue is empty
return underflow
end if
data = queue[front]
front ← front + 1
return true
end procedure
Implementation of dequeue() in C programming language −
Example
int dequeue() {
   if(isempty())
      return 0;
   int data = queue[front];
   front = front + 1;
   return data;
}
Data Structure and Algorithms Linear Search
Linear search is a very simple search algorithm
...
Every item is checked
and if a match is found then that particular item is returned, otherwise the
search continues till the end of the data collection
...
This search algorithm works on the principle of divide and conquer
...
Binary search looks for a particular item by comparing the middle most item of
the collection
...
If the
middle item is greater than the item, then the item is searched in the sub-array
to the left of the middle item
...
This process continues on the
sub-array as well until the size of the subarray reduces to zero
...
We shall learn the process of binary search with a pictorial example
...
First, we shall determine half of the array by using this formula −
mid = low + (high - low) / 2
Here it is, 0 + (9 - 0) / 2 = 4 (the integer value of 4.5). So, 4 is the mid of the array
...
We find that the value at location 4 is 27, which is not a match
...
We change our low to mid + 1 and find the new mid value again
...
We compare the value stored at location 7 with our
target value 31
...
So, the value must be in the lower part from this location
...
This time it is 5
...
We find that
it is a match
...
Binary search halves the searchable items in each step and thus drastically
reduces the number of comparisons to be made
...
      set midPoint = lowerBound + ( upperBound - lowerBound ) / 2
      if A[midPoint] < x
         set lowerBound = midPoint + 1
      if A[midPoint] > x
         set upperBound = midPoint - 1
      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure
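The pseudocode above translates into a short C sketch with 0-based bounds (binarySearch is an illustrative name; it returns the index of x, or -1 when x does not exist):

```c
/* Binary search over the sorted array A[0..n-1]. */
int binarySearch(const int A[], int n, int x) {
   int lowerBound = 0;
   int upperBound = n - 1;
   while (lowerBound <= upperBound) {
      int midPoint = lowerBound + (upperBound - lowerBound) / 2;
      if (A[midPoint] < x)
         lowerBound = midPoint + 1;   /* continue in the right half */
      else if (A[midPoint] > x)
         upperBound = midPoint - 1;   /* continue in the left half */
      else
         return midPoint;             /* x found at location midPoint */
   }
   return -1;                          /* x does not exist */
}
```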
Data Structure - Interpolation Search
Interpolation search is an improved variant of binary search
...
For this
algorithm to work properly, the data collection should be in a sorted form and
equally distributed
...
Linear search has worst-case complexity of Ο(n) whereas binary search has
Ο(log n)
...
For example, in case of a telephone directory, if we want to search the
telephone number of Morphius
...
Positioning in Binary Search
In binary search, if the desired data is not found then the rest of the list is
divided in two parts, lower and higher
...
Even when the data is sorted, binary search does not take advantage of it to
probe the position of the desired data
...
Initially, the probe position is the position of the middle most item of the
collection
...
To split the list into
two parts, we use the following method −
mid = Lo + ((Hi - Lo) / (A[Hi] - A[Lo])) * (X - A[Lo])
where −
   A    = list
   Lo   = Lowest index of the list
   Hi   = Highest index of the list
   A[n] = Value stored at index n in the list
   X    = Value being searched
If the middle item is greater than the item, then the probe position is again
calculated in the sub-array to the left of the middle item; otherwise, the item
is searched in the sub-array to the right of the middle item
...
This process
continues on the sub-array as well until the size of subarray reduces to zero
...
Algorithm
As it is an improvement on the existing binary search algorithm, we are mentioning the
steps to search the 'target' data value index, using position probing −
Step 1 − Start searching data from middle of the list
...
Step 3 − If it is not a match, probe position
...
Step 5 − If data is greater than middle, search in higher sub-list
...
Step 7 − Repeat until match
...
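Put together, the steps above give a sketch like the following (interpolationSearch is an illustrative name; the multiplication is done before the division so that integer arithmetic does not truncate the probe factor to zero):

```c
/* Interpolation search over a sorted, roughly uniformly distributed
   array A[0..n-1]; returns the index of X or -1. */
int interpolationSearch(const int A[], int n, int X) {
   int Lo = 0, Hi = n - 1;
   while (Lo <= Hi && X >= A[Lo] && X <= A[Hi]) {
      if (A[Hi] == A[Lo])              /* avoid division by zero */
         return (A[Lo] == X) ? Lo : -1;
      /* probe position from the formula above */
      int mid = Lo + ((Hi - Lo) * (X - A[Lo])) / (A[Hi] - A[Lo]);
      if (A[mid] == X)
         return mid;                   /* found */
      if (A[mid] < X)
         Lo = mid + 1;                 /* probe the higher sub-list */
      else
         Hi = mid - 1;                 /* probe the lower sub-list */
   }
   return -1;
}
```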
In
a hash table, data is stored in an array format, where each data value has its
own unique index value
...
Thus, it becomes a data structure in which insertion and search operations
are very fast irrespective of the size of the data
...
Hashing
Hashing is a technique to convert a range of key values into a range of
indexes of an array
...
Consider an example of a hash table of size 20, and the following (key, value)
items are to be stored
...
(1,20)
(2,70)
(42,80)
(4,25)
(12,44)
(14,32)
(17,11)
(13,78)
(37,98)
Sr.No.   Key   Hash           Array Index
1        1     1 % 20 = 1     1
2        2     2 % 20 = 2     2
3        42    42 % 20 = 2    2
4        4     4 % 20 = 4     4
5        12    12 % 20 = 12   12
6        14    14 % 20 = 14   14
7        17    17 % 20 = 17   17
8        13    13 % 20 = 13   13
9        37    37 % 20 = 17   17
Linear Probing
As we can see, it may happen that the hashing technique produces an array index
that is already used. In that case, we can search the next empty location in the
array by looking into the next cell until we find an empty cell
...
This technique is called linear probing
...
Search − Searches an element in a hash table
...
delete − Deletes an element from a hash table
...
struct DataItem {
int data;
int key;
};
Hash Method
Define a hashing method to compute the hash code of the key of the data
item
...
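A hashing method matching the table above is simply the key modulo the table size (hashCode and SIZE are the names the later delete example relies on):

```c
#define SIZE 20

/* Simple modulo hash: maps a key to an array index, e.g. 42 % 20 = 2. */
int hashCode(int key) {
   return key % SIZE;
}
```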
Use linear probing to get the element ahead if the element is not found at the
computed hash code
...
Use linear probing for empty location, if an element is found at the computed
hash code
...
Use linear probing to get the element ahead if an element is not found at the
computed hash code
...
Example
struct DataItem* delete(struct DataItem* item) {
int key = item->key;
//get the hash
int hashIndex = hashCode(key);
//move in array until an empty
while(hashArray[hashIndex] !=NULL) {
if(hashArray[hashIndex]->key == key) {
struct DataItem* temp = hashArray[hashIndex];
//assign a dummy item at deleted position
hashArray[hashIndex] = dummyItem;
return temp;
}
//go to next cell
++hashIndex;
//wrap around the table
hashIndex %= SIZE;
}
return NULL;
}
Data Structure - Sorting Techniques
Sorting refers to arranging data in a particular format
...
Most common orders
are in numerical or lexicographical order
...
Sorting is also used to
represent data in more readable formats
...
Dictionary − The dictionary stores words in an alphabetical order so
that searching of any word becomes easy
...
These algorithms do not require any
extra space, and sorting is said to happen in-place, for example, within the
array itself
...
Bubble sort is an example of
in-place sorting
...
Sorting which uses equal or
more space is called not-in-place sorting
...
Stable and Not Stable Sorting
If a sorting algorithm, after sorting the contents, does not change the
sequence of similar content in which they appear, it is called stable sorting
...
Stability of an algorithm matters when we wish to maintain the sequence of
original elements, like in a tuple for example
...
That is, while sorting if the
source list has some element already sorted, adaptive algorithms will take this
into account and will try not to re-order them
...
They try to force every single element to
be re-ordered to confirm their sortedness
...
For example, 1, 3, 4, 6, 8, 9 are in
increasing order, as every next element is greater than the previous element
...
For example, 9, 8, 6, 4, 3, 1 are in
decreasing order, as every next element is less than the previous element
...
This
order occurs when the sequence contains duplicate values
...
Non-Decreasing Order
A sequence of values is said to be in non-decreasing order, if the
successive element is greater than or equal to its previous element in the
sequence
...
For example, 1, 3, 3, 6, 8, 9 are in non-decreasing order, as every next
element is greater than or equal to (in case of 3) but not less than the previous
one
...
This sorting algorithm is
comparison-based algorithm in which each pair of adjacent elements is
compared and the elements are swapped if they are not in order
...
How Bubble Sort Works?
We take an unsorted array for our example
...
Bubble sort starts with very first two elements, comparing them to check
which one is greater
...
Next, we compare 33 with 27
...
The new array should look like this −
Next we compare 33 and 35
...
Then we move to the next two values, 35 and 10
...
Hence they are not sorted
...
We find that we have reached the end of the array
...
After the second iteration, it should look like this −
Notice that after each iteration, at least one value moves to the end
...
Now we should look into some practical aspects of bubble sort
...
We further assume
that swap function swaps the values of the given array elements
...
This may cause a few complexity issues like what if the array needs no more
swapping as all the elements are already ascending
...
If no swap has occurred, i.e., the array
requires no more processing to be sorted, it will come out of the loop
...
procedure bubbleSort( list : array of items )
   loop = list.count;
   for i = 0 to loop-1 do:
      swapped = false
      for j = 0 to loop-1 do:
         /* compare the adjacent elements */
         if list[j] > list[j+1] then
            /* swap them */
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      /* if no number was swapped that means
         array is sorted now, break the loop */
      if(not swapped) then
         break
      end if
   end for
end procedure
Hence, the next iteration need not include
already sorted elements
...
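The pseudocode can be turned into a C sketch; the inner loop here also shrinks by one each pass, since each iteration moves at least one value to the end (bubbleSort is an illustrative name):

```c
/* Bubble sort with the early-exit flag described above: if a pass
   makes no swap, the array is already sorted and we stop. */
void bubbleSort(int list[], int count) {
   int i, j;
   for (i = 0; i < count - 1; i++) {
      int swapped = 0;
      /* the last i positions already hold their final values */
      for (j = 0; j < count - 1 - i; j++) {
         if (list[j] > list[j+1]) {
            int tmp = list[j];     /* swap the adjacent pair */
            list[j] = list[j+1];
            list[j+1] = tmp;
            swapped = 1;
         }
      }
      if (!swapped)
         break;   /* no swap in this pass: done */
   }
}
```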
Data Structure and Algorithms Insertion Sort
This is an in-place comparison-based sorting algorithm
...
For example, the lower part of an array is
maintained to be sorted
...
Hence the name, insertion sort
...
This algorithm is not
suitable for large data sets as its average and worst case complexity are of
Ο(n2), where n is the number of items
...
Insertion sort compares the first two elements
...
For now, 14 is in
sorted sub-list
...
And finds that 33 is not in the correct position
...
It also checks with all the elements of sorted sub-list
...
Hence, the sorted sub-list remains sorted after swapping
...
Next, it compares 33 with 10
...
So we swap them
...
Hence, we swap them too
...
We swap them again
...
This process goes on until all the unsorted values are covered in a sorted
sub-list
...
Algorithm
Now we have a bigger picture of how this sorting technique works, so we can
derive simple steps by which we can achieve insertion sort
...
Step 1 − If it is the first element, it is already sorted. return 1;
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the
         value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted
Pseudocode
procedure insertionSort( A : array of items )
int holePosition
int valueToInsert
for i = 1 to length(A) inclusive do:
/* select value to be inserted */
valueToInsert = A[i]
holePosition = i
/*locate hole position for the element to be inserted */
while holePosition > 0 and A[holePosition-1] > valueToInsert do:
A[holePosition] = A[holePosition-1]
holePosition = holePosition -1
end while
/* insert the number at hole position */
A[holePosition] = valueToInsert
end for
end procedure
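A direct C translation of the pseudocode above might look like this (insertionSort is an illustrative name):

```c
/* Insertion sort: grow a sorted prefix, inserting each new value
   into the hole opened by shifting larger elements right. */
void insertionSort(int A[], int length) {
   int i;
   for (i = 1; i < length; i++) {
      int valueToInsert = A[i];
      int holePosition = i;
      /* shift larger sorted elements one step to the right */
      while (holePosition > 0 && A[holePosition - 1] > valueToInsert) {
         A[holePosition] = A[holePosition - 1];
         holePosition = holePosition - 1;
      }
      A[holePosition] = valueToInsert;   /* insert at the hole */
   }
}
```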
Data Structure and Algorithms Selection Sort
Selection sort is a simple sorting algorithm
...
Initially,
the sorted part is empty and the unsorted part is the entire list
...
This process continues moving unsorted array boundary by one element to
the right
...
How Selection Sort Works?
Consider the following depicted array as an example
...
For the first position, where 14 is stored presently, we search the whole list and
find that 10 is the lowest value
...
After one iteration 10, which happens to be the
minimum value in the list, appears in the first position of the sorted list
...
We find that 14 is the second lowest value in the list and it should appear at
the second place
...
After two iterations, two least values are positioned at the beginning in a
sorted manner
...
Following is a pictorial depiction of the entire sorting process −
Now, let us learn some programming aspects of selection sort
...
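A minimal C sketch of the selection sort described above (selectionSort is an illustrative name):

```c
/* Selection sort: repeatedly select the minimum of the unsorted part
   and swap it to the front of that part. */
void selectionSort(int list[], int n) {
   int i, j;
   for (i = 0; i < n - 1; i++) {
      int minIndex = i;
      for (j = i + 1; j < n; j++) {
         if (list[j] < list[minIndex])
            minIndex = j;   /* remember the smallest so far */
      }
      if (minIndex != i) {
         int tmp = list[i];
         list[i] = list[minIndex];
         list[minIndex] = tmp;
      }
   }
}
```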
With worst-case time complexity being Ο(n log n), it is one of the most
respected algorithms
...
How Merge Sort Works?
To understand merge sort, we take an unsorted array as the following −
We know that merge sort first divides the whole array iteratively into equal
halves, until atomic values are achieved
...
This does not change the sequence of appearance of items in the original
...
We further divide these arrays and we achieve atomic value which can no
more be divided
...
Please note the color codes given to these lists
...
We see that 14 and 33 are in sorted positions
...
We change the order of 19 and 35 whereas 42 and 44 are
placed sequentially
...
After the final merging, the list should look like this −
Now we should learn some programming aspects of merge sorting
...
By definition, if it is only one element in the list, it is sorted
...
Step 1 − if it is only one element in the list it is already sorted, return
...
Step 3 − merge the smaller lists into new list in sorted order
...
As our
algorithms point out two main functions − divide & merge
...
procedure mergesort( var a as array )
   if ( n == 1 ) return a
   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]
   l1 = mergesort( l1 )
   l2 = mergesort( l2 )
   return merge( l1, l2 )
end procedure
procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while
   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while
   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while
   return c
end procedure
Data Structure and Algorithms - Shell Sort
Shell sort is a highly efficient sorting algorithm and is based on insertion sort
algorithm
...
This algorithm uses insertion sort on a widely spread elements, first to sort
them and then sorts the less widely spaced elements
...
This interval is calculated based on Knuth's formula as −
Knuth's Formula
h = h * 3 + 1
where −
   h is interval with initial value 1
This algorithm is quite efficient for medium-sized data sets, as its average
and worst-case complexity depend on the gap sequence; for Knuth's sequence
the worst case is Ο(n^(3/2)), where n is the number of items
...
How Shell Sort Works?
Let us consider the following example to have an idea of how shell sort works
...
For our
example and ease of understanding, we take the interval of 4
...
Here these values
are {35, 14}, {33, 19}, {42, 27} and {10, 44}
We compare values in each sub-list and swap them (if necessary) in the
original array
...
After this
step, the array should look like this −
Finally, we sort the rest of the array using interval of value 1
...
Following is the step-by-step depiction −
We see that it required only four swaps to sort the rest of the array
...
Step 1 − Initialize the value of h
Step 2 − Divide the list into smaller sub-lists of equal interval h
Step 3 − Sort these sub-lists using insertion sort
Step 4 − Repeat until complete list is sorted
Pseudocode
Following is the pseudocode for shell sort
...
procedure shellSort()
   A : array of items
   /* calculate interval */
   while interval < A.length / 3 do:
      interval = interval * 3 + 1
   end while
   while interval > 0 do:
      for outer = interval; outer < A.length; outer = outer + 1 do:
...
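A C sketch of shell sort with Knuth's gap sequence, performing a gapped insertion sort for each interval (shellSort is an illustrative name):

```c
/* Shell sort: insertion sort over widely spaced elements first,
   then over progressively smaller intervals down to 1. */
void shellSort(int A[], int length) {
   int interval = 1;
   while (interval < length / 3)
      interval = interval * 3 + 1;     /* Knuth's formula h = 3h + 1 */
   while (interval > 0) {
      int outer;
      for (outer = interval; outer < length; outer++) {
         int valueToInsert = A[outer];
         int inner = outer;
         /* shift interval-spaced elements that are greater */
         while (inner >= interval && A[inner - interval] > valueToInsert) {
            A[inner] = A[inner - interval];
            inner -= interval;
         }
         A[inner] = valueToInsert;
      }
      interval = (interval - 1) / 3;   /* step down the gap sequence */
   }
}
```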
A large array is partitioned into two arrays, one of which holds values
smaller than the specified value, say pivot, based on which the partition is
made, and another array holds values greater than the
pivot value
...
This algorithm is quite efficient for large-sized data
sets, as its average and worst-case complexity are Ο(n log n) and Ο(n²), respectively
...
The pivot value divides the list into two parts
...
Quick Sort Pivot Algorithm
Based on our understanding of partitioning in quick sort, we will now try to
write an algorithm for it, which is as follows
...
Each partition is then processed for quick sort
...
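A compact C sketch of the scheme above; partitioning around the last element is one common pivot choice, not necessarily the one the notes use:

```c
static void swapInts(int *a, int *b) {
   int t = *a; *a = *b; *b = t;
}

/* Partition A[low..high] around the pivot A[high]: smaller-or-equal
   values end up on the left, greater values on the right. Returns
   the pivot's final position. */
static int partition(int A[], int low, int high) {
   int pivot = A[high];
   int i = low - 1;               /* boundary of the "smaller" region */
   int j;
   for (j = low; j < high; j++) {
      if (A[j] <= pivot)
         swapInts(&A[++i], &A[j]);
   }
   swapInts(&A[i + 1], &A[high]); /* place the pivot between the parts */
   return i + 1;
}

/* Each partition is then processed recursively. */
void quickSort(int A[], int low, int high) {
   if (low < high) {
      int p = partition(A, low, high);
      quickSort(A, low, p - 1);
      quickSort(A, p + 1, high);
   }
}
```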
We will discuss binary tree or
binary search tree specifically
...
A
binary tree has a special condition that each node can have a maximum of
two children
...
Important Terms
Following are the important terms with respect to tree
...
Root − The node at the top of the tree is called root
...
Parent − Any node except the root node has one edge upward to a
node called parent
...
Leaf − The node which does not have any child node is called the leaf
node
...
Visiting − Visiting refers to checking the value of a node when control
is on the node
...
Levels − Level of a node represents the generation of a node
...
keys − Key represents a value of a node based on which a search
operation is to be carried out for a node
...
A node's left child must have a
value less than its parent's value and the node's right child must have a value
greater than its parent value
...
Tree Node
The code to write a tree node would be similar to what is given below
...
struct node {
   int data;
   struct node *leftChild;
   struct node *rightChild;
};
In a tree, all nodes share common construct
...
Following are the basic operations of a tree −
Search − Searches an element in a tree.
In-order Traversal − Traverses a tree in an in-order manner.
We shall learn creating (inserting into) a tree structure and searching a data
item in a tree in this chapter
...
Insert Operation
The very first insertion creates the tree. Afterwards, whenever an element is to be inserted, first locate its proper location. Start searching from the root node; if the data is less than the key value, search for the empty location in the left subtree and insert the data. Otherwise, search for the empty location in the right subtree and insert the data.
Algorithm
If root is NULL
   then create root node
   return
If root exists then
   compare the data with node.data
   while until insertion position is located
      If data is greater than node.data
         goto right subtree
      else
         goto left subtree
   endwhile
   insert data
end If
Implementation
The implementation of insert function should look like this −
void insert(int data) {
   struct node *tempNode = (struct node*) malloc(sizeof(struct node));
   struct node *current;
   struct node *parent;

   tempNode->data = data;
   tempNode->leftChild = NULL;
   tempNode->rightChild = NULL;

   //if tree is empty, create root node
   if(root == NULL) {
      root = tempNode;
   } else {
      current = root;
      parent = NULL;

      while(1) {
         parent = current;

         //go to left of the tree
         if(data < parent->data) {
            current = current->leftChild;

            //insert to the left
            if(current == NULL) {
               parent->leftChild = tempNode;
               return;
            }
         }
         //go to right of the tree
         else {
            current = current->rightChild;

            //insert to the right
            if(current == NULL) {
               parent->rightChild = tempNode;
               return;
            }
         }
      }
   }
}
Search Operation
Whenever an element is to be searched, start searching from the root node. If the data is less than the key value, search for the element in the left subtree. Otherwise, search for the element in the right subtree. Follow the same algorithm for each node.
Algorithm
If root.data is equal to search.data
   return root
else
   while data not found
      If data is greater than node.data
         goto right subtree
      else
         goto left subtree
      If data found
         return node
   endwhile
   return data not found
end if
The implementation of this algorithm should look like this
...
Because all nodes are connected via edges (links), we always start from the root (head) node.
There are three ways which we use to traverse a tree −
In-order Traversal
Pre-order Traversal
Post-order Traversal
Generally, we traverse a tree to search or locate a given item or key in the
tree or to print all the values it contains
...
We should always remember that every node may
represent a subtree itself
In-order Traversal
In this traversal method, the left subtree is visited first, then the root, and finally the right subtree.
We start from A, and following in-order traversal, we move to its left subtree B. B is also traversed in-order. The process goes on until all the nodes are visited. The output of in-order traversal of this tree will be −
D→B→E→A→F→C→G
Algorithm
Until all nodes are traversed −
Step 1 − Recursively traverse left subtree.
Step 2 − Visit root node.
Step 3 − Recursively traverse right subtree.
Pre-order Traversal
In this traversal method, the root node is visited first, then the left subtree and
finally the right subtree
...
B is also traversed pre-order
...
The output of pre-order traversal of this
tree will be −
A→B→D→E→C→F→G
Algorithm
Until all nodes are traversed −
Step 1 − Visit root node.
Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.
Post-order Traversal
In this traversal method, the root node is visited last, hence the name. First we traverse the left subtree, then the right subtree, and finally the root node.
We start from A, and following post-order traversal, we first visit the left subtree B. B is also traversed post-order. The process goes on until all the nodes are visited.
The output of post-order traversal of this tree will be −
D→E→B→F→G→C→A
Algorithm
Until all nodes are traversed −
Step 1 − Recursively traverse left subtree.
Step 2 − Recursively traverse right subtree.
Step 3 − Visit root node.
Data Structure and Algorithms - Binary Search Tree
A Binary Search Tree (BST) is a tree in which all the nodes follow the below-mentioned properties −
The value of the key of the left sub-tree is less than the value of its parent (root) node's key.
The value of the key of the right sub-tree is greater than or equal to the value of its parent (root) node's key.
Each node has a key and an associated value.
Following is a pictorial representation of BST −
We observe that the root node key (27) has all less-valued keys on the left
sub-tree and the higher valued keys on the right sub-tree
...
Insert − Inserts an element in a tree.
In-order Traversal − Traverses a tree in an in-order manner.
Node
Define a node having some data, and references to its left and right child nodes.
Search Operation
Whenever an element is to be searched, start searching from the root node. Then if the data is less than the key value, search for the element in the left subtree. Otherwise, search for the element in the right subtree. Follow the same algorithm for each node.
Insert Operation
Whenever an element is to be inserted, first locate its proper location. Start searching from the root node; if the data is less than the key value, search for the empty location in the left subtree and insert the data. Otherwise, search for the empty location in the right subtree and insert the data.
Implementation
void insert(int data) {
   struct node *tempNode = (struct node*) malloc(sizeof(struct node));
   struct node *current;
   struct node *parent;

   tempNode->data = data;
   tempNode->leftChild = NULL;
   tempNode->rightChild = NULL;

   //if tree is empty
   if(root == NULL) {
      root = tempNode;
   } else {
      current = root;
      parent = NULL;

      while(1) {
         parent = current;

         //go to left of the tree
         if(data < parent->data) {
            current = current->leftChild;

            //insert to the left
            if(current == NULL) {
               parent->leftChild = tempNode;
               return;
            }
         }
         //go to right of the tree
         else {
            current = current->rightChild;

            //insert to the right
            if(current == NULL) {
               parent->rightChild = tempNode;
               return;
            }
         }
      }
   }
}
Data Structure and Algorithms - AVL Trees
What if the input to binary search tree comes in a sorted (ascending or
descending) manner? It will then look like this −
It is observed that BST's worst-case performance is closest to linear search
algorithms, that is Ο(n)
...
So, a need arises to balance out the existing BST.
Named after their inventors Adelson-Velskii and Landis, AVL trees are height-balancing binary search trees. An AVL tree checks the height of the left and the right sub-trees and assures that the difference is not more than 1. This difference is called the balance factor.
Here we see that the first tree is balanced and the next two trees are not
balanced −
In the second tree, the left subtree of C has height 2 and the right subtree has
height 0, so the difference is 2
...
An AVL tree permits the difference (balance factor) to be only 1.
BalanceFactor = height(left subtree) − height(right subtree)
If the difference in the height of the left and right sub-trees is more than 1, the tree is balanced using rotation techniques.
AVL Rotations
To balance itself, an AVL tree may perform the following four kinds of
rotations −
Left rotation
Right rotation
Left-Right rotation
Right-Left rotation
The first two rotations are single rotations and the next two rotations are
double rotations
...
With this simple tree, let's understand them one by one
...
We perform the left rotation by making A the left subtree of B.
The tree then needs a right rotation
...
Left-Right Rotation
Double rotations are a slightly more complex version of the single rotations already explained.
Let's first check how to perform Left-Right rotation
...
A left-right rotation is a combination of a left rotation followed by a right rotation. A node has been inserted into the right subtree of the left subtree, making the tree unbalanced. These scenarios cause the AVL tree to perform a left-right rotation.
...
This makes A the left subtree of B.
We shall now right-rotate the tree, making B the new root node of
this subtree
...
The tree is now balanced
...
Right-Left Rotation
The second type of double rotation is the Right-Left rotation. It is a combination of a right rotation followed by a left rotation.
This makes A an unbalanced node with a balance factor of 2.
Now, B becomes the right
subtree of A
...
A left rotation is performed by making B the new root node of the
subtree
...
The tree is now balanced
Fibonacci Series
Fibonacci series starts from two numbers − F0 and F1. The initial values of F0 and F1 can be taken as 0, 1 or 1, 1 respectively.
Fibonacci series satisfies the following condition −
Fn = Fn-1 + Fn-2
Hence, a Fibonacci series can look like this −
F8 = 0 1 1 2 3 5 8 13
or, this −
F8 = 1 1 2 3 5 8 13 21
Fibonacci Iterative Algorithm
First we try to draft the iterative algorithm for Fibonacci series
...
START
Procedure Fibonacci(n)
   declare f0, f1, fib, loop
   set f0 to 0
   set f1 to 1
   display f0, f1
   for loop ← 1 to n
      fib ← f0 + f1
      f0 ← f1
      f1 ← fib
      display fib
   end for
END
Fibonacci Recursive Algorithm
Let us now learn how to create a recursive algorithm for the Fibonacci series.
START
Procedure Fibonacci(n)
   if n is less than 2
      return n
   return Fibonacci(n - 1) + Fibonacci(n - 2)
END