1. Asymptotic notation is a shorthand way to describe the running time of an algorithm.
  2. It describes bounds that the running-time curve stays within as the input grows.
  3. The lower and upper bounds are also referred to as ‘best case’ and ‘worst case’ scenarios, respectively.


  1. An abstract data type (ADT) is defined as a mathematical model of the data objects that make up a data type, together with the functions that operate on these objects.
  2. An Abstract Data Type (ADT) is the specification of a data type: it specifies the logical and mathematical model of the data type.
  3. It does not specify how the data will be organized in memory or what algorithms will be used to implement the operations.
  4. An implementation chooses a data structure to represent the ADT.
  5. An important step is the definition of the ADT, which involves mainly two parts:

a.) Description of the way in which the components are related to each other.
b.) Statements of the operations that can be performed on the data type.


  1. An array can be defined as a collection of sequential memory locations that can be referred to by a single name along with a number, known as the index, used to access a particular field or data element.
  2. The general form of declaration is:

           type variable-name[size];

  3. Type specifies the type of the elements that will be contained in the array, such as int, float or char, and size indicates the maximum number of elements that can be stored inside the array.
  4. For example, when we want to store 10 integer values, we can use the declaration int A[10];
  5. There are two types of array:
  • One-dimensional array.
  • Multidimensional array.


  1. A linked list, or one-way list, is a linear collection of data elements, called nodes, where the linear order is given by means of pointers.
  2. Each node is divided into two parts: the first part contains the information of the element, and the second part, called the link field or next-pointer field, contains the address of the next node in the list.


  • The doubly or two-way linked list uses a double set of pointers, one pointing to the next node and the other pointing to the preceding node.
  • In a doubly linked list, all nodes are linked together by multiple links, which helps in accessing both the successor and predecessor nodes of any arbitrary node within the list.
  • Every node in the doubly linked list has three fields: LPT, INFO and RPT.

           [ LPT | INFO | RPT ]

  • LPT will point to the node on the left side (the previous node), i.e., LPT will hold the address of the previous node; RPT will point to the node on the right side (the next node), i.e., RPT will hold the address of the next node.
  • The INFO field stores the information of the node.
  • The structure defined for a doubly linked list is:

    struct node
    {
        int info;
        struct node *rpt;
        struct node *lpt;
    };


  • Push operation: In a push operation, we insert an element onto the stack. Before inserting, we first increase the top pointer and then insert the element.
  • Algorithm:


  1. If TOP = MAX - 1 then write “STACK OVERFLOW” and STOP
  2. TOP + 1 → TOP
  3. ITEM → STACK[TOP]
  4. STOP
  • POP operation:- In a pop operation, we remove an element from the stack. After every pop operation, the top of the stack is decremented by 1.

  • Algorithm:


  1. If TOP < 0 then write “STACK UNDERFLOW” and STOP
  2. STACK[TOP] → ITEM
  3. TOP - 1 → TOP
  4. STOP

TAIL RECURSION:- Tail recursion (or tail-end recursion) is a special case of recursion in which the last operation of the function, the tail call, is a recursive call. Such recursion can easily be transformed into iteration.

  1. Replacing recursion with iteration, manually or automatically, can drastically decrease the amount of stack space used and improve efficiency.
  2. Converting a call to a branch or jump in such a case is called a tail call optimization.
  3. For example, consider this recursive definition of the factorial function in C:

factorial(n)
{
    if (n == 0)
        return 1;
    return n * factorial(n - 1);
}

4. This definition is not tail recursive, since the recursive call to factorial is not the last thing in the function (its result has to be multiplied by n). It can be rewritten in tail-recursive form with an accumulator:

factorial(n, accumulator)
{
    if (n == 0)
        return accumulator;
    return factorial(n - 1, n * accumulator);
}

factorial(n)
{
    return factorial(n, 1);
}


HANOI TOWER:- Suppose three pegs, labeled A, B and C, are given, and suppose that on peg A there is a finite number n of disks of decreasing size.

  1. The object of the game is to move the disks from peg A to peg C using peg B as an auxiliary.
  2. The rules of the game are as follows:
  • Only one disk may be moved at a time. Specifically, only the top disk on any peg may be moved to any other peg.
  • At no time can a larger disk be placed on a smaller disk.


DEQUEUE:- In a dequeue (double-ended queue), both insertion and deletion operations are performed at either end of the queue. That is, we can insert an element at the rear end or the front end; deletion is also possible from either end.

  1. This dequeue can be used both as a stack and as a queue.
  2. There are various ways in which the elements of a dequeue can be represented; the two most common are:
  • Using a doubly linked list.
  • Using a circular array.


        1.) Input-restricted dequeue: In an input-restricted dequeue, elements can be added at only one end, but we can delete elements from both ends.

        2.) Output-restricted dequeue: An output-restricted dequeue is a dequeue where deletion takes place at only one end but insertion is allowed at both ends.

PRIORITY QUEUE:- A priority queue is a data structure in which each element has been assigned a value called the priority of the element, and an element can be inserted or deleted not only at the ends but at any position in the queue.

  1. A priority queue is a collection of elements such that each element has been assigned an explicit or implicit priority, and such that the order in which elements are deleted and processed follows these rules:
  • An element of higher priority is processed before any element of lower priority.
  • Two elements with the same priority are processed according to the order in which they were inserted into the queue.


  1. Ascending priority queue:- In an ascending priority queue, elements can be inserted in any order, but while deleting elements from the queue, the smallest element is always deleted first.
  2. Descending priority queue:- In a descending priority queue, elements are inserted in any order, but while deleting elements from the queue, the largest element is always deleted first.

INSERTION SORT:- In insertion sort, we pick up a particular value and then insert it at the appropriate place in the sorted sublist, i.e., during the kth iteration the element a[k] is inserted into its proper place among the already sorted elements a[1], a[2], …, a[k-1].

  1. This task is accomplished by comparing a[k] with a[k-1], a[k-2], a[k-3] and so on until the first element a[j] such that a[j] ≤ a[k] is found.
  2. Then each of the elements a[k-1], a[k-2], …, a[j+1] is moved one position up, and the element a[k] is inserted in the (j+1)st position in the array.

Insertion-sort(A)

  1. for j := 2 to length[A]
  2.     do key := A[j]       /* insert A[j] into the sorted sequence A[1…j-1] */
  3.        i := j - 1
  4.        while i > 0 and A[i] > key
  5.            do A[i+1] := A[i]
  6.               i := i - 1
  7.        A[i+1] := key

Analysis of insertion sort:-

Complexity of best case is O(n)

Complexity of average case is O(n²)

Complexity of worst case is O(n²)

MERGE SORT :- Merge sort is a sorting algorithm that uses the idea of divide and conquer.

  1. This algorithm divides the array into two halves, sorts them separately and then merges them.
  2. This procedure is recursive, with the base criteria that the number of elements in the array is not more than 1.

Complexity of merge sort algorithm

  • Let f(n) denote the number of comparisons needed to sort an n-element array A using the merge sort algorithm.
  • The algorithm requires at most log n passes.
  • Moreover, each pass merges a total of n elements, and by the discussion on the complexity of merging, each pass will require at most n comparisons.
  • Accordingly, for both the worst case and the average case,
    f(n) ≤ n log n
  • This algorithm has the same order as heap sort and the same average order as quick sort.
  • The main drawback of merge sort is that it requires an auxiliary array with n elements.
  • Each of the other sorting algorithms requires only a finite number of extra locations, which is independent of n.
  • The results are summarized in the following table:

    Algorithm     Worst case             Average case           Extra memory
    Merge sort    n log n = O(n log n)   n log n = O(n log n)   O(n)


DFS:- Depth-first search (DFS) is a very important algorithm; based upon DFS, there are O(V+E)-time algorithms for the following problems:

  • Testing whether a graph G is connected.
  • Computing a spanning forest of G.
  • Computing the connected components of G.
  • Computing a path between two vertices of G or reporting that no such path exists.
  • Computing a cycle in G or reporting that no such cycle exists.

BFS:- Breadth-first search (BFS) is one of the single-source shortest-path algorithms (for unweighted graphs), so it is used to compute shortest paths.

  • It is also used to solve puzzles such as the Rubik’s cube.
  • A BFS over the cube’s state space finds a solution in the fewest possible moves, i.e., it is optimal, although the enormous number of states makes it slow in practice.


