
Algorithms: An Introduction

An algorithm is a list of instructions and/or rules that a computer follows in order to complete a task. Put another way, an algorithm is a series of steps that are carried out, one at a time, to do something useful or solve a problem. In computing, algorithms give a computer an exact, step-by-step guide to completing an action.


Computer algorithms work on inputs and produce outputs. They take the input and apply each step of the algorithm to that information to generate an output. For example, a search engine takes a search query as its input, searches its database for items relevant to the words in the query, and returns the results to the user as its output. You can easily visualise an algorithm as a flowchart: the input leads to steps and questions that are handled in order, and once every section of the flowchart has been completed, the result produced is the output.
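As a concrete (hypothetical) illustration, the short Python sketch below takes a list of numbers as its input, works through the list step by step, and produces a single output; the function name and the example list are our own choices, not part of any particular system:

```python
def find_largest(numbers):
    """Input: a non-empty list of numbers. Output: the largest number in it."""
    largest = numbers[0]           # start with the first element
    for n in numbers[1:]:          # step through the rest of the input
        if n > largest:            # a decision point, like a flowchart branch
            largest = n
    return largest                 # the generated output

print(find_largest([3, 41, 12, 9, 74, 15]))  # prints 74
```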


The Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics that a piece of software needs in order to provide useful business value: Reliability, Efficiency, Security, Maintainability and (adequate) Size. At the level of an individual algorithm, the most important properties are correctness, finiteness and efficiency:


Correctness: An algorithm is said to be correct if, for every valid set of inputs, it produces the correct output. If it gives the wrong output for even one set of inputs, the algorithm is incorrect.


Finiteness: The algorithm must always terminate after a finite number of steps. For example, loops and recursion must eventually stop; otherwise you end up with an infinite loop or a stack overflow.


Efficiency: By the term efficiency, we mean:

  • The algorithm should efficiently use the resources that are available to the system.

  • The computational time should be as little as possible.

  • The memory used by the algorithm should also be as little as possible. (Generally, there is a trade-off between computational time and memory)
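To make that trade-off concrete, here is a small, hypothetical Python sketch: the first version uses almost no extra memory but re-adds the numbers on every query, while the second spends extra memory on a precomputed table of running totals so that each query needs only a single subtraction:

```python
values = list(range(1_000_000))

# Option 1: little extra memory, but every query walks over the slice again.
def range_sum_slow(lo, hi):
    return sum(values[lo:hi])

# Option 2: extra memory for a table of running totals, but each query is
# just one subtraction.
prefix = [0]
for v in values:
    prefix.append(prefix[-1] + v)

def range_sum_fast(lo, hi):
    return prefix[hi] - prefix[lo]

assert range_sum_slow(10, 20) == range_sum_fast(10, 20)
```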


Time and Space Complexity


The space complexity of an algorithm is the total amount of memory it uses or needs while it runs, measured as a function of the input size.


The time complexity is the number of operations an algorithm performs to complete its task with respect to the input size (the input size being the total number of elements in the input), assuming each operation takes the same amount of time. The algorithm that completes the task in the smallest number of operations is considered the most efficient.


Big O notation is the most common way of expressing time complexity. It describes how the number of steps an algorithm needs grows as the size of its input grows.


Big O notation is written in the form O(n), where O stands for “order of magnitude” and n represents the size of the input we are measuring the complexity against. A task can be handled by one of many algorithms, each of varying complexity, and each scaling differently as the input grows.
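For example, the three hypothetical Python functions below all answer questions about a list of n items, but they sit in different complexity classes and scale very differently as n grows:

```python
def first_item(items):
    """O(1): one step, no matter how long the list is."""
    return items[0]

def contains(items, target):
    """O(n): in the worst case, every item is checked once."""
    for item in items:
        if item == target:
            return True
    return False

def has_duplicates(items):
    """O(n^2): every pair of items may need to be compared."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```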


Top-Down Design


Most systems consist of progressively smaller subsystems.

Top-down design is the name given to breaking a problem down into smaller and smaller manageable parts (also known as decomposition). In programming, these smaller tasks/parts are sometimes referred to as modules, subroutines or procedures (a short sketch follows the list below). Advantages of this include:

  1. Breaking a problem down into smaller parts/tasks makes it far easier to understand and solve.

  2. It allows several developers to work on the same project without getting in each other’s way.

  3. Each module of code can be tested separately.
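As a rough sketch of top-down design (the task and the function names are invented purely for illustration), a “report the class average” problem might be decomposed into three small subroutines, each of which can be written and tested on its own:

```python
def read_scores(raw_text):
    """Sub-task 1: turn raw input text into a list of numbers."""
    return [float(s) for s in raw_text.split()]

def compute_average(scores):
    """Sub-task 2: calculate the average score."""
    return sum(scores) / len(scores)

def format_report(average):
    """Sub-task 3: build the output message."""
    return f"Class average: {average:.1f}"

def main():
    """Top level: the whole problem, expressed in terms of its sub-tasks."""
    scores = read_scores("71 85 62 90 78")
    print(format_report(compute_average(scores)))

main()
```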


Types of Algorithms


There are six main types of algorithms (a minimal sketch of each follows the list):

  1. Recursive algorithm - This algorithm calls itself on a smaller version of its input, obtained while solving the current input. It keeps calling itself until it reaches a case it can solve directly.

  2. Divide & Conquer algorithm - The algorithm has two parts: the first divides the problem into smaller sub-problems, and the second solves those sub-problems and combines their results to produce the final solution.

  3. Dynamic Programming algorithm - This algorithm remembers the results of sub-problems it has already solved and reuses them to find new results. It tackles complex problems by breaking them into multiple simpler overlapping problems, solving each once and storing the answers for future use.

  4. Greedy algorithm - This algorithm is used for solving optimization problems. A locally optimal choice is made at each step in the hope of reaching a globally optimal solution, although this method does not guarantee that an optimal solution will be found. The method has five components: a candidate set, a selection function, a feasibility function, an objective function and a solution function.

  5. Brute Force algorithm - This is one of the simplest algorithms. It blindly iterates over all possible candidate solutions, trying every possible input, in order to find one or more solutions that solve the problem.

  6. Backtracking algorithm - This is a technique for finding a solution incrementally. It solves the problem recursively, building up a solution one piece at a time. If the current partial solution fails, the last step is undone (backtracked) and another option is tried instead.
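The sketches below are minimal Python illustrations of these six types; the specific problems used (factorial, sorting, Fibonacci and so on) are our own example choices, not part of the original article. First, a recursive algorithm that calls itself on a smaller input until it reaches a base case:

```python
def factorial(n):
    if n <= 1:                        # base case: nothing left to break down
        return 1
    return n * factorial(n - 1)       # recursive call on a smaller input

print(factorial(5))  # prints 120
```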
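A divide & conquer sketch, using merge sort as the assumed example: the list is split into halves, each half is sorted separately, and the sorted halves are merged back together:

```python
def merge_sort(items):
    if len(items) <= 1:                  # a list of 0 or 1 items is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # divide: sort each half on its own
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0              # combine: merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```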
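A dynamic programming sketch, using Fibonacci numbers as the assumed example: the results of smaller sub-problems are stored and reused rather than recomputed:

```python
def fibonacci(n):
    results = {0: 0, 1: 1}                            # remembered sub-problem answers
    for i in range(2, n + 1):
        results[i] = results[i - 1] + results[i - 2]  # reuse stored results
    return results[n]

print(fibonacci(10))  # prints 55
```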
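A greedy sketch, using coin change as the assumed example (with coin values chosen so that the greedy choice happens to be optimal): at every step it takes the largest coin that still fits and never reconsiders earlier choices:

```python
def greedy_change(amount, coins=(50, 20, 10, 5, 2, 1)):
    used = []
    for coin in coins:              # selection: always try the biggest coin first
        while amount >= coin:       # feasibility: only take it if it still fits
            amount -= coin
            used.append(coin)
    return used                     # solution: the list of coins handed out

print(greedy_change(67))  # prints [50, 10, 5, 2]
```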
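A brute force sketch (our own example): to find two numbers in a list that add up to a target, it blindly tries every possible pair until one works:

```python
def find_pair_with_sum(numbers, target):
    for i in range(len(numbers)):                # try every possible pair...
        for j in range(i + 1, len(numbers)):
            if numbers[i] + numbers[j] == target:
                return numbers[i], numbers[j]    # ...and stop at the first match
    return None                                  # no pair adds up to the target

print(find_pair_with_sum([3, 8, 11, 4, 6], 10))  # prints (4, 6)
```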
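Finally, a backtracking sketch, using the subset-sum problem as the assumed example: a solution is built one piece at a time, and whenever a partial choice cannot lead to a valid result, the last step is undone and a different option is tried:

```python
def subset_with_sum(numbers, target, chosen=None, start=0):
    chosen = [] if chosen is None else chosen
    if target == 0:
        return list(chosen)                       # a complete solution was found
    for i in range(start, len(numbers)):
        if numbers[i] <= target:                  # only extend with pieces that fit
            chosen.append(numbers[i])             # try this piece
            result = subset_with_sum(numbers, target - numbers[i], chosen, i + 1)
            if result is not None:
                return result
            chosen.pop()                          # backtrack: undo the last step
    return None                                   # dead end along this path

print(subset_with_sum([5, 3, 9, 7], 12))  # prints [5, 7]
```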


This article was written for CyberClubNPSi.
