An algorithm is a sequence of instructions processed in order to produce a desired output. Many different algorithms can solve the same problem, so selecting the appropriate algorithm becomes critical.
In computational theory, an algorithm should be correct, efficient, and easy to implement. Correctness must be established by proof: a correct algorithm comes with a complete description of the problem it solves and an explanation of why it solves it.
Algorithm selection depends on the problem description. A well-defined problem clearly specifies its inputs and the characteristics required of the output. Problems should be described using common structures such as the ordering of elements, selection of elements, hierarchical relationships between elements, targeting of specific elements, boundaries from which elements are drawn, and consistent naming conventions for elements.
An algorithm, whether greedy or dynamic programming, should provide the following basic functionalities (a minimal sketch follows this list):
- Ability to generate ranked/unranked subsets of sets
- A criterion for picking a subset
- Ability to combine subsets
- A search mechanism for finding a good subset
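As a rough illustration of these four functionalities, the sketch below generates subsets, ranks them with a scoring criterion, and searches for a good one. The scoring rule, the item data, and the size limit are assumptions chosen purely for the example.

```python
from itertools import combinations

items = [("a", 4), ("b", 7), ("c", 3), ("d", 9)]  # (name, value) pairs; illustrative data
LIMIT = 2  # consider subsets of at most this size; an assumed constraint

def score(subset):
    """Criterion for picking a set: here, simply the total value."""
    return sum(value for _, value in subset)

# Generate subsets of the item set (all sizes up to LIMIT),
# rank them by the criterion, then search by taking the best-scoring one.
subsets = [s for k in range(1, LIMIT + 1) for s in combinations(items, k)]
ranked = sorted(subsets, key=score, reverse=True)   # ranked subsets
best = ranked[0]                                    # search: pick the top-ranked set
print(best, score(best))                            # (('b', 7), ('d', 9)) 16
```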
Difference between Greedy Method and Dynamic Programming
Algorithms must be compared using well-defined and efficient techniques. Two of the most widely used techniques for comparing algorithms are the RAM model of computation and asymptotic analysis.
As the problem size increases, the number of steps needed to find a solution also increases. The RAM (Random Access Machine) model counts these steps and gives rise to best-, worst- and average-case complexity.
Comparing the Greedy Method and Dynamic Programming using the RAM model:

| | Greedy Method | Dynamic Programming |
| --- | --- | --- |
| Best Case | O(1) | O(n) |
| Worst Case | O(log n) | O(nm) |
| Average Case | O(log n) | O(nm) |
Greedy methods here have logarithmic time complexity, meaning that as the number of inputs increases, the number of operations performed grows only very slowly. Dynamic programming, by contrast, has polynomial time complexity and therefore needs more time to perform its computational operations.
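A small script makes the gap between these growth rates concrete. The step counts are illustrative formulas, not measurements of any particular algorithm, and m is set equal to n purely for the example.

```python
import math

# Compare how a logarithmic step count and a polynomial (n*m) step count
# grow with input size n. Here m = n for simplicity; both formulas are
# assumptions for illustration only.
for n in [10, 100, 1_000, 10_000]:
    log_steps = math.ceil(math.log2(n))   # O(log n) growth
    poly_steps = n * n                    # O(nm) growth with m = n
    print(f"n={n:>6}: log={log_steps:>3} steps, poly={poly_steps:>12} steps")
```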
Greedy methods do not operate on the entire problem set at once; they choose a locally optimal set to work on, which makes them suitable for operating system operations, computer network operations, and the like.
Dynamic programming works on the problem set first and then chooses an optimal set to work on, so it does not require prior information to build an optimal solution set. Greedy algorithms, by contrast, need a heuristic for generating and choosing subproblems in order to work efficiently, as the sketch below illustrates.
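Here is a minimal sketch of such a heuristic, using coin change as a stand-in problem; the largest-coin-first rule and the coin values are assumptions for illustration, not something specified in the article.

```python
def greedy_coin_change(amount, coins):
    """Greedy coin change: the 'largest coin first' rule is the heuristic
    that decides which subproblem to commit to at each step."""
    result = []
    for coin in sorted(coins, reverse=True):  # heuristic: try biggest coins first
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result if amount == 0 else None  # None: heuristic failed to reach 0

print(greedy_coin_change(63, [25, 10, 5, 1]))  # [25, 25, 10, 1, 1, 1]
```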
Dynamic programming requires recursion, and the recursion conditions are developed as execution progresses. If the recursion does not move toward the desired output, the algorithm's execution time grows exponentially, making it inappropriate for real-time applications and, in such cases, the worse choice.
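Fibonacci is a standard stand-in for this point (it is not an example from the article): the same recursion explodes exponentially when overlapping subproblems are recomputed, and becomes linear once results are memoized, which is the essence of dynamic programming.

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: overlapping subproblems are recomputed,
    so the number of calls grows exponentially in n."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Memoized (dynamic programming) version: each subproblem is
    solved once, so the number of calls is linear in n."""
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(90))  # fast; fib_naive(90) would be infeasible to wait for
```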
Operating system and computer network operations require address calculation. Addresses consume memory and identify other computers on the network. Address calculation is done using matrix multiplications, and the degree (dimension) of these multiplications grows with the number of hosts on the Internet or with the number of operating system operations. Since the degree of the matrix multiplication is directly proportional to instruction execution time, an algorithm with logarithmic execution time is preferred, which favours greedy methods.
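To make the cost argument concrete, here is a naive matrix multiplication; the triple loop shows why the work grows rapidly (cubically, for square matrices) as the dimension increases. The example matrices are arbitrary.

```python
def mat_mul(a, b):
    """Naive matrix multiplication: three nested loops, so the work grows
    cubically with the matrix dimension, illustrating why cost rises
    quickly as the 'degree' of the multiplication increases."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                c[i][j] += a[i][k] * b[k][j]
    return c

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```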
In operating systems, job scheduling algorithms are used to run the required processes. Each job is broken into pages, and a page fault mechanism is used to determine the job's feasibility. The page fault mechanism is implemented with single-bit or two-bit patterns, and each additional bit doubles the number of possible bit patterns. This operation must be carried out efficiently, so operating systems use greedy methods, whose logarithmic time complexity compares favourably with the polynomial time complexity of dynamic programming.
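As a sketch of greedy scheduling, the snippet below uses classic interval scheduling with the earliest-finish-time rule; the job data and the rule itself are assumptions standing in for whatever scheduler a real operating system uses.

```python
def schedule_jobs(jobs):
    """Greedy job scheduling: sort by finish time and repeatedly pick the
    job that finishes earliest and does not overlap the previous choice."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):  # heuristic: earliest finish first
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

print(schedule_jobs([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
# [(1, 4), (5, 7), (8, 9)]
```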
In the case of cryptography, the requirement is to compute b^n for large n. A naive algorithm performs n-1 multiplications. Better efficiency is achieved by repeatedly partitioning the problem into two equal halves, but such a partition is not always directly possible; dynamic programming overcomes this by working on the problem set and, based on intermediate results, breaking it into subproblems. This fast exponentiation gives dynamic programming the fast execution that cryptography requires, while retaining polynomial time complexity.
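The repeated halving described here is usually written as exponentiation by squaring; below is a minimal iterative sketch (the iterative form is my assumption, as the article does not specify one).

```python
def fast_pow(b, n):
    """Exponentiation by squaring: computes b**n with O(log n)
    multiplications instead of the naive n - 1, by halving the
    exponent at every step (b^n = (b^2)^(n//2) * b^(n % 2))."""
    result = 1
    while n > 0:
        if n & 1:        # current lowest exponent bit is set
            result *= b
        b *= b           # square the base
        n >>= 1          # halve the exponent
    return result

print(fast_pow(3, 13))  # 1594323, with ~7 multiplications instead of 12
```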
Hardware Configuration for the Greedy Method and Dynamic Programming
Dynamic programming involves recursion, and each step of execution must take the algorithm closer to the recursion's terminating condition. Because dynamic programming has polynomial time complexity and relies on fast exponentiation, appropriate hardware is mandatory for obtaining the desired output within a stipulated time period. Dynamic programs therefore perform better on hardware that can parallelize tasks, and such hardware is costlier, making dynamic programming expensive in terms of hardware cost.
Greedy methods are implemented in operating systems, computer networks and other branches of computer science using binary data structures. Most binary data structures are implemented with threshold logic units and programmable logic units, such as Central Processing Units (CPUs). The precision of a CPU depends on the number of bits it can process, so a 16-bit or 64-bit CPU achieves more precision than a 10-bit CPU, but cost rises as the bit width increases.
Compared with greedy methods, dynamic programming therefore requires considerably more expensive hardware.
Conclusion
Greedy methods execute procedures sequentially and do not backtrack. Sequential execution makes greedy methods effective for operating systems and computer networks, but the lack of backtracking adds computational overhead, which is compensated by efficient memory usage. There is also no guarantee that the greedy choice always leads to an overall optimal solution.
Dynamic programming is built on recursion and supports backtracking. Efficient backtracking calls for special hardware such as GPUs, which makes these algorithms costly, and backtracking also increases processing time. The reliance on recursion makes dynamic programming a riskier choice.