## Introduction

There exist several algorithm design techniques which can be used separately or in combination to solve various problems. Some of these are the Divide and Conquer strategy, the Greedy method, and Dynamic Programming. Each of these methods is particularly effective in solving a certain class of problems. Dynamic programming (DP) is a technique which can be used when the solution to a problem requires making a sequence of decisions. The problem can be viewed as made up of a number of subproblems with interdependence among them. DP can be used when it is required to find the optimal solution to such problems. The following are the characteristics of a problem which can be solved using DP:

- Optimal Substructure - the structure of the problem must be such that the optimal solution to the problem contains the optimal solutions of its subproblems
- Overlapping Subproblems - the subproblem space should be sufficiently small that the same subproblems recur many times; their solutions can then be reused to obtain the solution for the main problem.
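Both characteristics can be illustrated with the Fibonacci numbers, a minimal sketch (not from the original text): the optimal substructure is the recurrence itself, and the overlapping subproblems are the repeated calls for the same `n`, which a cache avoids recomputing.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci via memoization: repeated (overlapping) subproblems
    are computed once and their results cached for reuse."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, the same subproblems would be recomputed an exponential number of times; with it, each `fib(k)` is evaluated exactly once.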
## Examples of problems where DP can be used

- Finding optimal subset - Knapsack, Matrix Chain Multiplication
- Permutation problems - Traveling Salesman problem
- Scheduling problems - job scheduling
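As a concrete instance of the first category, here is a minimal sketch of the 0/1 Knapsack problem (the item weights, values, and capacity below are illustrative, not from the original text):

```python
def knapsack(weights, values, capacity):
    """0/1 Knapsack: dp[w] holds the best total value achievable
    with knapsack capacity w, considering the items seen so far."""
    dp = [0] * (capacity + 1)
    for wt, val in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7  (items of weight 2 and 3)
```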
## Difference From Other Techniques

Dynamic programming differs from the "Divide and Conquer" (D-and-C) method in that the problems solved by D-and-C have non-overlapping subproblems - i.e., new subproblems may be generated each time the method is applied. DP differs from the Greedy method in that the greedy method makes the best possible choice at a particular point and uses it -- a kind of local optimization. So, it may not generate an optimal solution in some cases.

## Steps involved in a DP solution to a problem

- Finding the structure of the optimal solution to the problem - DP relies on the "Principle of Optimality", which states that the optimal solution of a problem will involve the optimal solutions of the subproblems which make up the problem. So, initially it must be determined whether the problem belongs to such a category.
- Obtaining a recursive function for solving it - The optimal solution of problems that can be solved using DP can be represented by a recurrence relation. By repeatedly applying the function to the subproblems, the final solution can be obtained. A typical recursive function may look like:

      f(x) = min(a + f(x-1)) where 'a' is a constant

  The above can be used to obtain the optimal (minimum) value from a set of possible solutions.
- Finding the optimal value using the recursive function - Using the recursive function directly to enumerate all possible solutions and then combining them would not be ideal. So, in DP the solutions to subproblems are stored, and when they are seen again (during the evaluation of some other subproblem) the stored value is simply retrieved instead of recalculating the whole thing again. While this saves a lot of time, there is a space penalty to pay.
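The store-and-reuse idea described above can be sketched with a minimizing recurrence of the same shape as f(x) = min(a + f(x-1)). The coin-change problem below is an illustrative choice, not from the original text; the `memo` dictionary plays the role of the stored subproblem solutions:

```python
def min_coins(amount, coins=(1, 3, 4), memo=None):
    """Fewest coins summing to `amount`, via the recurrence
    f(x) = min over coins c of (1 + f(x - c)), with memoization."""
    if memo is None:
        memo = {}
    if amount == 0:
        return 0
    if amount in memo:
        return memo[amount]  # stored subproblem solution: just retrieve it
    best = min(1 + min_coins(amount - c, coins, memo)
               for c in coins if c <= amount)
    memo[amount] = best
    return best

print(min_coins(6))  # 2  (3 + 3)
```

The `memo` dictionary is the space penalty mentioned above: one entry per distinct subproblem, traded for avoiding exponential recomputation.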
## Application of DP to a permutation problem (Graph Tour problem)

The Graph Tour problem involves traversing all the nodes in a graph exactly once and returning to the starting vertex in such a way that the total cost is minimized. We will apply each of the above steps to the problem:

- Finding the structure of the optimal solution to the problem - Here we determine if this problem can be solved using DP.
- Obtaining a recursive function for solving it
      g(i, S) = min over j ∈ S of { c(i,j) + g(j, S - {j}) }

  where g(i, S) is the cost of a minimum-cost path starting at vertex i, passing through every vertex in the set S exactly once, and ending at the starting vertex, and c(i,j) is the cost of the edge from i to j.
- Finding the optimal value using the recursive function
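The recurrence above can be evaluated with memoization, as described in the steps earlier. The following is a minimal sketch (vertex 0 is assumed to be the starting vertex, and the set S is encoded as a bitmask; the cost matrix is illustrative):

```python
from functools import lru_cache

def tour_cost(cost):
    """Minimum-cost graph tour via the recurrence
    g(i, S) = min over j in S of { cost[i][j] + g(j, S - {j}) },
    with subproblem results cached by lru_cache."""
    n = len(cost)
    full = (1 << n) - 2  # every vertex except the start (vertex 0)

    @lru_cache(maxsize=None)
    def g(i, S):
        if S == 0:
            return cost[i][0]  # all vertices visited: return to start
        return min(cost[i][j] + g(j, S & ~(1 << j))
                   for j in range(1, n) if S & (1 << j))

    return g(0, full)

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tour_cost(cost))  # 80  (tour 0 -> 1 -> 3 -> 2 -> 0)
```

There are O(n * 2^n) distinct (i, S) subproblems, each combined in O(n) time, which is far better than enumerating all (n-1)! permutations but pays the space penalty of storing every cached subproblem.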
[To be continued]