Brief introduction to stochastic optimization
Stochastic optimization is a subfield of operations research, which leverages optimization models to solve real-world problems. Unlike its deterministic counterpart, stochastic optimization (SO) takes uncertainty into account and seeks an optimal solution with respect to the possible realizations of that uncertainty.
Note that most of the material in this post comes from the course EMIS 8384 Stochastic Programming. Credit goes to Prof. Harsha Gangammanavar, a very passionate and knowledgeable researcher in the field of Operations Research. I won’t cover the details of the course, since attending Harsha’s valuable course requires paid enrollment.
General Introduction
Whereas deterministic optimization problems are formulated with known parameters, real-world applications almost invariably include parameters that are unknown at the time of decision making. When parameters are not known with certainty but are assumed to lie in a known set of possible values, we can attempt to seek decisions/solutions that are feasible with respect to all possible values of these parameters and that optimize some objective. In stochastic programming models we will assume that we have knowledge of, or the means to estimate, the distribution functions of the underlying stochastic processes governing the uncertain parameters.
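To make this assumption concrete, here is a minimal Python sketch (not from the course) of a hypothetical newsvendor-style setup: a decision \(x\) is fixed now, an uncertain demand is revealed later, and because we assume its distribution is known we can estimate the expected cost of \(x\) by Monte Carlo sampling. The cost function, distribution, and parameter values are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: order quantity x is chosen now, demand xi is revealed later.
# We assume the demand distribution is known (here: normal with mean 100, sd 20).
x = 110.0                      # candidate decision (order quantity)
c_order, c_short = 1.0, 4.0    # per-unit ordering cost and shortage penalty

def cost(x, xi):
    """Cost of decision x once the uncertain demand xi is realized."""
    return c_order * x + c_short * np.maximum(xi - x, 0.0)

# Monte Carlo estimate of the expected cost E[cost(x, xi)]
xi_samples = rng.normal(loc=100.0, scale=20.0, size=100_000)
expected_cost = cost(x, xi_samples).mean()
print(f"Estimated expected cost of x = {x}: {expected_cost:.2f}")
```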
We will consider the following generic optimization problem:
\[\min_{x\in X} f(x)\] where \(f(x)\) is the objective function and \(X\subseteq R^n\) is the feasible set of decision variables. For example, in a linear program \(X\) is characterized as a polyhedron, \(X=\{x \mid Ax\leq b\}\), and in an integer program \(X = \{x \mid Ax\leq b, x\in Z^n\}\).
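As a small illustration of the linear programming case, the sketch below solves a made-up instance of \(\min_{x\in X} c^\top x\) with \(X=\{x \mid Ax\leq b\}\) using `scipy.optimize.linprog`. The data \(c\), \(A\), \(b\) are arbitrary; the bounds are relaxed to `(None, None)` so the feasible set is exactly the polyhedron \(\{x \mid Ax\leq b\}\) rather than `linprog`'s default nonnegative orthant.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance of  min c^T x  subject to  Ax <= b
c = np.array([-1.0, -2.0])                 # objective coefficients
A = np.array([[-1.0,  0.0],                # -x1        <= 0   (i.e., x1 >= 0)
              [ 0.0, -1.0],                #       -x2  <= 0   (i.e., x2 >= 0)
              [ 1.0,  1.0]])               #  x1 + x2   <= 4
b = np.array([0.0, 0.0, 4.0])

# bounds=(None, None) removes the default x >= 0 bounds, so the
# feasible set is exactly the polyhedron {x | Ax <= b} above.
res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None))
print(res.x, res.fun)                      # optimal solution and objective value
```

The integer programming case \(X = \{x \mid Ax\leq b, x\in Z^n\}\) would additionally require integrality constraints, which is a separate (and generally harder) problem class.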
To be continued…