1 Introduction
Determining the optimal well locations and controls in an oil field is a challenging task. The decision is hard because reservoir performance is affected by geological, engineering, economic and other parameters (Tavallali et al., 2013; Knudsen and Foss, 2013; Shakhsi-Niaei et al., 2014). Optimization algorithms provide a systematic way to solve this problem. By using optimization algorithms, a quality solution can be achieved automatically, which reduces the risk in decision-making. Well placement and control optimization problems are generally computationally expensive and nonconvex, and not every optimization algorithm is appropriate for them. Therefore, finding and applying algorithms that are efficient and robust is one of the most important tasks in solving well placement and control optimization problems.
In this work, we introduce and apply the multilevel coordinate search (MCS) algorithm to the problems of optimizing well placement, well control, and joint placement with control. MCS, introduced by Huyer and Neumaier (1999), is a global optimization algorithm designed to handle the complex topography and multimodality of multidimensional nonlinear objective functions without requiring excessive computing resources. Rios and Sahinidis (2013) completed a systematic comparison using a test set of 502 problems and found that MCS outperforms the other 21 derivative-free algorithms tested (see Table 1). Though MCS has shown its superiority on benchmark and real-world problems (Huyer and Neumaier, 1999; Rios and Sahinidis, 2013; Lambot et al., 2002), to the best of our knowledge, it has not been applied to the optimization of oil field development. We compare MCS, generalized pattern search (GPS), particle swarm optimization (PSO), and the covariance matrix adaptation evolution strategy (CMAES) on four typical test cases from the field of optimal reservoir production development. Our results demonstrate that MCS is strongly competitive and outperforms the other algorithms in most cases.
Table 1: Derivative-free solvers compared by Rios and Sahinidis (2013).

Solver  Version  Language

ASA  26.30  C 
BOBYQA  2009  Fortran 
CMAES  3.26beta  Matlab 
DAKOTA/DIRECT  4.2  C++ 
DAKOTA/EA  4.2  C++ 
DAKOTA/PATTERN  4.2  C++ 
DAKOTA/SOLIS-WETS  4.2  C++ 
DFO  2.0  Fortran 
FMINSEARCH  1.1.6.2  Matlab 
GLOBAL  1.0  Matlab 
HOPSPACK  2.0  C++ 
IMFIL  1.01  Matlab 
MCS  2.0  Matlab 
NEWUOA  2004  Fortran 
NOMAD  3.3  C++ 
PSWARM  1.3  Matlab 
SID-PSM  1.1  Matlab 
SNOBFIT  2.1  Matlab 
TOMLAB/GLCCLUSTER  7.3  Matlab 
TOMLAB/LGO  7.3  Matlab 
TOMLAB/MULTIMIN  7.3  Matlab 
TOMLAB/OQNLP  7.3  Matlab 
Oil field development optimization has two main subproblems: well placement optimization and well control optimization. These two problems are often treated separately (Oliveira and Reynolds, 2014; Bouzarkouna et al., 2012; Wang et al., 2009; Brouwer and Jansen, 2004). Recently, there has been increasing focus on optimizing well placement and control jointly (Forouzanfar et al., 2015; Humphries et al., 2013; Isebor et al., 2014a). Well placement problems aim to optimize the locations of injection and production wells. The location of each vertical well is parametrized by its plane coordinates $(x, y)$, which are usually integers in the reservoir simulator. Well control problems focus on optimizing production scheduling. The optimization variables are often the time-varying bottom hole pressures (BHPs) or the flow rates for each well. The joint problem optimizes well placement and control parameters simultaneously. Thus, the joint problems are more complex and challenging, with an increase in the number and type of variables (Isebor et al., 2014a).
In the past, a number of algorithms have been devised and analysed for both the separate and joint problems of well placement and control optimization. These algorithms fall into two categories: gradient-based methods and derivative-free methods. Applications of gradient-based methods to oil field problems have been presented in many papers (Volkov and Voskov, 2014; Wang et al., 2009; Brouwer and Jansen, 2004; Zandvliet et al., 2008; Sarma et al., 2006; Zhou et al., 2013). These methods take advantage of gradient information to guide their search. The gradient of the objective function can be obtained by using adjoint-based techniques (Brouwer and Jansen, 2004; Sarma et al., 2006; Zandvliet et al., 2008; Volkov and Voskov, 2014), or may be approximated by using numerical methods such as finite differences (Wang et al., 2009; Zhou et al., 2013). The adjoint method, developed in the 1970s (Chen et al., 1974; Chavent, 1974), is widely used for assisted history matching (Wu et al., 1999; Li et al., 2003) and well production optimization (Asheim, 1988; Zakirov et al., 1996; Brouwer and Jansen, 2004). Gradient-based methods have some drawbacks for the well placement and control problem: these problems are nonconvex and generally contain multiple optima. For some problems, particularly well placement, the optimization surface can be very rough, which results in discontinuous gradients (Ciaurri et al., 2011). However, gradient-based methods are often the most efficient methods, especially for the optimal well control problem (Zhao et al., 2013; Handels et al., 2007; Vlemmix et al., 2009; Wang et al., 2007; Forouzanfar and Reynolds, 2014).
For the joint well placement and control optimization problem, two procedures have been proposed and studied. The first is a simultaneous procedure, which optimizes over all well locations and control parameters at once. The second is a sequential procedure, which decouples the joint problem into a well placement optimization subproblem and a well control optimization subproblem. The simultaneous procedure ensures that the best solution exists somewhere in the search space. But it may be difficult to find the global optimum because the search space may be very large and rough. The sequential procedure divides the optimization variables into two smaller groups and optimizes each separately. For each subproblem, the search space is smaller than in the simultaneous case, but the procedure cannot ensure that the best solution exists in the search space because the optimal location depends on how the well is operated and vice versa. Several papers (Li et al., 2012; Bellout et al., 2012; Isebor et al., 2014b) demonstrate that the simultaneous procedure is superior to the sequential approach. However, Humphries et al. (2013) and Humphries and Haynes (2015) found that a sequential procedure was competitive and even preferable to the simultaneous approach in some cases. To test this further, we investigate the effectiveness of these two procedures using a joint placement and control optimization example.
Many black-box, derivative-free methods have been used in oil field problems (Merlini Giuliani and Camponogara, 2015). Typical algorithms include genetic algorithms (GA) (Al-Qahtani et al., 2014; Bouzarkouna et al., 2012), particle swarm optimization (PSO) (Onwunalu and Durlofsky, 2009, 2011), generalized pattern search (GPS) (Asadollahi et al., 2014; Isebor, 2009), the covariance matrix adaptation evolution strategy (CMAES) (Bouzarkouna et al., 2012; Forouzanfar et al., 2015) and hybrid approaches (Isebor et al., 2014a; Humphries and Haynes, 2015). These algorithms can be classified as either deterministic or stochastic, and provide global or local search. The stochastic/global approaches have also been hybridized with deterministic/local search techniques. These algorithms only require the value of the objective function and involve no explicit gradient calculations. For smooth objective functions, the derivative-free methods generally need more function evaluations to converge to a solution than the gradient-based ones. However, most derivative-free algorithms parallelize naturally and easily, which makes their efficiency satisfactory (Ciaurri et al., 2011), and indeed these methods are less likely to become trapped in local optima.

We are particularly interested in using the multilevel coordinate search (MCS) algorithm for the following reasons: 1) it combines a global search with a local search, which leads to quicker convergence than many methods that operate only at the global level; 2) it is intermediate between heuristic methods that find the global optimum only with high probability and methods that guarantee finding an optimum with a required accuracy; 3) it does not need analytic or numerical derivatives; 4) it is guaranteed to converge if the objective is continuous in the neighbourhood of a global minimizer, and no additional smoothness properties are required; 5) the algorithm parameters in MCS have a clear meaning and are easy to choose; 6) it has proved itself in benchmark tests and many real-world problems (Huyer and Neumaier, 1999). Based on these features, we believe that MCS has great potential to solve oil field optimization problems, which are nonconvex, nonlinear, and contain many local optima and discontinuities.

In this paper, we apply MCS to optimization problems of varying complexity in terms of the number and type of optimization variables, the dimension and size of the reservoir models, and the number of wells. We investigate the effect of the algorithmic parameters (initialization list type, number of levels, and the effect of local search) on the optimization results. We present a detailed comparison between MCS and three other popular derivative-free algorithms (GPS, PSO, and CMAES).
This paper is organized as follows: Section 2 describes the formulation of the optimization problems. Section 3 gives an overview of the optimization algorithms considered. In Section 4 we describe our numerical experiments and the corresponding results. Finally, in Section 5 we provide some concluding remarks.
2 Problem Formulation
2.1 General Problem Statement
The objective functions for general oil field development optimization problems are often the net present value (NPV) or cumulative oil production (Sarma et al., 2005; Wang et al., 2009; Isebor et al., 2014a; Oliveira and Reynolds, 2014). We use NPV as the objective function for all our work. NPV accounts for revenue from the oil and gas produced, and for the cost of handling water production and injection. The NPV is defined as
\[
\mathrm{NPV} = \sum_{n=1}^{N_t} \frac{\Delta t_n}{(1+b)^{t_n/\tau}} \sum_{j=1}^{N_w} \left( r_{gp}\, q_{gp,j}^{\,n} + r_{op}\, q_{op,j}^{\,n} - c_{wp}\, q_{wp,j}^{\,n} - c_{wi}\, q_{wi,j}^{\,n} \right), \tag{1}
\]

where $q_{gp,j}^{\,n}$, $q_{op,j}^{\,n}$, $q_{wp,j}^{\,n}$, and $q_{wi,j}^{\,n}$ are the flow rates of the gas, oil, water produced and water injected for well $j$ at time step $n$, respectively; $r_{gp}$ and $r_{op}$ are the gas and oil revenue; $c_{wp}$ and $c_{wi}$ are the costs of produced water and injected water; $N_t$ is the total number of time steps; $t_n$ is the time at the end of the $n$th time step; $\Delta t_n$ is the $n$th time step size; $\tau$ provides the appropriate normalization for $t_n$, e.g., $\tau = 365$ days; $b$ is the fractional discount rate; and $N_w$ is the total number of wells.

The flow rates $q_{gp,j}^{\,n}$, $q_{op,j}^{\,n}$, $q_{wp,j}^{\,n}$, and $q_{wi,j}^{\,n}$ are functions of the dynamic state variables (e.g., pressure, saturation), the geological and fluid variables (e.g., permeability, porosity, viscosity and density of each phase, etc.), the well placement vector $\mathbf{p}$, and the control vector $\mathbf{c}$. Hence equation (1) can be written as

\[
\mathrm{NPV} = J(\mathbf{p}, \mathbf{c}), \tag{2}
\]

where $J$ is a nonlinear response to the input variables; it depends on the output from the numerical solution of a system of nonlinear PDEs describing the reservoir response.
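To make the discounting in equation (1) concrete, here is a minimal Python sketch; the function and argument names are our own illustration, and in practice the rates would come from the reservoir simulator rather than be passed in as lists:

```python
def npv(q_gp, q_op, q_wp, q_wi, r_gp, r_op, c_wp, c_wi, dt, t, b, tau=365.0):
    """Discounted net present value in the form of equation (1).

    q_* are lists indexed by time step, each holding per-well rates;
    dt[n] is the n-th step size, t[n] the time at the end of step n,
    b the fractional discount rate and tau the normalization (days).
    """
    total = 0.0
    for n in range(len(dt)):
        # Undiscounted cash flow over time step n, summed over all wells.
        cash = sum(r_gp * q_gp[n][j] + r_op * q_op[n][j]
                   - c_wp * q_wp[n][j] - c_wi * q_wi[n][j]
                   for j in range(len(q_op[n]))) * dt[n]
        # Discount back to present value.
        total += cash / (1.0 + b) ** (t[n] / tau)
    return total
```

With a zero discount rate the NPV reduces to the undiscounted cash flow, which gives a quick sanity check.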
For the well placement and control optimization problem, we wish to maximize the net present value by adjusting the placement vector $\mathbf{p}$ and the well control vector $\mathbf{c}$ subject to the constraints

\[
g(\mathbf{x}, \mathbf{p}, \mathbf{c}) = 0, \tag{3}
\]
\[
h(\mathbf{p}, \mathbf{c}) \le 0, \tag{4}
\]
\[
e(\mathbf{p}, \mathbf{c}) = 0, \tag{5}
\]
\[
i(\mathbf{p}, \mathbf{c}) \le 0, \tag{6}
\]
\[
\mathbf{p}^{-} \le \mathbf{p} \le \mathbf{p}^{+}, \quad \mathbf{c}^{-} \le \mathbf{c} \le \mathbf{c}^{+}. \tag{7}
\]

Equation (3) represents the reservoir simulation equations, where $\mathbf{x}$ denotes the dynamic state variables. This constraint ensures that the governing reservoir flow equations are satisfied. Equation (4) describes the nonlinear constraint functions (e.g., distance between wells, well water cuts, facility constraints such as field-level injection limits, etc.). Equations (5)–(7) are the equality, inequality, and bound constraints on the well placement and well control vectors, respectively. These constraints are related to, for example, the capacity limitations of the wells and/or facilities.
Due to the complexity of the well placement and control optimization problem, the objective function is not convex; it may have many minima, maxima, and saddle points. Furthermore, the objective surface is very rough, and it is therefore hard to extract gradient information.

To illustrate this, a two-dimensional heterogeneous reservoir model is selected. The model uses 50 × 50 grid blocks with four producers, one in each of the four corners. We optimize the location of one injection well. The NPV surface map is generated numerically by placing a single injection well at each grid block and computing the NPV of production from the reservoir. The NPV surface map and contour map are given in Fig. 1. From the figure we can see that the NPV surface is noisy and includes at least five local optima. This optimization problem has only two variables; it can be regarded as the simplest problem in well placement and control optimization. For more complex problems, the objective surface is far rougher with more local optima. Bangerth et al. (2006) and Forouzanfar et al. (2012) also discuss the roughness of the NPV surface.
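The exhaustive sweep used to build Fig. 1 can be sketched as follows; `evaluate_npv` is a stand-in for a full simulation run with the injector at a given grid block, so the analytic proxy used at the bottom is purely illustrative:

```python
def npv_surface(evaluate_npv, nx=50, ny=50):
    """Evaluate NPV with the injector placed at every (i, j) grid block."""
    return [[evaluate_npv(i, j) for j in range(1, ny + 1)]
            for i in range(1, nx + 1)]

def best_location(surface):
    """Return the 1-based grid block with the highest NPV."""
    best = max((val, i + 1, j + 1)
               for i, row in enumerate(surface)
               for j, val in enumerate(row))
    return best[1], best[2]

# Illustrative smooth proxy in place of the simulator; peak at (25, 25).
proxy = lambda i, j: -((i - 25) ** 2 + (j - 25) ** 2)
surface = npv_surface(proxy)
```

For the real problem each call to `evaluate_npv` is one reservoir simulation, so this brute-force map is affordable only for tiny illustrative cases like this one.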
2.2 Well placement optimization
In the well placement optimization problem, we seek to determine the optimal locations for a specified number of vertical wells, or the optimal trajectories for a specified number of 3D angled wells. The optimization problem studied here can be expressed as

\[
\max_{\mathbf{p}} \; J(\mathbf{p}), \tag{8}
\]
\[
\text{subject to} \quad \mathbf{p}^{-} \le \mathbf{p} \le \mathbf{p}^{+}. \tag{9}
\]

For vertical wells, the location of each well is given by its plane coordinates $(x, y)$. Thus the total number of variables for the well placement vector is $2N_w$, where $N_w = N_p + N_i$ is the total number of wells, and $N_p$ and $N_i$ are the number of production and injection wells, respectively.
The placement of each 3D angled well is parameterized by 6 variables: $x_h$, $y_h$, $z_h$, $l$, $\theta$, and $\varphi$ (Yeten et al., 2002; Bouzarkouna et al., 2012; Humphries and Haynes, 2015). The areal coordinates of the well heel are given by $x_h$ and $y_h$, and the depth of the well heel is given by $z_h$. $l$ is the length of the well. $\theta$ is the angle of the well in the $x$–$y$ plane, and $\varphi$ is the angle the well makes with the horizontal plane. For this 3D angled well placement optimization problem, the total number of variables is $6N_w$. The parameterizations for vertical wells and 3D angled wells are illustrated in Fig. 2.
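As an illustration of this six-variable parameterization, the sketch below converts the heel coordinates, length, and two angles into a toe point; the exact angle conventions (azimuth measured in the $x$–$y$ plane, inclination from horizontal, depth increasing downward) are our assumption:

```python
import math

def well_toe(xh, yh, zh, l, theta, phi):
    """Toe point of an angled well with heel (xh, yh, zh), length l,
    azimuth theta in the x-y plane, and inclination phi from horizontal."""
    xt = xh + l * math.cos(phi) * math.cos(theta)
    yt = yh + l * math.cos(phi) * math.sin(theta)
    zt = zh + l * math.sin(phi)  # depth increases downward
    return xt, yt, zt
```

A horizontal well (phi = 0) aligned with the x axis (theta = 0) has its toe exactly a distance l away in x, which provides a simple consistency check; for any angles, the heel-to-toe distance equals l.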
As in Section 2.1, the general oilfield development optimization problem includes a set of constraints. For our well placement and control optimization problems, we assume only bound constraints are imposed explicitly. The governing reservoir flow equations, equation (3), are always satisfied since we use a reservoir simulator to calculate the objective function value. Some complex constraints, for example the minimum distance between wells, may be naturally enforced since, if two wells are very close, the NPV will generally not be high. Alternatively, one can impose the minimum distance as an explicit constraint as in (Li and Jafarpour, 2012; Emerick et al., 2009). The effect of imposing such constraints explicitly or implicitly on the optimization needs further research; in this paper we choose not to explore this topic. We also require that wells are not outside the reservoir boundary. For reservoirs with irregular boundaries, this is a nonlinear constraint. If a vertical well is placed outside the reservoir, or a 3D angled well is drilled through inactive grid blocks, it cannot produce oil. This leads to a low NPV, and such a well will be eliminated by the optimization algorithm.
Some complex constraints can be transformed into bound constraints. The placement of a 3D angled well can be parameterized either by $(x_h, y_h, z_h, l, \theta, \varphi)$ or by $(x_h, y_h, z_h, x_t, y_t, z_t)$, where $(x_h, y_h, z_h)$ and $(x_t, y_t, z_t)$ are the heel point and the toe point of the well, respectively (Emerick et al., 2009). Using the latter approach, the constraints on the maximum and minimum well length can be written as

\[
l_{\min} \le \left\| (x_t, y_t, z_t) - (x_h, y_h, z_h) \right\|_2 \le l_{\max}, \tag{10}
\]

which are two nonlinear inequality constraints. For the first approach, the maximum and minimum well length can be stated as bound constraints on $l$. Choosing how to enforce constraints requires some experience, and this is one of the main difficulties in formulating and solving optimization problems.
Most of the optimization algorithms considered in this paper were originally designed either for unconstrained optimization problems or for problems with bound constraints only. While constraints can decrease the size of the search space, treating constraints explicitly often burdens the computation.
The coordinates of each well are real variables in actual oil fields, but are usually treated as integers (i.e., grid-block indices) in reservoir simulators. The well coordinate variables in our work are continuous but are rounded before we pass them to the simulator to evaluate the objective function. This leads to a discontinuity in NPV as the well moves across the boundary between two grid blocks. Optimization becomes problematic when there are discontinuities in the objective function surface while using gradient-based algorithms. The derivative-free algorithms considered in this paper, MCS, GPS, PSO, and CMAES, do not need this gradient information and hence can be used in this case. However, restricting the real locations to integer values does introduce an error in the optimized location. The error depends on the grid-block size, and is acceptable in an engineering sense. Sarma and Chen (2008) and Forouzanfar et al. (2012) give ways to deal with this discontinuity in the NPV while optimizing well placement.
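The rounding step just described can be sketched as follows; the clamping to the grid extent is our addition, to keep rounded locations inside the model:

```python
def to_gridblock(x, y, nx, ny):
    """Round continuous well coordinates to 1-based grid-block indices,
    clamped to the nx-by-ny grid extent."""
    i = min(max(int(round(x)), 1), nx)
    j = min(max(int(round(y)), 1), ny)
    return i, j
```

Two nearby continuous locations, e.g. x = 3.4 and x = 3.6, map to different blocks, which is exactly the NPV discontinuity discussed above.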
2.3 Well control optimization
The well control optimization problem aims to determine the optimal time-varying well settings for each of the production and injection wells. The optimization problem can be stated as follows:

\[
\max_{\mathbf{c}} \; J(\mathbf{c}), \tag{11}
\]
\[
\text{subject to} \quad \mathbf{c}^{-} \le \mathbf{c} \le \mathbf{c}^{+}, \tag{12}
\]

where $\mathbf{c}$ denotes the well control vector. Each well can be controlled either by well rate or by bottom hole pressure (BHP). The time-varying well controls are represented by piecewise-constant functions in time with $N_c$ control intervals. The number of variables for this problem is $N_w N_c$.
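The piecewise-constant parameterization can be sketched as below; the flat layout of the control vector (the $N_c$ values of well 0 first, then well 1, and so on) is our own convention for illustration:

```python
def control_at(c, n_intervals, well, time, horizon):
    """Value of the piecewise-constant control (e.g. BHP) for a given
    well at a given time; c is the flat control vector of length
    n_wells * n_intervals and horizon is the total production time."""
    # Index of the control interval containing `time` (last interval
    # is closed on the right).
    k = min(int(time * n_intervals / horizon), n_intervals - 1)
    return c[well * n_intervals + k]
```

In a real run the simulator would be handed the whole schedule up front; this lookup just illustrates how $N_w N_c$ numbers define every well setting over the full horizon.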
As in the well placement optimization problem, only bound constraints are considered explicitly here.
2.4 Joint well placement and control optimization
The joint problem optimizes both well locations and controls. The optimization problem is defined as follows:

\[
\max_{\mathbf{p}, \mathbf{c}} \; J(\mathbf{p}, \mathbf{c}), \tag{13}
\]
\[
\text{subject to} \quad \mathbf{p}^{-} \le \mathbf{p} \le \mathbf{p}^{+}, \tag{14}
\]
\[
\mathbf{c}^{-} \le \mathbf{c} \le \mathbf{c}^{+}, \tag{15}
\]

where $\mathbf{p}$ and $\mathbf{c}$ denote the well placement and control vectors.
Two procedures are commonly used for joint well placement and control optimization: a simultaneous procedure or a sequential procedure. The simultaneous procedure optimizes well locations and controls at the same time; hence the number of optimization variables, $2N_w + N_w N_c$ for vertical wells and $6N_w + N_w N_c$ for 3D angled wells, is larger than for the individual problems, which makes the optimization more difficult.
In the sequential procedure, well placement is optimized first using some reasonable control scheme. The controls are then optimized for the wells at the best locations found in the first stage. These two stages may be repeated. The sequential procedure decouples the joint problem into two separate subproblems, and the difficulty of each subproblem is decreased. The number of optimization variables for the well placement stage is either $2N_w$ or $6N_w$ (for vertical or 3D angled wells), and is $N_w N_c$ for the well control stage. The joint problem is worth studying because the optimal location of each well depends on how the well is operated, and vice versa (Humphries et al., 2013; Isebor et al., 2014a; Humphries and Haynes, 2015). Furthermore, Forouzanfar et al. (2010) point out that the results obtained also depend directly on the specified reservoir lifetime, which makes the joint optimization problem even more complicated. We use a predefined reservoir life for the examples in this paper.
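The sequential procedure described above can be sketched with placeholder optimizers; `optimize` here is a naive search over a candidate list standing in for MCS, GPS, PSO, or CMAES, and the separable toy objective is purely illustrative:

```python
def optimize(f, candidates):
    """Placeholder optimizer: return the candidate maximizing f."""
    return max(candidates, key=f)

def sequential_joint(npv, p_candidates, c_candidates, c_init, cycles=2):
    """Sequential procedure: optimize placement with the controls fixed,
    then optimize controls with the placement fixed, possibly repeating."""
    c = c_init
    for _ in range(cycles):
        p = optimize(lambda p_: npv(p_, c), p_candidates)  # placement stage
        c = optimize(lambda c_: npv(p, c_), c_candidates)  # control stage
    return p, c

# Separable toy NPV with joint optimum at p = 3, c = 5.
toy_npv = lambda p, c: -((p - 3) ** 2) - ((c - 5) ** 2)
p_best, c_best = sequential_joint(toy_npv, range(10), range(10), c_init=0)
```

For this separable objective one cycle already finds the joint optimum; when placement and control interact, the sequential procedure may stop short of the simultaneous optimum, which is exactly the trade-off discussed above.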
3 Optimization Methods Considered
3.1 Multilevel Coordinate Search
MCS, first proposed by Huyer and Neumaier (1999), was inspired by DIRECT (Jones et al., 1993). MCS is a branching scheme which searches by recursively splitting hyperrectangles. Like DIRECT, MCS is a mathematical programming approach which provides a systematic search within the bound constraints (the bounds can be infinite for MCS). MCS builds upon DIRECT by introducing a multilevel mechanism which allows a balanced global and local search; DIRECT has no local search capability.
Levels are assigned as an increasing function of the number of times a box has been split. The global search portion of the algorithm is accomplished by splitting boxes that have not been searched often – those with a low level. Within a level the boxes with the best function values are selected to complete a local search. The local search builds a quadratic model, determines a promising search direction and performs a line search. This allows for quicker convergence while the global part of the algorithm identifies a region near the global optimum.
MCS allows for a more irregular splitting than DIRECT, giving preference to regions with low function values. Convergence to a global optimum is guaranteed as the number of levels goes to infinity, provided the objective function is continuous around the global optimizer.
Huyer and Neumaier (1999) report that MCS works well on problems where the global optimum can be constrained by finite bound constraints. Pošík et al. (2012) report very good performance in the early search phase with a small budget of objective function evaluations.
MCS provides numerous heuristic enhancements over DIRECT. Consider an $n$-dimensional bound-constrained minimization problem

\[
\min_{\mathbf{x}} \; f(\mathbf{x}), \tag{16}
\]
\[
\text{subject to} \quad \mathbf{u} \le \mathbf{x} \le \mathbf{v}, \tag{17}
\]

where $f$ denotes the objective function; $\mathbf{x} \in \mathbb{R}^n$ denotes the optimization variables; and $\mathbf{u}$ and $\mathbf{v}$ are the lower and upper bounds, respectively. The pseudocode of the basic steps of MCS is described in Alg. 1. A complete description of the algorithm is quite complex and can be found in (Huyer and Neumaier, 1999). For the experiments in this paper, we used the implementation of MCS from (Neumaier, 2008). The algorithm was originally designed to minimize a function; to maximize a function $f$ we simply minimize $-f$.
During the initialization portion of the algorithm, MCS accepts an initialization list which is used to produce an initial set of boxes partitioning the search space. For the $i$th coordinate, the initialization list stores $L_i$ points $x_i^1, \dots, x_i^{L_i}$ ($i = 1, \dots, n$). Users can incorporate their knowledge and experience to choose points with a high possibility of obtaining a good solution. MCS continues to process and split boxes until some boxes reach the maximum level $s_{\max}$. A box can be split either by rank or by expected gain, depending on the relationship between the box level and the number of times the box has been split along each coordinate. If a box has been split many times and has reached a high level, MCS selects the coordinate which has been split the fewest times and splits along this coordinate (split by rank). If the level of a box is not high, MCS splits along a coordinate with the maximum expected gain according to a quadratic model obtained by fitting function values (split by expected gain). The parameter $s_{\max}$ controls the precision of the global search phase before any local search is attempted. MCS also has the option to turn off the local search phase.
We provide a simple example to see how MCS works. We consider the objective function $f(x_1, x_2) = (4 - 2.1x_1^2 + x_1^4/3)x_1^2 + x_1 x_2 + (-4 + 4x_2^2)x_2^2$, the six-hump camel function with 2 unknowns. The bounds for the 2 unknowns are $-3 \le x_1 \le 3$ and $-2 \le x_2 \le 2$. The global minimizers for this function are $(\pm 0.0898, \mp 0.7126)$ and the global minimum value is $-1.0316$. We choose the default parameter settings for MCS: for $i = 1, 2$, the initialization list $x_i^1 = u_i$, $x_i^2 = (u_i + v_i)/2$, $x_i^3 = v_i$, a maximum number of levels $s_{\max} = 5n + 10 = 20$, and we turn the local search phase on. Fig. 3 shows a loop of MCS for this problem.
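For reference, this test function can be written in Python as follows; the minimizer coordinates used in the check are the standard published values:

```python
def six_hump_camel(x1, x2):
    """Six-hump camel function on [-3, 3] x [-2, 2]."""
    return ((4 - 2.1 * x1 ** 2 + x1 ** 4 / 3) * x1 ** 2
            + x1 * x2
            + (-4 + 4 * x2 ** 2) * x2 ** 2)
```

The function is symmetric under simultaneous sign change of both arguments, which is why its two global minimizers come as the pair $(\pm 0.0898, \mp 0.7126)$.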
Fig. 3(a) presents the boxes after the initialization procedure (lines 1–4 of Alg. 1). By using the default initialization list, MCS first splits the root box along the $x_1$ coordinate at the midpoint, the two boundary points, and the points between them determined by the golden ratio. Then MCS chooses the new box that has the highest estimated variability and splits it along the $x_2$ coordinate. We note that the initialization list can also be specified by the user. Different initialization lists result in a different split of the boxes. One other commonly used initialization list is $x_i^1 = (5u_i + v_i)/6$, $x_i^2 = (u_i + v_i)/2$, $x_i^3 = (u_i + 5v_i)/6$. By using this initialization list, the boxes after the initialization procedure are shown in Fig. 4.

The initialization list can also be generated automatically with the aid of line searches. Starting from an initial point, for the $i$th coordinate, we complete line searches along this coordinate to find up to $n_{loc}$ local minima within a limited number of function evaluations. All local minimizers found by the line searches are put into the initialization list. If the number of minima found is less than $n_{loc}$, then we supplement the initialization list with the points closest to the minima obtained by the line searches. The best point is taken as the starting point for the line searches along the next coordinate. We choose the values of $n_{loc}$ and the evaluation budget recommended by Huyer and Neumaier (1999) for all examples in this paper. This stage slows down the convergence of the optimization in the early stages; however, such an initialization list can ultimately improve the performance of MCS significantly. Hence, to get the advantages of the line searches, the total number of function evaluations needs to be set much larger than the number consumed in generating the initialization list.
After the initialization procedure, the search space is further split until one of the boxes reaches the maximum level $s_{\max}$. Fig. 3(b) shows the boxes after the splitting procedure. As mentioned above, $s_{\max}$ decides the depth to which MCS explores a region and hence controls the precision of the global search phase. The boxes obtained with a different value of $s_{\max}$ are shown in Fig. 5.
Once a box reaches the maximum level, a local search starts from its base point. Fig. 3(c) shows the points evaluated by the local search. The local search stops if the maximum number of steps in the local search is reached, or if the stopping criterion comparing the current objective value against the smallest objective value found, to within an acceptable relative accuracy, is triggered. The maximum number of steps is 50 by default. After that, MCS cycles back to the split procedure.
MCS was originally designed for either unconstrained optimization problems or problems with bound constraints only. In general, there are many natural ways to extend unconstrained optimization algorithms to handle constraints, for example, penalty function methods (Bouzarkouna et al., 2012; Ciaurri et al., 2011) and Lagrange multiplier methods (Chen et al., 2012; Brouwer and Jansen, 2004). To the best of our knowledge, no paper has addressed this topic for MCS.
3.2 Configurations of MCS considered
In order to analyse the sensitivity of MCS to its algorithmic parameters, we apply MCS with 7 different parameter settings to our examples. The 7 settings used are:

MCS1: MCS with its default settings from (Huyer and Neumaier, 1999). A simple initialization list is used consisting of midpoints and boundary points, i.e., $x_i^1 = u_i$, $x_i^2 = (u_i + v_i)/2$, $x_i^3 = v_i$. The number of levels is chosen as $s_{\max} = 5n + 10$, where $n$ is the dimension of the problem. The maximal number of visits in the local search is 50, and the acceptable relative accuracy for the local search takes its default value.

MCS2: MCS with the initialization list $x_i^1 = (5u_i + v_i)/6$, $x_i^2 = (u_i + v_i)/2$, $x_i^3 = (u_i + 5v_i)/6$. Unlike the initialization list in MCS1, the points here are uniformly spaced and do not include the boundary points. The other settings are the same as in MCS1.

MCS3: MCS with an auto-generated initialization list. In MCS3, we first perform a sequence of line searches along all coordinate directions to generate the initialization list. The other settings are the same as in MCS1.

MCS4: MCS with the initialization list $x_i^1 = u_i$, $x_i^2 = x_i^0$, $x_i^3 = v_i$, where $\mathbf{x}^0$ is a user-defined initial guess. Unlike the initialization list in MCS1, we use the initial guess instead of the midpoints. The other settings are the same as in MCS1.

MCS5: MCS with the initialization list $x_i^1 = (5u_i + v_i)/6$, $x_i^2 = x_i^0$, $x_i^3 = (u_i + 5v_i)/6$. Unlike the initialization list in MCS2, we use a user-defined initial guess instead of the midpoints. The other settings are the same as in MCS2.

MCS6: MCS with a larger maximum number of levels than the default $s_{\max} = 5n + 10$. This is chosen to attempt to improve the global search phase. The other settings are the same as in MCS4.

MCS7: MCS without the local search phase. In MCS7, we set the maximal number of visits in the local search to 0. The other settings are the same as in MCS4.
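The initialization lists used by the configurations above can be generated as in the following sketch; the formula for the uniformly spaced list of MCS2/MCS5 is our reading of the description above, so treat it as an assumption:

```python
def init_list(u, v, variant="MCS1", x0=None):
    """Three-point initialization list for one coordinate with bounds
    [u, v]; x0 is the user-supplied initial guess for MCS4/MCS5."""
    if variant == "MCS1":   # boundary points and midpoint
        return [u, (u + v) / 2, v]
    if variant == "MCS2":   # uniformly spaced, no boundary points
        return [(5 * u + v) / 6, (u + v) / 2, (u + 5 * v) / 6]
    if variant == "MCS4":   # initial guess replaces the midpoint
        return [u, x0, v]
    if variant == "MCS5":   # as MCS2, with the initial guess in the middle
        return [(5 * u + v) / 6, x0, (u + 5 * v) / 6]
    raise ValueError(variant)
```

Note that for MCS2 the three points sit at one sixth, one half, and five sixths of the interval, so adjacent points are separated by exactly one third of the box width.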
3.3 Other algorithms considered
For comparison, three algorithms are used: generalized pattern search (GPS), particle swarm optimization (PSO) and the covariance matrix adaptation evolution strategy (CMAES).
Generalized pattern search (GPS) (Audet and Dennis, 2002; Torczon, 1997; Yin and Cagan, 2000) is a deterministic local search algorithm. It does not require gradients and hence can be used on problems that are not continuous or differentiable. For the parameter settings, we use a positive spanning set of $2n$ coordinate directions, where $n$ is the dimension of the search space. The expansion factor is set to 2, and the contraction factor is set to 0.5 (Torczon, 1997; Audet and Dennis, 2002; Kolda et al., 2003).
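A minimal pattern-search iteration with the stated expansion and contraction factors might look like the following; this is a bare-bones sketch over the 2n coordinate directions, not the full GPS framework of Audet and Dennis (2002):

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Minimize f by polling the 2n coordinate directions; double the
    step on success, halve it on failure (expansion 2, contraction 0.5)."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(len(x)):
            for d in (step, -step):  # poll +/- along coordinate i
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx = y, fy
                    improved = True
                    break
        step = step * 2.0 if improved else step * 0.5
    return x, fx
```

On a smooth convex function the step shrinks only when no polled direction improves, so the final step size bounds the distance to the minimizer along each coordinate.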
Particle swarm optimization (PSO) (Kennedy, 2011; Vaz and Vicente, 2007) is a populationbased stochastic search method. PSO’s search mechanism mimics the social behavior of biological organisms such as a flock of birds. PSO can search a very large space of candidate solutions, which reduces the chance of getting trapped at an unsatisfactory local optimum.
The performance of PSO depends on the values assigned to the algorithm parameters. Following the work of Perez and Behdinan (2007), our implementation of PSO uses a population size of 50, with the inertia weight $\omega$ and the cognitive and social weighting parameters $c_1$ and $c_2$ taken from that work. The best parameters are usually problem dependent; further tuning for a specific problem will likely yield superior performance. Further discussion of this issue can be found in (Clerc, 2006) and (Onwunalu, 2010).
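The core PSO update with inertia and cognitive/social weights can be sketched as follows; the injectable random source and the specific numbers in the test are our own illustration, not the tuned values of Perez and Behdinan (2007):

```python
import random

def pso_step(x, v, pbest, gbest, w, c1, c2, rng=random.random):
    """One velocity/position update for a single particle.

    x, v: current position and velocity; pbest, gbest: personal and
    global best positions; w, c1, c2: inertia, cognitive and social
    weights; rng: uniform(0, 1) source, injectable for testing."""
    new_v = [w * vi + c1 * rng() * (pb - xi) + c2 * rng() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

The inertia term keeps particles exploring, while the two attraction terms pull each particle toward its personal best and the swarm's global best; a full optimizer repeats this step for every particle and updates the bests after each evaluation.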
The covariance matrix adaptation evolution strategy (CMAES) (Hansen and Kern, 2004; Loshchilov, 2013; Auger and Hansen, 2005) is a population-based stochastic optimization algorithm. Unlike a genetic algorithm (GA), PSO, and other classic population-based stochastic search algorithms, the candidate solutions of CMAES are sampled from a probability distribution which is updated iteratively. For CMAES, we use the settings from (Hansen and Kern, 2004) (see Table 2). In fact, according to their work, CMAES does not require significant parameter tuning for its application.
PSO and CMAES are stochastic algorithms and the result of each trial is different. Thus, in order to assess the overall performance of these algorithms, we run each of the algorithms many times for each test example.
The objective function evaluations for the well placement and control optimization problems require the evaluation of a numerical oil reservoir simulator. The cost of the total optimization process is completely dominated by the cost of each simulation evaluation. The time spent in the nuts and bolts of the optimization algorithm itself can be neglected. As a result we choose to use the number of simulation runs as our performance indicator to compare the optimization strategies. This is a widely used measure in the well placement and control optimization literature (Isebor et al., 2014a; Forouzanfar and Reynolds, 2014; Brouwer and Jansen, 2004; Humphries et al., 2013).
It is also worth mentioning that GPS and CMAES need an initial guess to start the optimization process, while PSO can generate an initial population automatically. In our four examples, we use a physically reasonable initial guess for GPS, PSO, and CMAES. For MCS, the initial point is determined by the initialization list: either the initialization list includes the initial guess (MCS4, MCS5) or an ordinary initialization list is used (MCS1, MCS2, MCS3).
3.4 Algorithm combinations for the joint optimization problem
For the joint well placement and control optimization problem, we consider both a simultaneous procedure and a sequential procedure. The simultaneous procedure optimizes over the well locations and controls simultaneously. To solve the joint problem with a simultaneous approach we consider the 7 different configurations of MCS, and also GPS, PSO, and CMAES.
The sequential procedure divides the optimization process into a well placement optimization stage and a well control optimization stage. Each stage is an independent optimization problem and can be optimized using the same or different algorithms. We label the approaches used for the sequential procedure in the form Algorithm1-Algorithm2, where Algorithm1 denotes the algorithm used for the well placement optimization stage and Algorithm2 denotes the algorithm used for the well control optimization stage. Many such combinations are possible.
The combinations considered in this paper are divided into 3 groups. The first group includes MCS-MCS, GPS-GPS, PSO-PSO, and CMA-ES-CMA-ES, which use the same algorithm in both the well placement stage and the well control stage. The second group includes MCS-GPS, MCS-PSO, and MCS-CMA-ES, which use MCS for the well placement stage. The third group, GPS-MCS, PSO-MCS, and CMA-ES-MCS, uses MCS for the well control stage.
4 Examples and Results
4.1 Example 1, vertical well placement optimization
4.1.1 Reservoir model description
The first example uses the PUNQ-S3 model, which is a small reservoir model based on an actual North Sea reservoir (Gao et al., 2006). The model uses a 19 × 28 × 5 grid with 1761 active grid blocks. The simulation model involves a three-phase gas-oil-water flow. The field initially contains 6 production wells and, due to the strong aquifer, no injection wells. Fig. 6 shows the depth of the top face and the permeability field, together with the initial well locations of the PUNQ-S3 model. The reservoir production time is 20 years, and the bottom hole pressure of each well is fixed at 200 bar.
We seek to optimize the locations of all 6 wells. The formulation of the well placement optimization problem is given in Section 2.2. The objective function is the net present value (NPV). The simulator used to predict the production dynamics (the flow rates of the gas, oil, and water produced and the water injected) is Eclipse (GeoQuest, 2014), a commercial reservoir simulator from Schlumberger Ltd. The economic parameters used to calculate the NPV are given in Table 3.
Every well has two positional variables, which gives a total of 12 optimization variables. Only bound constraints are considered in this example: the coordinates of all 6 wells are forced to lie within the reservoir boundaries.
The optimization problem was solved by using GPS, PSO, CMAES, and all 7 configurations of MCS. The maximum number of simulation runs is set to 600.
Parameter  Value 
Gas revenue (USD/)  0.5 
Oil revenue (USD/)  500.0 
Water-production cost (USD/)  80.0 
Annual discount rate  0 
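A minimal sketch of how an NPV of this form can be computed from per-period produced volumes, using the Table 3 parameters; the data layout and function name are illustrative assumptions, not the paper's implementation:

```python
def npv(volumes, prices, discount_rate=0.0, dt_years=1.0):
    """Net present value of a production profile.

    volumes: per-period dicts of produced 'gas', 'oil', and 'water' volumes
    prices:  'gas' and 'oil' revenues and the 'water' handling cost per unit volume
    """
    total = 0.0
    for t, v in enumerate(volumes):
        cash = (prices["gas"] * v["gas"]
                + prices["oil"] * v["oil"]
                - prices["water"] * v["water"])
        # discount each period's cash flow back to time zero
        total += cash / (1.0 + discount_rate) ** ((t + 1) * dt_years)
    return total
```

With the zero annual discount rate of Table 3, the NPV reduces to the undiscounted sum of the per-period cash flows.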
4.1.2 Results and discussion
The results of Example 1 are shown in Table 4. In this table, the final NPV after 600 simulation runs for each algorithm is given. Moreover, for PSO and CMA-ES, the maximum, minimum, mean, median, and standard deviation of the NPV are given. From the table we can see that GPS obtains the highest NPV after the 600 simulation runs. Though the maximum NPV for PSO and CMA-ES is slightly higher than that of MCS in some cases, MCS generally performs better than PSO and CMA-ES when compared on the mean and median NPV.


Plots of the NPV for the four algorithms versus the number of simulation runs are shown in Fig. 7. Note that for PSO and CMA-ES, 10 trials are performed and the solid lines depict the median NPV over all 10 trials. Since GPS and MCS are deterministic algorithms, only one trial is performed.
From Fig. 7 we can see that MCS shows excellent convergence speed at the early stage of optimization (within the first 200 simulation runs). GPS converges more slowly than MCS at the early stage, but eventually obtains the highest NPV. The two stochastic algorithms, PSO and CMA-ES, show slow convergence at the early stage of the optimization process.
Since PSO and CMA-ES are stochastic algorithms, their performance differs from trial to trial. Fig. 8 shows the range of NPV found amongst the trials of PSO and CMA-ES. In this figure, the areas between the maximum and minimum NPV are shaded for PSO and CMA-ES. It is clear that the NPV obtained by PSO and CMA-ES has a high variation for this example. This suggests that when solving this problem with PSO or CMA-ES, a single trial carries a high risk of yielding an unsatisfactory NPV.
The 7 different MCS configurations are also tested on this example. Detailed results are shown in Fig. 9. These configurations are divided into 3 groups. The first group (MCS1, MCS2, MCS3, MCS4, and MCS5) uses different initialization lists. This allows us to check the impact of the initialization list. The second group (MCS4, MCS6) uses different maximum numbers of levels; the higher the maximum number of levels, the better the global search ability. The third group (MCS4, MCS7) is used to analyse the role of local search in MCS.
Within the first group, MCS3 ultimately achieves the highest NPV, followed by MCS4 and MCS2, and then MCS5 and MCS1. The ultimate difference in NPV between these configurations is small (about 5%). Starting from a relatively low NPV, MCS1 obtains an NPV slightly smaller than MCS4. The NPVs obtained by MCS2 and MCS5 are similar. This shows that MCS with a good initial guess in the initialization list has an advantage over the others, but the advantage is very slight with a large computational budget. MCS4 and MCS5 start the optimization with a relatively high NPV and have a significant advantage over MCS1 and MCS2 when the computational budget is limited.
Comparing MCS1 and MCS2, MCS2 converges faster than MCS1 and ultimately obtains a higher NPV. This indicates that the uniformly spaced initialization list without boundary points is more suitable. To explore this, we normalize the search space to the interval [0,1] and map the initialization lists and the global optima to the normalized search space in Fig. 10. It is clear that the optimal solution aligns better with initialization list II (MCS2), which explains its better performance for this problem. The optimal solution is not known a priori in most cases, so although a suitable initialization list can improve the performance of MCS, it is difficult to choose between MCS1 and MCS2 a priori. For MCS4 and MCS5, the difference in the ultimate NPV is small. This indicates that with a good initial guess, the importance of the initialization list decreases.
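The normalization used for this comparison can be sketched as a per-coordinate affine map; this is a minimal illustration, assuming simple box bounds on each variable:

```python
def normalize(x, lower, upper):
    """Map a bound-constrained candidate onto the unit box [0, 1]^n."""
    return [(xi - lo) / (hi - lo) for xi, lo, hi in zip(x, lower, upper)]
```

For Example 1, each positional variable would be scaled by its grid-index bounds before plotting against the initialization list points.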
The second group, MCS4 and MCS6, compares the performance of MCS with different maximum numbers of levels. MCS with the larger maximum number of levels ultimately obtains a higher NPV.
The performance of MCS with and without local search is compared in the third group, MCS4 and MCS7. Without local search, the convergence speed of MCS is severely decreased and the maximum NPV found is reduced.
The initial well locations and the optimized well locations are shown in Fig. 11. In this example, the initial locations are reasonable positions given by industry (locations used in actual production; Gao et al., 2006). As we can see, the wells are drilled around the gas cap, and the optimized well locations are still located around the gas cap. This is reasonable from a petroleum engineering perspective since the gas cap can keep the pressure up and drive oil to the well bores. Fig. 12 shows the cumulative gas, oil, and water production using the initial and optimized well locations versus time. It is clear that the optimized well locations produce more oil and less water. The cumulative gas for the optimized well locations is lower, which keeps the reservoir pressure higher and drives more oil.
4.2 Example 2, 3D angled well placement optimization
4.2.1 Reservoir model description
This example uses the Egg model, which has been used in numerous papers on well placement and control optimization (Zandvliet et al., 2008; Fonseca et al., 2014; Siraj et al., 2015). The model consists of a 60 × 60 × 7 grid of which 18,553 cells are active. The details of the geological and fluid parameter settings of the Egg model can be found in (Jansen et al., 2014). Fig. 13 shows the reservoir model, displaying the permeability field and the default placement of wells. Note that the model was modified slightly to make it more suitable for production with horizontal and angled wells: the grid block size was adjusted and the net-to-gross thickness ratio was set to 0.2.
We optimize the placement of 12 3D angled wells (8 producers and 4 injectors) in this example. The total number of variables is 72. We use the default well placement setting from (Jansen et al., 2014) as the initial guess for our optimization problem. That is, the initial areal coordinates of each well heel are obtained from the default Egg model as shown in Fig. 13. The initial heel depth is set to 1, which means the heel of each well lies in the top layer. The initial well length is set to 60 m. Initially, the two well angles are chosen so that each well is vertical. The constraints for this problem include:
- the areal coordinates of each well heel are between 1 and 60;
- the depth (layer index) of the well heel is between 1 and 7;
- the length of the well is between 50 and 300 m;
- the two angles of each well are bounded below by 0 and above by their respective maximum values.
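A hedged sketch of how such a six-parameter angled-well description can be turned into heel and toe coordinates is given below. The angle conventions (azimuth in the horizontal plane, inclination measured from the vertical, depth increasing downwards) are assumptions for illustration, not necessarily the parameterization used in the paper:

```python
import math

def well_toe(heel, length, azimuth, inclination):
    """Toe coordinates of an angled well.

    heel: (x, y, z) of the heel in metres, with z increasing downwards
    azimuth: angle in the horizontal plane, radians
    inclination: angle from the vertical, radians (0 = vertical well)
    """
    dx = length * math.sin(inclination) * math.cos(azimuth)
    dy = length * math.sin(inclination) * math.sin(azimuth)
    dz = length * math.cos(inclination)
    return (heel[0] + dx, heel[1] + dy, heel[2] + dz)
```

With inclination 0 the toe lies directly below the heel, matching the vertical initial guess above.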
The reservoir simulation time is 20 years. All wells are controlled by bottom hole pressure: 395 bar for production wells and 410 bar for injection wells. The objective function is the NPV and the related economic parameters are given in Table 5.
Parameter  Value 
Base drill cost (USD/well)  25M 
Drilling cost (USD/m)  50,000 
Gas revenue (USD/)  0.5 
Oil revenue (USD/)  500.0 
Water-production cost (USD/)  80.0 
Water-injection cost (USD/)  80.0 
Annual discount rate  0 
4.2.2 Results and discussion
The results of Example 2 are shown in Table 6. In this table, the ultimate NPV after 10000 simulation runs for each algorithm is given. Moreover, for PSO and CMA-ES, the maximum, minimum, mean, median, and standard deviation of the NPV are shown. From the table we can see that, unlike in the simpler Example 1, PSO eventually outperforms the other algorithms. This is because the search space for this problem is much larger, and PSO has a good ability to explore the entire search space. MCS outperforms the other algorithms with a limited computational budget and without the variability inherent in PSO.


Plots of the NPV for the four algorithms versus the number of simulation runs are shown in Fig. 14. Note that for PSO and CMA-ES, 5 trials are performed and the solid lines depict the median NPV over all 5 trials. From the figure we can see that MCS shows excellent convergence speed at the early stage of optimization (within the first 2000 simulation runs). This is useful for optimization under a limited computational budget.
Fig. 15 shows the range of NPV found amongst the trials of PSO and CMA-ES. Most trials of both algorithms converge to an NPV between 8 and 9 USD, but PSO is able to find a much higher NPV (about 11 USD). This shows that the problem is hard and most runs converge only to a local optimum. It also shows that PSO is potentially better suited to this type of hard problem.
Fig. 17 shows the performance of the different configurations of MCS. Using the default initialization lists, as in MCS1 and MCS2, gives an unreasonable initial guess. MCS4 and MCS5 use a reasonable initial guess, leading to much improved performance. Despite this, MCS2 eventually obtains the highest NPV. In Fig. 16 we normalize the search space to the interval [0,1] and map all the optimization candidates (denoted by circles) and the optimal solutions (denoted by crosses) of MCS1 and MCS2 to the normalized search space. From the figure we can see that 46 out of 72 optimization variables in the optimal solution obtained by MCS1 after 10000 function evaluations are still located at the positions defined by the initialization list. This occurs for only 23 out of 72 variables for MCS2. For this problem, a uniformly spaced initialization list not containing any boundary points is more suitable than an initialization list with boundary points.
MCS3 uses a line search to generate the initialization list automatically. The optimization starts with a very low NPV, but eventually obtains a high NPV. This configuration is highly recommended for problems where the user cannot provide a good initial guess.
MCS6, which uses a larger maximum number of levels, performs better than MCS4. For a large-scale optimization problem, a larger maximum number of levels is recommended. MCS7, which turns off the local search phase, performs slightly worse than MCS4.
The final oil saturation distributions for the initial and optimal well placements are shown in Fig. 18. All wells are vertical in the initial well placement, while some wells in the optimal placement obtained by the optimization algorithm are angled. From the final oil saturation distribution, we can see that the optimal well placement gives a larger sweep area and ultimately achieves higher production performance.
4.3 Example 3, well control optimization
4.3.1 Reservoir model description
This example from (Oliveira and Reynolds, 2014) uses a single-layer reservoir model with uniform grid blocks. The model consists of four production wells and one injection well, which form a five-spot well pattern. We consider an oil-water two-phase flow in this model. The permeability field and well placements are shown in Fig. 19. There are two high-permeability zones and two low-permeability zones in the model. Detailed reservoir information is given in Table 7.
Parameter  Value 

Reservoir grid  
Grid size (m)  
Porosity  0.2 
Net-to-gross ratio  0.2 
Initial oil saturation  0.8 
Initial pressure ()  200 
Oil viscosity ()  0.42 
Water viscosity ()  1.7 
The reservoir lifetime is set to 720 days. With a fixed injection rate for well INJ01, we seek to optimize the liquid rates of the four production wells. Two variations of this optimization problem are considered.

Case 1: each well is produced at a constant liquid rate throughout its lifetime. This gives 4 optimization variables in total.

Case 2: the liquid rate for each well is updated every 90 days (8 control periods). This gives 32 optimization variables in total.
The objective function we use for this example is NPV and the corresponding economic parameters are the same as in Example 1 and are given in Table 3. Only bound constraints are considered and the detailed optimization parameters are given in Table 8.
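For Case 2, the flat 32-variable vector can be viewed as 4 per-well schedules of 8 control periods each. The well-major layout below is an assumption for illustration, not necessarily the encoding used in the paper:

```python
def unpack_controls(x, n_wells=4, n_periods=8):
    """Reshape a flat control vector into one rate schedule per well."""
    assert len(x) == n_wells * n_periods
    return [x[w * n_periods:(w + 1) * n_periods] for w in range(n_wells)]
```

Each schedule entry is the liquid rate held for one 90-day control period.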
Parameter  Case 1  Case 2 

Variables  4  32 
Initial rate (PRO01, PRO03) ()  20  20 
Initial rate (PRO02, PRO04) ()  40  40 
Minimum rate ()  0  0 
Maximum rate (PRO01, PRO03) ()  40  40 
Maximum rate (PRO02, PRO04) ()  80  80 
4.3.2 Results and discussion
The results for the two cases of Example 3 are shown in Table 9 and Table 10. Case 1 uses a maximum of 400 simulations to optimize 4 variables and Case 2 uses a maximum of 3200 simulations to optimize 32 variables.
The initial guess for the optimal control strategy for GPS, PSO, and CMA-ES is the point halfway between the lower and upper bounds. Coincidentally, MCS with its default settings also uses this middle value as its starting point. So for this example, MCS1 and MCS4 are identical, as are MCS2 and MCS5. Hence, we omit MCS4 and MCS5 when analyzing the results.
From Table 9 we can see that for Case 1, all of GPS, PSO, CMA-ES, and MCS (except for configuration MCS7) are able to obtain a high NPV by the end of the optimization, and the ultimate difference between the algorithms is small. The mean and median NPV found by PSO are slightly smaller than those of the other algorithms. For Case 2, similar conclusions can be drawn from Table 10. After 3200 simulation runs, GPS obtains the highest NPV. CMA-ES and MCS (again except for configuration MCS7) are in the middle, while PSO performs the worst.




Plots of the NPV of the four algorithms versus the number of simulation runs are shown in Fig. 20. As in Example 1, 10 trials are performed for PSO and CMA-ES, and the solid lines depict the median NPV over all 10 trials of these two algorithms.
From Fig. 20 we can see that the final NPV obtained by MCS is not the highest over all algorithms tested; however, once again MCS outperforms when the number of simulation runs is limited, and it is not affected by the variability of the other algorithms. When the number of simulation runs is limited to 15% of the final number of simulation runs (60 simulation runs for Case 1 and 480 simulation runs for Case 2), the NPV obtained by each algorithm is given in Table 11. Note that in this table we use the median NPV of the 10 trials for PSO and CMA-ES. We use the median instead of the mean because it is less sensitive to outliers in the data. When the total number of simulation runs is limited, MCS shows significant advantages over PSO, GPS, and CMA-ES. Again MCS7 provides poor results, showing the importance of the local search feature within MCS. This table shows the potential of MCS with a low computational budget.


As we progress from Case 1 to Case 2, the number of optimization variables increases from 4 to 32. The performance of GPS with a low number of simulation runs decreases. In Case 2, the maximum NPV found by GPS is less than that of the other 3 algorithms when the number of simulation runs is limited to 1000; after 1000 simulation runs, GPS is able to find a higher NPV than PSO. The early stage of the optimization process mainly reflects the global search phase, and the later stage includes the effect of the local search phase for the algorithms tested. In Case 2, it is clear that PSO performs better than GPS at an early stage, but GPS outperforms it later. Overall, MCS, which includes both a global search phase and a local search phase, shows a better convergence rate than GPS and PSO.
Fig. 21 shows the range of NPV over the trials of PSO and CMA-ES for Example 3. In this figure, the areas between the maximum and minimum NPV are shaded for PSO and CMA-ES. From this figure we can see that for Case 1, the range of the best NPV is large initially and then decreases for both PSO and CMA-ES. CMA-ES has a small variability near convergence. For Case 2, with its larger number of optimization variables, the range of NPV does not decrease for PSO: each trial falls into a local optimum and has a difficult time escaping. The range for CMA-ES decreases to near zero. This indicates that for PSO and CMA-ES, a large computational budget can decrease the performance variability for this example. Compared to CMA-ES, PSO falls into local optima more easily for problems with a large number of optimization variables.
As in Example 1, we tested different MCS configurations and divided them into 3 groups to do further analysis. The results are shown in Fig. 22 and Fig. 23.
Fig. 22(a) and Fig. 23(a) compare the performance of MCS with different initialization lists for the two cases of Example 3. For Case 1, the convergence rate of MCS2 is the fastest, followed by MCS3 and MCS1. MCS2 and MCS3 ultimately obtain the highest NPV.
For Case 2, MCS1 and MCS2 give a similar convergence rate at an early stage in the optimization process, after which MCS1 falls behind MCS2. MCS3 shows a very slow rate of convergence at an early stage of the optimization process, but ultimately obtains the highest NPV. MCS3 generates the initialization list by using a line search, which takes a few additional simulation runs before the splitting and local search steps. This explains the slow initial convergence.
The effect of the maximum number of levels is shown in Fig. 22(b) and Fig. 23(b). For Case 1, MCS1 performs similarly to MCS6. For Case 2, which has 32 variables, MCS1 converges slightly faster than MCS6 and finally obtains a higher NPV. This indicates that a small maximum number of levels is sufficient for these cases.
Fig. 22(c) and Fig. 23(c) show that local search plays an important role in MCS; without it, the convergence speed decreases significantly.
Fig. 24 presents the optimum controls for wells PRO01 and PRO02 under different control frequencies. We omit the results for wells PRO03 and PRO04 because the reservoir is symmetric. The optimum controls become more like a bang-bang solution for all wells as the number of control steps increases. It is worth noting that the optimum controls for Case 1 are significantly different from those for Case 2. This reflects the different production strategies for wells using a static rate compared to using dynamic well controls in water flooding reservoirs.
4.4 Example 4, joint well placement and control optimization
4.4.1 Reservoir model description
This example uses a 2D reservoir model with the permeability and porosity fields taken from the third layer of the SPE10 benchmark model (Christie and Blunt, 2001). The model consists of 60 × 220 grid cells. We consider an oil-water two-phase flow in this model and the initial oil saturation is 0.8. Fig. 25 shows the permeability and porosity fields of the model.
The optimization problem is to place four wells in the reservoir, including two production wells (P1, P2) and two injection wells (I1, I2). All wells are controlled via BHP that is updated every two years. The production period for this example is 10 years. Thus, there are two location variables and five control variables per well and 28 variables in total. Only bound constraints are considered in this example. The economic and optimization parameters are summarized in Table 12 and Table 13 respectively. Both the simultaneous procedure and the sequential procedure are used for this example.
Parameter  Value 
Oil revenue ()  503.2 
Water-production cost ()  75.5 
Water-injection cost ()  50.3 
Annual discount rate  0 
Parameter  P1  P2  I1  I2 
Initial location  (60,25)  (1,25)  (30,1)  (30,50) 
Initial BHP ()  175  175  362.5  362.5 
Minimum BHP ()  100  100  275  275 
Maximum BHP ()  250  250  450  450 
4.4.2 Simultaneous procedure
The simultaneous procedure optimizes over the well locations and controls at the same time. For this problem, we optimize the locations and control parameters of the 4 wells. Each well has 2 location variables and 5 control variables; thus there are 28 variables in total. Given this problem's complexity, we set the maximum number of simulation runs for this example to 10000. The maximum, minimum, mean, median, and standard deviation of the NPV for each algorithm are given in Table 14. From the table we can see that MCS1 obtains the highest NPV after 10000 simulation runs. The average NPVs for PSO and CMA-ES are in the middle, while GPS performs the worst. Plots of the NPV of the four algorithms versus the number of simulation runs are shown in Fig. 26.
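For the simultaneous procedure, the 28-variable joint vector mixes location and control variables. The per-well layout below (2 location entries followed by 5 control entries) is an assumption for illustration:

```python
def split_joint_vector(x, n_wells=4, n_loc=2, n_ctrl=5):
    """Split a joint placement-and-control vector into per-well parts."""
    per_well = n_loc + n_ctrl
    assert len(x) == n_wells * per_well
    wells = []
    for w in range(n_wells):
        chunk = x[w * per_well:(w + 1) * per_well]
        wells.append({"location": chunk[:n_loc], "controls": chunk[n_loc:]})
    return wells
```

A joint objective would pass the location parts to the well placement routine and the control parts to the schedule builder before each simulation run.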


Fig. 26 shows that MCS converges fastest, followed by CMA-ES, PSO, and GPS in that order. Unlike in Examples 1 and 2, the convergence speed of GPS is the slowest among all algorithms. The NPV of GPS jumps at about 4000 simulation runs; it appears that at this point GPS escapes a local optimum.
Fig. 27 shows the range of NPV over the trials of PSO and CMA-ES. In this figure, the areas between the maximum and minimum NPV are shaded for PSO and CMA-ES. It is clear that the NPV obtained by PSO and CMA-ES has a high variation for this example.
As with Examples 1, 2, and 3, we use the 7 MCS configurations, and the results are compared within the 3 groups in Fig. 29. Fig. 29(a) shows the performance of MCS with different initialization lists. We use a semi-log plot to make this figure clearer. The configurations whose initialization list includes a good initial guess (MCS4 and MCS5) start the search from a relatively high NPV, but obtain an NPV lower than MCS1, which uses the default initialization list with boundary points. For this example, MCS1, MCS2, and MCS3 recover quite quickly from the bad initial guess, and are able to converge more quickly.
We can see that for this example, the initialization list without boundary points (MCS2) performs unsatisfactorily both in terms of the convergence rate and the final NPV. This is because the optimal solution for this example lies near the boundary, as shown in Fig. 28. Using a line search to generate the initialization list (MCS3) ultimately obtains the highest NPV for this example.
Fig. 29(b) shows the performance of MCS with different maximum numbers of levels. MCS6, with the larger maximum number of levels, outperforms MCS4 for this example.
The performance of MCS with and without local search is shown in Fig. 29(c). MCS without local search (MCS7) is clearly inferior.
4.4.3 Sequential procedure
The sequential procedure decouples the joint problem into two separate subproblems. For the well placement subproblem, we optimize the locations of the 4 wells under an assumed control scheme; this gives 8 optimization variables. For the well control subproblem, we optimize the controls of the 4 wells with assumed well locations; each well has 5 control steps, giving 20 optimization variables in total. The maximum number of simulation runs is 60 for each well placement stage and 140 for each well control stage, with a total budget of 5000 simulation runs. This allows us to iterate 25 times between the well placement and well control optimization stages. Based on the results of the previous section we use MCS1 in the sequential procedure.
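The alternation between the two stages can be sketched as a simple outer loop; the stage optimizers here are hypothetical callables standing in for MCS, GPS, PSO, or CMA-ES runs:

```python
def sequential_joint_opt(opt_place, opt_ctrl, loc0, ctrl0, n_cycles=25):
    """Alternate placement and control optimization stages.

    opt_place(ctrl, loc_guess) -> improved locations (placement stage)
    opt_ctrl(loc, ctrl_guess)  -> improved controls  (control stage)
    The per-stage simulation budgets (60 and 140 runs in this example) are
    assumed to be enforced inside the stage optimizers themselves.
    """
    loc, ctrl = loc0, ctrl0
    for _ in range(n_cycles):
        loc = opt_place(ctrl, loc)   # placement stage with controls frozen
        ctrl = opt_ctrl(loc, ctrl)   # control stage with locations frozen
    return loc, ctrl
```

Each cycle consumes 60 + 140 = 200 simulation runs, so 25 cycles exhaust the 5000-run budget.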
The maximum, minimum, mean, median, and standard deviation of the NPV for each algorithm combination are given in Table 15. Plots of the NPV versus the number of simulation runs for each approach are shown in Fig. 30. From Table 15 and Fig. 30 we see that MCS-MCS converges faster than the other combinations and obtains the highest NPV at the end of the optimization. GPS-MCS converges slowly at the early stage, but ultimately obtains the second highest NPV. The combinations that contain stochastic algorithms, especially CMA-ES, perform unsatisfactorily for this example.
Algorithm  Trials  Max  Min  Mean  Median  Std. 

MCS-MCS  1  11.48  11.48  11.48  11.48  0 
GPS-GPS  1  8.65  8.65  8.65  8.65  0 
PSO-PSO  5  8.11  6.64  7.54  7.89  0.79 
CMA-ES-CMA-ES  5  6.58  5.75  6.04  5.79  0.47 
MCS-GPS  1  8.53  8.53  8.53  8.53  0 
MCS-PSO  5  10.78  9.55  10.03  9.76  0.66 
MCS-CMA-ES  5  9.99  9.48  9.69  9.59  0.27 
GPS-MCS  1  10.11  10.11  10.11  10.11  0 
PSO-MCS  5  9.01  6.63  7.84  7.90  1.19 
CMA-ES-MCS  5  8.98  5.80  7.42  7.47  1.59 
We also compare the optimal NPV obtained using the simultaneous and the sequential procedures using beanplots in Fig. 31. A beanplot (Kampstra, 2008) promotes visual comparison of univariate data between groups. In a beanplot, the individual observations are shown as small lines in a onedimensional scatter plot. In addition, the estimated density of the distributions is visible and the mean (bold line) and median (marker ‘+’) are shown.
From Fig. 31 we can see that the final NPV values obtained by all algorithms with the simultaneous procedure are lower than those obtained by MCS-MCS, GPS-MCS, and MCS-PSO with the sequential procedure.
Indeed, the simultaneous procedure becomes trapped in a suboptimal solution. In Fig. 32 we show the NPV obtained around the candidate solution in each of the 28 dimensions by sampling perturbed points along each coordinate direction. For the well location variables we use a fixed perturbation, and for the well control variables we set the perturbation to 1% of the range of each variable. We see that in most directions the NPV remains constant, it is lower in a few directions, and it improves in only a few directions. Hence the algorithms have difficulty finding an ascent direction.
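The probing described above can be sketched as coordinate-wise sampling around the candidate; this is a minimal illustration under the stated assumptions (a fixed perturbation for location variables, 1% of range for controls):

```python
def probe_directions(f, x, deltas, tol=1e-12):
    """Evaluate f at x +/- delta_i along each coordinate and classify moves."""
    base = f(x)
    better, worse, flat = [], [], []
    for i, d in enumerate(deltas):
        for sign in (1.0, -1.0):
            y = list(x)
            y[i] += sign * d
            v = f(y)
            if v > base + tol:
                better.append((i, sign))    # an ascent direction
            elif v < base - tol:
                worse.append((i, sign))
            else:
                flat.append((i, sign))      # NPV locally constant
    return better, worse, flat
```

A mostly-flat objective (long `flat` list, near-empty `better` list) is exactly the situation shown in Fig. 32.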
The optimal well placements and the final oil saturation distributions obtained with the simultaneous and sequential procedures are shown in Fig. 33. The corresponding optimal controls of each well are given in Table 16. The optimal well locations obtained by the simultaneous procedure are significantly different from those obtained by the sequential procedure. From the final oil saturation distribution, we can see that the locations obtained by the sequential procedure give a larger sweep area. The optimal controls obtained by the simultaneous procedure are similar to those found by the sequential procedure.
In theory, for a joint well placement and control optimization problem, the simultaneous procedure is able to find the global optimum, but this is not guaranteed for the sequential procedure, since the optimal location of each well depends on how the well is operated and vice versa. The simultaneous procedure, with its larger number of optimization variables, makes the joint problem more difficult: it requires a higher computational budget and has a higher risk of falling into a local optimum and returning a suboptimal solution, especially for larger scale problems. The sequential procedure decouples a hard joint problem into two easier subproblems and aims to approach the global optimum iteratively. In general, the sequential procedure is worth considering in practice.


4.5 Summary
Our test results show that MCS is strongly competitive with existing algorithms for well placement, well control, and joint optimization problems. In all 4 examples, MCS offers good convergence speed, especially when the number of simulation runs is limited. Moreover, MCS does not suffer from the inherent variability of the stochastic algorithms. Based on the results of the examples, for placement and control optimization we suggest an MCS configuration that uses a line search to generate the initialization list. The default maximum number of levels is enough for most problems, but a higher value should be used for difficult problems. Local search is an important part of MCS, and is highly recommended.
5 Concluding Remarks
In this paper, we applied the multilevel coordinate search algorithm to four typical oil field development optimization problems: well placement optimization, well control optimization, and joint optimization of well placement and control. The performance of MCS has been compared with generalized pattern search, particle swarm optimization, and the covariance matrix adaptation evolution strategy through several case studies including both synthetic and real reservoirs. The results presented here demonstrate that the MCS algorithm is strongly competitive, and outperforms the other algorithms in most cases, especially for the joint optimization problem. MCS has significant advantages in solving optimization problems with a limited number of simulation runs. In addition, MCS does not suffer from the inherent variability of the stochastic approaches.
For the joint well placement and control optimization problem, both the simultaneous and the sequential procedures were considered. In our example, the sequential procedure finds the best solution. Although the simultaneous procedure can theoretically obtain the global optimum, the sequential procedure is worth considering in practice: it decouples a difficult joint problem into two easier subproblems, decreases the number of optimization variables, makes the problem easier to solve, and decreases the risk of the algorithm falling into a local optimum. Among all algorithm combinations considered in this paper, MCS-MCS showed the best performance in the sequential procedure, both in terms of convergence speed and final NPV.
MCS has shown its potential in our work, but more research is needed. Future work includes applying the MCS algorithm to realistic largescale oil field cases. This will involve an extension of MCS to handle linearly and nonlinearly constrained problems, possibly by a penalty approach.
Acknowledgments
The authors acknowledge funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant Program, the National Science and Technology Major Project of the Ministry of Science and Technology of China (2011ZX05011002), the Foundation for Outstanding Young Scientist in Shandong Province (Grant no. BS2014NJ011), and the program of China Scholarships Council (No. 201406450017).
References
Al-Qahtani, G. D., Alzahabi, A., Spinner, T., Soliman, M. Y., 2014. A computational comparison between optimization techniques for wells placement problem: Mathematical formulations, genetic algorithms and very fast simulated annealing. Journal of Materials Science and Chemical Engineering 02 (10), 59–73.
Asadollahi, M., Nævdal, G., Dadashpour, M., Kleppe, J., Feb. 2014. Production optimization using derivative-free methods applied to Brugge field case. Journal of Petroleum Science and Engineering 114, 22–37.
Asheim, H., 1988. Maximization of water sweep efficiency by controlling production and injection rates. In: European Petroleum Conference. Society of Petroleum Engineers.
Audet, C., Dennis, J., Jan. 2002. Analysis of Generalized Pattern Searches. SIAM Journal on Optimization 13 (3), 889–903.
Auger, A., Hansen, N., Sep. 2005. A restart CMA evolution strategy with increasing population size. In: The 2005 IEEE Congress on Evolutionary Computation. Vol. 2. pp. 1769–1776.
Bangerth, W., Klie, H., Wheeler, M. F., Stoffa, P. L., Sen, M. K., Aug. 2006. On optimization algorithms for the reservoir oil well placement problem. Computational Geosciences 10 (3), 303–319.
Bellout, M. C., Ciaurri, D. E., Durlofsky, L. J., Foss, B., Kleppe, J., Jul. 2012. Joint optimization of oil well placement and controls. Computational Geosciences 16 (4), 1061–1079.
Bouzarkouna, Z., Ding, D. Y., Auger, A., Jan. 2012. Well placement optimization with the covariance matrix adaptation evolution strategy and meta-models. Computational Geosciences 16 (1), 75–92.
Brouwer, D. R., Jansen, J. D., 2004. Dynamic Optimization of Waterflooding With Smart Wells Using Optimal Control Theory. SPE Journal 9 (04), 391–402.
Chavent, G., 1974. Identification of functional parameters in partial differential equations. In: Joint Automatic Control Conference. pp. 155–156.
Chen, C., Li, G., Reynolds, A., 2012. Robust constrained optimization of short- and long-term net present value for closed-loop reservoir management. SPE Journal 17 (03), 849–864.
Chen, W. H., Gavalas, G. R., Seinfeld, J. H., Wasserman, M. L., 1974. A New Algorithm for Automatic History Matching. Society of Petroleum Engineers Journal 14 (06), 593–608.
Christie, M. A., Blunt, M. J., 2001. Tenth SPE comparative solution project: A comparison of upscaling techniques. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Ciaurri, D. E., Mukerji, T., Durlofsky, L. J., 2011. Derivative-Free Optimization for Oil Field Operations. In: Yang, X.-S., Koziel, S. (Eds.), Computational Optimization and Applications in Engineering and Industry. No. 359 in Studies in Computational Intelligence. Springer Berlin Heidelberg, pp. 19–55.
Clerc, M., 2006. Stagnation analysis in particle swarm optimization or what happens when nothing happens. Tech. Rep. CSM-460, Department of Computer Science, University of Essex.
Emerick, A. A., Silva, E., Messer, B., Almeida, L. F., Szwarcman, D., Pacheco, M. A. C., Vellasco, M. M. B. R., 2009. Well placement optimization using a genetic algorithm with nonlinear constraints. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Fonseca, R. M., Stordal, A. S., Leeuwenburgh, O., Van den Hof, P. M. J., Jansen, J. D., 2014. Robust ensemble-based multi-objective optimization. In: ECMOR XIV – 14th European Conference on the Mathematics of Oil Recovery.
Forouzanfar, F., Li, G., Reynolds, A. C., 2010. A two-stage well placement optimization method based on adjoint gradient. In: SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Forouzanfar, F., Poquioma, W. E., Reynolds, A. C., 2015. A covariance matrix adaptation algorithm for simultaneous estimation of optimal placement and control of production and water injection wells. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers (SPE).
Forouzanfar, F., Reynolds, A. C., Jul. 2014. Joint optimization of number of wells, well locations and controls using a gradient-based algorithm. Chemical Engineering Research and Design 92 (7), 1315–1328.
Forouzanfar, F., Reynolds, A. C., Li, G., May 2012. Optimization of the well locations and completions for vertical and horizontal wells using a derivative-free optimization algorithm. Journal of Petroleum Science and Engineering 86–87, 272–288.
Gao, G., Zafari, M., Reynolds, A. C., Dec. 2006. Quantifying uncertainty for the PUNQ-S3 problem in a Bayesian setting with RML and EnKF. SPE Journal 11 (04), 506–515.
GeoQuest, S., 2014. ECLIPSE reference manual. Schlumberger, Houston, Texas.
Handels, M., Zandvliet, M., Brouwer, R., Jansen, J. D., 2007. Adjoint-based well-placement optimization under production constraints. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Hansen, N., Kern, S., Jan. 2004. Evaluating the CMA evolution strategy on multimodal test functions. In: Yao, X., Burke, E. K. (Eds.), Parallel Problem Solving from Nature – PPSN VIII. No. 3242 in Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 282–291.
Humphries, T., Haynes, R., Feb. 2015. Joint optimization of well placement and control for nonconventional well types. Journal of Petroleum Science and Engineering 126, 242–253.
Humphries, T. D., Haynes, R. D., James, L. A., Sep. 2013. Simultaneous and sequential approaches to joint optimization of well placement and control. Computational Geosciences 18 (3–4), 433–448.
Huyer, W., Neumaier, A., 1999. Global optimization by multilevel coordinate search. Journal of Global Optimization 14 (4), 331–355.
Isebor, O. J., 2009. Constrained production optimization with an emphasis on derivative-free methods. Ph.D. thesis, Stanford University.
Isebor, O. J., Ciaurri, D. E., Durlofsky, L. J., Oct. 2014a. Generalized field-development optimization with derivative-free procedures. SPE Journal 19 (05), 891–908.
Isebor, O. J., Durlofsky, L. J., Ciaurri, D. E., Aug. 2014b. A derivative-free methodology with local and global search for the constrained joint optimization of well locations and controls. Computational Geosciences 18 (3–4), 463–482.
Jansen, J. D., Fonseca, R. M., Kahrobaei, S., Siraj, M. M., Van Essen, G. M., Van den Hof, P. M. J., Nov. 2014. The egg model – a geological ensemble for reservoir simulation. Geoscience Data Journal 1 (2), 192–195.
Jones, D. R., Perttunen, C. D., Stuckman, B. E., Oct. 1993. Lipschitzian optimization without the Lipschitz constant. Journal of Optimization Theory and Applications 79 (1), 157–181.
Kampstra, P., 2008. Beanplot: A boxplot alternative for visual comparison of distributions. Journal of Statistical Software 28 (1).
Kennedy, J., 2011. Particle Swarm Optimization. In: Sammut, C., Webb, G. I. (Eds.), Encyclopedia of Machine Learning. Springer Science + Business Media, pp. 760–766.
Knudsen, B. R., Foss, B., Nov. 2013. Shut-in based production optimization of shale-gas systems. Computers & Chemical Engineering 58, 54–67.
Kolda, T., Lewis, R., Torczon, V., Jan. 2003. Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods. SIAM Review 45 (3), 385–482.
Lambot, S., Javaux, M., Hupet, F., Vanclooster, M., Nov. 2002. A global multilevel coordinate search procedure for estimating the unsaturated soil hydraulic properties. Water Resources Research 38 (11), 6–15.
Li, L., Jafarpour, B., 2012. A variable-control well placement optimization for improved reservoir development. Computational Geosciences 16 (4), 871–889.
Li, L., Jafarpour, B., Mohammad-Khaninezhad, M. R., Nov. 2012. A simultaneous perturbation stochastic approximation algorithm for coupled well placement and control optimization under geologic uncertainty. Computational Geosciences 17 (1), 167–188.
Li, R., Reynolds, A. C., Oliver, D. S., 2003. History Matching of Three-Phase Flow Production Data. SPE Journal 8 (04), 328–340.
Loshchilov, I., Jun. 2013. CMA-ES with restarts for solving CEC 2013 benchmark problems. In: 2013 IEEE Congress on Evolutionary Computation. Institute of Electrical & Electronics Engineers (IEEE).
Merlini Giuliani, C., Camponogara, E., Apr. 2015. Derivative-free methods applied to daily production optimization of gas-lifted oil fields. Computers & Chemical Engineering 75, 60–64.
Neumaier, A., 2008. MCS: Global optimization by multilevel coordinate search. https://www.mat.univie.ac.at/~neum/software/mcs/.
Oliveira, D. F., Reynolds, A., Oct. 2014. An adaptive hierarchical multiscale algorithm for estimation of optimal well controls. SPE Journal 19 (05), 909–930.
Onwunalu, J. E., 2010. Optimization of field development using particle swarm optimization and new well pattern descriptions. Ph.D. thesis, Stanford University.
Onwunalu, J. E., Durlofsky, L., Sep. 2011. A new well-pattern-optimization procedure for large-scale field development. SPE Journal 16 (03), 594–607.
Onwunalu, J. E., Durlofsky, L. J., 2009. Development and application of a new well pattern optimization algorithm for optimizing large scale field development. In: SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers (SPE).
Perez, R. E., Behdinan, K., Oct. 2007. Particle swarm approach for structural design optimization. Computers & Structures 85 (19–20), 1579–1588.
Pošík, P., Huyer, W., Pál, L., Dec. 2012. A comparison of global search algorithms for continuous black box optimization. Evolutionary Computation 20 (4), 509–541.
Rios, L. M., Sahinidis, N. V., Jul. 2013. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization 56 (3), 1247–1293.
Sarma, P., Aziz, K., Durlofsky, L. J., 2005. Implementation of adjoint solution for optimal control of smart wells. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Sarma, P., Chen, W. H., 2008. Efficient well placement optimization with gradient-based algorithms and adjoint models. In: Intelligent Energy Conference and Exhibition. Society of Petroleum Engineers.
Sarma, P., Durlofsky, L. J., Aziz, K., Chen, W. H., Mar. 2006. Efficient real-time reservoir management using adjoint-based optimal control and model updating. Computational Geosciences 10 (1), 3–36.
Shakhsi-Niaei, M., Iranmanesh, S. H., Torabi, S. A., Jun. 2014. Optimal planning of oil and gas development projects considering long-term production and transmission. Computers & Chemical Engineering 65, 67–80.
Siraj, M. M., Van den Hof, P. M., Jansen, J. D., 2015. Model and Economic Uncertainties in Balancing Short-Term and Long-Term Objectives in Water-Flooding Optimization. In: SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
Tavallali, M. S., Karimi, I. A., Teo, K. M., Baxendale, D., Ayatollahi, S., Aug. 2013. Optimal producer well placement and production planning in an oil reservoir. Computers & Chemical Engineering 55, 109–125.
Torczon, V., Feb. 1997. On the Convergence of Pattern Search Algorithms. SIAM Journal on Optimization 7 (1), 1–25.
Vaz, A. I. F., Vicente, L. N., Oct. 2007. A particle swarm pattern search method for bound constrained global optimization. Journal of Global Optimization 39 (2), 197–219.
Vlemmix, S., Joosten, G., Brouwer, R., Jansen, J. D., 2009. Adjoint-based well trajectory optimization in a thin oil rim (SPE 121891). In: 71st EAGE Conference & Exhibition.
Volkov, O., Voskov, D., 2014. Effect of time stepping strategy on adjoint-based production optimization. In: ECMOR XIV – 14th European Conference on the Mathematics of Oil Recovery. EAGE Publications.
Wang, C., Li, G., Reynolds, A. C., 2007. Optimal well placement for production optimization. In: Eastern Regional Meeting. Society of Petroleum Engineers.
Wang, C., Li, G., Reynolds, A. C., Sep. 2009. Production optimization in closed-loop reservoir management. SPE Journal 14 (03), 506–523.
Wu, Z., Reynolds, A. C., Oliver, D. S., 1999. Conditioning Geostatistical Models to Two-Phase Production Data. SPE Journal 4 (02), 142–155.
Yeten, B., Durlofsky, L. J., Aziz, K., 2002. Optimization of nonconventional well type, location and trajectory. In: SPE Annual Technical Conference and Exhibition. Society of Petroleum Engineers.
Yin, S., Cagan, J., 2000. An extended pattern search algorithm for three-dimensional component layout. Journal of Mechanical Design 122 (1), 102–108.
Zakirov, I., Aanonsen, S. I., Zakirov, E. S., Palatnik, B. M., 1996. Optimizing reservoir performance by automatic allocation of well rates. In: 5th European Conference on the Mathematics of Oil Recovery.
Zandvliet, M., Handels, M., van Essen, G., Brouwer, R., Jansen, J. D., 2008. Adjoint-based well-placement optimization under production constraints. SPE Journal 13 (04), 392–399.
Zhao, H., Chen, C., Do, S., Oliveira, D., Li, G., Reynolds, A., 2013. Maximization of a Dynamic Quadratic Interpolation Model for Production Optimization. SPE Journal 18 (06), 1–012.
Zhou, K., Hou, J., Zhang, X., Du, Q., Kang, X., Jiang, S., Aug. 2013. Optimal control of polymer flooding based on simultaneous perturbation stochastic approximation method guided by finite difference gradient. Computers & Chemical Engineering 55, 40–49.