| Title | An in-depth Review of Machine Learning Applications in Geothermal Reservoir Engineering |
|---|---|
| Authors | Jesse Nyokabi, Noah Oyugi, Githae Mbugua, Antony Hiuhu, Kumbuso Joshua Nyoni |
| Year | 2023 |
| Conference | World Geothermal Congress |
| Keywords | Geothermal, Reservoir Engineering, Machine Learning, Algorithms |
| Abstract | Reservoir engineering constitutes a major part of geothermal exploration and production studies. Its duties include conducting experiments, constructing appropriate models, characterizing reservoirs, and forecasting reservoir dynamics. However, traditional engineering approaches began to face challenges as the volume of raw field data increased. This pushed researchers toward more powerful tools for classifying, cleaning, and preparing data for use in models, which enables better data evaluation and, in turn, sounder decisions. In addition, simultaneous simulations are sometimes performed for optimization and sensitivity analysis during history matching. Multi-functional workflows are required to address all of these shortcomings. Upgrading conventional reservoir engineering approaches with faster CPUs or more powerful computers is insufficient, since it increases computational cost and remains time-consuming. Machine learning techniques have been proposed as a better solution because of their strong learning capability and computational efficiency. Recently developed algorithms make it possible to handle very large volumes of data with high accuracy. The most widely used machine learning approaches are Artificial Neural Networks (ANN), Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. In this study, these approaches are introduced together with their capabilities and limitations. The study then focuses on machine learning techniques in reservoir engineering calculations: reservoir characterization, PVT calculations, and optimization of well completion. In an ANN, input values are propagated forward through the layers until they reach the output layer. Normally, one hidden layer is sufficient for most problems; additional hidden layers usually do not improve model performance and instead raise the risk of converging to a local minimum while making the model more complex. The most common neural network is the feed-forward network, often used for data classification. The multilayer perceptron (MLP) learns by minimizing a global error function, typically a least-squares error. It uses the backpropagation algorithm to update the weights, performing gradient descent toward a (possibly local) minimum. |
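
The abstract's description of a feed-forward MLP trained by backpropagation can be illustrated with a short sketch. The following Python/NumPy example is a minimal illustration, not the authors' implementation: the layer sizes, learning rate, and synthetic data are assumptions chosen for demonstration. It shows a single hidden layer, forward propagation to the output layer, a least-squares error function, and gradient-descent weight updates via backpropagation, as described above.

```python
# Minimal sketch (not the paper's code): a one-hidden-layer MLP trained by
# backpropagation with gradient descent on a least-squares error.
# Layer sizes, learning rate, and synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: two input features -> one target value.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] ** 2 + 0.5 * X[:, 1]).reshape(-1, 1)

n_hidden = 8           # one hidden layer is usually sufficient
lr = 0.1               # gradient-descent step size
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros((1, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # Forward pass: values are propagated layer by layer to the output layer.
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2

    # Global error function: mean squared (least-squares) error.
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: gradients of the loss with respect to each weight.
    d_out = 2.0 * (y_hat - y) / len(X)          # dL/d(y_hat)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_hidden = (d_out @ W2.T) * h * (1.0 - h)   # sigmoid derivative
    dW1 = X.T @ d_hidden
    db1 = d_hidden.sum(axis=0, keepdims=True)

    # Gradient-descent weight update (may converge to a local minimum).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean squared error: {loss:.4f}")
```

A deeper network could be built the same way, but, as the abstract notes, extra hidden layers mainly add complexity and local-minimum risk for problems of this kind rather than improving accuracy.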