ECIM 2017 'Delivering Value - Applying Methods and Technology'

Earth Science Analytics attended ECIM 2017 in Haugesund this week. ECIM hosts the main E&P Data and IM Conference in Europe. For us, as data/geo scientists, this year's theme, 'Delivering Value - Applying Methods and Technology', was most relevant, and we really appreciated the program and the many discussions on data science in the E&P industry. Thanks to @DavidHolmesUK from Dell EMC for inviting us. We will definitely be back next year.

Here's the abstract for our contribution to the event:

Machine Learning Assisted Petroleum Geoscience


Petroleum geoscience is hard. Particularly when it comes to predicting properties away from known measurements. It is hard because it is so complex. It is hard because there are no simple rules, like Newton's laws of motion, that help us predict the spatial distribution of, for instance, reservoir properties, or where to look for the next big commercial discovery. It is basically hard to "codify" and "formalize" what we do as petroleum geoscientists. Until now, we have attacked these kinds of "hard to codify" problems by assigning teams of human experts to solve them. The combined experience of these experts from multiple disciplines helps us extract knowledge and insights from the data available. What if we could replicate this method with computers? Can we have the computer learn relationships directly from the data, from all relevant sources? This is exactly what machine learning is for, and it works remarkably well when there is enough structured, labelled data to train on.

The incredibly rich subsurface data and metadata on the NCS seem to be perfect for machine learning. We will soon be able to use this technology to build incredibly detailed, high-dimensional models using all our data. Even today, machine-learning models trained on smaller data sets enable petroleum geoscientists to better understand the spatial distribution of reservoir properties and hydrocarbons. This technology is available, and being used, today; it is not solely a technology of the future. Workflow efficiency is being improved by orders of magnitude, today. Prediction accuracy is exceeding that of traditional "best practice", today. Imagine what it will be like tomorrow, when really large data sets are available for training models.

This talk illustrates what is being done today with workflow examples and case studies. We discuss how machine learning can be applied to both reservoir characterization and exploration, at regional scale and at prospect level. Machine-learning technology and data science are exposing hidden relationships in measured data to geoscientists; they remove biases and provide metrics for predictions and estimations. We discuss the potential for value creation from applying machine learning to very large data sets, and the value to society that can be created by making data sources openly accessible.

The machine-learning technology applied today shows that interdisciplinary approaches lead to a deeper understanding of our prediction problems and provide a framework for creative solutions and better decisions. The future of decision-making technology for exploration and production is here, today, and we should integrate this technology into our workflows to enable data-driven and cost-effective decisions.

Does machine learning have the power to transform petroleum geoscience today, like Newton's calculus transformed physics more than 300 years ago?

Earth Science Analytics to present at NCS Exploration (May 10-11, 2017)


The NCS Exploration Conference takes place at Scandic Fornebu, right outside of Oslo (Norway), on May 10-11, 2017.

The backdrop for the conference is that few commercial discoveries are made despite many technical discoveries. Is this due to an exploration toolbox that is not sufficient for the proper de-risking of prospects?

Discover new ways of working in Earth Analytics and Headwave's presentation on machine learning.


Presentation: Machine-Learning to Reduce Uncertainty
Thursday, May 11, at 15:05 hrs
Presenters: Eirik Larsen - Earth Analytics, Diderich Buch - Headwave

Across industries and academia everyone agrees that risk and return are closely linked. It’s intuitive that new play concepts must involve more risk than near-field exploration in a mature area. What is equally well known is that risk is consistently poorly estimated and understood across most industries. On the downside, human bias makes us underestimate or misinterpret certain types of risk (80% of men believe they are better drivers than the average driver). On the upside, human creativity and willingness to accept risk are crucial in order to make new, major discoveries - yet the question is which types of risk we accept and which risks can be mitigated.

Most people agree that computers generally do a much better job than humans at consistently determining underlying, stochastic risk given a vast pool of data. Why not use this to our advantage to break the pattern of disappointing exploration results over the past 5-6 years? During the last five years, 187 exploration wells resulted in fewer than 10 commercial discoveries; 143 BNOK was invested in those wells, which discovered only 224 MSm3 rec. o.e. More importantly, what was known and what was not known when those decisions were made?

The NCS contains both highly mature areas, with well-understood behavior, and areas that are far less explored and understood. In many ways, this is the perfect scenario for computers. The incredibly rich subsurface data and metadata on the NCS are perfect for machine learning. Machine learning, crafted correctly and properly cross-validated, does not contain subjective bias. Machine learning builds incredibly detailed, high-dimensional models using all the data as input. This includes both structured data (e.g. logs, cores) and unstructured data (e.g. reports). Utilized correctly, and in the context of all available data, such AI-assisted workflows will enable petroleum geoscientists to better understand the tectono-stratigraphic development of sedimentary basins in general, and more accurately and quickly predict the nature and occurrence of hydrocarbons in sedimentary basins in particular. More specifically, these applications will enable geoscientists to apply probabilistic quantitative techniques to very large subsurface datasets, thereby facilitating a better understanding of the multidimensional and nonlinear relationships existing between some of the key geological properties (e.g. lithology distribution and properties such as porosity, fluid saturation, source-rock maturity and sealing capacity).
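To make the point about proper cross-validation concrete, here is a minimal sketch (in Python with scikit-learn) of how an out-of-sample error estimate can be obtained by holding out whole wells at a time; the file, well and curve names are illustrative assumptions, not a real data set.

```python
# Minimal sketch of cross-validated property prediction from log-derived
# features. File, well and curve names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

# Hypothetical table: one row per depth sample, with log responses, a
# core-calibrated porosity value to learn from, and the well it came from.
df = pd.read_csv("well_samples.csv")
features = ["GR", "RHOB", "NPHI", "DT"]

model = RandomForestRegressor(n_estimators=300, random_state=0)

# Holding out whole wells (rather than random rows) gives an honest
# out-of-sample error estimate and guards against overly optimistic fits.
cv = GroupKFold(n_splits=5)
scores = cross_val_score(
    model, df[features], df["PHI_CORE"],
    groups=df["WELL"], cv=cv, scoring="neg_mean_absolute_error",
)
print("Mean absolute error per fold:", -scores)
```

Holding out entire wells, rather than random samples, is what keeps the validation honest when neighbouring samples from the same well are strongly correlated.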

In addition, such analysis and decision making require real-time interaction with data, and software packages have to be designed for the level of interactivity required by the users. The best decisions are made when users see a direct response to their parameterizations and choices. Many software packages, however, simply fall short when it comes to feedback, with their traditional and inefficient “click, wait for result, change parameters, repeat” approach. Experts in geoscience should be assisted by software, as opposed to the current approach where it is the software that drives the users. Software should keep data live at all times and provide entirely dynamic workflows that honor the acyclic nature of geoscience, improving productivity.
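As a toy illustration of what such a live, dynamic workflow can look like under the hood, the Python sketch below treats computation steps as nodes in a directed acyclic graph: changing one parameter invalidates and recomputes only the steps downstream of it, so feedback is immediate. This is a generic illustration, not a description of any particular software package.

```python
# Toy sketch of a reactive workflow graph: each Node caches its result and is
# recomputed only when one of its upstream inputs changes. The node names and
# the toy workflow at the bottom are purely illustrative.

class Node:
    def __init__(self, func=None, *inputs):
        self.func = func
        self.inputs = list(inputs)
        self.dependents = []
        self.value = None
        self.dirty = True
        for node in self.inputs:
            node.dependents.append(self)

    def set(self, value):
        # Used for source/parameter nodes: assign a value directly.
        self.value = value
        self.dirty = False
        self._invalidate_downstream()

    def _invalidate_downstream(self):
        for node in self.dependents:
            node.dirty = True
            node._invalidate_downstream()

    def get(self):
        # Recompute lazily, and only if something upstream has changed.
        if self.dirty:
            self.value = self.func(*(n.get() for n in self.inputs))
            self.dirty = False
        return self.value


# Toy workflow: porosity cutoff -> filtered samples -> mean porosity.
samples = Node(); samples.set([0.05, 0.12, 0.21, 0.30])
cutoff = Node(); cutoff.set(0.10)
filtered = Node(lambda s, c: [v for v in s if v >= c], samples, cutoff)
mean_phi = Node(lambda f: sum(f) / len(f), filtered)

print(mean_phi.get())   # computed once: 0.21
cutoff.set(0.20)        # only the affected downstream steps are invalidated
print(mean_phi.get())   # recomputed immediately: 0.255
```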

This talk provides insight into the process of machine learning on public data, further learning and updating from non-public data (within an oil company), and the future use of data-driven analytics within an application that facilitates both data-driven and creative human exploration of multi-dimensional data at regional, basin or prospect scale. It also shows how such processes can be integrated into sophisticated, cutting-edge software technologies that are prepared to tackle the challenges of tomorrow and enable energy companies to drastically reduce costs and improve software and end-user efficiencies, now and in the future.

For more information, please contact

Eirik Larsen, Earth Analytics, +47 948 74 324, eirik.larsen@earthanalytics.no
Diderich Buch, Headwave, +47 922 90 446, diderich.buch@headwave.com

Machine Learning in Petroleum Geoscience: Constructing EarthNET

Under-utilization of data due to a lack of time, and insufficient calibration of geophysical methods, are just two of the causes behind the disappointing exploration results on the NCS during the last 5-6 years. Our failure to utilize and integrate available data is partly a result of the inefficiency of traditional methods of data analysis, which typically require large amounts of human and financial resources to be spent, and the deployment of costly analytical techniques. The as-yet untapped potential of efficient analytical techniques that utilize all relevant data encourages us to further develop novel data analysis methods.


Our approach to developing more efficient and precise analytical techniques is based on artificial intelligence (AI) and machine learning (ML) technology; i.e. algorithms that can learn and make predictions directly from data. One key advantage of ML in science is the technology’s ability to efficiently handle very large volumes of multidimensional data, thus saving time and cost and allowing human resources to be deployed to other, perhaps more creative, tasks. Another advantage is ML’s ability to detect complex, multidimensional patterns that are not readily visible to humans.

We aim to solve the data under-utilization problem described above by implementing ML techniques in petroleum geoscience. By doing so, we aim to provide more reliable and efficient methods for data analytics, and ultimately reduce the number of costly, unsuccessful wells.

Previous studies of the application of ML to petroleum geoscience problems have typically focused on a single task using limited data types. For example, ML-based studies using borehole data have allowed us to predict sedimentary facies, porosity, permeability and fluid saturation, whereas those using seismic data have permitted identification and prediction of reservoir architecture by automatic labeling of geological features observed in seismic attributes. More recently, the as-yet-unrealized potential of ML to help analyse integrated subsurface data sets has been illustrated (e.g. prediction of petrophysical properties, such as resistivity, from a combination of wells and seismic attributes).
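As an illustration of the "facies from borehole data" task mentioned above, a typical supervised-classification set-up might look like the sketch below; the curve names, facies labels and input file are hypothetical placeholders.

```python
# Minimal sketch of facies classification from log responses with
# core-described labels. All names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("labelled_intervals.csv")      # hypothetical file: logs + core facies
X = df[["GR", "RHOB", "NPHI", "PEF", "DT"]]     # illustrative log curves
y = df["FACIES"]                                # core-described facies labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```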

We are developing machine-learning technology that can learn from, and make predictions based on, a combination of wireline log data and lab-derived measurements. These algorithms are used to predict rock and fluid properties that are not directly measured by the wireline logging tools, in wells (or parts of wells) from which lab data are not available. More specifically, we are researching methods for prediction of property data related to: i) source rocks (e.g. TOC, HI, and vitrinite reflectance), ii) reservoir rocks (e.g. porosity, permeability, and fluid saturation), and iii) seal rocks (e.g. fracture pressure and capillary properties). We also focus on predicting electrical properties (e.g. conductivity, horizontal and vertical resistivity) and acoustic properties (e.g. shear velocity and elastic parameters), which are used as input to ML-assisted geophysical predictions.
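A minimal sketch of this log-to-lab prediction step, assuming one set of intervals with lab-measured TOC for training and another with only wireline curves, could look like the following (all file and column names are hypothetical):

```python
# Minimal sketch: learn a mapping from wireline logs to a lab-measured property
# (here TOC), then predict it in intervals that lack lab coverage.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

logs = ["GR", "RHOB", "NPHI", "DT", "RDEP"]            # illustrative wireline curves

labelled = pd.read_csv("intervals_with_lab_toc.csv")   # hypothetical: logs + lab TOC
unlabelled = pd.read_csv("intervals_without_lab.csv")  # hypothetical: logs only

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(labelled[logs], labelled["TOC_LAB"])

# Predicted TOC for the intervals that lack lab coverage, as a continuous curve.
unlabelled["TOC_PRED"] = model.predict(unlabelled[logs])
unlabelled.to_csv("toc_prediction.csv", index=False)
```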

The geophysical part of our technology development is focused on relationships between well data and remote-sensing (seismic and CSEM) data. We investigate how algorithms trained on different combinations of seismic attributes and well data affect the accuracy of rock- and fluid-property prediction. We explore how algorithms can be trained on a combination of seismic data, CSEM data, well data, and rock and fluid properties in relatively data-rich ‘reference’ areas, in order to predict rock and fluid properties based on both seismic and CSEM data where data are sparse. By integrating ML methods with current methods of direct lithology and fluid prediction from geophysical data (e.g. seismic AVO and seismic and CSEM inversion), we aim to mitigate the non-uniqueness problem inherent to each individual geophysical technique. Our ML approach will provide calibration data for geophysical methods, by making large-scale a priori rock- and fluid-property data accessible.
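The attribute-combination experiments can be prototyped along the lines below: the same regressor is cross-validated on a seismic-only and a seismic-plus-CSEM feature set extracted at reference-area wells, to quantify what the extra data type contributes. Column names and the water-saturation target are illustrative assumptions.

```python
# Minimal sketch of comparing feature sets (seismic only vs. seismic + CSEM)
# for rock/fluid-property prediction. All column names are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

df = pd.read_csv("reference_area_wells.csv")   # hypothetical: attributes + Sw at wells

feature_sets = {
    "seismic only": ["AVO_INTERCEPT", "AVO_GRADIENT"],
    "seismic + CSEM": ["AVO_INTERCEPT", "AVO_GRADIENT", "CSEM_RESISTIVITY"],
}

cv = GroupKFold(n_splits=5)   # hold out whole wells to avoid information leakage
for name, cols in feature_sets.items():
    scores = cross_val_score(
        GradientBoostingRegressor(random_state=0),
        df[cols], df["SW"], groups=df["WELL"], cv=cv,
        scoring="neg_mean_absolute_error",
    )
    print(f"{name}: mean absolute error = {-scores.mean():.3f}")
```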

We strongly believe that, by researching, developing and deploying this technology, we will provide more efficient and accurate analytical methods that can ultimately transform petroleum geoscience into a much more data-driven science.

Artificial Intelligence-Assisted Petroleum Geoscience: Next-Generation Exploration Technology

Large volumes of hydrocarbons (c. 2.9 GSm3 rec. o.e.; NPD 2016) remain to be found on the NCS. Finding and extracting these hydrocarbons is difficult (187 exploration wells resulted in <10 commercial discoveries during the last five years) and expensive (143 BNOK was invested in those 187 exploration wells, which discovered only 224 MSm3 rec. o.e.). We believe that under-utilization of data, and of the existing subsurface knowledge base, is at least partly responsible for the disappointing exploration performance. Furthermore, we argue that the incredibly rich subsurface data set available on the NCS can be used much more efficiently to deliver much more precise predictions, and thus to support more profitable investment decisions during hydrocarbon exploration and production.

We argue that Artificial Intelligence (AI), i.e. Machine Learning-based technology, which leverages algorithms that can learn and make predictions directly from data, represents one way to contribute to exploration and production success on the NCS. One key advantage of AI is the technology’s ability to efficiently handle very large volumes of multidimensional data, thus saving time and cost and allowing human resources to be deployed to other, perhaps more creative, tasks. Another advantage is AI’s ability to detect complex, multidimensional patterns that are not readily detectable by humans.

AI-assisted geoscience applications and workflows will enable petroleum geoscientists to better understand the tectono-stratigraphic development of sedimentary basins in general, and more accurately and quickly predict the nature and occurrence of hydrocarbons in sedimentary basins in particular. More specifically, these applications will enable geoscientists to apply more quantitative techniques to very large subsurface data sets, thereby facilitating a better understanding of the multidimensional and nonlinear relationships existing between some of the key geological properties (e.g. lithology distribution and properties such as porosity, fluid saturation, source-rock maturity and sealing capacity).

In areas where geophysical methods are associated with low resolution and reliability, for example at significant burial depths, we need alternative methods for predicting rock and fluid properties. AI techniques help identify and map relationships between rock properties and the broader geological context in which they occur. When the ML algorithms have learned these relationships directly from data, they can be used to predict (quantitatively and probabilistically) rock properties based on regional geological data.
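One simple way to make such predictions probabilistic rather than single-valued is quantile regression; the sketch below trains three gradient-boosting models with a quantile loss to bracket a rock property with low, median and high estimates. The regional features, targets and files named here are hypothetical.

```python
# Minimal sketch of probabilistic property prediction via quantile regression.
# Feature, column and file names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

train = pd.read_csv("regional_training_points.csv")
features = ["DEPTH", "ISOPACH", "TEMPERATURE", "DIST_TO_KITCHEN"]

# One model per quantile: the 0.1 and 0.9 models bracket the 0.5 (median) one.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0).fit(
        train[features], train["POROSITY"]
    )
    for q in (0.1, 0.5, 0.9)
}

targets = pd.read_csv("prospect_locations.csv")
for q, m in models.items():
    targets[f"POROSITY_Q{int(q * 100)}"] = m.predict(targets[features])
print(targets.head())
```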


To tackle this challenge, we need to quantify the multidimensional relationships between the extracted rock and fluid properties and regional data such as structural setting, stratigraphic setting, and other map-based data such as sub-crop, isopach, depth, provenance, temperature and pressure maps. Successful predictions based on regional geological data must be based on an understanding of relationships between multiple geological parameters and their interactions.

AI-assisted petroleum geoscience will enable efficient use of large amounts of hitherto under-utilized subsurface data, and handling of multidimensional parameter sets in a purely data-driven way; this is not possible with the technology and workflows available under the current paradigm. Machine learning-based technology will reduce human bias, which is currently pervasive in the petroleum geosciences, and enable much more data-driven analytics and investment decisions in the E&P industry.
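Assembling such map-based inputs into a feature table is mechanically simple: gridded surfaces are sampled at well (or prospect) locations and stacked column by column. The sketch below does this with SciPy on synthetic grids; everything in it is made up purely for illustration.

```python
# Minimal sketch of building regional, map-based features: gridded surfaces
# (depth, isopach, ...) are sampled at well locations and stacked into one
# feature table. The grids and wells below are synthetic placeholders.
import numpy as np
import pandas as pd
from scipy.interpolate import RegularGridInterpolator

x = np.linspace(0, 100_000, 201)          # easting grid nodes (m), synthetic
y = np.linspace(0, 80_000, 161)           # northing grid nodes (m), synthetic
maps = {
    "DEPTH": np.random.default_rng(0).uniform(2000, 4000, (201, 161)),
    "ISOPACH": np.random.default_rng(1).uniform(10, 300, (201, 161)),
}

wells = pd.DataFrame({"X": [12_500, 54_200], "Y": [33_000, 61_500]})

# Sample every map at every well location to build the regional feature table.
for name, grid in maps.items():
    sampler = RegularGridInterpolator((x, y), grid)
    wells[name] = sampler(wells[["X", "Y"]].values)

print(wells)
```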

The Earth Analytics Mission

Despite the current downturn in the E&P sector, it is clear that large quantities of hydrocarbons remain to be found, and that petroleum as an energy resource will be needed for many years to come. Disappointing exploration results, worldwide as well as on the NCS, illustrate how challenging it is to find commercial accumulations of hydrocarbons. Dry wells negatively impact the ‘bottom lines’ of energy companies struggling to cope with low prices, as well as the economy of governments dependent on tax revenue from the industry. How can we improve the success rate for the benefit of industry and society?

Although policy makers can mitigate the downturn by reducing taxes and making more acreage available, thus making it cheaper for companies to drill more, potentially dry, wells, we argue it is better to encourage the industry to drill fewer wells (at least partly for environmental reasons), but to drill them based on better predictions that make them more likely to be successful. Technological advances in geophysical data acquisition and processing (e.g. AVO, CSEM) have spawned several waves of exploration, and delivered discoveries in areas where these technologies provide sufficient reliability. Now, in the midst of the data-science renaissance, we argue that the time has come for radically new data-analytics methods to leverage the power of artificial intelligence and machine learning in order to dramatically improve exploration success. The ever-increasing volume of subsurface data is exposing exploration geoscientists and managers to a formidable challenge: how to extract the right intelligence from the data, and how to use it to make better predictions? We argue that these large subsurface data sets are under-utilized due to a lack of methods with which to handle such large volumes of data; this calls for entirely new methods of knowledge extraction and data-driven predictive analytics.

The Earth Science Analytics AS mission is to improve exploration success in challenging geological settings by developing Geoscience-driven Machine Learning workflows and software capable of delivering high-quality, data-driven predictions based on complex relationships within large, multidimensional data sets. A large fraction of the yet-to-be-found hydrocarbons likely resides in parts of sedimentary basins where geophysical techniques such as AVO, seismic inversion and CSEM are less reliable, due to i) deep burial, ii) thin reservoirs (or HC columns), and/or iii) similar acoustic and electric properties of the hydrocarbon accumulations and their surrounding rocks.

Traditional approaches to exploration in areas where so-called DHIs are absent or not expected involve: i) interpretation of stratigraphic surfaces and geological structures from seismic, ii) assessment of lithology and rock properties from neighboring wells, iii) identification of geological (structural, stratigraphic, sedimentological, petrophysical) features and trends, and iv) construction of conceptual models. Statistical analyses based on these data and conceptual models are used to make probabilistic predictions about: i) presence and quality of reservoir rocks, ii) presence and quality of sealing rocks, iii) presence and quality of source rocks, iv) presence and size of hydrocarbon traps, and v) presence of migration routes from HC kitchen to trap. Numerous, imperfectly quantified, and often inter-related geological features are used as inputs to the probabilistic predictions of properties of the hydrocarbon prospects. When evaluating prospects, geoscientists (or humans in general) are not able to fully understand the multi-dimensional relationships of all the features that are relevant for the predictions they are making. As a result, prospect evaluation is based on predictions that do not fully exploit all the available data or the multi-dimensional relationships between all observed features.
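For context, the individual chance factors listed above are commonly combined into a single geological chance of success by multiplying them, under the simplifying assumption that they are independent. A worked toy example, with made-up numbers:

```python
# Toy example: combining probabilistic chance factors into a geological chance
# of success, assuming (simplistically) that the factors are independent.
# The probabilities below are made up for illustration only.
chance_factors = {
    "reservoir presence and quality": 0.7,
    "seal presence and quality": 0.8,
    "source presence and quality": 0.6,
    "trap presence and size": 0.5,
    "migration from kitchen to trap": 0.7,
}

probability_of_success = 1.0
for factor, p in chance_factors.items():
    probability_of_success *= p

print(f"Geological chance of success: {probability_of_success:.2f}")  # about 0.12
```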

Recent developments in Machine Learning (including, but not limited to, deep neural networks) enable accurate and efficient predictions in complex multi-dimensional systems. Earth Science Analytics AS is developing software and workflows that leverage Machine Learning in order to extract knowledge from, and make predictions based on, large subsurface data sets. We believe that our data-driven approach will enable geoscientists and exploration strategy makers to make more precise predictions, more efficiently, while using all available data.