Big data gets even bigger

Big data is not just for the seismic segment. Elaine Maslin takes a look at other areas, such as condition monitoring, where the industry could make significant gains.

A view of the server room at BP’s Center for High-Performance Computing, which opened in 2013. Photo from BP.

 

Big data is here and there’s no stopping it, a speaker at a recent oil conference told the audience.

It is going to get faster and faster, in a lot of areas, he warned, adding that the pace of growth is such that it could easily swallow up a lot of resources.

Big data has indeed been growing. Total launched its 2.3-petaflop supercomputer Pangea in 2013, and the system went on to help analyze seismic data from Total’s Kaombo project in Angola in nine days, compared to the four and a half months it would have taken previously. The same year, Eni launched its latest supercomputer, with 3.3-petaflop capacity. Earlier this year, Petroleum Geo-Services topped them all and ordered a new 5-petaflop system.

But, these machines have mostly focused on seismic modeling and data interpretation. What about the broader industry?

Awash with data

Tor Jakob Ramsøy, director at McKinsey & Co., says that while the industry more or less invented big data 30 years ago to handle seismic modeling, it is “not there” in other areas, such as condition monitoring, where significant gains could be made.

“Information is something this industry is not understanding the value of,” he told the Subsea Valley conference in Oslo earlier this year. “You understand the assets in the ground – the geology. What is being done with the other information? [The industry is] just imposing more and more data tags, but there is no evidence of how it is making money out of it [the information gathered from these tags].”

The Johan Sverdrup development will have 70,000 data tags, he says, compared to 30,000 on the average North Sea platform. Yet, in production and operations departments, there are no data scientists. “The industry is drowning in information, but it doesn’t get to those who need to use it,” he says.

A lot of the data collected is also wasted. According to Ramsøy, some 40% of data is lost because sensors are binary, i.e. they simply show whether a parameter is above or below where it should be. That is important information, but it cannot be trended to aid decision making or planning. More data is then lost because there is no interface to enable real-time use of analytics.
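
To illustrate the point in the simplest possible terms, the sketch below (with made-up temperature readings and a hypothetical alarm limit, not data from any system cited here) compares what a binary threshold flag preserves with what a continuously logged value preserves:

```python
# Illustrative only: compare a binary threshold flag with a continuously
# logged reading for the same hypothetical sensor.
readings = [71.2, 72.8, 74.1, 76.0, 78.3, 81.0, 84.2]  # e.g. bearing temperature, degC
LIMIT = 85.0  # assumed alarm limit

binary_log = [r > LIMIT for r in readings]               # all a binary sensor reports
trend = [b - a for a, b in zip(readings, readings[1:])]  # what trending needs

print(binary_log)  # [False, False, ...] -> "everything is fine"
print(trend)       # steadily growing increments -> a problem is developing
```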

Further, data management is ad hoc and infrastructure, such as high-speed communication links, is limited, so little is streamed onshore. Forty percent of the data generated isn’t stored for future use, and the remainder is only stored on the rig, he says. The end result, he estimates, is that only about 0.7% of the data originally generated is actually used.

“It is a big paradox. Everyone is talking about big data, but the industry is fooling around with small data.”

Condition monitoring

An example of a high-quality image generated from Total’s Pangea supercomputer. Photo from Total. 

“We think there is an opportunity in condition monitoring,” Ramsøy says, as a way to improve the industry’s poor production efficiency record. A single 50,000 b/d platform could save north of US$200 million, according to a study McKinsey & Co. has undertaken.

The oil and gas industry could learn from the aerospace industry here, Ramsøy says. In aerospace, condition monitoring is used on turbines so that potential failures are spotted before they become problems. This changed the way airlines carried out maintenance and, in doing so, reduced maintenance by 30%. Because airlines have more information about how the turbines perform, they are also comfortable using different turbines, making them engine agnostic and, again, reducing costs, he says.
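
As a minimal sketch of the kind of trend-based check such condition monitoring relies on (the window size, threshold and synthetic vibration values below are illustrative assumptions, not aerospace practice), readings that drift away from their recent baseline can be flagged before an alarm limit is ever reached:

```python
from statistics import mean, stdev

def flag_anomalies(values, window=20, k=3.0):
    """Flag readings that sit more than k standard deviations away
    from the mean of the preceding window of readings."""
    flags = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        flags.append(abs(values[i] - mu) > k * sigma)
    return flags

# Synthetic vibration signal with a drift appearing at the end.
signal = [1.0 + 0.01 * (i % 5) for i in range(100)] + [1.5, 1.8, 2.2]
print(flag_anomalies(signal)[-5:])  # the final, drifting readings are flagged
```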

Other possibilities include real-time comparison of well characteristics, automated analysis to speed up seismic decision making and improve its quality, and an analytic engine to assess merger and acquisition prospects.

It isn’t always going to be simple. In the oil industry, equipment failure modes fall into three categories, he says: wearing out after a period of time or use, infant mortality, and random events. Each failure mode requires a different management methodology, which means different data use and handling strategies. This shouldn’t be a huge hurdle, however. While the majors are able to apply the computing power and systems needed to adopt such techniques, third-party data analytics companies from Silicon Valley have been popping up in Houston and Norway, which could help the smaller players in the market.
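
One textbook way to frame those three categories, not something Ramsøy spells out here, is through the shape parameter of a Weibull hazard rate: a shape below one gives the falling hazard of infant mortality, a shape of one gives a constant, random failure rate, and a shape above one gives the rising hazard of wear-out. A short sketch, with an arbitrary characteristic life of 1000 hours:

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# beta < 1: infant mortality, beta = 1: random failures, beta > 1: wear-out.
for label, beta in [("infant mortality", 0.5), ("random", 1.0), ("wear-out", 3.0)]:
    rates = [round(weibull_hazard(t, beta, eta=1000.0), 6) for t in (100, 500, 2000)]
    print(f"{label:16s} beta={beta}: hazard at 100/500/2000h -> {rates}")
```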

Predictability

One such company, which has been helping the industry use big data to solve a well-known and common problem, is software firm SAS Institute. David Dozoul, oil and gas advisor at SAS Institute, says the firm completed a large project around managing electric submersible pump (ESP) failure rates in 8000ft water depth in the US Gulf of Mexico.

The project’s aim was to make sure the ESPs operated within their operating envelope. That meant real-time monitoring, using a stream of data from different tags and from production, compared against an analysis of historian data in a live model able to react to the environment the ESP was working in at any particular time.
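
A toy version of that kind of envelope check might look like the sketch below; the tag names and limits are invented for illustration and are not taken from the SAS project:

```python
# Hypothetical operating envelope for an ESP, keyed by data tag.
ENVELOPE = {
    "intake_pressure_psi": (800.0, 2500.0),
    "motor_temp_degC": (0.0, 150.0),
    "vibration_g": (0.0, 2.0),
}

def check_envelope(sample):
    """Return any tags in a real-time sample that sit outside their envelope."""
    breaches = {}
    for tag, (low, high) in ENVELOPE.items():
        value = sample.get(tag)
        if value is not None and not (low <= value <= high):
            breaches[tag] = value
    return breaches

# One incoming sample from the real-time stream.
print(check_envelope({"intake_pressure_psi": 760.0, "motor_temp_degC": 131.0, "vibration_g": 1.4}))
# -> {'intake_pressure_psi': 760.0}
```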

The result was that the company running the ESPs was able to detect a failure three months before it occurred, enabling it to plan maintenance and order replacement parts and resources, increasing uptime and production.

Dozoul, who described the project at Subsea Valley, says all the available data was used and that the data sets didn’t need to be perfect, just clean. Some 6000 events per day were detected in the project via 17,000 sensors, with 310,000 calculations run across 430 million data points.

The key was being able to build a model that could identify events.

Dozoul says such modeling could also be used to improve 4D seismic data analytics, to help identify the important data and make the analytics faster. “Wherever there is any data, there is a value to analyze so you can model, predict and optimize the process,” he says.

Going subsea

Having condition monitoring from day one would also make extending the life of assets far easier, as it makes it easier to prove the condition, and therefore the ability, of the facility to continue operating beyond its design life, and it enables preventative maintenance, says Sigurd Hernæs, senior field development engineer, FMC Technologies.

Today, renewing a plan for development and operation on the Norwegian Continental Shelf, which is mandatory when a facility reaches the end of its design life, requires design data, installation and operation information, production data, typical corrosion and erosion information, etc., Hernæs says, adding that operators also need to know about any changing specifications, fatigue, damage from trawling, and obsolescence, e.g. in electronics. Getting these data for topsides is hard enough. Subsea, the difficulties are even greater.

Giving the example of a manifold that has been installed for decades, he says: “We would want to look at potential corrosion in bends inside the pipeline, calculate this theoretically, based on production data, and also assess degradation of polymers – such as Teflon and rubber in seals.”

Problems arise when there is missing production data (used to estimate corrosion), missing operational data showing tie-in forces during installation, and even missing original design documentation.

FMC Technologies is hoping to make this easier for future projects by introducing a data collector that gathers all the data on, and going into, the subsea system, including data imported from topsides control systems and multiphase data points such as leak detectors, and applies condition and performance monitoring (CPM). The system could then calculate erosion on a continuous basis, Hernæs says, speaking at Subsea Valley. “It would be used quite actively from day one in the life of the field, which means you are far less likely to lose the data,” he says. “It could also be applied to calculate the life of a valve on a Xmas tree to predict fatigue issues, as another example.”
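
A heavily simplified sketch of what calculating erosion on a continuous basis could look like is shown below; the rate formula, constants and sample values are placeholders and do not represent FMC Technologies’ CPM system:

```python
def erosion_increment(sand_rate_g_per_s, velocity_m_per_s, dt_s, k=1e-9):
    """Placeholder erosion model: wall loss grows with the sand rate and
    roughly with the square of mixture velocity over the time step dt_s."""
    return k * sand_rate_g_per_s * velocity_m_per_s ** 2 * dt_s

cumulative_loss_mm = 0.0
# Each tuple: (sand rate in g/s, mixture velocity in m/s), sampled hourly.
for sand, vel in [(0.2, 8.0), (0.3, 9.5), (0.1, 7.0)]:
    cumulative_loss_mm += erosion_increment(sand, vel, dt_s=3600)

print(f"estimated cumulative wall loss: {cumulative_loss_mm:.6f} mm")
```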

Big gets bigger

Whichever way you look at it, there is more data and a need for more computing power, but also, crucially, a need for strategies on what to do with the data and for the tools or software with which to analyze it.

According to tech-focused research firm Technavio, the smart oilfield IT services market is expected to grow at a CAGR of 5.93% over 2014-2019. Supercomputing in the oil and gas sector is expected to grow at a 7.8% CAGR, according to analysts at IDC.
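
For reference, a compound annual growth rate simply compounds year on year; the sketch below applies the 5.93% figure to an arbitrary index value of 100, which is not a Technavio market size:

```python
# 5.93% CAGR compounded over the five-year span 2014-2019.
base = 100.0   # arbitrary 2014 index value, for illustration only
cagr = 0.0593
years = 5
print(round(base * (1 + cagr) ** years, 1))  # ~133.4, i.e. roughly a third larger by 2019
```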

Faisal Ghaus, vice president of Technavio, says: “Considering the amount of data generation, service providers have come up with sophisticated algorithms and software tools to enhance the decision-making process and optimize productivity, return on investment and net present value of the project.”

Big data and big computing is here to stay.
