A subsea production tree that mounts at the wellhead. Source: FMC Technologies
Peter Welander examines how the challenges of networking disparate elements are intense, but workable.
The offshore oil industry is evolving rapidly, as oil and natural gas begin to flow from deposits considered impossible to tap just a few years ago. Production companies around the world are working to exploit fields in remote locations, in very deep water, and below huge amounts of rock.
At the same time, more production equipment is moving off floating platforms to the sea floor. In years past, a well that produced a mixture of oil, gas, and water would have to send the mixture to the platform for separation. Now that process can occur at the wellhead, with separate streams of oil and gas pumped to the surface.
The costs of maintaining offshore platforms and the people who operate them are sizable, and remain a driver for operators to find ways to move equipment and people off platforms - and even eliminate the need for them entirely. “It’s a whole new business case for developing oil and gas,” Ann Christin Gjerdseth, director, controls and data management for FMC Technologies, says. “The business case for placing production equipment subsea is a very favorable one with regards to cost and the environmental footprint. But moving more to the seabed brings a holistic challenge around automation, and that infrastructure needs to be designed to use more functionality. That’s driving new thinking around automation.”
Integration on a grand scale
Getting oil from a wellhead on the sea floor to an onshore terminal, and eventually, a refinery, requires different processes that have to work together efficiently and safely. Simultaneously, the data also has to go to the enterprise.
When a main automation contractor or system integrator ties together such a large-scale project, invariably there will be some “black boxes” thrown into the mix. A black box could be a major piece of equipment or a skidded subsystem (skid) designed to perform a specific function. It remains self-contained, with its own controller and small-scale automation system.
Such a skid can perform its function flawlessly; on the other hand, its designers might not be aware of how it should fit into the larger automation scheme for the overall operation. To make it work, the integrator may have to unravel the system’s programming and determine how to make it communicate with the larger system.
Some boxes are blacker than others, so the task might be simple or it might be a headache, but it probably involves writing specialized programming code just to talk to that device.
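That specialized glue code is often little more than a translation layer between the skid's raw data and the plant-wide tag namespace. As a hedged sketch (the register addresses and tag names here are invented for illustration; a real skid's map comes from its vendor documentation), such an adapter might look like this:

```python
# Hypothetical adapter that maps a black-box skid's Modbus-style
# register map onto the tag names the larger control system expects.
# All addresses and names below are illustrative, not from a real skid.

SKID_REGISTER_MAP = {
    40001: "separator_inlet_pressure_kPa",
    40002: "separator_level_pct",
    40003: "pump_run_status",
}

def translate_skid_data(raw_registers):
    """Map a raw register poll onto the plant-wide tag namespace."""
    translated = {}
    for address, value in raw_registers.items():
        tag = SKID_REGISTER_MAP.get(address)
        if tag is None:
            continue  # register not exposed to the larger system
        translated[tag] = value
    return translated

# Example: one raw poll of the skid controller.
raw = {40001: 350, 40002: 62, 40003: 1}
print(translate_skid_data(raw))
```

Multiply that by every skid on a platform, each with its own map and quirks, and the integration effort adds up quickly.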
An installation rendering of a subsea separation system used in Petrobras’ Marlim field. Source: FMC Technologies
Technology differences
One challenge to offshore integration is that relatively few companies produce equipment for offshore installations. The companies that dominate the market in subsea equipment can be counted on one hand.
Lee Swindler, oil and gas program manager for Maverick Technologies, has worked on these projects firsthand. “I think it is because offshore rigs have tended to be built as a single isolated entity,” he says. “Because of that, they tend to tolerate equipment that doesn’t play well with others. They go to a single engineering company that uses a few select suppliers to design and build the entire rig, whereas onshore facilities tend to be put together more piecemeal.
“That is certainly a big difference with downstream, refinery-type plants where there are established communication and interface protocols that are commonly used and it allows you to mix-and-match suppliers and still end up with an integrated system. (Offshore) you’re left trying to use a single supplier solution in a lot of cases in order to get it to integrate, and even then it’s probably only doing part of what you need, or trying to figure out how to make things work together on your own, which can be time consuming and difficult.”
A platform isn’t a single black box, but a whole series of small, isolated systems that have their own controllers, Paul Bonner, oil and gas vertical leader for Honeywell Process Solutions, explains. “You’ve got compressors on platforms and you’ve got a third-party black box compressor control system,” Bonner said. “There might be an anti-surge control system, or a separate safety interlock system, and so on. They’re all different systems that you think of as being outside the distributed control system (DCS) but have to interface with it, and that presents a big challenge for getting the data back to the shore in a coherent form.”
Look in the mirror
Others look at the situation and say these integration challenges are a self-inflicted problem — companies have to deal with black boxes because they buy black boxes. Gjerdseth says there are plenty of examples of alternate systems that integrate without all the headache because the operators chose an easier path.
“If you go to the Norwegian Continental Shelf, you will see that the tradition there for many years has been to integrate a lot of the systems and automation,” she said. “There aren’t many black boxes. But if you go into the Gulf of Mexico (GOM) and even the UK, you will see that they have used many suppliers and have a lot of different vendors’ systems as a result. Much depends on the philosophy of your whole system: whether you want one automation platform, or if you want every system to be proprietary.”
Gjerdseth’s contention is that when planned well, a project may still have small pockets of isolated systems that need some extra work to bring into the larger control system. That effort can be minimized if there is a conscious effort from the outset to choose systems designed to interact.
On the other hand, poor planning has the opposite effect. Ben Trombatore, project manager for Mustang Automation & Control, has seen how appropriate care at a critical time can avoid problems when working with control systems connected to packaged equipment.
“The reasons for these challenges are not so much overcoming the technical hurdles, but rather not providing adequate attention to details,” he says. “As any systems integrator will attest, data interface issues such as preferred communications protocol, data map structure, IP addresses, data security handling, time syncing, cabling specifications, and so on, need to be reviewed thoroughly and understood by both the equipment supplier and the systems integrator prior to start of work. Too often, companies rush to issue (purchase orders) to their equipment vendors with little or no attention to data communications. If these deficiencies are not caught and fixed during the factory acceptance test, then they will definitely pop up during commissioning and start-up, resulting in additional integration costs and quite possibly delays in start-up.”
Creating operational integration
Memories of the Deepwater Horizon oil spill are still fresh enough to keep safety front and center for offshore projects. Integrating all the disparate control systems involved has to provide a means for operators to respond quickly to abnormal situations. Supporting this typically involves the creation of one overarching control strategy that can handle all the control, alarming, safety, and reporting functions for this huge collection of hardware and disparate systems designed with varying degrees of cooperation in mind. Without that, there can be major losses of efficiency and potential for disaster.
“If you’re trying to piece together disparate parts to present a unified picture to the operator, it is more of a challenge,” Swindler says. “Your goal as an automation engineer is to try and get the operator the information he needs to make the right decision at the right time, but you have to do it in a consistent manner so that the technology doesn’t block his view and inhibit him knowing what’s really going on out there. That’s the challenge when working with systems that don’t integrate together. If it was easy, anybody could do it.”
Eugene Spiropoulos, senior technical solutions consultant for Yokogawa Corporation of America, agrees. “You have disparate systems. You have the larger DCS you’re trying to develop looking one way, but then you have the human-machine interface (HMI) interfacing with the skid looking another way. What you provide to the operators should be seamless. Not just the start and stop of the system. The faceplates and graphics of the system, the look and feel and style of the system should have the same style as the DCS itself.”
The larger issue is that all the different systems have their own way of dealing with information. Each has its own way of treating diagnostic functions, but all of that has to be brought together into a unified system.
Honeywell, ABB, Emerson, Kongsberg, Siemens, and Yokogawa have all accepted the reality at hand. Since FMC Technologies built most of the subsea equipment, Bonner says, it is easier to work with the company and use its system. Until about a year ago, Honeywell used Modbus to communicate with the wellhead skid, but that was a very manual operation that required a lot of work for optimal functionality. DCS suppliers have since worked with FMC Technologies to eliminate that problematic step by adopting a new protocol.
“We took the FMC-722 protocol, and we’ve embedded that in our controller,” he said. “So rather than having to go through a set of intermediate servers and other approaches, we now integrate natively with FMC Technologies. We can now take our DCS and plug it straight into our controller and talk directly to the FMC topside processing unit (TPU) and pretty well provide all the functionality of the configuration blocks.”
MDIS wants to create a bridge between subsea vendor hardware and the DCS on the platform and onshore systems. Source: OTM
Creating enterprise integration
Like every other area of process manufacturing, offshore oilfield operating companies are trying to create a higher level of information integration from wellhead to terminal to executive suite. Planners and others on the enterprise level expect current production and information to be available anywhere. Getting all that data gathered and turned into useful information in an environment where even the most basic communication is a challenge calls for a variety of solutions.
“Getting information from daily operations to the enterprise level involves converting and paring down information into something that’s specific to the business,” Spiropoulos says. “At the top level, the guys that are working with SAP, Oracle, and so on, are not interested in daily flow rate. They’re not interested in what the pressure was at noon. They want to know what the business effectiveness was, what the profitability was, what the energy management situation was. This involves getting information out of our control systems and also out of the third-party sub-systems, and cross-converting the information into (key performance indicators, or KPIs) for the plant or the process that span different systems.
“This is where it becomes really valuable to guys at the enterprise level. The same way we automate the process, we want to automate the information. In the same way that the process controller knows what to do, the information layer knows what to do. We specify what is important for the KPIs and the planning layer. As the information is collected, our information layer converts that information into what’s useful and pushes it to the relevant people or the relevant systems.”
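The reduction Spiropoulos describes can be sketched in a few lines. This is an illustration of the concept only, not Yokogawa's actual information layer; the tag names and KPI definitions are invented:

```python
# Sketch: reduce a day of raw control-system samples to two
# business-level KPIs. Tag names and KPI choices are illustrative.

from statistics import mean

def daily_kpis(samples):
    """Condense raw per-sample readings into enterprise-facing figures."""
    flows = [s["flow_rate_bpd"] for s in samples]
    producing = [f for f in flows if f > 0]
    return {
        # average production rate over the day, barrels per day
        "avg_production_bpd": round(mean(flows), 1),
        # fraction of samples in which the well was actually flowing
        "uptime_pct": round(100.0 * len(producing) / len(samples), 1),
    }

samples = [{"flow_rate_bpd": 1200.0}, {"flow_rate_bpd": 1180.0},
           {"flow_rate_bpd": 0.0}, {"flow_rate_bpd": 1210.0}]
print(daily_kpis(samples))
```

The raw flow rates never leave the information layer; only the condensed figures are pushed up to the planning systems.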
Data on the move
Integrating all these systems involves moving data. Getting it from the sea floor to the platform is manageable enough using fiber optic cables, but traversing long distances from a platform to an onshore facility can be more challenging. The amount of data has increased, but the ways of moving it have not kept pace. Bandwidth is a major bottleneck. Bonner says connectivity in the GOM has been a struggle.
“With the exception of one major company that has invested in fiber optic lines out to their platforms, everybody else in the Gulf is relying on one of two satellite companies,” he says. “They’re getting a data rate of around 1-1.2Mb/sec, which is about what you probably get from your home internet provider. You’ve got massive amounts of data on the platform, and massive amounts of data on shore, and you’ve got basically a drinking straw between two fire hoses. Given that amount of bandwidth, you have to be very selective about what you pipe back to the shore.”
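One common way to be selective, sketched here with invented tag names and thresholds, is a deadband filter: a value crosses the satellite link only when it has changed enough since the last transmission to be worth the bandwidth.

```python
# Sketch of a deadband filter for a bandwidth-limited link.
# Tag names and thresholds below are illustrative, not from a real system.

class DeadbandReporter:
    def __init__(self, deadbands):
        self.deadbands = deadbands   # per-tag change thresholds
        self.last_sent = {}          # last value transmitted per tag

    def filter(self, tag, value):
        """Return the value if it should be transmitted, else None."""
        band = self.deadbands.get(tag, 0.0)
        prev = self.last_sent.get(tag)
        if prev is not None and abs(value - prev) <= band:
            return None  # change too small to justify the bandwidth
        self.last_sent[tag] = value
        return value

reporter = DeadbandReporter({"wellhead_pressure_kPa": 5.0})
print(reporter.filter("wellhead_pressure_kPa", 100.0))  # first value: send
print(reporter.filter("wellhead_pressure_kPa", 103.0))  # within band: suppress
```

Filters like this trade temporal resolution onshore for a feed that fits through the "drinking straw."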
Comprehensive solution
When enough companies contend with the same problems again and again, often they will get together to create some sort of standard that can help smooth out the differences. Such has been the case for offshore installations: a group of organizations has worked to create more than one standard for integration.
One group that is currently active is the joint industry group MCS-DCS Interface Standardization (MDIS) network, which works towards bridging subsea vendor hardware and the DCS on the platform and onshore systems. Rachael Mell is a consultant for OTM Consulting, a division of the Sagentia Group, which manages the MDIS program. “We hear, time and time again, that the operating companies want more standardization, especially across projects, because every time you start a new project offshore, you have to do a lot of engineering work just to make sure the equipment you want to use from different vendors will click together and talk to each other,” Mell says.
The group includes participation from the operating companies, subsea vendors, and DCS vendors and integrators.
“It’s fairly even between the three groups and we have major representation from the vendors and operators,” Mell said. “Given the nature of the work we’re doing, the vendors get involved a bit more, and then we rely on the operators to make a decision when there are things the vendors can’t decide on. The operators are the customers, so they get their say at the end of the day.”
One of the major steps the group took was identifying a single communication standard that all participants could use as a unified platform. In 2013, it selected the OPC Foundation’s UA (Unified Architecture) from a list of eight candidates. OPC UA integrates the functionality of the individual OPC Classic specifications, including communication between software models, into one extensible framework. One of the main reasons MDIS chose OPC UA is its ability to work with objects.
Thomas Burke, OPC Foundation president and executive director, sees this adoption as a critical step as MDIS builds a strategy to provide interoperability across multivendor, multiplatform systems.
“One of the main features of OPC UA is the information model architecture, which allows suppliers to model complex information into the OPC UA namespace,” he says. “A vendor can now build a product that understands the intricate details of a complex data model, and using OPC UA allows other applications to connect and understand the syntax and semantics of the data/information.
“We have been collaborating with upstream oil and gas standard organizations all the way down to subsea vendor oil and gas providers. This will allow easy interoperability and understandability of the data and the metadata associated with all of the simple and complex objects in the architecture.”
Paul Hunkar, president of independent software consulting firm DS Interoperability, has been assisting with the formation of the MDIS standard. From his viewpoint, the challenge OPC UA helps solve isn’t about control strategy; it’s about interfacing.
“In older systems interfacing was accomplished by mapping tags,” he says. “This tag mapping was prone to errors and incorrect assumptions. Also multiple communication interfaces (protocols) were used, and if a vendor didn’t support one of the interfaces, there were additional complications. OPC UA provides a robust high-speed communication infrastructure. For MDIS, it is expected that the interface will be operating as part of a gateway or controller in the subsea system and in a DCS controller or programmable logic controller (PLC) on the topside system. The beauty of OPC UA is that it supports multiple platforms and can be implemented across all of these hardware solutions. By using OPC UA, vendors are free to continue to use the hardware they are most comfortable with, or the system that the customer specifies.”
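The contrast Hunkar draws can be sketched in plain Python. This is an illustration of the concept, not real OPC UA API code, and all the names are invented: with tag mapping, meaning lives in string conventions each integrator must get right by hand; with a typed object, any client that knows the type can interpret every instance the same way.

```python
# Tag-mapping approach: the engineer hand-maps strings like these,
# and a typo or wrong assumption silently wires the wrong signal.
tag_map = {
    "DCS.VALVE_101.POSITION": "SUBSEA.SSV101.POS",
    "DCS.VALVE_101.COMMAND": "SUBSEA.SSV101.CMD",
}

# Object approach: the interface is defined once as a type, so every
# valve instance carries the same structure and semantics.
from dataclasses import dataclass

@dataclass
class ValveObject:
    name: str
    position_pct: float
    command: str

    def is_fully_open(self):
        return self.command == "OPEN" and self.position_pct >= 95.0

valve = ValveObject(name="SSV101", position_pct=100.0, command="OPEN")
print(valve.is_fully_open())
```

In OPC UA terms, the type definition plays the role of the information model: both sides agree on the structure once, instead of re-deriving it from tag names on every project.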
A group of organizations worked to create more than one standard for integration. Source: OTM
Down the road
Using OPC UA is “a great and noble idea and definitely the way to go, but there aren’t a lot of critical systems in the field built on that system at this time,” Yokogawa’s Spiropoulos says. “Adoption will be slow, unless a real use case emerges from the users.”
Revisiting this topic in another five years will likely produce a vastly different discussion. By that time, the standard writing processes will be farther along, vendors will be making related products, and there will be time to implement some of the new systems. Older fields will continue to operate using legacy systems, but those with a long enough operating life will eventually come around.
In 10 years, subsea fields may be invisible as more equipment moves to the seafloor and platforms disappear. Operators will have seamless control over large fields from onshore control centers away from the dangers and inconvenience of living on a platform. Thinking about the future, Gjerdseth says, “What we’re really offering here is subsea production, processing, and intervention. Automation is a key enabler to get those offerings to market.”
A future standard
Choosing OPC UA is a decision based on the long-term future, rather than on something that might work now but quickly become obsolete.
At the moment, there is far too little equipment available to make an implementation practical, but the vendor community is working on it. Down the road, it is easy to see things falling into place one by one.
“The objects work group is looking at designing the software objects for the major pieces of subsea equipment that this interface needs to control,” said Rachael Mell, a consultant for OTM Consulting, which manages the MDIS program. MDIS chose OPC UA as the single communication standard that all participants can use as a unified platform.
“They finalized the binary objects at the last meeting, and now they’re converting those objects into OPC UA language and going through them and making sure they make sense to everybody involved. The OPC Foundation is taking the object design and converting it into the OPC UA format, and the working group is reviewing that. At the last meeting, they went through the valve object to confirm that they’re all happy with it. Now that will go to review for the whole MDIS network, and every company will get to submit comments. Once finalized, that will be the object that goes into the standard.
“The validation work group is working on the interoperability test, which is planned for June 2015. The subsea vendors and DCS vendors will bring their equipment to test the interface. That will verify that what we’ve done so far really works and that the vendors will be able to implement it. One thing that would motivate the vendors more would be if they had a date. We would like one of the operators to say, ‘We want to use the MDIS standard on this field at this time.’ That’s something we’ve been pushing the operators for lately, but we can’t really get yet,” Mell says. Writing standards is by nature a slow process, and then the vendors have to create products to be implemented by the operators. It doesn’t happen overnight, particularly when the standard involves so major a change as adoption of OPC UA.
“OPC UA’s promise is that it’s object based — it’s a unified architecture in that it’s one system that can do all the different things of the other flavors,” Eugene Spiropoulos, senior technical solutions consultant for Yokogawa Corporation of America, says. •
Peter Welander is a freelance writer and editor specializing in industrial automation.