World Pipelines - January 2015 - page 38

The pipeline industry has what I would describe as a ‘big data’-
like problem, satisfying to a large degree the defining conditions
of big data in the form of high volume, great variety and
frequency of update. I say ‘like’ because, although not of the size
of some data gathering and mining needs such as those tackled by
large financial systems, it certainly does present itself as a problem
with many of the same challenges and data complexities. For the
most part, the pipeline industry has been quite conservative in
how data has been utilised in assessing pipeline condition, risk
levels and operational performance. Consequently, it must learn
from other industries, or other parts of the pipeline industry, and
adopt technologies and methodologies that can help advance
pipeline assessment and risk management. This leads us to a vast
list of possibilities and challenges.
Data management for inline inspections
Where does all the data we need come from? Where should we
be looking for data that can help us? Before a pipeline is placed
into operation, there is already a substantial volumetric
data challenge – considering that there is data from design, mill
records, as-built details, weld inspections and coating inspections,
as well as pre-commissioning caliper and increasingly from
intelligent pigging fingerprinting. Managing this information
effectively from the outset is not only financially smart, it is
technically smart. It is the foundation of our future integrity
programme. It is also becoming mandatory in many jurisdictions.
However, this is only the beginning of the challenge. In the
operational phase we are faced with the potential of extremely
large data volumes to manage, which increase with the age of
the asset. Many additional sources of information are gathered,
including SCADA data important for fatigue analysis, corrosion
inspection reports, ground movement sensors and of course
inline inspection data. Inline inspection data can have a very high
volume, depending on the type of inspection and the condition
of the pipeline. Consider also that this is a multidimensional data
problem with linear in-pipe or above-ground distance, spatial
(GPS co-ordinates), pipe circumferential location as well as a time
dimension. As the asset ages, operators need to mine, review
and spot trends in the data so they can preempt problems that
will elevate risk levels. Left unchecked, these risks could lead to
asset failure. Think of this as looking for the proverbial needle in
a haystack. How do we map a path to make sure that we find the
needle?
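The multidimensional nature of this data can be made concrete with a small sketch. The record below is illustrative only (the field names are assumptions, not a vendor or PODS schema): each inline inspection feature carries a linear, a spatial, a circumferential and a time dimension, and trending means matching features across runs on the first three dimensions and comparing over the fourth.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ILIFeature:
    """One anomaly reported by an inline inspection run.

    Field names are illustrative, not a vendor or PODS schema.
    """
    chainage_m: float   # linear distance along the pipeline
    latitude: float     # spatial (GPS) position
    longitude: float
    oclock: float       # circumferential location, 1-12 o'clock
    depth_pct: float    # reported depth, % of wall thickness
    run_date: date      # time dimension: which inspection run

# Trending the same location across runs means matching on the
# linear and circumferential dimensions, then comparing over time.
run_2010 = ILIFeature(1523.4, 53.41, 6.95, 6.0, 22.0, date(2010, 5, 1))
run_2015 = ILIFeature(1523.6, 53.41, 6.95, 6.1, 31.0, date(2015, 4, 1))

years = (run_2015.run_date - run_2010.run_date).days / 365.25
growth_pct_per_yr = (run_2015.depth_pct - run_2010.depth_pct) / years
```

Spotting the needle is then a matter of flagging matched features whose growth rate, rather than absolute depth, is anomalous.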
Five ways pipeline integrity software can help
Any effective pipeline integrity programme must address this, and
pipeline integrity software plays a pivotal role. Here are the five
most important ways in which your pipeline integrity software
should help you:
- In managing the data and integrity processes – can you get to the information in an efficient and timely manner to make effective real-time decisions?
- In finding the needle in the haystack – it must do the calculations and help lead you to the problem areas, utilising analytical modelling, corrosion process models, expert (or knowledge) based models and assessment methods, statistics-based data mining and analytics, or pattern recognition approaches. It must also incorporate linear and spatial analysis to identify causal or potentially causal relationships between detected anomalies and the pipe environment.
- In providing excellent domain-centric visualisation tools – the software is your eyes' best friend. How good can your eyes be at finding the needle? They are probably the most powerful tools available to us, but they do need a little help focusing on the meaningful relationships in the data. Effective software systems help by presenting information intuitively, in a way that helps us rationalise complex information relationships.
- In effective reporting – this is not limited to the must-haves and routine reports, but includes delivering complex results so they are clearly understood by decision-makers and lead to effective performance-enabling actions.
- By easy integration with your enterprise systems – a fully integrated solution can be a full participant in your operational risk management programme.
This is the basis upon which DNV GL’s Synergi Pipeline
software is built, and how its consulting team deploys solutions.
Central to this is how the company manages inline inspection data
and the business processes – from inspection planning through to
analysis and actions.
For storage and retrieval, the Pipeline Open Data Standard
(PODS) provides an outline data model for the standardised storage
of ILI data. But that is only a part of what is required in delivering
an effective pipeline integrity solution, where we are working with
millions of pipeline features per inspection run and per section of
pipeline. Synergi Pipeline goes beyond this and utilises advanced
database storage and retrieval methods to improve access and
retrieval responses. These include interactive applications, such
as dynamic alignment sheets, and offline analytics, such as risk
assessments. Engineers using the company’s software have many
options for interacting with their data, allowing them to spot
locational and time-dependent trends that could impact future
pipeline performance.
Specialist analytical tools are essential for making sense
of inline inspection data. These include many of the standard
corrosion, crack and dent assessment methods as well as
statistical analysis. Together they can assist in identifying trends
and correlations between inline inspection features and pipeline
properties (for example grade, manufacturer, age, coating type and
operating conditions) and their relationship to the likelihood of
pipeline failure.
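A first-pass version of the correlation analysis described above can be sketched with nothing more than grouping and averaging. The records below are invented for illustration; in practice they come from aligning ILI features with pipe-book properties such as coating type.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-joint records: (coating type, max feature depth %wt).
# In practice these come from aligning ILI features with pipe properties.
joints = [
    ("FBE", 8), ("FBE", 12), ("FBE", 9),
    ("coal tar", 34), ("coal tar", 41), ("coal tar", 28),
    ("tape", 22), ("tape", 30),
]

by_coating = defaultdict(list)
for coating, depth in joints:
    by_coating[coating].append(depth)

# A first-pass screen: which pipe populations corrode fastest?
for coating, depths in sorted(by_coating.items(),
                              key=lambda kv: -mean(kv[1])):
    print(f"{coating:8s} mean depth {mean(depths):.1f} %wt (n={len(depths)})")
```

Real statistical screening would also test whether the differences between populations are significant given the sample sizes, rather than ranking raw means.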
The future of pipeline integrity
New technologies are central to the future of pipeline
integrity. They will help us identify and examine apparent
causal relationships that we might otherwise have overlooked.
Quantitative defect and risk assessments are of fundamental
importance in providing measurable factors that impact the
accuracy and variability of our analysis. An example of this is
in calculating the probability of failure of a pipe section based
upon ILI data where we need to verify and incorporate vendor
performance metrics, such as probability of detection and
predicted depth estimate variance, in order to understand the
true likelihood of failure. We must take into account imperfect
measurements or imperfect knowledge of pipeline properties
and GIS for spatial context, such as soil types and proximity or
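The probability-of-failure idea above, folding tool sizing uncertainty into the assessment rather than taking the reported depth at face value, can be sketched as a Monte Carlo estimate. This is a simplified illustration under an assumed normal sizing error; a real assessment would also incorporate probability of detection, wall thickness, pressure and a corrosion growth model.

```python
import random

def pof_monte_carlo(reported_depth_pct, sigma_pct, critical_pct,
                    trials=100_000, seed=42):
    """Estimate the probability that the true depth exceeds the
    critical depth, given the ILI tool's sizing uncertainty
    (assumed normally distributed). Simplified sketch only.
    """
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(trials)
        if rng.gauss(reported_depth_pct, sigma_pct) > critical_pct
    )
    return exceed / trials

# Tool reports 60 %wt with a 10 %wt (1-sigma) sizing error;
# assume the critical depth for this section is 80 %wt.
p = pof_monte_carlo(60, 10, 80)
```

The point of the exercise is that a feature reported well below the critical depth can still carry a non-trivial probability of exceeding it once measurement variance is accounted for.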