## Line data: Contents

Four methods of modelling contents are available:

- **Uniform** means that the entire line is assumed to be filled with contents of a uniform density.
- **Free-flooding** results in the line being filled with sea water, up to the instantaneous water surface.
- **Slug flow** allows for spatial and temporal variation of contents. The contents flow velocity can also vary with time.
- **Tabular** allows for the general specification of contents as a function of arc length and/or time.

This option allows the axial component of *translational* inertia due to contents to be excluded from the analysis. Typically, the contents axial translational inertia *should* be included – for a line with capped ends, for example, the contents must follow all translational motion of the line, including its axial motion. But the contents axial translational inertia should *not* be included in the analysis of a line with free-flooding contents, such as a drilling riser in emergency disconnect mode.

The lateral components of contents inertia – both translational and rotational, i.e. about axes normal to the line's axis – are always included. This option determines whether or not the radially offset contribution to the contents' lateral rotational inertia is to be included. The axial component of *rotational* inertia of the contents is always excluded, since the contents are assumed not to have to follow any twisting motion of the line.

The **contents temperature** specifies the internal temperature of the line. It is used, along with the contents pressure, as an input when computing an expansion factor from a line type expansion table. There is no need to specify a contents temperature unless expansion tables are being used. If they are not, then this data can be left at its default value of ~ (meaning "not in use").

The **contents pressure** specifies the internal pressure in the line at the given **reference Z level** (relative to global axes). The internal pressure at this Z level is assumed to remain constant throughout the simulation. The internal pressure at other levels is calculated allowing for the static pressure head due to differences in Z level. For slug flow the static pressure head is calculated using a *constant* contents density which takes account of the mean density over the slug flow contents pattern.
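The static head adjustment described above amounts to the following relation (a minimal sketch in consistent SI units; the function and symbol names are ours, not OrcaFlex's):

```python
G = 9.80665  # gravitational acceleration, m/s^2

def internal_pressure(z, p_ref, z_ref, rho):
    """Internal pressure at Z level z: the pressure at the reference Z level
    plus the static head rho*g*(z_ref - z) for the height difference.
    Consistent SI units assumed: Pa, kg/m^3, m."""
    return p_ref + rho * G * (z_ref - z)
```

For example, contents of density 1000 kg/m³ at a point 10 m below the reference Z level carry roughly 98 kPa of additional static head.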

All pressures in OrcaFlex are gauge pressures, not absolute pressures. That is, they are relative to atmospheric pressure and so can be as low as -1 atmosphere (-101.3 kPa).

The reference Z level may be set to ~, to represent the Z level of the top end of the line in the reset state.

Note: | Contents temperature, pressure and reference Z level are only available for the uniform and slug flow contents methods. |

Each section of the line is assumed to be full of contents of the given density; the mass of each section is increased accordingly.

The rate of flow of mass through the line, used to calculate the centrifugal and Coriolis forces due to flow of fluid in the line. Positive values mean flow from end A towards end B, negative means from end B towards end A. To convert between mass flow rate, volume flow rate and flow velocity use the simple formulae \begin{aligned} \text{Volume flow rate} &= \text{Mass flow rate} / \rho \\ \text{Flow velocity} &= \text{Volume flow rate} / A \end{aligned} where $\rho$ is the contents density and $A$ is the internal cross sectional area.
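These conversions can be written directly from the formulae above (a minimal sketch; the names are ours, and the internal area is computed here for an assumed circular bore):

```python
import math

def volume_flow_rate(mass_flow_rate, density):
    """Volume flow rate = mass flow rate / contents density."""
    return mass_flow_rate / density

def flow_velocity_from_mass_rate(mass_flow_rate, density, inner_diameter):
    """Flow velocity = volume flow rate / internal cross-sectional area A."""
    area = math.pi * inner_diameter ** 2 / 4.0  # A for a circular bore
    return volume_flow_rate(mass_flow_rate, density) / area
```

For example, 10 kg/s of water (1000 kg/m³) through a 0.1 m bore gives a volume flow rate of 0.01 m³/s and a flow velocity of about 1.27 m/s.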

The slug flow data allow you to define variation of contents density along the arc length of the line. This *pattern* of contents can also progress along the line over time. The resulting variations in mass, weight, and centrifugal and Coriolis forces are all accounted for.

The velocity at which the contents pattern flows along the line. This value may be constant, or can vary with simulation time. Positive values mean flow from end A towards end B, negative means from end B towards end A.

A flow velocity value of zero may be used to represent spatial contents variation with no temporal variation. A variable flow velocity can be used to model, for example, the flow of contents out of a drilling riser in emergency disconnect mode.

Note: | There is no need to ramp flow velocity at the beginning of dynamics – in fact it is better to include the fluid flow in the statics calculation because this removes undesirable transients during the dynamic analysis. Therefore, if you wish to model a constant flow rate, you should simply set the flow velocity to be that constant value. |

The contents density for sections of the line that fall in between slugs.

The spatial variation of contents density, i.e. the contents pattern, is specified in a table in which each row defines a group containing a **number** of identical slugs, characterised by their **density** and **length**, together with the **distance between slugs**.

In addition, each group has a **reference point**, an arc length which can be relative to either end of the line, and the **simulation time** at which the first slug in the group reaches that reference point. If the flow velocity is zero, then instead we adopt the convention that the group of slugs covers arc lengths (measured from end A) beyond the reference point. For example, if the flow velocity is zero and you have a single slug with length $l$ and reference point at end A, then the slug will stretch between arc lengths 0 and $l$.

Simple repeating patterns of slugs can easily be modelled with a single row in the table. For irregular patterns, you can model each individual slug with a row in the table. Use range graphs of contents density to check that your data correspond to your desired pattern of slugs.
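As an illustration of how such a pattern determines the density at a given point, the sketch below evaluates one slug group for a constant flow velocity. The edge convention (taking the reference point as the leading edge of the first slug) and all names are our assumptions, not OrcaFlex's definitions:

```python
def slug_group_density(s, t, flow_velocity, number, slug_density, slug_length,
                       spacing, reference_arc_length, reference_time,
                       base_density):
    """Contents density at arc length s (from end A) and time t, for one
    group of identical slugs moving at a constant flow velocity.

    Assumption: the reference point is the leading edge of the first slug,
    reached at the reference time; OrcaFlex's exact convention may differ.
    """
    # leading edge of the first slug at time t
    front = reference_arc_length + flow_velocity * (t - reference_time)
    for k in range(number):
        lead = front - k * (slug_length + spacing)  # later slugs trail behind
        if lead - slug_length <= s <= lead:
            return slug_density
    return base_density
```

With flow velocity 1 m/s, two slugs of length 2 m spaced 3 m apart, and the reference point at end A at time zero, arc length 9 m lies inside the first slug at t = 10 s, while arc length 6.5 m falls in the gap between slugs.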

The data which control the drawing, or not, of the slug flow within the line, are on the drawing page of the line data form.

With this contents method you can specify the contents of a line as a general function of time and/or arc length. Each row in the tabular contents table consists of the following independent variables:

- **Time**, $t$.
- **Arc length**, $s$.

and dependent variables:

- **Density**, $\rho$.
- **Temperature**, $T$.
- **(Gauge) pressure**, $P$.
- **(Mass) flow rate**, $\dot{m}$.
- **Flow velocity**, $v$.

Notes: | Positive values of flow rate and flow velocity mean flow from end A towards end B, negative means from end B towards end A. |

| There is no equivalent of the reference Z level for the tabular contents method. The notion of a reference Z level is meaningless when the pressure is specified as a function of time and arc length. |

| It would be possible for OrcaFlex to infer the flow velocity, $v$, from the mass flow rate, $\dot{m}$, or vice versa, using the density, $\rho$, and the cross-sectional area of the line, $A$. However, this would rely on the discretised model of the line for the cross-section, which may lose accuracy. Instead, we insist that the user supplies both quantities independently. |

A strict requirement is that the data form a *complete* (but not necessarily uniform) grid over the independent variables. If there are $M$ distinct values of $t$ specified, and $N$ distinct values of $s$, then the complete grid requirement means that there must be exactly $M \times N$ rows of data specified, one for each possible $(t, s)$ pair. OrcaFlex will raise an error if this requirement is not satisfied.
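A check of this kind can be sketched as follows (a hypothetical helper illustrating the rule, not OrcaFlex's internal validation):

```python
def check_complete_grid(rows):
    """rows: the (t, s) pairs from the independent columns of the table.
    Raises ValueError unless they form a complete M x N grid."""
    ts = {t for t, _ in rows}  # M distinct t values
    ss = {s for _, s in rows}  # N distinct s values
    expected = len(ts) * len(ss)  # one row per possible (t, s) pair
    if len(rows) != expected or len(set(rows)) != expected:
        raise ValueError(f"expected {expected} distinct (t, s) rows, "
                         f"got {len(rows)}")
```

A 2 × 2 grid such as `[(0, 0), (0, 10), (5, 0), (5, 10)]` passes; dropping any one pair raises the error.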

The tabular data defines the line contents for any general $(t, s)$ via interpolation between the specified data values. This will be discussed further below.

Whilst the table allows contents to be specified in terms of both $t$ and $s$, it can also be used to specify contents that depend upon just $t$ (i.e. no $s$-dependence) or just $s$ (i.e. no $t$-dependence). This is achieved by setting the independent variable you wish to eliminate to a special value of N/A. This tells OrcaFlex that there is no dependence on this variable.

Note: | If one entry in an independent column is set to N/A, then all other values in the same column must also be set to N/A. |

You might only require a subset of the dependent columns to be full functions of the independent variables. The remaining columns might only depend on one of the independent variables, or might be constant. To avoid having to repeatedly specify the same values on multiple rows of the table, a special value of ditto (") is permitted. This effectively says to *use the value from the row above*. If the row above is also ditto, then OrcaFlex will look at the row above that, and so on and so forth until it finds a non-ditto value.

Note: | A value of ditto is not permitted in the first row of the tabular contents data. |

Ditto values are also permitted in the $t$ and $s$ columns. This can be useful as a way to demarcate the various sections of the table when it is a function of both $t$ and $s$. For instance, if there are are $M$ $t$ values and $N$ $s$ values, then you might specify this as $M$ blocks of $N$ rows, where the $t$ value is fixed in each block (with the $s$ data cycling through the $N$ distinct values). Specifying a time of ditto for all but the first row of each block can make the table much easier to scan by eye (since there will be $N\!-\!1$ ditto values within each block).
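The ditto rule for a single column can be sketched as follows (our illustration of the rule described in the text, not OrcaFlex's code):

```python
def resolve_dittos(column):
    """Replace ditto (") entries with the nearest non-ditto value above."""
    resolved = []
    for value in column:
        if value == '"':
            if not resolved:
                raise ValueError("ditto is not permitted in the first row")
            resolved.append(resolved[-1])  # use the value from the row above
        else:
            resolved.append(value)
    return resolved
```

For example, a density column `[1000, ", ", 800, "]` resolves to `[1000, 1000, 1000, 800, 800]`.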

The tabular data can be specified either in an *internal* table or in an *external* file.

If an external file is used you must specify the name of the file. You can give either its full path or a relative path. The file itself must be a text file containing the data in the same layout as the internal table. The 7 columns must appear in the same order as the data in the internal table, and can be separated by spaces or tabs. Each line of the file represents one row of data. You must ensure that the data in the file have the same units as the OrcaFlex model which uses it. The file must not contain any header or footer rows. The *preview* button on the data form can be used to check that the data are interpreted as you intend.

For very large tables it can be awkward and slow to work with the data internally, and so an external file may be more effective. Another scenario where an external file might be preferred is when the data are used in multiple different models, e.g. a chain of restarts.

The data are interpreted identically, irrespective of whether they are internal or external.

If the tabular contents data has only $t$-dependence, with each row of the table having the same $s$ value (either a single real value or N/A), then OrcaFlex will linearly interpolate the dependent data ($\rho$, $T$, $P$, $\dot{m}$, $v$) between the user-specified $t$ values. Outside the range of the user-specified $t$ values, a policy of *truncation* is adopted: OrcaFlex will not extrapolate and instead truncate $t$ to be equal to the first or last user-specified value, as appropriate. This results in constant contents properties outside of the user-specified range of $t$ values.

If the tabular contents data has only $s$-dependence and each $t$ value has been set to N/A, then OrcaFlex will linearly interpolate the dependent data ($\rho$, $T$, $P$, $\dot{m}$, $v$) between the user-specified $s$ values. Once again, a policy of truncation is adopted outside of the user-specified $s$ values.
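Linear interpolation with truncation, as used for both the $t$-only and $s$-only cases, can be sketched as follows (the names are ours):

```python
def interpolate_truncated(x, xs, ys):
    """Linear interpolation of ys over ascending abscissae xs, with
    truncation (no extrapolation) outside [xs[0], xs[-1]]."""
    if x <= xs[0]:
        return ys[0]   # truncate below the first specified value
    if x >= xs[-1]:
        return ys[-1]  # truncate above the last specified value
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            lam = (x - xs[i]) / (xs[i + 1] - xs[i])
            return (1 - lam) * ys[i] + lam * ys[i + 1]
```

For instance, with abscissae `[0, 10]` and values `[100, 200]`, the result at 5 is 150, while the result at any point outside the range is the nearer endpoint value.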

If the tabular contents data has $s$-dependence and the $t$ values are not set to N/A (i.e. there is either full $t$-dependence or a single real value of $t$), then a more complicated interpolation scheme is employed. For simplicity, we restrict attention to a single property, density, which we consider as a function, $\rho(t,s)$, of $t$ and $s$. The other properties ($T(t,s)$, $P(t,s)$, $\dot{m}(t,s)$ and $v(t,s)$) are interpolated in the analogous fashion.

To proceed, we supplement the (ordered) user-specified $t$ values, $(t_1, t_2, \ldots , t_M)$, with $t_0 = -\infty$ and $t_{M+1} = +\infty$, giving $(t_0, t_1, t_2, \ldots , t_M, t_{M+1})$. Any given value of $t$ is then bounded between two entries in this array, $t_i \leq t \lt t_{i+1}$. Each time slice, $t_i$, can be associated with a density profile, $\rho_i(s)$, along the line. Each $\rho_i(s)$ is defined by the arc lengths $(s_1, s_2, \ldots, s_N)$ and their corresponding density values, $(\rho_{i1}, \rho_{i2}, \ldots, \rho_{iN})$, taken from the user's data table; $\rho_{ij}$ is the specified density value at time $t_i$ and arc length $s_j$. To compute $\rho_i(s)$ at arc length values between data points, we again use ordinary linear interpolation for $s \in [s_1, s_N]$, with truncation outside of this range. We also define $\rho_0(s) \equiv \rho_1(s)$ and $\rho_{M+1}(s) \equiv \rho_M(s)$, which is the functional analogue of truncation.

To recap, we now have two bounding time values, $[t_i, t_{i+1})$, and corresponding density profiles, $\rho_i(s)$ and $\rho_{i+1}(s)$, both of which are resolved by linear interpolation on $s$. The outstanding question is how to compute $\rho(t, s)$ from these ingredients. The obvious choice would be to linearly interpolate, such that $\rho(t,s) = (1 - \lambda) \rho_i(s) + \lambda \rho_{i+1}(s)$, where $\lambda = (t - t_i) / (t_{i+1} - t_i)$ (with $\lambda = 1$ for $t \lt t_1$ and $\lambda = 0$ for $t \geq t_M$). However, this approach induces undesirable behaviour.

Consider the case of a localised slug moving along the line at velocity $v$. The idealised density profile might resemble $\rho(t,s) = e^{-(s-vt)^2/\sigma^2}$, a Gaussian of width $\sigma$ whose peak is at $s = vt$. This must be discretised in order to be represented in OrcaFlex. We may choose some representative times, say $t_1=25\textrm{s}$ and $t_2=75\textrm{s}$, and representative arc lengths, $s \in (0\textrm{m}, 1\textrm{m}, 2\textrm{m}, \ldots, 99\textrm{m}, 100\textrm{m})$. This results in a tabular contents table containing 202 rows of data. Let us further assume that $v=1\,\textrm{m s}^{-1}$ and that the width of the distribution is narrow, say $\sigma=2\textrm{m}$.

At $t=25$s, the peak of the density distribution is at $s=25$m; at $t=75$s, the peak is at $s=75$m. Superimposed on the same plot, the distributions $\rho_1(s)$ and $\rho_2(s)$ have virtually no overlap, as can be seen from Figure 1. Using standard linear interpolation, what would the profile $\rho(50\textrm{s}, s)$ look like? This is shown in Figure 2.

Figure 1: | $\rho(25\textrm{s}, s)$ and $\rho(75\textrm{s}, s)$ from the user data table |

Figure 2: | Interpolated form of $\rho(50\textrm{s}, s)$ |
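The discretisation used in this example can be reproduced with a short script (a sketch of the worked example only, not OrcaFlex input; `rows` holds the 202 $(t, s, \rho)$ entries):

```python
import math

SIGMA = 2.0  # slug width, m
V = 1.0      # flow velocity, m/s

def rho(t, s):
    """Idealised density profile: a Gaussian slug centred at s = V*t."""
    return math.exp(-((s - V * t) ** 2) / SIGMA ** 2)

# two time slices (25 s and 75 s) and 101 arc lengths (0 m to 100 m)
# give the 202-row table discussed in the text
rows = [(t, float(s), rho(t, s)) for t in (25.0, 75.0) for s in range(101)]
```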

It can clearly be seen that this interpolation approach is completely failing to capture the dynamics of the slug pattern. In fact, the resultant pattern looks more like two isolated standing waves than a single propagating wave. To overcome these shortcomings, an alternative interpolation approach is adopted: instead of linearly interpolating between $\rho_i(s)$ and $\rho_{i+1}(s)$, we instead linearly interpolate between the *forwards propagated* profile, $\rho_i^+(t, s)$, and the *backwards propagated* profile, $\rho_{i+1}^-(t, s)$, which will be defined below (there is no practical difference between the $+$ and $-$ superscripts here; they have just been added for ease of understanding). More precisely:
\begin{eqnarray}
\rho(t, s) = (1 - \lambda) \rho_i^+(t, s) + \lambda \rho_{i+1}^-(t, s) \qquad \textrm{for} \; \, t_i \leq t \lt t_{i+1}
\end{eqnarray}
where, as before, $\lambda = (t - t_i) / (t_{i+1} - t_i)$ (again with $\lambda = 1$ for $t \lt t_1$ and $\lambda = 0$ for $t \geq t_M$). If we define the original $\rho_i(s)$ in the following notation:
\begin{eqnarray}
\rho_i(s) = \left\{ (s_1, s_2, \ldots, s_N), (\rho_{i1}, \rho_{i2}, \ldots, \rho_{iN}) \right\}
\end{eqnarray}
then $\rho_i^\pm(t, s)$ is given by:
\begin{eqnarray}
\rho_i^\pm(t, s) = \left\{
\begin{array}{ll}
\left\{ (s_1 + v_{i1} \Delta, s_2 + v_{i2} \Delta, \ldots, s_N + v_{iN} \Delta), (\rho_{i1}, \rho_{i2}, \ldots, \rho_{iN}) \right\} & 1 \leq i \leq M \\
0 & \textrm{otherwise}
\end{array}
\right.
\end{eqnarray}
where $\Delta = t - t_i$, and $v_{ij}$ is the **flow velocity** corresponding to time $t_i$ and arc length $s_j$. In words, $\rho_i^\pm(t, s)$ is $\rho_i(s)$ with all its abscissa values shifted by $v (t - t_i)$. The flow velocity values are taken from the $v$ column in the tabular contents table.

The result, $\rho(50\textrm{s}, s)$, of this new interpolation scheme is shown in Figure 3. Notice that, in this particular example, the resultant profile is exactly what we would have obtained if we had generated $\rho(50\textrm{s}, s)$ directly from the original formula, $\rho(t,s) = e^{-(s-vt)^2/\sigma^2}$. Note that this propagating interpolation scheme also guarantees continuity of the dependent variables, in this case $\rho(t, s)$, across all values of $t$ and $s$.

Figure 3: | Propagated form of $\rho(50\textrm{s}, s)$ |
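Putting the pieces together, the propagating scheme can be sketched end-to-end as follows. This is our own minimal reading of the equations above (0-based indexing; the shifted abscissae are assumed to remain in ascending order), not OrcaFlex's implementation:

```python
import bisect

def interp(x, xs, ys):
    """Linear interpolation over ascending xs, with truncation outside."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x) - 1
    lam = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - lam) * ys[i] + lam * ys[i + 1]

def propagated(s, t, t_i, s_vals, rho_i, v_i):
    """rho_i propagated to time t: each abscissa shifted by v_ij * (t - t_i)."""
    shifted = [sj + vj * (t - t_i) for sj, vj in zip(s_vals, v_i)]
    return interp(s, shifted, rho_i)

def density(t, s, ts, s_vals, rho_table, v_table):
    """rho(t, s): blend of forwards/backwards propagated bounding profiles."""
    if t <= ts[0]:                      # lambda = 1: only the first profile
        return propagated(s, t, ts[0], s_vals, rho_table[0], v_table[0])
    if t >= ts[-1]:                     # lambda = 0: only the last profile
        return propagated(s, t, ts[-1], s_vals, rho_table[-1], v_table[-1])
    i = bisect.bisect_right(ts, t) - 1  # bracketing time slices i, i+1
    lam = (t - ts[i]) / (ts[i + 1] - ts[i])
    fwd = propagated(s, t, ts[i], s_vals, rho_table[i], v_table[i])
    bwd = propagated(s, t, ts[i + 1], s_vals, rho_table[i + 1], v_table[i + 1])
    return (1 - lam) * fwd + lam * bwd
```

Applied to the Gaussian slug example, this recovers a single peak at $s = 50$m for $t = 50$s, matching Figure 3.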

This data is only used to compute the shape of the line when the model is in reset state. It has no effect on the calculation itself; it is purely cosmetic for the purposes of drawing the line in the 3D view and compiling its properties report. This is necessary because the main tabular contents data cannot be used until it has been validated and processed, which happens at the start of the analysis.

The nominal contents density should be chosen to be a representative value for the density of the contents within the line at the start of the analysis.