Log analysis applications

From AAPG Wiki

Log analysis is undergoing major changes through the addition of new logging tools and improved computerized interpretation software. These features make accurate and detailed geological descriptions from wireline data feasible. Now, wireline data are being integrated with geological, geophysical, and engineering data through software to produce more accurate and comprehensive answers to geological questions. This article briefly summarizes computerized log analysis packages (LAPs) by reviewing basic features and emphasizing the fundamental ideas that make LAPs useful, flexible, and powerful tools for development geologists.

Features[edit]

Log analysis packages usually store data in a depth-oriented database. This directly associates a well's scientific data (including wireline traces, core data, tops, and test intervals) with the specific depths of their occurrence. Depending on the LAP used, database depth values may change (1) by a constant increment (usually based on the smallest common sampling increment) or (2) by varying increments (based on each trace's unique sampling nature).
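The two storage schemes can be sketched in a few lines of Python. This is an illustrative data-structure sketch, not any vendor's internal format; the class names and the idea of storing core data as explicit (depth, value) pairs are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ConstantIncrementTrace:
    """Trace sampled at a fixed depth step, e.g., a wireline gamma ray curve."""
    top_depth: float   # depth of the first sample
    step: float        # constant sampling increment
    values: list       # one value per increment

    def depth_of(self, i):
        # depth is implicit: top_depth + i * step
        return self.top_depth + i * self.step

@dataclass
class VaryingIncrementTrace:
    """Trace with explicit depths, e.g., sporadically sampled core data."""
    samples: list = field(default_factory=list)  # (depth, value) pairs

    def add(self, depth, value):
        self.samples.append((depth, value))

# A constant-increment wireline trace alongside irregular core porosities
gr = ConstantIncrementTrace(top_depth=1000.0, step=0.5, values=[45.0, 52.1, 60.3])
core = VaryingIncrementTrace()
core.add(1001.2, 0.18)
core.add(1004.7, 0.21)
```

The constant-increment form saves space because depths are implicit; the varying-increment form handles data whose sampling is irregular by nature.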

To use this data successfully for display and calculations, the user needs to learn the purpose and method of operation for each of the seven basic LAP features. They are

  • Data input
  • Data editing
  • Environmental corrections
  • Data processing
  • Data display
  • Data output
  • Data management tools

Log analysis packages should allow the user to do any or all of these in the order deemed appropriate by the user.

Data input[edit]

Digital data are created by writing values into a file in numerical form. Wireline data are now routinely captured on a magnetic medium at predetermined sample increments. Increments vary from tool to tool, from service company to service company (even for comparable tools), and according to the depth recording system (English or metric) used by the service company. The quantity of data values recorded at a given depth increment can also vary. Most logging tools record only one value per increment for each specific trace (slow channel data). Others (for example, a full waveform acoustic tool) acquire multiple values at each depth increment (fast channel data) in order to later replicate and use the entire acquired range of values.

Each wireline company has its own proprietary format for recording digital data. The two most common formats are LIS (the de facto standard) and BIT. Considerable efforts are being made to standardize all of the various formats into a single industry-wide standard known as the API digital log interchange standard, or DLIS.[1]

At the customer's request, digital wireline data can be transmitted from the logging truck's computer directly to (1) the client, (2) other partners, and/or (3) the logging company's main computer for immediate processing, retransmission, or rerecording for delivery to the client. When digital data are read into a computer by a LAP, they are automatically converted to the LAP's internal data format for storage and use.

Analog data include all data presented on paper or films:

  • Traceplots (commonly called logs)
  • Numerical listings
  • Text information

Traceplots can be converted to digital data through manual digitizing or optical scanning. Most LAPs include log digitizing software. Many commercial data firms also sell digitizing services and may even have digital libraries for sale. Numerical listings include core and geochemical data. Both are usually sampled somewhat sporadically and thus have varying sample increments. Text information includes descriptions, explanations, formation tops, and test results. This information is extremely useful when annotating graphics. Both numerical listings and text information are well suited for keyboard entry into the database.

Data editing[edit]

Options for editing the data include the following:

  • Merge various traces into a single trace.
  • Rescale hybrid scales and improperly digitized traces.
  • Smooth (filter) traces.
  • Assign cutoff values. Compare trace values to a single limiting value. Beyond that value, assign the data either a single value or null values.
  • Discriminate against invalid data. Using this technique, the user specifies minimum and/or maximum value limits for a primary trace. Then, instead of modifying actual trace values, the trace is scanned, and at depths where data occur outside the limits, flags are set in a separate discriminator trace. When applying the discriminator trace during data displays or calculations, any depths containing flags will either be eliminated from the display or be assigned default calculation values.
  • Apply depth corrections. These fall into two categories:
    1. Depth shifting traces against each other. To do this, the user visually compares base and unshifted traces, marks corresponding data points (Figure 1), and then shifts the off-depth data to the base trace depths.
    2. Correct for true vertical depth (TVD), true vertical thickness (TVT), and/or true stratigraphic thickness (see Preprocessing of logging data).
  • Baseline the spontaneous potential (SP). Interactively flattening the SP to a shale baseline at a single value (Figure 2) allows the user to look at SP values quantitatively in order to calculate water resistivity (Rw) and estimate shale content.
  • Convert data scales (both ways): conductivity to resistivity, raw data to porosities, neutron porosities to a different matrix, metric to English depth units, percent to decimal, and so on.
  • Data normalization. This procedure assumes the values in an individual data trace are credible but require some modification: data with an atypical distribution and/or range are adjusted to a “normal” distribution and range (see Figure 3 and the discussion of histograms). Proper normalization must first account for borehole conditions during each run and for geological changes taking place across a wider geographic area. Normalization is accomplished by applying the equation:

y = ax + b

where

    • y = corrected trace
    • a = constant (distribution) multiplier value
    • x = trace to be normalized
    • b = constant (range) value (Figure 3)
  • Rename, copy, and delete curves.
  • Test and set curve values. If a trace value satisfies a user-specified criterion, then modify the value as specified.
  • Interactively view and/or edit curve values depth by depth.
  • Enter or modify well heading information.
  • Enter or modify run specific information (such as mud properties, bottom hole temperature, true depth, and service company).
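Two of the editing options above, normalization and discrimination, can be sketched as follows. This is a minimal illustration: the null value of -999.25 is a common wireline convention but an assumption here, and the function names are invented for the example.

```python
NULL = -999.25  # common wireline null-value convention (assumed here)

def normalize(trace, a, b):
    """Apply the normalization equation y = a*x + b to every non-null value."""
    return [a * x + b if x != NULL else NULL for x in trace]

def discriminate(trace, vmin=None, vmax=None):
    """Build a discriminator trace: flag (True) any depth whose primary-trace
    value is null or falls outside [vmin, vmax]. The primary trace itself is
    left unmodified; displays and calculations apply the flags later."""
    flags = []
    for x in trace:
        out = (x == NULL
               or (vmin is not None and x < vmin)
               or (vmax is not None and x > vmax))
        flags.append(out)
    return flags

density = [2.45, 2.60, NULL, 2.10]
print(normalize(density, a=1.0, b=0.05))  # range shift only; nulls preserved
print(discriminate(density, vmin=2.2))    # flags the null and the low reading
```

Note that discrimination deliberately never alters the measured values; only the separate flag trace records which depths to exclude.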

Environmental corrections[edit]

Each service company has numerous unique correction charts and algorithms for each wireline tool. By applying LAP-provided corrections for mud properties and other run-specific data, the user can quickly remove the effects of environmental conditions local to the tool (such as mud weight, temperature, and borehole salinity) and work with the most technically correct wireline data (see Preprocessing of logging data).
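As one concrete example of the kind of run-specific correction a LAP automates, Arps' approximation adjusts a water resistivity measured at surface temperature to formation temperature. This is a widely published chart-book relationship, shown here as a sketch; it stands in for, and is much simpler than, the proprietary tool-specific corrections each service company provides.

```python
def rw_at_temperature(rw1, t1_f, t2_f):
    """Arps' approximation: correct a water resistivity (ohm-m) measured at
    temperature t1_f (degF) to formation temperature t2_f (degF)."""
    return rw1 * (t1_f + 6.77) / (t2_f + 6.77)

# A sample measured at 75 degF, corrected to a 150 degF formation
print(rw_at_temperature(0.50, 75.0, 150.0))
```

The hotter the formation, the lower the effective water resistivity, which is why applying the correction before any saturation calculation matters.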

Data display[edit]

Figure 4 A traceplot displays trace values by their depth of occurrence. Users should carefully plan details of the display to maximize visual impact, legibility, amount of information conveyed, and any logical relationships in the data. (Traceplot copyright Schlumberger. Faciolog is a trademark of Schlumberger.)

Data display is the most frequently used LAP feature. Properly displayed data allows users to do the following:

  • Accurately present both raw and interpreted data in a concise and meaningful format.
  • Visually examine data from multiple wells using exactly the same scales. (Note that this usually requires consistent trace naming, matrix parameters, and decimal/percent conventions for all wells in the database.)
  • Select consistent parameters for detailed log analysis calculations.
  • Visually correlate all geological, core, wireline, geophysical, and engineering data both within each well and from well to well.

Visual examination of a specific formation across a broad area does the following:

  • Encourages relative and absolute value comparisons, thereby promoting generally improved log quality control.
  • Allows determination of individual trace normalization values for each well.
  • Highlights geological features and data trends through visual pattern recognition.

All three of the most common graphics displays (traceplots, crossplots, and histograms) have optional interactive features that allow users to quickly display and/or identify meaningful information on the screen.

Traceplots[edit]

Traceplots (TPLTs) visually relate data values to depth. When planning any TPLT display, careful use of display variables (including scales, intervals, grids, track quantities and widths, number of curves, line types and weights, colors, symbols, spacing, shading, and annotation) can convey an immense amount of information without overwhelming an observer (Figure 4). Some of the more powerful LAPs allow interactive TPLT display, correlation, and database storage of formation tops from one or more wells displayed simultaneously on the screen.

Crossplots[edit]

Crossplots (XPLTs) relate two or more different trace values to each other at the same depth, such as core porosity and bulk density. Each type of wireline tool measures a different rock property. By studying the same XPLT in many wells, distinctive data plot patterns related to these rock properties allow users to identify lithologies, porosities, parameters, and other geological and/or engineering relationships. Several interactive graphic XPLT techniques and other features have been developed that make crossplots even more useful and powerful:

  • Pickett plots, which are log-log plots of resistivity versus porosity (Figure 5), allow interactive identification of the parameters m (the cementation exponent) and the product a × Rw (empirical constant × formation water resistivity), as well as visually displaying water saturation (Sw).
  • Polygon isolators drawn on the screen around patterns recognized by the user (left side of Figure 6) identify the enclosed data in the database for future reference. A dual XPLT/TPLT screen display of the same data (Figure 6) allows XPLT pattern recognition and isolation, and TPLT depth identification (usually with tic marks or color shading at the corresponding TPLT depths). The reverse procedure (TPLT depth interval isolation and XPLT identification) is also useful. Storage and redisplay of the same polygon (using the same XPLT) on another well's data reinforces previously recognized patterns.
  • Chart overlays (only available for certain data combinations), relating wireline data to known lithologies and total porosity.
  • Statistical and user-drawn best fit lines and/or curves.
  • Color selection to enhance the display.
  • Drop options to remove low frequency data from cell plots.
  • z plots, which superimpose a third (z) trace value on top of the x and y coordinates. (Its value can be indicated by a letter, number, or color.)
  • Three-dimensional plots, allowing rotation of the XPLT around the x, y, and z axes. This usually requires specialized high performance graphics hardware.
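The relationship underlying the Pickett plot is Archie's equation, and a few lines of Python show why the plot works: at constant water saturation, resistivity versus porosity is a straight line of slope -m on log-log axes. The parameter values below (a·Rw = 0.05 ohm-m, m = n = 2) are illustrative assumptions, not recommendations.

```python
def archie_sw(rt, phi, m=2.0, n=2.0, a_rw=0.05):
    """Water saturation from Archie's equation:
    Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return (a_rw / (phi ** m * rt)) ** (1.0 / n)

# Points on the Sw = 100% ("wet") line: Rt = a*Rw / phi**m.
# On a log-log crossplot of Rt versus phi these fall on a line of slope -m.
for phi in (0.10, 0.20, 0.30):
    rt_wet = 0.05 / phi ** 2.0  # resistivity when the rock is fully wet
    print(phi, rt_wet, archie_sw(rt_wet, phi))
```

Points plotting above the wet line have resistivities higher than a water-bearing rock of the same porosity would allow, which is how hydrocarbon zones are picked off the plot.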
Figure 7 A two-well histogram allows users to compare data interactively from one well to another by shifting the second well's data across the base well on the screen. A visual best fit is usually satisfactory for determining the amount of normalization required.

Histograms[edit]

Histograms (HISTs) plot a trace's data values against their frequency of occurrence (Figure 3), showing the distribution of data across its range of values. A display of data from two wells on the same HIST (Figure 7) allows users to observe significant data node shifts between the two. Exact values of shifts can be determined by interactively moving data from one well across the other until a visual “best fit” is achieved.

The ability to combine data from many wells into a single composite XPLT or HIST allows the user to see at a glance the entire range and distribution of the data for any trace. Annotations on TPLTs, XPLTs, and HISTs are extremely useful when preparing displays for presentation and reports.

Advanced graphics[edit]

Most of today's high-technology logging tools come with specialized graphic displays of both raw and processed data. Understanding both the derivation and the presentation of the data is essential to interpreting the information presented. Examples include borehole imaging and dipmeter data.

Data processing and output[edit]

Common LAP processing methods include statistics, simple equations, and models. Statistics can vary from simple to complex, and they vary from LAP to LAP. Simple equations are usually predefined in both batch and interactive LAPs. They generally cover the most common types of calculations. Some LAPs allow users to define and process their own one-line equations.
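A classic example of the one-line equations LAPs predefine is porosity from bulk density. The default matrix and fluid densities below are textbook sandstone/fresh-water values, used here only for illustration.

```python
def density_porosity(rhob, rho_matrix=2.65, rho_fluid=1.0):
    """One-line log analysis equation: porosity from bulk density (g/cc).
    phi = (rho_matrix - rhob) / (rho_matrix - rho_fluid).
    Defaults assume a sandstone matrix and fresh-water fluid."""
    return (rho_matrix - rhob) / (rho_matrix - rho_fluid)

print(density_porosity(2.40))  # a 2.40 g/cc sandstone reading
```

Applied depth by depth to an entire bulk density trace, this produces a continuous porosity trace for storage back into the database.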

Models are more complex than either statistics or equations because they employ logic, data tests, and iterative computations. Four levels of model processing complexity are available.

  • Simple models assume clean rocks of a single matrix type, which greatly simplifies calculations.
  • Advanced models require users to identify and define matrix, shale, water, and other parameters from TPLTs, XPLTs, and/or HISTs. Users must also select which of several predefined lithology models and which shale and water saturation equations will give the most accurate answers. Note that software on logging truck computers usually has level one and level two data-processing capabilities. To benefit from this at the wellsite, users must already be acquainted with the service company's software prior to arrival and must provide the logging engineer with accurate parameters.
  • Expert level models use predefined tool response equations for each selected mineral to statistically estimate mineral, porosity, and fluid percentages in the formation. The user selects minerals presumed to be present and designates which (environmentally corrected) wireline data to be used. Results are checked by monitoring statistical variations and by reconstructing raw input data values based on the interpreted volumes. Reconstructed values are then compared with original values. A good visual overlay implies a reasonably accurate interpretation. Differences require addition or deletion of selected minerals and/or modification of parameter values.
  • User-defined modeling, the fourth level of data processing, requires knowledge of computer programming. At this level, the user must define constants and variables; determine input and output traces; determine the exact logical order in which equations and tests are to be applied to derive the desired output; and then code, test, and debug the model. These can be simple to complex, depending on the user's skill levels and needs.
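The expert-level approach can be illustrated with a miniature version: treat each log as a volume-weighted sum of component responses and solve the resulting linear system for the component volumes. The response end points below are generic textbook values, not any vendor's chart, and the three-log/three-unknown setup is a deliberate simplification of the overdetermined statistical estimation real models perform.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule (no external libraries)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]  # replace column i with the measurements
        out.append(det(Ai) / d)
    return out

# Tool response equations for a quartz/calcite/water-filled-porosity model:
#   bulk density (g/cc):       2.65*Vqtz + 2.71*Vcal + 1.00*phi
#   neutron porosity (v/v):   -0.02*Vqtz + 0.00*Vcal + 1.00*phi
#   material balance (unity):  1.00*Vqtz + 1.00*Vcal + 1.00*phi = 1
A = [[2.65, 2.71, 1.00],
     [-0.02, 0.00, 1.00],
     [1.00, 1.00, 1.00]]
measured = [2.338, 0.19, 1.0]  # rhob, neutron phi, unity
vqtz, vcal, phi = solve3(A, measured)
print(vqtz, vcal, phi)
```

Reconstruction works the same way in reverse: multiplying the response matrix by the interpreted volumes regenerates synthetic logs, and a good overlay with the originals supports the interpretation.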

LAPs process depth by depth across specified intervals. These boundaries are typically selected by the user based on interpreted lithology changes selected from logs. Log analysis parameters vary widely from formation to formation (or even smaller intervals) due to changes in geological conditions at the time of deposition. Therefore, the selection of consistent formation tops and intervals across a wide geographic area is of great importance to the final interpretation results. Integration with a common geological database containing formation tops and intervals allows consistent use of the same intervals by geologists, geophysicists, and engineers.

Mainframe computers process data either interactively (while the user waits) or in batch (at the computer's convenience). Many LAPs use both modes.

Data processing results are stored either as a continuous trace or, in the case of a summation model, as a z value (a single-valued datum such as net feet of pay or subsea depth) associated with a well's latitude (x) and longitude (y). Values of z can be either mapped or used for further calculations. Thus, integration with a mapping package allows them to be plotted quickly and easily. Integration of z values with an engineering package allows quick calculation of reservoir volumetrics.

Data can be output from the LAP as computer files, printer listings, and/or graphics plots. Files in various formats can be transferred to tapes, floppy disks, or other computers. Most LAPs allow printer listings of data, and some have more flexible formats than others. Graphics plots take the form designated by the user, usually as TPLTs, XPLTs, or HISTs.

Data management tools[edit]

To simplify use, each LAP vendor has created unique data management tools. These tools, whether implemented in command form or as a menu system, shield the user from the low level computer operating system that manipulates the data and files. In command form, the low level operating system routines (OPEN, READ, COPY, WRITE, and so on) can be condensed into a single LAP command (for example, DEPSHIFT) which is selected by the user. In menu form, the user is presented a series of choices to be made interactively using a mouse or keyboard. The software then performs these low level system routines based on the user's selection.

Although usually transparent to the user, these tools affect software function at every step, from data entry through analysis and presentation of the final product to integration with other software products. How well the data management tools have been designed determines (1) how smoothly and intuitively the user learns and applies the LAP features and (2) how quickly, easily, and flexibly the software processes the data. Ultimately, these tools determine how well the LAP is accepted by user communities throughout the industry.

References[edit]

  1. Froman, N. L., 1989, DLIS—API Digital Log Interchange Standard: The Log Analyst, v. 30, n. 5, p. 390–394.
