Mineral Resource Estimate Methods and Classification
The geological interpretation controls the resource estimate. I realize that sounds reductive applied to a discipline with fifty years of mathematical development behind it, and I am going to spend a disproportionate part of this essay defending the claim.

The estimation method occupies the methodology section of every NI 43-101 technical report. It fills conference programs. Graduate students spend years on it. And when a resource estimate turns out to be wrong, the cause is almost never the estimation method.

The cause is upstream. Somebody drew a wireframe that enclosed rock which did not belong in the domain. Or somebody merged two geological populations that should have been estimated separately. Or the structural model was too simple for the geology. Kriging did exactly what kriging was told to do. It was told to do the wrong thing.

Estimation Methods
Polygonal and IDW

These get short treatment because the problems they create are well understood and not particularly subtle.

Polygonal estimation assigns each sample's grade to a surrounding area. No interpolation, no blending, no spatial model. The nearest data point dictates everything within its polygon. This remains the backbone of grade control at bulk commodity operations in the Pilbara, in Minas Gerais, in Guinea, where drill spacing is tight and geology is laterally continuous over hundreds of meters. At those spacings, the choice of interpolation method barely affects the result. On a wide-spaced exploration program for a structurally complex gold deposit, polygonal estimation produces a grade-tonnage curve that overstates selectivity because it preserves the full sample variance at a support that does not correspond to the mining selectivity. That is a well-known limitation and I do not need to elaborate on it.

IDW interpolates by weighting samples inversely to distance raised to a power exponent. The exponent gets set to 2 on the vast majority of projects because 2 is the software default. Calibrating the exponent to the deposit's variogram takes a couple of hours: high nugget-to-sill ratio calls for a low exponent, short range calls for a high exponent. This calibration is straightforward and is almost universally skipped. The consequence is felt most strongly in peripheral blocks near classification boundaries, which are the blocks where the choice of interpolation parameters matters most and where the least attention is typically paid.
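To make the exponent's role concrete, here is a minimal sketch of the weighting arithmetic. The composite positions, grades, and the function name are invented for illustration and do not come from any particular package.

import numpy as np

def idw_estimate(sample_xyz, sample_grades, block_xyz, power=2.0):
    # Inverse-distance-weighted estimate of one block from nearby composites.
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    d = np.maximum(d, 1e-6)              # guard against a composite sitting on the block centroid
    w = 1.0 / d ** power                 # weights fall off with distance^power
    return np.sum(w * sample_grades) / np.sum(w)

# Hypothetical composites around a block centroid at the origin (metres, g/t)
xyz = np.array([[10.0, 0.0, 0.0], [0.0, 30.0, 0.0], [0.0, 0.0, 60.0]])
grades = np.array([5.0, 1.0, 0.5])
for p in (1.0, 2.0, 4.0):
    print(p, round(idw_estimate(xyz, grades, np.zeros(3), power=p), 2))
# A low exponent spreads weight toward distant composites (the right direction for
# a high nugget); a high exponent lets the nearest composite dominate, approaching
# the polygonal, nearest-sample case in the limit.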

Kriging

Ordinary kriging determines block grades from composite grades using weights derived from a variogram model. The mathematics were established by Matheron in the 1960s and refined over the following decades. Implementing kriging on a deposit requires dozens of decisions that the mathematics do not constrain, and those decisions shape the resource estimate as much as the method itself.

The variogram. The experimental variogram is computed from sample pairs binned by lag distance and direction. The resulting points are always noisy. A mathematical function (spherical, exponential, nested combinations) is fitted through these points by hand in software such as Snowden Supervisor, the variogram tools in Leapfrog, or SAGE in Datamine. Automated fitting routines exist and produce results that are technically optimal in a least-squares sense and geologically unreasonable in practice, so manual fitting remains universal.

The manual fit is subjective. The nugget, sill, and range are adjusted until the curve passes through or near the experimental points in a way the geostatistician considers geologically plausible. The experimental data rarely constrain the model tightly enough to produce a unique answer. A nugget of 20% of the sill and a nugget of 35% of the sill can both be drawn through the same cloud of experimental points without obvious misfit. The range can vary by a factor of 1.5 or more within the plausible envelope.
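A sketch of what that ambiguity looks like in numbers: a single-structure spherical model evaluated against a hypothetical experimental cloud, with two parameter sets that both sit within the scatter. The experimental points and both fits are invented for illustration.

import numpy as np

def spherical(h, nugget, sill, rng):
    # Single-structure spherical variogram model.
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0, 0.0, g))

# Hypothetical experimental points (lag in metres, gamma), noisy as they always are
lags  = np.array([10, 20, 30, 40, 50, 60, 80])
gamma = np.array([0.42, 0.55, 0.63, 0.79, 0.83, 0.96, 1.05])

fit_a = dict(nugget=0.20, sill=1.0, rng=78.0)   # low-nugget reading of the cloud
fit_b = dict(nugget=0.35, sill=1.0, rng=95.0)   # high-nugget reading of the same cloud
for name, p in (("A", fit_a), ("B", fit_b)):
    print(name, np.round(gamma - spherical(lags, **p), 2))
# Both fits leave modest residuals against the same experimental points; neither
# is obviously wrong, yet they imply different kriging weights and different
# Indicated footprints downstream.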

These differences cascade through the estimate in specific ways. A lower nugget gives heavier weight to the nearest composite, preserves more local grade variation, and produces a grade-tonnage curve that retains more of its original spread. A higher nugget distributes weight more evenly among surrounding composites, smooths grade variation, compresses the grade-tonnage curve, and pushes high-grade block estimates downward and low-grade block estimates upward.

The range has a different effect. A longer range means the estimate extends further from drill holes. More blocks fall within the zone of data influence. More blocks receive estimates with low kriging variance. More blocks pass whatever geometric threshold the estimator has chosen for Indicated classification. A shorter range confines the estimate tightly around each hole and leaves more of the deposit in the Inferred or unclassified category.
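Both effects can be seen by solving the ordinary kriging system directly. The sketch below kriges a single location from four hypothetical composites under a total sill of 1.0, then varies the nugget and the range; the geometry and parameter values are invented for illustration.

import numpy as np

def spherical_cov(h, nugget, sill, rng):
    # Covariance form of a single-structure spherical model: C(h) = sill - gamma(h).
    h = np.asarray(h, dtype=float)
    gamma = np.where(h == 0, 0.0,
                     np.where(h >= rng, sill,
                              nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)))
    return sill - gamma

def ordinary_krige(sample_xyz, target_xyz, nugget, sill, rng):
    # Solve the OK system for one location; return the weights and the kriging variance.
    n = len(sample_xyz)
    d_ss = np.linalg.norm(sample_xyz[:, None, :] - sample_xyz[None, :, :], axis=2)
    d_st = np.linalg.norm(sample_xyz - target_xyz, axis=1)
    lhs = np.ones((n + 1, n + 1))
    lhs[n, n] = 0.0
    lhs[:n, :n] = spherical_cov(d_ss, nugget, sill, rng)
    rhs = np.append(spherical_cov(d_st, nugget, sill, rng), 1.0)
    sol = np.linalg.solve(lhs, rhs)
    weights, mu = sol[:n], sol[n]
    return weights, sill - (weights @ rhs[:n] + mu)

# One composite close to the target, three further out (coordinates in metres)
xyz = np.array([[5.0, 0.0, 0.0], [25.0, 10.0, 0.0], [30.0, -20.0, 0.0], [-40.0, 15.0, 0.0]])
target = np.zeros(3)
for nugget, rng in [(0.2, 60.0), (0.35, 60.0), (0.2, 90.0)]:
    w, kv = ordinary_krige(xyz, target, nugget, sill=1.0, rng=rng)
    print(f"nugget={nugget} range={rng}: weights={np.round(w, 2)} kriging variance={kv:.2f}")
# Raising the nugget moves weight off the nearest composite and onto the others
# (more smoothing); lengthening the range lowers the kriging variance for the same
# configuration, which is what pushes more peripheral blocks over a variance-based
# Indicated threshold.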

So the variogram fitting session simultaneously determines how smooth the grade model will be and how large the Indicated resource will be. Both outcomes are embedded in a visual exercise that takes a few hours to a few days depending on the project, the number of domains, and the practitioner.

Search parameters. After the variogram is set, the kriging search needs configuration: ellipse dimensions, minimum and maximum number of composites, and octant or sector restrictions. Vann, Bertoli, and Jackson published a systematic method for testing these parameters in a paper at the 2003 International Mining Geology Conference in Bendigo, titled "Quantitative Kriging Neighbourhood Analysis for the Mining Geologist." The approach, called Kriging Neighbourhood Analysis (KNA), tests many parameter combinations against the slope of regression, kriging efficiency, and weight distributions. The parameter set that minimizes conditional bias while maintaining reasonable kriging efficiency gets selected.
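The two headline KNA statistics are simple to compute once the kriging run reports the block variance, kriging variance, and Lagrange multiplier. The sketch below uses the forms commonly quoted in the KNA literature; the trial numbers are hypothetical.

def kriging_efficiency(block_var, kriging_var):
    # KE = (block variance - kriging variance) / block variance
    return (block_var - kriging_var) / block_var

def slope_of_regression(block_var, kriging_var, lagrange_mu):
    # Commonly quoted form: (BV - KV + |mu|) / (BV - KV + 2|mu|)
    a = block_var - kriging_var + abs(lagrange_mu)
    return a / (a + abs(lagrange_mu))

# Hypothetical kriging runs on the same block with progressively larger searches:
# (max composites, kriging variance, Lagrange multiplier); block variance = 0.60
for n_max, kv, mu in [(8, 0.38, 0.09), (16, 0.31, 0.05), (32, 0.28, 0.03), (48, 0.27, 0.03)]:
    print(f"max comps {n_max:>2}: KE={kriging_efficiency(0.60, kv):.2f} "
          f"slope={slope_of_regression(0.60, kv, mu):.2f}")
# The search is normally fixed where the slope approaches 1 and the efficiency stops
# improving, rather than by simply maximizing the composite count.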

KNA should be standard practice. On a meaningful fraction of projects, it is not performed. Search parameters are instead set by precedent from previous projects or by informal experimentation. The difference between a well-tuned and a poorly-tuned search can shift mean block grades in peripheral zones by 5% to 10%, and peripheral zones are where classification boundaries tend to fall.

A recurring issue in deposits drilled from surface at steep angles: vertical data density is high, lateral density is low. If the search requires composites from multiple octants, the algorithm pulls in distant lateral data to fill octant quotas. Those composites may sit in a different geological setting. They dilute the local estimate with information from rock that has different grade characteristics. Relaxing the octant restriction eliminates this problem and creates another: the nearest (vertical) composites dominate the estimate and provide no lateral constraint. The correct setting depends on the local data geometry, and kriging uses a single global parameter set applied everywhere in the domain.

On Kriging Variance

Kriging variance appears as a classification criterion in nearly every NI 43-101 technical report filed on SEDAR that uses kriging. It depends on the data configuration and the variogram model. It does not depend on the data values. A block surrounded by composites at 0.1 g/t gold and a block surrounded by composites at 20 g/t gold, in identical spatial arrangements, produce identical kriging variance. Isaaks and Srivastava's 1989 textbook "An Introduction to Applied Geostatistics" and the Deutsch and Journel GSLIB manual both explain this limitation clearly. The point has been made in print for over 30 years. Kriging variance remains the dominant quantitative classification criterion in public resource reports because it is a standard software output, it maps intuitively to the idea that blocks near drill holes are better known, and replacing it with conditional simulation requires work that many project budgets do not accommodate.
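The point is easy to demonstrate: build the OK system from a covariance model and a data configuration, and the assays never enter it. The covariance numbers below are hypothetical.

import numpy as np

# Hypothetical covariances among three composites, and between each composite and
# the block. Only these, never the assays, enter the ordinary kriging system.
C = np.array([[1.00, 0.45, 0.30],
              [0.45, 1.00, 0.55],
              [0.30, 0.55, 1.00]])
c0 = np.array([0.70, 0.50, 0.35])

lhs = np.ones((4, 4))
lhs[3, 3] = 0.0
lhs[:3, :3] = C
sol = np.linalg.solve(lhs, np.append(c0, 1.0))
w, mu = sol[:3], sol[3]
kriging_variance = 1.0 - (w @ c0 + mu)

for assays in (np.array([0.08, 0.11, 0.13]), np.array([18.0, 22.0, 25.0])):
    print("estimate:", round(float(w @ assays), 2), " kriging variance:", round(float(kriging_variance), 3))
# The estimate scales with the assays; the kriging variance does not move, because
# the data values never appear in the system that produces it.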

Smoothing. Ordinary kriging is a minimum-error-variance estimator. Minimizing estimation variance pulls extreme block grades toward the local mean. The grade-tonnage curve from a kriged model is narrower than the curve that would result from perfect knowledge. Blocks that should be high grade are estimated too low. Blocks that should be low grade are estimated too high.

Reconciliation data from Gold Fields' St Ives operations in Western Australia, presented at AusIMM conferences during the 2000s, demonstrated this pattern: kriged models overpredicted grade in marginal stopes and underpredicted grade in high-grade stopes. The errors partially cancelled at the annual scale, producing aggregate reconciliation within about 5% to 10%, while individual stope-level discrepancies ran to 15% or more in some domains. Block-level reconciliation data, comparing each mined block to its model prediction, are collected at some operations and almost never published; I return to them in the reconciliation section, where the published examples confirm what theory predicts: local prediction error is much larger than aggregate reconciliation suggests.

MIK

Multiple indicator kriging estimates a grade distribution at each block instead of a single grade, by kriging binary indicator variables at multiple thresholds. The output is a conditional cumulative distribution function, from which tonnage and grade above any cutoff can be calculated directly.
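A sketch of the two mechanical steps, with invented thresholds, composites, and block probabilities. The per-class mean grades, including the upper-tail value, are assumptions of the kind a real MIK run takes from the declustered composite distribution.

import numpy as np

# Step 1: indicator coding. Each composite becomes a 0/1 value at each threshold,
# and each column is kriged as its own variable in a full MIK run.
thresholds = np.array([0.3, 0.5, 1.0, 2.0, 5.0])              # g/t, hypothetical
composites = np.array([0.12, 0.45, 0.80, 2.60, 7.10])
print((composites[:, None] > thresholds[None, :]).astype(int))

# Step 2: from a block's kriged exceedance probabilities (its CCDF) to the
# recoverable proportion and mean grade above a cutoff of 1.0 g/t.
p_exceed = np.array([0.85, 0.70, 0.45, 0.20, 0.05])           # P(grade > threshold)
class_prob = p_exceed - np.append(p_exceed[1:], 0.0)          # probability within each class
class_mean = np.array([0.4, 0.7, 1.5, 3.5, 8.0])              # assumed per-class mean grades
cut = 2                                                       # index of the 1.0 g/t threshold

prop_above = p_exceed[cut]
grade_above = np.sum(class_prob[cut:] * class_mean[cut:]) / prop_above
print(round(float(prop_above), 2), round(float(grade_above), 2))
# The upper-tail class mean (8.0 g/t here) is an assumption, and the recovered
# grade above cutoff is sensitive to it.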

MIK matters most for deposits where the economic cutoff grade sits in the central part of the grade distribution. The 2014 edition of AusIMM Monograph 23 ("Mineral Resource and Ore Reserve Estimation: The AusIMM Guide to Good Practice") includes comparative examples showing that MIK-derived recoverable resources can differ from OK-derived recoverable resources by 15% to 25% in contained metal for disseminated gold deposits with moderate cutoff grades. That range represents the difference between a viable project and a failed one in many cases.

Implementation problems are genuine. Nine indicator thresholds and three principal directions produce 27 variograms. The variograms at extreme thresholds (lowest and highest grade cutoffs) are computed from the fewest data pairs, show the most scatter, and get fitted with the least care because time runs short.

These extreme-threshold variograms control the tails of the grade distribution, which is where economic sensitivity concentrates.

Order-relation violations are the other persistent issue. Because each indicator threshold is kriged independently, the resulting conditional CDF at a block can violate monotonicity: the estimated probability of exceeding a higher threshold can be greater than the probability of exceeding a lower threshold. Software corrects this post hoc. In deposits with bimodal grade distributions, the corrections can be aggressive, shifting recovered tonnage above cutoff by enough to affect project economics. The corrections are invisible in the final reported model and are checked by very few external reviewers.
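One common correction, described in the GSLIB literature, averages a pass that forces the exceedance probabilities to be non-increasing from the left with a pass that forces the same from the right. A minimal sketch with an invented violation:

import numpy as np

def correct_order_relations(p_exceed):
    # Average of two monotonicity-enforcing passes over kriged exceedance
    # probabilities, which should be non-increasing across thresholds but,
    # having been kriged independently, sometimes are not.
    p = np.clip(np.asarray(p_exceed, dtype=float), 0.0, 1.0)
    fwd = np.minimum.accumulate(p)                 # enforce non-increasing, left to right
    bwd = np.maximum.accumulate(p[::-1])[::-1]     # enforce non-increasing, right to left
    return 0.5 * (fwd + bwd)

# Hypothetical block with a violation: P(>2.0 g/t) kriged higher than P(>1.0 g/t)
print(np.round(correct_order_relations([0.80, 0.55, 0.40, 0.47, 0.10]), 3))
# The corrected CCDF is monotonic, but probability mass around the violated
# thresholds has been shifted, and that shift is invisible in the final model.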

BHP's Olympic Dam resource estimates used indicator kriging approaches for the polymetallic system (copper, uranium, gold, silver, each element with different spatial continuity). Indicator methods were necessary there because the grade populations for different elements had fundamentally different spatial structures that ordinary kriging of a single variable could not accommodate.

Conditional Simulation

Sequential Gaussian Simulation generates multiple equally probable grade-field realizations, each conditioned to the drill data, reproducing the variogram and the global histogram. The spread of simulated block grades across realizations provides a direct measure of uncertainty at every location.

The back-transformation problem limits application in deposits with strongly skewed grade distributions, which includes nearly all precious metal deposits. SGS works in Gaussian (normal score) space. The original grades are transformed to a standard normal distribution, the simulation runs in that space, and the simulated values are back-transformed. For skewed distributions, small perturbations in the upper tail of the Gaussian space back-transform to very large grade differences in original units. Managing this requires capping Gaussian values, modifying the transformation function, or applying the minimum acceptance criteria described by Leuangthong, McLennan, and Deutsch in their 2004 paper in Natural Resources Research ("Minimum Acceptance Criteria for Geostatistical Realizations"). The validation procedure they describe (checking that each realization reproduces the histogram, the variogram, and the data values) is straightforward in concept and performed with varying rigor in practice.
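The tail sensitivity is easy to show with a lognormal stand-in for the back-transformation table; real SGS builds the table empirically from declustered composites, but the behaviour in the upper tail is the same. The distribution parameters below are assumed.

import numpy as np

# Assumed lognormal grade distribution: median 1 g/t, log-standard-deviation 1.2.
m, s = np.log(1.0), 1.2

def back_transform(y):
    # Map a Gaussian score y back to an original-units grade.
    return np.exp(m + s * y)

for y in (0.0, 1.5, 2.5, 3.0):
    delta = back_transform(y + 0.1) - back_transform(y)
    print(f"y={y:.1f}: grade={back_transform(y):6.2f} g/t, +0.1 in Gaussian space -> +{delta:.2f} g/t")
# Near the median a 0.1 shift in Gaussian space moves the grade by roughly a tenth
# of a gram; at y = 3 the same shift moves it by several grams, which is why the
# tail treatment dominates simulated metal above cutoff.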

Emery at the University of Chile has published frameworks for using simulation output for resource classification, mapping the coefficient of variation of block grades across realizations to JORC and CIM confidence categories. Snowden (now part of Datamine) applied these frameworks operationally at mines in Australia and West Africa during the 2000s and 2010s. The consistent finding across multiple published applications was that simulation-based classification produced smaller Measured and Indicated footprints than classification based on kriging variance and drill spacing.

The 2019 CIM Best Practice Guidelines for Estimation of Mineral Resources and Mineral Reserves explicitly endorse conditional simulation as a classification tool. JORC 2012 does not prescribe any specific method, so simulation-based classification is permissible under all major reporting codes. Adoption remains low. I will return to why in the section on institutional context.

Geological Domains

This section is longer than the estimation methods section because this is where the largest resource estimation failures originate.

A geological domain is a volume within which the grade population is treated as statistically homogeneous. Composites from inside the domain inform blocks inside the domain. Composites from outside do not cross the boundary. The boundary is interpretive.

In the early 2000s, wireframes were built by digitizing polylines on vertical cross-sections in Vulcan or Datamine and linking them with triangulated surfaces. Each polyline was a deliberate choice. The geologist examined the drill logs, the assays, and whatever structural and lithological data existed, and drew a line. When a wireframe produced an unusual shape between sections, the geologist could trace it back to their own linework and ask whether they had drawn it wrong.

Leapfrog changed the workflow after about 2010 by generating surfaces implicitly from data points using radial basis functions. The geologist specifies trend direction, boundary offset, and interpolation resolution. The algorithm creates a surface. Faster, smoother, more reproducible between runs. Also more difficult to interrogate. When an implicit surface produces unexpected geometry between drill holes, the question is which parameter setting caused it, and the answer is often buried in the interaction of multiple settings.

Both workflows produce adequate domain models when the geological controls on grade are well understood and the data resolve them. Both fail when the geological interpretation is wrong.

Case Study — Rubicon Minerals

Rubicon Minerals, Phoenix Gold Project, Red Lake, Ontario. The 2012 resource estimate reported 6.2 million tonnes at 8.8 g/t gold in the Indicated and Inferred categories. The deposit sits in the Red Lake greenstone belt, host to what was then Goldcorp's Red Lake mine (now operated by Evolution Mining following a series of acquisitions). The Phoenix orebody was interpreted from surface diamond drilling as a series of broad, gently dipping mineralized zones. Domain wireframes enclosed large continuous volumes. High-grade intercepts, some in the tens and hundreds of grams per tonne over narrow intervals, were used to estimate grades within these broad volumes.

Underground development beginning in 2015 revealed a different geometry. Gold was concentrated in narrow, steeply plunging high-grade shoots at structural intersections, surrounded by weakly mineralized or barren rock. The broad wireframes had incorporated both the shoots and the surrounding rock into single estimation domains. Every high-grade composite from a drill hole that pierced a shoot was used to inform blocks in ground that lay between shoots and contained little gold. The resource was recast at drastically lower tonnage and grade. Rubicon's share price fell from over C$4 in early 2015 to under C$0.20 by early 2016. The mine went on care and maintenance. The company was eventually renamed Battle North Gold and was acquired by Evolution Mining in 2021 for a price that valued the asset at a small fraction of its earlier implied valuation.

The estimation method was ordinary kriging, competently implemented. The variograms were modeled. The QAQC was adequate. The failure was in the wireframes. The domains enclosed rock that did not belong.

Case Study — Troy Resources

Troy Resources, Casposo mine, San Juan Province, Argentina. The Kamila and Julieta vein systems were wireframed as continuous over the strike lengths that the drill spacing appeared to support. Underground mining from 2014 onward encountered faulting and structural offsets that disrupted vein continuity between drill holes. The veins existed, and the grades within the veins were broadly as predicted, but the veins were not where the wireframes placed them between drill holes. Structural offsets moved the vein away from its interpolated position. Quarterly production reconciled below the model prediction. Troy took impairments in 2016 and 2017 and eventually divested the asset.

The character of this failure is different from Rubicon. At Rubicon, the high-grade material occupied a smaller volume than the wireframes assumed. At Casposo, the grade and volume were approximately right, but the spatial position between drill holes was wrong. Kriging estimated grades accurately for the wireframed locations. The vein was at a different location. Both outcomes produce the same operational result: the mine plan encounters something different from what the model predicted.

Case Study — Beadell Resources

Beadell Resources, Tucano mine, Amapá, Brazil. Between 2014 and 2017, Beadell reported mined grades consistently below model predictions. Technical reviews identified problems with domain definition at the saprolite-fresh rock boundary. In tropical weathering environments, grade distribution, bulk density, and metallurgical response all change across the weathering profile. If the domain model does not separate saprolite from fresh rock adequately, or if the boundary is placed inaccurately, composites from fresh rock (typically higher grade, higher density) inform blocks in saprolite (lower grade, lower density, different metallurgy). The model overestimates both grade and tonnage in the saprolite zone. Beadell's reconciliation issues were not attributable to a single cause (mining dilution and other operational factors also contributed), but the weathering boundary domain problem was specifically identified.

Three different deposits. Three different geological settings. Three different countries. Same upstream cause: the estimation method performed as specified, and the geological model fed into it was inadequate.

Vein-hosted deposits carry the highest domain risk because the geometry of a vein system below the resolution of the drill pattern is unknowable without underground exposure. A drill program at 25-meter spacing constrains the position and grade of a shear-hosted vein at each pierce point. Between pierce points, the vein could pinch, swell, offset along a cross-cutting fault, or terminate entirely. The wireframe interpolates between pierce points using whatever algorithm or manual technique the interpreter chose, and if the interpreter assumes continuity in ground where the vein is discontinuous, the domain contains barren rock that gets estimated at ore grade.

Porphyry copper deposits present the opposite problem: the mineralized envelope is large and gradational, and the question is where to draw the outer boundary. Pushing the wireframe outward by 50 meters adds significant tonnage at a marginal grade. Pulling it inward reduces tonnage. Both positions can be geologically defensible when the boundary is gradational rather than sharp. The CIM Best Practice Guidelines recommend that domain boundaries follow geological contacts where possible and that grade shells be avoided as primary domain definitions. Grade shells are widely used anyway, especially at early project stages where the geological understanding is insufficient to define domains on a lithological or structural basis.

Classification

JORC 2012 (Clause 22) defines Indicated resources as estimated with "sufficient" confidence to assume geological and grade continuity. Measured requires evidence to "confirm" continuity. CIM 2014 and 2019 use similar language. Neither code quantifies "sufficient" or "confirm." Glacken and Snowden, in their contributions to the AusIMM Monograph 23, explained the reason for this: a numerical threshold that works for Pilbara iron ore channel deposits, where grade continuity extends over hundreds of meters, would be nonsensical for Witwatersrand gold reefs or Carlin-type systems. The qualitative framework forces deposit-specific justification. It also guarantees inconsistency between practitioners, which is the acknowledged cost.

Classification on most projects begins with drill spacing thresholds inherited from experience on deposits of similar type. Porphyry coppers: Indicated around 50 to 80 meters, Measured around 25 to 40 meters. Narrow-vein gold: Indicated around 20 to 25 meters, Measured around 10 to 15 meters. These thresholds circulate within consulting firms. They migrate between projects. A threshold that a senior geostatistician at SRK or AMC or Cube applied to a deposit with 90-meter variogram ranges gets inherited by the next project, which may have 45-meter ranges and require a completely different drill spacing for the same level of confidence.

The rigorous way to establish classification thresholds is a drill-spacing study. Take a well-drilled portion of the deposit, systematically thin the data (remove every second hole, then two of three, then three of four), re-estimate at each density, and measure how the estimate degrades. The density at which the degradation exceeds an acceptable limit marks the boundary between classification categories. At the feasibility stage on a major project, this work usually gets done because the lender's technical advisor will ask for it. At the maiden resource stage on a TSX-V junior exploration project, the budget for the entire resource estimate may be C$100,000 to C$150,000, and a proper spacing study for a multi-domain deposit takes two to four weeks of a geostatistician's time. The study is not done. The thresholds are borrowed from the previous project.
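A sketch of the thinning workflow on a synthetic stand-in for a well-drilled zone. A real study thins actual holes and re-kriges each subset domain by domain; the sketch uses an invented grade surface and IDW purely to keep it short.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic "well-drilled zone": holes on a 12.5 m grid sampling a smooth trend plus noise.
xs, ys = np.meshgrid(np.arange(0, 200, 12.5), np.arange(0, 200, 12.5))
holes = np.column_stack([xs.ravel(), ys.ravel()])
hole_grades = 1.0 + 0.8 * np.sin(holes[:, 0] / 40.0) + rng.normal(0, 0.3, len(holes))
bx, by = np.meshgrid(np.arange(5, 200, 10.0), np.arange(5, 200, 10.0))
blocks = np.column_stack([bx.ravel(), by.ravel()])

def idw(sample_xy, sample_z, targets, power=2.0):
    d = np.maximum(np.linalg.norm(targets[:, None, :] - sample_xy[None, :, :], axis=2), 1e-6)
    w = 1.0 / d ** power
    return (w @ sample_z) / w.sum(axis=1)

baseline = idw(holes, hole_grades, blocks)          # estimate from the full dataset
for keep_one_in in (2, 3, 4):                       # progressively sparser subsets
    idx = np.arange(len(holes)) % keep_one_in == 0
    est = idw(holes[idx], hole_grades[idx], blocks)
    drift = np.mean(np.abs(est - baseline)) / np.mean(baseline)
    print(f"keep 1 hole in {keep_one_in}: mean absolute deviation {drift:.1%} of mean grade")
# The spacing at which the deviation exceeds what the QP is prepared to defend
# becomes the Indicated/Inferred boundary for that domain.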

Slope of regression. This metric measures conditional bias: the systematic tendency of kriging to overestimate blocks that are estimated high and underestimate blocks that are estimated low. A slope of 1.0 means the estimate is conditionally unbiased. A slope of 0.7 means a block estimated at 2.0 g/t should, on average, contain closer to 1.6 g/t (the exact figure depends on the domain mean, since the regression is anchored to it). The difference is smoothing artifact.

Vann, Bertoli, and Jackson in their 2003 paper suggested a minimum slope of 0.8 to 0.9 for Indicated classification. Some consulting firms have adopted internal guidelines in this range. No reporting code mandates a specific threshold. The slope of regression appears as a primary classification criterion in a small fraction of the NI 43-101 technical reports on SEDAR. Kriging variance appears in nearly all of them. In deposits with high nugget effects and moderate drill spacing, the slope of regression for peripheral blocks drops to 0.5 or 0.6, which implies that grade estimates for those blocks carry 20% to 30% conditional bias. Classifying that material as Indicated is difficult to reconcile with "reasonable confidence," but publishing the slope numbers and linking them to classification would mean reclassifying material from Indicated to Inferred, shrinking the resource footprint.

Simulation-based classification uses the coefficient of variation of simulated block grades across conditional simulation realizations as a direct uncertainty measure. Blocks where simulated grades are consistent across realizations are well constrained. Blocks where simulated grades vary widely are poorly constrained. The measure accounts for data configuration, grade variability, nugget effect, and domain geometry simultaneously.
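A sketch of the classification step, starting from a hypothetical array of simulated block grades; in practice the realizations come from SGS conditioned to the drilling, and the CV cutoffs are calibrated per deposit and justified by the Competent Person rather than taken from any code.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical simulation output: 50 realizations of grade for four blocks,
# ranging from tightly to poorly constrained.
block_sims = np.column_stack([
    rng.normal(2.0, 0.15, 50),
    rng.normal(2.0, 0.45, 50),
    rng.normal(1.2, 0.55, 50),
    rng.normal(0.9, 0.60, 50),
])

cv = block_sims.std(axis=0) / block_sims.mean(axis=0)

def classify(c):
    # Assumed CV cutoffs, for illustration only.
    if c < 0.15:
        return "Measured"
    if c < 0.40:
        return "Indicated"
    return "Inferred"

for i, c in enumerate(cv, start=1):
    print(f"block {i}: CV={c:.2f} -> {classify(c)}")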

Emery at the University of Chile and consulting groups including Snowden (now Datamine) published multiple applications of this approach through the 2000s and 2010s, in Australia and West Africa among other jurisdictions. In every published application, simulation-based classification produced smaller Measured and Indicated footprints than the same deposit classified using kriging variance and drill spacing criteria. Traditional methods overstate confidence because they rely on proxies for uncertainty rather than measuring it directly.

The 2019 CIM Best Practice Guidelines explicitly recognize simulation as a valid classification tool. Adoption is low because the output is less favorable to the project proponent than traditional methods, and the Competent Person has no obligation to choose it.

Institutional Context

A junior exploration company lists on the TSX-V, raises initial capital, drills 40 holes into a gold deposit, and commissions a resource estimate from a consulting firm. The estimate needs to show enough Indicated ounces to support the next financing round.

The variogram in the across-strike direction has experimental points that could support a range anywhere between 45 and 65 meters. The QP fits 58. The nugget is defensible between 22% and 36% of the sill. The QP fits 25%. The search ellipse is set at 2.0 times the variogram range. The drill spacing threshold for Indicated is set at 50 meters based on precedent. Each parameter is within the defensible range for the dataset. The cumulative effect is an Indicated resource that is 20% to 30% larger than it would have been if the opposite end of each defensible range had been selected.

Each choice can be justified with geological reasoning. The JORC Code and NI 43-101 place responsibility on the CP or QP and trust their judgment. The parameter ranges are genuinely ambiguous. The commercial context exerts a directional pull on where within those ranges the selected values land.

NI 43-101 was introduced in 2001, primarily in response to the Bre-X fraud at Busang in Borneo (systematic salting of drill samples with alluvial gold, exposed in 1997). The regulation addressed data integrity: QP accountability, mandatory site visits, QAQC disclosure requirements, independent verification of sample preparation and analytical procedures. It was effective for that purpose. Parameter selection within defensible ranges is a different kind of problem and is not amenable to regulatory solutions because the ranges are genuinely ambiguous.

The check on this comes from independent technical reviews during due diligence for project financing or acquisition. SLR Consulting (which now incorporates the former Roscoe Postle Associates), and the in-house technical teams at project finance banks (Macquarie, Societe Generale, Standard Bank among others), go through variograms, search parameters, domain wireframes, and classification criteria. They identify aggressive estimates with some regularity. These reviews only occur at the financing or acquisition stage. The majority of resource estimates filed publicly by TSX-V and ASX junior companies receive no independent technical scrutiny beyond the QP or CP who signed them.

Reconciliation and the Parallel Models

At every operating mine, two geological models coexist. The public resource model meets reporting code requirements, carries classification labels, is updated annually or at longer intervals, and is disclosed to the market. The operational grade control model is built from blast hole assays or close-spaced RC drilling, is updated as often as weekly, carries no classification, and is never disclosed.

Aggregate annual reconciliation (total tonnes milled, head grade, contained metal, compared to the corresponding volume in the public model) runs within about 5% to 10% for well-managed operations. Gold Fields, Newmont, and Barrick have reported numbers in this range at AusIMM and CIM conferences over the years. Aggregate reconciliation can mask large local discrepancies. A stope or bench that delivers 20% more metal than the model predicted compensates in the annual total for one that delivers 18% less. The mine planning team deals with the consequences of these local mismatches (schedule changes, stope redesigns, trucks rerouted from mill to waste dump) while the annual report shows a number within tolerance.

Block-by-block reconciliation, comparing each mined block or stope to its model prediction, is the definitive test of estimation quality. It is collected at some operations and published almost never.
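The arithmetic that separates the aggregate view from the block-level view is simple. The stope figures below are invented, shaped only to show an aggregate inside tolerance sitting on top of large, partly cancelling local errors.

import numpy as np

# Invented stope-level reconciliation for one year: model prediction vs
# mill-reconciled gold (kg per stope).
model  = np.array([95, 120, 80, 150, 60, 110, 75, 140, 90, 105, 85, 130])
actual = np.array([78, 138, 66, 171, 49, 126, 60, 168, 74, 118, 70, 152])

aggregate = actual.sum() / model.sum() - 1.0
per_stope = actual / model - 1.0

print(f"aggregate reconciliation: {aggregate:+.1%}")
print(f"mean absolute stope-level error: {np.abs(per_stope).mean():.1%}")
print("per-stope errors:", np.round(per_stope, 2))
# An annual figure inside +/-5% can sit on top of individual stopes that are
# 15-20% off in either direction; the mine plan absorbs those local misses even
# while the aggregate number looks acceptable.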

When mine geologists have presented block-level data at conferences (Gold Fields staff at St Ives; Newcrest staff on Cadia and Telfer), the results confirm the theoretical expectation: kriging smoothing creates systematic local prediction errors much larger than aggregate reconciliation suggests. The public model overestimates grade in marginal zones and underestimates grade in high-grade zones. The operational grade control model, built from much denser data, captures the local variability that the public model cannot. The mine geologist works with one version of the orebody. The equity analyst works with the other.
