IoT Mining Solutions for Smart Monitoring

Mining & Technology March 26, 2026
Most of the IoT monitoring systems installed in underground mines over the past five years are no longer producing information that anyone acts on.
Vibration Analysis

If a mine is going to spend money on IoT sensors for one purpose, spectral vibration monitoring on rotating equipment is the purpose. Gas detection, ventilation monitoring, personnel tracking, and ground instrumentation serve safety and compliance. They do not generate the kind of maintenance cost avoidance that pays for the IoT program on a balance sheet.

A rolling element bearing with an outer race defect generates a vibration impulse every time a ball crosses the damage. The frequency depends on bearing geometry and shaft speed. For a given bearing at a known RPM, the ball pass frequency, outer race (BPFO) is a calculable, fixed number. When damage starts, energy rises at that frequency in the vibration spectrum, with harmonics stacking at integer multiples. A skilled analyst identifies the bearing, the fault type, and roughly how far along the damage is, from the frequency content alone.
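To make "calculable, fixed number" concrete, the standard bearing-geometry formula can be evaluated directly. The bearing dimensions and shaft speed below are hypothetical, a minimal sketch rather than any particular machine:

```python
import math

def bpfo_hz(shaft_rpm: float, n_balls: int, ball_d_mm: float,
            pitch_d_mm: float, contact_angle_deg: float = 0.0) -> float:
    """Ball pass frequency, outer race, for a rolling element bearing.

    Standard geometry formula: BPFO = (n/2) * f_shaft * (1 - (d/D) * cos(phi)),
    where n is the ball count, d the ball diameter, D the pitch diameter,
    and phi the contact angle.
    """
    f_shaft = shaft_rpm / 60.0  # shaft speed in Hz
    ratio = (ball_d_mm / pitch_d_mm) * math.cos(math.radians(contact_angle_deg))
    return (n_balls / 2.0) * f_shaft * (1.0 - ratio)

# Hypothetical bearing: 12 balls, 22 mm ball diameter, 110 mm pitch diameter,
# zero contact angle, on a 1480 RPM motor.
print(round(bpfo_hz(1480, 12, 22.0, 110.0), 1))  # 118.4 Hz
```

Once that number is known, rising energy at 118.4 Hz and its integer multiples in the spectrum points at this bearing's outer race and nothing else on the machine.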

Temperature monitoring on the same bearing tells you something only after the damage has generated enough heat to conduct through the housing. By then the window for a planned repair has usually narrowed to days.

The ML question in vibration monitoring is more complicated than most vendor literature admits.

A high-speed fan bearing with a clear outer race defect on a constant-speed motor is the easy case; automated classifiers catch that. A SAG mill trunnion bearing at 12 RPM, where the fault energy is barely above the noise floor in a region of the spectrum contaminated by low-frequency structural resonance, is the hard one: the classifier guesses, and it guesses wrong often enough to matter, because trunnion bearing failures are among the most expensive single-component failures in a concentrator. A VFD-driven pump that changes speed throughout the shift presents a different problem: the fault frequencies migrate with RPM, and the monitoring system has to order-track and normalize before any comparison to baseline means anything. Plenty of deployed systems do not handle order tracking well, or do not handle it at all, and the result is a chronic stream of false flags on every variable-speed asset.
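The normalization step the variable-speed case requires can be sketched simply: divide the frequency axis by the shaft rotation frequency so speed-dependent fault frequencies become fixed "orders," then resample onto a common order grid. The signals below are synthetic, a minimal illustration of why two captures at different VFD setpoints only become comparable after this step:

```python
import numpy as np

def to_order_spectrum(freqs_hz, amplitudes, shaft_rpm, order_grid):
    """Resample a frequency spectrum onto a fixed grid of shaft orders.

    Dividing the frequency axis by the shaft rotation frequency turns
    speed-dependent fault frequencies into fixed orders, so spectra captured
    at different speeds can be compared against a common baseline.
    """
    f_shaft = shaft_rpm / 60.0
    orders = np.asarray(freqs_hz) / f_shaft           # frequency axis in orders
    return np.interp(order_grid, orders, amplitudes)  # amplitude at each order

# Two synthetic captures of the same 1x component at different speeds:
grid = np.linspace(0.5, 10, 96)
f = np.linspace(0, 500, 1000)
a1 = np.exp(-0.5 * ((f - 24.7) / 1.0) ** 2)   # peak near 1x of 1480 RPM
a2 = np.exp(-0.5 * ((f - 16.5) / 1.0) ** 2)   # same 1x peak at 990 RPM
s1 = to_order_spectrum(f, a1, 1480, grid)
s2 = to_order_spectrum(f, a2, 990, grid)
# Both order spectra now peak at order 1.0; a raw-Hz comparison would not match.
print(float(grid[s1.argmax()]), float(grid[s2.argmax()]))
```

A system without this step compares 24.7 Hz against a baseline built at 16.5 Hz and flags a perfectly healthy machine, which is exactly the chronic false-flag pattern described above.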

And then there is the contextual knowledge problem.

One particular mill at one particular site has always run with a slightly elevated 2x vibration component because of a residual misalignment that was accepted at commissioning. Everyone in the reliability group knows about it. The automated system does not, unless someone went back and specifically labeled years of historical spectra from that mill as "normal despite elevated 2x," which in practice nobody did because the labeling effort focused on fault cases.
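One way to carry that kind of site knowledge without relabeling years of history is a per-asset exemption table that the alerting logic consults. Everything below is hypothetical, the asset ID, band names, and limits are invented for illustration, but the structure is the point: documented, accepted conditions widen the alarm limit for that asset only.

```python
# Hypothetical per-asset exemption table: known, accepted conditions that
# should not trip the generic baseline-deviation alarm. Without something
# like this, the model has no way to encode "normal despite elevated 2x."
KNOWN_CONDITIONS = {
    "mill-07": [
        {"band": "2x",
         "reason": "residual misalignment accepted at commissioning",
         "max_amp_mm_s": 4.5},  # tolerate up to this amplitude in the 2x band
    ],
}

def should_alert(asset_id, band, amplitude_mm_s, baseline_limit_mm_s):
    """Apply the generic baseline limit, widened by documented exceptions."""
    limit = baseline_limit_mm_s
    for exc in KNOWN_CONDITIONS.get(asset_id, []):
        if exc["band"] == band:
            limit = max(limit, exc["max_amp_mm_s"])
    return amplitude_mm_s > limit

print(should_alert("mill-07", "2x", 4.0, 2.5))  # elevated but documented: False
print(should_alert("mill-07", "2x", 5.2, 2.5))  # beyond the documented limit: True
```

The table doubles as documentation: the "reason" field is the commissioning history that otherwise lives only in the reliability group's heads.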

Consider a crusher countershaft where the ore zone changed and the load profile shifted gradually over weeks. The analyst who has been watching that machine for two years connects the spectral shift to the geological zone change because both happened on the same timeline. The model sees only deviation from baseline and generates a flag that wastes a mechanic's time.

Contextual Calls

These contextual calls show up constantly. Out of the machines flagged in a given week, a meaningful fraction are not developing faults. They are load changes, speed setpoint adjustments, seasonal temperature effects on lubricant viscosity, or sensors that have drifted. Without somebody who can sort the real faults from the noise, the system either sends maintenance crews chasing alerts that turn out to be nothing, or the crews learn to ignore the system, and then it misses the fault that was real.

Permanently mounted sensors with edge processing extend two analysts from maybe 70 machines on a manual monthly route to monitoring the entire installed base, with the analysts focusing on the flagged subset. Without the analysts, the flags pile up and the system decays into something the maintenance superintendent checks occasionally and trusts not at all.

Finding people who can do this work is getting harder. The analysts who are good at mine equipment vibration learned it over decades. The certification pipeline produces entry-level generalists. The gap between entry-level generalist and someone who can look at a messy spectrum from a variable-load crusher and tell you what is going on takes years of site-specific mentoring to close.

How These Programs Die

A bearing fault gets detected, maybe four weeks before failure. Alert appears in the IoT platform. The maintenance planner spends the entire shift in the CMMS and has never opened the IoT interface. Two weeks pass. A reliability engineer finds the alert during a periodic review. The bearing has progressed. Parts were not ordered because the CMMS, which drives procurement, had no record of the problem. A routine planned replacement turns into a scramble.

After enough of these episodes, the maintenance team concludes the monitoring system "doesn't work." The monitoring was accurate. The information just never reached the system that controls scheduling and procurement.

Getting an API connection built between the IoT platform and the CMMS sounds simple. In organizational terms, it often is not. The sensors belong to the OT world: industrial hardware maintained by instrumentation technicians. The CMMS is an IT application behind IT-managed security policies. The API sits on the boundary between two departments with conflicting priorities on patching, network access, and device certification. Where nobody has the authority to force decisions through that boundary, the integration stalls in review for months while the sensors sit in crates.
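The technical core of that integration is a translation layer: an alert in the IoT platform's schema becomes a work order in the CMMS's schema. Neither schema below belongs to any real vendor; the field names, severity levels, and asset IDs are all invented for illustration. What the sketch shows is that the mapping itself is small, which is why the months of delay are organizational, not technical:

```python
import json

# Hypothetical severity-to-priority mapping; real systems negotiate this
# between the reliability group and maintenance planning.
SEVERITY_TO_PRIORITY = {"watch": 4, "warning": 3, "alarm": 2, "critical": 1}

def alert_to_work_order(alert: dict) -> dict:
    """Translate an IoT vibration alert into a CMMS work-order payload."""
    return {
        "asset_id": alert["asset_id"],
        "priority": SEVERITY_TO_PRIORITY[alert["severity"]],
        "work_type": "predictive",
        "description": (f"Vibration alert: {alert['fault_type']} "
                        f"at {alert['frequency_hz']} Hz, "
                        f"trend {alert['trend_pct_per_week']}%/week"),
        # Carrying the source alert ID makes the link auditable in both systems.
        "external_ref": alert["alert_id"],
    }

wo = alert_to_work_order({
    "alert_id": "vib-2031", "asset_id": "conveyor-drive-3",
    "severity": "warning", "fault_type": "BPFO",
    "frequency_hz": 118.4, "trend_pct_per_week": 12,
})
print(json.dumps(wo, indent=2))
```

Once something like this runs automatically, the planner who lives in the CMMS sees the bearing fault in the system that drives scheduling and procurement, which is the failure mode the episode above describes.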

The financial death mechanism runs on a longer timeline.

Capital cost covers maybe a fifth of the five-year spend. The rest is subscriptions, sensor replacement (underground conditions destroy sensors much faster than spec sheets imply), communication network maintenance as the mine changes shape, integration upkeep, analyst salaries, technician time, training. Programs approved on capital cost alone look like they are hemorrhaging money within two years as operating costs materialize. Management review hits. Monitoring gets classified as discretionary. Cuts land. Sensor replacements deferred, calibration drift accelerates, data quality degrades, operators trust the system less, which justifies more cuts. Eighteen months of that spiral and the system is technically on and functionally dead.

Industry Reality

There are a lot of mining sites in that state right now.

Data Ownership

Most IoT platforms are SaaS. Sensor data lives in the vendor's cloud. If the mine wants to leave, the historical vibration baselines and fault progression examples, the part of the monitoring program that gets more valuable with every year of accumulation, either stay behind or arrive in a format that takes serious effort to use elsewhere.

Put open-format export and API access into the contract. Do it before signing. Most mines do not, and they discover why it mattered the first time they need to change vendors.

Communication and Edge Processing

RF underground is bad and the mine keeps changing shape around the network nodes. LoRa reaches a few hundred meters in a straight tunnel, less around corners and through ventilation doors. Wi-Fi covers less.

The part people underestimate: communication network maintenance is a permanent, full-time job, not a commissioning deliverable. New headings push past the last node, backfill buries cable runs, ventilation changes put obstructions in previously clear signal paths. The vendor's scope ends at commissioning. The mine discovers the ongoing staffing requirement through coverage gaps that multiply.

When the communication link to surface gets cut (blasting, ground movement, cable damage; it happens multiple times a year), a cloud-dependent monitoring system loses analytical capability during exactly the conditions that most demand monitoring. Edge nodes doing FFT locally keep monitoring underground conditions through the outage. The hardware for this is cheap. Managing hundreds of distributed Linux computers underground with intermittent connectivity and physically difficult access is where the effort hides. The first time a maintenance crew tries to manually update firmware on 150 edge nodes scattered across multiple levels of a mine, it takes weeks. After that, over-the-air update capability stops being optional.
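What "FFT locally" means in practice is reducing a raw waveform to a handful of band energies on the node, so that only small summaries need to cross the unreliable link (and can be buffered through an outage). A minimal sketch, with a synthetic signal and hypothetical band names:

```python
import numpy as np

def edge_band_summary(signal, fs, bands):
    """On-node FFT: reduce a raw waveform to per-band RMS amplitudes.

    Shipping a few band energies instead of the raw waveform is what lets
    an edge node keep monitoring, and buffering results, through a link outage.
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = float(np.sqrt(np.mean(spectrum[mask] ** 2)))
    return out

# Synthetic one-second capture: 25 Hz shaft component plus a weak 118 Hz fault.
fs = 2048
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 25 * t) + 0.2 * np.sin(2 * np.pi * 118 * t)
summary = edge_band_summary(sig, fs, {"1x": (20, 30), "bpfo": (110, 125)})
print(summary)
```

Two floats per capture instead of 2048 samples is the difference between a network that drowns and one that tolerates days of buffering.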

Sensor Mortality

Mineral dust at Mohs 6-7 grinds past IP67 seals through a mechanism that water immersion testing does not replicate. Blasting shock-cycles MEMS structures into gradual sensitivity loss. Condensation from temperature swings corrodes circuit boards inside housings that passed every factory test.

A sensor that silently loses sensitivity over months is worse than a sensor that dies outright, because the analytics downstream treat its output as accurate. Vibration amplitudes get underreported across all frequencies. Faults that should trigger alerts stay below threshold. Everything looks fine. Nothing is fine.

Quarterly cross-checks with a calibrated handheld instrument are the only proven way to catch this. Budget for annual sensor attrition as an operating expense from day one, not as a warranty issue.
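The arithmetic of the cross-check is trivial, which is part of why it works: compare the installed sensor's overall reading against the calibrated handheld at the same measurement point and flag the ratio. The numbers and tolerance below are hypothetical:

```python
def sensitivity_drift(installed_rms, handheld_rms, tolerance=0.15):
    """Quarterly cross-check: installed sensor vs. calibrated handheld.

    A ratio well below 1.0 is the signature of silent sensitivity loss:
    the sensor still reports, but everything it reports is low.
    """
    ratio = installed_rms / handheld_rms
    return ratio, abs(1.0 - ratio) > tolerance

# Hypothetical readings in mm/s RMS at the same bearing housing point:
ratio, flagged = sensitivity_drift(installed_rms=2.1, handheld_rms=3.4)
print(round(ratio, 2), flagged)  # 0.62 True: recalibrate or replace
```

A sensor reporting 62% of the true amplitude is exactly the case the preceding paragraph describes: every threshold downstream of it is silently wrong.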

OEM Telemetry, Geotechnical, Spatial Data

Every major equipment OEM runs a proprietary telemetry platform reading from factory sensors wired deeper into the machine than aftermarket IoT can reach. Diagnostically rich data. Commercially guarded. Most mines end up with two monitoring worlds that share nothing: OEM telemetry on the truck fleet, mine IoT on fixed plant.

Continuous IoT piezometers and extensometers give geotechnical engineers faster visibility than weekly manual rounds. Calling that a predictive capability for ground failure is ahead of where analytical methods stand. Not enough failure examples at any single mine for ML classifiers to learn from. Geotechnical IoT is an awareness tool.

LIDAR and SLAM-based spatial capture of underground geometry has a simple property: every month without it is a month whose geometry cannot be digitally reconstructed later. Most mines have not started.

Scope

Fifteen sensors on five conveyor drives. Edge FFT. Automatic work orders into the CMMS when fault thresholds get crossed. One analyst. Quarterly handheld calibration checks.

Deployment Warning

Most programs that collapse after year three started with hundreds of sensors across multiple equipment types before the organization could handle the data from a few dozen. The vendor benefits from a large initial deployment. The mine benefits from a small one that works. The vendor's incentive usually wins the procurement conversation.

Start with what breaks most expensively. Put sensors on those machines. Connect to the work order system. Get an analyst. Expand after the first scope earns enough trust that maintenance planners schedule work based on its output.
