If you are not in industry, manufacturing AI is the easiest deployment to miss. There is no chatbot involved. Nothing is generative. The models do not write poetry or paint pictures. They sit on production lines, in warehouses, on mine sites and in shipping containers, doing classification, prediction and control, mostly on streams of sensor data. They have been doing it for over a decade, and the cumulative dollar value of what they have changed is plausibly larger than every consumer AI deployment combined.

Predictive maintenance — the original industrial AI use case

The simplest version of the problem: every piece of industrial equipment fails eventually. Failure modes are sometimes preceded by detectable signals — vibration patterns, temperature anomalies, electrical noise, lubricant chemistry shifts. Fail to detect those signals and the equipment breaks unexpectedly, often expensively. Detect them and you replace the bearing, recalibrate the motor, change the oil before the failure happens.

This was a classical engineering problem long before "AI" was the right word. What machine learning has changed is the scale and richness of the detection. Modern predictive maintenance systems combine vibration sensors, current sensors, infrared thermal imaging, acoustic emission, oil chemistry analysis, and increasingly computer vision into a single model that flags equipment likely to fail in the next N days. GE, Siemens, Honeywell, Schneider Electric and ABB all sell variations on this product. It is one of the few areas where industrial AI has unambiguously paid for itself.
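The detection step can be reduced to a toy: flag a machine when its recent sensor readings drift beyond a statistical baseline. Real systems fuse many channels through learned models, but the shape of the decision is the same. Everything below (the function, the thresholds, the readings) is illustrative, not any vendor's method.

```python
# Toy predictive-maintenance signal: flag a machine when recent vibration
# readings sit well outside the healthy baseline. The 3-sigma threshold
# and single channel are simplifications for illustration.
from statistics import mean, stdev

def flag_anomaly(baseline, recent, z_threshold=3.0):
    """Return True if the mean of `recent` readings is more than
    `z_threshold` standard deviations above the healthy `baseline`."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return (mean(recent) - mu) / sigma > z_threshold

# Healthy vibration RMS history vs. a bearing starting to degrade.
healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]
degrading = [1.6, 1.7, 1.8]
print(flag_anomaly(healthy, degrading))  # → True
```

A production system replaces the z-score with a model trained on labelled failure histories, but the output is the same kind of object: a flag, with enough lead time to schedule the repair.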

The Australian story here is mining. Rio Tinto's Mine of the Future program at the Pilbara iron-ore operations integrates predictive maintenance across an enormous fleet of trucks, drills, processing equipment and rail. BHP and Fortescue have similar systems. The economics work because mining equipment failures are catastrophically expensive (a haul truck out for a day can cost six figures) and the equipment is heavily instrumented anyway.

Computer vision on production lines

If you make something at scale, quality inspection used to mean a person looking at every unit, or a sample of every batch. Now it is more often a camera and a model. The model has been trained on labelled images — good unit, bad unit, defect type A, defect type B — and runs on a GPU at line speed.
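The control flow around the model is simple enough to sketch. The classifier below is a stand-in (it reads pre-measured features rather than pixels, and the defect types and thresholds are invented for illustration), but the classify-then-route loop is the structure every inline inspection system shares.

```python
# Sketch of an inline inspection loop: each unit is classified and routed.
# classify() is a stand-in for a trained vision model running at line speed;
# the feature names and thresholds here are hypothetical.
from enum import Enum

class Verdict(Enum):
    GOOD = "good"
    DEFECT_A = "defect_a"   # e.g. fill-level fault
    DEFECT_B = "defect_b"   # e.g. label misalignment

def classify(unit):
    if unit["fill_level"] < 0.95:
        return Verdict.DEFECT_A
    if abs(unit["label_skew_deg"]) > 2.0:
        return Verdict.DEFECT_B
    return Verdict.GOOD

def route(units):
    """Split a stream of units into accepted and rejected-with-reason."""
    accepted, rejected = [], []
    for unit in units:
        verdict = classify(unit)
        (accepted if verdict is Verdict.GOOD else rejected).append((unit, verdict))
    return accepted, rejected
```

The rejected-with-reason output matters operationally: tallying defect types per shift is how the line finds out that a labeller, not the product, has drifted.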

The applications are everywhere. Beverage bottling lines check fill levels and label alignment. Pharmaceutical lines check tablet integrity. Electronics manufacturing checks solder joints on circuit boards. Food processing checks for foreign objects, ripeness, sizing, and contamination. Automotive checks paint quality, panel gaps, weld integrity. The vendors are mostly specialists most readers have never heard of — Cognex, Keyence, Landing AI, Voxel51 — but their inspection systems are in essentially every modern factory.

The interesting recent development is that the models have become flexible enough that retraining for a new product or defect type is now a matter of days rather than weeks. That changes the economics. Computer vision quality inspection used to be reserved for high-volume products where the engineering effort paid off. It is now economic for far smaller production runs.

Robotics — what is and is not autonomous

The term "robot" is used loosely. A factory robot welding car bodies in 2026 is an industrial arm following a precisely programmed motion path. There is no AI. The arm does the same motion thousands of times a day with high precision, but it does not perceive its environment.

An autonomous mobile robot in an Amazon fulfilment centre is genuinely autonomous in a more meaningful sense. It uses a combination of LiDAR, cameras and SLAM (simultaneous localisation and mapping) to navigate through a dynamic warehouse, route around obstacles and other robots, and bring shelves to human pickers. Amazon currently operates several hundred thousand of these. The successor systems — Sequoia and Sparrow — extend the autonomy to the picking step itself, using computer vision to identify items and articulated grippers to handle them.
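The "route around obstacles" step can be shown in miniature: once SLAM has produced a map, planning a path through it is graph search. The breadth-first search below on a toy occupancy grid is a deliberately simplified sketch; real planners run costed search on live, continuously-updated maps.

```python
# Path planning in miniature: BFS over an occupancy grid ('#' = obstacle),
# the step an AMR repeats continuously as its SLAM map updates.
from collections import deque

def shortest_path(grid, start, goal):
    """Return the length of the shortest free path, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

grid = ["....",
        ".##.",
        "....",
        "...."]
print(shortest_path(grid, (0, 0), (0, 3)))  # → 3 (straight along the top row)
```

The hard part in a warehouse is not the search but the fact that the map changes every second — hundreds of other robots are obstacles that move.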

Pick-and-place robots are the harder problem. A general-purpose robot that can grasp arbitrary objects from a cluttered bin is hard because the perception, planning and contact dynamics all interact. Companies like Covariant (recently acquired by Amazon) and Berkshire Grey have made progress, but reliable general manipulation is still not solved. The current state of the art is good enough for many warehouse pick tasks but nowhere near as flexible as a human.

Autonomous haul trucks at iron-ore and coal mines are the most widely deployed autonomous heavy vehicles in the world. Rio Tinto's Pilbara fleet of around 130 driverless trucks is the largest, with smaller fleets at BHP, Fortescue, and operations in Chile, the US, Canada and South Africa. The technology stack is mature enough that the trucks operate continuously, with human controllers monitoring fleets remotely from operations centres in Perth.

Supply-chain optimisation

The pandemic exposed how brittle global supply chains had become. The response, partly, has been a wave of investment in AI-based supply-chain visibility and optimisation. The basic idea is to use ML on demand forecasts, lead times, inventory levels, supplier reliability, transport availability, weather and geopolitical indicators to make sourcing and stocking decisions that are robust to disruption.
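One of the classical building blocks the ML layer feeds is the reorder point: the forecasting models supply means and variances for demand and supplier lead time, and the standard safety-stock formula turns them into a stocking decision. The numbers and parameter names below are illustrative.

```python
# Reorder point with safety stock covering both demand and lead-time
# variability (the textbook formula; the ML layer's job is supplying
# accurate d_mean/d_std/lt_mean/lt_std per product and supplier).
from math import sqrt

def reorder_point(d_mean, d_std, lt_mean, lt_std, z=1.65):
    """Demand in units per period; lead time in periods.
    z = 1.65 targets roughly a 95% service level."""
    safety = z * sqrt(lt_mean * d_std ** 2 + d_mean ** 2 * lt_std ** 2)
    return d_mean * lt_mean + safety
```

Robustness to disruption enters through the inputs: a model that widens `lt_std` for a supplier showing early signs of trouble automatically raises the buffer before the disruption arrives.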

The leading vendors here are Blue Yonder (formerly JDA), Kinaxis, o9 Solutions, and increasingly the cloud platforms (SAP, Oracle, Microsoft) integrating ML directly into their ERP systems. Maersk and other major shipping companies have built their own ML stacks for vessel routing, container utilisation, and port-call optimisation.

The pattern most large logistics companies are now arriving at is a digital twin — a continuously-updated computational model of their physical operation, against which scenarios can be simulated and decisions tested. The AI is the model. The decisions are still mostly made by humans, but the humans are looking at very different information than they did a decade ago.
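The digital-twin pattern fits in a few lines once the operation is shrunk to a toy. The model below is a one-product inventory flow; the decision being tested is a daily replenishment quantity, run against both a normal week and a hypothetical disruption scenario. All the numbers are invented for illustration.

```python
# A digital twin in miniature: a model of the operation that candidate
# decisions are tested against under simulated scenarios before being
# applied for real.
def simulate(order_qty, demand_scenario, start_stock=50):
    """Run one scenario; return (stockout_days, ending_stock)."""
    stock, stockouts = start_stock, 0
    for demand in demand_scenario:
        stock += order_qty              # the daily replenishment decision
        if demand > stock:
            stockouts += 1
            stock = 0
        else:
            stock -= demand
    return stockouts, stock

normal = [20, 22, 19, 21, 20]
port_strike = [20, 22, 60, 55, 40]      # hypothetical disruption scenario
print(simulate(25, normal))             # → (0, 73)
print(simulate(25, port_strike))        # → (2, 0)
```

A real twin models thousands of SKUs, sites and transport legs, but the workflow is the same: the humans choose which scenarios to fear, the model tells them which decisions survive those scenarios.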

Autonomous shipping — the slow story

Maritime autonomy has been more talked-about than deployed, but progress is real. The Mayflower Autonomous Ship completed its transatlantic crossing in 2022 (with several false starts). Several large container shipping companies operate vessels with reduced bridge crews and substantial autonomous-navigation assistance. Norway has been the most ambitious — the Yara Birkeland operates as a fully autonomous container ship on a short Norwegian coastal route. Scaling that to global ocean routes is much harder, and the regulatory framework (IMO conventions, port state control) is still oriented around crewed vessels.

The same general pattern — autonomy works in well-understood, geographically-constrained environments and is far harder in the open world — applies in trucking, where pilots in the US and Australia have demonstrated long-haul highway autonomy but have not yet displaced human drivers at scale. The economics may close in the next few years; the regulation and the public-acceptance question will probably take longer.

Energy and process optimisation

Process industries — steel, aluminium, cement, chemicals, oil refining — run their plants under control systems that have included optimisation algorithms for decades. The recent ML overlay improves on classical control in two ways: it can model non-linear dynamics that classical models simplify, and it can incorporate far more variables (raw material quality, weather, energy prices, demand) into a single decision.
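The shape of that ML overlay can be sketched: a learned model predicts cost as a function of setpoint and current conditions, and the controller picks the setpoint with the lowest predicted cost. The quadratic "model" below is a stand-in for a trained non-linear regressor, and every coefficient and variable name is invented for illustration.

```python
# ML setpoint optimisation, schematically: search candidate setpoints
# against a learned cost model that folds in energy price and conditions.
def predicted_cost(setpoint, energy_price, ambient_temp):
    # Stand-in for a trained model: energy use rises with setpoint and
    # ambient temperature; an off-spec penalty grows as the setpoint
    # falls below a nominal 60.
    energy = energy_price * (0.5 * setpoint + 0.1 * ambient_temp)
    off_spec = max(0.0, 60.0 - setpoint) ** 2
    return energy + off_spec

def best_setpoint(energy_price, ambient_temp, candidates=range(40, 81)):
    return min(candidates,
               key=lambda s: predicted_cost(s, energy_price, ambient_temp))
```

Even this toy shows the behaviour operators see in practice: when energy is cheap the controller holds the nominal setpoint, and when the price spikes it trades a little product quality for a large energy saving.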

Google's DeepMind has published case studies showing 40% reductions in cooling energy at Google data centres using ML-based control. Similar approaches are now standard in modern data-centre operations, where energy is the dominant cost. BHP's Olympic Dam operation has applied ML control to its uranium-extraction circuits with reported double-digit efficiency improvements. The pattern is similar across heavy industry — not headline-grabbing, mostly invisible to outsiders, but quietly reshaping the unit economics of producing physical things.

Where it has gone wrong

Industrial AI has had fewer dramatic public failures than the consumer-facing kind, partly because the deployments are more constrained and partly because the operators have been careful. The failures that have happened tend to be:

Brittleness under unusual conditions. A predictive maintenance model trained on a fleet of equipment in temperate Australia performs worse on equipment in the tropics. A computer vision quality system trained on one batch of components fails on a new batch with slightly different surface finish. The fixes are well understood; the failures still recur.

Workforce displacement. The autonomous mining truck programs have eliminated thousands of driver jobs. The companies have generally retrained rather than retrenched, but the pattern is real and is the most visible labour-market consequence of industrial automation. Manufacturing employment in advanced economies has been on a long-term decline that automation accelerates.

Cyber and physical-safety incidents. The number of industrial control systems with internet connections has grown faster than the security practices around them. Incidents like the 2017 Triton attack on a Saudi petrochemical plant (the malware was specifically engineered to target safety systems) are reminders that adding AI to industrial control without rigorous security work creates new failure modes.

The honest summary

Manufacturing and logistics are the most mature deployment of AI in any sector. The successes are unflashy and cumulative. A decade of small efficiencies in supply chain, energy use, quality control and predictive maintenance compounds into a substantial fraction of the productivity growth advanced economies have experienced — although attribution is difficult because the gains are diffuse. The political and labour-market consequences will probably continue to outrun the public conversation about them.