Currently, AI is being put to work in OT networks in the energy, water treatment, healthcare, and manufacturing sectors for the same reason it is being used elsewhere: to optimize and automate processes, thereby improving efficiency and uptime.
The worry is that organizations are jumping into a new, far-from-battle-hardened technology without assessing its limitations, echoing what has been happening in IT. Measuring risk against the industrial control systems (ICS) Purdue Model hierarchy, the guidelines enumerate concerns such as adversarial prompt injection and data poisoning, data collection that leads to reduced safety, and “AI drift,” in which models become less accurate as new data diverges from the data they were trained on.
Also mentioned: AI can lack the explainability needed to diagnose errors, compliance requirements are difficult to meet as AI evolves rapidly, and creeping over-dependence on AI risks de-skilling human operators. AI alerts, meanwhile, might lead to distraction and cognitive overload among employees.
