“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new,” the Cybernews Research team said in a report. “What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs.”
How the attack worked
The vulnerability demonstrated the cascade of security failures that can occur when AI systems lack proper input and output sanitization. The researchers’ attack involved tricking the chatbot into generating malicious HTML code through a prompt that began with a legitimate product inquiry, included instructions to convert responses into HTML format, and embedded code designed to steal session cookies when images failed to load.
When Lenovo’s Lena received the malicious prompt, the researchers noted that “people-pleasing is still the issue that haunts large language models, to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies.”
