Anthropic had reportedly declined to fix the prompt injection vector, saying, “After reviewing your report, we were unable to identify any security impact. As such, this has been marked as Not Applicable.” Anthropic did not immediately respond to CSO’s request for comments.
The author, using the alias “WunderWuzzi” for the blog, noted that developers building atop Claude, Amazon Q included, must block these attacks on their own. Most models still parse invisible prompt injection, except OpenAI, which has tackled the issue directly at the model/API layer.
By August 8, 2025, AWS reported the vulnerability resolved, the author said in the blog. However, “no public advisory or CVE will be issued,” so users should ensure they’re running the latest version of Amazon Q Developer for safety.
