It's a fact that AI is a tool and whatever it may do, it's under human guidance at the top level of control.
It's a common strong matter of opinion that were it possible to create sentient devices, it would be a crime to do so.
Anthropomorphisation is cartoonish, commonly borne of childish naiveté.
Such affectation is not appropriate in domains where life and livelihoods are at stake.
Computer security has been a nightmare since the advent of the Harvard architecture, due to stipulated indistinguishability of data and code.
AI architecture is extending this epic hazard towards infinity and beyond, where the construct of the prompt is hopelessly open-ended metadata over which security is meaningless, and agents, built on rusting perforated stacks, are prompting each other to do good knows what. No good is going to come of this.
So saying AI is "discovering" zero-days is cartoonishly tragic: AI is the right tool for the job; this work will never be done due to poor foundational designs.
One path back from the edge abyss is to drop the casual anthropomorphisms and at show a little more seriousness about chiding the path forward.