The messages caused concern, but OpenAI chose not to inform law enforcement.


Jesse Van Rootselaar, the individual behind the mass shooting in Tumbler Ridge, British Columbia, had raised concerns among OpenAI staff months before the attack. In June of this year, Van Rootselaar exchanged messages with ChatGPT that mentioned gun violence, triggering the chatbot's automated review systems. Several employees worried that her messages could signal real-world violence and urged management to notify the authorities, but no report was made.
Kayla Wood, a spokesperson for OpenAI, told The Verge that the company discussed notifying law enforcement but ultimately concluded the messages did not represent an "imminent and credible danger" to others. Wood said a review of the logs showed no evidence of active or imminent planning of violence. The company suspended Van Rootselaar's account, but it appears no further precautionary steps were taken.
"Our thoughts are with all those affected by the Tumbler Ridge tragedy," Wood said. "We proactively reached out to the Royal Canadian Mounted Police with information regarding the individual and their use of ChatGPT, and we'll continue to support their investigation."
On February 10th, nine people, including Van Rootselaar, were killed and 27 were wounded in the deadliest mass shooting in Canada since 2020. Van Rootselaar was found dead of an apparent self-inflicted gunshot wound at Tumbler Ridge Secondary School, where most of the fatalities occurred.
The decision not to alert law enforcement may look questionable in hindsight, but Wood said OpenAI tries to balance user privacy against safety and to avoid the unintended consequences of overly broad referrals to law enforcement.
Updated February 21st: Added statement from OpenAI.