Can artificial intelligence help stop mass shootings?

Thirty people were killed and 19 injured in six mass shootings in California in less than two weeks, reigniting calls for the US to tackle gun violence.
Earlier this month, President Joe Biden pushed for a nationwide ban on assault rifles; Republicans who oppose such measures have largely remained silent in the aftermath of the attacks. In response to other mass shootings, Republicans have called for improved mental health services.
Gridlock in Congress and the apparent ineffectiveness of California’s strong state gun laws have left people looking for alternatives. A relatively new potential solution, using artificial intelligence to enhance security, has attracted interest for its promise to catch shooters before they shoot.
The AI security industry touts cameras that can identify armed suspects loitering outside schools, high-tech metal detectors that can spot concealed guns, and predictive algorithms that can analyze information to flag potential mass shooters.
Officials at companies developing AI-powered security cameras say the technology compensates for fallible security personnel, who they say often struggle to monitor multiple video feeds and spot emerging threats. AI, company officials say, can reliably identify attackers as they prepare to strike, saving security officials precious minutes or seconds and potentially saving lives.
“It’s the best kept secret,” Sam Alaimo, co-founder of AI security firm ZeroEyes, told ABC News. “If there’s an assault rifle outside a school, people want to know more. If one life is saved, that’s a victory.”
However, critics have questioned the effectiveness of these products, saying the companies have failed to provide independently verified data on accuracy. Even if AI works effectively, the technology raises significant concerns about privacy violations and potential discrimination, they added.
“If you’re going to trade your privacy and your liberty for security, the first question you need to ask is: Is the trade-off worth it?” Jay Stanley, senior policy analyst at the American Civil Liberties Union’s Speech, Privacy and Technology Project, told ABC News.
AI Security Market
The industry is poised for growth as schools, retailers, and offices consider AI security. The market for products that detect concealed weapons is expected to nearly double to $1.2 billion by 2031 from $630 million in 2022, according to research firm Future Market Insights.
The optimism is due in part to the ubiquity of security cameras, which has allowed artificial intelligence companies to sell software to enhance systems already in use in many buildings.
The National Center for Education Statistics found that as of the 2017-18 school year, 83 percent of public schools said they used surveillance cameras. The group’s survey shows a significant increase from the 1999-2000 school year, when only 19 percent of schools were equipped with security cameras.
“We use existing surveillance systems,” Kris Greiner, vice president of sales at AI-based security company Scylla, told ABC News. “We’re just giving it a brain.”
Companies Working on AI Security to Prevent Shootings
Scylla, an Austin, Texas-based company founded in 2017, provides artificial intelligence that helps security cameras detect not only hidden weapons but also suspicious activity, such as evading security or starting a fight, Greiner said.
When the fully automated system identifies a weapon or a suspicious actor, it notifies officials at the school or business, he said, noting that mass shooters often draw their guns before entering the facility. The system can also be set to immediately deny access and lock the door, he said.
“When every second counts, it’s likely to have a big impact,” Greiner said.
He added that the company has performed about 300 installations in 33 countries, helping clients overcome common shortcomings of human security monitoring.
“Imagine a guy sitting in a command center looking at a video wall, and he can only watch four or five cameras for four to five minutes before he starts missing things,” Greiner said. “There is no limit to what an AI can watch.”
Another AI security company, ZeroEyes, offers similar AI-enhanced video surveillance, but with a narrower purpose: gun detection.
ZeroEyes, founded by former Navy SEALs in 2018, entered the industry after one of its founders realized that security cameras provided incriminating evidence after mass shootings but did little to prevent the violence in the first place, Alaimo said.
“In most cases, the shooter exposes the gun before pulling the trigger,” Alaimo said. “We wanted to get an image of that gun and use that to alert first responders.”
Like Scylla’s product, ZeroEyes AI tracks live video and sounds an alert if a firearm is detected. Alerts from ZeroEyes, however, are sent to an internal control room, where company employees determine whether the situation poses a real threat.
“We have a human in the loop to make sure customers never get a false positive,” Alaimo said, adding that the entire process from alert to verification to communication with the customer takes just three seconds.
AI Security Accuracy
Stanley of the ACLU said that while AI-enhanced security sounds like a potentially life-saving breakthrough in theory, the accuracy of the products remains uncertain. “If it’s not working, there’s no need to talk about privacy and security,” he said. “The conversation should be over.”
Scylla’s Greiner said the company’s AI is 99.9 percent accurate at identifying weapons such as guns, “a lot of nines.” But he did not say how accurate the system is at identifying suspicious activity, and he acknowledged the company has not had the system’s accuracy independently verified.
“Let’s find a third party to do this — we haven’t done that yet,” Greiner said, adding that the company allows customers to test products before buying them.
ZeroEyes’ Alaimo said the company eliminates false positives by including employee verification as part of its alerting process. But he declined to say how often the AI system showed false positives to employees, or whether employees made mistakes in evaluating the alerts.
“Transparency is key because if communities are going to make hopefully democratic decisions about whether they want to use these technologies in these public spaces, they need to know whether it’s worth it,” Stanley said.
Other Concerns About AI
Efficacy of these systems aside, critics have raised concerns about privacy violations and possible discrimination from AI.
For one, more than 30 states allow people to openly carry handguns, making legally armed individuals potential targets for AI-augmented security.
“It’s now legal to carry a gun in most of the country,” Barry Friedman, a law professor at New York University who studies the ethics of artificial intelligence, told ABC News. “It’s hard to know what you’re going to search for in a way that doesn’t violate people’s rights.”
At ZeroEyes, Alaimo said, the AI issues a “non-lethal alert” when an individual is legally carrying a gun, making the customer aware of the weapon’s presence without triggering an emergency response.
Raising additional privacy concerns, Stanley said that security officials never view the vast majority of surveillance footage recorded today, except when a crime may have occurred. With AI, however, algorithms scan every minute of available footage and, in some cases, watch for activity deemed suspicious or unusual.
“That’s horrifying,” Stanley said.
Given the racial bias found in evaluations of facial recognition systems, Stanley warned that AI security systems could suffer from the same problem. Friedman added that the issue has the potential to replicate racial inequities in the broader criminal justice system.
“The cost of using these tools when we’re not ready to use them is that people’s lives are going to be ripped apart,” Friedman said. “People are going to be targeted by law enforcement when they shouldn’t be.”
Greiner and Alaimo said their AI systems do not assess the race of individuals flagged in security alerts. “We don’t identify individuals based on race, gender, ethnicity,” Greiner said. “We literally identify people as people with guns.”
Forgoing AI solutions could mean needless tragedy for the U.S., especially since other remedies have remained out of reach for so long, Alaimo said.
“We can and should keep talking about mental health. We can and should keep debating gun laws,” Alaimo said. “My concern is with today, not a year from now or 10 years from now, when we might have answers to those tougher questions.”
“What we’re doing now is the solution,” he said.