Kathmandu. Questions have been raised about the reliability of security surveillance technology and the need for human supervision after an AI system at Kenwood High School in Baltimore County, Maryland, mistakenly flagged student Taki Allen as carrying a weapon.
Allen, a 16-year-old student at Kenwood High School, was detained and handcuffed over a false alarm. The school’s AI-powered surveillance system mistakenly identified the crumpled, empty bag of Doritos chips in Allen’s hands as a weapon while he waited outside the school at around 7 p.m. on October 20 after football practice.
The incident highlights the serious challenges surrounding AI-based threat-detection technology in schools, and shows how quickly a false alert can create a frightening and traumatic situation.
Technology and human error
The AI system, which triggers a police response, was developed by a company called Omnilert. It monitors live video feeds to search for weapons. Because of the size of the chip bag and the way Allen was holding it, the computer-vision algorithm mistakenly flagged it as a gun.
School officials said the AI alert had been reviewed and cancelled, but police had already responded quickly and took Allen into custody. A subsequent search confirmed that he was carrying no weapon.
Impact on Students
Taki Allen said he was “deeply saddened” when he was confronted by armed officers, and recalled wondering, “Am I going to die?” The psychological trauma students experience from such incidents is a growing concern for educators and parents.
Expert opinions and ethical questions
AI ethicists and surveillance experts stress that no AI system is flawless. They argue that human review of false alerts should be made mandatory before a flagged threat escalates, and they call for transparency about AI’s capabilities and limitations, disclosure of error rates, and a policy framework that safeguards students’ rights.
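The human-review safeguard the experts describe can be illustrated with a minimal sketch. This is a hypothetical example, not Omnilert’s actual pipeline: the detection schema, confidence threshold, and function names are all assumptions made for illustration.

```python
from dataclasses import dataclass

# Assumed noise floor: detections below this confidence are discarded silently.
CONFIDENCE_AUTO_DISMISS = 0.5

@dataclass
class Detection:
    """A single weapon-detection event from the vision model (hypothetical schema)."""
    camera_id: str
    label: str
    confidence: float

def handle_detection(det: Detection, reviewer_confirms) -> str:
    """Route a detection through a human-in-the-loop gate.

    Dispatch to police is blocked until a human reviewer confirms the
    alert; a reviewer who recognizes a false positive cancels it.
    """
    if det.confidence < CONFIDENCE_AUTO_DISMISS:
        return "dismissed"
    if reviewer_confirms(det):
        return "dispatch_police"
    return "cancelled"

# Usage: a reviewer who sees a chip bag, not a gun, cancels before dispatch.
alert = Detection(camera_id="cam-14", label="handgun", confidence=0.87)
print(handle_detection(alert, reviewer_confirms=lambda d: False))  # cancelled
```

The key design point matching the experts’ recommendation is that the escalation step sits strictly after the human check, so a cancelled alert can never reach police.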
Community and Government Response
Local council members and school boards are now calling for a comprehensive review of AI surveillance tools, monitoring procedures and staff training so that unnecessary harm can be minimized. The incident has sparked a nationwide conversation about surveillance technology in public spaces.
What happens now?
- Omnilert is reviewing improvements to its detection algorithm to reduce false positives.
- Legislative proposals to regulate AI in education and public-safety settings are underway.
- Kenwood High and Baltimore County Public Schools are conducting an independent audit of the AI system’s performance.