Agent AI

From firefighters to doctors and even call centre workers, it takes a lot of people to keep a city safe. Now, one more agent has joined the watch: artificial intelligence. 

Although the push for new technologies in the public sector has always been present, the COVID-19 pandemic has accelerated it. Globally, law enforcement agencies are expected to spend $18.1 billion on software tools and systems in the next two years. Among these tools, a recent Microsoft study found that two thirds of public sector organisations saw AI as a digital priority that would help them solve complex problems and create organisational change. 

But change is a collaborative effort, and that’s where companies like Hexagon and Microsoft come in. Hexagon is one of the leading developers of sensor and software solutions, as well as a top computer-aided dispatch provider for cities and metro areas all over the world. Microsoft needs no introduction. Both companies have developed a multitude of solutions aimed at helping public agencies keep citizens – and emergency workers – safe and healthy.  

As a former firefighter and current Director of Public Safety and Justice Solutions at Microsoft, Richard Zak is passionate about the good AI can do in the sector. He and Jack Williams, Director of Portfolio Marketing at Hexagon, want to shatter the many myths that come to people’s minds when they hear the words “artificial intelligence”. In a conversation with Tech For Good, they discuss the potential of assistive AI to improve the operations of emergency communications services. 

“I look at AI in the same way that, in the fire service, you tell a firefighter that they should learn something from every single call that they respond to,” Zak says. “If they don’t, it’s almost like every call is their first. With assistive AI, what we’re asking is that those operational systems that support them learn from every call as well. There’s a really strong connection between the way that we train first responders and the way that we’re enhancing technology to support them.” 

Zak draws on his experience in the fire service, and on the boards of the International Association of Chiefs of Police and the Industry Council for Emergency Response Technologies, to develop new technologies that truly meet first responders’ needs. Williams’ area of expertise, by contrast, is business intelligence and data analytics. He is also the Product Manager of Hexagon’s Smart Advisor, an assistive AI capability that supports emergency call centre staff, helping them make better and more informed decisions. 

“When you think of the public safety ecosystem and the players involved, we all picture the big strong firemen like Richard was, or the big strong cop running in to save a person from jumping from a building,” Williams says. “But very rarely do people think about the call takers and dispatchers, who are also a vital part of the public safety ecosystem.” 

Every year, an estimated 240 million calls are made to 911 in the US. That’s millions of instances when communications centre personnel make high-stakes split-second decisions with very little information. Because of the siloed data in most legacy computer-aided dispatch systems and the lack of interconnectivity between them, the staff are often required to record information manually or memorise it, inevitably creating information gaps. But AI can help close them.   

“Think of car blind spots, where the rear-view mirror is your blind-spot detector,” Williams says. “By providing a second set of eyes, the assistive AI capability can help solve operational blind spots and improve call takers’ overall wellbeing.” 

Assistive AI tools developed by companies like Hexagon and Microsoft can gather structured and unstructured data from each call, compare it with performance data from previous calls, and predict the resources most likely to be needed in each situation. Contrary to popular belief, the AI does not replace call centre workers, but helps them prioritise information and make more informed decisions. Ultimately, staff are still in charge of making the final call.  
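As a rough illustration of that matching step – and only an illustration, not how Hexagon’s Smart Advisor or Microsoft’s tools are actually built – a minimal Python sketch might score past incidents by keyword overlap with a live call and tally the resources dispatched on the closest matches. The function name and toy incident data below are invented for the example.

from collections import Counter

def suggest_resources(call_notes, past_incidents):
    # Score each past incident by keyword overlap with the live call notes,
    # then weight the resources it used by that overlap.
    words = set(call_notes.lower().split())
    tally = Counter()
    for incident in past_incidents:
        overlap = len(words & set(incident["notes"].lower().split()))
        for resource in incident["dispatched"]:
            tally[resource] += overlap
    return [resource for resource, score in tally.most_common() if score > 0]

past_incidents = [
    {"notes": "kitchen fire smoke visible from street", "dispatched": ["engine", "ambulance"]},
    {"notes": "two vehicle collision with injuries", "dispatched": ["ambulance", "police unit"]},
]
print(suggest_resources("caller reports smoke and a possible fire", past_incidents))
# -> ['engine', 'ambulance']

A production system would of course draw on far richer signals than keywords – location, available units, incident history – but the principle of learning from every previous call is the same.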

“There's a really strong connection between the way that we train first responders and the way that we're enhancing technology to support them”

“It’s not scary science fiction,” Zak says. “It’s about having systems learn the same way people do to drive better outcomes for the people that they serve. The AI in this situation amplifies people’s drive, it extends their reach, it lets them do more, do better; and it actually accelerates the impact they’re making.” 

AI tools can not only lighten emergency workers’ workload, but also improve their mental health. Call centre jobs commonly come with high levels of employee burnout and emotional exhaustion, a situation exacerbated by the pressure of working in emergencies. In turn, this causes high turnover and forces managers to continuously recruit and train new workers. 

Because of the pandemic, first responders and those working at dispatch centres have been suffering from alert fatigue over the last year: a condition in which people exposed to continuous alerts become desensitised to them, leading to missed alerts or delayed responses. Zak and Williams think AI can provide the opportunity to support call takers’ wellbeing and mental health. 

“Wellbeing is something that stirs up a passion in the kind of work we do,” Williams says. “Let’s be honest; public safety has been a whirlwind over the last year and a half, so we looked at assistive AI as a way to help reduce alert fatigue. Because the AI doesn’t get tired, it gets better the more data you give it; the more incidents that it sees. And it also helps to capture and store the institutional knowledge of the organisation. So, if you have a system that has learned over time, it can support new dispatchers by bringing the institutional knowledge along with it.” 

One of the projects Microsoft has worked on in relation to this is a collaboration with law enforcement agencies, such as the Chicago Police Department in the US, to launch an AI system that looks for indications of fatigue and burnout in their staff. The system will track officer performance and measure it against past records to detect when a staff member might need support, which can be provided in the form of mental health services, additional training or reassignment to a different role. The goal is to address the situation before it spirals out of control.  
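In spirit, detecting that kind of dip comes down to comparing a current measure against a person’s own historical baseline. The sketch below is a hypothetical, simplified illustration of that idea rather than the system described above; the scores and threshold are invented for the example.

from statistics import mean, stdev

def flag_for_support(past_scores, current_score, threshold=2.0):
    # Flag someone for outreach when the current score falls well below
    # their own historical baseline (a simple z-score check).
    if len(past_scores) < 2:
        return False  # not enough history to form a baseline
    baseline = mean(past_scores)
    spread = stdev(past_scores)
    if spread == 0:
        return current_score < baseline
    return (current_score - baseline) / spread < -threshold

history = [82, 85, 79, 88, 84, 81]    # invented past performance scores
print(flag_for_support(history, 62))  # True -> offer support before burnout sets in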

“Rather than being a punitive corrective system, the AI is meant to support the officer,” Zak says. “Instead of leading to punishment, it actually leads to resources. It’s about treating the officer as a whole person, and not just based on the actions that they take.” 

Another group Microsoft has been helping is war veterans. The company is providing AI tools to iRel8, an organisation partnered with the US Veterans Administration, to obtain insights about veterans’ mental health and reduce their high rates of suicide and addiction. According to the 2019 National Veteran Suicide Prevention Report, veterans were 1.5 times more likely to commit suicide than non-veterans, with over 6,000 veteran suicides a year between 2008 and 2016. Vets are suffering, and AI can help them. 

“What iRel8 does is apply AI capabilities as the Veterans Administration interacts with veterans, to identify and interrupt this cycle that could potentially lead to veteran suicide,” Zak says. “This is a very large issue in the United States. It’s something that the Veterans Administration takes really seriously and they found that by applying AI in new ways they could break this cycle, they could engage early on in their work with a veteran, provide those resources, get that counselling and avoid that terrible outcome. 

“Sometimes we fear things that AI can do, but when you think about AI being part of helping veterans and breaking that cycle that can lead to suicide, that is absolutely tech for good.” 

“Think of car blind spots, where the rear-view mirror is your blind spot detector. By providing a second set of eyes in the assistive AI capability, we can help solve operational blind spots and improve call takers’ overall wellbeing”

When combined with the cloud, artificial intelligence tools can also help public agencies share information securely with one another and bridge jurisdictional barriers. Each agency has its own operational traffic management system, public hazard dispatch system, record system and mobile workforce. That is a lot of information to sync in the case of a natural disaster or a pandemic. Hexagon’s goal is to close these information gaps through its platform Hexagon Connect: a single integrated site that “does not belong to a single public agency”, but to all of them.   

“As human beings we think out every possible scenario that can go wrong,” Williams says. “Well, in this world, there’s going to be some stuff that surprises you. And when that happens, you’re going to need to be able to quickly communicate and coordinate action with someone, whether that’s regarding traffic, public safety, or mutual aid. That’s why Microsoft and Hexagon make a pretty good partnership when it comes to addressing, not only public safety, but also broader city government collaboration and coordination efforts.” 

Public agencies have traditionally been resistant to change. However, as public services have had to move online because of the pandemic, the use of these technologies has become unavoidable. A clear example of this has been online trials. Before COVID, stakeholders in the justice system were very resistant to using remote working tools. But when justice had no option but to move online, people embraced technology in a way they might not have otherwise. 

“Over the last three years, that viewpoint from public safety leaders has really changed,” Zak says. “And this is because it has become so common across the private sector. When a senior leader will talk to me about being concerned about AI, I generally ask them: ‘Did you talk to your phone today? Did you talk to your car? Well, that’s AI’. Decision makers now have a more open approach to using new capabilities like AI, because they are using it every day in their private life.” 

The biggest barrier that public agencies face when implementing new technologies such as artificial intelligence is fear. As the use of AI becomes more and more commonplace, human rights organisations are raising the alarm about the harm its abuse could cause if left unregulated. 

Companies and regulators need to be aware of the dangers of artificial intelligence and put measures in place to ensure its responsible use. In this journey, Microsoft is leading by example. The company has set six principles around the responsible use of AI: reliability, fairness, transparency, privacy, inclusiveness, and accountability. The goal is to always keep these in mind while developing AI, despite the lack of regulations in the area. 

“Companies should still be building and deploying artificial intelligence in a way that respects people long before the government’s boundaries around it come into play,” Zak says. “No matter how sophisticated the AI system is, a person is always responsible for its operations. There are cars that will drive you home if you set your address, but you’re still responsible for them. That is what we mean about AI accountability. You can’t blame the car for driving off the road; you’re ultimately the driver of the car. 

“You need to put AI principles together with policies. AI can help save lives, but it starts by getting the right principles in place and making sure you have the right policies to support its responsible use. You can’t catch up later. You’ve got to build them both at the same time.” 

AI is not going anywhere. From augmented reality to 3D surveillance capabilities, it is set to become the foundation for many future technological tools. Eventually, AI will cease to be a separate system and become woven into all public sector operations. But machines are made by people, and only people can ensure they are used for good. For this reason, AI education is more important now than ever. 

“People need to become more AI-literate because that’s how you can avoid misuse and hold your technology providers, your vendors, and your governments accountable for its responsible use,” Williams says. “AI has the potential to do a lot of good for the world. Once people become more AI-literate, they can have a conversation about it, instead of just conjuring up an image of the Terminator movies.” 
