Automation and the problem of AI bias

Over recent years, numerous reports of AI bias have hit the headlines as machine learning and automation have become increasingly commonplace. One type of automation – AI-enabled, camera-based surveillance – is particularly prone to analytical bias, as more and more public and private spaces are monitored using facial recognition technologies. 

The uncomfortable truth is that many automated or AI systems ‘learn’ from human-generated data, which by its very nature contains elements of prejudice, unconscious bias and stigma. Ultimately, the algorithms that generate outputs are only as ‘neutral’ as the data sources that underpin them, raising the question: can autonomous solutions ever be capable of generating truly unbiased decisions and outputs?  

When it comes to building automation into environments to improve safety, security and efficiency, I believe the answer is yes. However, it depends fundamentally on what data is captured and how it is captured and processed. A world inundated with cameras has implications beyond privacy: potential AI biases are heavily accentuated. But, with the advent of lidar technology, a new tool is now available to overcome the limitations of camera-only approaches. 

Choosing the right technology for the job

Perception devices are usually the ‘eyes’ of automation. They feed data into solutions that then process it to deliver an output, such as alerting a staff member, signalling an alarm or scheduling an activity. To ensure that these solutions deliver safe and effective outputs based purely on an object or individual’s interaction with its environment, the data captured must be anonymised and highly accurate. Achieving this balance, however, is not as easy as it sounds.  

Cameras and radars have been used for decades as detection and monitoring devices, but they each have significant limitations when it comes to accurate, non-stop, three-dimensional perception of spaces, as well as activity within those spaces.  

Optical cameras generally produce 2D images, which means that a person’s perceived size, location and speed of movement can deviate greatly from reality. Cameras also need sufficient light to be effective (making them problematic in the evenings and at night), are easily fooled by shadows, are occluded by even moderate rain or fog, and can be blinded by flashlights. They are also very data-intensive, with cost implications for data storage and transmission.  

Furthermore, with features like facial recognition now widespread, camera use is increasingly viewed as problematic. For instance, cameras can collect unnecessary biometric data, such as a person’s skin colour and facial features. With rare exceptions (e.g. identifying potential criminal suspects), this information is simply not necessary for applications that analyse behavioural patterns. Worse, analytical bias introduced by facial recognition algorithms might further interfere with systems’ inferences, affecting their accuracy and efficiency. 

Anonymised crowd perception

Radar data, by contrast, is inherently anonymous, and radars can operate in most lighting and environmental conditions. However, due to their low angular resolution, they suffer from poor location and spatial accuracy – both of which are critical for security, safety, crowd analytics and environment management. 

This is where lidar comes in as an effective additional layer in the systems used for monitoring spaces. Lidar sensors build a picture of an object and its surroundings by sending out an invisible, infrared light signal and measuring how long it takes to bounce back. Every fraction of a second, the returning light pulses build a real-time 3D image. Lidar serves as its own source of illumination and thus performs well regardless of lighting conditions. More importantly, due to the wavelength of light used by lidars, the images they generate have high enough resolution for most applications. 
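The time-of-flight principle described above can be sketched in a few lines of Python. This is an illustrative calculation only – the function name and the example timing are assumptions for the sake of the sketch, not any vendor's API:

```python
# Illustrative sketch: converting a lidar round-trip time into a distance.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def time_of_flight_to_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface, in metres.

    The infrared pulse travels to the object and back, so the
    one-way distance is half the round trip.
    """
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2

# A pulse returning after roughly 66.7 nanoseconds corresponds
# to a surface about 10 metres away.
print(round(time_of_flight_to_distance(66.7e-9), 2))
```

Repeating this measurement millions of times per second across many angles is what produces the real-time 3D point cloud discussed below.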

The resulting high-resolution, highly accurate 3D data “point cloud” is the reason why lidar has become an integral part of many self-driving or semi-autonomous vehicles and automated systems. For the same reason, lidar is now more widely known for its intelligent perception capabilities for security, safety and crowd analytics applications too. 

Crucially, all of the data lidar captures and outputs is anonymous – it does not record the biometric data that cameras do. In addition to maximising the protection of everyone’s privacy, this keeps the risk of analytical bias to a minimum – the lidar data fed to the master system is purely behavioural, not based on features or appearance. For crowd analytics, this level of detail is more than sufficient for understanding crowd density, crowd flow and interaction with surrounding spaces.  

For security applications, lidar can also work hand in hand with cameras, providing initial verification of potential threats for the systems to decide whether to switch on the cameras for confirmation. This protects the privacy of those not involved in any suspicious activity or crime and helps minimise bias by implementing initial screening based on behaviours only. 
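A minimal sketch of this behaviour-only screening might look as follows. The zone bounds, track format, frame rate and dwell threshold are all illustrative assumptions – the point is that the trigger depends solely on anonymous position data, never on appearance:

```python
# Hypothetical behaviour-only screening: a lidar track (an anonymous
# position history) triggers the camera only when an object dwells
# inside a restricted zone for longer than a threshold.
from dataclasses import dataclass

@dataclass
class Zone:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def should_activate_camera(track, zone: Zone,
                           frame_dt_s: float, dwell_threshold_s: float) -> bool:
    """track: sequence of anonymous (x, y) positions, one per frame.

    Returns True only when cumulative time inside the zone exceeds the
    dwell threshold -- no biometric or appearance data is involved.
    """
    dwell_s = sum(frame_dt_s for (x, y) in track if zone.contains(x, y))
    return dwell_s > dwell_threshold_s

restricted = Zone(0.0, 5.0, 0.0, 5.0)
loiterer = [(1.0, 1.0)] * 40                      # 4 s inside the zone at 10 Hz
passerby = [(1.0, 1.0)] * 3 + [(9.0, 9.0)] * 37   # only 0.3 s inside
print(should_activate_camera(loiterer, restricted, 0.1, 3.0))  # True
print(should_activate_camera(passerby, restricted, 0.1, 3.0))  # False
```

Only when such a rule fires would the system switch on a camera for visual confirmation, keeping everyone else out of frame entirely.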

The potential for unbiased automation

Many reports indicate that the COVID-19 pandemic has accelerated the use of automation by several years, and it’s easy to see why. Social distancing and enhanced hygiene regimes will be with us for the foreseeable future, sparking demand for new automation solutions that detect unsafe situations and trigger corrective measures on the one hand, and enable proactive actions to minimise transmission risk on the other.  

In these contexts, automated systems must be able to detect, classify and process data relating to movement and behaviour in a physical space. Such systems are used to answer key questions, such as: How crowded is an area? How many people have congregated together, and over what time period? How many individuals has a specific person interacted with? How many people are queueing for services? Has anyone entered a prohibited space? Are people clustering or keeping a safe distance? 
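To make one of these questions concrete – "are people keeping a safe distance?" – a system could check pairwise distances between anonymous 2D positions, such as lidar track centroids. The 2-metre threshold and the sample positions below are illustrative assumptions:

```python
# Sketch: flag pairs of people standing closer than a safe distance,
# using only anonymous 2D positions (no identity or appearance data).
from itertools import combinations
from math import dist

def too_close_pairs(positions, min_distance_m=2.0):
    """Return index pairs of people closer than min_distance_m."""
    return [(i, j)
            for (i, p), (j, q) in combinations(enumerate(positions), 2)
            if dist(p, q) < min_distance_m]

people = [(0.0, 0.0), (1.2, 0.5), (6.0, 6.0)]
print(too_close_pairs(people))  # [(0, 1)] -- the first two are ~1.3 m apart
```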

Each of these is a question that automation can answer, analyse and, potentially, predict. In the process, it could help businesses and organisations manage services, plan ahead and improve environments where people are present. But what if the nature of the data being captured meant that individuals using a space were subject to unwarranted privacy intrusions and, worse, to discrimination based on factors such as ethnicity and gender?  

Innovation that improves safety, creates better environments or makes services better should benefit everyone. This makes anonymised data capture all the more important as we seek to advance automation and shape a future that’s free from bias – either conscious or unconscious. 

Here, lidar is key. It can accurately detect, classify and track a range of objects, including vehicles, people and animals. It can measure distances, dwell times and crowd flows, and flag unauthorised intrusions. What’s more, it requires only a fraction of the data used by camera-based systems, making data processing far quicker – all without capturing personal biometric identifiers of any kind. 

As automation continues to shape the world around us, it makes complete sense that lidar, as a pivotal sensor technology, should play a central role in making environments better, safer and more efficient. As with all progress, technological innovation means nothing if it doesn’t improve lives too.  

The high accuracy, 24/7 availability, intelligence and anonymity of lidar make it a valuable and powerful technology far beyond the autonomous vehicle space with which it is most commonly associated. With lidar, machines can be smart without bias. 

Dr Jun Pei is the CEO and co-founder of Cepton, an industry-leading provider of mass-market lidar solutions serving the automotive industry (ADAS/AV), as well as smart cities, smart spaces and smart industrial applications. 
