Dystopia or life-saver? This AI tracks human social distancing in real-time

Using machines to track social distancing may very well be the future of pandemic control. It could also easily be misused.

As the pandemic continues its unprecedented spread across the world, researchers of all kinds are turning their resources to fighting the novel coronavirus. Landing AI, a relatively small company founded by noted computer scientist Andrew Ng, shared news last week of an AI-powered tool capable of analyzing real-time video to check if social distancing protocols are being observed properly.

Landing AI’s social distancing detection tool presents compelling evidence that technology research could be key in curbing the spread of COVID-19. While the tool is still in its early stages, Landing AI hopes sharing news of it now will encourage others to explore similar ideas.

Until a vaccine becomes available, social distancing is the best tool we have for curbing the spread of the virus. Tools like this are unlikely to be widely available for quite a while, so Landing AI’s research probably won’t help us in the immediate future. Instead, it offers hope that artificial intelligence could aid public health in the years to come. The project also raises inherent concerns about surveillance abuse.

Just three easy steps — As far as artificial intelligence goes, Landing AI’s latest project is actually relatively simple in its approach. Landing AI’s blog post outlines its three-step method for measuring the distance between people as they move: calibrate, detect, and measure.

Calibrating an AI-powered device can be complex, to say the least, which can create installation barriers for less tech-savvy users. Landing AI addresses this with a “lightweight tool” that helps calibrate the system: you choose four points in the camera’s view and map them to the corners of a rectangle, letting the software convert the camera’s perspective into a top-down view of the scene.
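The four-point calibration amounts to fitting a planar homography. A minimal sketch, assuming the standard direct linear method and using only NumPy (`fit_homography` and `to_ground` are illustrative names, not Landing AI’s code): it solves for the 3×3 transform that maps the four chosen image points onto a ground-plane rectangle.

```python
import numpy as np

def fit_homography(image_pts, ground_pts):
    # Solve for the 3x3 perspective transform H (with H[2,2] fixed to 1)
    # mapping four clicked image points onto four ground-plane points.
    A, b = [], []
    for (x, y), (u, v) in zip(image_pts, ground_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def to_ground(H, x, y):
    # Project an image pixel into bird's-eye ground coordinates.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Example: map four clicked corners of a floor region to a
# 30 ft x 15 ft rectangle (pixel coordinates are made up for illustration).
H = fit_homography(
    [(100, 200), (500, 220), (560, 400), (80, 380)],
    [(0, 0), (30, 0), (30, 15), (0, 15)],
)
```

Once fitted, any pixel can be projected into the rectangle’s units (here, feet), which is what makes real-world distance measurement possible later on.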

The system then applies a human detector to the perspective view using open-source tools. This detection step draws a tight bounding box around each person and tracks that box through space.
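The blog post doesn’t specify which detector or tracker is used. As one common open-source approach, the box-tracking part can be sketched as greedy intersection-over-union matching between consecutive frames’ boxes (`iou` and `match_boxes` are illustrative names, not Landing AI’s code):

```python
def iou(a, b):
    # Intersection over union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_boxes(prev, curr, threshold=0.3):
    # Greedily link each new detection to the most-overlapping previous
    # box, so a person keeps the same track index between frames.
    links, used = {}, set()
    for j, box in enumerate(curr):
        best, best_iou = None, threshold
        for i, old in enumerate(prev):
            if i in used:
                continue
            score = iou(old, box)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            links[j] = best
            used.add(best)
    return links
```

Production trackers add motion prediction and re-identification on top of this, but overlap matching is the basic idea behind following a box through space.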

Finally, the system measures the bird’s-eye distance between every pair of boxes as they move across the viewfinder. Anyone less than six feet apart — the suggested minimum distance for social distancing — is highlighted with a red box. People keeping a healthy distance are outlined in green.

Landing hopes for the best — Landing AI makes it a point to mention that its system has no way of recognizing individuals. That means no person could be singled out by the software for shaming or reprimanding. The company instead sees the tool as a way to give an overview of how social distancing is being observed in a given place. It even emphasizes that the tool should be used “with transparency and only with informed consent.”

But surveillance is dangerous — That said, other tools using similar technology might not be quite so anonymous. Landing AI’s mission may be for the greater good, but we would be remiss not to consider how it might be implemented for less altruistic reasons.

Landing AI uses a factory as its prime example for the tool’s use, with the potential to send real-time alerts when social distancing is not being observed. By looking at the tool’s video feed, a manager could easily identify which workers had violated social distancing rules. In fact, Amazon is already using similar software to track its workers; the company is using that tracking to threaten workers’ jobs.

Even if workers know about this technology — the transparency mentioned by Landing AI — there’s very little they can do to protest it. It’s not difficult to imagine how this might lead to some abuses of power.

There’s a fine line between surveillance as a useful tool and as a reinforcement of existing power imbalances. Landing AI’s software is undoubtedly a leap into the future of pandemic management. But with very little legal policy in effect around the use of AI surveillance, we must be wary of its potential for misuse.