A Blog by Jonathan Low

 

Apr 18, 2020

AI Can Determine If People Are Social Distancing Properly

The trade-off between health safety and personal information security remains, but until the crisis is past, people will probably opt for more monitoring, at least in the short term. JL

Landing AI reports:

An AI-enabled social distancing detection tool can detect if people are keeping a safe distance from each other by analyzing real-time video streams from cameras. We apply a pedestrian detector to the perspective views to draw a bounding box around each pedestrian, using an open-source pedestrian detection network. To clean up the output bounding boxes, we apply minimal post-processing such as non-max suppression (NMS) and rule-based heuristics. Technicians could integrate this software into their security camera systems to monitor the working environment with easy calibration steps.
In the fight against the coronavirus, social distancing has proven to be a very effective measure to slow the spread of the disease. While millions of people are staying at home to help flatten the curve, many of our customers in the manufacturing and pharmaceutical industries still have to go to work every day to make sure our basic needs are met.
To complement customers’ efforts and to help ensure social distancing protocol in their workplace, Landing AI has developed an AI-enabled social distancing detection tool that can detect if people are keeping a safe distance from each other by analyzing real-time video streams from the camera.
For example, at a factory that produces protective equipment, technicians could integrate this software into their security camera systems to monitor the working environment with easy calibration steps. As the demo below shows, the detector highlights people whose distance is below the minimum acceptable distance in red, and draws a line between them to emphasize this. The system can also issue an alert to remind people to keep a safe distance if the protocol is violated.
As part of an ongoing effort to keep our customers and others safe, and understanding that the only way through this is with global collaboration, we wanted to share the technical methodology we used to develop this software. Demos help to visually explain our approach, which consists of three main steps: calibration, detection, and measurement.

Calibration

As the input video may be taken from an arbitrary perspective view, the first step of the pipeline is computing the transform (more specifically, the homography) that morphs the perspective view into a bird’s-eye (top-down) view. We term this process calibration. As the input frames are monocular (taken from a single camera), the simplest calibration method involves selecting four points in the perspective view and mapping them to the corners of a rectangle in the bird’s-eye view. This assumes that every person is standing on the same flat ground plane. From this mapping, we can derive a transformation that can be applied to the entire perspective image. This method, while well-known, can be tricky to apply correctly. As such, we have built a lightweight tool that enables even non-technical users to calibrate the system in real time.
During the calibration step, we also estimate the scale factor of the bird’s-eye view, e.g. how many pixels correspond to 6 feet in real life.
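To make the calibration step concrete, the sketch below shows one way to compute the homography and scale factor with OpenCV. The four source points and the 12-foot rectangle width are illustrative assumptions, not values from the actual system:

```python
import cv2
import numpy as np

# Four points selected in the perspective (camera) view, assumed to lie on
# the ground plane and to form a rectangle in the real world. These
# coordinates are placeholders a user would pick in the calibration tool.
src_points = np.float32([[400, 600], [1100, 580], [1300, 900], [250, 950]])

# Where those points land in the bird's-eye view: the corners of an
# axis-aligned rectangle, here 400 x 600 bird's-eye pixels.
dst_points = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

# Homography that warps the perspective view into the bird's-eye view.
homography = cv2.getPerspectiveTransform(src_points, dst_points)

# Scale factor: if the rectangle's 400-pixel side is assumed to span, say,
# 12 feet on the factory floor, then 6 feet corresponds to 200 pixels.
feet_per_pixel = 12.0 / 400.0
safe_distance_px = 6.0 / feet_per_pixel
```

The same homography can also be passed to cv2.warpPerspective to render the full bird’s-eye image for visual inspection.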

Detection

The second step of the pipeline involves applying a pedestrian detector to the perspective views to draw a bounding box around each pedestrian. For simplicity, we use an open-source pedestrian detection network based on the Faster R-CNN architecture. To clean up the output bounding boxes, we apply minimal post-processing such as non-max suppression (NMS) and various rule-based heuristics; we choose rules that are grounded in real-life assumptions (such as humans being taller than they are wide), so as to minimize the risk of overfitting.
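As an illustration of this step, the sketch below uses torchvision’s pre-trained Faster R-CNN as a stand-in for the open-source pedestrian network (the post does not name a specific model), followed by NMS and the taller-than-wide heuristic:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# A generic pre-trained Faster R-CNN; in COCO, label 1 is "person".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_pedestrians(frame, score_threshold=0.8, iou_threshold=0.5):
    """Return person bounding boxes (x1, y1, x2, y2) for one RGB frame."""
    with torch.no_grad():
        outputs = model([to_tensor(frame)])[0]

    # Keep confident "person" detections only.
    keep = (outputs["labels"] == 1) & (outputs["scores"] > score_threshold)
    boxes, scores = outputs["boxes"][keep], outputs["scores"][keep]

    # Non-max suppression removes duplicate boxes for the same person.
    keep = torchvision.ops.nms(boxes, scores, iou_threshold)
    boxes = boxes[keep]

    # Rule-based heuristic grounded in a real-life assumption:
    # standing humans are taller than they are wide.
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    return boxes[heights > widths]
```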

Measurement

Now, given the bounding box for each person, we estimate their (x, y) location in the bird’s-eye view. Since the calibration step outputs a transformation for the ground plane, we apply that transformation to the bottom-center point of each person’s bounding box, resulting in their position in the bird’s-eye view. The last step is to compute the bird’s-eye-view distance between every pair of people and scale the distances by the scaling factor estimated from calibration. We highlight people whose distance is below the minimum acceptable distance in red, and draw a line between them to emphasize this.
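Reusing the hypothetical names from the earlier sketches (homography and safe_distance_px from calibration, boxes from the detector), the measurement step might look like this:

```python
from itertools import combinations

import cv2
import numpy as np

def measure_violations(boxes, homography, safe_distance_px):
    """Flag pairs of people closer than the safe distance in bird's-eye view."""
    # The bottom-center of each box is where the person meets the ground plane.
    feet = np.float32([[((x1 + x2) / 2.0, y2)] for x1, y1, x2, y2 in boxes])

    # Warp the ground-contact points into the bird's-eye view.
    bev_points = cv2.perspectiveTransform(feet, homography).reshape(-1, 2)

    # Compare every pair's bird's-eye distance against the 6-foot threshold.
    violations = []
    for i, j in combinations(range(len(bev_points)), 2):
        if np.linalg.norm(bev_points[i] - bev_points[j]) < safe_distance_px:
            violations.append((i, j))  # these two people are too close
    return violations
```

Each returned pair indexes two people to highlight in red, with a line drawn between their boxes in the perspective view.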
As medical experts point out, until a vaccine becomes available, social distancing is our best tool to help mitigate the coronavirus pandemic, especially as we open up the economy. Our goal in creating this tool and sharing it at such an early stage is to help our customers and to encourage others to explore new ideas to keep us safe.
