Traffic Lane Zone
The traffic lane zone is available on the Qb2 product. It cannot be used in combination with security or volume zones.
The traffic lane zone tracks vehicles driving through the zone on a single lane.
Application
- Velocity Estimation
-
Using multiple detections of a vehicle in the zone, its velocity can be calculated.
- Size Estimation
-
While the vehicle passes through the scene, its point cloud is aggregated in every step to create a dense final result and measure the object dimensions.
- Vehicle Classification
-
Using a support vector machine, an 8+1 classification of vehicle types is possible; for more details, see the pages on classification and object classes.
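The velocity-estimation application above can be sketched as a finite difference over successive vehicle positions. This is only a minimal illustration of the idea; the function name, frame rate, and positions are made up for the example and are not part of the Qb2 API.

```python
# Minimal sketch: estimate velocity from multiple detections of one vehicle.
import numpy as np

def estimate_velocity(positions, frame_rate_hz):
    """Estimate a velocity vector (m/s) from per-frame positions (N x 3, metres)."""
    positions = np.asarray(positions, dtype=float)
    dt = 1.0 / frame_rate_hz
    # Finite differences between consecutive detections, averaged for stability.
    return (np.diff(positions, axis=0) / dt).mean(axis=0)

# A vehicle moving in negative y-direction at 10 m/s, sampled at 10 Hz:
track = [(0.0, 30.0, 0.0), (0.0, 29.0, 0.0), (0.0, 28.0, 0.0)]
v = estimate_velocity(track, frame_rate_hz=10.0)
# v ≈ [0, -10, 0] m/s
```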
Placement
The traffic lane zone should be placed in the scene so that it extends along the y-axis, and rotated so that the zone label faces the origin. Vehicles should move in the negative y-direction, towards the Qb2. Additionally, the front of the zone should end where the sensor's field of view ends, so that events are triggered when vehicles leave the scene. For a single sensor, it is beneficial to place the zone so that the center line of the sensor's field of view runs along the middle of the lane.
Figure 1. Placement of Multiple Zones
Figure 2. Placement of a Single Zone
Parameters
See configuration API definition.
- Expected Velocity (unit: \(\frac{m}{s}\))
-
Velocity that the vehicles are expected to have inside the lane zone. This is used as initial velocity during the object tracking.
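As a sketch of how this parameter might be used, the snippet below seeds a new track's state with the configured expected velocity. The names and structure here are illustrative, not the actual Qb2 configuration API.

```python
# Hypothetical sketch: the expected-velocity parameter seeds new tracks.
from dataclasses import dataclass

@dataclass
class Track:
    position: tuple  # (x, y, z) in zone coordinates, metres
    velocity: tuple  # (vx, vy, vz) in m/s

EXPECTED_VELOCITY = 13.9  # m/s (~50 km/h), configured per zone; illustrative value

def new_track(first_detection_xyz):
    # Vehicles move in negative y-direction towards the sensor,
    # so the initial velocity points along -y.
    return Track(position=first_detection_xyz,
                 velocity=(0.0, -EXPECTED_VELOCITY, 0.0))
```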
Algorithm
The algorithm working on the traffic lane zone uses its enclosed portion of the foreground point cloud as input data. On these input points, object detection is performed. Afterwards, the objects are tracked and a class is assigned to them.
Tracking
For each frame, the following tracking steps are performed:
1. Predict the next position of already tracked vehicles.
2. Combine detected clusters that belong to a single vehicle.
3. Assign existing tracks to detected vehicles.
4. Update the vehicles' position and velocity; motion-correct and aggregate their point clouds.
5. Create new tracks for vehicles that were detected for the first time.
6. Delete tracks of vehicles that left the zone.
7. Send out the current state of the traffic lane and an event for each vehicle that left the zone.
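The per-frame loop above can be sketched as a minimal constant-velocity, nearest-neighbour tracker. This is a toy illustration, not the actual Qb2 implementation: the gating distance, the exit condition, and the seed velocity are made up, and cluster merging (step 2) as well as motion correction are omitted for brevity.

```python
# Toy sketch of one tracking step: predict, assign, update, create, delete, emit.
import numpy as np

EXPECTED_VELOCITY = np.array([0.0, -10.0, 0.0])  # seeds new tracks (m/s); illustrative

def track_frame(tracks, detections, dt, gate=2.0, exit_y=0.0):
    """tracks: list of {'pos', 'vel'} dicts; detections: list of (3,) arrays.
    Mutates tracks in place; returns events for vehicles that left the zone."""
    unmatched = list(range(len(detections)))
    for t in tracks:
        # 1. Predict the next position from the current velocity estimate.
        predicted = t["pos"] + t["vel"] * dt
        if not unmatched:
            continue
        # 3. Assign the nearest detection within the gating distance.
        j = min(unmatched, key=lambda k: np.linalg.norm(detections[k] - predicted))
        if np.linalg.norm(detections[j] - predicted) < gate:
            # 4. Update velocity and position from the matched detection.
            t["vel"] = (detections[j] - t["pos"]) / dt
            t["pos"] = detections[j]
            unmatched.remove(j)
    # 5. Create new tracks for first-time detections.
    for j in unmatched:
        tracks.append({"pos": detections[j], "vel": EXPECTED_VELOCITY.copy()})
    # 6./7. Delete tracks that passed the end of the zone and emit events for them.
    events = [t for t in tracks if t["pos"][1] < exit_y]
    tracks[:] = [t for t in tracks if t["pos"][1] >= exit_y]
    return events
```

Feeding the tracker one detection per frame moves the track down the lane until it crosses the zone boundary, at which point it is removed and reported as an event.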
Figure 3. Motion Distorted Vehicle
Figure 4. Motion Corrected Vehicle
- Aggregation
-
As the vehicle moves through the traffic lane and closer to the sensor, more and more of it becomes visible. For very long vehicles, such as trucks or buses with trailers, not all parts may be visible at the same time. Using knowledge of the object's position, multiple detections across frames can be combined into a single, denser point cloud.
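The aggregation idea can be sketched as follows: each older frame is shifted by the motion the vehicle made since that frame, so that all frames line up with the newest one before being merged. This assumes a constant velocity over the window and is an illustration, not the product's implementation.

```python
# Sketch: aggregate per-frame point clouds of one vehicle with motion correction.
import numpy as np

def aggregate(frames, velocity, dt):
    """frames: per-frame (N_i x 3) point clouds, oldest first.
    velocity: vehicle velocity (3,) in m/s; dt: frame period in seconds."""
    last = len(frames) - 1
    # Shift older frames forward by the motion since their capture time.
    shifted = [np.asarray(f, dtype=float) + np.asarray(velocity) * (last - i) * dt
               for i, f in enumerate(frames)]
    return np.vstack(shifted)
```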
Classification
If a classification model is used, the tracked vehicles are classified accordingly. If not, the dimensions of the object bounding boxes are used to assign one of the classes car, van, or truck from the 8+1 vehicle classification.
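The dimension-based fallback can be sketched as a few threshold checks on the bounding box. The thresholds below are illustrative guesses for the example, not the product's actual values.

```python
# Sketch of the fallback: pick car, van, or truck from bounding-box dimensions.
def classify_by_dimensions(length_m, width_m, height_m):
    # Illustrative thresholds; real products tune these per deployment.
    if length_m > 6.0 or height_m > 2.8:
        return "truck"
    if length_m > 5.0 or height_m > 1.9:
        return "van"
    return "car"
```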
Output data
The vehicles in the state and event streams are always given in the local coordinate system of the corresponding traffic lane zone, not in the global coordinate system of the sensor.
State
See full API definition.
- vehicles
-
List of vehicles currently inside the zone. Each entry contains the position, dimensions, velocity, point cloud and class of the vehicle.
Event
See full API definition.
An event is created every time a vehicle leaves a traffic lane zone.
- zone_uuid
-
The UUID of the traffic lane zone that emitted the event.
- vehicle
-
The vehicle leaving the zone, including its position, dimensions, velocity, aggregated point cloud, and vehicle class.
- id
-
The unique id of the vehicle.
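Because positions in the state and event streams are given in the zone's local coordinate system, a consumer that needs sensor coordinates must apply the zone's pose. The sketch below assumes a simple pose of origin plus yaw about the z-axis; the function name and pose parameters are illustrative and should be taken from your actual zone configuration.

```python
# Sketch: transform a zone-local point into the sensor's coordinate system.
import numpy as np

def zone_to_sensor(point_local, zone_origin, zone_yaw_rad):
    """Rotate a zone-local point about z by the zone's yaw, then translate
    by the zone origin (both expressed in the sensor frame)."""
    c, s = np.cos(zone_yaw_rad), np.sin(zone_yaw_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return rot @ np.asarray(point_local, dtype=float) + np.asarray(zone_origin, dtype=float)
```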