
Shared Cross-Modal Trajectory Prediction for Autonomous Driving (posted by User 6868260)

This paper proposes a framework for predicting the future trajectories of traffic agents in highly interactive environments. Given that autonomous vehicles are equipped with various types of sensors (e.g., a LiDAR scanner and an RGB camera), the work aims to improve prediction by exploiting multiple complementary input modalities.

The proposed approach consists of two stages. (1) Feature encoding: the motion behavior of the target agent is discovered with respect to other directly and indirectly observable influences, and these behaviors are extracted from multiple perspectives, such as the top-down and frontal views. (2) Cross-modal embedding: a set of learned behavior representations is embedded into a single cross-modal latent space. A generative model is constructed for future prediction, with an objective function that includes an additional regularizer designed specifically for this task. The framework is evaluated extensively on two benchmark driving datasets.

Original title: Shared Cross-Modal Trajectory Prediction for Autonomous Driving

We propose a framework for predicting future trajectories of traffic agents in highly interactive environments. On the basis of the fact that autonomous driving vehicles are equipped with various types of sensors (e.g., LiDAR scanner, RGB camera, etc.), our work aims to benefit from the use of multiple input modalities that are complementary to each other.

The proposed approach is composed of two stages: (i) feature encoding, where we discover the motion behavior of the target agent with respect to other directly and indirectly observable influences, extracting such behaviors from multiple perspectives such as the top-down and frontal views; and (ii) cross-modal embedding, where we embed a set of learned behavior representations into a single cross-modal latent space. We construct a generative model and formulate the objective functions with an additional regularizer specifically designed for future prediction. An extensive evaluation is conducted to show the efficacy of the proposed framework using two benchmark driving datasets.
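The post only reproduces the abstract, so the paper's actual network architecture is not shown here. As a rough illustration of the "shared cross-modal latent space" idea, the sketch below maps two hypothetical per-modality feature vectors (top-down and frontal view) into one latent space and decodes a future trajectory from it. All layer shapes, names, and the averaging-based fusion are assumptions for illustration only, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes: top-down (e.g., LiDAR map) features,
# frontal (e.g., RGB) features, shared latent size, prediction horizon.
D_TOP, D_FRONT, D_LATENT, HORIZON = 64, 128, 16, 12

# Per-modality encoders: each branch projects into the SAME latent space.
W_top = rng.normal(0, 0.1, (D_TOP, D_LATENT))
W_front = rng.normal(0, 0.1, (D_FRONT, D_LATENT))
# One shared decoder: latent -> future (x, y) waypoints over the horizon.
W_dec = rng.normal(0, 0.1, (D_LATENT, HORIZON * 2))

def embed_top(feat):
    """Top-down branch -> shared cross-modal latent."""
    return np.tanh(feat @ W_top)

def embed_front(feat):
    """Frontal branch -> the same shared latent space."""
    return np.tanh(feat @ W_front)

def decode(z):
    """A single decoder serves every modality's embedding."""
    return (z @ W_dec).reshape(HORIZON, 2)

# Because both branches land in one space, either modality alone (or a
# fused embedding when both sensors are available) can drive prediction.
z_top = embed_top(rng.normal(size=D_TOP))
z_front = embed_front(rng.normal(size=D_FRONT))
z_shared = (z_top + z_front) / 2  # naive fusion, for illustration
future_xy = decode(z_shared)
print(future_xy.shape)  # (12, 2): 12 future (x, y) waypoints
```

The point of the shared space is visible in the shapes: `z_top`, `z_front`, and `z_shared` all have dimension `D_LATENT`, so the decoder never needs to know which sensor produced the embedding. The paper additionally trains a generative model with a prediction-specific regularizer on this space, which this forward-pass sketch does not attempt to reproduce.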

Original link: https://arxiv.org/abs/2004.00202

Original author: Chiho Choi
