
4D Visualization of Dynamic Events from Unconstrained Multi-View Videos (CS CV) ---蔡秋纯

We present a data-driven approach for 4D space-time visualization of dynamic events from videos captured by multiple hand-held cameras. Key to our approach is the use of scene-specific, self-supervised neural networks to compose the static and dynamic aspects of an event. Although the event is captured from discrete viewpoints, this model lets us move continuously through its space-time. The model allows us to create virtual cameras that facilitate: (1) freezing time and exploring views; (2) freezing a view and moving through time; and (3) changing time and view simultaneously. We can also edit the videos and reveal objects occluded in a given view, provided they are visible in any of the other views. We validate our approach on challenging in-the-wild events captured with up to 15 mobile cameras.

Original title: 4D Visualization of Dynamic Events from Unconstrained Multi-View Videos

Original: We present a data-driven approach for 4D space-time visualization of dynamic events from videos captured by hand-held multiple cameras. Key to our approach is the use of self-supervised neural networks specific to the scene to compose static and dynamic aspects of an event. Though captured from discrete viewpoints, this model enables us to move around the space-time of the event continuously. This model allows us to create virtual cameras that facilitate: (1) freezing the time and exploring views; (2) freezing a view and moving through time; and (3) simultaneously changing both time and view. We can also edit the videos and reveal occluded objects for a given view if it is visible in any of the other views. We validate our approach on challenging in-the-wild events captured using up to 15 mobile cameras.

Original authors: Aayush Bansal, Minh Vo, Yaser Sheikh, Deva Ramanan, Srinivasa Narasimhan

Original link: https://arxiv.org/abs/2005.13532
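Translator's note: the sketch below is only a minimal, illustrative PyTorch example of the general idea described in the abstract, a scene-specific network with a static branch and a time-conditioned dynamic branch whose outputs are composed per query so that a virtual camera can sample any combination of space, time, and view. It is not the authors' implementation; the class name StaticDynamicComposer, the layer sizes, and the soft blending scheme are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' released code) of composing a static and a
# dynamic branch in one scene-specific model. Shapes and blending are assumed.
import torch
import torch.nn as nn

class StaticDynamicComposer(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Static branch: depends only on the spatial query (x, y, z) and the view direction.
        self.static_branch = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # RGB + blending logit
        )
        # Dynamic branch: additionally conditioned on time t, so moving foreground
        # content can change while the static background stays fixed.
        self.dynamic_branch = nn.Sequential(
            nn.Linear(7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz, view_dir, t):
        static_out = self.static_branch(torch.cat([xyz, view_dir], dim=-1))
        dynamic_out = self.dynamic_branch(torch.cat([xyz, view_dir, t], dim=-1))
        s_rgb, s_logit = static_out[..., :3], static_out[..., 3:]
        d_rgb, d_logit = dynamic_out[..., :3], dynamic_out[..., 3:]
        # Soft composition: learned weights decide, per query, whether the static
        # or the dynamic branch explains the observation.
        w = torch.softmax(torch.cat([s_logit, d_logit], dim=-1), dim=-1)
        return w[..., :1] * s_rgb + w[..., 1:] * d_rgb

# Freezing time corresponds to fixing t while varying the virtual camera pose
# that generates (xyz, view_dir); freezing the view fixes the pose and varies t.
model = StaticDynamicComposer()
rgb = model(torch.rand(1024, 3), torch.rand(1024, 3), torch.rand(1024, 1))
```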

