Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting

https://research.paulengstler.com/invisible-stitch/

Building 3D Scenes With Depth Inpainting

To hallucinate scenes beyond known regions and to lift images generated by 2D models into three dimensions, current 3D scene generation methods rely on monocular depth estimation networks. For this task, it is crucial to seamlessly integrate the newly hallucinated regions into the existing scene representation. Applying a simple global scale-and-shift operation to the predicted depth map, as previous methods do, can lead to discontinuities between the scene and its hallucinated extension. We introduce a depth completion network that smoothly extrapolates the existing scene depth based on an input image.
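For intuition, here is a minimal sketch of such a global alignment: a single scale and shift are fit by least squares over the pixels whose depth is already known, so the correction cannot adapt locally at the boundary between the existing scene and the hallucinated content. The function name and the NumPy setup are illustrative only, not taken from any prior method's code.

```python
import numpy as np

def align_scale_shift(pred_depth, known_depth, known_mask):
    """Fit one global scale s and shift t so that s * pred_depth + t best
    matches the known scene depth on the overlapping pixels (least squares)."""
    x = pred_depth[known_mask].reshape(-1)
    y = known_depth[known_mask].reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)        # [N, 2] design matrix
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)    # closed-form least squares
    return s * pred_depth + t

# Because s and t are shared across the whole image, the alignment can be
# accurate on average yet still leave visible seams where the hallucinated
# region meets the existing geometry.
```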

3D scene generation method figure

Overview of our 3D scene generation method. Starting from an input image $I_0$, we project it to a point cloud based on a depth map predicted by a depth estimation network $g$. To extend the scene, we render it from a new viewpoint and query a generative model $f$ to hallucinate beyond the scene's boundary. We then condition $g$ on the depth of the existing scene and on the image of the scene extended by $f$ to produce a geometrically consistent depth map from which the hallucinated points are projected. This process may be repeated until a 360-degree scene has been generated.
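The loop described in this caption can be sketched roughly as follows; the helper names (f_inpaint, g_depth, unproject, render) and their interfaces are assumptions made for illustration, not the actual implementation.

```python
# Illustrative sketch of the iterative scene-building loop; all helper
# interfaces are assumed rather than taken from the paper's code.

def build_scene(image_0, cameras, f_inpaint, g_depth, unproject, render):
    # Initial lift: no scene depth exists yet, so g runs as a plain depth estimator
    depth_0 = g_depth(image_0, sparse_depth=None)
    points = unproject(image_0, depth_0, cameras[0])

    for camera in cameras[1:]:
        # Render the current scene from the new viewpoint; unseen pixels stay empty
        rgb, depth, visible = render(points, camera)

        # Hallucinate content beyond the scene boundary with the generative model f
        rgb_full = f_inpaint(rgb, mask=~visible)

        # Condition g on the rendered scene depth so the completed depth map
        # extrapolates the existing geometry instead of being re-estimated globally
        depth_full = g_depth(rgb_full, sparse_depth=depth)

        # Lift only the newly hallucinated pixels and fuse them into the point cloud
        points = points + unproject(rgb_full, depth_full, camera, mask=~visible)

    return points
```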

The depth completion network learns to inpaint masked depth map regions, conditioned on an image and the depth of known regions. We use masks that represent typical occlusion patterns generated by viewpoint changes. To retain the model's ability to predict depth when no sparse depth is available, the sparse depth input is occasionally dropped.

Training procedure figure

Overview of our training procedure. In this compact training scheme, a depth completion network $g$ is learned by jointly training on depth inpainting and on depth prediction without a sparse depth input (the ratio between the two tasks is determined by the task probability $p$). A teacher network $g_T$ is used to generate a pseudo ground-truth depth map $D$ for a given image $I$. This depth map is then masked with a random mask $M$ to obtain the sparse depth input $\tilde{D}$.
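A PyTorch-style sketch of one training step under this scheme is given below; the loss, the mask sampler, the default task probability, and the model interfaces are assumptions, as they are not specified on this page.

```python
import torch
import torch.nn.functional as F

def training_step(image, student, teacher, sample_occlusion_mask, p=0.5):
    """One step of the joint training scheme sketched above.

    With probability p the student g is trained on depth inpainting,
    conditioned on a masked pseudo ground-truth depth map from the teacher
    g_T; otherwise the sparse depth input is dropped and the student is
    trained on plain depth prediction.
    """
    with torch.no_grad():
        depth_gt = teacher(image)                     # pseudo ground-truth D from g_T

    if torch.rand(1).item() < p:
        mask = sample_occlusion_mask(depth_gt.shape)  # random occlusion-style mask M
        sparse_depth = depth_gt * mask                # masked sparse depth input D~
    else:
        sparse_depth = None                           # drop sparse depth: plain prediction

    depth_pred = student(image, sparse_depth)
    return F.l1_loss(depth_pred, depth_gt)            # loss choice is an assumption
```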

360-Degree Scene Results

"a view of Zion National Park"

Evaluating Scene Geometry

Within the fully generative task of scene generation, evaluating the geometric properties of generated scenes is difficult due to the lack of ground-truth data. Most existing work resorts to image-text similarity scores, which only measure the global semantic alignment of the generation with a text description. To evaluate the geometric consistency and quality of the depth predictions used to build the scene, we propose a new evaluation benchmark. It quantifies the depth-reconstruction quality on a partial scene with known ground-truth depth.

Scene evaluation approach figure

Overview of our scene consistency evaluation approach. Assume a scene is described by a set of views $\{v_1, v_2, \dots\}$ with associated images, depth maps, and camera poses, where the overlap of two views is described by a function $\phi(v_i, v_j)$. For a given view pair $(v_i, v_j)$ with $\phi(v_i, v_j) \geq \tau$, we generate a representation, e.g., a point cloud, from the ground-truth (GT) data for $v_i$. Then, we render the representation from the viewpoint of $v_j$. We feed the corresponding ground-truth image and the representation's depth into the model under consideration to extrapolate the missing depth. Finally, we calculate the mean absolute error between the result and the ground-truth depth for $v_j$, only considering those regions that were extrapolated.
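A rough sketch of this protocol, with assumed helpers for unprojection and depth rendering, an assumed model interface, and an illustrative value for $\tau$, might look as follows.

```python
import numpy as np

def scene_consistency_error(view_pairs, model, unproject, render_depth, tau=0.5):
    """Mean absolute depth error over extrapolated regions, following the
    protocol above. view_pairs yields (v_i, v_j, overlap) with ground-truth
    images, depth maps, and poses."""
    errors = []
    for v_i, v_j, overlap in view_pairs:
        if overlap < tau:                 # keep only pairs with phi(v_i, v_j) >= tau
            continue

        # Build a representation (here: a point cloud) from the GT data of v_i
        points = unproject(v_i.image, v_i.depth, v_i.pose)

        # Render it from the viewpoint of v_j; pixels not covered have no depth
        rendered_depth, known = render_depth(points, v_j.pose)

        # The model extrapolates the missing depth from the GT image of v_j
        completed = model(v_j.image, sparse_depth=rendered_depth)

        # Score only the regions the model had to extrapolate
        extrapolated = ~known
        errors.append(np.abs(completed - v_j.depth)[extrapolated].mean())

    return float(np.mean(errors))
```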

In both a real-world and a photorealistic setting, our inpainting model produces predictions that are more faithful to the ground truth than those of prior methods.

Abstract

3D scene generation has quickly become a challenging new research direction, fueled by consistent improvements of 2D generative diffusion models. Most prior work in this area generates scenes by iteratively stitching newly generated frames with existing geometry. These works often depend on pre-trained monocular depth estimators to lift the generated images into 3D, fusing them with the existing scene representation. These approaches are then often evaluated via a text metric, measuring the similarity between the generated images and a given text prompt. In this work, we make two fundamental contributions to the field of 3D scene generation. First, we note that lifting images to 3D with a monocular depth estimation model is suboptimal as it ignores the geometry of the existing scene. We thus introduce a novel depth completion model, trained via teacher distillation and self-training to learn the 3D fusion process, resulting in improved geometric coherence of the scene. Second, we introduce a new benchmarking scheme for scene generation methods that is based on ground truth geometry, and thus measures the quality of the structure of the scene.
