3D for everyone? Nvidia’s Magic3D can generate 3D models from text

A poison dart frog rendered as a 3D model by Magic3D. (credit: Nvidia)

On Friday, researchers from Nvidia announced Magic3D, an AI model that can generate 3D models from text descriptions. After entering a prompt such as "A blue poison-dart frog sitting on a water lily," Magic3D generates a 3D mesh model, complete with colored texture, in about 40 minutes. With modifications, the resulting model can be used in video games or CGI art scenes.

In its academic paper, Nvidia frames Magic3D as a response to DreamFusion, a text-to-3D model that Google researchers announced in September. Similar to how DreamFusion uses a text-to-image model to generate a 2D image that then gets optimized into volumetric NeRF (neural radiance field) data, Magic3D uses a two-stage process that takes a coarse model generated in low resolution and optimizes it to higher resolution. According to the paper's authors, the resulting Magic3D method can generate 3D objects two times faster than DreamFusion.


Magic3D can also perform prompt-based editing of 3D meshes. Given a low-resolution 3D model and a base prompt, it is possible to alter the text to change the resulting model. Also, Magic3D's authors demonstrate preserving the same subject across several generations (a concept usually called coherence) and applying the style of a 2D image (such as a cubist painting) to a 3D model.
