
The Game Developers Conference (GDC) 2025 took place in San Francisco in late March, and our team was at the show to talk to partners and customers, explore the latest trends hands-on, and chat with other experts in the field of real-time 3D technology.
It was a blast, with NVIDIA’s GTC 2025 event happening in parallel, and we collected so many great impressions during the week.
Here are our highlights from the GDC 2025 show floor:
3D GenAI and the view of 3D tech artists in 2025
One of the biggest topics this year was, of course, 3D generative AI (“GenAI”) – methods and models that “generate”, seemingly out of thin air, production-ready 3D models... at least, that’s the vision. In reality, current 3D generative AI models are usually trained on massive amounts of 2D images, which are used to generate a new image that is statistically plausible with respect to the user’s prompt and the training data. Such 2D images are then, through another network, transformed into 3D models. This typically introduces a couple of limitations, such as the resulting 3D models being mere “mesh blobs”, without semantic segmentation or a well-structured scene graph. In addition, PBR materials must be reconstructed with additional methods, since most 2D image generators don’t output render elements, just pixels. So, this year at GDC, we were curious to see some of the most advanced 3D GenAI companies showing their results in practice... which was very interesting, as we will see below.
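To make this multi-stage pipeline a bit more concrete, here is a minimal, purely illustrative Python sketch. All function names are hypothetical placeholders for the model stages described above – not any vendor’s actual API:

```python
# Illustrative sketch of a typical 2D-first 3D GenAI pipeline.
# All functions are hypothetical placeholders for the stages
# described above, not a real vendor API.

def generate_image(prompt: str):
    """Stage 1: a 2D image model turns the prompt into pixels."""
    ...

def lift_to_3d(image):
    """Stage 2: a reconstruction network lifts the image to a mesh.
    The result is typically a single "mesh blob": no semantic parts,
    no scene graph."""
    ...

def estimate_pbr_materials(image, mesh):
    """Stage 3 (separate method): reconstruct PBR maps, since the
    image generator only outputs beauty pixels, not render elements
    like albedo, roughness or normals."""
    ...

image = generate_image("red stratocaster with yellow to red sunburst")
mesh = lift_to_3d(image)
materials = estimate_pbr_materials(image, mesh)
```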
Now, if you’re a 3D tech artist, like many in our team, there is an important question to consider: how many artist jobs is 3D generative AI going to take away, and what parts of the work do we want to leave to humans in the future? Also, what data are these models trained on? Many artists have not given companies permission to train on their artworks, yet many companies do so anyway – this, together with the more or less obvious goal of 3D generative AI platforms to replace, in one way or another, what 3D artists do, is a highly controversial topic. So, we walked the GDC show floor with mixed feelings: concern about these open questions, but also fascination and genuine interest in the current state of the art in 3D GenAI.
Hyper3D.AI
One of the stars of the show this year was, in our opinion, Hyper3D AI – a GenAI startup showing their “Rodin” 3D GenAI model. Their team of researchers at ShanghaiTech University has already published technical papers at SIGGRAPH and achieves stunning results when it comes to segmented assets, both untextured and with materials. Their solution can also texture an object based on an image (e.g., a photo), which is a pretty cool approach. In our practical test, the latter didn’t work super well yet (in terms of how faithfully it reproduced the materials in the photo), but it was inspiring to see the team focusing on making their engine think “3D first” and being smart about generating metadata too – such as physical properties like weight, which could be leveraged in simulation scenarios, for example. Looking at these kinds of “out of the box” approaches to 3D GenAI, which do more than “just” create a mesh blob, made us imagine how such scenarios could work hand-in-hand with classical 3D modeling done by an artist (for example, letting the AI texture your asset for you, or propose physics properties for a simulation, while you do the modeling and the actual art); see the small sketch of the metadata idea below. Finally, they offer more fine-grained artistic control through “control nets” – for example, you can prescribe a bounding box for the object. Oh, and bonus points for their use of a Web-based path tracer for more physically accurate rendering!
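As a thought experiment on the metadata angle: glTF’s free-form “extras” field already offers a standard place to carry such generated properties alongside the mesh. Below is a minimal sketch; the “physics” payload and its keys are our own invention for illustration, not an official glTF extension or Hyper3D’s actual output format:

```python
import json

# Minimal sketch: attaching AI-generated physical properties to a
# glTF node via the spec's free-form "extras" field. The "physics"
# payload and its keys are hypothetical, not an official extension.
gltf = {
    "asset": {"version": "2.0"},
    "scenes": [{"nodes": [0]}],
    "nodes": [
        {
            "name": "generated_prop",
            "mesh": 0,
            "extras": {
                "physics": {
                    "massKg": 4.2,  # e.g., predicted by the GenAI model
                    "centerOfMass": [0.0, 0.12, 0.0],
                }
            },
        }
    ],
    "meshes": [{"primitives": []}],  # geometry omitted for brevity
}

with open("generated_prop.gltf", "w") as f:
    json.dump(gltf, f, indent=2)
```

A downstream simulation tool could then read these values instead of guessing mass from bounding-box volume, which is one way AI-generated metadata and classical artist workflows could meet.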
Meshy.AI
Another “hot” GenAI startup we looked at was Meshy. With substantial funding, strong marketing and a growing user base, they are positioning themselves as the number one solution for 3D GenAI. We challenged their solution with different prompts, leading to a couple of fun and interesting results: with character prompts, such as “Shark man with a flower on the head”, we got very free, varied interpretations of our prompt to choose from. In this particular example, we eventually got an OK-looking “Shark man” character, although the flower ended up on the chest instead. Still, the character looks OK, and we noticed that Meshy’s marketing material shows a lot of characters, even with options for automated rigging to animate them. However, when we tried something else, like our electric guitar example “red stratocaster with yellow to red sunburst”, we didn’t get any usable 3D electric guitar (some had two necks, some were missing essential parts). This led us to suspect that most of their training data consists of entertainment visuals such as characters, and that we are probably witnessing shortcomings in the training data intertwined with limitations of the generation algorithms. They did have cool showcase videos, though, on how their tech can already be used to generate, say, custom figurines for 3D printing, which can then be hand-painted and used for board games and similar use cases. So, we are sure that such algorithms will find their place in practice within the next couple of months and years as well.
ZBrush: Modeling with great detail, now also on the iPad
If you have ever used ZBrush by Maxon as a sculpting tool in your 3D workflow, maybe you have wondered what it would be like to model with just a pen, or a similarly lightweight tool – closer to the experience of drawing on paper. If so, there’s great news for you: the recent ZBrush version is better than ever on the iPad. We checked out a demonstration, and while there are still limitations compared to the full desktop workflow, the new mobile sculpting is already well-suited and convenient for many simple modeling applications, such as gear or props for games and similar use cases. Check it out here.
Re-Branding and Updates from Perforce
Industry veterans might be familiar with Perforce – one of the long-standing version control systems, successfully used both inside and outside the games sector. Perforce showed a couple of exciting updates for 3D data management, in particular for their various client applications, including a Web-based client with an interactive 3D viewer. Their “Helix” product family (e.g., “Helix Core”) has just been rebranded to “P4” (which has long been the name of their command-line executable), so if you stumble upon Perforce P4, that’s really just the new branding – still the same product, with a couple of great updates to make it easier for development teams to use. You can check it out here.
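For readers new to Perforce: the rebrand doesn’t change the day-to-day workflow around the p4 executable. A typical check-out-and-submit round trip looks roughly like the sketch below, here wrapped in Python for illustration; it assumes p4 is on your PATH, a workspace is already configured, and the file path is a made-up example:

```python
import subprocess

def p4(*args: str) -> str:
    """Run a p4 command and return its stdout (assumes p4 is on PATH
    and P4PORT/P4CLIENT point at an existing workspace)."""
    result = subprocess.run(
        ["p4", *args], capture_output=True, text=True, check=True
    )
    return result.stdout

# Typical round trip: sync, open a file for edit, submit.
print(p4("sync"))                          # bring the workspace up to date
print(p4("edit", "assets/hero_mesh.fbx"))  # open a (hypothetical) file for edit
print(p4("submit", "-d", "Optimized hero mesh LODs"))
```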
The AR Mirror: Trying VTO at the Marvelous Designer booth
In a fun and non-representative experiment, our team tried out 3D virtual try-on (VTO) via an “AR Mirror” at the Marvelous Designer booth. Acknowledging that this part of the booth should probably rather be seen as a gimmick (Marvelous Designer being a pretty strong cloth modeling solution, not an AR tool per se), it was a fun experiment and made us think more about VTO: what already works well for “hard-surface” items like shoes doesn’t yet translate to full 3D virtual try-on of clothing, and for production use there are still many open challenges – starting with the removal of existing clothes on the user (except underwear, please!), and continuing with realistic fitting and physically plausible draping (a tiny sketch of the latter follows below). And while there might be solutions more focused on production than this simple yet fun demo, we were thinking about the cases where we have already seen working VTO on 3D avatars (though not yet on camera streams). So, maybe the future lies in using more of the technology we already know from games (such as Unreal’s MetaHuman) to “mirror” the user and produce better VTO applications? Food for thought, and definitely a fun and exciting topic to keep track of over the next couple of months and years!
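To give a feel for why draping is hard, here is a deliberately tiny mass-spring sketch in Python: a vertical chain of cloth particles pinned at the top, settling under gravity. All constants are illustrative; real VTO draping adds body collision, bending and shear springs, and far more robust solvers than this naive explicit integration:

```python
import numpy as np

# Toy mass-spring "draping": a chain of particles pinned at the top,
# stretching under gravity. Illustrative constants, not production values.
N = 10                       # particles in the chain
REST = 0.05                  # rest length between neighbors (m)
K = 500.0                    # spring stiffness
MASS = 0.01                  # per-particle mass (kg)
DT = 0.001                   # time step (s)
GRAVITY = np.array([0.0, -9.81])

pos = np.array([[0.0, -i * REST] for i in range(N)])
vel = np.zeros_like(pos)

for _ in range(5000):
    forces = np.tile(GRAVITY * MASS, (N, 1))
    for i in range(N - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = K * (length - REST) * (d / length)  # Hooke's law along the spring
        forces[i] += f
        forces[i + 1] -= f
    vel += (forces / MASS) * DT
    vel *= 0.99              # crude damping so the explicit step stays stable
    pos += vel * DT
    pos[0] = [0.0, 0.0]      # pin the top particle in place
    vel[0] = [0.0, 0.0]

print(pos[-1])               # bottom particle after settling
```

Scale this from a 10-particle chain to a garment mesh colliding with a moving, camera-tracked body, and the open challenges mentioned above become very tangible.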


Meet the Author

Max
CEO & Co-Founder
Max is a Co-Founder and the CEO at DGG. Working with leading retailers, Max and his team are on a mission to automate 3D asset optimization workflows for real-time applications – for e-commerce, games and beyond. Max received his PhD in engineering with honors from TU Darmstadt, in the area of 3D data optimization. He has also helped shape the glTF format at Khronos.
