
3D Spatial Prompting: The Next-Gen CGI Pipeline

  • Writer: Jesse Barratt
  • Jul 11
  • 5 min read

Testing Intangible’s Open Beta


In the last few years, AI-generated art has gone from novelty to industry-disrupting force, and nowhere is that more apparent than in the world of CGI, visual storytelling, and creative production pipelines. Tools like Midjourney and Runway have shaken up the post-production space. But a new wave is emerging, one that moves beyond 2D image generation and into 3D spatial prompting. And it’s pointing toward what might become the future of how we ideate, build, and collaborate on visual work.

I just spent 30 minutes inside the open beta of Intangible, a new spatial AI creation platform, and put it through a quick test: a multi-shot look dev and vibe check using a rough 3D mockup of a cave scene. What I found wasn’t perfect, but it was very promising.

This article is a deep dive into what worked, what didn’t, and why this workflow is going to matter.

Abandoned wooden boats in a cave overgrown with lush green plants. Dim light, tranquil water, and hanging vines create a mysterious mood.
A cavern scene with broken wooden structures, green foliage, and purple mushrooms. Stalactites hang from the ceiling; a calm water body below.

What is Intangible?


For those new to it, Intangible is a browser-based tool that lets you create 3D scenes using intuitive drag-and-drop tools, then feed those spatial layouts into AI models that generate high-quality images and video. Think of it as a kind of "Midjourney meets Unreal Engine," but built for cinematic previsualization, storyboarding, and real-time ideation.


You lay out your shot with objects, assign prompts and camera angles, and the platform generates stylized or realistic renders based on your spatial input. It’s entirely no-code, and aimed at artists, directors, and teams looking to cut the time between idea and output.

The promise?


Faster pre-production. Less technical bottlenecking. Better visual communication.

Race cars speeding on a track at sunset, creating a dynamic scene. Text reads "Ideas move faster here." Mood is exhilarating.

Testing the Workflow: A 30-Minute Mockup

I dropped into the tool with no tutorial, built a quick greybox cave scene with a few props, and used it to test out a short sequence of cinematic shots. The goal wasn’t polish. It was to answer one question: can this fit inside a modern production workflow?


A ship sails in the distance beyond a cave opening, with lush plants and calm water inside. Sunlight illuminates the entrance and rocky edges.

Here’s what I learned.


1. 3D UX: Clean Enough to Be Useful

The base UX is relatively familiar. Standard 3D object manipulation, grid snapping, widget controls. Pressing Tab opens a more advanced editor, unlocking technical controls that actually make the process feel serious, not just playful.

For anyone with experience in 3D environments (Unreal, Unity, Blender), the learning curve is minimal. For others, it’s approachable but might still feel abstract.


Skeleton lies on sandy beach under decaying wooden structure. Waves crash in the background. Somber, eerie mood with muted colors.

2. Prompting is Still the Bottleneck, but Color Helps

Like any generative design tool, prompting is where things get murky. It takes some back-and-forth to find the sweet spot between literal description and stylized guidance.

That said, assigning color to objects noticeably improves prompt fidelity. Once you give the AI more visual context (“this rock is red, this pool is glowing blue”), the quality and relevance of outputs improve dramatically.
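Conceptually, color tagging works like attaching structured hints to each object before generation. Intangible is no-code and its internal format isn’t public, so the sketch below is purely hypothetical: a Python illustration of how per-object color tags can enrich a base text prompt, with every name and structure invented for the example.

```python
# Hypothetical sketch: Intangible's internals aren't public, so the object
# model and prompt format here are invented purely for illustration.

def build_prompt(base_prompt, objects):
    """Append per-object color hints to a base scene prompt."""
    hints = [f"the {o['name']} is {o['color']}" for o in objects if o.get("color")]
    return base_prompt if not hints else f"{base_prompt}; {', '.join(hints)}"

scene = [
    {"name": "rock arch", "color": "red"},
    {"name": "pool", "color": "glowing blue"},
    {"name": "driftwood", "color": None},  # untagged objects add no hint
]

prompt = build_prompt("a lush cave interior, cinematic lighting", scene)
print(prompt)
# a lush cave interior, cinematic lighting; the rock arch is red, the pool is glowing blue
```

The point of the sketch is the shape of the workflow, not the syntax: the more visual attributes each object carries, the less the text prompt has to do on its own.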

A lush cave interior with large green plants and a serene water pool. Sunlight filters through the opening, casting a tranquil mood.

3. Kling Integration = Solid Video Output

One of the stronger features is the ability to render short videos using Kling, one of the more promising AI video generation models right now. While not flawless, the motion outputs were coherent, stylized, and surprisingly cinematic for something generated on the fly.


4. The Missing Pieces

As strong as the core platform is, there are some key things missing that would take it to the next level:

  • Camera Presets: Having to prompt camera logic for every shot is tedious. A library of dolly-ins, pans, crane shots, etc., would unlock much faster storyboarding.

  • Lighting Control: Being able to control the direction, temperature, and intensity of light sources would drastically improve artistic control.

  • Material & Surface Prompts: Right now, material context is shallow. Adding more granularity here (roughness, reflectivity, displacement, etc.) would push renders closer to final concept art.

  • Object Tagging: The ability to tag an object as “altar,” “torch,” or “ancient pillar” would help the AI interpret prompts with more semantic accuracy.
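To make the camera-preset wish concrete: none of this exists in Intangible today (that is the point of the list above), but a preset library might look something like the sketch below. Every preset name and parameter here is invented for illustration.

```python
# Hypothetical camera-preset library. Intangible currently has no such
# feature; all names and parameters below are invented for illustration.

CAMERA_PRESETS = {
    "dolly_in": {"move": "forward", "distance_m": 3.0, "duration_s": 4.0},
    "pan_left": {"rotate": "yaw", "degrees": -45, "duration_s": 3.0},
    "crane_up": {"move": "up", "distance_m": 5.0, "duration_s": 6.0},
}

def describe_shot(preset_name, subject):
    """Turn a preset into camera prompt text, instead of hand-writing camera logic per shot."""
    p = CAMERA_PRESETS[preset_name]
    motion = p.get("move") or p.get("rotate")
    return f"camera {preset_name.replace('_', ' ')} ({motion}) on {subject}, {p['duration_s']}s"

print(describe_shot("dolly_in", "the cave altar"))
# camera dolly in (forward) on the cave altar, 4.0s
```

A library like this would let you storyboard a sequence in a handful of picks rather than re-prompting camera behavior shot by shot.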


View from inside a cave with polygonal rock walls, overlooking a serene sea and distant trees. The mood is tranquil and mysterious.
Sailing ship seen through a cave entrance with lush greenery, calm water, and a sunlit tree in the background. Peaceful and mysterious mood.

A New Phase in Visual Production

This isn’t just a toy for artists. It’s a look into the next generation of CGI pipelines, where AI, 3D layout, and human art direction converge.

Let’s talk about why that’s a big deal.


From Text-to-Image to Spatial-to-Image

We’ve all seen what happens when text prompts alone drive image generation. The outputs are powerful but wildly inconsistent. Visual ideas get lost in translation. What Intangible does is flip the script, letting users communicate through spatial intention rather than words alone.

This is a massive step forward. It enables:

  • Faster ideation

  • More accurate communication of intent

  • Lowered barrier to entry for non-technical creatives

It also reduces the dependence on prompt engineering. You can show, not tell.
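“Show, not tell” can be pictured as a request where spatial layout carries most of the intent and the text prompt only sets the vibe. Intangible exposes no public API, so the payload shape below is entirely invented, a minimal sketch of what spatial-to-image input might contain.

```python
# Hypothetical spatial-to-image request. Intangible has no public API;
# this payload shape is invented purely to illustrate the concept:
# the layout communicates the scene, the prompt only sets the mood.

def summarize_scene(request):
    """Human-readable summary of what the spatial layout communicates."""
    names = ", ".join(o["name"] for o in request["objects"])
    return f'{request["prompt"]} | {len(request["objects"])} placed objects: {names}'

scene_request = {
    "prompt": "misty cave at dawn, cinematic",
    "camera": {"position": [0, 1.6, -8], "look_at": [0, 1, 0], "fov_deg": 35},
    "objects": [
        {"name": "wrecked boat", "position": [2, 0, 3], "scale": 1.2},
        {"name": "water pool", "position": [0, 0, 0], "scale": 4.0},
    ],
}

print(summarize_scene(scene_request))
# misty cave at dawn, cinematic | 2 placed objects: wrecked boat, water pool
```

Compare that with a pure text prompt trying to describe the same camera position and object placement in words: the spatial version is unambiguous by construction.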


Ideation Over Execution

Right now, I wouldn’t use Intangible to generate final production assets. But that’s not the point. Its real power is in unlocking creative freedom earlier in the pipeline, where ideas are fragile and iteration is everything.

Use cases I can already see:

  • Look dev sprints during pre-production

  • AI-powered storyboarding for directors and cinematographers

  • Pitch decks with immersive visuals

  • VR/AR experience planning

  • Campaign ideation for agencies and experiential teams

In these contexts, fidelity doesn’t matter as much as feeling. And that’s where AI-generated art really shines.


Creative Risk-Taking Gets Easier

When visuals are cheap to generate and quick to modify, teams can take more risks. You can explore weird styles, odd lighting choices, and surreal compositions without incurring a dev cost. You’re no longer spending three days building a mockup; you’re spending three clicks.

This speed and flexibility give teams room to experiment, and that leads to better final work.


Wrecked wooden ship in a rocky cave overgrown with green vines. Water surrounds the debris, and colorful plants add contrast.
Sunken wooden ships in a lush cave with hanging vines and large leafy plants. Dim lighting creates a mysterious, abandoned atmosphere.

So Is This the Future of Production?


Not quite... but it’s getting close.


Real-time engines like Unreal still offer better control, higher fidelity, and production-ready outputs. I wouldn’t swap them out for Intangible just yet. But I would consider using Intangible alongside them, for look dev, client alignment, or exploratory scene design.


Where this tool will shine most in the next year or two is pre-production and prototyping.

And as features improve (modular AI pipelines, dynamic feedback loops, real-time light response), it’s not hard to imagine spatial AI tools like this becoming core parts of the modern virtual production workflow.


This was just 30 minutes of play, but the implications are serious. Intangible is pushing us toward a creative future that’s more fluid, more collaborative, and more spatially intuitive.

As AI tools evolve, the line between ideation and production will blur, and that’s going to unlock new forms of storytelling and design that haven’t been possible before.

If you're working in film, advertising, VR, experiential, or games, this is worth keeping an eye on.


And if you're building next-gen tools in this space: spatial input is the key.


Interested in spatial input or workflows that will speed up your projects and save you budget? Contact the Canopy team for a chat.


For more information on the open beta, visit Remix Reality or book a demo.

