Show HN: MCP server for Blender that builds 3D scenes via natural language
https://blender-mcp-psi.vercel.app

Hi HN!
I built a custom MCP (Model Context Protocol) server that connects Blender to LLMs like ChatGPT, Claude, and any other LLM supporting tool calling and MCPs, enabling the AI to understand and control 3D scenes using natural language.
You can describe an entire environment like:
> “Create a small village with 5 huts arranged around a central bonfire, add a river flowing on the left, place a wooden bridge across it, and scatter trees randomly.”
And the system parses that, reasons about the scene, and builds it inside Blender — no manual modeling or scripting needed.
What it can do:
- Generate multi-object scenes like villages and landscapes from a single prompt
- Understand spatial relations — e.g., “place the bridge over the river” or “add trees behind the huts”
- Create camera animations and lighting setups: “orbit around the scene at sunset lighting”
- Respond to iterative changes like “replace all huts with stone houses” or “make the river narrower”
- Maintain object hierarchy and labels for later editing
Tech Stack:
- Blender Python scripting
- Node.js server running MCP
- LLM backend (OpenAI / Claude, easily swappable)
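As a rough illustration of what the Blender-side execution of a prompt like the one above might involve (not this project's actual code), "5 huts arranged around a central bonfire" boils down to computing placements and then spawning objects. `ring_positions` is a hypothetical helper, and the cube is just a stand-in for a hut model:

```python
import math

# Hypothetical sketch of the placement step for "5 huts around a bonfire":
# compute evenly spaced points on a circle, then (inside Blender) spawn an
# object at each point.

def ring_positions(count, radius):
    """Evenly spaced (x, y) points on a circle around the origin."""
    return [
        (radius * math.cos(2 * math.pi * i / count),
         radius * math.sin(2 * math.pi * i / count))
        for i in range(count)
    ]

def place_huts(count=5, radius=6.0):
    """Only runs inside Blender, where the bpy module exists."""
    import bpy
    for x, y in ring_positions(count, radius):
        # A real tool would import a hut asset; a cube is a stand-in here.
        bpy.ops.mesh.primitive_cube_add(location=(x, y, 0))

pts = ring_positions(5, 6.0)
```

The interesting work is in the spatial reasoning the LLM does before a call like this; the geometry itself stays simple.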
Demo: https://blender-mcp-psi.vercel.app/
GitHub: https://github.com/pranav-deshmukh/blender-mcp-demo/
Curious to hear thoughts from folks in 3D tooling, AI-assisted design, or dev interface design. Would you find this useful as a Blender plugin? I’m open to expanding it!
Please try it and give it a star on GitHub!
Couple things:
1. Your GitHub doesn't have anything in it; it's just a generic MCP server.
2. How does this differ from blender-mcp released by ahujasid several months ago? That one actually does have the complete source and has quite a following.
https://github.com/ahujasid/blender-mcp
https://news.ycombinator.com/item?id=43357112
It is indeed an MCP server, but I have added some things that make it different from being generic. It works smoothly, as you can see from the code.
And I am working on it; it is new, and I am adding other things to it, like generating Three.js scenes, free Blender asset APIs, etc. Happy if anyone else wants to contribute.
Why star it on github if it doesn't include the code to run it?
Because GitHub has become an advertising platform for devs. How did this get to the front page without code?
What are you saying? It has code, brother, please visit once. And yes, if you want to contribute, I am more than happy.
There is no real code here; it's all a stub.
No prompts, no functions, nothing in the github repos.
https://github.com/pranav-deshmukh/blender-mcp/blob/main/add...
The fade in effect when scrolling down is quite distracting, and makes reading the web page slower, because I have to wait for the text to appear. Yes, I have a fast computer.
It is also very choppy on my iPhone 16, not sure why.
Edit: I tried watching the demo, and it seems the site is not usable on my phone. I can't play the video; clicking play does nothing, and the page keeps scrolling and jumping.
Fixing it asap
The live demo video is broken in mobile Firefox. It displays, but is annoyingly cropped (and differently depending on landscape or portrait).
The site layout is completely broken.
Probably vibecoded slop.
Hi, not at all vibecoded. I will fix it asap; I built it in a hurry, sorry for the issue.
That's fairly rude.
[dead]
HN has been happily very rude about anything AI related the last year, even in cases here where it's hardly relevant or appropriate. It's depressing and I used to expect a lot better.
It takes more work to build a janky site than just no-frills HTML/CSS, unless you vibecoded it or copy-pasted a crappy template.
I think we should be allowed to push back against sloppy work (which is different from beginner work) instead of ingratiating it with a smile.
We have the rest of you to baby them over adding the worst css transitions I’ve ever seen, something they deliberately swerved into.
They are accused of vibe coding it only through charity because it’s hard to imagine they did it themselves and went “yup that’s exactly what I wanted after spending that extra time adding it.” Whether it’s vibe coded or not isn’t really the point.
Grading on perceived effort is not a rubric destined to last. You cannot detect sloppy work from beginner work without context, and in any case a lot of beginner work these days (and to some degree for the rest of time!) is going to include LLMs or AI.
Is HN only for advertising startups these days? If this post had nothing to do with AI maybe the response would have included some real genuine criticism and feedback, with the assumption baked-in that a beginner was being coached.
To your last point, then downvote it if it's bad. You're right and I agree precisely that it being vibe-coded wasn't the point - but it was brought up regardless. If the result is bad the feedback is still the same. If the "problem" is just that they used tools you don't agree with using, then that's not feedback on the result.
I do not think it has much to do with how fast your computer is, it is probably timed, e.g. from the CSS: "transition-duration: 0.3s". It is quite annoying.
Almost akin to:
- "How many CSS effects do you want?"
- "Yes".
:P
At any rate, the project is pretty cool. Everything is just one prompt away now (not really, but still!).
Thanks for the feedback brother, I will surely improve the website
The only issue I have is not being able to read the text right away, but perhaps making the animation faster might work?
Why these web page animations are still a thing in 2025, I will never understand…
Hi, quick feedback: the demo is extremely short, so I can't really say much. Please generate more complicated scenes and, most importantly, inspect the wireframe. From what I could glance from the demo, the generated models are tri-based instead of quads, which would be a showstopper for me.
Just curious: why do you prefer/have a requirement of quad-based meshes?
Because traditionally, Blender modeling works best on a clean quad-based mesh. Just look at any modeling tutorial for Blender and one of the first things you learn is to always keep a clean, quad-based topology, and avoid triangles and n-gons as much as possible, as it will make further work on the model more painful, if not impossible. That ranges from simple stuff like doing a loop cut to things like UV unwrapping and using the sculpting tools. It's also better for subdivision surface modeling. You can of course use tri-based models, but if you want to refine them manually, it's often a pain. Usually, for me it's pretty much a "take as-is or leave it" situation for tri-based meshes, and since I see these AI-created models more as a starting point rather than the finished product, having a clean quad-based topology would be very important for me.
Is this true even if you do only or mostly sculpting?
No. But for animation meshes, it's the norm to use only quads. Mainly because of topology/retopology issues.
Sometimes texture artists like this a lot more.
Yes, because UV unwrapping is much more predictable with quads, and you can place seams along edge loops. I'm by no means an expert here, maybe there are tools which make this similarly easy with non-quad topology, but at least from what I've learnt, the clean grids you get from quad meshes are simply much easier to deal with when doing texturing.
On it, thank you for the feedback.
The fade-in effect is really distracting, and so poorly done. It takes the elements reaching almost 50% of the screen height before becoming readable.
This is so sad to see animation hurting a good product.
An MCP server is not necessary: one can just call LLM services' APIs directly from within Blender, and the LLMs already know Blender very well, it being open source, with a gargantuan amount of data about it online in the form of tutorials and so on, all in foundation model training data.
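For illustration, here is a minimal sketch of that direct-call approach, as something you could run from Blender's scripting tab. The endpoint URL, model name, and API key are placeholders for whatever OpenAI-style chat API you happen to use, and only the `exec` step assumes it is running inside Blender, where the `bpy` module exists:

```python
import json
import urllib.request

# Hypothetical sketch: ask an OpenAI-style chat endpoint for Blender Python
# code, then exec() it inside Blender. Endpoint, model, and key are
# placeholders, not any specific vendor's real configuration.

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder
MODEL = "some-model"  # placeholder

def build_request(prompt, api_key="YOUR_KEY"):
    """Build the HTTP request asking for a bpy script for the given prompt."""
    body = json.dumps({
        "model": MODEL,
        "messages": [
            {"role": "system",
             "content": "Reply only with Blender Python (bpy) code."},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    return urllib.request.Request(
        API_URL, data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

def run_generated_code(code):
    """Only meaningful inside Blender, where the bpy module exists."""
    import bpy  # available when running under Blender
    exec(code, {"bpy": bpy})

req = build_request("Add a cube at the origin")
print(req.get_method())  # POST, since a request body is attached
```

What MCP adds on top of this is a standard interface, so any MCP-aware chat client can drive Blender without the user writing glue code like the above.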
Great work. In your "How to Setup" the cloned project is "blender-mcp" but the directory is "bleder-mcp-demo".
I don't have Claude and no experience with MCP. How to use it with other tools such as LMStudio, ollama, etc?
Fixing it. It's actually blender-mcp only; I changed the repo name from blender-mcp-demo to blender-mcp.
And you can use the free-tier Claude desktop app or other open-source LLMs.
Nice idea - I’m adding it to my list over at https://taoofmac.com/space/ai/mcp and will try it out later as I have been dabbling in Blender plugins myself.
Thanks
Congrats on releasing something. I'm not a Blender user, but I think the demo is pretty cool. Kind of crazy what MCP is allowing LLMs to do.
how does it compare to the existing https://blender-mcp.com/ ?
Slightly strange how both use the same example of a house with some trees.
Will use a better example, thank you for the suggestion.
Better in every way since this is posted to HN!
That one was discussed here too, many times
I apologize for this extremely dumb question, but how is this a "server"? As far as I'm aware Blender is a local app. It can run without an internet connection. If an LLM wants to call into it, it needs to call its local Python API.
Is this just unlucky naming or am I missing a critical piece?
MCP is a spec that is attempting to standardize a communication pattern for registering and calling tools from an LLM. Part of the spec is a server that exposes specific JSON-RPC endpoints with a registry of the available tools, resources, and templates, and a way of executing them. That's the server; in this case the server acts as the interface into Blender.
The pipeline from the LLM through MCP to the app looks like: chat app (MCP client) → MCP server → Blender.
The chat app doesn't know how to talk to Blender. It knows about MCP and links in a client. Blender exposes its functionality via an MCP server. MCP connects the two. A server-client architecture can run on a single computer: you just need one piece of code to act as the "server" and one to act as the "client". Technically you don't even necessarily have to involve the networking stack; you can just communicate between processes.
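To make that concrete, here is a rough sketch of the two JSON-RPC messages involved. The `tools/list` and `tools/call` method names come from the MCP spec; the `create_object` tool and its schema are made-up examples of what a Blender-facing server might expose, not this project's actual API:

```python
import json

# Minimal sketch of the JSON-RPC messages an MCP server exchanges.
# "tools/list" and "tools/call" are real MCP method names; the tool
# itself is a hypothetical example.

def tools_list_response(request_id):
    """What a server might return when a client asks which tools exist."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "create_object",  # hypothetical Blender-facing tool
                    "description": "Create a mesh primitive in the scene",
                    "inputSchema": {
                        "type": "object",
                        "properties": {
                            "type": {"type": "string"},
                            "location": {"type": "array",
                                         "items": {"type": "number"}},
                        },
                        "required": ["type"],
                    },
                }
            ]
        },
    }

def tool_call_request(request_id, name, arguments):
    """What a client sends when the LLM decides to invoke a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

resp = tools_list_response(1)
req = tool_call_request(2, "create_object",
                        {"type": "CUBE", "location": [0, 0, 1]})
print(json.dumps(req, indent=2))
```

The server's job is then to translate each `tools/call` into the corresponding Blender Python operation and return the result.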
This is an awesome use of MCP. Thank you!
Thanks a lot brother
Updated github link: https://github.com/pranav-deshmukh/blender-mcp
Does anyone know of a way to create custom 3D print designs with LLMs? Is there a bespoke project or service somewhere?
I have successfully (if inefficiently, but faster than if I did it on my own) used Claude with OpenSCAD to make 3D-printed products.
Your vibe coded website has a lot of issues on mobile
Screenshots would be nice.
Is there a feedback loop ?
As in:
External Prompt -> Claude -> MCP -> Blender -> Cycles -> .exr -> show Claude how good its work actually is -> Correct -> New prompt -> ... Rinse and repeat until result actually looks realistic.
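A skeleton of that loop might look like the following. All four functions are hypothetical stubs: in a real setup, `apply_prompt` would edit the scene, `render_scene` would drive Blender/Cycles to an .exr, and `ask_llm` would send the render back to the model for a critique:

```python
# Skeleton of the render-and-critique loop described above. Everything
# here is a stub standing in for Blender edits, a Cycles render, and an
# LLM scoring pass.

def apply_prompt(scene, prompt):
    return scene + [prompt]  # stub: record the edit

def render_scene(scene):
    return f"render-of-{len(scene)}-edits.exr"  # stub: path to an EXR

def ask_llm(image_path):
    # stub: pretend quality improves with each editing round
    rounds = int(image_path.split("-")[2])
    return min(1.0, 0.5 + 0.3 * rounds), "increase bounce light"

def feedback_loop(initial_prompt, threshold=0.9, max_rounds=5):
    scene, prompt = [], initial_prompt
    for _ in range(max_rounds):
        scene = apply_prompt(scene, prompt)
        score, suggestion = ask_llm(render_scene(scene))
        if score >= threshold:
            break
        prompt = suggestion  # rinse and repeat with the model's correction
    return scene, score

scene, score = feedback_loop("a realistic campfire at dusk")
```

The hard part in practice is `ask_llm`: the model has to judge a rendered image against the original intent reliably enough that the loop converges instead of wandering.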
Yes
I'm tired of the half-way-there automations, I want an MCP that can replace the person that would need to use this.
Taken to the nth degree you want an MCP server that makes you a feature length animation or invents a new device and ships it to you?
Of course not. I would just ask for an MCP that watches the generated movie so I can use my time for more important matters. I just want the system to work by itself entirely; we could have these full consumerism silos and just enjoy being called its gods, but perhaps we could automate such egocentrism too.
> Of course not, I would just ask for a MCP that watches the generated movie so I can use my time for more important matters
Well, now I know why "they" bother to digitally simulate my existence, and why movies are so terrible.
Ha ha, will reach there eventually
Sounds like my CEO: "we don't need engineers, we have AI".
now who runs the AI?
...
Obviously you set up a battle-royale-style competition with all the engineers, where the only one who survives gets to be in charge of all AI.
I managed to do something like this directly in WebGL via three.js in Windsurf 2 weeks ago; you can see the resulting animation over here: https://infinite-food.com/ Also did an SVG animation and a globe with geopoints. So much easier than by hand...