Creating Path Wizard
Uploaded on August 3, 2025
This post outlines the technical design, challenges, and process behind a digital toy I created called Path Wizard.
The original idea came out of another project (which I should actually continue working on…). Years ago I saw this wonderful painting in the Milwaukee Art Museum, and I've kind of never stopped thinking about it. It's my favorite painting of all time. In my own work I'm interested in the geometry of silhouettes and in playing with dimension. This painting exhibits both qualities, so I wanted to make a tribute piece in Three.js where you could drag the swoopy lines around, apply gravity selectively, etc.
So I started work on this: I made SVGs of the background and the swoopy lines, and as I began putting them together in Three.js, I realized that applying physics to the swoopies would be harder than I initially thought.
I genuinely thought there would be some easy "SVG to rope" plugin I could add to Three.js. This does not exist (maybe that's another thing I could build out of this project). The reason it doesn't exist is probably that it's complicated and not many people need it. What are the physics of this "rope"? Should it "feel" like a thread or a 100-pound chain? Should it be able to stretch?
I started out by playing around with some physics libraries and Three.js in vanilla JS to understand what the libraries really *do* and how to use them. (For context, I had never done more than poke around with Three.js or 3D programming before.) I started with Cannon, then switched to Rapier because it has a React port (@react-three/rapier) that's compatible with @react-three/fiber, which I figured would make the project way easier. Since my website is built on React, I wanted something that played nicely with it.
Once I kind of got the hang of it and had looked through some documentation, I knew what I was building: a React tool that takes an SVG with a single path as input and outputs a 3D object that works within a physics library. The 3D object should look like the SVG on initialization, and its shape should be dictated by joints that can be changed by mouse events.
So step one was to make the actual "shape" in 3D. SVGs are sets of instructions that form an image. One of the composite features of SVGs is the path, and a path contains instructions that define its segments. My idea was to extract each of these segments with some sort of library (please let there be a library… okay, there is a library) and turn the path into an array of objects that together describe the shape.
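The post doesn't name the library, but the core idea is easy to sketch by hand for the simplest case: a path made only of absolute `M`/`L` (move/line) commands. The function below is illustrative, not the actual implementation; real SVG paths also have curve commands (`C`, `Q`, `A`), which is exactly what a library handles for you.

```javascript
// Sketch: parse a simple SVG path of absolute M/L commands into an array of
// segment objects with start, end, length, and angle. Curves are out of scope
// here -- a path library would sample those into line segments for you.
function pathToSegments(d) {
  const nums = d.match(/-?\d*\.?\d+/g).map(Number);
  const points = [];
  for (let i = 0; i < nums.length; i += 2) {
    points.push({ x: nums[i], y: nums[i + 1] });
  }
  const segments = [];
  for (let i = 0; i < points.length - 1; i++) {
    const a = points[i], b = points[i + 1];
    const dx = b.x - a.x, dy = b.y - a.y;
    segments.push({
      start: a,
      end: b,
      length: Math.hypot(dx, dy),
      angle: Math.atan2(dy, dx),
    });
  }
  return segments;
}

const segs = pathToSegments('M 0 0 L 10 0 L 10 10');
console.log(segs.length);    // 2
console.log(segs[0].length); // 10
```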
Now that I have this array, which contains start and end coordinates, length, angle, etc., I can use it to construct 3D boxes or cylinders that form a replica of the SVG in three dimensions. Next, I use Rapier's useSpringJoint() to create a "joint" between each pair of neighboring objects – in other words, to apply a force proportional to the distance between the two objects. That looks like this:
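Rapier computes this inside the physics engine, but the force a spring joint applies is conceptually simple: it scales with how far the two anchors are from their rest length. A minimal sketch (the stiffness value and function names here are made up for illustration, not Rapier's API):

```javascript
// Illustrative spring-force math: the joint pulls the bodies together when
// they are farther apart than restLength, and pushes apart when closer.
function springForce(posA, posB, restLength, stiffness) {
  const dx = posB.x - posA.x, dy = posB.y - posA.y, dz = posB.z - posA.z;
  const dist = Math.hypot(dx, dy, dz);
  const magnitude = stiffness * (dist - restLength); // Hooke's law
  // Unit vector from A toward B, scaled by the magnitude: the force on A.
  return {
    x: (dx / dist) * magnitude,
    y: (dy / dist) * magnitude,
    z: (dz / dist) * magnitude,
  };
}

// Anchors 3 units apart with rest length 1: A gets pulled hard toward B.
const f = springForce({ x: 0, y: 0, z: 0 }, { x: 3, y: 0, z: 0 }, 1, 10);
console.log(f); // { x: 20, y: 0, z: 0 }
```

A real spring joint also applies damping so the rope settles instead of oscillating forever; Rapier exposes that as a joint parameter.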
Now we have the "bones" – the 3D segments and the joints – so we need the "skin": the tube around the bones. On each frame I calculate the position of each segment and use those positions as a guide to build the tube shape around them. I also add a shader for flat, "cartoony" shading to fit the aesthetic of the website.
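The per-frame "skin" step boils down to deriving a centerline from the live segment positions; in Three.js that point list would then feed a curve and a tube geometry. A minimal sketch of the point-gathering part (the function name is mine, not from the project):

```javascript
// Sketch: build the tube's centerline from the current segment positions.
// Using each segment's start point plus the last segment's end point gives a
// polyline that passes through every joint.
function centerlineFromSegments(segments) {
  const points = segments.map((s) => s.start);
  points.push(segments[segments.length - 1].end);
  return points;
}

const line = centerlineFromSegments([
  { start: { x: 0, y: 0, z: 0 }, end: { x: 1, y: 0, z: 0 } },
  { start: { x: 1, y: 0, z: 0 }, end: { x: 1, y: 1, z: 0 } },
]);
console.log(line.length); // 3 points for 2 segments
```

In Three.js terms, these points could become a `CatmullRomCurve3` passed to a `TubeGeometry`, rebuilt each frame as the physics moves the segments.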
Next, we need to add the click events. On the tube mesh above, I created an onMouseDown handler that finds the closest segment and turns on an isDragging flag. While the flag is on, every mouse movement reads the new mouse position, calculates the distance between the mouse and that segment, and applies a force toward the mouse proportional to that distance. On mouse up, we stop calculating these distances. For clarity, I removed the "skin" mesh to better illustrate how the joints work:
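The two pieces of the drag interaction can be sketched as plain functions (names and the strength constant are illustrative): one picks the nearest segment on mousedown, the other turns the mouse offset into a force on mousemove.

```javascript
// Sketch: on mousedown, find the segment whose midpoint is closest to the
// pointer; on mousemove, apply a force toward the pointer that grows with
// distance, so the pull strengthens the farther you drag.
function closestSegmentIndex(segments, point) {
  let best = 0, bestDist = Infinity;
  segments.forEach((s, i) => {
    const mx = (s.start.x + s.end.x) / 2;
    const my = (s.start.y + s.end.y) / 2;
    const d = Math.hypot(point.x - mx, point.y - my);
    if (d < bestDist) { bestDist = d; best = i; }
  });
  return best;
}

function dragForce(segmentCenter, mouse, strength) {
  // Force vector from the segment toward the mouse, scaled by distance.
  return {
    x: (mouse.x - segmentCenter.x) * strength,
    y: (mouse.y - segmentCenter.y) * strength,
  };
}

const segments = [
  { start: { x: 0, y: 0 }, end: { x: 2, y: 0 } },
  { start: { x: 2, y: 0 }, end: { x: 4, y: 0 } },
];
console.log(closestSegmentIndex(segments, { x: 3, y: 1 }));  // 1
console.log(dragForce({ x: 1, y: 0 }, { x: 3, y: 0 }, 0.5)); // { x: 1, y: 0 }
```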
While working on this, I added a UI element where I actually have to draw a path. This let me catch a lot of edge cases and bugs, because I was drawing all different kinds of shapes. At this point, though, I realized that a large part of the screen real estate was taken up by a static image the user had already drawn. So I decided to do something similar to how I create the skin/tube layer: I map the position of each segment in the 3D scene back onto the original drawing. You can even add to the original image, and those new points will generate new segments and thus a new tube.
I did this via useContext: I update the segment values in context, and an onFrame event on the canvas redraws the path from them.
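Stripped of React, the wiring is a shared store that the 3D scene writes into and the 2D canvas reads from on each frame. A framework-free sketch of that shape (in the real app, useContext plays this role):

```javascript
// Sketch: a tiny shared store. The physics side calls set() with fresh
// segment positions; the 2D panel subscribes and redraws when they change.
function createSegmentStore() {
  let segments = [];
  const listeners = new Set();
  return {
    set(next) { segments = next; listeners.forEach((fn) => fn(segments)); },
    get() { return segments; },
    subscribe(fn) { listeners.add(fn); return () => listeners.delete(fn); },
  };
}

const store = createSegmentStore();
let drawnCount = 0;
store.subscribe((segs) => { drawnCount = segs.length; }); // stand-in for "redraw path"
store.set([{ start: { x: 0, y: 0 }, end: { x: 1, y: 1 } }]);
console.log(drawnCount); // 1
```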
Adding this animated drawing element brought on additional complexity. Consider the following:
- The scale of the 3D model and the drawing are different.
- The camera position depends on the scale of the drawing: if you draw a small line (imagine a line with three segments), the camera zooms in accordingly.
- The rope's physics allow it to stretch.
How do we keep both the rope and the "backported" 2D drawing within the frame of their DOM elements?
To keep the 3D model in frame, we normalize its points around the model's center. Additionally, because the drag force is calculated from the mouse position, and the mouse can't leave the DOM element, force can only pull the rope as far as the element's edge – keeping the 3D model within view at all times.
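The normalization step is just recentering around the centroid, so a camera aimed at the origin always has the whole rope in front of it. A minimal sketch (2D for brevity; the function name is mine):

```javascript
// Sketch: shift all points so their centroid sits at the origin, keeping the
// model centered in the camera's view regardless of where physics pushed it.
function normalizeToCenter(points) {
  const cx = points.reduce((sum, p) => sum + p.x, 0) / points.length;
  const cy = points.reduce((sum, p) => sum + p.y, 0) / points.length;
  return points.map((p) => ({ x: p.x - cx, y: p.y - cy }));
}

const centered = normalizeToCenter([{ x: 2, y: 2 }, { x: 4, y: 4 }]);
console.log(centered); // [ { x: -1, y: -1 }, { x: 1, y: 1 } ]
```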
However, this isn't the case for the 2D frame. The 2D frame has to be resized, and there are two parts to this: initial resizing and subsequent resizing. This is what the initial resizing looks like slowed down; on click, the 3D model turns red (you need to "drag", or at least mouseDown on, the item to trigger the resize):
I could have skipped scaling on init and modified the drawing in place on each change, but since the onFrame event in the "drawing" panel already resizes the drawing every frame, it felt cleaner to just always scale it while that event is firing.
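The per-frame resize amounts to fitting the drawing's bounding box inside the panel. A sketch of that calculation (the padding factor and names are illustrative):

```javascript
// Sketch: uniform scale that fits the points' bounding box inside the 2D
// panel, with some padding. Run every frame, this keeps the redrawn path in
// view however the rope stretches.
function fitScale(points, panelWidth, panelHeight, padding = 0.9) {
  const xs = points.map((p) => p.x);
  const ys = points.map((p) => p.y);
  const width = Math.max(...xs) - Math.min(...xs);
  const height = Math.max(...ys) - Math.min(...ys);
  // Scale by the tighter dimension so neither axis overflows.
  return Math.min(panelWidth / width, panelHeight / height) * padding;
}

// A 10x5 drawing in a 100x100 panel: width is the limiting axis.
const s = fitScale([{ x: 0, y: 0 }, { x: 10, y: 5 }], 100, 100);
console.log(s); // 9
```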
That brings me to the final point: optimizing rendering. This app is obviously heavy on the GPU, with Three.js and the canvas library on the left both running at up to 60fps, so we want to avoid re-rendering when we don't need to. When the 3D model's position isn't changing, the logic in both onFrame events should stop firing – we should stop calculating the dimensions of the tube and redrawing the 2D path. I did this very simply: I turn both off after 10 seconds of inactivity, based on whether I've "grabbed" the tube or drawn on the left-hand side. Another way to do this:
- Run a function on each frame that checks whether the position has changed. I would have to loop through every segment and check for a meaningful change (based on some delta value); if there is one, re-render the tube and the 2D path.
The reason I didn't do that is that it could slow down the app. The frame rate on my iPhone 12 Pro Max is already lower than on desktop, and I think that phone is still on the higher end of performance, so I didn't want to add extra processing to the onFrame event. It makes more sense for each frame to do less work, even if that means rendering for a longer period of time.
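The inactivity approach I went with is cheap to express: interactions stamp a timestamp, and each frame checks whether we're still inside the active window before doing the expensive updates. A sketch (the clock is injected so it's easy to test; names are illustrative):

```javascript
// Sketch: a gate that lets onFrame work run only within a window after the
// last interaction. touch() is called on grab/draw events; shouldRender() is
// checked at the top of each onFrame callback.
function createActivityGate(timeoutMs, now = () => Date.now()) {
  let lastActivity = now();
  return {
    touch() { lastActivity = now(); },
    shouldRender() { return now() - lastActivity < timeoutMs; },
  };
}

let fakeTime = 0;
const gate = createActivityGate(10000, () => fakeTime); // 10s, as in the app
console.log(gate.shouldRender()); // true
fakeTime = 10001;
console.log(gate.shouldRender()); // false: idle past the window
gate.touch();
console.log(gate.shouldRender()); // true again after an interaction
```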
Going forward, I should really get back to finally recreating the Lesley Vance painting. She has a series of paintings in this style; it would be cool to make something interactive where you could toggle between the paintings while keeping the same dragging functionality.