
5.3 – Node Editor – Geometry Nodes

So far we have covered two main modeling methodologies: the first has a technical-design character, since it assembles three-dimensional forms that can be traced back to primitives (meshes, curves, surfaces, etc.), while the second is more artistic and relies on the sculpting tools. There is, however, a third paradigm: parametric modeling. One of the main innovations introduced with the 2.9x releases of Blender, and then refined throughout the 3.x series, is the Geometry Nodes system, with which it is possible to create countless models, as well as entire realistic or abstract scenes, using the same node-editor logic already seen in action in the Shader Editor and Compositor windows. This topic is not essential in an introductory guide to 3D graphics, but since the web now offers many tutorials on scenes and models built with Geometry Nodes (some of them extremely complex), I thought it appropriate to include this paragraph in the guide.

Let’s start from the Blender Geometry Nodes workspace, which consists of three windows including, of course, the crucial node editor. Select any mesh, such as the default cube, and click the New button in the header of the node editor to create a new node group.

In the node editor, the two nodes Group Input and Group Output will appear, as shown in the following image.

The sockets of both nodes are aqua green, which indicates the transmission of information relating to the geometry of our mesh. The purpose of any combination of nodes is to transform an initial geometry into a different final geometry, which may involve significant changes in the number of vertices and faces, distributed in space in a more or less complex way. The final geometry can also be a combination of different meshes, both primitives and previously modeled objects. As already seen for other Blender tools, in this case (or perhaps it is better to say especially in this case) the subject is so vast (and not fully explored even by the most expert users) that it would deserve a book of its own. I will therefore not give a complete overview of the available nodes (more than one hundred) but will show some typical examples, also because many nodes are completely analogous to those used in other contexts (for example, the nodes for numeric, color, and vector operations). For this reason it will be easier to experiment with geometry nodes after gaining experience with the logic of the shader editor and/or compositor, topics covered in the previous paragraphs and chapters, which have a higher priority when learning the software.

In the geometry nodes we also find yellow sockets (color data), gray sockets (numeric parameters) and purple sockets (vector data), plus other colors that it is not necessary to list for now. In the Geometry Nodes workspace, the Properties panel appears set to the Modifier context, and this tells us that whatever combination of nodes we define will not be destructive, since it acts on the mesh like a normal modifier. Let’s start with a simple plane and begin to modify its geometry with the Subdivide node (Mesh node family), placed between the Group Input and Group Output nodes.
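For readers who are comfortable with scripting, the same setup can also be reproduced with Blender's Python API. Below is a minimal sketch written against the 3.x-style API (in 4.x the group sockets are created with node_group.interface.new_socket instead of inputs/outputs); the group name GN_Subdivide is arbitrary, and the plane is assumed to be the active object.

```python
import bpy

plane = bpy.context.active_object                      # assumes the plane is selected
mod = plane.modifiers.new("GeometryNodes", type='NODES')

# Create an empty geometry node tree and assign it to the modifier
ng = bpy.data.node_groups.new("GN_Subdivide", 'GeometryNodeTree')
mod.node_group = ng

# Declare one geometry input and one geometry output for the group
# (Blender 3.x API; in 4.x use ng.interface.new_socket(...))
ng.inputs.new('NodeSocketGeometry', "Geometry")
ng.outputs.new('NodeSocketGeometry', "Geometry")

group_in = ng.nodes.new('NodeGroupInput')
group_out = ng.nodes.new('NodeGroupOutput')

# Subdivide node placed between Group Input and Group Output
subdiv = ng.nodes.new('GeometryNodeSubdivideMesh')
ng.links.new(group_in.outputs['Geometry'], subdiv.inputs['Mesh'])
ng.links.new(subdiv.outputs['Mesh'], group_out.inputs['Geometry'])

# Optional: show the wireframe overlay so the node-based subdivision
# is visible (same as the Viewport Display > Wireframe checkbox
# discussed just below)
plane.show_wire = True
plane.show_all_edges = True
```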

You will not see the effect of this node unless you activate the Wireframe option in the Object properties, Viewport Display panel. If you then switch to Edit Mode, you can see that the plane’s actual mesh still has only four vertices. The Subdivide node has a green socket (not gray) to control the subdivision level, even though this is an integer numeric parameter. Now let’s insert a simple cube into the scene (it doesn’t matter where), possibly making it invisible, and then introduce two new nodes, Instance on Points and Object Info, exactly as in the figure below.

In Object Info we select our cube. What exactly happens with this combination of nodes? The plane is made up of four vertices, that is, simply four points in 3D space. With Subdivide we can increase this number dramatically, so quickly that it is convenient to stop at three subdivisions (81 vertices). The Instance on Points node places a copy of the chosen cube on each of these points/vertices. However, the cube may be large enough that all the copies partially overlap, which is why in the Instance on Points node I have set a scale factor of 0.05 for all three dimensions. The result is visible in the following image.
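Continuing the Python sketch started above, the Object Info / Instance on Points combination might be added like this; the object name "Cube", the three subdivision levels, and the 0.05 scale simply mirror the example.

```python
# Three subdivision levels: the plane goes from 4 to 81 vertices
subdiv.inputs['Level'].default_value = 3

# Object Info points at the cube that will be instanced
obj_info = ng.nodes.new('GeometryNodeObjectInfo')
obj_info.inputs['Object'].default_value = bpy.data.objects["Cube"]

# Instance the cube on every vertex of the subdivided plane
instance = ng.nodes.new('GeometryNodeInstanceOnPoints')
instance.inputs['Scale'].default_value = (0.05, 0.05, 0.05)

ng.links.new(subdiv.outputs['Mesh'], instance.inputs['Points'])
ng.links.new(obj_info.outputs['Geometry'], instance.inputs['Instance'])
# This link replaces the previous Subdivide -> Group Output connection
ng.links.new(instance.outputs['Instances'], group_out.inputs['Geometry'])
```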

Obviously this image is not particularly interesting, since the same result can easily be obtained with the Array modifier. Let’s add the Random Value node (Utilities family) in Vector mode, connecting its output to the Rotation socket of the Instance on Points node. Here we find Min and Max values for the x, y, and z components of the rotation (local space). Be careful: these values are not the usual degrees but radians. One radian equals 57.2958°, and π radians (about 3.1416) correspond to 180°. As you can see in the image, I set the minimum values to 0, while for the maximum values only the third component (corresponding to z) is set to 1 radian.

The effect is that of a random rotation of the cubes around their own z axis (local) at an angle between 0 and 1 radian.
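In script form, the random rotation just described could be wired as follows. The numeric socket indices on the Random Value node (0 and 1 for the vector Min/Max inputs, 0 for the vector output) are an assumption based on the 3.x node layout.

```python
import math

rand_rot = ng.nodes.new('FunctionNodeRandomValue')
rand_rot.data_type = 'FLOAT_VECTOR'

# Vector Min / Max inputs (radians, not degrees): z between 0 and 1 rad
rand_rot.inputs[0].default_value = (0.0, 0.0, 0.0)
rand_rot.inputs[1].default_value = (0.0, 0.0, 1.0)       # 1 rad ≈ 57.3°
# math.radians() helps if you prefer to think in degrees, e.g.:
# rand_rot.inputs[1].default_value = (0.0, 0.0, math.radians(57.3))

# Vector output (index 0) -> Rotation input of Instance on Points
ng.links.new(rand_rot.outputs[0], instance.inputs['Rotation'])
```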

Now let’s try to randomly vary the position and size of the cubes as well. For the size, the process is identical to what we have already seen, since the Instance on Points node has a Scale input socket in addition to the Rotation socket (we will ignore for now the fact that these sockets are diamond-shaped), although in this case we can use a non-vector random value of type Float. For the translations we will use the Translate Instances node (Instances family), unchecking the Local Space box as shown in the figure.
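Still as a sketch on top of the previous listing, a random scale and a random world-space translation might look like this; the float Min/Max socket indices (2 and 3) and the float output index 1 are again assumptions about the node layout, and the numeric ranges are arbitrary example values.

```python
# Random uniform scale per instance (Float mode)
rand_scale = ng.nodes.new('FunctionNodeRandomValue')
rand_scale.data_type = 'FLOAT'
rand_scale.inputs[2].default_value = 0.03     # Min (float)
rand_scale.inputs[3].default_value = 0.08     # Max (float)
ng.links.new(rand_scale.outputs[1], instance.inputs['Scale'])

# Random translation in world space (Local Space unchecked)
rand_move = ng.nodes.new('FunctionNodeRandomValue')
rand_move.data_type = 'FLOAT_VECTOR'
rand_move.inputs[0].default_value = (-0.1, -0.1, -0.1)
rand_move.inputs[1].default_value = (0.1, 0.1, 0.1)

translate = ng.nodes.new('GeometryNodeTranslateInstances')
translate.inputs['Local Space'].default_value = False
ng.links.new(instance.outputs['Instances'], translate.inputs['Instances'])
ng.links.new(rand_move.outputs[0], translate.inputs['Translation'])
ng.links.new(translate.outputs['Instances'], group_out.inputs['Geometry'])
```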

What observations can we draw from the examples just seen, and in particular from the last one? The first is that with just a few nodes the trivial starting geometry of the mesh has been completely transformed, becoming far more complex. The second and last observation concerns the use of the Instance on Points node. The ability to define a set of points in space (whether these are the vertices of a mesh or a distribution of points derived from it) and then duplicate and modify a different model on them (the one indicated in Object Info) makes the geometry nodes system an alternative to particle systems. Except for grass or hair simulation, for which particle systems offer advanced tools optimized for that purpose, it is not advisable to use them for scattering arbitrary meshes on 3D surfaces (think of rocks on flat ground); in those cases we will resort to geometry nodes. Particle systems remain the better choice when the particles have to be animated in the context of a physical simulation (Chapter 7).

Let’s now see how to create the scattering of one or more objects on a surface with the geometry nodes system. In the example we will use only elementary primitives; online there are dozens of tutorials on systems of the most varied nature (from gummy candies to entire procedural cities) that follow the same logic. Let’s start from a simple UV sphere, to which we will apply the following nodes.

First of all, we notice the presence of the Distribute Points on Faces node (Point family), which creates a random distribution of points on the surface of the mesh.

This node has a Density socket with which we can vary the density (and therefore the number) of generated points. As you can see, I have connected this value to the initially empty socket of the Group Input node. The reason for this connection becomes clear by looking at the Modifier properties context of the sphere: a slider now appears there, with which the parameter can be conveniently adjusted without going through the node editor.
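For the sphere example, the tree can be built in the same way. The fragment below is a sketch that assumes a fresh node group ng created for the UV sphere exactly as in the first listing (with group_in and group_out already added); it also shows what exposing Density means in script terms: adding a float socket to the group interface makes it appear both on the Group Input node and as a slider in the modifier panel.

```python
# Extra float socket on the group interface -> visible in the modifier
# (Blender 3.x API; in 4.x use ng.interface.new_socket(...))
density_socket = ng.inputs.new('NodeSocketFloat', "Density")
density_socket.default_value = 10.0

distribute = ng.nodes.new('GeometryNodeDistributePointsOnFaces')
obj_info = ng.nodes.new('GeometryNodeObjectInfo')
obj_info.inputs['Object'].default_value = bpy.data.objects["Cube"]

instance = ng.nodes.new('GeometryNodeInstanceOnPoints')
instance.inputs['Scale'].default_value = (0.05, 0.05, 0.05)

# Sphere -> random points -> cube instances
ng.links.new(group_in.outputs['Geometry'], distribute.inputs['Mesh'])
ng.links.new(group_in.outputs['Density'], distribute.inputs['Density'])
ng.links.new(distribute.outputs['Points'], instance.inputs['Points'])
ng.links.new(obj_info.outputs['Geometry'], instance.inputs['Instance'])
ng.links.new(instance.outputs['Instances'], group_out.inputs['Geometry'])
```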

At this point, let’s try to join the cube distribution just obtained to the surface from which it derives. We can do this with the Join Geometry node, which allows us to combine the starting surface (through the Geometry socket of the Group Input node) with what is generated by the rest of the node chain connected through the green sockets.
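Join Geometry has a multi-input socket, so in a script both branches are simply linked to the same input; this continues the previous fragment.

```python
join = ng.nodes.new('GeometryNodeJoinGeometry')

# Both the original sphere and the instanced cubes feed the same
# multi-input Geometry socket
ng.links.new(group_in.outputs['Geometry'], join.inputs['Geometry'])
ng.links.new(instance.outputs['Instances'], join.inputs['Geometry'])
ng.links.new(join.outputs['Geometry'], group_out.inputs['Geometry'])
```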

This is therefore the way to randomly distribute any mesh on the surface of another model. In this example we immediately notice two things. The first is that the cubes are oriented consistently with the generating surface, that is, along the normal direction passing through the points randomly generated on the sphere. This was obtained by connecting the Rotation output socket of the Distribute Points on Faces node to the input socket of the same name on the Instance on Points node. The second is that, since the cubes are placed with their geometric center on the points created by the Distribute Points on Faces node, they intersect the sphere. To control the distance from the surface (a distance obviously measured along the normal to the surface itself), simply insert the Translate Instances node between the Instance on Points and Join Geometry nodes, with the Local Space checkbox active, and then freely vary the Z value.
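In the sketch, the two adjustments just described amount to two extra links and one extra node; the 0.1 offset along the local Z axis is an arbitrary example value.

```python
# Align each instance with the surface normal at its point
ng.links.new(distribute.outputs['Rotation'], instance.inputs['Rotation'])

# Push the instances outward along their local Z (i.e. along the normal)
offset = ng.nodes.new('GeometryNodeTranslateInstances')
offset.inputs['Local Space'].default_value = True
offset.inputs['Translation'].default_value = (0.0, 0.0, 0.1)

# Remove the direct instance -> Join Geometry link created earlier,
# then route the instances through Translate Instances
for link in list(ng.links):
    if link.from_node == instance and link.to_node == join:
        ng.links.remove(link)
ng.links.new(instance.outputs['Instances'], offset.inputs['Instances'])
ng.links.new(offset.outputs['Instances'], join.inputs['Geometry'])
```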

Finally, let’s try adding an equal rotation to all the cubes, obviously around the normal axis (top image). Since the Rotation input socket of the Instance on Points node is already occupied, we insert the Rotate Instances node (with Local Space active) between the Translate Instances and Join Geometry nodes. At that point, as usual, we can use the Random Value node for random rotations.
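The Rotate Instances node is inserted in the same way in the script sketch. Here a fixed rotation around the local Z axis is used (45° is just an example value), but a Random Value node could be linked to its Rotation input exactly as before.

```python
import math

rotate = ng.nodes.new('GeometryNodeRotateInstances')
rotate.inputs['Local Space'].default_value = True
rotate.inputs['Rotation'].default_value = (0.0, 0.0, math.radians(45))  # example angle

# Insert it between Translate Instances and Join Geometry
for link in list(ng.links):
    if link.from_node == offset and link.to_node == join:
        ng.links.remove(link)
ng.links.new(offset.outputs['Instances'], rotate.inputs['Instances'])
ng.links.new(rotate.outputs['Instances'], join.inputs['Geometry'])
```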


