• Andy Nicholas

Customising Houdini Nodes [Part 2]: Hooking into Houdini

Updated: Jan 20

Just to recap. We want a system that will allow us to customise built-in Houdini nodes such as the Mantra ROP, so we can add our own parameters and maybe modify some of the existing parameters too.

In the previous article, we discussed the two options we have for customising built-in Houdini nodes. The conclusion was to go with the simplest option, and add parameters on the fly when each new node is created.

This article is about finding a way of getting Houdini to call our Python code every time a node is created. There are a number of different ways of doing this, and they generally fall into one of three behaviours for how our "parameter patching" Python function is triggered:

  1. Our script is automatically called when a new node is created anywhere, whether it's by a script, or via the UI (e.g. Tab menu)

  2. Our script is automatically called but only when a new node is created via the Tab menu

  3. Our script requires manual invocation, for example from a Shelf tool, or by clicking on a menu item.

We'll look at each of these in turn and investigate the different ways that we might go about implementing them. I'll also comment on what the benefits are with each method.

But before we go through that, first, let's just briefly talk about $HOUDINI_PATH and what it lets us do.

The $HOUDINI_PATH environment variable

This is probably one of the most important and most useful environment variables that Houdini provides. It contains a list of paths, separated by a colon or semi-colon (depending on your operating system). Appending your own path to $HOUDINI_PATH is a little bit like telling Houdini to treat that directory as an extension of the main installation's "houdini" folder.
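To make the splitting rule concrete, here's a small self-contained sketch (the folder name is just an example). Note that Houdini also recognises a special "&" entry in $HOUDINI_PATH, which stands in for its default search paths:

```python
import os

def split_houdini_path(value):
    # Houdini splits on ":" (Linux/macOS) or ";" (Windows),
    # which matches Python's os.pathsep on each platform. The
    # special entry "&" stands for Houdini's own default paths.
    return [p for p in value.split(os.pathsep) if p]

example = os.pathsep.join(["/home/andy/houdini_custom", "&"])
print(split_houdini_path(example))
```

If you set $HOUDINI_PATH yourself, always keep the "&" entry in there, otherwise Houdini loses its default content.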

If you have a browse through the "houdini" folder in Houdini's installation, you'll see that it contains all sorts of things; OTLs, python scripts, etc. all of which are used to provide content and configure Houdini's behaviour and UI.

You'll notice that there's a directory called "otls" that (funnily enough!) contains OTLs, albeit with an "hda" extension. If you create a folder called "otls" inside your own directory that you added to $HOUDINI_PATH, then any OTL files you put inside will be automatically found by Houdini. This mechanism also applies to most of the other files and folders you'll find in here, so it's a very easy and accessible way to extend Houdini.

In fact, the Houdini preferences folder in your own user folder makes use of the exact same structure. If you're just testing things for yourself, it can be convenient to use this folder to try things out. Of course, when it comes to deploying to the rest of the team, you should use a shared folder on your filesystem to store your customisations, and use the $HOUDINI_PATH variable to tell Houdini about it.

One of the more useful folders that can be found underneath $HOUDINI_PATH locations is the "scripts" folder. It can contain a number of important scripts; "123.py", "456.py", and node-event scripts. If you place a folder called "python" inside the "scripts" folder, Houdini will treat it as if you added it to the Python path and will automatically find any Python modules inside.

The other folder of note that can be found in your $HOUDINI_PATH locations is the "python2.7libs" folder. (Note: older versions of Houdini use older versions of Python, so this may be called "python2.6libs", and it will likely be "python3.7libs" with the change to Python 3). If you place a python script called "pythonrc.py" inside this directory, Houdini will call it as soon as Python starts up. It will be called before a lot of Houdini has been initialised, so there are certain restrictions involved, but it's a good place to put any non-Houdini pipeline initialisation code.
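As an illustration, the kind of code that belongs in this early start-up script might do little more than extend the Python path. This is a sketch; the PIPELINE_ROOT variable and the function name are hypothetical pipeline conventions, not anything Houdini defines:

```python
import os
import sys

def add_pipeline_to_python_path(env_var="PIPELINE_ROOT"):
    # Prepend a pipeline root directory (if one is configured)
    # to the Python path. Nothing here touches the hou module,
    # so it's safe to run before Houdini has initialised.
    root = os.environ.get(env_var, "")
    if root and root not in sys.path:
        sys.path.insert(0, root)
    return root

add_pipeline_to_python_path()
```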

For more information on all of this, see the Houdini documentation.

Following Along?

In case you want to try all this out for yourself, in the discussion below I will be referring to a folder called "houdini_custom" that is referenced by the $HOUDINI_PATH environment variable. It can be called whatever you like though.

I'll also assume that we have some very simple Python pipeline code that's available for Houdini to find. If you want the code in this article to work, then you should create a module inside the scripts/python directory with a package structure like this:

scripts/python/pipeline/__init__.py
scripts/python/pipeline/houdini/__init__.py
scripts/python/pipeline/houdini/nodes.py

Inside nodes.py, you should add the following:

def add_pipeline_parameters(new_node):
    print "Adding parameters to: {0}".format(new_node.path())

It's just dummy code to show you that it gets triggered.

Now that we've covered that, let's look at how we can customise Houdini in each of the three ways we mentioned at the top of this article.

1. Automatic Triggering (by script or UI)

This first category seems to be the most comprehensive as it catches all occurrences of node creation and calls our script to add our customisations.

In most cases it's a good solution. However, there are some circumstances where it can be problematic. One example I ran into was that a certain third-party tree-generation system had some shelf tools. One of the shelf tools created a ROP node and it broke because it wasn't expecting the parameter layout to be modified. So there are some circumstances where having script-generated nodes being automatically patched isn't a great idea. Nevertheless, let's look at what options we have available to us.

Since we aren't creating our own HDAs, we can't use the internal HDA scripted events like "OnCreated". However, we can define these scripted events in external python files for individual node types. If we continue to use the Mantra ROP as our example, we can make a script to detect when a Mantra node is created by adding a Python file here:

houdini_custom/scripts/out/ifd_OnCreated.py

The "out" folder name is given by the Mantra node's node category; ROP. (Note that there's a slight inconsistency with the naming here, as Houdini usually refers to the ROP node category as "driver".) The name of the python file is given by the node type name and the event type. In this case the Mantra node type name is "ifd", and we want to detect the OnCreated HDA event. For more details on how to name these scripts and folders, you can read up in the Houdini documentation.

So what code would we write in this event script? All we need is something simple to call our pipeline code to do the work, like this:

from pipeline.houdini import nodes

nodes.add_pipeline_parameters(kwargs["node"])

If you've not come across it before, you may be wondering what "kwargs" is. The "kwargs" variable is a dictionary in global scope that Houdini gives us in various situations when we're dealing with callbacks. The contents of this dictionary depend on the situation. In this particular case, we would get the following in "kwargs" if we created a Mantra ROP at "/out/mantra1":

    {'node': <hou.RopNode of type ifd at /out/mantra1>, 
     'type': <hou.NodeType for Driver ifd>}

In other situations, like for example within a parameter callback, we would be given other data.

If you don't want to have to create a script event file for every node you want to customise, you can simply create a single python script here:

houdini_custom/scripts/OnCreated.py

This script will be called whenever any node of any type is created, so make sure that your code is reasonably performant to ensure that it doesn't cause any perceived lag to the user.
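As a rough sketch of what "reasonably performant" means here (the names are hypothetical, and the real filtering details come later in this series), the script can bail out before doing any real work:

```python
# Hypothetical global OnCreated body. Node types the pipeline
# doesn't care about cost little more than a set lookup.
_PATCHABLE_TYPE_NAMES = {"ifd", "geometry", "alembic"}

def on_created(node):
    if node.type().name() not in _PATCHABLE_TYPE_NAMES:
        return False
    # Deferred import, so unrelated node creation never pays
    # for loading the pipeline module.
    from pipeline.houdini import nodes
    nodes.add_pipeline_parameters(node)
    return True
```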

We'll go into the details of how to filter the node type and efficiently add parameters in the next part to this series.

2. Automatic triggering (via the Tab menu only)

To avoid the issue I mentioned earlier regarding problems with third party tools breaking, we can make sure that custom parameters are only added when the user creates new nodes via the Tab menu. For me, this is my preferred way of doing things.

The only downside encountered so far is that it presents a very minor inconvenience for TDs if they want to create a pipeline-patched node via scripting. It just means that they can't use Houdini's standard Node.createNode() method, and have to use a pipeline function to create new nodes instead. Once they know about it though, it's not an issue.
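Such a pipeline creation function can be a very thin wrapper. Here's a minimal sketch with stand-in names; nothing in it is a real pipeline API:

```python
def add_pipeline_parameters(node):
    # Stand-in for the real pipeline patching call.
    print("Adding parameters to: {0}".format(node.path()))

def create_pipeline_node(parent, node_type_name):
    # Create the node exactly as createNode() would, then apply
    # the same patching that the Tab menu hook performs.
    new_node = parent.createNode(node_type_name)
    add_pipeline_parameters(new_node)
    return new_node
```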

There are two ways we can hook into the Tab menu in Houdini. Let's look at both.

Method 1: Monkey Patching The "toolutils" Module

To my knowledge, without exception, every time you create a node via the Tab menu, Houdini invokes the following function from its "toolutils" module.

toolutils.genericTool(*args, **kwargs)

The function takes some arguments (which we don't need to worry about), and based on those, it creates and returns a new node. At that point, the node has been added to the scene and we're free to interrogate its type and do whatever we want to it.

To hook into it to run our own code, we use a method known as "Monkey Patching" which Wikipedia defines as:

A monkey patch is a way for a program to extend or modify supporting system software locally (affecting only the running instance of the program).

To do this, we follow these steps:

  1. Create our own toolutils.py file. We don't copy the existing one; we start empty. I find it's best to place this file in a separate location from our other code, and use it specifically for overriding code. For neatness, you could store it in a new folder under our "houdini_custom" folder, called something like "hou_py_override".

  2. Configure the environment so that when a script asks to import "toolutils" we make sure it finds ours instead of Houdini's. The easiest way to do that is to add the directory of our toolutils.py file to the front of the $PYTHONPATH environment variable.

  3. Inside our toolutils.py, we create our own "genericTool" function that will replace Houdini's built-in one.

  4. After our function declaration, we run "imp.load_source" to load Houdini's built-in "toolutils.py". This returns us a new module object and loads it into sys.modules.

  5. In the module that we just manually loaded, we replace the "genericTool" function with our new one, and we store the old one under a different name so we can still call it.
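Step 2 relies on nothing Houdini-specific, just Python's import search order. Here's a self-contained demonstration with a throwaway module name:

```python
import os
import sys
import tempfile

# Write a "shadow" module into a temporary directory, put that
# directory at the FRONT of the search path, then import it:
# the front-of-path copy is the one Python finds.
override_dir = tempfile.mkdtemp()
with open(os.path.join(override_dir, "shadowdemo.py"), "w") as f:
    f.write("VALUE = 'override'\n")

sys.path.insert(0, override_dir)
import shadowdemo

print(shadowdemo.VALUE)
```

Prepending to $PYTHONPATH before Houdini launches has exactly this effect on the "toolutils" name.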

Our custom "genericTool" function might look something like this:

import sys

def generic_tool_override(*args, **kwargs):
    from pipeline.houdini import nodes
    toolutils_module = sys.modules["toolutils"]
    new_node = toolutils_module.orig_genericTool(*args, **kwargs)
    if new_node is not None:
        nodes.add_pipeline_parameters(new_node)
    return new_node

Things to observe here:

  1. Notice how we're just passing all the function's arguments straight to the original built-in function (now renamed to "orig_genericTool"). We don't really care what the arguments are, we just want the output from that function, which is the new node.

  2. While not strictly necessary, we're doing a function-level local import of our pipeline module, as opposed to putting an import at global level in our module. Due to "closures", the function would keep a reference anyway, but I just prefer to be explicit with this sort of thing for clarity's sake.

  3. To keep things clean, we use "sys.modules" to obtain the built-in toolutils module as it avoids having to use the import mechanism. We can't do an "import toolutils" at global scope as it would give us import recursion. Doing a function-level local import would be okay, but the effect would be to retrieve the cached version from "sys.modules" anyway, so we may as well be explicit about it.

  4. We're calling the original "genericTool" function with a new name: "orig_genericTool". This is as a result of the monkey-patching mechanism which we'll look at next.

  5. At the end of this function, we return the new node that has been created. The net effect is that we completely mimic the original function's behaviour, but with the addition of a call to our pipeline code.
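The sys.modules behaviour that points 3 and 4 rely on can be demonstrated with any standard module:

```python
import sys
import json

# The entry in sys.modules is the very object that "import json"
# bound, so overwriting an attribute on one is instantly visible
# through the other. This is the mechanism our patch relies on.
assert sys.modules["json"] is json

orig_dumps = json.dumps
sys.modules["json"].dumps = lambda obj, **kwargs: "patched!"
print(json.dumps({"a": 1}))  # prints "patched!"

# Restore the original function afterwards.
json.dumps = orig_dumps
```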

Let's take a look at the monkey patching function itself. It's a fairly small function but it looks bigger below because I've commented each line to explain it step by step.

import os
import sys
import imp

def monkey_patch_built_in_function(module_name, function_name,
                                   new_function,
                                   store_orig_function_as=None):

    # We want to find a path in $PYTHONPATH which ends in this
    # string. We make sure we locate the one with
    # the correct python version.
    path_suffix = "houdini/python%d.%dlibs" % \
        (sys.version_info[0], sys.version_info[1])

    # Search from the last path in $PYTHONPATH forwards as
    # we're more likely to encounter a Houdini installation
    # directory earlier searching in this order.
    for cur_path in reversed(sys.path):
        # Construct the likely path to our module
        module_path = os.path.join(cur_path,
                                   module_name + ".py")

        # Check if it exists, and that the path ends 
        # with what we expect.
        if os.path.exists(module_path) \
                and cur_path.endswith(path_suffix):
            # Use "imp" to load the original module
            module = imp.load_source(module_name, module_path)
            # Get the original function
            orig_function = getattr(module, function_name)

            # Check if we want to store the original function
            if store_orig_function_as:
                # Assign it to the new module with the new name
                setattr(module, store_orig_function_as,
                        orig_function)
            # Overwrite the original function with the new one
            setattr(module, function_name, new_function)

            # Our work is done so exit the function
            return

    # Report that we can't find the module and print
    # some helpful information to aid with debugging
    print "Cannot find built-in: '{0}'".format(module_name)
    print "sys.path:\n" + "\n".join(sys.path)

And we call it like this, in the global scope of our toolutils.py:

monkey_patch_built_in_function(
    "toolutils",
    "genericTool",
    generic_tool_override,
    store_orig_function_as="orig_genericTool")

If you're only doing this to one module, then this monkey patching function can just go inside our custom "toolutils.py". If we were patching multiple modules, then it would be better to put the "monkey_patch_built_in_function" routine somewhere more central and import it.

If you follow all these steps, you should find that it will call the custom pipeline code every time you create a new node from the Tab menu.

Yep, this might seem a bit outside-the-box and you might even say it's a little "hacky", but it offers a low-maintenance solution and in my experience it works really well in production. I've literally had zero issues with it so far.

My only very small concern with this method so far is about forward compatibility with new versions of Houdini. A change in how Houdini's tool invocation system works could cause this to break, but a) I think it's unlikely as it's not changed for many many years, and b) I imagine I'll be able to work around any changes easily enough. If not, there's another way to achieve the same effect which we'll look at next.

Method 2: Replacing Supported Nodes With Tools In The Tab Menu

If the previous method of monkey patching seems a little unorthodox for your taste, here's another way that's a bit more conventional and explicit in terms of targeting precisely the nodes you want to override.

The method is simple. For each of the nodes we want to patch, we hide its Tab menu entry. Then we add a custom tool to Houdini that shows up in the Tab menu in its place which creates the node and then calls our pipeline code to update the parameters.

So how do we hide a node type from the Tab menu? Taking the Mantra node as an example, we can just do this:

ifd_node_type = hou.ropNodeTypeCategory().nodeType("ifd")
ifd_node_type.setHidden(True)

A good place to run this would be from our pipeline start-up code.

If you try to put this in your "python2.7libs/pythonrc.py" file, you'll find that it won't work. Houdini hasn't loaded everything yet at this stage, so remember that there are limitations with using this file, and generally you should keep it to Python infrastructure initialisation rather than anything Houdini specific. The solution is to use "scripts/123.py" instead, as everything should be initialised properly by then.

So how do we create custom tools for the Tab menu? We could do this manually, but it's a bit of a pain to keep updating it by hand. We could even make a build process to generate the shelf tool file for each release of our pipeline, but there's an easier way. We can do it dynamically at startup and automatically generate the Tab menu tools at the same time as we're hiding the entries for the nodes.

Here's the code that will do all of that for you.

import os
import hou

def get_temp_shelf_tool_file_path():
    temp_dir = hou.expandString("$HOUDINI_TEMP_DIR")
    return os.path.join(temp_dir, "pipeline.shelf")

def override_built_in_nodes(node_override_dict):
    shelf_file_path = get_temp_shelf_tool_file_path()

    # Remove the generated shelf file so we start from scratch
    if os.path.exists(shelf_file_path):
        os.remove(shelf_file_path)

    # Generate the tools
    for node_category, node_data_list \
            in node_override_dict.iteritems():

        for node_data in node_data_list:
            node_type_name, node_tab_path = node_data
            node_type = node_category.nodeType(node_type_name)
            if node_type is not None:
                print "Overriding: {0}".format(str(node_type))
                node_type.setHidden(True)
                create_tool_from_node_type(shelf_file_path,
                                           node_type,
                                           node_tab_path)
            else:
                print "Cannot find '{0}' inside {1}" \
                      "".format(node_type_name, node_category)

def create_tool_from_node_type(shelf_file_path, 
                               node_type, node_tab_path):
    node_category = node_type.category()
    name = "pipeline_{0}".format(node_type.description())
    label = node_type.description()
    script = "import drivertoolutils as dtu\n" \
             "from pipeline.houdini import nodes\n" \
             "new_node = dtu.genericTool(kwargs, '{0}')\n" \
             "nodes.add_pipeline_parameters(new_node)" \
             "".format(node_type.name())

    tool = hou.shelves.newTool(
        file_path=shelf_file_path,
        name=name,
        label=label,
        script=script,
        language=hou.scriptLanguage.Python,
        network_categories=[node_category],
        locations=[node_tab_path])
    return tool

_NODE_OVERRIDE_DICT = {hou.ropNodeTypeCategory(): 
                         [("ifd", "Render"), 
                          ("geometry", "Geometry")]}

override_built_in_nodes(_NODE_OVERRIDE_DICT)

You can add this code into the "123.py" file that lives in the scripts directory, or create a new file if it doesn't already exist. It will be run when Houdini starts, once it's fully initialised.

Let's take a quick look at what this script is doing.

Tool Storage

The script saves the auto-generated tools to a temporary shelf file called "pipeline.shelf" in $HOUDINI_TEMP_DIR. To prevent tool duplication, the "pipeline.shelf" file is deleted (if it exists) before creating the tools.

Writing it to $HOUDINI_TEMP_DIR instead of the "toolbar" directory under a $HOUDINI_PATH location means that Houdini won't automatically find and install the tools if you start Houdini outside of our setup. The auto-generated tools will only exist for a Houdini session where this script has been run. Generally speaking, that's a handy behaviour to have. It's as if we're emulating dynamic in-memory creation of tools that are thrown away at the end of the session.

It's always worth doing a quick check to see if things like this will work when running multiple Houdini sessions. Even though the tool file is deleted, it is immediately regenerated, so it shouldn't present a problem. On the off-chance that it does cause an issue (or if we want to play safe), we could resolve this by adding the Houdini session's process ID to the shelf tool's filename.
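If it ever came to that, the filename helper could be adapted along these lines. This is a sketch; it takes the temp directory as an argument so it can be shown outside Houdini, whereas the version above expands $HOUDINI_TEMP_DIR itself:

```python
import os

def get_temp_shelf_tool_file_path(temp_dir):
    # Embed the process ID so two concurrent Houdini sessions
    # can never delete or overwrite each other's shelf file.
    file_name = "pipeline_{0}.shelf".format(os.getpid())
    return os.path.join(temp_dir, file_name)
```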

Configuring The Script

The "override_built_in_nodes()" function is driven by the _NODE_OVERRIDE_DICT dictionary. The keys of the dictionary are node type categories. The values are lists of string pairs. The first string in each pair is the node type name to override in that node type category. The second string is the desired location of the override in the Tab menu. In this example, I've used the same Tab menu location as the original built-in HDA, so that everything appears unchanged.

In an ideal world, I'd prefer to interrogate the HDA that we're overriding and extract its Tab menu location and then place our tool in the same place. In practice, while it's possible to do this for some HDAs (e.g. the Mantra ROP), it's not possible to interrogate nodes like the Geometry ROP. This is because if you request the node's definition, it returns None:

>>> print hou.node("/out/geometry1").type().definition()
None

This happens when the node is a C++ compiled node. It unfortunately means that we're not able to retrieve the internal information about it, like the Tab menu location. For other HDAs it is possible, and I'll show you how to do that in another article.

For now though, I feel it's just better to set this manually in our dictionary, and it gives us some control over changing the location if we would like to.

Just as an example, if we wanted to add an override for the File SOP, the new dictionary would look like this (the hou.sopNodeTypeCategory() entry is the new content):

 _NODE_OVERRIDE_DICT = {hou.ropNodeTypeCategory(): 
                          [("ifd", "Render"), 
                           ("geometry", "Geometry")],
                        hou.sopNodeTypeCategory():
                          [("file", "Import")]}

That wraps up this section on hooking into the Tab menu. Let's move on to the last behaviour for script triggering.

3. Manual invocation (e.g. shelf tool or menu callback)

At the other end of the automation spectrum, we can simply give the user a manual way of running our customisation script to add the parameters. In some pipelines this may be a good way to go if you want users to still be able to create the original unpatched built-in nodes. They might not want to have every single node be "pipelined", particularly if you're sharing Houdini scene files with other companies.

To implement this, you could do any of the following:

  1. For each node type your pipeline supports, create a Shelf tool that will create one particular node type and add the parameters. They would use the appropriate Shelf tool to create a pipelined node instead of using the Tab menu.

  2. Create a single Shelf tool that knows how to patch any pipeline supported node type with the custom parameters. The user would select the node(s) to patch before clicking the tool.

  3. Add a custom menu item to the node's right click menu. You can do this by adding a custom OPmenu.xml file somewhere on your $HOUDINI_PATH.

Let's look at each of these in turn:

Method 1: Shelf Tool Per Node Type

Using the Mantra node again as an example, you could simply create a new shelf tool and add a script like this:

import hou
from pipeline.houdini import nodes

mantra_node = hou.node("/out").createNode("ifd")
nodes.add_pipeline_parameters(mantra_node)

I'm sure you've spotted an issue already; this will only create new nodes in "/out"!

That's not particularly helpful, but it is an issue with this sort of tool. So how might we work around this?

The most intuitive method for the artist would be for it to create the node in the Network View that's currently open. There are two potential problems with that:

  1. We need to make sure that the Network View is in the correct context (Driver/ROP context in this case)

  2. What happens if we have multiple Network Views open?

We'll write a function to help us with this:

import hou

def find_current_network_location(node_type_category=None):
    for tab in hou.ui.currentPaneTabs():
        if tab.type() == hou.paneTabType.NetworkEditor:
            if node_type_category is not None:
                tab_category = tab.pwd().childTypeCategory()
                if tab_category != node_type_category:
                    continue
            return tab.pwd()
    return None

This function loops over all the visible Network Editor panes and returns the location (represented as a node instance) of the first Network Editor it finds. If you specify a node type category, then it will only return a node if its child type matches the category we supplied. If it can't find a Network Editor at a valid location, it will return None.

To avoid duplication, let's put this into a Python module called "tools.py" under the pipeline.houdini package.

We can now change our original script to include this helper function and call it appropriately:

import hou
from pipeline.houdini import nodes, tools

parent = tools.find_current_network_location(
    hou.ropNodeTypeCategory())
if parent:
    mantra_node = parent.createNode("ifd")
    nodes.add_pipeline_parameters(mantra_node)
else:
    hou.ui.displayMessage("Could not find valid location\n"
        "to create Mantra node\n",
        title="Tool Error")

I've also added an error message in case it can't find a valid location.

Method 2: Single Shelf Tool To Patch Selected Nodes

This one's much easier to write as the nodes already exist. The script simply loops over the currently selected nodes, checks if the current node is supported by the pipeline, makes sure it hasn't already been patched by our tool, and then if it's passed those tests it updates the parameters.

import hou
from pipeline.houdini import nodes

for node in hou.selectedNodes():
    if nodes.is_pipeline_patchable_node(node) \
            and not nodes.is_patched(node):
        nodes.add_pipeline_parameters(node)

As you can see, we've needed to introduce two new helper functions from our fictional pipeline's "nodes" module:

nodes.is_pipeline_patchable_node(node)
nodes.is_patched(node)

The first function returns True if the pipeline supports patching the node with our custom parameters. The second function returns True if the node has already been patched.

The design of these functions will probably be dependent on your pipeline, but a simple implementation may look something like this:

import hou

_VALID_NODE_TYPES = {hou.ropNodeTypeCategory():
                         {"ifd", "geometry", "alembic"}}

def is_pipeline_patchable_node(node):
    category = node.type().category()
    node_set = _VALID_NODE_TYPES.get(category, set())
    return node.type().name() in node_set

def is_patched(node):
    result = node.parm("farm_submit_button") is not None
    return result

To detect if a node has already been patched, we just check to see if a parameter has already been added.

Method 3: Custom Menu Item In Node's Right Click Menu

This is what I would consider to be the tidiest and most integrated option of the three manual methods. The user can just right click on a node and select a custom option in the menu to add pipeline parameters.

To do this, we create a file called OPmenu.xml in our $HOUDINI_PATH location. The following shows how to add a single menu option after the "Save" submenu with a separator line before it:

<?xml version="1.0" encoding="UTF-8"?>
<menuDocument>
  <menu>
    <scriptItem id="pipeline.add_pipeline_parameters">
      <label>Add Pipeline Parameters</label>
      <insertAfter>opmenu.save_menu</insertAfter>
      <context>
        <expression><![CDATA[
from pipeline.houdini import nodes
node = kwargs["node"]
return nodes.is_pipeline_patchable_node(node) \
    and not nodes.is_patched(node)
]]></expression>
      </context>
      <scriptCode><![CDATA[
from pipeline.houdini import nodes
nodes.add_pipeline_parameters(kwargs["node"])
]]></scriptCode>
    </scriptItem>

    <separatorItem>
      <insertAfter>opmenu.save_menu</insertAfter>
      <context>
        <expression><![CDATA[
from pipeline.houdini import nodes
node = kwargs["node"]
return nodes.is_pipeline_patchable_node(node) \
    and not nodes.is_patched(node)
]]></expression>
      </context>
    </separatorItem>
  </menu>
</menuDocument>

It's not particularly easy to read here, so I'd recommend copy/pasting into an editor like Sublime to make it easier to view.

Some key points to note:

  • In case you're not familiar with XML, the "<![CDATA[" blocks just tell the XML parser to interpret the Python code as a block of text. It stops any of the characters used in the script from getting in the way and being misinterpreted as part of the XML structure.

  • Both our <scriptItem> and the <separatorItem> have a <context> section. This allows us to provide Python code inside the <expression> section which is run by Houdini to figure out if it should show our custom menu option or not.

  • Somewhat counter intuitively, by inserting both the custom menu option and then the separator after the "Save" submenu in that order, the separator will appear first.

Our "add_pipeline_parameters()" function is called when the menu option is triggered, and you would use it to process the node and add the appropriate parameters.


We've gone through three different ways of getting Houdini to call our script to customise the built-in nodes. Out of all of them, my preference is to hook into the Tab menu system to avoid any problems with automation. I've been using the monkey-patching solution in production for around a year now across multiple projects with no problems.

I came up with the tool-replacement method for hooking into the Tab menu while writing this article, and I may switch to that at some point if monkey patching ever becomes an issue.

If you prefer manual invocation of the UI customising callback, then my preference would be to use the node's context menu. The dynamic visibility of the menu option provides that extra bit of feedback to the user to indicate if the node is patchable or not. A shelf tool cannot provide that sort of feedback until after it's been clicked on.

So we've gone through parts 1 and 2 and still not touched on how we actually modify Houdini nodes! Let's address that in the next article.
