If you write any code for Maya, it pays to know how that code performs under various conditions. Will it work well in large scenes? With a deep DAG hierarchy? In a scene that uses many node types? It’s hard to know unless you test and benchmark it.
This is why Character Rigging Artist and Technical Director Christopher Crouzet created a tool that can help you benchmark code for Maya. It’s called Revl. Following sets of user-defined commands, Revl can pseudo-randomly generate Maya scenes with different properties, so you can observe how a piece of code behaves in different situations.
It’s this pseudo-randomness that can provide insight into potential bugs by exposing edge cases you might not have thought of, which also makes Revl a handy tool for unit testing. Among other things, Revl lets you:
- generate scenes by running commands a given total number of times.
- have fine control over the probability distribution of each command.
- reproduce a scene generation exactly by reusing a fixed seed.
- extend it with custom commands.
- use it for fuzz testing.
- rely on it being fast (it uses Maya’s API, not the command layer).
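Revl’s actual API is documented on its GitHub page, but the core idea behind the list above — picking weighted commands pseudo-randomly from a fixed seed so every run is reproducible — can be sketched in plain Python. The command names, weights, and the `run` helper below are illustrative stand-ins, not Revl’s real interface:

```python
import random

def run(commands, count, seed=None):
    """Pick and execute `count` commands, weighted by probability.

    `commands` is a list of (weight, function) pairs; passing the
    same `seed` makes the whole sequence of picks reproducible.
    """
    rng = random.Random(seed)
    weights = [weight for weight, _ in commands]
    functions = [function for _, function in commands]
    for function in rng.choices(functions, weights=weights, k=count):
        function()

# Illustrative stand-ins for scene-building commands.
log = []
commands = [
    (4.0, lambda: log.append("createTransform")),  # picked ~4x as often
    (1.0, lambda: log.append("createPrimitive")),
]

run(commands, count=10, seed=1.23)
first = list(log)

log.clear()
run(commands, count=10, seed=1.23)  # same seed, same "scene"
assert log == first
```

Because the seed fully determines the command sequence, a scene that triggers a bug can be regenerated on demand — the property that makes this approach useful for fuzzing and unit tests.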
Note that Revl does not provide any sort of profiling tool to measure performance. Python’s built-in timeit module, as well as other open-source packages, can be used for this purpose.
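For example, the timeit module can time a snippet in isolation. The function below is a hypothetical stand-in for whatever code you would actually run against a generated Maya scene:

```python
import timeit

def build_names(count):
    # Stand-in for the Maya code you actually want to measure.
    return ["node%d" % i for i in range(count)]

# Run the snippet 100 times per trial, over 3 trials, and keep the
# best total; the minimum filters out interference from other processes.
best = min(timeit.repeat(lambda: build_names(1000), number=100, repeat=3))
print("best of 3 trials: %.4f s" % best)
```

Pairing a timing harness like this with seeded Revl scenes gives you repeatable numbers: the scene is identical on every run, so any change in timing comes from your code.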
Visit the GitHub page for more information on Revl, the tool that can help you benchmark code for Autodesk Maya.
Also, check out Alessandro Riberti’s amazing work in his reel.