MDL Compiler

The MDL compiler can generate target code for different backends, which can be accessed via the mi::neuraylib::IMdl_backend_api::get_backend() method.

Currently these backends are supported:

  • MB_NATIVE: The native (CPU) backend generating x86-64 code.
  • MB_CUDA_PTX: The CUDA PTX backend for GPU-based CUDA and OptiX renderers.
  • MB_LLVM_IR: The LLVM IR backend, which allows generating code for custom target platforms or gives more control over the output.
  • MB_GLSL: The GLSL backend for OpenGL renderers.

Please refer to the Tutorial and Example Programs section for details on how to use the backends.
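
The following is a minimal sketch of selecting a backend, assuming a started mi::neuraylib::INeuray instance named "neuray"; the option name used here is the one documented for the CUDA PTX backend:

    // Retrieve the backend API component and request the CUDA PTX backend.
    mi::base::Handle<mi::neuraylib::IMdl_backend_api> backend_api(
        neuray->get_api_component<mi::neuraylib::IMdl_backend_api>());
    mi::base::Handle<mi::neuraylib::IMdl_backend> be_cuda_ptx(
        backend_api->get_backend(mi::neuraylib::IMdl_backend_api::MB_CUDA_PTX));

    // Backend options can be set before generating code; option names and
    // values are backend-specific (here: number of texture result slots).
    be_cuda_ptx->set_option("num_texture_results", "16");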

Instance-compilation and class-compilation

There are two different compilation modes to accommodate different needs: instance compilation and class compilation. In instance compilation mode (the default), all arguments, i.e., constant expressions and call expressions, are compiled into the body of the resulting material. Hence, the resulting mi::neuraylib::ICompiled_material is parameterless. This mode offers the most optimization opportunities. As a downside, even the smallest argument change requires a recompilation to target code.
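
As a sketch, assuming a material instance handle "material_instance" (of type mi::neuraylib::IMaterial_instance) and an execution context "context", instance compilation is requested with the default flags:

    // Instance compilation (the default): all arguments are folded into the body,
    // so the resulting compiled material has no parameters.
    mi::base::Handle<const mi::neuraylib::ICompiled_material> cm_instance(
        material_instance->create_compiled_material(
            mi::neuraylib::IMaterial_instance::DEFAULT_OPTIONS, context.get()));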

In class compilation mode, only the call expressions, i.e., the structure of the graph created by call expressions, are compiled into the body of the resulting material. The constant literals (mi::neuraylib::IExpression_constant) remain as arguments of the compiled material. Note that the parameters of a compiled material do not, in general, correspond to the parameters of the original material:

  • Unused parameters will not make it into the compiled material.
  • The order is arbitrary.
  • Function calls will become part of the material. Hence, if you instantiate a material parameter "int x" with a call to "math::max(1, 3)" (prototype "int math::max(int a, int b)"), there will be no parameter with name "x", but the two constants used in the call will be turned into the parameters "x.a" and "x.b". The constants "1" and "3" will become the default arguments of the parameters "x.a" and "x.b", respectively. The parameter names are generated by walking the path from the material parameters to the literals and joining all parameter names visited along the way with ".".
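
The following sketch compiles the same hypothetical "material_instance" in class compilation mode and lists the resulting parameter names; for the "math::max(1, 3)" example above this would print "x.a" and "x.b":

    // Class compilation: constant literals remain parameters of the compiled material.
    mi::base::Handle<const mi::neuraylib::ICompiled_material> cm_class(
        material_instance->create_compiled_material(
            mi::neuraylib::IMaterial_instance::CLASS_COMPILATION, context.get()));

    // Enumerate the generated parameters (names like "x.a", "x.b").
    for (mi::Size i = 0; i < cm_class->get_parameter_count(); ++i)
        printf("parameter %llu: %s\n",
            static_cast<unsigned long long>(i), cm_class->get_parameter_name(i));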

When an argument of a material parameter is changed, a new compilation of the material is required. If the structure of the newly compiled material is unchanged, it is not necessary to create new target code using the backends, as the code will also remain identical. Avoiding target code generation is beneficial, because it usually takes up the vast majority of compilation time. To recognize that the structure has not changed, compare the hash values of the compiled materials obtained via mi::neuraylib::ICompiled_material::get_hash(). Note that you can also check parts of the compiled material, e.g., the expression that computes the displacement, by using mi::neuraylib::ICompiled_material::get_slot_hash().
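
A sketch of this comparison, assuming the class-compiled material "cm_class" from the sketch above and that the hash of the previously generated target code was stored in a variable "cached_hash":

    // Compare the structural hash of the newly compiled material with the cached one.
    mi::base::Uuid hash = cm_class->get_hash();
    if (hash == cached_hash) {
        // Same structure: reuse the existing target code and only update the
        // argument block with the new argument values (see next section).
    } else {
        // Structure changed: new target code must be generated by the backend.
    }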

Modifying arguments of a class-compiled material

If a class-compiled material still has parameters after the compilation, the target-specific data of the arguments is available as a mi::neuraylib::ITarget_argument_block object returned by mi::neuraylib::ITarget_code::get_argument_block(). Depending on whether the execute_* methods of mi::neuraylib::ITarget_code are used or the generated code is called directly, either a mi::neuraylib::ITarget_argument_block object or its data must be provided. To get a modifiable block, you can either call mi::neuraylib::ITarget_argument_block::clone() or create a new one from a mi::neuraylib::ICompiled_material object with a matching hash via mi::neuraylib::ITarget_code::create_argument_block().
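
A sketch of obtaining a modifiable block, assuming a target code object "target_code" with at least one argument block, a newly class-compiled material "cm_new" with a matching hash, and a hypothetical mi::neuraylib::ITarget_resource_callback implementation "resource_callback":

    // Clone the read-only block created together with the target code ...
    mi::base::Handle<const mi::neuraylib::ITarget_argument_block> block(
        target_code->get_argument_block(0));
    mi::base::Handle<mi::neuraylib::ITarget_argument_block> writable_block(
        block->clone());

    // ... or create a fresh block from a compatible compiled material.
    mi::base::Handle<mi::neuraylib::ITarget_argument_block> new_block(
        target_code->create_argument_block(0, cm_new.get(), resource_callback.get()));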

For direct access to the data obtained by mi::neuraylib::ITarget_argument_block::get_data(), the layout of the data can be retrieved as a mi::neuraylib::ITarget_value_layout object from mi::neuraylib::ITarget_code::get_argument_block_layout(). Using the parameter name and argument type information from the mi::neuraylib::ICompiled_material object corresponding to the layout, the layout can be navigated with mi::neuraylib::ITarget_value_layout::get_nested_state() and the offset, kind, and size of an argument or sub-element can be retrieved via mi::neuraylib::ITarget_value_layout::get_layout().

So, if you wanted to get the offset of "x.b" in the above example case, you would search for the index for which mi::neuraylib::ICompiled_material::get_parameter_name() returns "x.b" and use this index to get the nested layout state for it. Providing this state to mi::neuraylib::ITarget_value_layout::get_layout() would then result in the offset of this argument within the target argument block data.
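
A sketch of this lookup, assuming "cm_class", "target_code", and a writable block "writable_block" as above, and assuming "x.b" is an int parameter:

    mi::base::Handle<const mi::neuraylib::ITarget_value_layout> layout(
        target_code->get_argument_block_layout(0));

    // Find the parameter index of "x.b" in the compiled material.
    mi::Size param_index = 0;
    for (; param_index < cm_class->get_parameter_count(); ++param_index)
        if (strcmp(cm_class->get_parameter_name(param_index), "x.b") == 0)
            break;

    // Navigate to the nested layout state of that parameter and query its layout.
    mi::neuraylib::Target_value_layout_state state(
        layout->get_nested_state(param_index));
    mi::neuraylib::IValue::Kind kind;
    mi::Size size;
    mi::Size offset = layout->get_layout(kind, size, state);

    // Write a new value for "x.b" directly into the argument block data.
    if (kind == mi::neuraylib::IValue::VK_INT)
        *reinterpret_cast<int*>(writable_block->get_data() + offset) = 42;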

Note
Target argument blocks and layouts are currently not used for the GLSL backend.

Link units

Sometimes it is necessary to compile multiple parts of one material or several materials. Doing this one by one results in a lot of duplicated code and data, because every compilation result is self-contained. Moreover, combining the generated code into one GLSL or CUDA kernel may fail because of multiple definitions of commonly used functions.

Link units (mi::neuraylib::ILink_unit) solve this problem: Instead of compiling MDL expressions separately, create a link unit via mi::neuraylib::IMdl_backend::create_link_unit() and add expressions to it via mi::neuraylib::ILink_unit::add_function(), mi::neuraylib::ILink_unit::add_material_expression(), and mi::neuraylib::ILink_unit::add_material_df(). When the link unit is compiled via mi::neuraylib::IMdl_backend::translate_link_unit(), commonly used code and data will only be translated once. The resulting mi::neuraylib::ITarget_code object will then contain the code and data of multiple functions in the order in which they were added to the link unit. Note that mi::neuraylib::ILink_unit::add_material_df() adds multiple functions at once.
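
A sketch of this workflow, assuming the backend "be_cuda_ptx", a transaction "transaction", an execution context "context", and the class-compiled material "cm_class" from the earlier sketches; the sub-expression paths and function names used here are only examples:

    mi::base::Handle<mi::neuraylib::ILink_unit> link_unit(
        be_cuda_ptx->create_link_unit(transaction.get(), context.get()));

    // Add a material sub-expression and a distribution function to the unit.
    link_unit->add_material_expression(
        cm_class.get(), "thin_walled", "mat_thin_walled", context.get());
    link_unit->add_material_df(
        cm_class.get(), "surface.scattering", "mat_surface_scattering", context.get());

    // Translate everything at once; shared code and data are emitted only once.
    mi::base::Handle<const mi::neuraylib::ITarget_code> target_code(
        be_cuda_ptx->translate_link_unit(link_unit.get(), context.get()));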

Since target argument blocks and their layouts are not generated for all functions, use mi::neuraylib::ITarget_code::get_callable_function_argument_block_index() to obtain the block/layout index for a particular function.
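
For example, a sketch of retrieving the argument block of one callable function (assuming "target_code" from the previous sketch; the invalid-index sentinel is assumed here to be ~0):

    mi::Size block_index =
        target_code->get_callable_function_argument_block_index(0);
    if (block_index != mi::Size(~0)) {
        mi::base::Handle<const mi::neuraylib::ITarget_argument_block> block(
            target_code->get_argument_block(block_index));
        // ... clone and modify the block as described above ...
    }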

CUDA PTX Backend Specific Topics