Which Pipeline Should I Use?
Driverless AI provides a Python Scoring Pipeline, an MLI Standalone Scoring Pipeline, and a MOJO Scoring Pipeline. Consider the following when deciding which scoring pipeline to use.
For all pipelines, the higher the accuracy of the experiment, the slower the scoring.
The Python Scoring Pipeline is slower but easier to use than the MOJO Scoring Pipeline.
When running the Python Scoring Pipeline:
HTTP is simple and supported by virtually any language: RESTful calls can be made with curl, wget, or standard packages in most scripting languages (a minimal request is sketched below).
TCP is a bit more complex but faster. It also requires Thrift, which currently does not handle NAs (a client sketch follows the HTTP example below).
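For example, a scoring request over HTTP might look like the following. This is only a sketch: the port, the /rpc endpoint, the JSON-RPC-style envelope, and the feature names are assumptions, so check the example scripts shipped with your scoring pipeline for the exact request format.

```python
# Hypothetical HTTP scoring request against the Python Scoring Pipeline server.
# Port 9090, the /rpc endpoint, the "score" method, and the feature names are
# assumptions -- see the examples bundled with your pipeline for the real format.
import requests

payload = {
    "id": 1,
    "method": "score",
    "params": {
        "row": {"AGE": "68", "PAY_0": "0", "BILL_AMT1": "3000"},  # placeholder features
    },
}

response = requests.post("http://localhost:9090/rpc", json=payload)
print(response.json())
```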
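A TCP client follows the standard Apache Thrift pattern. In the sketch below, the generated module and the ScoringService/Row names are placeholders; the real names come from the Thrift-generated code bundled with the downloaded pipeline.

```python
# Rough Thrift TCP client sketch. The generated package ("scoring") and the
# ScoringService/Row names are placeholders for the code generated for your
# experiment; only the thrift transport/protocol calls are standard Apache Thrift.
import sys
sys.path.append("gen-py")  # directory containing the Thrift-generated Python code

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from scoring.ScoringService import Client  # placeholder generated service
from scoring.ttypes import Row             # placeholder generated struct

socket = TSocket.TSocket("localhost", 9090)
transport = TTransport.TBufferedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Client(protocol)

transport.open()
row = Row(AGE=68, PAY_0=0, BILL_AMT1=3000)  # placeholder feature names
score = client.score(row)                   # placeholder service method
print(score)
transport.close()
```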
The MOJO Scoring Pipeline is flexible and faster than the Python Scoring Pipeline, but it requires a bit more coding. It is available as either a Java runtime or a C++ runtime (the latter with Python and R wrappers).
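As a rough illustration of the C++ runtime's Python wrapper, the sketch below assumes a daimojo package with a model/predict interface and datatable input; the exact API may differ, so consult the MOJO runtime documentation that accompanies the download.

```python
# Hedged sketch of scoring with the C++ MOJO runtime's Python wrapper.
# The daimojo package, its model()/predict() API, and the datatable-based
# input are assumptions; see the MOJO runtime docs for the exact interface.
import datatable as dt
import daimojo.model

m = daimojo.model("./pipeline.mojo")            # load the exported MOJO
frame = dt.fread("new_data.csv",
                 na_strings=m.missing_values)   # honor the model's NA tokens
preds = m.predict(frame)                        # returns a frame of predictions
print(preds)
```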
The MLI Standalone Python Scoring Pipeline can be used to score interpreted models but only supports k-LIME reason codes.
For obtaining k-LIME reason codes from an MLI experiment, use the MLI Standalone Python Scoring Pipeline. k-LIME reason codes are available for all models.
For obtaining Shapley reason codes from an MLI experiment, use the DAI Standalone Python Scoring Pipeline. Shapley reason codes are only available for XGBoost and LightGBM models. Note that obtaining Shapley reason codes through the Python Scoring Pipeline can be time-consuming.
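As a hedged sketch, Shapley contributions can be requested from the experiment-specific scoring package roughly as follows; the module name, the Scorer usage, and the pred_contribs argument are assumptions, so check the example scripts bundled with the downloaded pipeline for the exact API.

```python
# Hedged sketch of requesting Shapley contributions from the DAI Python
# Scoring Pipeline. The experiment-specific module name and the
# pred_contribs argument are assumptions; the example scripts bundled
# with the scoring-pipeline download show the exact names.
import pandas as pd
from scoring_h2oai_experiment.scorer import Scorer  # placeholder module name

scorer = Scorer()
frame = pd.DataFrame({"AGE": [68], "PAY_0": [0], "BILL_AMT1": [3000]})  # placeholder features

# Per-feature Shapley contributions (XGBoost/LightGBM models only);
# this can be slow on large frames.
contribs = scorer.score_batch(frame, pred_contribs=True)
print(contribs)
```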