Scoring Pipelines Overview

Driverless AI provides Scoring Pipelines that can be deployed to production for experiments and/or interpreted models.

  • A standalone Python Scoring Pipeline is available for experiments and interpreted models.

  • A low-latency, standalone MOJO Scoring Pipeline is available for experiments, with both Java and C++ backends.

The Python Scoring Pipeline is implemented as a Python wheel (.whl) file. While this allows for a single-process scoring engine, the scoring service is typically deployed as a client/server architecture and supports both TCP and HTTP interfaces.
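As a rough illustration, the sketch below scores rows in-process after installing the experiment's wheel. The module name (scoring_h2oai_experiment_<key>), the Scorer class, and the score/score_batch methods are assumptions used for illustration; consult the example scripts shipped with your scoring package for the exact names.

```python
# Minimal in-process scoring sketch. The module and method names below are
# assumptions -- the wheel generated for your experiment defines the real ones.
import pandas as pd

# Hypothetical experiment-specific module installed from the .whl file.
from scoring_h2oai_experiment_abc123 import Scorer  # assumed module/class name

scorer = Scorer()  # loads the pipeline exported with the experiment

# Score a single row (feature values in the training column order).
row_prediction = scorer.score(["5.1", "3.5", "1.4", "0.2"])

# Score a batch of rows from a pandas DataFrame.
frame = pd.read_csv("new_data.csv")
batch_predictions = scorer.score_batch(frame)
print(batch_predictions)
```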

The MOJO (Model Object, Optimized) Scoring Pipeline provides a standalone scoring pipeline that converts experiments to MOJOs, which can be scored in real time. The MOJO Scoring Pipeline is available as either a Java runtime or a C++ runtime. For the C++ runtime, both Python and R wrappers are provided.
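For example, the C++ runtime can be called from Python through its wrapper. The sketch below assumes the daimojo package as the Python wrapper and a datatable frame as input; verify the exact package, file, and method names against the wrapper shipped with your Driverless AI version.

```python
# Sketch of scoring a MOJO with the C++ runtime's Python wrapper.
# Assumes the daimojo package and datatable are installed; names may differ
# between Driverless AI versions.
import datatable as dt
import daimojo.model

# Load the MOJO exported from the experiment.
m = daimojo.model("./pipeline.mojo")
print(m.feature_names)   # input columns expected by the pipeline
print(m.output_names)    # prediction columns produced by the pipeline

# Score a batch of rows read into a datatable Frame.
frame = dt.fread("example.csv")
predictions = m.predict(frame)
print(predictions)
```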

Examples are included with each scoring package.

For information on how to deploy a MOJO Scoring Pipeline, refer to the Deploying the MOJO Pipeline to production section. Note that MOJOs are tied to the experiment that generated them. To control the size of a MOJO, see the Reduce MOJO Size expert setting.