# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## [v0.2.1b1] - 2024-03-11

### Added
- Ability to set the number of CPU cores used when scoring Pandas data frames. For more information, see Scoring Pandas Data Frames.
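
A minimal sketch of how the new option might look in use. The module path, the `score_data_frame` helper, and the `cpu_count` argument below are illustrative assumptions, not the client's confirmed API; see Scoring Pandas Data Frames for the actual signature.

```python
import pandas as pd

# Hypothetical module and function names, shown only to illustrate the
# new CPU-core option; consult the linked guide for the real API.
import h2o_mlops_scoring_client as scoring_client  # assumed package name

df = pd.DataFrame({"id": ["a", "b"], "feature_1": [1.0, 2.0]})

scored = scoring_client.score_data_frame(  # hypothetical helper
    data_frame=df,
    id_column="id",      # hypothetical parameter name
    cpu_count=4,         # illustrative name for the new CPU-core setting
)
```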

## [v0.2.0b1] - 2024-02-27

### Added
- Ability to request prediction intervals. For more information, see Request Prediction Intervals.

### Changed

- `pyspark` is now an optional dependency. It is no longer installed by default. For more information, see Install.
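
Because `pyspark` no longer ships by default, code that needs Spark can check for it before use. The guard below is a generic Python pattern, not the client's own logic.

```python
# Generic optional-dependency guard; illustrative only.
try:
    import pyspark
except ImportError:
    pyspark = None

if pyspark is None:
    print("pyspark is not installed; Spark-based scoring is unavailable.")
else:
    print(f"Using pyspark {pyspark.__version__}")
```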

## [0.1.5b1] - 2024-02-09

### Changed

- Warning instead of error if `SPARK_CONF_DIR` is set but `spark-defaults.conf` is not found.
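
The relaxed check described above roughly amounts to the following. This is an illustrative sketch, not the client's actual implementation.

```python
import os
import warnings

# If SPARK_CONF_DIR is set but spark-defaults.conf is missing,
# emit a warning instead of raising an error.
spark_conf_dir = os.environ.get("SPARK_CONF_DIR")
if spark_conf_dir:
    defaults_path = os.path.join(spark_conf_dir, "spark-defaults.conf")
    if not os.path.isfile(defaults_path):
        warnings.warn(f"SPARK_CONF_DIR is set but {defaults_path} was not found.")
```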

### Fixed
- Detection of Spark master.
- Scoring Pandas data frames in environments that don't allow multiprocessing.

## [0.1.4b1] - 2023-12-14

### Fixed

- NaNs in the `id_column` of the returned Pandas data frame.

## [0.1.3b1] - 2023-11-13

### Fixed

- ID column being coerced to float. This could result in errors such as `pyarrow.lib.ArrowTypeError: Expected a string or bytes dtype, got float64` (see the sketch after this list).
- Hang when getting schema for DAI Python scoring pipeline models with prediction intervals.
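
A generic pandas illustration of the coercion behind the first fix in this list; it is not the client's code, but shows how an ID column can silently become `float64` and how keeping it as a string avoids the Arrow type error.

```python
import pandas as pd

# Reindexing (or merging) that introduces missing rows upcasts an integer
# ID column to float64, which a string-typed Arrow schema then rejects.
df = pd.DataFrame({"id": [1, 2, 3], "x": [0.1, 0.2, 0.3]})
df = df.reindex([0, 1, 2, 3])  # the new row has no ID
print(df["id"].dtype)          # float64

# Keeping the ID column as strings preserves its type end to end.
df_str = pd.DataFrame({"id": ["1", "2", "3"], "x": [0.1, 0.2, 0.3]})
print(df_str.reindex([0, 1, 2, 3])["id"].dtype)  # object
```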

## [0.1.0b1] - 2023-08-29

### Added
- Ability to request SHAP contributions for original and transformed features.

### Fixed

- Use generators where possible for Pandas data frame scoring to improve memory usage (see the sketch after this list).
- Don't send unused columns for Pandas data frame scoring to minimize network request sizes.
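
The memory improvement noted in the first item above comes from streaming the frame in chunks instead of materializing all intermediate results at once. The chunking generator below is a generic sketch of that pattern, not the client's implementation.

```python
import pandas as pd

def iter_chunks(df: pd.DataFrame, chunk_size: int = 10_000):
    """Yield successive row slices so only one chunk is held at a time."""
    for start in range(0, len(df), chunk_size):
        yield df.iloc[start:start + chunk_size]

df = pd.DataFrame({"x": range(25_000)})
for chunk in iter_chunks(df):
    # Each chunk would be scored and its results flushed before the next
    # chunk is produced, keeping peak memory low.
    print(len(chunk))
```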

## [0.0.11b1] - 2023-08-24

### Changed
- Scoring a Pandas data frame no longer uses Spark.

## [0.0.10b1] - 2023-07-17

### Changed

- Nullable integer data types in Pandas data frames are converted to float before scoring. This addresses potential errors when missing values are present, such as `TypeError: Invalid value '' for dtype Int64`.
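
A generic pandas illustration of the conversion described above (not the client's code):

```python
import pandas as pd

# Nullable integer column containing a missing value.
df = pd.DataFrame({"x": pd.array([1, 2, None], dtype="Int64")})

# Converting to float represents the missing value as NaN, avoiding errors
# like: TypeError: Invalid value '' for dtype Int64
df["x"] = df["x"].astype("float64")
print(df["x"].dtype)         # float64
print(df["x"].isna().sum())  # 1
```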

## [0.0.9b1] - 2023-08-10

### Added
- Support for directly scoring Spark data frames.

### Changed
- Snowflake scoring no longer checks for password or warehouse options.
- Spark log level changed to "error" when scoring in local mode.

## [0.0.8b1] - 2023-05-30

### Fixed

- `SPARK_CONF_DIR` check when not running in local mode.

## [0.0.7b1] - 2023-05-22

### Added
- Support for Snowflake queries and tables.

## [0.0.6b1] - 2023-05-05

### Added

- Support for passphrase-protected endpoints.
- `__version__` attribute.
- Check for existence of `spark-defaults.conf` if a Spark configuration directory is specified.
- Unset `SPARK_HOME` if running in local mode (see the sketch after this list).
- Documentation site.
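
The local-mode behavior in the `SPARK_HOME` item above is essentially the following. This is an illustrative sketch, not the client's actual code.

```python
import os

# In local mode, drop SPARK_HOME so an external Spark installation does not
# override the bundled pyspark.
os.environ.pop("SPARK_HOME", None)
```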

## [0.0.5b1] - 2023-04-14

Initial public release.