We will demonstrate how to fit a linear regression model to the Star Wars dataset.
Subsequently, we will track the experiment and save the model to an MLflow server. We use the carrier package to serialize the model (that is, to write the in-memory model object to a file).
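As a minimal sketch of what carrier does, the snippet below crates a small linear model and round-trips it through serialization. The data frame here is a stand-in (the tutorial's `data` is presumably derived from the Star Wars dataset); the point is that a crate bundles the function together with the objects it references, so it can be saved and called in a fresh session.

```r
# Stand-in data with the same columns the tutorial uses.
data <- data.frame(height = c(172, 167, 96, 202), mass = c(77, 75, 32, 136))
sw_lm <- lm(height ~ mass, data = data)

# crate() captures sw_lm inside the function's environment.
packaged_sw_lm <- carrier::crate(
  function(x) stats::predict.lm(sw_lm, newdata = x),
  sw_lm = sw_lm
)
packaged_sw_lm(data.frame(mass = 100))

# A crate is an ordinary R object, so it serializes with saveRDS()/readRDS().
path <- tempfile(fileext = ".rds")
saveRDS(packaged_sw_lm, path)
restored <- readRDS(path)
restored(data.frame(mass = 100))
```

The restored crate produces the same predictions as the original, without needing the fitting code or data in scope.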
We will first create a new model in the model registry.
```r
client = mlflow_client()

# Delete any existing "sw_lm" model, ignoring the error raised if none exists.
tryCatch(
  expr = {
    mlflow_delete_registered_model("sw_lm", client = client)
  },
  error = function(x) {}
)

mlflow_create_registered_model(
  "sw_lm",
  client = client,
  description = "Perform predictions for Star Wars characters using linear regression."
)
```
```
$name
[1] "sw_lm"

$creation_timestamp
[1] 1.668548e+12

$last_updated_timestamp
[1] 1.668548e+12

$description
[1] "Perform predictions for Star Wars characters using linear regression."
```
We will next execute an MLflow run.
MLflow Run
```r
s3_bucket = "s3://mlflow/sw_lm"

# Begin the run.
experiment = mlflow_set_experiment(experiment_name = "sw_lm", artifact_location = s3_bucket)
run = mlflow_start_run(client = client)

# Save the model.
sw_lm = lm(height ~ mass, data = data)
packaged_sw_lm = carrier::crate(
  function(x) {
    stats::predict.lm(sw_lm, newdata = x)
  },
  sw_lm = sw_lm
)

# Log params and metrics.
mlflow_log_param("Intercept", sw_lm$coefficients["(Intercept)"], client = client, run_id = run$run_uuid)
mlflow_log_param("mass", sw_lm$coefficients["mass"], client = client, run_id = run$run_uuid)
mlflow_log_metric("MSE", mean(sw_lm$residuals^2), client = client, run_id = run$run_uuid)

# Log predictions and actual values (iwalk() comes from the purrr package).
sw_lm |>
  predict() |>
  iwalk(~ mlflow_log_metric("prediction", .x, step = as.numeric(.y), client = client, run_id = run$run_uuid))
data$height |>
  iwalk(~ mlflow_log_metric("actual", .x, step = .y, client = client, run_id = run$run_uuid))

# Save model to the registry.
crated_model = "/tmp/sw_lm"
saved_model = mlflow_save_model(packaged_sw_lm, crated_model)
logged_model = mlflow_log_artifact(crated_model, client = client, run_id = run$run_uuid)
```
```
2022/11/15 21:36:42 INFO mlflow.store.artifact.cli: Logged artifact from local dir /tmp/sw_lm to artifact_path=None
```
Finally, we will demonstrate how to deploy the model using a model-as-a-service approach. The mlflow::mlflow_rfunc_serve() function could be used for this; instead, we will launch the model from bash.
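A sketch of what that might look like, assuming the crated model was saved to /tmp/sw_lm as above and the MLflow CLI is installed; the port is arbitrary, and the exact scoring payload shape varies across MLflow versions:

```shell
# Serve the saved model over HTTP on an assumed port, using the local R
# environment rather than a managed one.
mlflow models serve -m /tmp/sw_lm --port 5000 --env-manager local &

# Once the server is up, score a record against the /invocations endpoint
# (payload shape shown for MLflow >= 2.0).
curl -s http://127.0.0.1:5000/invocations \
  -H 'Content-Type: application/json' \
  -d '{"dataframe_records": [{"mass": 100}]}'
```

The server responds with the model's predicted height for the supplied mass.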