Modelbit and Weights & Biases are excited to announce an integration for data scientists and machine learning engineers.
The integration lets ML practitioners train and deploy their models in Modelbit while logging and visualizing training progress in Weights & Biases.
Training Models & Tracking Experiments in Production
Deploying ML models into production has typically been perceived as a tedious and intimidating task. Modelbit was created to make deploying ML models into production as simple as calling modelbit.deploy().
While simplifying model deployment is a step in the right direction, it’s equally critical that ML teams are set up to successfully track experiments when training models. This is exactly where a platform like Weights & Biases comes in. Traditionally, teams will train models in something like a Jupyter notebook. In recent years, many of those teams are now logging training data to Weights & Biases so they can track their experiments. However, once the ML team is happy with the model’s performance, the issue of deploying the model into production comes right back into focus.
What if we could train our models in the same platform that we use to deploy them?
That’s a question we heard from our customers, and it led to the release of Modelbit’s training jobs. When you train ML models in Modelbit, they are instantly available to call via REST API once you’re ready to serve them in production.
But this doesn’t eliminate the need to track your model’s training experiments, which is why we are very excited to announce that Modelbit now integrates seamlessly with Weights & Biases to help you log model training progress directly to your W&B projects.
In this tutorial, we’ll demonstrate how you can integrate Weights & Biases with Modelbit. To demonstrate the full power of the integration, we’ll train a neural net for binary classification using PyTorch. We’ll run the training in Modelbit, measure our training experiments in Weights & Biases, and then finish by deploying to production in Modelbit.
Let’s begin!
Set up Modelbit
Using Modelbit, you can deploy any ML model directly from your Python notebook (or git) to Snowflake, Redshift, and REST.
To get started:
Install Modelbit
Firstly, install the Modelbit package via pip:
{%CODE python%}
pip install modelbit
{%/CODE%}
Log in to Modelbit
To deploy models using Modelbit, create your account here. Next, log in to Modelbit from Jupyter:
{%CODE python%}
import modelbit
mb = modelbit.login()
{%/CODE%}
And done!
Now, we can start pushing our models to deployment.
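As a quick illustration of how simple a deployment can be, here is a minimal sketch; "example_predict" is a hypothetical stand-in for real inference code, not part of this tutorial’s model:
{%CODE python%}
# A minimal sketch of a Modelbit deployment. "example_predict" is a
# hypothetical inference function used only for illustration.
def example_predict(x: float) -> float:
    return 2 * x + 1  # stand-in for real model inference

mb.deploy(example_predict)
{%/CODE%}
Calling mb.deploy() on a function packages it, along with its dependencies, behind a REST endpoint.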
Set up Weights & Biases
As discussed above, Weights & Biases makes it easy to log experiments and visualize results directly from your dashboard.
The following steps outline how to get started.
Install Weights & Biases
Firstly, install the CLI and Python library for interacting with the Weights & Biases API:
{%CODE python%}
pip install wandb
{%/CODE%}
Log in to Weights & Biases
To log your experiments using Weights & Biases, create your account here.
This will give you an API key.
Next, log in and paste your API key when prompted:
{%CODE python%}
wandb login
{%/CODE%}
Import Weights & Biases
Lastly, import the wandb library in your notebook so you can log your training runs:
{%CODE python%}
import wandb
{%/CODE%}
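As an aside, with wandb imported you can also authenticate directly from Python instead of the CLI:
{%CODE python%}
import wandb

# Alternative to the "wandb login" CLI command; prompts for your
# API key if you aren't already authenticated.
wandb.login()
{%/CODE%}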
Done!
Integrate Modelbit with Weights & Biases
To seamlessly deploy your machine learning models with Modelbit and have training runs automatically logged to Weights & Biases, you need to connect the two platforms.
The steps are:
Grab your API key from the Weights & Biases dashboard.
Provide the API Key to Modelbit
Next, log in to the Modelbit dashboard, navigate to Integrations in Settings, and add your Weights & Biases API key.
Done!
Now we can proceed with training a model, deploying it, and monitoring it with Weights & Biases.
Model Development, Deployment and Logging
For this tutorial, we’ll train a neural network for binary classification using PyTorch.
We’ll deploy and train it in Modelbit, and log its training process in Weights & Biases.
Selecting a 2D Spiral Dataset
For this demonstration, we’ll use a toy 2D spiral dataset.
Download it here: Dataset link.
However, you are free to follow along with any classification or regression dataset you may have.
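Before building the model, load the data into the 2D feature tensor X and label tensor y used throughout the rest of this tutorial. The sketch below is one way to do it, assuming the download is a CSV named "spiral.csv" with a header row and columns for the two coordinates and the class label; adjust the parsing to match your file:
{%CODE python%}
import numpy as np
import torch

# Assumes a CSV named "spiral.csv" with a header row and columns
# x1, x2, label. Adjust the filename and parsing to match your file.
data = np.loadtxt("spiral.csv", delimiter=",", skiprows=1)

X = torch.tensor(data[:, :2], dtype=torch.float32)  # 2D inputs
y = torch.tensor(data[:, 2], dtype=torch.long)      # class labels (0 or 1)
{%/CODE%}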
Building the Model
Next, let’s define our PyTorch model class.
{%CODE python%}
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralNetwork(nn.Module):
    def __init__(self, hidden_size, classes=2):
        super().__init__()
        self.fc1 = nn.Linear(2, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, classes)

    def forward(self, x):
        ## Forward pass: two hidden layers, then softmax over the classes
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.softmax(self.fc3(x), dim=1)
        return x

    def accuracy(self, outputs, labels):
        ## Fraction of predictions that match the true labels
        return int(torch.sum(torch.argmax(outputs, dim=1) == labels)) / len(outputs)
{%/CODE%}
Here, we define a neural network with two hidden layers.
It takes 2D data as input and outputs the corresponding softmax probabilities over the two classes.
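As a quick sanity check, a forward pass on random 2D inputs confirms the wiring and the softmax output:
{%CODE python%}
# Quick sanity check of the forward pass on random 2D inputs
net = NeuralNetwork(hidden_size=16)
probs = net(torch.randn(8, 2))

print(probs.shape)       # torch.Size([8, 2])
print(probs.sum(dim=1))  # each row sums to 1 (softmax probabilities)
{%/CODE%}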
Deployment and Logging
Lastly, we proceed with training, deployment and logging.
First, we define some hyperparameters in the global scope:
{%CODE python%}
HIDDEN_SIZE = 400 ## Number of neurons in the hidden layer
TOTAL_EPOCHS = 300 ## Number of epochs
LR = 0.005 ## Learning rate
{%/CODE%}
Next, we define a training and logging method. This will be responsible for training the model locally while simultaneously logging progress to W&B.
Let’s call it "wandb_training".
Here, we initialize a run in Weights & Biases using "wandb.init()", instantiate the model object, and define the loss and optimizer. Finally, we run the training loop.
In every epoch, we log the training metrics using the "wandb.log()" method.
More specifically, we track the loss and accuracy of the model.
{%CODE python%}
def wandb_training():
    # Initialize a W&B run and record the hyperparameters
    wandb.init(
        project="Modelbit With W&B",
        config={
            "learning_rate": LR,
            "architecture": "Neural Network",
            "dataset": "Spiral",
            "total_epochs": TOTAL_EPOCHS,
            "hidden_size": HIDDEN_SIZE
        }
    )
    # Initialize the model
    model = NeuralNetwork(HIDDEN_SIZE, classes=2)
    # Define the loss function
    criterion = nn.CrossEntropyLoss()
    # Define the optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    # Train on the X and y tensors loaded earlier
    for epoch in range(TOTAL_EPOCHS):
        outputs = model(X)
        loss = criterion(outputs, y)
        # Backward pass and optimizer step
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Compute accuracy
        acc = model.accuracy(outputs, y)
        # Log the training metrics to W&B
        wandb.log({"acc": acc, "loss": loss.item()})
    # Finish the W&B run once training completes
    wandb.finish()
    # Return the trained model
    return model
{%/CODE%}
We recommend executing the training locally before sending it off to Modelbit; this helps catch any errors in the code or network layers early.
{%CODE python%}
wandb_training()
{%/CODE%}
Once we are satisfied, we deploy our training job to Modelbit as follows:
{%CODE python%}
model = mb.add_job(wandb_training, deployment_name="Modelbit_With_WB")
{%/CODE%}
And done!
Let’s run this cell.
When the cell runs, Modelbit uploads the dependencies and the data, followed by a success message confirming that the training job will be ready soon.
Navigating to Modelbit’s dashboard shows us the deployed training job.
Next, we run the job and wait for Modelbit to train it.
Once the training is over, some training logs are available under “Training Jobs”.
Moreover, the run also appears in the W&B dashboard, under the project named in the wandb.init() call: “Modelbit With W&B”.
We see both accuracy and loss logs in the W&B dashboard, as specified in the wandb.log(...) call.
Conclusion
The new integration allows ML teams to train their models in Modelbit while sending logs to Weights & Biases for experiment tracking and fine-tuning. Once you’re ready to deploy to production, your model in Modelbit is available to call as a REST API.
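For example, calling a deployed model over REST might look like the sketch below. The URL shape and payload here are illustrative; copy the exact endpoint for your workspace and deployment from the Modelbit dashboard:
{%CODE python%}
import requests

# Illustrative endpoint only: copy the exact URL for your workspace
# and deployment from the Modelbit dashboard.
url = "https://<your-workspace>.app.modelbit.com/v1/<deployment-name>/latest"

# Batched inputs as [[row_id, input], ...]
response = requests.post(url, json={"data": [[1, [0.5, -1.2]]]})
print(response.json())
{%/CODE%}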
Try out the new integration for free today (both Modelbit and Weights & Biases have free plans) and let us know what you think!