Monitor

Iterators & Enumerators

You can use iterate() and enum() with any iterable object. In this example we use a PyTorch DataLoader.

# Create a data loader for illustration
import time

import torch
from torchvision import datasets, transforms

from labml import logger, monit, lab, tracker

test_loader = torch.utils.data.DataLoader(
        datasets.MNIST(lab.get_data_path(),
                       train=False,
                       download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=32, shuffle=True)
for data, target in monit.iterate("Test", test_loader):
    time.sleep(0.01)
Test...[DONE] 5,670.82ms
for i, (data, target) in monit.enum("Test", test_loader):
    time.sleep(0.01)
Test...[DONE] 5,698.60ms
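If you don't have a DataLoader handy, any iterable works. As a rough, labml-free sketch of what such a monitored iterator does (the timing and output format here are illustrative, not labml's actual implementation), an enum-style helper can be built by enumerating inside an iterate-style one:

```python
import time

def iterate(name, iterable):
    """Yield items from `iterable`, then report the elapsed time.

    A hypothetical stand-in for a monitored iterator; labml's real
    version also renders live progress while iterating.
    """
    start = time.time()
    for item in iterable:
        yield item
    print(f"{name}...[DONE] {(time.time() - start) * 1000:,.2f}ms")

def enum(name, iterable):
    """Like iterate(), but also yields indices, as enumerate() does."""
    return iterate(name, enumerate(iterable))

# Works with any iterable, not just a DataLoader
for i, batch in enum("Test", [[1, 2], [3, 4], [5, 6]]):
    time.sleep(0.01)
```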

Sections

Sections let you monitor the time taken by different tasks and also help keep the code clean by separating different blocks of code.

with monit.section("Load data"):
    # code to load data
    time.sleep(2)
Load data...[DONE]    2,003.36ms
with monit.section("Load saved model"):
    time.sleep(1)
    monit.fail()
Load saved model...[FAIL]     1,007.45ms
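The behavior above can be sketched with a plain context manager: time the block, and report [DONE] or [FAIL] depending on whether a failure was flagged. This is a simplified, hypothetical stand-in for monit.section() and monit.fail(), not labml's code (the real version also handles nesting and live updates):

```python
import time
from contextlib import contextmanager

_failed = False

def fail():
    """Flag the current section as failed (sketch of monit.fail())."""
    global _failed
    _failed = True

@contextmanager
def section(name):
    """Time a block and print [DONE] or [FAIL] when it exits."""
    global _failed
    _failed = False
    start = time.time()
    yield
    status = "FAIL" if _failed else "DONE"
    print(f"{name}...[{status}]\t{(time.time() - start) * 1000:,.2f}ms")

with section("Load data"):
    time.sleep(0.01)
```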

You can also show progress while a section is running.

with monit.section("Train", total_steps=100):
    for i in range(100):
        time.sleep(0.1)
        # Multiple training steps in the inner loop
        monit.progress(i)
Train...[DONE]        10,508.56ms
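Each monit.progress(i) call reports how far along the section is relative to total_steps; the arithmetic behind such a display is just the completion fraction. A small illustrative helper (not labml's code):

```python
def progress_percent(step, total_steps):
    """Completion percentage for a progress display (illustrative)."""
    return 100.0 * step / total_steps

# Step 25 of total_steps=100 is a quarter of the way through
print(progress_percent(25, 100))  # → 25.0
```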

Loop

This can be used for the training loop. loop() keeps track of the time taken and the time remaining for the loop.

labml.tracker.save() outputs the current status along with the global step.

for step in monit.loop(range(0, 400)):
    tracker.save()
     399:    2ms  0:00m/  0:00m  
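The remaining-time figure in the output above comes from the usual ETA arithmetic: average per-step time so far multiplied by the steps left. A sketch of that estimate (assuming a constant per-step cost; this is not labml's actual implementation):

```python
def estimate_remaining(elapsed, steps_done, total_steps):
    """Estimate seconds remaining, assuming a constant per-step cost."""
    if steps_done == 0:
        return float("inf")  # no data yet
    per_step = elapsed / steps_done
    return per_step * (total_steps - steps_done)

# 100 of 400 steps took 100s -> 300 steps left at 1s each
print(estimate_remaining(100.0, 100, 400))  # → 300.0
```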
tracker.set_global_step(0)

You can manually increment global step too.

for step in monit.loop(range(0, 400)):
    tracker.add_global_step(5)
    tracker.save()
   2,000:    2ms  0:00m/  0:00m
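The global-step bookkeeping behind these calls is simple to trace: 400 iterations incrementing by 5 lands at step 2,000, matching the output above. A minimal, hypothetical sketch of that bookkeeping (not the real tracker):

```python
class Tracker:
    """Minimal global-step counter, sketching set/add_global_step."""

    def __init__(self):
        self.global_step = 0

    def set_global_step(self, step):
        self.global_step = step

    def add_global_step(self, n=1):
        self.global_step += n

tracker = Tracker()
tracker.set_global_step(0)
for step in range(400):
    tracker.add_global_step(5)
print(tracker.global_step)  # → 2000
```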