diff --git a/README.md b/README.md
index d4b69e4..b39b78d 100644
--- a/README.md
+++ b/README.md
@@ -19,18 +19,15 @@ The main `vpp-proto` image runs on `hvn0.chbtl0.ipng.ch` with a VM called `vpp-p
 When you want to refresh the image, you can do the following:
 
 ```
-spongebob:~$ ssh -A root@hvn0.chbtl0.ipng.ch
-
-SNAP=$(date +%Y%m%d) ## 20221012
-zfs snapshot ssd-vol0/vpp-proto-disk0@${SNAP}-before
-virsh start --console vpp-proto
+hvn0-chbtl0:~$ virsh start --console vpp-proto
 
 ## Do the upgrades, make changes to vpp-proto's disk image
-## You can always roll back to the -before image if you'd like to revert
+## You can always roll back to the previous snapshot image if you'd like to revert
 
-virsh shutdown --console vpp-proto
-zfs snapshot ssd-vol0/vpp-proto-disk0@${SNAP}-release
-zrepl signal wakeup vpp-proto-snapshots
+hvn0-chbtl0:~$ SNAP=$(date +%Y%m%d) ## 20221012
+hvn0-chbtl0:~$ virsh shutdown vpp-proto
+hvn0-chbtl0:~$ sudo zfs snapshot ssd-vol0/vpp-proto-disk0@${SNAP}-release
+hvn0-chbtl0:~$ sudo zrepl signal wakeup vpp-proto-snapshots
 ```
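+
+To double-check that the release snapshot was created, `zfs list` can be pointed at the dataset
+(an illustrative invocation, using the dataset name from above):
+
+```
+hvn0-chbtl0:~$ zfs list -t snap ssd-vol0/vpp-proto-disk0
+```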
 
 There is a `zrepl` running on this machine, which can pick up the snapshot by manually
@@ -41,7 +38,7 @@ running labs will not be disrupted, as they will be cloned off of old snapshots.
 
 You will find the image as `ssd-vol0/hvn0.chbtl0.ipng.ch/ssd-vol0/vpp-proto-disk0`:
 ```
-spongebob:~$ ssh -A root@hvn0.lab.ipng.ch 'zfs list -t snap'
+lab:~$ ssh -A root@hvn0.lab.ipng.ch 'zfs list -t snap'
 NAME                                                                     USED  AVAIL     REFER  MOUNTPOINT
 ssd-vol0/hvn0.chbtl0.ipng.ch/ssd-vol0/vpp-proto-disk0@20221013-release     0B      -     6.04G  -
 ```
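+
+Lab VM disks are ZFS clones off of these snapshots, which is why a new image does not disrupt labs
+that are still running on an older one. As a minimal sketch (the clone name here is hypothetical):
+
+```
+lab:~$ ssh -A root@hvn0.lab.ipng.ch 'zfs clone \
+  ssd-vol0/hvn0.chbtl0.ipng.ch/ssd-vol0/vpp-proto-disk0@20221013-release \
+  ssd-vol0/vpp0-0-disk0'
+```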
@@ -65,15 +62,17 @@ SSH keys, Bird/FRR configs, etc). We do this on the lab controller `lab.ipng.ch`
 1.  Rsyncs the built overlay into that filesystem
 1.  Unmounts the filesystem
 1.  Starts the VM using the newly built filesystem
+1.  Commits the `openvswitch` topology configuration (see `overlays/*/ovs-config.sh`, sketched below)
 
 Of course, the first two steps are meant to ensure we don't clobber running labs; this safeguard
 can be overridden with the `--force` flag. And when the lab is finished, it's common practice to
 shut down the VMs and destroy the clones.
 
 ```
-lab:~/src/ipng-lab$ ./destroy  --host hvn0.lab.ipng.ch
-lab:~/src/ipng-lab$ ./generate --host hvn0.lab.ipng.ch --overlay bird
-lab:~/src/ipng-lab$ ./create   --host hvn0.lab.ipng.ch --overlay bird
+lab:~/src/lab$ ./generate --host hvn0.lab.ipng.ch --overlay default
+lab:~/src/lab$ lab=0 ./destroy   ## remove VMs and ZFS clones
+lab:~/src/lab$ lab=0 ./create    ## create ZFS 'pristine' snapshot and start VMs
+lab:~/src/lab$ lab=0 ./pristine  ## return the lab to the latest 'pristine' snapshot
 ```
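+
+The `ovs-config.sh` scripts themselves are per-overlay and not reproduced here; a minimal sketch of
+what such a script might do, with hypothetical bridge and port names, is wiring each VM interface
+into an OpenvSwitch bridge and tagging it:
+
+```
+#!/bin/sh
+## Hypothetical sketch: plug lab ports into an OVS bridge as access ports.
+ovs-vsctl --may-exist add-br vpplab
+for port in vpp0-0-e0 vpp0-1-e0; do
+  ovs-vsctl --may-exist add-port vpplab $port
+  ovs-vsctl set port $port vlan_mode=access tag=10
+done
+```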
 
 ### Generate
@@ -81,22 +80,48 @@ lab:~/src/ipng-lab$ ./create   --host hvn0.lab.ipng.ch --overlay bird
 The generator reads input YAML files one after another, merging and overriding them as it goes along,
 then builds for each node a `node` dictionary alongside the `lab` and other information from the
 config files. Then, it reads the `overlays` dictionary for a given --overlay type, reading all the
-template files from that overlay directory and assembling an output directory which will hold the
+common files from that overlay directory and assembling an output directory which will hold the
 per-node overrides, emitting them to the directory specified by the --build flag. It also copies in
-any per-node files (if they exist) from the overlays/$(overlay)/blobs/$(node.hostname)/ giving full
-control of the filesystem's contents.
+any per-node files (if they exist) from the overlays/$(overlay)/hostname/$(node.hostname)/, giving
+full control of the filesystem's ultimate contents.
+
+```
+lab:~/src/lab$ ./generate --host hvn0.lab.ipng.ch --overlay default
+lab:~/src/lab$ git status build/default/hvn0.lab.ipng.ch/
+```
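+
+The YAML schema is not spelled out in this README; purely as a hypothetical illustration of the
+merge-and-override behavior, a later file might override a key set by an earlier one:
+
+```
+lab:~/src/lab$ cat config/common.yaml
+lab:
+  overlay: default
+nodes:
+  - hostname: vpp0-0
+  - hostname: vpp0-1
+lab:~/src/lab$ cat config/hvn0.lab.ipng.ch.yaml
+lab:
+  overlay: bird   ## overrides the value from common.yaml
+```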
+
+### Destroy
+
+Ensures both that the VMs are not running (stopping them if they are) and that their filesystem
+clones are destroyed. Obviously this is the most dangerous operation of the bunch, but the
+philosophy of the lab is that the VMs can always be re-created off of a stable base image and a
+generated build.
+
+```
+lab:~/src/lab$ lab=0 ./destroy   ## remove VMs and ZFS clones on hvn0.lab.ipng.ch
+```
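+
+Under the hood this amounts to force-stopping each guest and destroying its clone, along the lines
+of the following (hypothetical VM and dataset names, run on the hypervisor):
+
+```
+hvn0-lab:~$ virsh destroy vpp0-0                       ## forcibly stop the VM
+hvn0-lab:~$ sudo zfs destroy -r ssd-vol0/vpp0-0-disk0  ## remove the clone and its snapshots
+```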
 
 ### Create
 
 Based on a generated directory and a lab YAML description, uses SSH to connect to the hypervisor,
 create a clone of the base `vpp-proto` snapshot, mount it locally in a staging directory, then rsync
 over the generated overlay files from the generator output (build/$(overlay)/$(node.hostname)),
-after which the directory is unmounted and the virtual machine booted from the clone.
+after which the directory is unmounted and a ZFS snapshot called `pristine` is created.
+The VMs are then booted off of their `pristine` snapshot.
 
-If the VM is running, or there exists a clone, an error is printed and the process skips over that
-node. It's wise to run `destroy` before `create` to ensure the hypervisors are in a pristine state.
+Typically, destroying and re-creating is only necessary when the build or the base image changes.
+Otherwise, the lab can be brought back to a _factory default_ state by rolling back to the
+`pristine` snapshot.
 
-### Destroy
+```
+lab:~/src/lab$ lab=0 ./create    ## create ZFS 'pristine' snapshots and start VMs
+```
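+
+The tail end of that per-node sequence looks roughly like this (hypothetical names; the script
+drives these commands over SSH):
+
+```
+hvn0-lab:~$ umount /tmp/staging/vpp0-0              ## staging mount of the clone
+hvn0-lab:~$ sudo zfs snapshot ssd-vol0/vpp0-0-disk0@pristine
+hvn0-lab:~$ virsh start vpp0-0
+```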
 
-Ensures that both the VMs are not running (and will stop them if they are), and their filesystem
-clones are destroyed. Obviously this is the most dangerous operation of the bunch.
+### Pristine
+
+In the process of creating the ZFS clones and their per-node filesystems, a snapshot of each VM's
+boot disk is made; this is called the `pristine` snapshot. After a lab session, the lab can quickly
+be brought back to a default state by rolling the disks back to the `pristine` snapshot and
+restarting the virtual machines.
+
+```
+lab:~/src/lab$ lab=0 ./pristine  ## return the lab to the latest 'pristine' snapshot
+```
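+
+Rolling back is the inverse of booting: stop the VM, `zfs rollback` its disk to the `pristine`
+snapshot, and start it again (hypothetical names):
+
+```
+hvn0-lab:~$ virsh destroy vpp0-0
+hvn0-lab:~$ sudo zfs rollback -r ssd-vol0/vpp0-0-disk0@pristine
+hvn0-lab:~$ virsh start vpp0-0
+```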