NVIDIA Omniverse Rendering Options with Google Cloud

I am an individual developer, but I would like the option of running renders in the cloud without tying up my desktop. Most of the time my desktop is fine, but if I use it for final path tracing renders then I cannot use my computer for anything else, so having a cloud option available seems worthwhile. (Cloud servers give me access to GPUs with larger memory capacity, and Google Cloud has possibly the widest range of options available.) I am sharing this in case my thinking through the options is useful to anyone else in a similar situation.

I also like the idea of copying my project files into the cloud with a history of previous versions as a form of backup. I don’t personally need git, and git doubles local disk usage by keeping a copy of assets under the .git directory. I am fine with using rsync or similar to copy files from my local workstation to cloud storage, then taking snapshots of the cloud file system on a daily/weekly/monthly schedule with appropriate retention periods.

Directory Structure

My plan is to keep all of my rendering files under a single directory tree, so I can use relative file paths to reference other files. As I don’t have an Enterprise license, Nucleus in the cloud is not an option for me. And frankly, a simple file system is just that – simple.

So I plan to have a directory structure something like:

  • /src/library/{library-name}/{structure}/{asset}.usd
  • /src/projects/{project-name}/shots/{scene-id}/{shot-id}/{shot-id}.usd
  • /renders/{project-name}/{shot-id}.mp4

Everything in the ‘src’ directory I will copy from my workstation up to the cloud; everything in the ‘renders’ directory I will copy from the cloud to my workstation.

I plan to be rendering shots, which will have the source stage files quite deep in the file system hierarchy. These files will reference other files from the same project, as well as other files from the library area of shared assets, all using relative path names.
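Since everything hinges on the references staying relative, a cheap sanity check before uploading is to scan for absolute asset paths. This is a sketch with assumptions: USD asset paths appear as @path@ in the ASCII .usda format, so a reference beginning with @/ is absolute; binary .usd crate files would need usdcat or similar to inspect, so this check only covers ASCII files.

```shell
# List any absolute asset references (@/...) in ASCII USD files under a root
# directory. Prints nothing and returns 0 when all references are relative.
check_relative_paths() {
    root="$1"
    if grep -rn '@/' --include='*.usda' "$root"; then
        echo "WARNING: absolute asset paths found under $root" >&2
        return 1
    fi
    return 0
}
```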

Google Storage Options

I am looking at Google Cloud services, but I am sure similar capabilities and pricing are available from other cloud providers given how competitive the cloud market is – one provider is enough to make my head spin! Please double-check the following information if you decide to move forward with your own project, as I found it quite confusing going through all the options. For example, there are different storage tiers with different pricing – I picked the options I think make most sense for this use case (rendering). I believe the following is directionally correct. Prices are based on US-Central.

  1. Google Drive. $10/month gets me 2TB of storage (or $3/month for 200GB). You can mount it as a network drive, but I have no idea about its performance, and I am not sure yet whether there would be additional network access costs if you start using it a lot. For example, it is not considered part of “Google Cloud”, so it might incur ingress or egress network costs when accessed from compute nodes. Also, I don’t think 200GB will be enough, so I will be forced up to the 2TB tier (there is nothing in between).
  2. Google Cloud object storage is for objects, like video or document files. Contents are accessible via a REST API, not as a normal file system. Pricing appears to be $20/month for 1TB (billed per GB actually stored). With Omniverse I would have to write an Asset Resolver to download resources, but the storage is cheaper.
  3. Google Cloud persistent disks are for mounting on compute nodes. Pricing is $40/month for 1TB of provisioned space (you can provision in 1GB increments and resize as needed). Persistent disks also support snapshots (roughly $50/month for 1TB of snapshot storage; snapshots are incremental, so you only pay for data that has changed, which is ideal for my use case as most files won’t change).
  4. Google Filestore is a network mounted disk, so I could have multiple compute nodes connecting to it at the same time, e.g. for parallel renders. Very cool. A mere $200/month for 1TB. Okay, that makes it a bit less appealing on my budget!
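For the persistent-disk option, snapshot schedules are a built-in feature. Something like the following should take a daily snapshot and keep two weeks of history (the disk name, region, zone and retention period are my own placeholders):

```shell
# Create a schedule: one snapshot per day, retained for 14 days.
gcloud compute resource-policies create snapshot-schedule daily-renders \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=14

# Attach the schedule to the data disk.
gcloud compute disks add-resource-policies render-data \
    --zone=us-central1-a \
    --resource-policies=daily-renders
```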

So, to summarize: Google Drive may be a good way to back up files on my desktop, but it does not seem like a serious option for cloud compute usage. Google Cloud object storage is cheaper, but takes more work to use, and I don’t think it supports random access to files, so I could believe performance suffers as a result. Google Cloud persistent disks can be mounted on a single compute node, and support snapshots as a form of backup. Google Filestore is a sharable network file system, but it comes at a premium. If I were frequently running parallel farms of compute nodes it would be attractive, as I only pay for the disk once (it’s sharable).

Oh, and in my case I am estimating I might want 500GB of storage for my own project, so Google object storage is probably $10/month, with persistent disk (plus snapshots) more like $30/month (as a guess).
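If I did go the object-storage route, mirroring the tree is at least easy from the CLI, even though Omniverse itself would still need an Asset Resolver. A sketch (the bucket name is a placeholder):

```shell
# Mirror local sources into a bucket; like rsync, only changed files move.
gcloud storage rsync --recursive src/ gs://my-omni-assets/src/

# Pull any renders written to the bucket back down.
gcloud storage rsync --recursive gs://my-omni-assets/renders/ renders/
```

There is also Cloud Storage FUSE for mounting a bucket as a file system, though I would not expect random-access performance to be great.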

Job Management

There is also the question of how to queue up work to get renders done. Omniverse comes with farm queue and agent infrastructure, which looks like it should work well with a compute node running in the cloud. I was also wondering about services like Google Cloud Run, so I would only pay for compute when it is in use, but Cloud Run is not designed to work with persistent disks (it will work with the object store or Filestore options). So if I had to do a lot of rendering it might become interesting, but probably not for my own project.


So, it seems my most logical options are:

  • Put up with only rendering on my desktop and use Google Drive for making backups of my files. I will start with this, although it does not have a good solution for snapshots (accessing multiple previous versions).
  • Use a Google Cloud persistent disk attached to a compute node with a GPU in the cloud, using spot instance pricing to do renders. (Spot instances are much cheaper, and I can put up with occasional periods of unavailability.) While the compute node is running I can use rsync to copy files to/from my local disk and the remote compute node, then shut down the compute node when not in use. For a little more I can set up a schedule of snapshots to back up files and protect against accidents.
  • If I decide I need multiple compute nodes in parallel, then I can start looking at the other storage options – either object storage to keep costs down with a bit more work, or Filestore with its improved functionality but higher cost.
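The spot-instance workflow from the second option might look like the following. This is a sketch only: the machine type, GPU, image and resource names are placeholders, GPU spot availability varies by zone, and installing NVIDIA drivers and Omniverse on the node is a separate exercise.

```shell
# Create a spot GPU node with the persistent data disk attached.
gcloud compute instances create render-node \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --provisioning-model=SPOT \
    --instance-termination-action=STOP \
    --maintenance-policy=TERMINATE \
    --image-family=debian-12 --image-project=debian-cloud \
    --disk=name=render-data,mode=rw

# ...run renders, rsync the results back, then stop the node so that only
# the disk (and its snapshots) keep costing money.
gcloud compute instances stop render-node --zone=us-central1-a
```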

But the good news is that only one option, Google object storage, requires any change to the rendering pipeline. All of the other options render from files on a regular file system. So laying out all my USD files on a local file system with relative path names seems a good way forward.
