If you want to create a file of over 10MB in Drive, you can’t use either the DriveApp service or the Advanced Drive service. Even if you use the JSON API directly, the 10MB restriction on POST body sizes in UrlFetch stops you from doing that too.

If the file is for your own use, then you can zip it right there in Apps Script – we’ll come to that later – but if the file needs to stay as is, then you are pretty much screwed.

Unless you use resumable uploads, and do the upload in chunks. The problem is that these are kind of complicated – so I’ve added a function to my cUseful library to do it for you.

Here’s an example of reading a 50MB file, doing something with it, and writing it back again.

Walkthrough

Get some data from Drive and play around with it. This case converts a JSON file to newline-delimited JSON for input to BigQuery (see Loading large JSON datasets into BigQuery with Apps Script for how to do this directly with Apps Script without creating an intermediate file).
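A minimal sketch of that step might look something like this. The file ID and file names are placeholders, and I’m assuming the source file contains a JSON array of objects:

    // get the source file from Drive - the ID is a placeholder
    var sourceFile = DriveApp.getFileById('YOUR_SOURCE_FILE_ID');

    // read the content and parse it - this assumes a JSON array of objects
    var items = JSON.parse(sourceFile.getBlob().getDataAsString());

    // convert to newline delimited JSON, one object per line, ready for BigQuery
    var delimited = items.map(function (item) {
      return JSON.stringify(item);
    }).join('\n');

    // wrap it up as a blob with a filename and mimetype for the upload
    var blob = Utilities.newBlob(delimited, 'application/json', 'mydata.json');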

Writing the large file

There’s not much to this at all (there’s a sketch of the call after this list).

    • Because the cUseful library is dependency free (no services are referenced directly in it, to avoid unnecessary requests for authorization), you need to pass UrlFetchApp over to it.
    • You need to have enabled a Drive scope token by mentioning DriveApp somewhere in your code – even in a comment will do – and enabled the Drive API in the developer console for this project.
    • The data should be passed as a blob, with the mimetype and filename included in it.
    • In this example I wanted the output file to be in the same folder as the source file. If you leave out this argument, the file will go in the top level of Drive.
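Carrying on from the reading sketch above, the writing call looks something like this. The argument order shown here is my assumption (access token, blob, destination folder, optional progress function, then UrlFetchApp), so check the library source for the exact signature.

    // get an OAuth token to authorize the upload requests
    // (the Drive scope comes from using DriveApp above)
    var accessToken = ScriptApp.getOAuthToken();

    // put the output file in the same folder as the source file
    var folder = sourceFile.getParents().next();

    // hand over UrlFetchApp, since the library doesn't reference any services itself
    // NOTE: the argument order here is an assumption - see the cUseful source
    var status = cUseful.resumableUpload(accessToken, blob, folder, undefined, UrlFetchApp);

    // the returned status object is worth keeping - it's what you need to restart later
    Logger.log(status);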

Using zip

You can zip your file to make it smaller. You can still use resumable uploads, even if the zipped result is small enough to write the whole thing in one go. Here’s the example modified so that it zips the content first, then unzips it and checks it worked.
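Here’s a sketch of the zip and check steps, carrying on from the earlier sketches. The standard Utilities service does the zipping, and I’m assuming the returned status object carries the id of the file that was written:

    // zip the delimited JSON blob before uploading - Utilities.zip takes an array of blobs
    var zipped = Utilities.zip([blob], 'mydata.zip');

    // upload the zipped blob instead of the original (same assumed call as before)
    var status = cUseful.resumableUpload(accessToken, zipped, folder, undefined, UrlFetchApp);

    // to check it worked, read the file back, unzip it and compare with the original content
    var written = DriveApp.getFileById(status.id).getBlob();  // assumes status.id holds the new file id
    var unzipped = Utilities.unzip(written)[0].getDataAsString();
    if (unzipped !== delimited) throw 'zip round trip check failed';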

It still needed to do this in two chunks, but at least the file written was only 12MB rather than the original 50+MB. Here’s the log

Reporting on progress

You can provide a function as the 4th argument to resumableUpload, which will be called after each chunk is loaded. If you don’t provide a function, a default one will be called which logs progress with Logger. If you return true from this function, it will cancel the upload.

This progress function is passed a status object containing everything needed for reporting or restarting.

This status object is also returned from the resumable upload function.
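Here’s what a custom progress function might look like. The status fields used (start, size, content) are the ones described in the restart section below; everything else is my assumption:

    // a custom progress function - called after each chunk is written
    // returning true cancels the upload
    function myProgress (status) {
      var done = status.start + status.size;
      var total = status.content.getBytes().length;  // assumes status.content is a blob
      Logger.log('written ' + done + ' of ' + total + ' bytes');
      return false;  // return true here if you ever want to abandon the upload
    }

Pass it as the 4th argument in place of undefined in the earlier writing sketch.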

Here’s what the default progress report looks like

Restarting an interrupted load

If you run into quota problems and want to continue with the upload later, you can use the contents of the status object to do that at a later date. Here’s how –
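Here’s a sketch of what that restart might look like. The function name and argument order are my guesses based on the description below, so check the cUseful source for the real entry point:

    // restart an interrupted upload using the saved status object
    // NOTE: the function name and argument order are assumptions - check cUseful
    var newStatus = cUseful.resumableUploadRestart(
      ScriptApp.getOAuthToken(),     // accessToken
      status.content,                // blob - or recreate the content yourself
      status.location,               // location - the restart url returned by Drive
      status.start + status.size,    // start - the resume position in the content
      myProgress,                    // func - optional progress monitor
      UrlFetchApp                    // the library is dependency free, as before
    );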

where

  • accessToken: the access token to use
  • blob: a blob with the total content, filename and mimetype. The content is available in status.content, or you can recreate it for the resumption
  • location: the restart URL. It can be found in status.location
  • start: the resume start position in the contents. This would be status.start + status.size
  • func: an optional function to monitor progress. If missing, the default one will be used

Defeating the quotas

If you’re not careful you’ll soon run out of quota (100MB a day is all you can have on UrlFetch, and if you break that, pretty much nothing will work till tomorrow), so for this exercise I created another account, shared a directory with it, and ran the upload from that other account.
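If you want to do the same, sharing the working folder from the main account is a one liner (the folder ID and email address are placeholders):

    // from the main account: share the working folder with the second account
    DriveApp.getFolderById('WORKING_FOLDER_ID')
      .addEditor('my.other.account@example.com');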

The code

The code is in the cUseful library, and below

For more like this see Google Apps Scripts Snippets
Why not join our forum, follow the blog or follow me on Twitter to ensure you get updates when they are available.