Some APIs allow batching of requests, but cap the number of items you can send in a single call. When the calls are asynchronous, chunking a request into smaller pieces and stitching the results back together afterwards can be a little tricky.

An example of an API with this kind of behavior is the Knowledge Graph Search API. You supply a list of MIDs (machine-generated identifiers), and it returns Knowledge Graph matches for the ones it knows about. It allows some maximum number of IDs (I’m not sure exactly how many) in one batch.

Splitting the list into chunks means we don’t go over the limit, and it also allows multiple chunks to be processed in parallel.


To get started, let’s use a small wrapper for the Node kgsearch API, and install the googleapis npm package. You can generate an API key in your cloud project (or use a service account). For simplicity I’m using an API key, so you’ll need to get your own and substitute the relevant value. The mode parameter is just how I manage different keys for production and development, so you can ignore it if you don’t do anything like that.
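A minimal sketch of such a wrapper, assuming the googleapis client’s kgsearch surface (google.kgsearch('v1'), then entities.search taking ids and auth). The API_KEYS object and the injectable client parameter are my own conventions here, not anything the library requires:

```javascript
// Thin wrapper around the Knowledge Graph Search endpoint.
// Substitute your own keys -- `mode` just picks between them.
const API_KEYS = {
  production: 'YOUR_PRODUCTION_API_KEY',
  development: 'YOUR_DEVELOPMENT_API_KEY',
};

// `client` is injectable so the network call can be stubbed in tests;
// by default it lazily builds the real googleapis kgsearch client.
function kgSearch(ids, mode = 'development', client) {
  const kg = client || require('googleapis').google.kgsearch('v1');
  return kg.entities
    .search({ ids, auth: API_KEYS[mode] })
    .then((res) => res.data);
}

module.exports = { kgSearch };
```

Calling kgSearch with an array of MIDs resolves with the raw response body (res.data).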


In its simplest form, you can now do this:
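A sketch of that simplest call, assuming the wrapper described above exports the kgSearch function from a local module (the path and the MIDs here are placeholders):

```javascript
const { kgSearch } = require('./kg'); // wherever your wrapper lives

// Look up a couple of MIDs and log the raw matches.
kgSearch(['/m/0dl567', '/m/05842k']).then((data) => {
  console.log(data.itemListElement);
});
```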

However, let’s make it handle asynchronous chunking – which is the main purpose of this post.


This is actually a resolver for one of my GraphQL server APIs. You can query with loads of MIDs, and get back the Knowledge Graph result(s) via the API. The IDs come through the GraphQL resolver arguments as an array of strings in args.params.ids, so you’ll need to modify the arguments to suit however you send over the IDs.
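The resolver is along these lines — a sketch, with args.params.ids coming from my schema, and kgGets passed in as a parameter purely so the snippet is self-contained (in the real resolver it’s just imported):

```javascript
const max = 20; // no more than 20 ids in any one request

// Resolve a (possibly long) list of MIDs by chunking, querying the
// Knowledge Graph for each chunk in parallel, and flattening the results.
async function resolveEntities(args, kgGets) {
  const ids = args.params.ids;

  // Slice the list into chunks of at most `max` ids.
  const chunks = ids
    .map((_, i) => (i % max === 0 ? ids.slice(i, i + max) : null))
    .filter((chunk) => chunk !== null);

  // Fire every chunk off simultaneously, then concatenate the results
  // as if the chunking never happened.
  const results = await Promise.all(chunks.map((chunk) => kgGets(chunk)));
  return [].concat(...results);
}
```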

Let’s break that down.
No more than 20 IDs in any chunk:
const max = 20;

Make an array of slices of the original list, each at most 20 long. A simple way to do this is to create a slice only at indices that are multiples of 20, returning null for every other index and filtering those out.
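As a standalone sketch of that trick:

```javascript
const max = 20;

// Slice `list` only at indices that are multiples of `max`; every other
// index produces null, which the filter then throws away.
function chunk(list) {
  return list
    .map((_, i) => (i % max === 0 ? list.slice(i, i + max) : null))
    .filter((slice) => slice !== null);
}
```

For 45 IDs this yields chunks of length 20, 20 and 5.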

We’ll look at the kgGets function later; it’s the one that interacts with the Knowledge Graph API. This sends off each of the requests simultaneously (I’ll cover how to throttle that, if necessary, in another post), then concatenates all the results into a single array, as if none of that chunking ever happened.
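That step, in isolation (fetchChunk standing in for kgGets here):

```javascript
// Request every chunk at once; Promise.all preserves chunk order, so a
// single concat restores the original ordering of the full list.
function fetchAll(chunks, fetchChunk) {
  return Promise.all(chunks.map(fetchChunk)).then((results) =>
    [].concat(...results)
  );
}
```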

Hitting the API

This is a wrapper around the kgSearch method I covered at the beginning of the article. If you just want the vanilla results from the API, you can use kgSearch instead of kgGets and stop here.

However, the Knowledge Graph API returns a load of stuff that I don’t need for my API, so I’m going to use the ‘pluck-deep’ npm package to pull out what I need. As an added complication, I use Facebook’s DataLoader to optimize GraphQL resolvers. This requires that the results are exactly the same length and in the same order as the original request (the Knowledge Graph only returns results for which it has some data), so there’s a little bit of fiddling required here to pack out the arrays again.
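The re-packing might look something like this — the Map-based helper and the getId extractor are my own sketch of the idea, not the exact code (as I recall, the API returns each entity’s @id prefixed with "kg:", so getId would strip that before comparing against your MIDs):

```javascript
// DataLoader needs results in exactly the same length and order as the
// requested ids, but the API omits ids it knows nothing about.
function padResults(ids, results, getId) {
  // Index what came back by id...
  const byId = new Map(results.map((r) => [getId(r), r]));
  // ...then emit one slot per requested id, null where there was no match.
  return ids.map((id) => byId.get(id) || null);
}
```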

Organizing the data

Again, this won’t be relevant if you just want the vanilla API response, but here’s the data I need plucked out.
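Something along these lines — the field names (itemListElement, result, resultScore) follow the API’s response format as I understand it, and the slimmed-down shape is just what my API happens to need:

```javascript
// Reduce a raw Knowledge Graph response to only the fields I serve.
function organize(response) {
  return (response.itemListElement || []).map((item) => ({
    id: item.result['@id'],          // e.g. "kg:/m/..."
    name: item.result.name,
    description: item.result.description,
    score: item.resultScore,
  }));
}
```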

Since G+ is closed, you can now star and follow post announcements and discussions on GitHub, here.