In a serverless app, you typically choose parallel invocations when the event source that invokes your AWS Lambda function passes a single object in its event body, but multiple events can occur in the time it takes your function to process a single event.
For example, Amazon Cognito generates a single sync trigger event similar to the following each time a client synchronizes a dataset:
{
  "version": 2,
  "eventType": "SyncTrigger",
  "region": "us-east-1",
  "identityPoolId": "identityPoolId",
  "identityId": "identityId",
  "datasetName": "datasetName",
  "datasetRecords": {
    "SampleKey1": {
      "oldValue": "oldValue1",
      "newValue": "newValue1",
      "op": "replace"
    },
    "SampleKey2": {
      "oldValue": "oldValue2",
      "newValue": "newValue2",
      "op": "replace"
    }
  }
}
These events can occur hundreds or even thousands of times per second, and your function can take hundreds of milliseconds, a few seconds, or more to complete synchronously, especially if it waits on downstream dependencies such as third-party API calls.
In this case, you want your Lambda function to be invoked once per event, but hundreds or thousands of these function instances may be executing at any given time, making this a truly parallel invocation pattern.
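To make the one-event-per-invocation pattern concrete, the following is a minimal handler sketch in Python; the handler name and the print-based processing are illustrative assumptions rather than part of the sample event above. Each invocation receives exactly one event, and Lambda scales out by running as many concurrent instances of this handler as there are in-flight events.

def lambda_handler(event, context):
    # Invoked once per event; concurrency comes from Lambda running many
    # instances of this handler in parallel, not from batching events.
    for key, record in event.get("datasetRecords", {}).items():
        # Illustrative processing: log the proposed change for each record.
        print(f"{key}: {record.get('oldValue')} -> {record.get('newValue')} ({record.get('op')})")

    # The event source invokes the function synchronously, so the handler
    # returns the event (with datasetRecords optionally modified).
    return event

Because each instance handles only one event, total throughput is governed by how many instances run concurrently, not by how quickly a single instance can loop over a batch.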