I am having trouble wrapping my head around asynchronous/parallel requests using Python 3.5+ asyncio (more specifically aiohttp, I believe).
I have read multiple examples of how to make asynchronous requests with aiohttp, but they all seem to hit either a single static URL multiple times or a static, predefined list of URLs.
What I'm trying to accomplish is to send multiple (say, two) parallel requests at a time to a single REST API endpoint. The endpoint uses an offset counter to paginate records, so each iteration should increment the offset until all records the API can return are exhausted.
The REST API returns JSON data that looks like this:
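Something shaped like this (the field names are placeholders; the exact schema isn't important):

```json
{
    "limit": 10,
    "offset": 0,
    "results": [
        {"id": 1, "name": "record 1"},
        {"id": 2, "name": "record 2"}
    ]
}
```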
These repeated GET requests would return 45 records total, ten items at a time:
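For example (the endpoint path and parameter names here are placeholders):

```
GET https://example.com/api/records?limit=10&offset=0
GET https://example.com/api/records?limit=10&offset=10
GET https://example.com/api/records?limit=10&offset=20
GET https://example.com/api/records?limit=10&offset=30
GET https://example.com/api/records?limit=10&offset=40
```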
Given there are only 45 records, the last query would return no records at all, and the second-to-last query would return only five records (out of a maximum of ten).
The goal is to stop incrementing the offset and stop generating new requests once the number of records returned is less than the limit in the request (i.e., the records are exhausted). Any requests already in flight should be allowed to finish, and all the results should be combined into one list that can be processed later in the code, unrelated to the API request itself.
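To make the goal concrete, here is a minimal sketch of the pattern I have in mind. The actual aiohttp GET is stubbed out by a fake `fetch_page` (backed by an invented 45-record store) so the snippet runs standalone; `fetch_all` and all the names in it are placeholders, not a real API:

```python
import asyncio

LIMIT = 10
TOTAL = 45  # pretend the API holds 45 records


async def fetch_page(offset, limit=LIMIT):
    """Stand-in for an aiohttp GET; returns one page of fake records."""
    await asyncio.sleep(0)  # simulate network latency
    return [{"id": i} for i in range(offset, min(offset + limit, TOTAL))]


async def fetch_all(concurrency=2, limit=LIMIT):
    """Fetch pages `concurrency` at a time until a short/empty page appears."""
    results = []
    offset = 0
    done = False
    while not done:
        # Launch `concurrency` requests at consecutive offsets in parallel.
        batch = await asyncio.gather(
            *(fetch_page(offset + i * limit) for i in range(concurrency))
        )
        offset += concurrency * limit
        for page in batch:
            results.extend(page)
            if len(page) < limit:  # fewer than `limit` records => exhausted
                done = True
    return results


records = asyncio.run(fetch_all())
print(len(records))  # 45
```

With two workers and ten-record pages, the third batch returns a five-record page and an empty page, which trips the stop condition while still collecting all 45 records. In real code, `fetch_page` would open an `aiohttp.ClientSession` request against the endpoint instead of slicing a local list.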
My internet searches have turned up very little on paginating requests specifically, so I hope someone on Stack Overflow can help me grasp the basics of what I am missing so I can learn and build upon it.