Build a mechanism for search results to be paginated, or continued, so a client may get all the data available.
Thoughts:
- The search would return the first page of results.
- The search would also run count(*) on the same query to get the total number of matches available.
- If the total is greater than the number returned, generate a random search identifier and store the query, the last file ID returned, and the requesting user's ID, so the next request can continue from that file.
- A follow-up request would supply only this random search identifier. Verify that the requesting user's ID matches the ID of the user who created the paginated search.
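The steps above can be sketched as follows. This is a minimal in-memory illustration, not a real implementation: the `FILES` and `SEARCHES` stores, the function names, and the page size are all hypothetical stand-ins for a database-backed search service.

```python
import secrets

# Hypothetical stand-ins for real persistence: a file table and a
# table of stored (paginated) searches keyed by a random identifier.
FILES = [{"id": i, "name": f"file-{i}.txt"} for i in range(1, 26)]
SEARCHES = {}  # token -> {"user_id": ..., "last_id": ...}

PAGE_SIZE = 10

def search(user_id, page_size=PAGE_SIZE):
    """Initial search: return the first page, the total (count(*)),
    and a continuation token if more results remain."""
    total = len(FILES)  # stands in for count(*) on the query
    page = FILES[:page_size]
    token = None
    if total > len(page):
        # Random search identifier; store the last file ID and the
        # creating user's ID so the follow-up can resume and be checked.
        token = secrets.token_urlsafe(16)
        SEARCHES[token] = {"user_id": user_id, "last_id": page[-1]["id"]}
    return {"results": page, "total": total, "token": token}

def continue_search(user_id, token, page_size=PAGE_SIZE):
    """Follow-up: resume from the stored last file ID.
    Reject the request if the user did not create this search."""
    state = SEARCHES.get(token)
    if state is None or state["user_id"] != user_id:
        raise PermissionError("unknown token or wrong user")
    page = [f for f in FILES if f["id"] > state["last_id"]][:page_size]
    if page:
        state["last_id"] = page[-1]["id"]
    else:
        del SEARCHES[token]  # search exhausted; discard stored state
    return {"results": page, "token": token if page else None}
```

Keying the follow-up on a random identifier (rather than an offset) means the server controls the cursor, and the stored user ID check prevents one client from consuming another client's paginated search.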