• Resolved marekszewczyk

    (@marekszewczyk)


    Is there a way to get a large amount of data from WPGraphQL concurrently?

    I’m making a website with Gatsby.js, where WordPress is the main source of data. There are almost 60k posts and even more media items. The server is quite powerful (16 CPUs, 32 GB of RAM), but getting data from WordPress is still a huge bottleneck.

    I’ve raised the maximum number of returned objects to 1000. The server can handle 60 concurrent requests in 16 seconds, each one returning, for example, 1000 media items. But because the only pagination method available in WPGraphQL is cursor-based, I can’t send concurrent requests: each request needs the endCursor from the previous response as its after parameter. That serialization means about 200 seconds spent fetching 60k objects that could be returned concurrently in 16 seconds. Am I missing something? Is there any solution?

    As a result, a full build of my website takes 80 minutes, and 90% of that time is spent getting data from WordPress.

Viewing 1 replies (of 1 total)
  • Plugin Author Jason Bahl

    (@jasonbahl)

    @marekszewczyk you could register your own field in the graph that returns a list of post IDs, then use that list to chunk requests from Gatsby.

    Something like this (not thoroughly tested, but it gives you an idea):

    add_action( 'graphql_register_types', function() {
    
    	// Expose a flat list of published post IDs at the root of the graph.
    	register_graphql_field( 'RootQuery', 'contentIds', [
    		'type'    => [ 'list_of' => 'Int' ],
    		'resolve' => function() {
    			// 'fields' => 'ids' keeps the query lightweight: only IDs are fetched.
    			$query = new WP_Query( [
    				'fields'         => 'ids',
    				'post_type'      => \WPGraphQL::get_allowed_post_types(),
    				'post_status'    => 'publish',
    				'posts_per_page' => -1,
    			] );
    			return $query->posts ?? null;
    		},
    	] );
    
    } );
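    On the Gatsby/Node side, the consumer could look something like the sketch below: one request for the contentIds list, then concurrent chunked queries filtered by ID. This is a hedged sketch, not tested against a live site — GRAPHQL_ENDPOINT is a placeholder, and it assumes the posts connection’s where argument accepts an in list of IDs (which stock WPGraphQL exposes, but your schema may differ).

    ```javascript
    // Placeholder endpoint — replace with your WPGraphQL URL.
    const GRAPHQL_ENDPOINT = 'https://example.com/graphql';

    // Split an array of IDs into fixed-size batches (e.g. 1000 per request).
    function chunk(ids, size) {
      const out = [];
      for (let i = 0; i < ids.length; i += size) {
        out.push(ids.slice(i, i + size));
      }
      return out;
    }

    // Minimal GraphQL POST helper (Node 18+ has global fetch).
    async function gql(query, variables = {}) {
      const res = await fetch(GRAPHQL_ENDPOINT, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query, variables }),
      });
      return (await res.json()).data;
    }

    async function fetchAllPosts() {
      // 1. One request for the flat ID list from the custom field above.
      const { contentIds } = await gql('{ contentIds }');

      // 2. One request per chunk, all fired in parallel — no cursor chaining.
      const batches = await Promise.all(
        chunk(contentIds, 1000).map((ids) =>
          gql(
            `query ($in: [ID]) {
              posts(first: 1000, where: { in: $in }) {
                nodes { id title }
              }
            }`,
            { in: ids }
          )
        )
      );
      return batches.flatMap((b) => b.posts.nodes);
    }
    ```

    Because each chunk is addressed by explicit IDs rather than a cursor, no request depends on the previous one, so all 60 requests can run at once.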
  • The topic ‘Paginated and concurrent requests’ is closed to new replies.