[BREAKING] New Search API Result Limits


76 Comments

  • Rob Tihanyi

Bryan - Community Manager The incremental APIs are fine if you are just dumping bulk items as they are updated. They are not much good, though, if you want to find a specific block of info, which can also be a "bulk, repeated" process. For example, running a monthly report on how many new customers signed up on a specific journey.

Salvador Vazquez If you are true to the desire to create a reliable and performant solution, then perhaps you should look at how to simplify the new way to create secondary queries - let's call it "double-pagination" for now. The current process provides a "next page" link for each page. Why don't you add a "next query" link as well, one that makes it easy for us to issue a new query for the next 1,000 results? It would only appear on the 10th page of any large query, and would carry the parameters needed to continue. Of course, you have to hope that none of the data changes while you are running the additional queries...!
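Purely as illustration (hypothetical - "next_query" is not a real field in the Search API today, and the URL below is a placeholder), a 10th-page response under this proposal might look like:

```python
# Hypothetical sketch of the proposed "double-pagination" response.
# "next_query" does NOT exist in the Search API today; it is what the
# 10th (final) page could return to hand over the follow-up query.
page_10_response = {
    "results": ["...the last 100 of the 1,000 allowed results..."],
    "next_page": None,  # the current hard stop
    "next_query": (
        "https://yourcompany.zendesk.com/api/v2/search.json"
        "?query=type:ticket+created>2019-10-01&sort_by=created_at"
    ),
}
```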

    I think this whole shift was poorly thought through. I hope you had a major crisis you needed to avert by doing it, because reporting is one of the great strengths of Zendesk over other tools - and you have massively dented that feature in one fell swoop.

  • Kevin Goold

    May I suggest a compromise?

Instead of dropping it from 200k to 1k, how about setting it at 10k, at least until you have resolved the situation with the Power BI connector?

That would be sufficient for many organisations such as ours to pull the most recent data just once a night into our Power BI dashboard.

Alternatively, how about a fair use policy? It seems that you're penalising everyone just because you have a few heavy users.
  • Sebastiaan Wijchers

For those who have missed it, Julio has described a good workaround here (on page 2), one that's not that hard to implement:
https://develop.zendesk.com/hc/en-us/articles/360022563994/comments/360003420813

It works like a charm when you need the search endpoints, as not all cases can be covered by the incremental export or the (heavily rate-limited) view endpoints.
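For anyone who wants to try it, here is a minimal sketch in Python with the requests library. The subdomain and credentials are placeholders, and the restart-on-cap logic is my own reading of Julio's idea, not an official recipe:

```python
import requests

# Placeholders -- substitute your own subdomain and API token.
SUBDOMAIN = "yourcompany"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")
CAP = 1000  # hard limit on results per search query

def search_all(query):
    """Collect more than CAP results by re-issuing the search from the
    created date of the last item seen, deduplicating on id."""
    seen_ids, results = set(), []
    cursor = None  # created date (YYYY-MM-DD) of the last item so far
    while True:
        q = query if cursor is None else f"{query} created>={cursor}"
        url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/search.json"
        params = {"query": q, "sort_by": "created_at", "sort_order": "asc"}
        fetched = 0
        while url:  # normal pagination within one query
            resp = requests.get(url, params=params, auth=AUTH)
            if resp.status_code == 422:
                break  # past the cap; restart the query from a new cursor
            resp.raise_for_status()
            page = resp.json()
            for item in page["results"]:
                fetched += 1
                if item["id"] not in seen_ids:
                    seen_ids.add(item["id"])
                    results.append(item)
            url, params = page.get("next_page"), None
        if fetched < CAP or not results:
            return results  # the last pass fit under the cap: done
        next_cursor = results[-1]["created_at"][:10]
        if next_cursor == cursor:
            return results  # more than CAP items share one date; stuck
        cursor = next_cursor
```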

  • Rudresh Pramanik

We need to extract Zendesk data as part of our integration solutions, and it's very common for the number of records to go over 1,000. This limitation is causing all our existing integrations to break.

Is there a workaround for this now? Dropping from 200,000 to 1,000 seems very unreasonable when existing integration solutions are built on this simple service. Is there a way we can offset the records? Querying with a date parameter in any form is not a viable option for us. Please suggest one, or increase the limit to a reasonable number.

  • Ash Parker

Flow (or Power Automate, as it is now known) has broken Zendesk connectors too. I finally found the time to automate a few key things for business intelligence, and three months later it's all broken and I am back where I started - with nothing. Cheers.

  • Igal Dar

Ash Parker Flow's Zendesk connector has always been eons behind the API, regardless of this thread's topic. I suggest not using it but using the HTTP connector instead, which lets you use everything in the API. The only issue is that you will need to pass your credentials/token into every flow as a parameter.

  • Chris Plapp

    Hi Bryan - Just wanted to let you know that as of today, still no extension has been granted and I have never heard from our rep on this topic.

  • Andrea De Togni

    Hi

I can second pretty much EVERYONE on this page: this abrupt, massive limitation is breaking several companies' BI, including ours.
We're using Power BI and, out of nowhere, it now throws an error. (By the way, you could have returned a proper object with a message like "limit reached" instead of a 422, so at least it would not have broken the connector!)

200,000 to 1,000 is NOT an "improvement of the service level" of your API; it's basically removing a service that is used by countless clients of yours. Of course your servers are working faster now that you've cut their load 200-fold!

Changing the way we get the data requires a major rework of our solutions, with associated cost, delay, and disruption of service. If you had informed us, we would have acted accordingly from the beginning instead of hurrying to find a patch.

On 30 April this page still reported a limit of 200,000:
https://web.archive.org/web/20190430232132/https://developer.zendesk.com/rest_api/docs/support/search
30 April to 15 October is 5.5 months.

    Your options are all very expensive:

1) Switching to the Incremental Export is a massive change, as it's a completely different method AND it requires us to stage the data in a data warehouse rather than going directly to Power BI - because Power BI works in-memory with the full dataset loaded.

2) Switching to the CSV export is not easy either, as it's a very different data model AND it's not meant to be run automatically every day or so.

3) Paginating the report as you suggest, with a date filter, is again complicated, because we need to iterate the request to find "when and where" it breaks 1,000 rows, and because there is no way to do it in Power BI.

Therefore:
- I'm going to send you an email with a request for an extension.
- Can't you revert to 200,000 until the Power BI connector is fixed?
- Can you at least reduce from 200,000 to 10,000 for everyone? That would probably have generated far fewer headaches...



  • Greg Sohl

Zendesk - what is the new way to retrieve a large number of records (more than 1,000 users, orgs, etc.) that match only a specified tag or tags?

    From my current understanding, after this change, the only way to do that is to export my entire set of data and search locally. Is that correct?

  • Andrea De Togni

Greg Sohl well, basically you can't. The simple answer is that there is no easy way to get a set of items larger than 1,000. You have to rely on the Incremental Export, which is a completely different way of working.
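A minimal sketch of that route in Python (subdomain and credentials are placeholders; the endpoint and fields come from the time-based Incremental Exports API, and the same pattern works for incremental/users.json and incremental/organizations.json):

```python
import time
import requests

# Placeholders -- substitute your own subdomain and API token.
SUBDOMAIN = "yourcompany"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")

def tickets_with_tag(tag, start_time=0):
    """Pull every ticket via the time-based Incremental Export API and
    keep only those carrying the given tag; filtering happens locally."""
    url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/incremental/tickets.json"
    params = {"start_time": start_time}
    matches = []
    while True:
        resp = requests.get(url, params=params, auth=AUTH)
        resp.raise_for_status()
        data = resp.json()
        matches.extend(t for t in data["tickets"] if tag in t.get("tags", []))
        if data.get("end_of_stream"):
            return matches
        params = {"start_time": data["end_time"]}
        time.sleep(6)  # the incremental endpoints allow ~10 requests/minute
```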

  • Sebastiaan Wijchers

    Greg Sohl & Andrea De Togni

It's still possible with an easy workaround. As I mentioned above, Julio has described it here (on page 2), and it's not that hard to implement:
https://develop.zendesk.com/hc/en-us/articles/360022563994/comments/360003420813

It works like a charm when you need the search endpoints, as not all cases can be covered by the incremental export or the (heavily rate-limited) view endpoints.

  • Greg Sohl

Sebastiaan Cools, Julio Garcés Teuber, Andrea De Togni

Sebastiaan, thanks for pointing me to Julio's solution. I was thinking the same but had not worked out the technique yet. It's the same thing we do for paged requests to many other things.

I might suggest that the potential overlap of data retrieval in Julio Garcés Teuber's technique could be avoided if order_by allowed sorting by ID. Then subsequent queries with > {last ID} would only get the next records. Any other needed sorting could be performed in the client.
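Something like this - a purely hypothetical sketch, since it assumes the Search API accepted sorting by ID (which is exactly the open question) and uses a made-up fetch_first_1000 helper standing in for the capped search call:

```python
# Hypothetical: assumes the Search API accepted sorting by id, which is
# the open question above. fetch_first_1000(query, sort_by) is a made-up
# helper standing in for one capped (<= 1,000 results) search call.
def search_by_id_cursor(query, fetch_first_1000):
    last_id, results = 0, []
    while True:
        batch = fetch_first_1000(f"{query} id>{last_id}", sort_by="id")
        if not batch:
            return results
        results.extend(batch)  # no overlap: every item has id > last_id
        last_id = batch[-1]["id"]
```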

    Thanks again.

    Greg

  • Andrea De Togni

Sebastiaan Wijchers while I appreciate the inventiveness of Julio Garcés Teuber, this method is hardly a solution.

- It requires an outer loop around your API connector, where you have to read the last page's ticket time, save the value, and re-run the query with it

- It requires an additional layer of deduplication for tickets returned twice

- It does not really solve the existing issue with Power BI or other enterprise connectors.

I also second Julio's opinion. The limitation CAN be avoided by using an outer loop, and you're actively promoting it: if we all start using this method, you will not have gained any benefit from scaling requests down, because 10 queries of 10 pages each are the same as 1 query of 100 pages (to be technically correct, due to the overhead, it's actually worse).

    So, why? 

  • Andrea De Togni

    Hi

There are many posts on the Power BI forums regarding this issue:

    https://community.powerbi.com/t5/Desktop/DataSource-Error-with-zendesk/m-p/832748

    https://community.powerbi.com/t5/Desktop/Connection-to-PowerBI-Zendesk-Connector-broken/m-p/841703

    https://community.powerbi.com/t5/Desktop/Zendesk-error/m-p/839810

    https://community.powerbi.com/t5/Desktop/I-Cannot-Get-Data-from-Zendesk-using-the-Power-BI-Zendesk-Beta/m-p/827673

Someone posted an idea:

    https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/38892475-update-zendesk-beta-plugin-to-support-new-zendesk

and a few of us, including myself, have opened tickets with Microsoft.

    So I *hope* Microsoft is looking into it. 

MEANWHILE, it would be great if you could show a little more flexibility and some understanding of those of us on the other side. Increasing the limit at least a bit would allow us to keep working while we (or Microsoft) work on a more enterprise-grade solution.

By the way, I tested your suggested solution with Segment (I used Stitch, same logic). While it "works" (somewhat - I had a few replication errors), it's very complex (Zendesk -> Stitch -> Azure DWH -> Power BI) and expensive, as you have to pay both Stitch for the ETL *and* Azure for the data.
  • Carly Lucas-Melanson

How could this not have been resolved with an optional parameter in the API? If a query is known to return more than 1,000 results, developers and search users could choose to check a box or pass a parameter to return all results; if that parameter isn't there, don't run the unlimited search. This is a frustrating decision by your team, and a poorly executed one, and it definitely erodes trust in the product/company. The workaround suggested by a previous commenter results in significant computational overhead for any application pulling large queries, and is not an "easy fix" as your moderator suggested. Please consider changing this in the future.

  • JohnCollins

I'd wager that this change was implemented to stop this kind of activity from a competitor, and we are all now suffering broken BI extracts and less utility from Zendesk as a result. https://docs.helpscout.com/article/279-moving-from-zendesk

