Quote:
Originally Posted by sarettah
No, filtering of the data was not in the original, but using it almost demanded either caching the XML or using a database, so filtering at our end made sense.
If they are going to go to an API call on every page hit (which is the only way using the client IP properly could work, imho), then we move away from caching on our end and depend on filtering on their end. Other things in there make me think that way, such as being able to request that a certain number of records be skipped. That would be to allow for pagination, I would think.
My version of the current usage is to pull everything into a database and then pull from there to build the filtered cache files that my sites use.
I do one call to the API every 5 minutes, and that feeds out to 17 different sites, each one niched differently.
Changing that up to an API call on every page, plus the filtering and so on, will be a bitch and a half.
I have set up in a similar manner to you, as it was far too slow to suck the feed down on each page load.
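For anyone following along, the pattern both of us are describing boils down to something like the sketch below. It is only a rough illustration, not the real code either of us runs: the feed URL, the performer element name and the niche field are all made up since the actual API details aren't spelled out in this thread, and I've used Python just to keep it short.

```python
# Sketch of the "pull once, cache locally, filter per niche" approach.
# FEED_URL, the <performer> tag and the niche field are hypothetical.
import json
import sqlite3
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/api/feed"        # hypothetical endpoint
DB_PATH = "feed_cache.db"
NICHE_FILES = {"blonde": "cache_blonde.json",    # one cached file per niched site
               "milf": "cache_milf.json"}

def refresh_cache():
    # One call to the API; everything gets stored locally.
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        root = ET.fromstring(resp.read())

    records = [{child.tag: child.text for child in perf}
               for perf in root.findall("performer")]   # hypothetical tag name

    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS performers "
                "(id TEXT PRIMARY KEY, niche TEXT, data TEXT)")
    con.execute("DELETE FROM performers")
    con.executemany(
        "INSERT INTO performers (id, niche, data) VALUES (?, ?, ?)",
        [(r.get("id"), r.get("niche", ""), json.dumps(r)) for r in records])
    con.commit()

    # Write one filtered cache file per niche; the sites read these files
    # instead of ever touching the API themselves.
    for niche, path in NICHE_FILES.items():
        rows = con.execute("SELECT data FROM performers WHERE niche = ?",
                           (niche,)).fetchall()
        with open(path, "w") as fh:
            json.dump([json.loads(data) for (data,) in rows], fh)
    con.close()

if __name__ == "__main__":
    while True:
        refresh_cache()
        time.sleep(300)   # one API call every 5 minutes, as described above
```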
The pro of this would be that models would no longer send us BS DMCA notices; the con is the huge amount of work to adapt existing sites. However, with some tweaks to how they handle filtering, that might not be a con.
Your ideas are great - skipping records for pagination, filtering results, etc. If they are currently working on this, then now would be the optimal time to put in some requests.
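If it does go the per-page-call route, I'd guess the request ends up looking something like this. Again, this is purely a guess at the shape: the endpoint and every parameter name here (ip, niche, skip, limit) are hypothetical, nothing is confirmed by them.

```python
# Speculative sketch of a per-page-load call with server-side filtering,
# pagination via skip/limit, and the visitor IP passed through.
import json
import urllib.parse
import urllib.request

API_URL = "https://example.com/api/feed"   # hypothetical endpoint

def fetch_page(client_ip, niche, page, per_page=24):
    params = urllib.parse.urlencode({
        "ip": client_ip,           # visitor IP, so geo/blocking happens on their end
        "niche": niche,            # filtering on their end instead of ours
        "skip": page * per_page,   # the "skip N records" idea, i.e. pagination
        "limit": per_page,
    })
    with urllib.request.urlopen(f"{API_URL}?{params}", timeout=10) as resp:
        return json.loads(resp.read())

# e.g. fetch_page("203.0.113.7", "blonde", page=2) on every single page load,
# which is exactly the latency the cached approach above avoids.
```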
Not quite sure who to contact about that these days. I attempted to contact Kitt (on a totally unrelated matter) and received a response from general support instead - a shame, as Steve was awesome to deal with.