coates’s Twitter Archive—№ 22,347

  1. There’s not a reasonable way to debounce two or more exact-same-payload requests to distributed “serverless” http workers (on a very tight (ms) timeframe) without a whole lot of locking infrastructure, right? We’d run into a race condition in the debouncer on a ms timeline…
    1. …in reply to @coates
      A friend + Faculty contractor (who I believe is not on Twitter) shared an idea: push these events into a queue, and consume that queue with a concurrency of 1. It’s not foolproof, but it should work here (rough sketch below). Embarrassed that I didn’t think of this; we avoid having 1 of anything.
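
For what it’s worth, a minimal sketch of that shape in TypeScript, assuming an in-memory array as a stand-in for whatever durable queue you’d actually use, a made-up DEBOUNCE_MS window, and hypothetical names (enqueue, drainOnce, handle); it’s an illustration of the pattern, not a production implementation:

```typescript
// Sketch: single-consumer queue so duplicate payloads can be debounced
// without distributed locks. The in-memory array stands in for a real
// durable queue; enqueue/drainOnce/handle are hypothetical names.
import { createHash } from "node:crypto";

type Event = { payload: string; receivedAt: number };

const queue: Event[] = [];                   // stand-in for the real queue
const DEBOUNCE_MS = 250;                     // assumed debounce window
const lastSeen = new Map<string, number>();  // payload hash -> last handled time

// Producers (the serverless workers) just enqueue; no coordination needed.
function enqueue(payload: string): void {
  queue.push({ payload, receivedAt: Date.now() });
}

async function handle(payload: string): Promise<void> {
  // The real side effect (webhook call, DB write, ...) would go here.
  console.log("handling", payload);
}

// The single consumer: with a concurrency of 1, the check-then-set on
// `lastSeen` is race-free without any locking infrastructure.
async function drainOnce(): Promise<void> {
  const event = queue.shift();
  if (!event) return;

  const key = createHash("sha256").update(event.payload).digest("hex");
  const last = lastSeen.get(key);

  if (last !== undefined && event.receivedAt - last < DEBOUNCE_MS) {
    return; // exact-same payload seen within the window: drop it
  }

  lastSeen.set(key, event.receivedAt);
  await handle(event.payload);
}

// Example: three identical events arriving within a few ms collapse to one.
enqueue('{"user":42,"action":"sync"}');
enqueue('{"user":42,"action":"sync"}');
enqueue('{"user":42,"action":"sync"}');
(async () => {
  while (queue.length > 0) await drainOnce();
})();
```

The single consumer is the whole trick: only one reader ever touches the dedupe state, so the ms-scale race the distributed workers would hit simply can’t occur, at the cost of serializing the work through that one consumer.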