Original post

Hi,

I've seen some tutorials on using various frameworks for handling REST API calls. Very slick stuff; having done this in Java and NodeJS in the past, it just looks so much easier/better in Go, though interfacing with back-end code is still a mystery to me.

One thought I had was a synchronous API endpoint that talks over gRPC to some back-end service(s). Let's say those services take a while, a second or so, maybe longer if they rely on third-party libraries. There are a couple of ways to take this. The way I would normally consider is making the API endpoint asynchronous: return a 202 or something with a generated transaction ID that can later be matched up to a callback response from my service.

The other way, which is what I was actually thinking (assuming there isn't huge demand on API resources, e.g. thousands of open sockets each waiting a second because thousands of requests per second are coming in), is to keep the endpoint synchronous: the handler creates a channel, spawns a goroutine that sends the request to the back end, and a separate daemon goroutine listens for responses from the back end. Maybe that's over a message bus, with all back ends sending messages back to the front end on the same queue/topic; the single listener goroutine pulls the messages off, uses some metadata in the message to identify the channel belonging to the waiting handler, and sends the response there so the handler can reply and close out the connection.
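Something like this rough sketch is what I'm picturing for the channel + routing-listener version. The names are all made up on my end (publishRequest, the transaction_id field, the busMessages channel stand in for whatever gRPC/message-bus client would really be used):

```go
// Rough sketch of the channel + routing-listener idea. Assumptions (mine,
// not from any framework): responses arrive as JSON with a "transaction_id"
// field, and publishRequest / busMessages stand in for the real gRPC or
// message-bus client.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"net/http"
	"sync"
	"time"
)

// pending maps transaction IDs to the channel each handler is waiting on.
var (
	mu      sync.Mutex
	pending = map[string]chan []byte{}
)

type backendResponse struct {
	TransactionID string          `json:"transaction_id"`
	Payload       json.RawMessage `json:"payload"`
}

func newTxID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func handler(w http.ResponseWriter, r *http.Request) {
	txID := newTxID()
	ch := make(chan []byte, 1) // buffered so the listener never blocks on a slow handler

	mu.Lock()
	pending[txID] = ch
	mu.Unlock()
	defer func() { // clean up so timed-out requests don't leak map entries
		mu.Lock()
		delete(pending, txID)
		mu.Unlock()
	}()

	// Fire the request toward the back end (gRPC call, bus publish, etc.).
	go publishRequest(txID, r.URL.Query().Get("input"))

	select {
	case payload := <-ch:
		w.Write(payload)
	case <-time.After(5 * time.Second):
		http.Error(w, "backend timeout", http.StatusGatewayTimeout)
	}
}

// listener is the single daemon goroutine that pulls responses off the bus
// and routes each one to the handler waiting on the matching channel.
func listener(busMessages <-chan []byte) {
	for msg := range busMessages {
		var resp backendResponse
		if err := json.Unmarshal(msg, &resp); err != nil {
			continue
		}
		mu.Lock()
		ch, ok := pending[resp.TransactionID]
		mu.Unlock()
		if ok {
			ch <- resp.Payload
		}
	}
}

// publishRequest is a placeholder for the real back-end call / bus publish.
func publishRequest(txID, input string) {}

func main() {
	busMessages := make(chan []byte) // stand-in for a real bus subscription
	go listener(busMessages)
	http.HandleFunc("/work", handler)
	http.ListenAndServe(":8080", nil)
}
```

The pending map plus the buffered channel is the part I'm least sure about; the defer is there so an abandoned/timed-out request doesn't leak a map entry.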

First, I am trying to understand IF that could work, and if so, whether it makes sense to code it the way I described. The idea of a separate goroutine running and listening for messages from back-end services, then sending the response/data over a specific channel, seems like more trouble than just having each API handler wait for its own goroutine to finish with the response from the back end, without opening a channel at all.
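For comparison, the simpler version I mean would be roughly this, given that net/http already runs every handler in its own goroutine anyway (callBackend is a made-up stand-in for the real gRPC call):

```go
// Rough sketch of the simpler option: net/http already gives every request
// its own goroutine, so the handler can just call the back end with a
// deadline and write the result. callBackend is a made-up stand-in for the
// real gRPC / third-party call.
package main

import (
	"context"
	"net/http"
	"time"
)

func handler(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 5*time.Second)
	defer cancel()

	result, err := callBackend(ctx, r.URL.Query().Get("input"))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	w.Write(result)
}

// callBackend simulates the ~1 second back-end call, honoring cancellation.
func callBackend(ctx context.Context, input string) ([]byte, error) {
	select {
	case <-time.After(1 * time.Second):
		return []byte("done: " + input), nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func main() {
	http.HandleFunc("/work", handler)
	http.ListenAndServe(":8080", nil)
}
```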

In either case, it seems the API handler blocks the consumer, even if it is just for 1 second. If that consumer makes dozens of the same call and wasn't written to do so in some threaded/web-worker manner, those waits stack up and the consumer could end up blocked for a long time.
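To illustrate that consumer-side concern: if the consumer fires its dozens of calls concurrently, the ~1 second per call overlaps instead of stacking up; if it loops serially, it sits blocked for dozens of seconds. A throwaway Go example of the concurrent version (the localhost URL is just a placeholder):

```go
// Throwaway example of a consumer firing its calls concurrently instead of
// serially; the total wait is roughly one call's latency, not the sum.
// The localhost URL is a placeholder.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 24; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := http.Get(fmt.Sprintf("http://localhost:8080/work?input=%d", n))
			if err != nil {
				fmt.Println("request", n, "failed:", err)
				return
			}
			resp.Body.Close()
			fmt.Println("request", n, "status:", resp.Status)
		}(i)
	}
	wg.Wait()
}
```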

Thus, my thinking is that making the API request async would be a much better way to go: each consumer registers a callback URL, my API returns a 202 (with a generated value that is later used to match up the callback request), and my endpoint spawns a goroutine or sends a message over the message bus (fire and forget) that, once done, results in the callback being called.
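A rough sketch of what I mean, with made-up field names (callback_url, transaction_id) and a fake doWork standing in for the slow back-end call:

```go
// Rough sketch of the async 202 + callback idea. The request/field names
// and doWork are assumptions on my part, not from any specific framework.
package main

import (
	"bytes"
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"net/http"
	"time"
)

type submitRequest struct {
	CallbackURL string `json:"callback_url"`
	Input       string `json:"input"`
}

type callbackBody struct {
	TransactionID string `json:"transaction_id"`
	Result        string `json:"result"`
}

func submitHandler(w http.ResponseWriter, r *http.Request) {
	var req submitRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	txID := newTxID()

	// Fire and forget: do the slow work off the request goroutine,
	// then call the consumer back at the URL it registered.
	go func() {
		result := doWork(req.Input) // the ~1 second back-end / third-party call
		body, _ := json.Marshal(callbackBody{TransactionID: txID, Result: result})
		http.Post(req.CallbackURL, "application/json", bytes.NewReader(body))
	}()

	// Return immediately with the ID the consumer will use to match the callback.
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(map[string]string{"transaction_id": txID})
}

func newTxID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func doWork(input string) string {
	time.Sleep(1 * time.Second) // simulate the slow back end
	return "processed: " + input
}

func main() {
	http.HandleFunc("/submit", submitHandler)
	http.ListenAndServe(":8080", nil)
}
```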

What do you all think?

submitted by /u/sckmaarih