React Native to Microservices

Originally published at dev.to · 2 min read

In the React Native world we mostly only worry about calling an endpoint and receiving some data back from it. We are usually blind to all the work that happens to retrieve that data. In this post I am going to trace an API call from the moment it leaves the device until the response is received, in a microservices context, from a naive point of view.
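On the app side, the whole journey starts with a single request. Here is a minimal sketch of the kind of call this post traces, assuming a hypothetical api.example.com backend and a /videos/1 endpoint:

```typescript
// Minimal sketch: the request the rest of this post follows.
// api.example.com and /videos/1 are hypothetical placeholders.
async function fetchVideo(id: number) {
  const response = await fetch(`https://api.example.com/videos/${id}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json(); // video data + metadata returned synchronously
}
```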

The API call first goes to a Domain Name System (DNS) server to resolve where it should go. If I am in Italy, the call will be sent to the nearest load balancer to Italy. This is made possible by geolocation routing, where the DNS answer depends on where the client is located.
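A rough sketch of the idea behind geolocation routing, using a made-up region-to-load-balancer table; real DNS services (Route 53, for example) do this at the resolver level, not in application code:

```typescript
// Hypothetical geolocation routing table: client region -> load balancer address.
// The regions and IPs are illustrative only.
const regionToLoadBalancer: Record<string, string> = {
  "eu-south": "203.0.113.10", // e.g. a client in Italy
  "us-east": "198.51.100.20",
  "ap-southeast": "192.0.2.30",
};

function resolveNearestLoadBalancer(clientRegion: string): string {
  return regionToLoadBalancer[clientRegion] ?? regionToLoadBalancer["us-east"];
}
```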

After the nearest load balancer has been identified, the API call is sent to it. A load balancer makes sure that when too many users are in bed watching TikToks, the servers are not overwhelmed by too many requests at once. It distributes the requests across servers in a manageable way.
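One of the simplest distribution strategies is round robin, where requests are handed to servers in turn. A toy sketch, assuming a hypothetical list of upstream servers:

```typescript
// Toy round-robin balancer: hands each incoming request to the next server in turn.
// The server list is a made-up example.
const upstreams = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"];
let nextIndex = 0;

function pickUpstream(): string {
  const upstream = upstreams[nextIndex];
  nextIndex = (nextIndex + 1) % upstreams.length;
  return upstream;
}
```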

The load balancer can also route the API request to the server responsible for processing it. For example, /videos/1 should go to the videos server instead of the pictures server, so the load balancer forwards the /videos/1 request to the videos server.
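Path-based routing like this is often just a prefix-to-service lookup. A simplified sketch, with hypothetical internal service addresses:

```typescript
// Hypothetical path-based routing table: URL prefix -> internal service address.
const routes: Array<{ prefix: string; service: string }> = [
  { prefix: "/videos", service: "http://videos-service.internal" },
  { prefix: "/pictures", service: "http://pictures-service.internal" },
];

function routeRequest(path: string): string | undefined {
  // "/videos/1" matches the "/videos" prefix and goes to the videos service.
  return routes.find((route) => path.startsWith(route.prefix))?.service;
}
```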

Once the API request has arrived at the videos server, that server will also communicate with other microservices: analytics, view counts, metadata, related videos, and so on. The synchronous part of the request, the video data and metadata for example, is returned to the React Native app. The asynchronous part, analytics for example, usually makes use of something called message queues to communicate with the other microservices.
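A sketch of how the videos service might split those two paths, using a hypothetical queue.publish helper to stand in for a real broker client (RabbitMQ, Kafka, SQS, etc.):

```typescript
// Hypothetical handler inside the videos service.
// queue.publish and loadVideo stand in for a real broker client and data layer.
async function handleGetVideo(
  id: number,
  queue: { publish: (topic: string, msg: object) => Promise<void> },
) {
  const video = await loadVideo(id); // synchronous path: data the app is waiting for

  // Asynchronous path: fire an event onto a message queue and move on.
  // Analytics, recommendations, etc. consume it on their own time.
  await queue.publish("video.viewed", { videoId: id, viewedAt: Date.now() });

  return video; // this is what travels back to the React Native app
}

// Placeholder for whatever storage the videos service actually uses.
async function loadVideo(id: number) {
  return { id, title: "Example video", metadata: { durationSeconds: 42 } };
}
```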

Message queues are queues usually read by other microservices; analytics, metadata, and recommendations services can each be subscribed to a message queue. Message queues are ideal because if a service goes down, the queue will hold the message until the service comes back up and acknowledges that the message was processed. This prevents losing messages along the way.
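The key detail is that a consumer only acknowledges a message after it has finished processing it. A rough sketch with a hypothetical consume/ack interface, loosely modelled on how brokers like RabbitMQ behave:

```typescript
// Hypothetical consumer interface; real brokers (RabbitMQ, SQS, ...) differ in details.
interface QueueMessage {
  body: string;
  ack: () => Promise<void>; // tell the broker the message was processed
}

async function consumeAnalyticsEvent(message: QueueMessage) {
  try {
    await recordAnalyticsEvent(JSON.parse(message.body));
    await message.ack(); // only acknowledged after successful processing
  } catch {
    // No ack: the broker keeps the message and redelivers it later.
  }
}

async function recordAnalyticsEvent(event: unknown) {
  // Placeholder for writing the event to an analytics store.
  console.log("analytics event", event);
}
```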

This is a high-level overview; there is so much in between these steps: SSL, caching, routing algorithms, message exchanges, producers, brokers, the CAP theorem, rate limiting, and so much more.


Thanks for the clear overview, Carlos! How do you usually handle failures or retries in those async message queues?

Thank you, James. Systems like RabbitMQ, Kafka, and SQS implement something called a dead letter queue (DLQ), another queue where messages go after the maximum retry count has been reached. The max retry count is the other half of how message queues handle failures: the queue retries delivery a set number of times, and only then moves the message to the DLQ.
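A rough sketch of that retry-then-dead-letter flow, using a hypothetical in-process queue; real brokers track delivery counts and route messages to the DLQ for you:

```typescript
// Hypothetical retry + dead letter flow; real brokers do this internally.
const MAX_RETRIES = 3;

interface Delivery {
  body: string;
  deliveryCount: number;
}

async function handleDelivery(
  delivery: Delivery,
  process: (body: string) => Promise<void>,
  mainQueue: Delivery[],
  deadLetterQueue: Delivery[],
) {
  try {
    await process(delivery.body);
  } catch {
    if (delivery.deliveryCount >= MAX_RETRIES) {
      deadLetterQueue.push(delivery); // retries exhausted: park it in the DLQ
    } else {
      // Put it back on the main queue with an incremented delivery count.
      mainQueue.push({ ...delivery, deliveryCount: delivery.deliveryCount + 1 });
    }
  }
}
```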

Hope this helps.
