In a real app, an endpoint is rarely just an endpoint. It is a tiny agreement between the backend, the network client, the request model, the repository, the use case, the UI state, and sometimes one very tired developer trying to remember whether this response returns JSON, bytes, a list, or a 200 with vibes.
This is the part of my codebase I keep coming back to: the path from repository to service locator. It is not the flashiest part of the app, but it decides whether new features feel smooth or whether every new API integration becomes a small argument with myself.
The shape I keep repeating
The project I worked on today has a fairly familiar Flutter structure:
- feature folders
- data sources for remote calls
- repositories for parsing and decision-making
- domain use cases
- a shared `DioClient`
- GetIt as the service locator
The pattern is not exotic, but it has saved me more times than I expected.
When I add a new endpoint, I usually move through the same mental chain:
- Add the URL constant.
- Add the method to the API service contract.
- Implement the Dio call.
- Expose it through the repository contract.
- Parse the response in the repository implementation.
- Wrap it in a use case.
- Register the use case, repository, and API service in `sl`.
- Finally, call it from the UI or bloc/cubit layer.
That sounds like a lot when written out. In practice, it gives each layer one job. The API service talks HTTP. The repository turns raw responses into app-friendly objects. The use case gives the UI a clean entry point. The service locator makes the whole thing reachable without manually wiring constructors everywhere.
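The use case step is the thinnest one, so it is easy to picture. A sketch of what I mean, using the `VendorRepository` and `sl` names that appear later in this post (the lack of a shared base class here is just a simplification):

```dart
// A thin use case: the one UI-facing entry point for this operation.
// Assumes dartz's Either and the GetIt instance `sl` used elsewhere in the app.
class GetVendorTransactionsUseCase {
  Future<Either<String, TransactionsResponse>> call({int page = 1}) {
    // No parsing and no HTTP details here; just delegation.
    return sl<VendorRepository>().getTransactions(page: page);
  }
}
```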
It is boring in a good way. And in mobile apps, "boring in a good way" is a compliment.
Where things got slightly less boring
The endpoints I worked on were not all the same shape.
Some were plain JSON requests:
```dart
final response = await sl<DioClient>().post(
  ApiUrls.createBooking,
  data: requestBody.toJson(),
  options: Options(
    headers: {'Authorization': 'Bearer $token'},
  ),
);
```
Some needed query parameters:
```dart
final response = await sl<DioClient>().get(
  ApiUrls.getBookings,
  queryParameters: {
    'page': page,
    if (startDate?.trim().isNotEmpty == true) 'startDate': startDate,
    if (endDate?.trim().isNotEmpty == true) 'endDate': endDate,
  },
  options: Options(
    headers: {'Authorization': 'Bearer $token'},
  ),
);
```
Some involved multipart uploads, which always make the room a little warmer:
```dart
final formData = FormData();
for (final entry in requestBody.toFormFields().entries) {
  formData.fields.add(MapEntry(entry.key, entry.value));
}
if (imagePath != null && imagePath.trim().isNotEmpty) {
  formData.files.add(
    MapEntry(
      'image',
      await MultipartFile.fromFile(
        imagePath,
        filename: imagePath.split(RegExp(r'[\\/]')).last,
      ),
    ),
  );
}
```
And then there were the document/image endpoints, where the response is not a neat JSON object at all. For those, the Dio options matter:
```dart
final response = await sl<DioClient>().get(
  ApiUrls.downloadDocument(documentId),
  options: Options(
    headers: {
      'Authorization': 'Bearer $token',
      'Accept': '*/*',
    },
    responseType: ResponseType.bytes,
  ),
);
```
That small `ResponseType.bytes` line is the kind of thing you forget once, then remember forever.
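On the repository side, the trick is to not pretend the bytes are a model. A sketch of how I handle that, where `downloadDocument` on the API service is a hypothetical method returning the raw Dio `Response` from the call above:

```dart
// Repository side of a byte download: validate, don't parse.
Future<Either<String, Uint8List>> downloadDocument(String documentId) async {
  final result = await sl<VendorApiService>().downloadDocument(documentId);
  return result.fold(
    (error) => Left(error),
    (data) {
      final response = data as Response;
      final bytes = response.data;
      // With ResponseType.bytes, Dio hands back raw bytes, not JSON.
      if (bytes is List<int> && bytes.isNotEmpty) {
        return Right(Uint8List.fromList(bytes));
      }
      return Left('Empty document response.');
    },
  );
}
```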
Why I do not let the UI talk directly to Dio
There was a time when I would have been tempted to call Dio directly inside a screen, especially if the endpoint looked small.
Today-me does not trust that version of me.
The screen should not care if the endpoint is GET, POST, multipart, or returning bytes. It should not know whether the backend sends a list directly or wraps it inside a data key. It should not know that one endpoint needs timezone as a query parameter while another one needs a generated ID in the URL.
So the API service stays close to HTTP:
```dart
abstract class VendorApiService {
  Future<Either<String, Response>> getTransactions({int page = 1});
}
```
Then the repository handles the translation:
```dart
class VendorRepositoryImpl extends VendorRepository {
  @override
  Future<Either<String, TransactionsResponse>> getTransactions({
    int page = 1,
  }) async {
    final result = await sl<VendorApiService>().getTransactions(page: page);
    return result.fold(
      (error) => Left(error),
      (data) {
        try {
          final response = data as Response;
          final transactions = TransactionsResponse.fromJson(
            (response.data as Map?)?.cast<String, dynamic>() ??
                const <String, dynamic>{},
          );
          return Right(transactions);
        } catch (e) {
          return Left('Failed to parse transactions.');
        }
      },
    );
  }
}
```
That separation pays off when something changes. If the backend adjusts a response shape, I usually fix the repository. If auth headers are wrong, I look at the API service or client. If the UI state is confused, I inspect the use case call and state handling.
It gives the bug a neighborhood.
The service locator is where I check my work
GetIt is not just a dependency bucket for me. It is also the final checklist.
When I register a new endpoint flow, I have to ask:
- Did I register the API service?
- Did I bind the repository interface to its implementation?
- Did I add the new use case?
- Am I using the same feature boundary as the rest of the app?
The registrations usually look simple:
```dart
final sl = GetIt.instance;

void setupServiceLocator() {
  sl.registerLazySingleton<VendorApiService>(
    () => VendorApiServiceImpl(),
  );
  sl.registerLazySingleton<VendorRepository>(
    () => VendorRepositoryImpl(),
  );
  sl.registerLazySingleton<GetVendorTransactionsUseCase>(
    () => GetVendorTransactionsUseCase(),
  );
}
```
But that simplicity is exactly why I like it. When a use case fails at runtime because it was never registered, the error is loud and direct. Annoying, yes, but honest.
The more dangerous bugs are the quiet ones: a method exists in the API service but not the repository, a repository parses the wrong model, or a use case accepts the wrong params and the UI starts passing half-correct data.
That is why I try to move through the chain in order. URL. Service. Repository. Use case. Service locator. UI. If I jump around too much, I eventually pay for it.
One small detail that mattered more than expected
A pattern I appreciated today was keeping request bodies responsible for serialization.
If the request already has a model, the endpoint method should not be hand-assembling maps. This:

```dart
data: requestBody.toJson(),
```

is much easier to trust than scattering field names across multiple screens.
For multipart requests, I still have to be more hands-on because files and repeated fields need special handling. But even there, I prefer pushing normal text fields through something like toFormFields() and letting the service only deal with the multipart-specific parts.
It keeps the code from slowly becoming a collection of tiny string mistakes.
And yes, a single wrong field name can waste thirty minutes. Sometimes more. I am not proud, I am just reporting from the field.
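The request model I have in mind looks roughly like this; the class name and field names here are made up for illustration, but the `toJson()`/`toFormFields()` split is the pattern from above:

```dart
class CreateListingRequest {
  final String title;
  final String description;
  final double price;

  const CreateListingRequest({
    required this.title,
    required this.description,
    required this.price,
  });

  // JSON body for plain endpoints.
  Map<String, dynamic> toJson() => {
        'title': title,
        'description': description,
        'price': price,
      };

  // String-only fields for multipart endpoints; files stay out of the model.
  Map<String, String> toFormFields() => {
        'title': title,
        'description': description,
        'price': price.toString(),
      };
}
```

Field names live in exactly one place either way, which is the whole point.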
The endpoint constants are not just strings
The app keeps endpoint paths centralized behind an ApiUrls style class. The real values come from environment configuration, but the usage stays clean:
```dart
class ApiUrls {
  static String get baseUrl => 'https://api.example.com/';

  static final createBooking = '${baseUrl}client/create-booking';

  static String bookingReceipt(String bookingId) {
    return '${baseUrl}payment/$bookingId/booking';
  }
}
```
I like this because dynamic endpoints become explicit functions instead of random interpolation inside service methods.
The tradeoff is that the constants file can grow large if I am not careful. Once an app has auth, client flows, vendor flows, analytics, notifications, payments, documents, chat, and logging, the endpoint list starts looking like a city map.
Still, I prefer one crowded map to endpoint strings hiding in twenty different files.
What I noticed while wiring everything
The endpoint layer quietly reveals the personality of the app.
In this project, some features are simple CRUD. Others are more workflow-heavy: bookings, contracts, payouts, profile updates, analytics, reviews, notifications, documents, and media. That means the network layer cannot assume every response is the same kind of JSON.
A few decisions made the code easier to reason about:
- keeping auth token reads inside the API service layer
- returning `Either` so failures stay explicit
- parsing inside repositories instead of UI code
- using use cases as the UI-facing API
- making file downloads return bytes instead of pretending they are normal models
- registering dependencies in one place so missing pieces are easier to spot
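Put together, the UI-facing side ends up small. A sketch of a cubit method, assuming state classes named `TransactionsLoading`, `TransactionsLoaded`, and `TransactionsError` (those names are illustrative, not from the project):

```dart
// The cubit calls the use case and folds the Either into states.
// It never touches Dio, headers, or response shapes.
Future<void> loadTransactions({int page = 1}) async {
  emit(TransactionsLoading());
  final result = await sl<GetVendorTransactionsUseCase>().call(page: page);
  result.fold(
    (error) => emit(TransactionsError(error)),
    (data) => emit(TransactionsLoaded(data)),
  );
}
```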
I also liked the small caching choice around relatively stable data. Not every endpoint deserves a network call every time. For data that rarely changes, a short-lived cache can remove friction without turning the app into a stale-data museum.
That is the kind of decision that is easy to skip when you are just trying to "make the endpoint work." But those little choices are where the app starts to feel more mature.
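The caching I have in mind is deliberately small: a timestamped value with a short TTL, checked before the network call. A minimal sketch (the five-minute default is an arbitrary choice, not a recommendation):

```dart
// A tiny in-memory cache: returns the stored value only while it is fresh.
class TimedCache<T> {
  final Duration ttl;
  T? _value;
  DateTime? _storedAt;

  TimedCache({this.ttl = const Duration(minutes: 5)});

  // Null means "go to the network"; otherwise use the cached value.
  T? get value {
    final storedAt = _storedAt;
    if (_value == null || storedAt == null) return null;
    if (DateTime.now().difference(storedAt) > ttl) return null;
    return _value;
  }

  void store(T newValue) {
    _value = newValue;
    _storedAt = DateTime.now();
  }
}
```

A repository can check the cache before calling the API service and store the parsed result on the way out, so the UI never knows the difference.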
The part I keep learning
Writing endpoints is not hard because of the syntax.
It is hard because each endpoint carries assumptions:
- Does it need auth?
- Is the body JSON or multipart?
- Does it return a map, list, file, or empty success?
- Are IDs in the path or query params?
- Should the repository parse it or pass through the raw response?
- Does the UI need a typed model, bytes, or just success/failure?
- Should this use case take a params object instead of three loose strings?
None of these questions are dramatic. But they are the difference between a codebase that stays calm and one that becomes stressful to touch.
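The params-object question is the one I answer "yes" to most often, because it costs a few lines and removes argument-order mistakes. A sketch, with illustrative fields matching the bookings query earlier:

```dart
// One typed parameter object instead of three loose values.
class GetBookingsParams {
  final int page;
  final String? startDate;
  final String? endDate;

  const GetBookingsParams({
    this.page = 1,
    this.startDate,
    this.endDate,
  });
}
```

A use case then takes a single `GetBookingsParams` argument, and adding a filter later is a model change instead of a signature change rippling through every caller.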
By the end of the day, the win was not only that endpoints were wired. The real win was that the flow still made sense after more endpoints were added.
That matters.
Because tomorrow another endpoint will arrive with one weird query parameter, one optional file, one backend response that does not look like the others, and one screen waiting for a clean result.
And when it does, I would rather follow a familiar path than negotiate with chaos again.