File uploads look simple until someone uploads a 2GB video.
Then suddenly your backend starts behaving differently:
- memory usage spikes
- requests take forever
- uploads fail halfway
- server threads stay blocked
- users retry the same upload again
A small file upload API and a production-ready large file upload API are not the same thing.
If you're building file upload functionality in Spring Boot, simply increasing the max upload limit is not enough.
Let's look at how to handle large file uploads properly.
Why large file uploads become a backend problem
For small files, basic multipart upload works fine.
For larger files, several issues appear quickly:
1. Memory pressure
A common mistake is loading the uploaded file entirely into memory.
Example:
byte[] bytes = file.getBytes();
Looks harmless.
But if users upload 500MB or 2GB files, this becomes dangerous fast.
That single line can destroy your application's memory stability.
2. Long request lifecycle
Large uploads naturally take longer.
That means:
- open HTTP connections for longer periods
- higher timeout risk
- blocked request threads
- poor user experience if retries happen
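Long-lived upload connections are usually handled at the configuration level. As a sketch (property names per the Spring Boot and embedded Tomcat documentation; the values here are illustrative and should be tuned for your traffic):

```properties
# How long Tomcat waits for request data before closing the connection
server.tomcat.connection-timeout=60s

# Timeout for async request processing, if uploads are offloaded to async handlers
spring.mvc.async.request-timeout=120s
```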
3. Storage bottlenecks
Saving directly to local disk may work in development.
In production:
- multiple instances may not share storage
- containers may lose uploaded files
- disk space becomes a hidden failure point
4. Broken validation strategy
If validation happens too late, you waste resources processing invalid uploads.
Examples:
- unsupported file types
- oversized payloads
- corrupted uploads
Reject bad uploads early.
How large file uploads should work
A cleaner production flow looks like this:
- Client sends multipart upload request
- Server validates request constraints
- File is streamed instead of fully loaded
- Storage layer handles persistence
- API returns upload metadata
Simple architecture.
Big difference in reliability.
Start by defining upload constraints.
spring.servlet.multipart.max-file-size=1GB
spring.servlet.multipart.max-request-size=1GB
This protects your application from uncontrolled uploads.
Without limits:
- accidental abuse becomes easy
- memory pressure becomes unpredictable
- infrastructure costs can spike
Limits are not optional.
The biggest mistake: loading entire files into memory
This pattern is dangerous:
byte[] bytes = file.getBytes();
Also risky:
String content = new String(file.getBytes());
Why?
Because the whole file gets loaded into memory before processing.
For large uploads, this is exactly what you want to avoid.
Instead, stream the content.
Better approach: stream the upload
Use InputStream instead of loading everything at once.
Example:
@PostMapping("/upload")
public ResponseEntity<String> upload(
        @RequestParam("file") MultipartFile file
) throws IOException {
    if (file.isEmpty()) {
        return ResponseEntity.badRequest().body("File is empty");
    }
    // Never trust the client-supplied name: strip any path components
    String filename = Paths.get(file.getOriginalFilename()).getFileName().toString();
    Path destination = Paths.get("/uploads").resolve(filename);
    try (InputStream inputStream = file.getInputStream()) {
        Files.copy(inputStream, destination, StandardCopyOption.REPLACE_EXISTING);
    }
    return ResponseEntity.ok("Upload successful");
}
Why this is better:
- lower memory usage
- safer under larger payloads
- simpler production behavior
- predictable resource consumption
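The constant memory usage comes from copying in fixed-size chunks. A minimal standalone sketch of what a streaming copy does under the hood (the 8KB buffer size is an arbitrary choice here):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {

    // Copies the stream in 8KB chunks, so memory use stays constant
    // regardless of how large the upload is.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```

This is essentially what `Files.copy` already does for you; the point is that at no moment does the full payload live in the heap.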
Separate upload logic from storage logic
Do not put everything inside the controller.
Bad:
@PostMapping("/upload")
public ResponseEntity<?> upload(MultipartFile file) {
    // validation
    // storage logic
    // business rules
    // response mapping
}
Cleaner structure:
- controller handles request
- service handles workflow
- storage layer handles persistence
Example:
@RestController
@RequiredArgsConstructor
public class FileUploadController {

    private final FileUploadService fileUploadService;

    @PostMapping("/upload")
    public ResponseEntity<String> upload(
            @RequestParam("file") MultipartFile file
    ) {
        fileUploadService.upload(file);
        return ResponseEntity.ok("Uploaded");
    }
}
Service:
@Service
@RequiredArgsConstructor
public class FileUploadService {

    private final StorageService storageService;

    public void upload(MultipartFile file) {
        storageService.store(file);
    }
}
This becomes much easier to maintain later.
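The StorageService above is only referenced, never defined. One possible shape, as a sketch (names are illustrative; taking an InputStream plus filename instead of a MultipartFile keeps the storage layer free of Spring web types, so adapt the service call accordingly):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative storage abstraction, decoupled from Spring web types.
interface StorageService {
    Path store(String filename, InputStream content) throws IOException;
}

// Local-disk implementation, suitable for single-instance deployments.
class LocalStorageService implements StorageService {

    private final Path root;

    LocalStorageService(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    @Override
    public Path store(String filename, InputStream content) throws IOException {
        // Strip client-supplied path components to prevent path traversal.
        Path destination = root.resolve(Path.of(filename).getFileName());
        Files.copy(content, destination, StandardCopyOption.REPLACE_EXISTING);
        return destination;
    }
}
```

Swapping this implementation for a cloud-backed one later only touches one class.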
Storage strategy matters
Local storage is okay for prototypes.
Production systems often need better options:
Local disk
Good for:
- prototypes
- internal tools
- single-instance deployments
Bad for:
- autoscaling environments
- Docker/Kubernetes deployments
- distributed systems
Cloud object storage
Examples:
- AWS S3
- Cloudflare R2
- MinIO
- Google Cloud Storage
Better for:
- scalability
- durability
- shared access across instances
- predictable infrastructure behavior
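With object storage the streaming principle is the same: pass the upload's InputStream straight through. A sketch using the AWS SDK for Java v2 (the bucket name is a placeholder, client setup is omitted, and the `software.amazon.awssdk:s3` dependency is assumed):

```java
S3Client s3 = S3Client.create();

PutObjectRequest request = PutObjectRequest.builder()
        .bucket("my-upload-bucket")   // placeholder bucket name
        .key(filename)
        .build();

// Stream the multipart upload directly to S3 without buffering it in memory
try (InputStream in = file.getInputStream()) {
    s3.putObject(request, RequestBody.fromInputStream(in, file.getSize()));
}
```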
Validate before expensive processing
Check uploads early.
Minimum checks:
if (file.isEmpty()) {
    throw new IllegalArgumentException("Empty file");
}
Size:
if (file.getSize() > MAX_UPLOAD_SIZE) {
    throw new IllegalArgumentException("File too large");
}
Content type (client-supplied, so treat it as a first filter, not proof):
if (!allowedTypes.contains(file.getContentType())) {
    throw new IllegalArgumentException("Unsupported file type");
}
Do this before storage work starts.
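The checks above can live in a single guard method. A sketch (the size limit and allow-list values are illustrative):

```java
import java.util.Set;

public class UploadValidator {

    static final long MAX_UPLOAD_SIZE = 1024L * 1024 * 1024; // 1GB, illustrative
    static final Set<String> ALLOWED_TYPES =
            Set.of("image/png", "image/jpeg", "application/pdf");

    // Rejects obviously invalid uploads before any storage work starts.
    // Content type is client-supplied, so this is a first filter, not proof.
    static void validate(long size, String contentType) {
        if (size <= 0) {
            throw new IllegalArgumentException("Empty file");
        }
        if (size > MAX_UPLOAD_SIZE) {
            throw new IllegalArgumentException("File too large");
        }
        if (contentType == null || !ALLOWED_TYPES.contains(contentType)) {
            throw new IllegalArgumentException("Unsupported file type");
        }
    }
}
```

Calling this at the top of the service method keeps the rejection path cheap and in one place.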
Common production mistakes
Loading everything into memory
Classic performance killer.
No upload size limits
This creates operational risk immediately.
Using local storage inside containers
Looks fine in dev.
Breaks badly in production.
No timeout awareness
Large uploads take time.
Your infrastructure must account for that.
Mixing upload and business logic
Creates messy controllers and painful maintenance.
Production checklist
Before shipping large upload support:
- upload size limits configured
- streaming instead of full buffering
- validation before processing
- storage abstraction in place
- timeout settings reviewed
- failure handling implemented
- infrastructure storage strategy chosen
If this list is incomplete, the implementation is not production-ready.
Final thoughts
Large file uploads are not complicated.
But careless implementations become expensive fast.
The winning approach is simple:
- set hard limits
- stream uploads
- validate early
- separate responsibilities
- choose proper storage
That gets you a stable upload API.
I recently built a similar production-ready upload architecture while working on Spring Boot backend tooling at BuildBaseKit.