Uploading files to AWS S3 in Spring Boot is simple.
Designing it properly for production is where most developers struggle.
Most implementations end up like this:
- controllers handling everything
- direct S3 calls everywhere
- no structure
- hard to scale later
In this guide, you’ll learn how to design a clean, scalable file upload system.
If you want a production-ready setup instead of building everything yourself:
https://buildbasekit.com/boilerplates/filora-fs-pro/
Quick Answer
The best way to upload files to S3 in Spring Boot:
- separate storage logic into a service layer
- avoid direct S3 calls in controllers
- use pre-signed URLs for scalability
Why use AWS S3
S3 is the de facto standard for file storage in backend systems.
- handles large files and high traffic
- highly scalable and durable
- no infrastructure management
- supports secure access control
When to use S3 vs local storage
Use S3 when:
- building production systems
- handling user uploads
- running multiple instances
Use local storage when:
- prototyping
- small internal tools
Avoid local storage in distributed systems.
High-level architecture
A clean file upload system should look like this:
- Controller → handles incoming requests
- Service → processes file logic
- Storage layer → interacts with S3
- Database → stores file metadata
This separation keeps your code maintainable.
Do not just upload files.
Store metadata like:
- file name
- unique ID
- S3 object key
- file size and type
- upload timestamp
- user reference
This enables search, access control, and tracking.
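A minimal sketch of that metadata as a plain Java record (field names and the key scheme are illustrative; in a real app this would typically be a JPA entity persisted alongside the upload):

```java
import java.time.Instant;
import java.util.UUID;

// Illustrative metadata model. Field names and the key scheme are
// assumptions -- adapt them to your domain.
record FileMetadata(
        UUID id,            // unique ID, generated by the backend
        String fileName,    // original name as sent by the client
        String objectKey,   // key of the object in the S3 bucket
        long sizeBytes,     // file size
        String contentType, // MIME type
        Instant uploadedAt, // upload timestamp
        String userId       // reference to the uploading user
) {
    // Convenience factory that fills in generated and derived fields.
    static FileMetadata of(String fileName, long sizeBytes,
                           String contentType, String userId) {
        UUID id = UUID.randomUUID();
        String key = "uploads/" + userId + "/" + id; // one common key scheme
        return new FileMetadata(id, fileName, key, sizeBytes,
                contentType, Instant.now(), userId);
    }
}
```

Keeping the S3 object key separate from the display name also lets you rename files without touching storage.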
Example API design
A simple API structure:
- POST /files → upload file
- GET /files/{id} → fetch metadata
- GET /files/{id}/url → get access URL
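One way those endpoints might look in a Spring controller. `FileService` and `FileMetadata` are hypothetical types from your service layer; note the controller never touches S3 directly:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

// Sketch only -- FileService and FileMetadata are assumed to be
// defined in your service layer.
@RestController
@RequestMapping("/files")
class FileController {

    private final FileService fileService;

    FileController(FileService fileService) {
        this.fileService = fileService;
    }

    // POST /files -> upload file
    @PostMapping
    ResponseEntity<FileMetadata> upload(@RequestParam("file") MultipartFile file) {
        return ResponseEntity.ok(fileService.upload(file));
    }

    // GET /files/{id} -> fetch metadata
    @GetMapping("/{id}")
    ResponseEntity<FileMetadata> metadata(@PathVariable String id) {
        return ResponseEntity.of(fileService.findMetadata(id)); // Optional-based lookup
    }

    // GET /files/{id}/url -> get a time-limited access URL
    @GetMapping("/{id}/url")
    ResponseEntity<String> url(@PathVariable String id) {
        return ResponseEntity.ok(fileService.presignedDownloadUrl(id));
    }
}
```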
Upload flow
- client sends upload request
- backend validates file
- file is uploaded to S3
- metadata is stored in database
- response returns file reference
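The flow above can be sketched as a plain service class. `StoragePort` and the in-memory metadata map are stand-ins for S3 and your real repository, and the size limit is an assumption:

```java
import java.time.Instant;
import java.util.*;

// Stand-in for the storage layer -- e.g. backed by S3 PutObject.
interface StoragePort {
    void put(String objectKey, byte[] content);
}

record StoredFile(String id, String fileName, String objectKey,
                  long sizeBytes, Instant uploadedAt) {}

class FileUploadService {
    private static final long MAX_SIZE_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap

    private final StoragePort storage;
    private final Map<String, StoredFile> metadataStore = new HashMap<>(); // stand-in for a DB

    FileUploadService(StoragePort storage) { this.storage = storage; }

    // validate -> upload to storage -> persist metadata -> return reference
    StoredFile upload(String fileName, byte[] content) {
        if (content.length == 0 || content.length > MAX_SIZE_BYTES) {
            throw new IllegalArgumentException("invalid file size");
        }
        String id = UUID.randomUUID().toString();
        String key = "uploads/" + id + "/" + fileName;
        storage.put(key, content);                  // upload the bytes
        StoredFile meta = new StoredFile(id, fileName, key,
                content.length, Instant.now());
        metadataStore.put(id, meta);                // store metadata
        return meta;                                // reference returned to the client
    }

    Optional<StoredFile> find(String id) {
        return Optional.ofNullable(metadataStore.get(id));
    }
}
```

Because the service only sees the `StoragePort` interface, the same flow works unchanged against S3 or an in-memory fake in tests.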
Keep storage logic abstract
Do not tightly couple your app with S3.
- define a storage interface
- implement S3 separately
- allow switching providers later
This makes your system flexible.
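A sketch of such an interface, with a local-disk implementation (handy for tests and prototypes). An S3-backed implementation would wrap the AWS SDK's `S3Client` behind the same interface; names here are illustrative:

```java
import java.io.IOException;
import java.nio.file.*;

// The rest of the app depends only on this interface, so the
// provider can be swapped later without touching business logic.
interface FileStorage {
    void store(String key, byte[] content) throws IOException;
    byte[] load(String key) throws IOException;
}

// Local-disk implementation. An S3Storage class implementing the
// same interface would delegate to the AWS SDK instead.
class LocalFileStorage implements FileStorage {
    private final Path root;

    LocalFileStorage(Path root) { this.root = root; }

    @Override
    public void store(String key, byte[] content) throws IOException {
        Path target = root.resolve(key);
        Files.createDirectories(target.getParent()); // keys may contain "/"
        Files.write(target, content);
    }

    @Override
    public byte[] load(String key) throws IOException {
        return Files.readAllBytes(root.resolve(key));
    }
}
```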
Security considerations
Always handle security properly:
- validate file type and size
- avoid exposing S3 bucket directly
- manage credentials securely
- use pre-signed URLs
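Validation can live in a small class of its own. The size limit and type allowlist below are illustrative, and remember the client-declared content type is untrusted input:

```java
import java.util.Set;

// Basic upload validation. Limits and allowed types are examples --
// tune them to your application, and don't blindly trust the
// Content-Type header sent by the client.
class UploadValidator {
    private static final long MAX_SIZE_BYTES = 5 * 1024 * 1024; // 5 MB
    private static final Set<String> ALLOWED_TYPES =
            Set.of("image/png", "image/jpeg", "application/pdf");

    void validate(String contentType, long sizeBytes) {
        if (sizeBytes <= 0 || sizeBytes > MAX_SIZE_BYTES) {
            throw new IllegalArgumentException("file size out of range");
        }
        if (!ALLOWED_TYPES.contains(contentType)) {
            throw new IllegalArgumentException("unsupported file type: " + contentType);
        }
    }
}
```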
Pre-signed URLs (important)
Instead of proxying every upload through your backend:
- backend generates upload URL
- client uploads directly to S3
- backend stores metadata
This:
- reduces server load
- improves scalability
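With the AWS SDK for Java v2, the backend side of that flow might look like this sketch (bucket name, key, and expiry are placeholders; requires the `s3` SDK module and credentials from the default chain):

```java
import java.time.Duration;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

class PresignedUrlService {
    // Uses region/credentials from the default provider chain.
    private final S3Presigner presigner = S3Presigner.create();

    // Generate a short-lived URL the client can PUT the file to directly.
    String presignUpload(String bucket, String objectKey) {
        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket(bucket)
                .key(objectKey)
                .build();
        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10)) // expiry is a design choice
                .putObjectRequest(putRequest)
                .build();
        PresignedPutObjectRequest presigned = presigner.presignPutObject(presignRequest);
        return presigned.url().toString();
    }
}
```

The client then PUTs the file to that URL, and your backend only records the metadata once the upload succeeds.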
Common mistakes
- calling S3 directly from controller
- hardcoding credentials
- skipping metadata storage
- mixing upload and access logic
Final thoughts
Uploading files to S3 is easy.
Designing a clean system is what matters long term.
If you skip structure now, it will become painful later.
Skip the setup
If you're building anything serious, don't spend days structuring this.
Use a production-ready backend instead:
https://buildbasekit.com/boilerplates/filora-fs-pro/
- clean architecture
- S3 + pre-signed URLs
- ready APIs
Build faster. Focus on real features.