I spent part of today staring at a selfie that looked completely normal in the app until it became a profile photo.
That sentence sounds harmless. It was not harmless.
The face verification flow looked fine while running. The camera preview was upright. The liveness checks passed. The user smiled, turned left, turned right, looked up, looked down, and the app happily uploaded the images. Then the first image, the nice centered smiling one, became the user's display picture.
And suddenly the user was lying sideways.
Head on the right. Neck on the left. The kind of bug where the UI looks innocent, the backend looks innocent, and the image itself is quietly doing the wrong thing.
The suspicious part was not the avatar
My first instinct was to look at the profile image rendering. Maybe the avatar widget was doing something odd. Maybe the network image package ignored metadata. Maybe the backend transformed the image. Maybe some storage layer stripped EXIF orientation.
But the avatar was boring in the best possible way. It received a URL and rendered it with BoxFit.cover. No transforms. No rotation. No custom painting. Nothing dramatic.
The face verification flow was more interesting.
The app was using the camera stream for liveness detection. Each frame went through ML Kit face detection, and once a rule was stable enough, we captured the image for that rule. The first rule was the centered smile, which later became the display picture.
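For context, the rule check looked roughly like this. A sketch from memory using the google_mlkit_face_detection plugin; the thresholds and names here are illustrative, not the app's real values:

import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

final faceDetector = FaceDetector(
  options: FaceDetectorOptions(enableClassification: true),
);

Future<bool> passesCenteredSmileRule(InputImage frame) async {
  // The InputImage carries rotation metadata, which is why detection works
  // even though the raw pixels arrive in sensor orientation.
  final faces = await faceDetector.processImage(frame);
  if (faces.length != 1) return false;
  final face = faces.single;
  // Centered smile: confidently smiling, head not turned away.
  return (face.smilingProbability ?? 0) > 0.8 &&
      (face.headEulerAngleY ?? 0).abs() < 10;
}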
That gave the bug a shape:
- preview upright
- ML Kit detection working
- saved/uploaded image rotated
- all captured images rotated the same way
That combination usually means the display layer and detection layer know about orientation, but the saved pixels do not.
The camera stream was telling the truth, just not the truth I wanted
The important detail was that the capture path was not using takePicture().
Instead, the flow converted the exact CameraImage stream frame into a PNG. The motivation made sense: use the precise frame that satisfied the face rule. No waiting for the camera to stop streaming, no second capture after the user moved slightly, no autofocus/flicker delay. It is a tempting approach, and honestly, I still like the intent.
The rough shape looked like this:
final imagePath = await _captureFromStreamFrame(streamFrame);
Inside that helper, Android NV21 bytes were converted into BGRA pixels, then encoded as a PNG:
// image is the CameraImage stream frame; bgraBytes holds its converted
// pixels. decodeImageFromPixels reports back through a callback, so a
// Completer bridges it into async/await.
final completer = Completer<ui.Image>();
ui.decodeImageFromPixels(
  bgraBytes,
  image.width,
  image.height,
  ui.PixelFormat.bgra8888,
  completer.complete,
);
final uiImage = await completer.future;
final byteData = await uiImage.toByteData(
  format: ui.ImageByteFormat.png,
);
This is where Flutter politely allowed me to shoot myself in the foot.
Raw camera stream frames are not necessarily in the same orientation as the preview on screen. On Android especially, the camera sensor often has a native orientation that is effectively landscape relative to the way the user holds the phone. The preview widget handles that. ML Kit also gets rotation metadata, so detection can still work.
But if you manually turn the raw bytes into a PNG, nobody magically rotates those pixels for you.
You get the sensor's version of reality.
And the sensor's version of reality had my user's head rotated 90 degrees.
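For reference, the upstream conversion that produced those sensor-oriented pixels looked roughly like this. A minimal sketch, assuming the frame arrives as a single NV21 buffer (full-resolution Y plane followed by interleaved VU pairs) and using BT.601 coefficients; the helper name is hypothetical:

import 'dart:typed_data';

Uint8List nv21ToBgra(Uint8List nv21, int width, int height) {
  final bgra = Uint8List(width * height * 4);
  final uvStart = width * height; // VU pairs begin after the Y plane.
  for (var row = 0; row < height; row++) {
    for (var col = 0; col < width; col++) {
      final y = nv21[row * width + col];
      // Each 2x2 block of pixels shares one V/U pair.
      final uvIndex = uvStart + (row >> 1) * width + (col & ~1);
      final v = nv21[uvIndex] - 128;
      final u = nv21[uvIndex + 1] - 128;
      final r = (y + 1.402 * v).round().clamp(0, 255).toInt();
      final g = (y - 0.344136 * u - 0.714136 * v).round().clamp(0, 255).toInt();
      final b = (y + 1.772 * u).round().clamp(0, 255).toInt();
      final i = (row * width + col) * 4;
      bgra[i] = b; // bgra8888 byte order: blue, green, red, alpha.
      bgra[i + 1] = g;
      bgra[i + 2] = r;
      bgra[i + 3] = 0xFF;
    }
  }
  return bgra;
}

Note the conversion walks the buffer exactly as the sensor laid it out. Nothing in this loop knows or cares which way is up.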
The part that made it slightly annoying
The obvious fix was "rotate the image."
The less obvious question was: rotate it which way?
This is where mobile camera work always gets a little slippery. You are dealing with a few concepts that sound related but are not interchangeable:
- sensor orientation
- device orientation
- preview orientation
- front-camera mirroring
- saved image orientation
- EXIF metadata, if you are saving JPEGs
- actual pixel rotation, if you are writing PNGs
In this flow we were writing PNGs from raw pixels, so EXIF was not going to save us. The pixels themselves needed to be corrected.
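The raw material for that correction comes straight from the camera plugin, which reports each camera's native orientation:

// "controller" is an assumed CameraController for this sketch.
// sensorOrientation is typically 90 or 270 on Android phones: the sensor
// is effectively landscape relative to a portrait-held device.
final sensorOrientation = controller.description.sensorOrientation;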
The first attempt used the inverse of the camera sensor orientation:
return (360 - sensorOrientation) % 360;
It was a reasonable guess. It was also wrong for the tested device.
The image moved from sideways to upside down. Progress, technically. Emotionally, less so.
That test was useful though, because it proved two things immediately:
- We were definitely fixing the right part of the pipeline.
- The rotation direction needed to be flipped.
So the final working version used the reported sensor orientation directly:
int _streamFrameRotationDegrees({
  required int sensorOrientation,
}) {
  if (!Platform.isAndroid) return 0;
  return sensorOrientation % 360;
}
Then the decoded frame gets rotated before encoding:
final rotationDegrees = _streamFrameRotationDegrees(
  sensorOrientation: sensorOrientation,
);
final outputImage = rotationDegrees == 0
    ? uiImage
    : await _rotateImageClockwise(uiImage, rotationDegrees);
final byteData = await outputImage.toByteData(
  format: ui.ImageByteFormat.png,
);
The rotation helper itself is not complicated, but it is the kind of code where one swapped width/height can ruin your afternoon:
Future<ui.Image> _rotateImageClockwise(
  ui.Image source,
  int degrees,
) async {
  final normalizedDegrees = degrees % 360;
  if (normalizedDegrees == 0) return source;

  // 90- and 270-degree rotations swap the output canvas dimensions.
  final swapsDimensions =
      normalizedDegrees == 90 || normalizedDegrees == 270;
  final outputWidth = swapsDimensions ? source.height : source.width;
  final outputHeight = swapsDimensions ? source.width : source.height;

  final recorder = ui.PictureRecorder();
  final canvas = Canvas(recorder);
  // Rotate about the center: move the origin to the output center, spin,
  // then offset so the source's center lands on that pivot.
  canvas.translate(outputWidth / 2, outputHeight / 2);
  canvas.rotate(normalizedDegrees * math.pi / 180);
  canvas.translate(-source.width / 2, -source.height / 2);
  canvas.drawImage(source, Offset.zero, Paint());

  final picture = recorder.endRecording();
  final rotated = await picture.toImage(outputWidth, outputHeight);
  picture.dispose();
  return rotated;
}
Once that was in place, the saved local images finally looked like what the user saw in the camera preview: head up, neck down, nobody accidentally reclining across the avatar.
A small win, but a very satisfying one.
One of the better decisions during this fix was not waiting for the entire upload-display-profile loop every time.
Originally, the feedback cycle was too long:
- run face verification
- upload images
- wait for backend processing
- fetch user details again
- check the display picture
- wonder if the bug was in capture, upload, storage, API, cache, or rendering
That is too many suspects.
So I added a temporary debug-only local preview after capture and before upload. It showed the five local files in a grid labeled center, left, right, up, and down. That made the problem brutally obvious. The images were already rotated before they ever touched remote storage.
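From memory, the throwaway screen was little more than this; the names are hypothetical, and it assumes the five labeled file paths already exist locally:

import 'dart:io';
import 'package:flutter/material.dart';

// Debug-only grid of captured files, keyed by rule label.
class CaptureDebugGrid extends StatelessWidget {
  const CaptureDebugGrid({super.key, required this.labeledPaths});

  // Rule label -> local file path, e.g. {'center': <path to center.png>}.
  final Map<String, String> labeledPaths;

  @override
  Widget build(BuildContext context) {
    return GridView.count(
      crossAxisCount: 2,
      children: [
        for (final entry in labeledPaths.entries)
          Column(
            children: [
              Expanded(child: Image.file(File(entry.value))),
              Text(entry.key),
            ],
          ),
      ],
    );
  }
}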
That little preview screen did not survive the final cleanup, and it should not have. But as a debugging instrument it was perfect. It shortened the loop from "wait for the full system" to "capture and inspect the exact file."
I keep relearning this lesson: when a pipeline has too many stages, build a peephole into the middle.
Not a permanent feature. Not a grand observability platform. Just a tiny window into the thing you keep making assumptions about.
Why not just use takePicture()?
This came up naturally.
The signup selfie flow in another part of the app already uses takePicture(). That path is simpler because the camera plugin handles the usual capture concerns, orientation included, for you. For many apps, that is absolutely what I would prefer.
The liveness flow had a different tradeoff. It wanted the exact frame that passed the rule. If the user smiles and holds still, we capture the stream frame that ML Kit just approved. That avoids a small but real gap between "rule passed" and "photo taken."
So there were two reasonable options:
- switch back to takePicture() and lean on the camera plugin's capture behavior
- keep stream-frame capture and explicitly own pixel orientation
We chose the second path because it preserved the existing liveness behavior. But choosing it means accepting the responsibility that comes with raw frames. The camera plugin is no longer your orientation babysitter at save time.
That tradeoff is fine, as long as it is intentional.
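For contrast, the first option is only a couple of lines with the camera plugin, which is exactly its appeal:

// The takePicture() path: stop streaming, then let the plugin capture and
// handle orientation. "controller" is an assumed CameraController.
await controller.stopImageStream();
final photo = await controller.takePicture();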
The device question
After the fix worked, the next question was the right one: will this hold across devices?
The answer is: better than a hardcoded 90-degree rotation, but still worth testing.
Different Android devices may report different camera sensor orientations, commonly 90 or 270. Some devices may even differ between front and back cameras. Using sensorOrientation is the portable part of the fix. It asks the camera what orientation it actually uses instead of assuming every phone is the one on my desk.
But there are caveats.
The current flow assumes portrait capture. If the app allows the user to rotate the phone into landscape during verification, then sensor orientation alone is not enough. We would need to account for current device orientation too.
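If landscape capture ever becomes allowed, the helper would need to grow into something like the standard Android orientation arithmetic. A hedged sketch; the parameter names are mine, not the app's:

int _rotationWithDeviceOrientation({
  required int sensorOrientation,
  required int deviceOrientationDegrees, // 0, 90, 180, or 270
  required bool isFrontCamera,
}) {
  if (!Platform.isAndroid) return 0;
  // Standard Android formula: front cameras add the device rotation,
  // back cameras subtract it.
  return isFrontCamera
      ? (sensorOrientation + deviceOrientationDegrees) % 360
      : (sensorOrientation - deviceOrientationDegrees + 360) % 360;
}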
Also, rotation and mirroring are separate problems. A front camera preview may be mirrored for a familiar selfie experience, but that does not automatically mean the saved image should be mirrored. If logos or text appear reversed, that is a horizontal flip decision, not a rotation fix.
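If that decision ever goes the other way, the fix is a canvas flip rather than another rotation, inside the same kind of PictureRecorder setup as the rotation helper:

// Horizontal flip: mirror across the vertical axis, then translate the
// image back into the visible canvas.
canvas.scale(-1, 1);
canvas.translate(-source.width.toDouble(), 0);
canvas.drawImage(source, Offset.zero, Paint());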
For today's bug, the issue was rotation. The corrected saved PNG now matches the visual expectation.
The thing I am taking away
The interesting part of this bug was not that an image was rotated. That happens all the time in mobile work.
The interesting part was how many layers were technically doing the right thing:
- Camera preview looked correct.
- ML Kit detection worked because it received rotation metadata.
- Upload worked because it uploaded exactly what it was given.
- Avatar rendering worked because it displayed exactly what the URL returned.
The bug lived in the one handoff where metadata stopped being enough and pixels became permanent.
That is the kind of issue that makes mobile development feel like plumbing with opinions. Every layer has its own model of reality. Most days they agree. Today they did not.
At least now the user's face is upright.