I've spent the last several months building Quadify Ultra, a Blender 5.0 addon that uses a KNN-based ML engine to automatically route meshes to the right retopology algorithm. This post is about the technical decisions behind it — the parts that were harder than expected and the approaches that actually worked.
## The problem: retopology algorithm selection is a classification task
Retopology — converting triangulated meshes to clean quad topology — isn't one problem. It's five different problems depending on what kind of mesh you're dealing with:
- Open curved panels (car doors, fenders) need edge-flip and greedy quad matching that preserves vertex positions
- CAD imports with hard edges need boundary detection before any merging happens
- Flat CAD surfaces with n-gon triangulation need dissolve passes before quadification
- Closed flat meshes need field-guided remeshing (QuadriFlow)
- Organic meshes and scans with broken topology need voxel rebuilding
The routing decision — which algorithm for this mesh — is something experienced artists do implicitly. I wanted to automate it.
The input features I settled on after testing:
```python
features = [
    tri_ratio,           # fraction of triangles
    quad_ratio,          # fraction of quads
    ngon_ratio,          # fraction of n-gons
    boundary_ratio,      # open boundary edges / total edges
    non_manifold_ratio,  # non-manifold edges / total edges
    normal_std,          # standard deviation of face normals (surface curvature proxy)
    edge_length_cv,      # coefficient of variation of edge lengths
    face_area_cv,        # coefficient of variation of face areas
    avg_valence,         # average vertex valence
    pole_ratio,          # fraction of vertices with valence != 4
    # ... 8 more geometric ratios
]
```
18 features total. All normalised ratios — no absolute values, so the classifier is scale-independent.
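A couple of these ratios are easy to sketch in plain numpy. This is a simplified illustration, not the addon's actual extractor — the helper names and the flat-list mesh representation here are assumptions:

```python
import numpy as np

def face_type_ratios(face_vert_counts):
    """Compute tri/quad/n-gon ratios from per-face vertex counts."""
    counts = np.asarray(face_vert_counts)
    n = len(counts)
    tri_ratio = float(np.sum(counts == 3)) / n
    quad_ratio = float(np.sum(counts == 4)) / n
    ngon_ratio = float(np.sum(counts > 4)) / n
    return tri_ratio, quad_ratio, ngon_ratio

def edge_length_cv(edge_lengths):
    """Coefficient of variation: std / mean, scale-independent by construction."""
    lengths = np.asarray(edge_lengths, dtype=float)
    return float(lengths.std() / lengths.mean())
```

The coefficient of variation is what buys the scale independence: scaling every edge by 10x multiplies both the standard deviation and the mean by 10, so the ratio is unchanged.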
## The classifier: KNN with confidence blending
I went with KNN rather than a neural net for two reasons:
- Interpretability — when the engine recommends Smart mode at 73% confidence, I can inspect the k nearest neighbours and understand why
- Cold start — a neural net needs hundreds of examples before it's useful. KNN with k=3 gives reasonable results from the first 10 operations
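The lookup itself is nothing exotic — a brute-force scan is fine at this scale. A minimal sketch (the flat DB layout here is an assumption; the addon stores richer records per operation):

```python
import numpy as np

def k_nearest(db_features, db_strategies, query, k=3):
    """Brute-force KNN: Euclidean distance over normalised feature vectors."""
    dists = np.linalg.norm(np.asarray(db_features) - np.asarray(query), axis=1)
    order = np.argsort(dists)[:k]
    return [{'strategy': db_strategies[i], 'distance': float(dists[i])} for i in order]
```

With a few hundred records of 18 floats each, the full scan takes microseconds; there is no need for a KD-tree or approximate index.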
The confidence score is the fraction of the k nearest neighbours that agree on the top recommendation:
```python
def get_confidence(neighbours, top_strategy):
    agreeing = sum(1 for n in neighbours if n['strategy'] == top_strategy)
    return agreeing / len(neighbours)
```
At low DB count the system blends toward a deterministic heuristic:
```python
blend_weight = min(1.0, db_count / 30)
confidence = (ml_confidence * blend_weight) + (heuristic_confidence * (1 - blend_weight))
```
This means the first 30 operations use mostly heuristic routing, gradually transitioning to learned routing as the DB grows.
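Putting the two lines together, the handover is easy to see with hypothetical confidence values:

```python
def blended_confidence(ml_confidence, heuristic_confidence, db_count):
    # Below 30 recorded operations the heuristic dominates; past 30, the KNN takes over
    blend_weight = min(1.0, db_count / 30)
    return ml_confidence * blend_weight + heuristic_confidence * (1 - blend_weight)

# Example trajectory as the experience DB grows (0.9 ML, 0.6 heuristic are made-up values)
for db_count in (0, 15, 30, 60):
    print(db_count, blended_confidence(0.9, 0.6, db_count))
```

At zero operations the output is pure heuristic; at 15 it is an even split; from 30 onward the heuristic contributes nothing.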
## The hardest bug: voxel remesh freezing Blender
The original implementation used bpy.ops.object.voxel_remesh(). This is a synchronous operator — it runs on Python's main thread and blocks the entire UI. On a 39,000-face burger mesh it produced about 731,000 intermediate voxel faces and froze Blender for several minutes.
The fix came from reading Quadify Pro's source code. They use a REMESH modifier instead:
```python
mod = obj.modifiers.new("QP_VoxelRebuild", "REMESH")
mod.mode = 'VOXEL'
mod.voxel_size = max(0.0005, min(0.05, world_diag / resolution))
mod.use_smooth_shade = True
bpy.ops.object.modifier_apply(modifier=mod.name)
```
The modifier executes in Blender's C stack, not Python. It never blocks the UI. Same result, dramatically faster, works on any mesh size.
The voxel_size formula needed one more fix. The original code had:
```python
voxel_size = max(
    min_dim / 20.0,                # THIS was the bug
    local_diag / voxel_resolution,
    0.0005,
)
```
The min_dim / 20.0 floor was designed to prevent microscopic voxels on large meshes, but it overrode the resolution setting for small objects. A burger mesh at 0.2m scale gave min_dim / 20.0 = 0.01 — 15x larger than the resolution-based target of 0.00064. With 0.01 voxels the mesh produced only 3,138 intermediate faces. QuadriFlow then got min(target_faces, 3138) = 3,138 as its target and produced a low-detail result.
Fix: remove the floor entirely, use world-space diagonal from matrix_world @ Vector(v) for scale independence, and add a 150k face cap to prevent genuinely large meshes from freezing:
```python
import math
from mathutils import Vector  # Blender's built-in math types

bb_world = [obj.matrix_world @ Vector(v) for v in obj.bound_box]
diag = max((p - bb_world[0]).length for p in bb_world)
voxel_size = max(0.0001, min(0.05, diag / voxel_resolution))

# Cap to prevent freeze on large meshes
estimated_faces = (diag / voxel_size) ** 2
if estimated_faces > 150000:
    voxel_size = diag / math.sqrt(150000)
```
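The difference is easy to verify outside Blender. A side-by-side of the two formulas with hypothetical dimensions (the resolution default is an addon setting, not reproduced here):

```python
import math

def voxel_size_old(min_dim, local_diag, resolution):
    # Buggy version: the min_dim / 20 floor overrides the resolution target on small objects
    return max(min_dim / 20.0, local_diag / resolution, 0.0005)

def voxel_size_fixed(world_diag, resolution, face_cap=150_000):
    size = max(0.0001, min(0.05, world_diag / resolution))
    if (world_diag / size) ** 2 > face_cap:      # rough surface-voxel estimate
        size = world_diag / math.sqrt(face_cap)  # clamp to ~150k intermediate faces
    return size

# A small object (~0.2 m min dimension): the old floor wins and detail is lost
print(voxel_size_old(0.2, 0.32, 300))  # the 0.2 / 20 = 0.01 floor beats the 0.32 / 300 target
print(voxel_size_fixed(0.32, 300))     # the resolution-based target survives
```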
## Opt-in telemetry

Each opted-in operation sends 18 geometry ratios and the algorithm result to Supabase:
```python
payload = {
    'version': 1,
    'features': [float(f) for f in features],  # 18 ratios only
    'strategy': strategy,
    'quad_pct': quad_pct,
    'corrected': corrected,
}
```
No mesh geometry. No vertex positions. No filenames. Just the feature vector and outcome.
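For completeness, the upload side can be sketched with the same stdlib urllib used for the fetch. The endpoint URL and table name below are placeholders, not the addon's real values:

```python
import json
import urllib.request

SUPABASE_URL = "https://example.supabase.co/rest/v1/quadify_ops"  # hypothetical endpoint
SUPABASE_KEY = "sb_publishable_..."                               # placeholder key

def build_upload_request(payload):
    """Build (but don't send) the POST carrying one operation record."""
    return urllib.request.Request(
        SUPABASE_URL,
        data=json.dumps(payload).encode('utf-8'),
        headers={
            'apikey': SUPABASE_KEY,  # publishable key goes here, not in Authorization
            'Content-Type': 'application/json',
        },
        method='POST',
    )
```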
The fetch uses only the `apikey` header — no `Authorization: Bearer` — because the new Supabase `sb_publishable_*` key format explicitly rejects the Authorization header:
```python
req = urllib.request.Request(url, headers={
    'apikey': SUPABASE_KEY,
    'Accept': 'application/json',
})
```
Applying the update merges remote records into the local KNN experience DB, with deduplication and Postgres array format handling ({0.1,0.2,...} → Python list).
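The array-format handling is the fiddly part. A minimal, self-contained sketch of the conversion described above (PostgREST usually returns JSON arrays, but raw text columns can arrive in Postgres's brace format, so both cases are handled):

```python
def parse_pg_float_array(value):
    """Convert a Postgres array literal like '{0.1,0.2,0.3}' to a list of floats."""
    if isinstance(value, list):          # already a JSON array
        return [float(v) for v in value]
    text = value.strip()
    if text.startswith("{") and text.endswith("}"):
        text = text[1:-1]                # strip the Postgres braces
    return [float(v) for v in text.split(",") if v.strip()]
```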
The callback uses bpy.app.timers to write results back to Blender properties on the main thread — background threads cannot write to Blender properties directly:
```python
_result = [None]  # written by the background download thread

def apply_result():
    if _result[0] is None:
        return 0.5  # no result yet — check again in 0.5s
    success, data = _result[0]
    if success:
        scene = bpy.context.scene       # safe here: timers run on the main thread
        total = len(data)               # records merged into the local experience DB
        scene.quadify_ultra.ml_update_status = f"Updated — {total:,} operations"
    return None  # returning None stops the timer

bpy.app.timers.register(apply_result, first_interval=0.5)
```
## numpy boolean ambiguity in batch processing
One subtle bug that cost time: MeshFeatureExtractor.extract() returns a numpy array. In the batch loop:
```python
normal_std = features[5] if features else 0.3  # WRONG
```
`if features` on a numpy array raises `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`. The fix:
```python
normal_std = float(features[5]) if features is not None else 0.3
```
The float() wrapper also prevents numpy scalar type issues downstream in JSON serialisation.
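A minimal reproduction of the trap, runnable outside Blender:

```python
import numpy as np

features = np.array([0.10, 0.00, 0.30, 0.20, 0.05, 0.40])

try:
    normal_std = features[5] if features else 0.3  # the buggy truthiness test
except ValueError:
    pass  # numpy refuses to guess between any() and all()

# Identity test against None sidesteps the ambiguity entirely
normal_std = float(features[5]) if features is not None else 0.3
print(normal_std)  # 0.4
```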
## What's live
Quadify Ultra is available on Superhive: https://superhivemarket.com/products/quadify
$200 one-time, Blender 5.0+, MIT licensed. The ML model improves as the community database grows — currently around 160 operations from early testers.
Happy to go deeper on any part of the implementation.
Asset credit: Car mesh © 2017 Khronos Group. CC BY 4.0 — https://creativecommons.org/licenses/by/4.0/