The main difficulty of the task was the client's requirement that the user mark the foreground object or the face with only a single finger swipe.
While all other solutions we know of require at least two swipes, one over the foreground object (or face) and one over the background, we still managed to achieve acceptable results with a single swipe.
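With only one swipe there is no explicit background stroke, so the background seeds have to come from a prior. A common heuristic, sketched below as an assumption rather than the app's actual method, is to treat the pixels under the swipe as foreground seeds and a thin frame of border pixels as background seeds, which can then feed a graph-cut-style segmenter:

```python
import numpy as np

def seeds_from_single_swipe(image_shape, swipe_points, border=4):
    """Build foreground/background seed masks from a single user swipe.

    Hypothetical sketch: the swiped pixels become foreground seeds, and
    pixels along the image border are assumed to be background (a common
    prior when no background stroke is available).
    """
    h, w = image_shape
    fg = np.zeros((h, w), dtype=bool)
    bg = np.zeros((h, w), dtype=bool)
    for y, x in swipe_points:
        fg[y, x] = True          # pixels under the swipe -> foreground seeds
    bg[:border, :] = True        # frame of border pixels -> background seeds
    bg[-border:, :] = True
    bg[:, :border] = True
    bg[:, -border:] = True
    bg &= ~fg                    # a swipe near the edge overrides the border prior
    return fg, bg

# Example: a short horizontal swipe in the middle of a 100x100 image
fg, bg = seeds_from_single_swipe((100, 100), [(50, 40), (50, 41), (50, 42)])
```

The two seed masks play the role that two separate user strokes play in the conventional multi-swipe workflow.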
The front camera detects the face and builds a 3D face model; the back camera does not perform face detection, but still uses the generated 3D mask.
The program generates the mask automatically. Once the texture is applied, the object appears to have 3D depth, and after the 3D model is generated the user sees the object pop out of the screen as the phone is tilted. We also apply a blur to the mask, so its edges are smooth rather than jagged.
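The edge smoothing can be illustrated with a simple feathering pass. The sketch below uses a repeated separable box blur (which approximates a Gaussian) on a binary mask; this is an assumption about the general technique, not the app's exact filter:

```python
import numpy as np

def feather_mask(mask, radius=3, passes=2):
    """Soften a binary mask's edges with a repeated separable box blur.

    Repeating a box blur approximates a Gaussian; hard 0/1 edges become
    a smooth 0..1 ramp, removing the jagged "staircase" look.
    """
    m = mask.astype(np.float64)
    k = 2 * radius + 1
    for _ in range(passes):
        # horizontal pass
        padded = np.pad(m, ((0, 0), (radius, radius)), mode='edge')
        m = np.mean([padded[:, i:i + m.shape[1]] for i in range(k)], axis=0)
        # vertical pass
        padded = np.pad(m, ((radius, radius), (0, 0)), mode='edge')
        m = np.mean([padded[i:i + m.shape[0], :] for i in range(k)], axis=0)
    return m

# Example: feather a 20x20 square mask inside a 30x30 image
mask = np.zeros((30, 30))
mask[5:25, 5:25] = 1.0
soft = feather_mask(mask)
```

After feathering, pixels well inside the object stay at full opacity while the boundary falls off gradually, which is what hides the aliased mask edges when the textured object is composited.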
We managed to obtain a 3D model of the face from a single selfie photo taken with the phone.
We reduced the number of points in the foreground model and simplified some of the in-painting steps, so that all processing fits within the limited compute budget of the iPhone.
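The point-reduction step can be sketched as a simple grid decimation: keeping every n-th sample of a dense depth map cuts the vertex count by n², at the cost of a coarser mesh. This is a hypothetical stand-in for the actual reduction scheme:

```python
import numpy as np

def decimate_depth_grid(depth, stride=4):
    """Subsample a dense depth map into a sparse (x, y, z) vertex grid.

    Keeping every `stride`-th sample reduces the vertex count by a
    factor of stride**2 -- a simple illustration of trading mesh
    resolution for on-device performance.
    """
    ys, xs = np.mgrid[0:depth.shape[0]:stride, 0:depth.shape[1]:stride]
    verts = np.stack([xs.ravel().astype(np.float64),
                      ys.ravel().astype(np.float64),
                      depth[ys, xs].ravel()], axis=1)
    return verts

# Example: a 64x64 depth map becomes a 16x16 grid of vertices
depth = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
verts = decimate_depth_grid(depth, stride=4)
```

With `stride=4`, the 4096 depth samples are reduced to 256 vertices, a 16x saving that matters on mobile hardware.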
The screenshots show examples of a 3D face model and of models of other objects extracted from a cluttered environment.