Face Mesh (GPU)¶
This example focuses on running the MediaPipe Face Mesh pipeline on mobile devices to perform real-time 3D face landmark estimation with GPU acceleration. The pipeline internally incorporates TensorFlow Lite models; to learn more about them, please refer to the model README file. The pipeline is related to the face detection example: it internally runs face detection and performs landmark estimation only within the detected face region.
MediaPipe Face Mesh generates 468 3D face landmarks in real-time on mobile devices. In the visualization above, the red dots represent the landmarks, and the green lines connecting landmarks illustrate the contours around the eyes, eyebrows, lips, and the entire face.
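MediaPipe landmarks are typically emitted as coordinates normalized to the [0, 1] range of the image. As a minimal illustration (the helper name below is hypothetical, not part of the MediaPipe API), a normalized landmark can be mapped to pixel space for drawing like this:

```python
def landmark_to_pixels(x, y, image_width, image_height):
    """Map a normalized (x, y) landmark in [0, 1] to integer pixel coordinates.

    Hypothetical helper for illustration; clamps to the valid pixel range
    so landmarks on the right/bottom edge stay inside the image.
    """
    px = min(int(x * image_width), image_width - 1)
    py = min(int(y * image_height), image_height - 1)
    return px, py

print(landmark_to_pixels(0.5, 0.25, 640, 480))  # -> (320, 120)
```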
A prebuilt arm64 APK can be downloaded here.
To build the app yourself, run:
bazel build -c opt --config=android_arm64 mediapipe/examples/android/src/java/com/google/mediapipe/apps/facemeshgpu
Once the app is built, install it on an Android device with:
adb install bazel-bin/mediapipe/examples/android/src/java/com/google/mediapipe/apps/facemeshgpu/facemeshgpu.apk
See the general instructions for building iOS examples and generating an Xcode project. The target is FaceMeshGpuApp.
To build on the command line:
bazel build -c opt --config=ios_arm64 mediapipe/examples/ios/facemeshgpu:FaceMeshGpuApp
Subgraphs appear in the main graph visualization as nodes colored in purple, and a subgraph can itself be visualized just like a regular graph. For more information on how to visualize a graph that includes subgraphs, see the Visualizing Subgraphs section in the visualizer documentation.
Face Landmark Subgraph¶
The face landmark module contains several subgraphs that can be used to detect and track face landmarks. In particular, this example uses the FaceLandmarkFrontGpu subgraph, which is suitable for images from front-facing cameras (i.e., selfie images) and utilizes GPU acceleration.
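In MediaPipe graph configs, a subgraph is invoked as a single node by its registered type, just like a calculator. The fragment below is a hedged sketch, not copied from the repository; the exact stream tags and stream names are assumptions:

```
# Sketch: invoking the face landmark subgraph from a main graph config.
# Stream tags/names here are illustrative assumptions.
node {
  calculator: "FaceLandmarkFrontGpu"
  input_stream: "IMAGE:input_video"
  input_side_packet: "NUM_FACES:num_faces"
  output_stream: "LANDMARKS:multi_face_landmarks"
}
```

Because the subgraph encapsulates face detection plus landmark estimation, the main graph stays small and the same node can be swapped for a CPU variant if needed.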