from functools import lru_cache
from typing import List, Tuple

import cv2
import numpy

from facefusion import inference_manager, state_manager
from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url
from facefusion.face_helper import create_rotated_matrix_and_size, create_static_anchors, distance_to_bounding_box, distance_to_face_landmark_5, normalize_bounding_box, transform_bounding_box, transform_points
from facefusion.filesystem import resolve_relative_path
from facefusion.thread_helper import thread_semaphore
from facefusion.typing import Angle, BoundingBox, Detection, DownloadScope, DownloadSet, FaceLandmark5, InferencePool, ModelSet, Score, VisionFrame
from facefusion.vision import resize_frame_resolution, unpack_resolution


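# Face detector backends (RetinaFace, SCRFD and YOLOFace) behind a common interface:
# detect_faces() dispatches to the backend(s) selected via the 'face_detector_model'
# state item ('many' runs all of them) and returns bounding boxes, detection scores
# and 5-point face landmarks.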
@lru_cache(maxsize = None)
def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
	return\
	{
		'retinaface':
		{
			'hashes':
			{
				'retinaface':
				{
					'url': resolve_download_url('models-3.0.0', 'retinaface_10g.hash'),
					'path': resolve_relative_path('../.assets/models/retinaface_10g.hash')
				}
			},
			'sources':
			{
				'retinaface':
				{
					'url': resolve_download_url('models-3.0.0', 'retinaface_10g.onnx'),
					'path': resolve_relative_path('../.assets/models/retinaface_10g.onnx')
				}
			}
		},
		'scrfd':
		{
			'hashes':
			{
				'scrfd':
				{
					'url': resolve_download_url('models-3.0.0', 'scrfd_2.5g.hash'),
					'path': resolve_relative_path('../.assets/models/scrfd_2.5g.hash')
				}
			},
			'sources':
			{
				'scrfd':
				{
					'url': resolve_download_url('models-3.0.0', 'scrfd_2.5g.onnx'),
					'path': resolve_relative_path('../.assets/models/scrfd_2.5g.onnx')
				}
			}
		},
		'yoloface':
		{
			'hashes':
			{
				'yoloface':
				{
					'url': resolve_download_url('models-3.0.0', 'yoloface_8n.hash'),
					'path': resolve_relative_path('../.assets/models/yoloface_8n.hash')
				}
			},
			'sources':
			{
				'yoloface':
				{
					'url': resolve_download_url('models-3.0.0', 'yoloface_8n.onnx'),
					'path': resolve_relative_path('../.assets/models/yoloface_8n.onnx')
				}
			}
		}
	}


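# Illustrative sketch, not executed by the module: each entry of the model set pairs a
# remote 'url' with a local 'path' for both the .hash and the .onnx file, so resolving
# the local ONNX path of the YOLOFace detector would look roughly like this (the 'full'
# download scope mirrors the one used in collect_model_downloads() below):
#
#   yoloface_source = create_static_model_set('full').get('yoloface').get('sources').get('yoloface')
#   yoloface_onnx_path = yoloface_source.get('path')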
def get_inference_pool() -> InferencePool:
	_, model_sources = collect_model_downloads()
	return inference_manager.get_inference_pool(__name__, model_sources)


def clear_inference_pool() -> None:
	inference_manager.clear_inference_pool(__name__)


def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:
	model_hashes = {}
	model_sources = {}
	model_set = create_static_model_set('full')

	if state_manager.get_item('face_detector_model') in [ 'many', 'retinaface' ]:
		model_hashes['retinaface'] = model_set.get('retinaface').get('hashes').get('retinaface')
		model_sources['retinaface'] = model_set.get('retinaface').get('sources').get('retinaface')

	if state_manager.get_item('face_detector_model') in [ 'many', 'scrfd' ]:
		model_hashes['scrfd'] = model_set.get('scrfd').get('hashes').get('scrfd')
		model_sources['scrfd'] = model_set.get('scrfd').get('sources').get('scrfd')

	if state_manager.get_item('face_detector_model') in [ 'many', 'yoloface' ]:
		model_hashes['yoloface'] = model_set.get('yoloface').get('hashes').get('yoloface')
		model_sources['yoloface'] = model_set.get('yoloface').get('sources').get('yoloface')

	return model_hashes, model_sources


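# Rough shape of the result, assuming 'face_detector_model' is set to 'retinaface':
# collect_model_downloads() returns ({ 'retinaface': <hash entry> }, { 'retinaface': <source entry> }),
# while 'many' selects the entries of all three detectors.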
def pre_check() -> bool:
	model_hashes, model_sources = collect_model_downloads()

	return conditional_download_hashes(model_hashes) and conditional_download_sources(model_sources)


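# Hedged usage sketch (not part of the module): running pre_check() before inference
# downloads any missing .hash and .onnx files for the selected detector, e.g.
#
#   if pre_check():
#       face_detector = get_inference_pool().get('yoloface')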
def detect_faces(vision_frame : VisionFrame) -> Tuple[List[BoundingBox], List[Score], List[FaceLandmark5]]:
	all_bounding_boxes : List[BoundingBox] = []
	all_face_scores : List[Score] = []
	all_face_landmarks_5 : List[FaceLandmark5] = []

	if state_manager.get_item('face_detector_model') in [ 'many', 'retinaface' ]:
		bounding_boxes, face_scores, face_landmarks_5 = detect_with_retinaface(vision_frame, state_manager.get_item('face_detector_size'))
		all_bounding_boxes.extend(bounding_boxes)
		all_face_scores.extend(face_scores)
		all_face_landmarks_5.extend(face_landmarks_5)

	if state_manager.get_item('face_detector_model') in [ 'many', 'scrfd' ]:
		bounding_boxes, face_scores, face_landmarks_5 = detect_with_scrfd(vision_frame, state_manager.get_item('face_detector_size'))
		all_bounding_boxes.extend(bounding_boxes)
		all_face_scores.extend(face_scores)
		all_face_landmarks_5.extend(face_landmarks_5)

	if state_manager.get_item('face_detector_model') in [ 'many', 'yoloface' ]:
		bounding_boxes, face_scores, face_landmarks_5 = detect_with_yoloface(vision_frame, state_manager.get_item('face_detector_size'))
		all_bounding_boxes.extend(bounding_boxes)
		all_face_scores.extend(face_scores)
		all_face_landmarks_5.extend(face_landmarks_5)

	all_bounding_boxes = [ normalize_bounding_box(all_bounding_box) for all_bounding_box in all_bounding_boxes ]
	return all_bounding_boxes, all_face_scores, all_face_landmarks_5


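# Minimal usage sketch, assuming the 'face_detector_model', 'face_detector_size' and
# 'face_detector_score' state items are already set (e.g. 'yoloface', '640x640', 0.5)
# and 'target.jpg' is a stand-in path:
#
#   vision_frame = cv2.imread('target.jpg')
#   bounding_boxes, face_scores, face_landmarks_5 = detect_faces(vision_frame)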
def detect_rotated_faces(vision_frame : VisionFrame, angle : Angle) -> Tuple[List[BoundingBox], List[Score], List[FaceLandmark5]]:
	rotated_matrix, rotated_size = create_rotated_matrix_and_size(angle, vision_frame.shape[:2][::-1])
	rotated_vision_frame = cv2.warpAffine(vision_frame, rotated_matrix, rotated_size)
	rotated_inverse_matrix = cv2.invertAffineTransform(rotated_matrix)
	bounding_boxes, face_scores, face_landmarks_5 = detect_faces(rotated_vision_frame)
	bounding_boxes = [ transform_bounding_box(bounding_box, rotated_inverse_matrix) for bounding_box in bounding_boxes ]
	face_landmarks_5 = [ transform_points(face_landmark_5, rotated_inverse_matrix) for face_landmark_5 in face_landmarks_5 ]
	return bounding_boxes, face_scores, face_landmarks_5


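# detect_rotated_faces() handles in-plane rotated faces: the frame is rotated by `angle`
# with cv2.warpAffine(), detection runs on the rotated copy, and the resulting boxes and
# landmarks are mapped back into the original frame via the inverted affine matrix.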
def detect_with_retinaface(vision_frame : VisionFrame, face_detector_size : str) -> Tuple[List[BoundingBox], List[Score], List[FaceLandmark5]]:
	bounding_boxes = []
	face_scores = []
	face_landmarks_5 = []
	feature_strides = [ 8, 16, 32 ]
	feature_map_channel = 3
	anchor_total = 2
	face_detector_width, face_detector_height = unpack_resolution(face_detector_size)
	temp_vision_frame = resize_frame_resolution(vision_frame, (face_detector_width, face_detector_height))
	ratio_height = vision_frame.shape[0] / temp_vision_frame.shape[0]
	ratio_width = vision_frame.shape[1] / temp_vision_frame.shape[1]
	detect_vision_frame = prepare_detect_frame(temp_vision_frame, face_detector_size)
	detection = forward_with_retinaface(detect_vision_frame)

	for index, feature_stride in enumerate(feature_strides):
		keep_indices = numpy.where(detection[index] >= state_manager.get_item('face_detector_score'))[0]

		if numpy.any(keep_indices):
			stride_height = face_detector_height // feature_stride
			stride_width = face_detector_width // feature_stride
			anchors = create_static_anchors(feature_stride, anchor_total, stride_height, stride_width)
			bounding_box_raw = detection[index + feature_map_channel] * feature_stride
			face_landmark_5_raw = detection[index + feature_map_channel * 2] * feature_stride

			for bounding_box in distance_to_bounding_box(anchors, bounding_box_raw)[keep_indices]:
				bounding_boxes.append(numpy.array(
				[
					bounding_box[0] * ratio_width,
					bounding_box[1] * ratio_height,
					bounding_box[2] * ratio_width,
					bounding_box[3] * ratio_height,
				]))

			for score in detection[index][keep_indices]:
				face_scores.append(score[0])

			for face_landmark_5 in distance_to_face_landmark_5(anchors, face_landmark_5_raw)[keep_indices]:
				face_landmarks_5.append(face_landmark_5 * [ ratio_width, ratio_height ])

	return bounding_boxes, face_scores, face_landmarks_5


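# Decoding note for the anchor based detectors above and below (RetinaFace and SCRFD):
# the model emits scores, box distances and landmark distances per feature stride
# (8, 16, 32). The distances are scaled by the stride, decoded against the static anchors
# via distance_to_bounding_box() and distance_to_face_landmark_5(), and finally scaled by
# ratio_width / ratio_height back to the original frame. Roughly, an anchor at (x, y) with
# distances (l, t, r, b) decodes to the box (x - l, y - t, x + r, y + b).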
def detect_with_scrfd(vision_frame : VisionFrame, face_detector_size : str) -> Tuple[List[BoundingBox], List[Score], List[FaceLandmark5]]:
	bounding_boxes = []
	face_scores = []
	face_landmarks_5 = []
	feature_strides = [ 8, 16, 32 ]
	feature_map_channel = 3
	anchor_total = 2
	face_detector_width, face_detector_height = unpack_resolution(face_detector_size)
	temp_vision_frame = resize_frame_resolution(vision_frame, (face_detector_width, face_detector_height))
	ratio_height = vision_frame.shape[0] / temp_vision_frame.shape[0]
	ratio_width = vision_frame.shape[1] / temp_vision_frame.shape[1]
	detect_vision_frame = prepare_detect_frame(temp_vision_frame, face_detector_size)
	detection = forward_with_scrfd(detect_vision_frame)

	for index, feature_stride in enumerate(feature_strides):
		keep_indices = numpy.where(detection[index] >= state_manager.get_item('face_detector_score'))[0]

		if numpy.any(keep_indices):
			stride_height = face_detector_height // feature_stride
			stride_width = face_detector_width // feature_stride
			anchors = create_static_anchors(feature_stride, anchor_total, stride_height, stride_width)
			bounding_box_raw = detection[index + feature_map_channel] * feature_stride
			face_landmark_5_raw = detection[index + feature_map_channel * 2] * feature_stride

			for bounding_box in distance_to_bounding_box(anchors, bounding_box_raw)[keep_indices]:
				bounding_boxes.append(numpy.array(
				[
					bounding_box[0] * ratio_width,
					bounding_box[1] * ratio_height,
					bounding_box[2] * ratio_width,
					bounding_box[3] * ratio_height,
				]))

			for score in detection[index][keep_indices]:
				face_scores.append(score[0])

			for face_landmark_5 in distance_to_face_landmark_5(anchors, face_landmark_5_raw)[keep_indices]:
				face_landmarks_5.append(face_landmark_5 * [ ratio_width, ratio_height ])

	return bounding_boxes, face_scores, face_landmarks_5


def detect_with_yoloface(vision_frame : VisionFrame, face_detector_size : str) -> Tuple[List[BoundingBox], List[Score], List[FaceLandmark5]]:
	bounding_boxes = []
	face_scores = []
	face_landmarks_5 = []
	face_detector_width, face_detector_height = unpack_resolution(face_detector_size)
	temp_vision_frame = resize_frame_resolution(vision_frame, (face_detector_width, face_detector_height))
	ratio_height = vision_frame.shape[0] / temp_vision_frame.shape[0]
	ratio_width = vision_frame.shape[1] / temp_vision_frame.shape[1]
	detect_vision_frame = prepare_detect_frame(temp_vision_frame, face_detector_size)
	detection = forward_with_yoloface(detect_vision_frame)
	detection = numpy.squeeze(detection).T
	bounding_box_raw, score_raw, face_landmark_5_raw = numpy.split(detection, [ 4, 5 ], axis = 1)
	keep_indices = numpy.where(score_raw > state_manager.get_item('face_detector_score'))[0]

	if numpy.any(keep_indices):
		bounding_box_raw, face_landmark_5_raw, score_raw = bounding_box_raw[keep_indices], face_landmark_5_raw[keep_indices], score_raw[keep_indices]

		for bounding_box in bounding_box_raw:
			bounding_boxes.append(numpy.array(
			[
				(bounding_box[0] - bounding_box[2] / 2) * ratio_width,
				(bounding_box[1] - bounding_box[3] / 2) * ratio_height,
				(bounding_box[0] + bounding_box[2] / 2) * ratio_width,
				(bounding_box[1] + bounding_box[3] / 2) * ratio_height,
			]))

		face_scores = score_raw.ravel().tolist()
		face_landmark_5_raw[:, 0::3] = (face_landmark_5_raw[:, 0::3]) * ratio_width
		face_landmark_5_raw[:, 1::3] = (face_landmark_5_raw[:, 1::3]) * ratio_height

		for face_landmark_5 in face_landmark_5_raw:
			face_landmarks_5.append(numpy.array(face_landmark_5.reshape(-1, 3)[:, :2]))

	return bounding_boxes, face_scores, face_landmarks_5


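# YOLOFace decoding differs from the anchor based detectors: the raw output is transposed
# and split into box (cx, cy, w, h), score and landmark columns. Boxes are converted from
# center format to corner format, e.g. x_min = (cx - w / 2) * ratio_width, and the landmark
# columns come as (x, y, score) triplets from which only x and y are kept.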
def forward_with_retinaface(detect_vision_frame : VisionFrame) -> Detection:
	face_detector = get_inference_pool().get('retinaface')

	with thread_semaphore():
		detection = face_detector.run(None,
		{
			'input': detect_vision_frame
		})

	return detection


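# The forward_with_*() helpers share one pattern: fetch the detector session from the
# inference pool and run it under thread_semaphore() to limit concurrent access to the
# underlying ONNX Runtime session. The 'input' key matches the input tensor name of the
# bundled detector models.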
def forward_with_scrfd(detect_vision_frame : VisionFrame) -> Detection:
	face_detector = get_inference_pool().get('scrfd')

	with thread_semaphore():
		detection = face_detector.run(None,
		{
			'input': detect_vision_frame
		})

	return detection


def forward_with_yoloface(detect_vision_frame : VisionFrame) -> Detection:
	face_detector = get_inference_pool().get('yoloface')

	with thread_semaphore():
		detection = face_detector.run(None,
		{
			'input': detect_vision_frame
		})

	return detection


def prepare_detect_frame(temp_vision_frame : VisionFrame, face_detector_size : str) -> VisionFrame:
	face_detector_width, face_detector_height = unpack_resolution(face_detector_size)
	detect_vision_frame = numpy.zeros((face_detector_height, face_detector_width, 3))
	detect_vision_frame[:temp_vision_frame.shape[0], :temp_vision_frame.shape[1], :] = temp_vision_frame
	detect_vision_frame = (detect_vision_frame - 127.5) / 128.0
	detect_vision_frame = numpy.expand_dims(detect_vision_frame.transpose(2, 0, 1), axis = 0).astype(numpy.float32)
	return detect_vision_frame
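# prepare_detect_frame() pads the resized frame into a zero filled canvas of the detector
# resolution, normalises pixel values to roughly [-1, 1] via (value - 127.5) / 128, and
# reorders the result to NCHW float32, e.g. a 640x640 input becomes an array of shape
# (1, 3, 640, 640).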