Next (#216)
* Simplify bbox access
* Code cleanup
* Simplify bbox access
* Move code to face helper
* Swap and paste back without insightface
* Swap and paste back without insightface
* Remove semaphore where possible
* Improve paste back performance
* Cosmetic changes
* Move the predictor to ONNX to avoid tensorflow, Use video ranges for prediction
* Make CI happy
* Move template and size to the options
* Fix different color on box
* Uniform model handling for predictor
* Uniform frame handling for predictor
* Pass kps direct to warp_face
* Fix urllib
* Analyse based on matches
* Analyse based on rate
* Fix CI
* ROCM and OpenVINO mapping for torch backends
* Fix the paste back speed
* Fix import
* Replace retinaface with yunet (#168)
* Remove insightface dependency
* Fix urllib
* Some fixes
* Analyse based on matches
* Analyse based on rate
* Fix CI
* Migrate to Yunet
* Something is off here
* We indeed need semaphore for yunet
* Normalize the normed_embedding
* Fix download of models
* Fix download of models
* Fix download of models
* Add score and improve affine_matrix
* Temp fix for bbox out of frame
* Temp fix for bbox out of frame
* ROCM and OpenVINO mapping for torch backends
* Normalize bbox
* Implement gender age
* Cosmetics on cli args
* Prevent face jumping
* Fix the paste back speed
* Fix import
* Introduce detection size
* Cosmetics on face analyser ARGS and globals
* Temp fix for shaking face
* Accurate event handling
* Accurate event handling
* Accurate event handling
* Set the reference_frame_number in face_selector component
* Simswap model (#171)
* Add simswap models
* Add ghost models
* Introduce normed template
* Conditional prepare and normalize for ghost
* Conditional prepare and normalize for ghost
* Get simswap working
* Get simswap working
* Fix refresh of swapper model
* Refine face selection and detection (#174)
* Refine face selection and detection
* Update README.md
* Fix some face analyser UI
* Fix some face analyser UI
* Introduce range handling for CLI arguments
* Introduce range handling for CLI arguments
* Fix some spacings
* Disable onnxruntime warnings
* Use cv2.blur over cv2.GaussianBlur for better performance
* Revert "Use cv2.blur over cv2.GaussianBlur for better performance" (reverts commit bab666d6f9216a9f24faa84ead2d006b76f30159)
* Prepare universal face detection
* Prepare universal face detection part2
* Reimplement retinaface
* Introduce cached anchors creation
* Restore filtering to enhance performance
* Minor changes
* Minor changes
* More code but easier to understand
* Minor changes
* Rename predictor to content analyser
* Change detection/recognition to detector/recognizer
* Fix crop frame borders
* Fix spacing
* Allow normalize output without a source
* Improve conditional set face reference
* Update dependencies
* Add timeout for get_download_size
* Fix performance due disorder
* Move models to assets repository, Adjust namings
* Refactor face analyser
* Rename models once again
* Fix spacing
* Highres simswap (#192)
* Introduce highres simswap
* Fix simswap 256 color issue (#191)
* Fix simswap 256 color issue
* Update face_swapper.py
* Normalize models and host in our repo
* Normalize models and host in our repo
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Rename face analyser direction to face analyser order
* Improve the UI for face selector
* Add best-worst, worst-best detector ordering
* Clear as needed and fix zero score bug
* Fix linter
* Improve startup time by multi thread remote download size
* Just some cosmetics
* Normalize swagger source input, Add blendface_256 (unfinished)
* New paste back (#195)
* add new paste_back (#194)
* add new paste_back
* Update face_helper.py
* Update face_helper.py
* add commandline arguments and gui
* fix conflict
* Update face_mask.py
* type fix
* Clean some wording and typing
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Clean more names, use blur range approach
* Add blur padding range
* Change the padding order
* Fix yunet filename
* Introduce face debugger
* Use percent for mask padding
* Ignore this
* Ignore this
* Simplify debugger output
* implement blendface (#198)
* Clean up after the genius
* Add gpen_bfr_256
* Cosmetics
* Ignore face_mask_padding on face enhancer
* Update face_debugger.py (#202)
* Shrink debug_face() to a minimum
* Mark as 2.0.0 release
* remove unused (#204)
* Apply NMS (#205)
* Apply NMS
* Apply NMS part2
* Fix restoreformer url
* Add debugger cli and gui components (#206)
* Add debugger cli and gui components
* update
* Polishing the types
* Fix usage in README.md
* Update onnxruntime
* Support for webp
* Rename paste-back to face-mask
* Add license to README
* Add license to README
* Extend face selector mode by one
* Update utilities.py (#212)
* Stop inline camera on stream
* Minor webcam updates
* Gracefully start and stop webcam
* Rename capture to video_capture
* Make get webcam capture pure
* Check webcam to not be None
* Remove some is not None
* Use index 0 for webcam
* Remove memory lookup within progress bar
* Less progress bar updates
* Uniform progress bar
* Use classic progress bar
* Fix image and video validation
* Use different hash for cache
* Use best-worse order for webcam
* Normalize padding like CSS
* Update preview
* Fix max memory
* Move disclaimer and license to the docs
* Update wording in README
* Add LICENSE.md
* Fix argument in README
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: alex00ds <31631959+alex00ds@users.noreply.github.com>
facefusion/face_helper.py · 119 lines (new file)
@@ -0,0 +1,119 @@
from typing import Any, Dict, Tuple, List
from functools import lru_cache
from cv2.typing import Size
import cv2
import numpy

from facefusion.typing import Bbox, Kps, Frame, Matrix, Template, Padding

TEMPLATES : Dict[Template, numpy.ndarray[Any, Any]] =\
{
	'arcface_v1': numpy.array(
	[
		[ 39.7300, 51.1380 ],
		[ 72.2700, 51.1380 ],
		[ 56.0000, 68.4930 ],
		[ 42.4630, 87.0100 ],
		[ 69.5370, 87.0100 ]
	]),
	'arcface_v2': numpy.array(
	[
		[ 38.2946, 51.6963 ],
		[ 73.5318, 51.5014 ],
		[ 56.0252, 71.7366 ],
		[ 41.5493, 92.3655 ],
		[ 70.7299, 92.2041 ]
	]),
	'ffhq': numpy.array(
	[
		[ 192.98138, 239.94708 ],
		[ 318.90277, 240.1936 ],
		[ 256.63416, 314.01935 ],
		[ 201.26117, 371.41043 ],
		[ 313.08905, 371.15118 ]
	])
}

def warp_face(temp_frame : Frame, kps : Kps, template : Template, size : Size) -> Tuple[Frame, Matrix]:
	normed_template = TEMPLATES.get(template) * size[1] / size[0]
	affine_matrix = cv2.estimateAffinePartial2D(kps, normed_template, method = cv2.LMEDS)[0]
	crop_frame = cv2.warpAffine(temp_frame, affine_matrix, (size[1], size[1]), borderMode = cv2.BORDER_REPLICATE)
	return crop_frame, affine_matrix

def paste_back(temp_frame : Frame, crop_frame : Frame, affine_matrix : Matrix, face_mask_blur : float, face_mask_padding : Padding) -> Frame:
	inverse_matrix = cv2.invertAffineTransform(affine_matrix)
	temp_frame_size = temp_frame.shape[:2][::-1]
	mask_size = tuple(crop_frame.shape[:2])
	mask_frame = create_static_mask_frame(mask_size, face_mask_blur, face_mask_padding)
	inverse_mask_frame = cv2.warpAffine(mask_frame, inverse_matrix, temp_frame_size).clip(0, 1)
	inverse_crop_frame = cv2.warpAffine(crop_frame, inverse_matrix, temp_frame_size, borderMode = cv2.BORDER_REPLICATE)
	paste_frame = temp_frame.copy()
	paste_frame[:, :, 0] = inverse_mask_frame * inverse_crop_frame[:, :, 0] + (1 - inverse_mask_frame) * temp_frame[:, :, 0]
	paste_frame[:, :, 1] = inverse_mask_frame * inverse_crop_frame[:, :, 1] + (1 - inverse_mask_frame) * temp_frame[:, :, 1]
	paste_frame[:, :, 2] = inverse_mask_frame * inverse_crop_frame[:, :, 2] + (1 - inverse_mask_frame) * temp_frame[:, :, 2]
	return paste_frame

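The three per-channel assignments in paste_back are an ordinary alpha blend: where the warped mask is 1 the swapped crop wins, where it is 0 the original frame shows through. A numpy-only toy example with made-up values:

```python
import numpy

# mask 0 keeps the original pixel, mask 1 takes the swapped crop
inverse_mask_frame = numpy.array([ 0.0, 0.25, 1.0 ])
inverse_crop_frame = numpy.array([ 10.0, 10.0, 10.0 ])
temp_frame = numpy.array([ 200.0, 200.0, 200.0 ])
# same formula as each channel assignment in paste_back
paste_frame = inverse_mask_frame * inverse_crop_frame + (1 - inverse_mask_frame) * temp_frame
print(paste_frame.tolist()) # 0 %, 25 % and 100 % of the crop respectively
```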
@lru_cache(maxsize = None)
def create_static_mask_frame(mask_size : Size, face_mask_blur : float, face_mask_padding : Padding) -> Frame:
	mask_frame = numpy.ones(mask_size, numpy.float32)
	blur_amount = int(mask_size[0] * 0.5 * face_mask_blur)
	blur_area = max(blur_amount // 2, 1)
	# face_mask_padding follows the CSS order: top, right, bottom, left (percent of the mask size)
	mask_frame[:max(blur_area, int(mask_size[1] * face_mask_padding[0] / 100)), :] = 0
	mask_frame[-max(blur_area, int(mask_size[1] * face_mask_padding[2] / 100)):, :] = 0
	mask_frame[:, :max(blur_area, int(mask_size[0] * face_mask_padding[3] / 100))] = 0
	mask_frame[:, -max(blur_area, int(mask_size[0] * face_mask_padding[1] / 100)):] = 0
	if blur_amount > 0:
		mask_frame = cv2.GaussianBlur(mask_frame, (0, 0), blur_amount * 0.25)
	return mask_frame

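With face_mask_blur set to zero the padding arithmetic is easy to check by hand. Below is a numpy-only restatement of the same logic (the cv2 blur branch never triggers here; the helper name, sizes and padding values are made up for illustration):

```python
import numpy

def static_mask_frame(mask_size, face_mask_blur, face_mask_padding):
	# mirrors create_static_mask_frame with the blur step skipped (face_mask_blur = 0)
	mask_frame = numpy.ones(mask_size, numpy.float32)
	blur_amount = int(mask_size[0] * 0.5 * face_mask_blur)
	blur_area = max(blur_amount // 2, 1)
	# zero out top, bottom, left, right borders (padding in percent, CSS order)
	mask_frame[:max(blur_area, int(mask_size[1] * face_mask_padding[0] / 100)), :] = 0
	mask_frame[-max(blur_area, int(mask_size[1] * face_mask_padding[2] / 100)):, :] = 0
	mask_frame[:, :max(blur_area, int(mask_size[0] * face_mask_padding[3] / 100))] = 0
	mask_frame[:, -max(blur_area, int(mask_size[0] * face_mask_padding[1] / 100)):] = 0
	return mask_frame

# 20 % top padding on a 10x10 mask: 2 rows zeroed, plus the 1 pixel blur_area floor elsewhere
mask = static_mask_frame((10, 10), 0.0, (20, 0, 0, 0))
print(int(mask.sum())) # 7 rows x 8 columns of ones remain
```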
@lru_cache(maxsize = None)
def create_static_anchors(feature_stride : int, anchor_total : int, stride_height : int, stride_width : int) -> numpy.ndarray[Any, Any]:
	y, x = numpy.mgrid[:stride_height, :stride_width][::-1]
	anchors = numpy.stack((y, x), axis = -1)
	anchors = (anchors * feature_stride).reshape((-1, 2))
	anchors = numpy.stack([ anchors ] * anchor_total, axis = 1).reshape((-1, 2))
	return anchors

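A small sketch of the anchor grid the cached helper produces, using made-up stride values: every cell of the feature map gets anchor_total anchors at its pixel position.

```python
import numpy

feature_stride = 8
anchor_total = 2
stride_height = 2
stride_width = 3
# same construction as create_static_anchors: a (height x width) grid of
# coordinates, scaled by the stride and repeated per anchor
y, x = numpy.mgrid[:stride_height, :stride_width][::-1]
anchors = numpy.stack((y, x), axis = -1)
anchors = (anchors * feature_stride).reshape((-1, 2))
anchors = numpy.stack([ anchors ] * anchor_total, axis = 1).reshape((-1, 2))
print(anchors.shape) # 2 anchors for each of the 6 grid cells
```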
def distance_to_bbox(points : numpy.ndarray[Any, Any], distance : numpy.ndarray[Any, Any]) -> Bbox:
	x1 = points[:, 0] - distance[:, 0]
	y1 = points[:, 1] - distance[:, 1]
	x2 = points[:, 0] + distance[:, 2]
	y2 = points[:, 1] + distance[:, 3]
	bbox = numpy.column_stack([ x1, y1, x2, y2 ])
	return bbox

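distance_to_bbox converts anchor points plus (left, top, right, bottom) distances into corner coordinates. A one-box example with made-up numbers:

```python
import numpy

points = numpy.array([ [ 100, 100 ] ])         # one anchor point
distance = numpy.array([ [ 10, 20, 30, 40 ] ]) # left, top, right, bottom
# same arithmetic as distance_to_bbox
x1 = points[:, 0] - distance[:, 0]
y1 = points[:, 1] - distance[:, 1]
x2 = points[:, 0] + distance[:, 2]
y2 = points[:, 1] + distance[:, 3]
bbox = numpy.column_stack([ x1, y1, x2, y2 ])
print(bbox.tolist()) # [[90, 80, 130, 140]]
```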
def distance_to_kps(points : numpy.ndarray[Any, Any], distance : numpy.ndarray[Any, Any]) -> Kps:
	x = points[:, 0::2] + distance[:, 0::2]
	y = points[:, 1::2] + distance[:, 1::2]
	kps = numpy.stack((x, y), axis = -1)
	return kps

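Likewise for distance_to_kps, where each anchor point is offset by five interleaved (dx, dy) pairs to recover the landmarks; made-up values again:

```python
import numpy

points = numpy.array([ [ 100, 100 ] ])     # one anchor point
distance = numpy.arange(10).reshape(1, 10) # five interleaved (dx, dy) offsets
# even columns are x offsets, odd columns are y offsets, as in distance_to_kps
x = points[:, 0::2] + distance[:, 0::2]
y = points[:, 1::2] + distance[:, 1::2]
kps = numpy.stack((x, y), axis = -1)
print(kps.shape) # one face, five keypoints, two coordinates
```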
def apply_nms(bbox_list : List[Bbox], iou_threshold : float) -> List[int]:
	# greedy non-maximum suppression: keep the first remaining bbox,
	# drop every later bbox whose overlap exceeds the iou threshold
	keep_indices = []
	dimension_list = numpy.reshape(bbox_list, (-1, 4))
	x1 = dimension_list[:, 0]
	y1 = dimension_list[:, 1]
	x2 = dimension_list[:, 2]
	y2 = dimension_list[:, 3]
	areas = (x2 - x1 + 1) * (y2 - y1 + 1)
	indices = numpy.arange(len(bbox_list))
	while indices.size > 0:
		index = indices[0]
		remain_indices = indices[1:]
		keep_indices.append(index)
		xx1 = numpy.maximum(x1[index], x1[remain_indices])
		yy1 = numpy.maximum(y1[index], y1[remain_indices])
		xx2 = numpy.minimum(x2[index], x2[remain_indices])
		yy2 = numpy.minimum(y2[index], y2[remain_indices])
		width = numpy.maximum(0, xx2 - xx1 + 1)
		height = numpy.maximum(0, yy2 - yy1 + 1)
		iou = width * height / (areas[index] + areas[remain_indices] - width * height)
		indices = indices[numpy.where(iou <= iou_threshold)[0] + 1]
	return keep_indices
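A quick sanity check of the greedy suppression, restating the function inline and feeding it three made-up boxes (two heavily overlapping, one apart):

```python
import numpy

def apply_nms(bbox_list, iou_threshold):
	# greedy NMS, same logic as above: keep the first bbox, drop overlapping remainders
	keep_indices = []
	dimension_list = numpy.reshape(bbox_list, (-1, 4))
	x1, y1, x2, y2 = dimension_list[:, 0], dimension_list[:, 1], dimension_list[:, 2], dimension_list[:, 3]
	areas = (x2 - x1 + 1) * (y2 - y1 + 1)
	indices = numpy.arange(len(bbox_list))
	while indices.size > 0:
		index = indices[0]
		remain_indices = indices[1:]
		keep_indices.append(index)
		xx1 = numpy.maximum(x1[index], x1[remain_indices])
		yy1 = numpy.maximum(y1[index], y1[remain_indices])
		xx2 = numpy.minimum(x2[index], x2[remain_indices])
		yy2 = numpy.minimum(y2[index], y2[remain_indices])
		width = numpy.maximum(0, xx2 - xx1 + 1)
		height = numpy.maximum(0, yy2 - yy1 + 1)
		iou = width * height / (areas[index] + areas[remain_indices] - width * height)
		indices = indices[numpy.where(iou <= iou_threshold)[0] + 1]
	return keep_indices

bbox_list = [ [ 0, 0, 10, 10 ], [ 1, 1, 11, 11 ], [ 20, 20, 30, 30 ] ]
# boxes 0 and 1 overlap with iou ~ 0.70 > 0.4, so box 1 is suppressed
print([ int(index) for index in apply_nms(bbox_list, 0.4) ]) # [0, 2]
```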