242 lines
8.1 KiB
Python
from functools import lru_cache
from typing import List, Optional, Tuple

import cv2
import numpy
from cv2.typing import Size

from facefusion.choices import image_template_sizes, video_template_sizes
from facefusion.common_helper import is_windows
from facefusion.filesystem import is_image, is_video, sanitize_path_for_windows
from facefusion.typing import Fps, Orientation, Resolution, VisionFrame


@lru_cache(maxsize = 128)
def read_static_image(image_path : str) -> Optional[VisionFrame]:
	return read_image(image_path)


def read_static_images(image_paths : List[str]) -> List[VisionFrame]:
	frames = []

	if image_paths:
		for image_path in image_paths:
			frames.append(read_static_image(image_path))
	return frames


def read_image(image_path : str) -> Optional[VisionFrame]:
	if is_image(image_path):
		if is_windows():
			image_path = sanitize_path_for_windows(image_path)
		return cv2.imread(image_path)
	return None


def write_image(image_path : str, vision_frame : VisionFrame) -> bool:
	if image_path:
		if is_windows():
			image_path = sanitize_path_for_windows(image_path)
		return cv2.imwrite(image_path, vision_frame)
	return False


def detect_image_resolution(image_path : str) -> Optional[Resolution]:
	if is_image(image_path):
		image = read_image(image_path)
		height, width = image.shape[:2]
		return width, height
	return None


def restrict_image_resolution(image_path : str, resolution : Resolution) -> Resolution:
	if is_image(image_path):
		image_resolution = detect_image_resolution(image_path)
		if image_resolution < resolution:
			return image_resolution
	return resolution


def create_image_resolutions(resolution : Resolution) -> List[str]:
	resolutions = []
	temp_resolutions = []

	if resolution:
		width, height = resolution
		temp_resolutions.append(normalize_resolution(resolution))
		for template_size in image_template_sizes:
			temp_resolutions.append(normalize_resolution((width * template_size, height * template_size)))
		temp_resolutions = sorted(set(temp_resolutions))
		for temp_resolution in temp_resolutions:
			resolutions.append(pack_resolution(temp_resolution))
	return resolutions


def get_video_frame(video_path : str, frame_number : int = 0) -> Optional[VisionFrame]:
	if is_video(video_path):
		if is_windows():
			video_path = sanitize_path_for_windows(video_path)
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			frame_total = video_capture.get(cv2.CAP_PROP_FRAME_COUNT)
			video_capture.set(cv2.CAP_PROP_POS_FRAMES, min(frame_total, frame_number - 1))
			has_vision_frame, vision_frame = video_capture.read()
			video_capture.release()
			if has_vision_frame:
				return vision_frame
	return None


def count_video_frame_total(video_path : str) -> int:
	if is_video(video_path):
		if is_windows():
			video_path = sanitize_path_for_windows(video_path)
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			video_frame_total = int(video_capture.get(cv2.CAP_PROP_FRAME_COUNT))
			video_capture.release()
			return video_frame_total
	return 0


def detect_video_fps(video_path : str) -> Optional[float]:
	if is_video(video_path):
		if is_windows():
			video_path = sanitize_path_for_windows(video_path)
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			video_fps = video_capture.get(cv2.CAP_PROP_FPS)
			video_capture.release()
			return video_fps
	return None


def restrict_video_fps(video_path : str, fps : Fps) -> Fps:
	if is_video(video_path):
		video_fps = detect_video_fps(video_path)
		if video_fps < fps:
			return video_fps
	return fps


def detect_video_resolution(video_path : str) -> Optional[Resolution]:
	if is_video(video_path):
		if is_windows():
			video_path = sanitize_path_for_windows(video_path)
		video_capture = cv2.VideoCapture(video_path)
		if video_capture.isOpened():
			width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
			height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
			video_capture.release()
			return int(width), int(height)
	return None


def restrict_video_resolution(video_path : str, resolution : Resolution) -> Resolution:
	if is_video(video_path):
		video_resolution = detect_video_resolution(video_path)
		if video_resolution < resolution:
			return video_resolution
	return resolution


def create_video_resolutions(resolution : Resolution) -> List[str]:
	resolutions = []
	temp_resolutions = []

	if resolution:
		width, height = resolution
		temp_resolutions.append(normalize_resolution(resolution))
		for template_size in video_template_sizes:
			if width > height:
				temp_resolutions.append(normalize_resolution((template_size * width / height, template_size)))
			else:
				temp_resolutions.append(normalize_resolution((template_size, template_size * height / width)))
		temp_resolutions = sorted(set(temp_resolutions))
		for temp_resolution in temp_resolutions:
			resolutions.append(pack_resolution(temp_resolution))
	return resolutions


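The resolution helpers are pure functions, so the template expansion is easy to check in isolation. A minimal standalone sketch for a 1920x1080 landscape source, with `normalize_resolution` and `pack_resolution` inlined and a hypothetical template list (the real values live in `facefusion.choices.video_template_sizes`):

```python
from typing import List, Tuple

Resolution = Tuple[int, int]

# Inlined copies of the helpers in this module, so the sketch runs standalone.
def normalize_resolution(resolution : Tuple[float, float]) -> Resolution:
	width, height = resolution

	if width and height:
		return round(width / 2) * 2, round(height / 2) * 2
	return 0, 0


def pack_resolution(resolution : Resolution) -> str:
	width, height = normalize_resolution(resolution)
	return str(width) + 'x' + str(height)


# Hypothetical template heights -- assumed for illustration only.
video_template_sizes = [ 240, 360, 540, 720, 1080 ]

def create_video_resolutions(resolution : Resolution) -> List[str]:
	width, height = resolution
	temp_resolutions = [ normalize_resolution(resolution) ]

	# Each template height is scaled along the longer edge, keeping aspect ratio.
	for template_size in video_template_sizes:
		if width > height:
			temp_resolutions.append(normalize_resolution((template_size * width / height, template_size)))
		else:
			temp_resolutions.append(normalize_resolution((template_size, template_size * height / width)))
	return [ pack_resolution(temp_resolution) for temp_resolution in sorted(set(temp_resolutions)) ]


print(create_video_resolutions((1920, 1080)))
```

Note how `sorted(set(...))` deduplicates the 1080 template against the source resolution itself.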
def normalize_resolution(resolution : Tuple[float, float]) -> Resolution:
	width, height = resolution

	if width and height:
		normalize_width = round(width / 2) * 2
		normalize_height = round(height / 2) * 2
		return normalize_width, normalize_height
	return 0, 0


def pack_resolution(resolution : Resolution) -> str:
	width, height = normalize_resolution(resolution)
	return str(width) + 'x' + str(height)


def unpack_resolution(resolution : str) -> Resolution:
	width, height = map(int, resolution.split('x'))
	return width, height


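Because `pack_resolution` normalizes before formatting, odd dimensions only survive a pack/unpack round trip after even-rounding (and a zero dimension collapses the whole resolution to `0x0`). A small standalone sketch with the helpers inlined:

```python
from typing import Tuple

Resolution = Tuple[int, int]

# Inlined copies of the helpers in this module, so the sketch runs standalone.
def normalize_resolution(resolution : Tuple[float, float]) -> Resolution:
	width, height = resolution

	if width and height:
		return round(width / 2) * 2, round(height / 2) * 2
	return 0, 0


def pack_resolution(resolution : Resolution) -> str:
	width, height = normalize_resolution(resolution)
	return str(width) + 'x' + str(height)


def unpack_resolution(resolution : str) -> Resolution:
	width, height = map(int, resolution.split('x'))
	return width, height


# Odd dimensions are rounded to the nearest even value before packing;
# Python's round() uses banker's rounding, so 960.5 -> 960 and 540.5 -> 540.
print(pack_resolution((1921, 1081)))   # '1920x1080'
print(unpack_resolution('1280x720'))   # (1280, 720)
```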
def detect_frame_orientation(vision_frame : VisionFrame) -> Orientation:
	height, width = vision_frame.shape[:2]

	if width > height:
		return 'landscape'
	return 'portrait'


def resize_frame_resolution(vision_frame : VisionFrame, max_resolution : Resolution) -> VisionFrame:
	height, width = vision_frame.shape[:2]
	max_width, max_height = max_resolution

	if height > max_height or width > max_width:
		scale = min(max_height / height, max_width / width)
		new_width = int(width * scale)
		new_height = int(height * scale)
		return cv2.resize(vision_frame, (new_width, new_height))
	return vision_frame


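`resize_frame_resolution` only ever downscales, and it preserves aspect ratio by taking the smaller of the two scale factors. A standalone sketch of just that scaling logic (the `cv2.resize` call is left out, so it runs without OpenCV; `compute_restricted_size` is a hypothetical name for this excerpt):

```python
from typing import Tuple

def compute_restricted_size(height : int, width : int, max_resolution : Tuple[int, int]) -> Tuple[int, int]:
	# Mirrors the scaling logic of resize_frame_resolution: pick the
	# smaller scale factor so both dimensions fit inside max_resolution.
	max_width, max_height = max_resolution

	if height > max_height or width > max_width:
		scale = min(max_height / height, max_width / width)
		return int(width * scale), int(height * scale)
	return width, height


print(compute_restricted_size(2160, 3840, (1920, 1080)))  # (1920, 1080)
print(compute_restricted_size(480, 640, (1920, 1080)))    # unchanged: (640, 480)
```

Frames already inside the limit are returned unchanged, which is why the function is safe to call unconditionally in the preview path.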
def normalize_frame_color(vision_frame : VisionFrame) -> VisionFrame:
	return cv2.cvtColor(vision_frame, cv2.COLOR_BGR2RGB)


def create_tile_frames(vision_frame : VisionFrame, size : Size) -> Tuple[List[VisionFrame], int, int]:
	vision_frame = numpy.pad(vision_frame, ((size[1], size[1]), (size[1], size[1]), (0, 0)))
	tile_width = size[0] - 2 * size[2]
	pad_size_bottom = size[2] + tile_width - vision_frame.shape[0] % tile_width
	pad_size_right = size[2] + tile_width - vision_frame.shape[1] % tile_width
	pad_vision_frame = numpy.pad(vision_frame, ((size[2], pad_size_bottom), (size[2], pad_size_right), (0, 0)))
	pad_height, pad_width = pad_vision_frame.shape[:2]
	row_range = range(size[2], pad_height - size[2], tile_width)
	col_range = range(size[2], pad_width - size[2], tile_width)
	tile_vision_frames = []

	for row_vision_frame in row_range:
		top = row_vision_frame - size[2]
		bottom = row_vision_frame + size[2] + tile_width
		for column_vision_frame in col_range:
			left = column_vision_frame - size[2]
			right = column_vision_frame + size[2] + tile_width
			tile_vision_frames.append(pad_vision_frame[top:bottom, left:right, :])
	return tile_vision_frames, pad_width, pad_height


def merge_tile_frames(tile_vision_frames : List[VisionFrame], temp_width : int, temp_height : int, pad_width : int, pad_height : int, size : Size) -> VisionFrame:
	merge_vision_frame = numpy.zeros((pad_height, pad_width, 3)).astype(numpy.uint8)
	tile_width = tile_vision_frames[0].shape[1] - 2 * size[2]
	tiles_per_row = min(pad_width // tile_width, len(tile_vision_frames))

	for index, tile_vision_frame in enumerate(tile_vision_frames):
		tile_vision_frame = tile_vision_frame[size[2]:-size[2], size[2]:-size[2]]
		row_index = index // tiles_per_row
		col_index = index % tiles_per_row
		top = row_index * tile_vision_frame.shape[0]
		bottom = top + tile_vision_frame.shape[0]
		left = col_index * tile_vision_frame.shape[1]
		right = left + tile_vision_frame.shape[1]
		merge_vision_frame[top:bottom, left:right, :] = tile_vision_frame
	merge_vision_frame = merge_vision_frame[size[1] : size[1] + temp_height, size[1] : size[1] + temp_width, :]
	return merge_vision_frame
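For tiles that a processor leaves untouched, `create_tile_frames` and `merge_tile_frames` are exact inverses: pad, tile with overlap, strip the overlap, and crop back to the original frame. A standalone round-trip sketch with both helpers inlined and a hypothetical `Size` of `(64, 8, 4)` — 64px tiles, 8px outer padding, 4px overlap (the real sizes come from the processor model options):

```python
from typing import List, Tuple

import numpy

Size = Tuple[int, int, int]

# Inlined copies of the helpers in this module, so the sketch runs standalone.
def create_tile_frames(vision_frame, size : Size):
	vision_frame = numpy.pad(vision_frame, ((size[1], size[1]), (size[1], size[1]), (0, 0)))
	tile_width = size[0] - 2 * size[2]
	pad_size_bottom = size[2] + tile_width - vision_frame.shape[0] % tile_width
	pad_size_right = size[2] + tile_width - vision_frame.shape[1] % tile_width
	pad_vision_frame = numpy.pad(vision_frame, ((size[2], pad_size_bottom), (size[2], pad_size_right), (0, 0)))
	pad_height, pad_width = pad_vision_frame.shape[:2]
	tile_vision_frames = []

	for row in range(size[2], pad_height - size[2], tile_width):
		for col in range(size[2], pad_width - size[2], tile_width):
			tile_vision_frames.append(pad_vision_frame[row - size[2]:row + size[2] + tile_width, col - size[2]:col + size[2] + tile_width, :])
	return tile_vision_frames, pad_width, pad_height


def merge_tile_frames(tile_vision_frames, temp_width, temp_height, pad_width, pad_height, size : Size):
	merge_vision_frame = numpy.zeros((pad_height, pad_width, 3)).astype(numpy.uint8)
	tile_width = tile_vision_frames[0].shape[1] - 2 * size[2]
	tiles_per_row = min(pad_width // tile_width, len(tile_vision_frames))

	for index, tile_vision_frame in enumerate(tile_vision_frames):
		tile_vision_frame = tile_vision_frame[size[2]:-size[2], size[2]:-size[2]]
		top = (index // tiles_per_row) * tile_vision_frame.shape[0]
		left = (index % tiles_per_row) * tile_vision_frame.shape[1]
		merge_vision_frame[top:top + tile_vision_frame.shape[0], left:left + tile_vision_frame.shape[1], :] = tile_vision_frame
	return merge_vision_frame[size[1] : size[1] + temp_height, size[1] : size[1] + temp_width, :]


vision_frame = numpy.random.randint(0, 256, (100, 120, 3)).astype(numpy.uint8)
size = (64, 8, 4)
tile_vision_frames, pad_width, pad_height = create_tile_frames(vision_frame, size)
merge_vision_frame = merge_tile_frames(tile_vision_frames, 120, 100, pad_width, pad_height, size)
print(len(tile_vision_frames), numpy.array_equal(merge_vision_frame, vision_frame))
```

With these dimensions the 100x120 frame pads to 176x176 and splits into a 3x3 grid of 64x64 tiles; merging reconstructs the input exactly.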