Next (#216)
* Simplify bbox access
* Code cleanup
* Simplify bbox access
* Move code to face helper
* Swap and paste back without insightface
* Swap and paste back without insightface
* Remove semaphore where possible
* Improve paste back performance
* Cosmetic changes
* Move the predictor to ONNX to avoid tensorflow, Use video ranges for prediction
* Make CI happy
* Move template and size to the options
* Fix different color on box
* Uniform model handling for predictor
* Uniform frame handling for predictor
* Pass kps direct to warp_face
* Fix urllib
* Analyse based on matches
* Analyse based on rate
* Fix CI
* ROCM and OpenVINO mapping for torch backends
* Fix the paste back speed
* Fix import
* Replace retinaface with yunet (#168)
* Remove insightface dependency
* Fix urllib
* Some fixes
* Analyse based on matches
* Analyse based on rate
* Fix CI
* Migrate to Yunet
* Something is off here
* We indeed need semaphore for yunet
* Normalize the normed_embedding
* Fix download of models
* Fix download of models
* Fix download of models
* Add score and improve affine_matrix
* Temp fix for bbox out of frame
* Temp fix for bbox out of frame
* ROCM and OpenVINO mapping for torch backends
* Normalize bbox
* Implement gender age
* Cosmetics on cli args
* Prevent face jumping
* Fix the paste back speed
* Fix import
* Introduce detection size
* Cosmetics on face analyser ARGS and globals
* Temp fix for shaking face
* Accurate event handling
* Accurate event handling
* Accurate event handling
* Set the reference_frame_number in face_selector component
* Simswap model (#171)
* Add simswap models
* Add ghost models
* Introduce normed template
* Conditional prepare and normalize for ghost
* Conditional prepare and normalize for ghost
* Get simswap working
* Get simswap working
* Fix refresh of swapper model
* Refine face selection and detection (#174)
* Refine face selection and detection
* Update README.md
* Fix some face analyser UI
* Fix some face analyser UI
* Introduce range handling for CLI arguments
* Introduce range handling for CLI arguments
* Fix some spacings
* Disable onnxruntime warnings
* Use cv2.blur over cv2.GaussianBlur for better performance
* Revert "Use cv2.blur over cv2.GaussianBlur for better performance" This reverts commit bab666d6f9216a9f24faa84ead2d006b76f30159.
* Prepare universal face detection
* Prepare universal face detection part2
* Reimplement retinaface
* Introduce cached anchors creation
* Restore filtering to enhance performance
* Minor changes
* Minor changes
* More code but easier to understand
* Minor changes
* Rename predictor to content analyser
* Change detection/recognition to detector/recognizer
* Fix crop frame borders
* Fix spacing
* Allow normalize output without a source
* Improve conditional set face reference
* Update dependencies
* Add timeout for get_download_size
* Fix performance due disorder
* Move models to assets repository, Adjust namings
* Refactor face analyser
* Rename models once again
* Fix spacing
* Highres simswap (#192)
* Introduce highres simswap
* Fix simswap 256 color issue (#191)
* Fix simswap 256 color issue
* Update face_swapper.py
* Normalize models and host in our repo
* Normalize models and host in our repo
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Rename face analyser direction to face analyser order
* Improve the UI for face selector
* Add best-worst, worst-best detector ordering
* Clear as needed and fix zero score bug
* Fix linter
* Improve startup time by multi thread remote download size
* Just some cosmetics
* Normalize swagger source input, Add blendface_256 (unfinished)
* New paste back (#195)
* add new paste_back (#194)
* add new paste_back
* Update face_helper.py
* Update face_helper.py
* add commandline arguments and gui
* fix conflict
* Update face_mask.py
* type fix
* Clean some wording and typing
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Clean more names, use blur range approach
* Add blur padding range
* Change the padding order
* Fix yunet filename
* Introduce face debugger
* Use percent for mask padding
* Ignore this
* Ignore this
* Simplify debugger output
* implement blendface (#198)
* Clean up after the genius
* Add gpen_bfr_256
* Cosmetics
* Ignore face_mask_padding on face enhancer
* Update face_debugger.py (#202)
* Shrink debug_face() to a minimum
* Mark as 2.0.0 release
* remove unused (#204)
* Apply NMS (#205)
* Apply NMS
* Apply NMS part2
* Fix restoreformer url
* Add debugger cli and gui components (#206)
* Add debugger cli and gui components
* update
* Polishing the types
* Fix usage in README.md
* Update onnxruntime
* Support for webp
* Rename paste-back to face-mask
* Add license to README
* Add license to README
* Extend face selector mode by one
* Update utilities.py (#212)
* Stop inline camera on stream
* Minor webcam updates
* Gracefully start and stop webcam
* Rename capture to video_capture
* Make get webcam capture pure
* Check webcam to not be None
* Remove some is not None
* Use index 0 for webcam
* Remove memory lookup within progress bar
* Less progress bar updates
* Uniform progress bar
* Use classic progress bar
* Fix image and video validation
* Use different hash for cache
* Use best-worse order for webcam
* Normalize padding like CSS
* Update preview
* Fix max memory
* Move disclaimer and license to the docs
* Update wording in README
* Add LICENSE.md
* Fix argument in README
---------
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: alex00ds <31631959+alex00ds@users.noreply.github.com>
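Several entries above name standard techniques, e.g. "Apply NMS". As a point of reference, here is a minimal sketch of classic greedy non-maximum suppression over detection boxes in plain Python — an illustration of the technique, not facefusion's actual implementation, and the threshold value is assumed:

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (start_x, start_y, end_x, end_y)


def iou(box_a : Box, box_b : Box) -> float:
	# intersection-over-union of two boxes; 0.0 when they do not overlap
	inter_x1 = max(box_a[0], box_b[0])
	inter_y1 = max(box_a[1], box_b[1])
	inter_x2 = min(box_a[2], box_b[2])
	inter_y2 = min(box_a[3], box_b[3])
	inter = max(0, inter_x2 - inter_x1) * max(0, inter_y2 - inter_y1)
	area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
	area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
	return inter / float(area_a + area_b - inter)


def apply_nms(boxes : List[Box], scores : List[float], iou_threshold : float = 0.4) -> List[int]:
	# greedily keep the highest scoring box, drop every box that overlaps it too much
	order = sorted(range(len(boxes)), key = lambda index: scores[index], reverse = True)
	keep = []
	while order:
		best = order.pop(0)
		keep.append(best)
		order = [ index for index in order if iou(boxes[best], boxes[index]) <= iou_threshold ]
	return keep
```

Two nearly identical boxes collapse to the higher scoring one, while a distant box survives regardless of its score.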
@@ -3,7 +3,7 @@ import gradio

 import facefusion.globals
 from facefusion import wording
-from facefusion.uis import choices
+from facefusion.uis import choices as uis_choices

 COMMON_OPTIONS_CHECKBOX_GROUP : Optional[gradio.Checkboxgroup] = None

@@ -22,7 +22,7 @@ def render() -> None:
 		value.append('skip-download')
 	COMMON_OPTIONS_CHECKBOX_GROUP = gradio.Checkboxgroup(
 		label = wording.get('common_options_checkbox_group_label'),
-		choices = choices.common_options,
+		choices = uis_choices.common_options,
 		value = value
 	)

@@ -2,6 +2,7 @@ from typing import Optional

 import gradio

 import facefusion.globals
+import facefusion.choices
 from facefusion import wording

 EXECUTION_QUEUE_COUNT_SLIDER : Optional[gradio.Slider] = None

@@ -13,9 +14,9 @@ def render() -> None:
 	EXECUTION_QUEUE_COUNT_SLIDER = gradio.Slider(
 		label = wording.get('execution_queue_count_slider_label'),
 		value = facefusion.globals.execution_queue_count,
-		step = 1,
-		minimum = 1,
-		maximum = 16
+		step = facefusion.choices.execution_queue_count_range[1] - facefusion.choices.execution_queue_count_range[0],
+		minimum = facefusion.choices.execution_queue_count_range[0],
+		maximum = facefusion.choices.execution_queue_count_range[-1]
 	)

@@ -25,4 +26,3 @@ def listen() -> None:

 def update_execution_queue_count(execution_queue_count : int = 1) -> None:
 	facefusion.globals.execution_queue_count = execution_queue_count
-
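The hunk above replaces hard-coded slider bounds with values derived from a single range sequence in `facefusion.choices`. A minimal sketch of that convention — the concrete range contents here are assumed for illustration, not copied from facefusion:

```python
# A choices module exposes one monotonically increasing range per option.
# Step, minimum and maximum all fall out of the same sequence, so the CLI
# and the UI cannot drift apart.
execution_queue_count_range = list(range(1, 17))  # assumed: [1, 2, ..., 16]

step = execution_queue_count_range[1] - execution_queue_count_range[0]  # difference of first two entries
minimum = execution_queue_count_range[0]                                # first entry
maximum = execution_queue_count_range[-1]                               # last entry
```

Because every slider reads its bounds from the range, changing the range in one place updates the slider, the CLI validation, and any documentation generated from it.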
@@ -2,6 +2,7 @@ from typing import Optional

 import gradio

 import facefusion.globals
+import facefusion.choices
 from facefusion import wording

 EXECUTION_THREAD_COUNT_SLIDER : Optional[gradio.Slider] = None

@@ -13,9 +14,9 @@ def render() -> None:
 	EXECUTION_THREAD_COUNT_SLIDER = gradio.Slider(
 		label = wording.get('execution_thread_count_slider_label'),
 		value = facefusion.globals.execution_thread_count,
-		step = 1,
-		minimum = 1,
-		maximum = 128
+		step = facefusion.choices.execution_thread_count_range[1] - facefusion.choices.execution_thread_count_range[0],
+		minimum = facefusion.choices.execution_thread_count_range[0],
+		maximum = facefusion.choices.execution_thread_count_range[-1]
 	)

@@ -2,49 +2,97 @@ from typing import Optional

 import gradio

-import facefusion.choices
 import facefusion.globals
+import facefusion.choices
 from facefusion import wording
 from facefusion.typing import FaceAnalyserOrder, FaceAnalyserAge, FaceAnalyserGender, FaceDetectorModel
 from facefusion.uis.core import register_ui_component

-FACE_ANALYSER_DIRECTION_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_ANALYSER_ORDER_DROPDOWN : Optional[gradio.Dropdown] = None
 FACE_ANALYSER_AGE_DROPDOWN : Optional[gradio.Dropdown] = None
 FACE_ANALYSER_GENDER_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_DETECTOR_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_DETECTOR_SCORE_SLIDER : Optional[gradio.Slider] = None
 FACE_DETECTOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None


 def render() -> None:
-	global FACE_ANALYSER_DIRECTION_DROPDOWN
+	global FACE_ANALYSER_ORDER_DROPDOWN
 	global FACE_ANALYSER_AGE_DROPDOWN
 	global FACE_ANALYSER_GENDER_DROPDOWN
+	global FACE_DETECTOR_SIZE_DROPDOWN
+	global FACE_DETECTOR_SCORE_SLIDER
 	global FACE_DETECTOR_MODEL_DROPDOWN

-	FACE_ANALYSER_DIRECTION_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_analyser_direction_dropdown_label'),
-		choices = facefusion.choices.face_analyser_directions,
-		value = facefusion.globals.face_analyser_direction
-	)
-	FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_analyser_age_dropdown_label'),
-		choices = ['none'] + facefusion.choices.face_analyser_ages,
-		value = facefusion.globals.face_analyser_age or 'none'
-	)
-	FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_analyser_gender_dropdown_label'),
-		choices = ['none'] + facefusion.choices.face_analyser_genders,
-		value = facefusion.globals.face_analyser_gender or 'none'
-	)
+	with gradio.Row():
+		FACE_ANALYSER_ORDER_DROPDOWN = gradio.Dropdown(
+			label = wording.get('face_analyser_order_dropdown_label'),
+			choices = facefusion.choices.face_analyser_orders,
+			value = facefusion.globals.face_analyser_order
+		)
+		FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown(
+			label = wording.get('face_analyser_age_dropdown_label'),
+			choices = [ 'none' ] + facefusion.choices.face_analyser_ages,
+			value = facefusion.globals.face_analyser_age or 'none'
+		)
+		FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown(
+			label = wording.get('face_analyser_gender_dropdown_label'),
+			choices = [ 'none' ] + facefusion.choices.face_analyser_genders,
+			value = facefusion.globals.face_analyser_gender or 'none'
+		)
 	FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('face_detector_model_dropdown_label'),
 		choices = facefusion.choices.face_detector_models,
 		value = facefusion.globals.face_detector_model
 	)
+	FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(
+		label = wording.get('face_detector_size_dropdown_label'),
+		choices = facefusion.choices.face_detector_sizes,
+		value = facefusion.globals.face_detector_size
+	)
+	FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
+		label = wording.get('face_detector_score_slider_label'),
+		value = facefusion.globals.face_detector_score,
+		step = facefusion.choices.face_detector_score_range[1] - facefusion.choices.face_detector_score_range[0],
+		minimum = facefusion.choices.face_detector_score_range[0],
+		maximum = facefusion.choices.face_detector_score_range[-1]
+	)
-	register_ui_component('face_analyser_direction_dropdown', FACE_ANALYSER_DIRECTION_DROPDOWN)
+	register_ui_component('face_analyser_order_dropdown', FACE_ANALYSER_ORDER_DROPDOWN)
 	register_ui_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN)
 	register_ui_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN)
 	register_ui_component('face_detector_model_dropdown', FACE_DETECTOR_MODEL_DROPDOWN)
+	register_ui_component('face_detector_size_dropdown', FACE_DETECTOR_SIZE_DROPDOWN)
+	register_ui_component('face_detector_score_slider', FACE_DETECTOR_SCORE_SLIDER)


 def listen() -> None:
-	FACE_ANALYSER_DIRECTION_DROPDOWN.select(lambda value: update_dropdown('face_analyser_direction', value), inputs = FACE_ANALYSER_DIRECTION_DROPDOWN)
-	FACE_ANALYSER_AGE_DROPDOWN.select(lambda value: update_dropdown('face_analyser_age', value), inputs = FACE_ANALYSER_AGE_DROPDOWN)
-	FACE_ANALYSER_GENDER_DROPDOWN.select(lambda value: update_dropdown('face_analyser_gender', value), inputs = FACE_ANALYSER_GENDER_DROPDOWN)
+	FACE_ANALYSER_ORDER_DROPDOWN.select(update_face_analyser_order, inputs = FACE_ANALYSER_ORDER_DROPDOWN)
+	FACE_ANALYSER_AGE_DROPDOWN.select(update_face_analyser_age, inputs = FACE_ANALYSER_AGE_DROPDOWN)
+	FACE_ANALYSER_GENDER_DROPDOWN.select(update_face_analyser_gender, inputs = FACE_ANALYSER_GENDER_DROPDOWN)
+	FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN)
+	FACE_DETECTOR_SIZE_DROPDOWN.select(update_face_detector_size, inputs = FACE_DETECTOR_SIZE_DROPDOWN)
+	FACE_DETECTOR_SCORE_SLIDER.change(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)


-def update_dropdown(name : str, value : str) -> None:
-	if value == 'none':
-		setattr(facefusion.globals, name, None)
-	else:
-		setattr(facefusion.globals, name, value)
+def update_face_analyser_order(face_analyser_order : FaceAnalyserOrder) -> None:
+	facefusion.globals.face_analyser_order = face_analyser_order if face_analyser_order != 'none' else None


+def update_face_analyser_age(face_analyser_age : FaceAnalyserAge) -> None:
+	facefusion.globals.face_analyser_age = face_analyser_age if face_analyser_age != 'none' else None


+def update_face_analyser_gender(face_analyser_gender : FaceAnalyserGender) -> None:
+	facefusion.globals.face_analyser_gender = face_analyser_gender if face_analyser_gender != 'none' else None


+def update_face_detector_model(face_detector_model : FaceDetectorModel) -> None:
+	facefusion.globals.face_detector_model = face_detector_model


+def update_face_detector_size(face_detector_size : str) -> None:
+	facefusion.globals.face_detector_size = face_detector_size


+def update_face_detector_score(face_detector_score : float) -> None:
+	facefusion.globals.face_detector_score = face_detector_score
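The analyser dropdowns above expose a literal `'none'` choice while the globals store `None` — the `value ... or 'none'` reads and the `if ... != 'none' else None` writes are the two halves of one convention. A minimal sketch of that mapping (the helper name is ours, not facefusion's):

```python
from typing import Optional

def normalize_dropdown_value(value : str) -> Optional[str]:
	# Gradio dropdowns cannot hold None, so 'none' stands in for it in the UI;
	# writing back to the globals restores None, mirroring the update handlers.
	return value if value != 'none' else None

def denormalize_dropdown_value(value : Optional[str]) -> str:
	# the inverse direction, as used when seeding the dropdown's initial value
	return value or 'none'
```

Keeping both directions next to each other makes it obvious the round trip is lossless for every choice except the sentinel itself.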
80
facefusion/uis/components/face_mask.py
Executable file
@@ -0,0 +1,80 @@
+from typing import Optional
+
+import gradio
+
+import facefusion.globals
+import facefusion.choices
+from facefusion import wording
+from facefusion.uis.core import register_ui_component
+
+FACE_MASK_BLUR_SLIDER : Optional[gradio.Slider] = None
+FACE_MASK_PADDING_TOP_SLIDER : Optional[gradio.Slider] = None
+FACE_MASK_PADDING_RIGHT_SLIDER : Optional[gradio.Slider] = None
+FACE_MASK_PADDING_BOTTOM_SLIDER : Optional[gradio.Slider] = None
+FACE_MASK_PADDING_LEFT_SLIDER : Optional[gradio.Slider] = None
+
+
+def render() -> None:
+	global FACE_MASK_BLUR_SLIDER
+	global FACE_MASK_PADDING_TOP_SLIDER
+	global FACE_MASK_PADDING_RIGHT_SLIDER
+	global FACE_MASK_PADDING_BOTTOM_SLIDER
+	global FACE_MASK_PADDING_LEFT_SLIDER
+
+	FACE_MASK_BLUR_SLIDER = gradio.Slider(
+		label = wording.get('face_mask_blur_slider_label'),
+		step = facefusion.choices.face_mask_blur_range[1] - facefusion.choices.face_mask_blur_range[0],
+		minimum = facefusion.choices.face_mask_blur_range[0],
+		maximum = facefusion.choices.face_mask_blur_range[-1],
+		value = facefusion.globals.face_mask_blur
+	)
+	with gradio.Group():
+		with gradio.Row():
+			FACE_MASK_PADDING_TOP_SLIDER = gradio.Slider(
+				label = wording.get('face_mask_padding_top_slider_label'),
+				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				minimum = facefusion.choices.face_mask_padding_range[0],
+				maximum = facefusion.choices.face_mask_padding_range[-1],
+				value = facefusion.globals.face_mask_padding[0]
+			)
+			FACE_MASK_PADDING_RIGHT_SLIDER = gradio.Slider(
+				label = wording.get('face_mask_padding_right_slider_label'),
+				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				minimum = facefusion.choices.face_mask_padding_range[0],
+				maximum = facefusion.choices.face_mask_padding_range[-1],
+				value = facefusion.globals.face_mask_padding[1]
+			)
+		with gradio.Row():
+			FACE_MASK_PADDING_BOTTOM_SLIDER = gradio.Slider(
+				label = wording.get('face_mask_padding_bottom_slider_label'),
+				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				minimum = facefusion.choices.face_mask_padding_range[0],
+				maximum = facefusion.choices.face_mask_padding_range[-1],
+				value = facefusion.globals.face_mask_padding[2]
+			)
+			FACE_MASK_PADDING_LEFT_SLIDER = gradio.Slider(
+				label = wording.get('face_mask_padding_left_slider_label'),
+				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				minimum = facefusion.choices.face_mask_padding_range[0],
+				maximum = facefusion.choices.face_mask_padding_range[-1],
+				value = facefusion.globals.face_mask_padding[3]
+			)
+	register_ui_component('face_mask_blur_slider', FACE_MASK_BLUR_SLIDER)
+	register_ui_component('face_mask_padding_top_slider', FACE_MASK_PADDING_TOP_SLIDER)
+	register_ui_component('face_mask_padding_right_slider', FACE_MASK_PADDING_RIGHT_SLIDER)
+	register_ui_component('face_mask_padding_bottom_slider', FACE_MASK_PADDING_BOTTOM_SLIDER)
+	register_ui_component('face_mask_padding_left_slider', FACE_MASK_PADDING_LEFT_SLIDER)
+
+
+def listen() -> None:
+	FACE_MASK_BLUR_SLIDER.change(update_face_mask_blur, inputs = FACE_MASK_BLUR_SLIDER)
+	face_mask_padding_sliders = [ FACE_MASK_PADDING_TOP_SLIDER, FACE_MASK_PADDING_RIGHT_SLIDER, FACE_MASK_PADDING_BOTTOM_SLIDER, FACE_MASK_PADDING_LEFT_SLIDER ]
+	for face_mask_padding_slider in face_mask_padding_sliders:
+		face_mask_padding_slider.change(update_face_mask_padding, inputs = face_mask_padding_sliders)
+
+
+def update_face_mask_blur(face_mask_blur : float) -> None:
+	facefusion.globals.face_mask_blur = face_mask_blur
+
+
+def update_face_mask_padding(face_mask_padding_top : int, face_mask_padding_right : int, face_mask_padding_bottom : int, face_mask_padding_left : int) -> None:
+	facefusion.globals.face_mask_padding = (face_mask_padding_top, face_mask_padding_right, face_mask_padding_bottom, face_mask_padding_left)
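The four padding sliders above collapse into one `(top, right, bottom, left)` tuple, and the commit message mentions normalizing padding "like CSS". A sketch of what that shorthand expansion could look like, assuming CSS semantics (one value for all sides, two for vertical/horizontal, three for top/horizontal/bottom) — the helper name is hypothetical, not facefusion's API:

```python
from typing import List, Tuple

def expand_face_mask_padding(values : List[int]) -> Tuple[int, int, int, int]:
	# CSS-style shorthand: fewer than four values are mirrored onto the
	# missing sides, always yielding a (top, right, bottom, left) tuple.
	if len(values) == 1:
		top = right = bottom = left = values[0]
	elif len(values) == 2:
		top, right = values
		bottom, left = values
	elif len(values) == 3:
		top, right, bottom = values
		left = values[1]
	else:
		top, right, bottom, left = values
	return (top, right, bottom, left)
```

This lets a CLI accept `--face-mask-padding 5` or `--face-mask-padding 5 10` while the rest of the code only ever sees the full four-tuple.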
@@ -2,35 +2,35 @@ from typing import List, Optional, Tuple, Any, Dict

 import gradio

-import facefusion.choices
 import facefusion.globals
+import facefusion.choices
 from facefusion import wording
-from facefusion.vision import get_video_frame, normalize_frame_color, read_static_image
+from facefusion.face_cache import clear_faces_cache
+from facefusion.vision import get_video_frame, read_static_image, normalize_frame_color
 from facefusion.face_analyser import get_many_faces
 from facefusion.face_reference import clear_face_reference
-from facefusion.typing import Frame, FaceRecognition
+from facefusion.typing import Frame, FaceSelectorMode
 from facefusion.utilities import is_image, is_video
 from facefusion.uis.core import get_ui_component, register_ui_component
 from facefusion.uis.typing import ComponentName

-FACE_RECOGNITION_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_SELECTOR_MODE_DROPDOWN : Optional[gradio.Dropdown] = None
 REFERENCE_FACE_POSITION_GALLERY : Optional[gradio.Gallery] = None
 REFERENCE_FACE_DISTANCE_SLIDER : Optional[gradio.Slider] = None


 def render() -> None:
-	global FACE_RECOGNITION_DROPDOWN
+	global FACE_SELECTOR_MODE_DROPDOWN
 	global REFERENCE_FACE_POSITION_GALLERY
 	global REFERENCE_FACE_DISTANCE_SLIDER

 	reference_face_gallery_args : Dict[str, Any] =\
 	{
 		'label': wording.get('reference_face_gallery_label'),
 		'height': 120,
 		'object_fit': 'cover',
-		'columns': 10,
+		'columns': 8,
 		'allow_preview': False,
-		'visible': 'reference' in facefusion.globals.face_recognition
+		'visible': 'reference' in facefusion.globals.face_selector_mode
 	}
 	if is_image(facefusion.globals.target_path):
 		reference_frame = read_static_image(facefusion.globals.target_path)
@@ -38,32 +38,31 @@ def render() -> None:
 	if is_video(facefusion.globals.target_path):
 		reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
 	reference_face_gallery_args['value'] = extract_gallery_frames(reference_frame)
-	FACE_RECOGNITION_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_recognition_dropdown_label'),
-		choices = facefusion.choices.face_recognitions,
-		value = facefusion.globals.face_recognition
+	FACE_SELECTOR_MODE_DROPDOWN = gradio.Dropdown(
+		label = wording.get('face_selector_mode_dropdown_label'),
+		choices = facefusion.choices.face_selector_modes,
+		value = facefusion.globals.face_selector_mode
 	)
 	REFERENCE_FACE_POSITION_GALLERY = gradio.Gallery(**reference_face_gallery_args)
 	REFERENCE_FACE_DISTANCE_SLIDER = gradio.Slider(
 		label = wording.get('reference_face_distance_slider_label'),
 		value = facefusion.globals.reference_face_distance,
-		step = 0.05,
-		minimum = 0,
-		maximum = 3,
-		visible = 'reference' in facefusion.globals.face_recognition
+		step = facefusion.choices.reference_face_distance_range[1] - facefusion.choices.reference_face_distance_range[0],
+		minimum = facefusion.choices.reference_face_distance_range[0],
+		maximum = facefusion.choices.reference_face_distance_range[-1],
+		visible = 'reference' in facefusion.globals.face_selector_mode
 	)
-	register_ui_component('face_recognition_dropdown', FACE_RECOGNITION_DROPDOWN)
+	register_ui_component('face_selector_mode_dropdown', FACE_SELECTOR_MODE_DROPDOWN)
 	register_ui_component('reference_face_position_gallery', REFERENCE_FACE_POSITION_GALLERY)
 	register_ui_component('reference_face_distance_slider', REFERENCE_FACE_DISTANCE_SLIDER)


 def listen() -> None:
-	FACE_RECOGNITION_DROPDOWN.select(update_face_recognition, inputs = FACE_RECOGNITION_DROPDOWN, outputs = [ REFERENCE_FACE_POSITION_GALLERY, REFERENCE_FACE_DISTANCE_SLIDER ])
-	REFERENCE_FACE_POSITION_GALLERY.select(clear_and_update_face_reference_position)
+	FACE_SELECTOR_MODE_DROPDOWN.select(update_face_selector_mode, inputs = FACE_SELECTOR_MODE_DROPDOWN, outputs = [ REFERENCE_FACE_POSITION_GALLERY, REFERENCE_FACE_DISTANCE_SLIDER ])
+	REFERENCE_FACE_POSITION_GALLERY.select(clear_and_update_reference_face_position)
 	REFERENCE_FACE_DISTANCE_SLIDER.change(update_reference_face_distance, inputs = REFERENCE_FACE_DISTANCE_SLIDER)
 	multi_component_names : List[ComponentName] =\
 	[
 		'source_image',
 		'target_image',
 		'target_video'
 	]
@@ -71,39 +70,73 @@ def listen() -> None:
 		component = get_ui_component(component_name)
 		if component:
 			for method in [ 'upload', 'change', 'clear' ]:
-				getattr(component, method)(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
-	select_component_names : List[ComponentName] =\
+				getattr(component, method)(update_reference_face_position)
+				getattr(component, method)(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	change_one_component_names : List[ComponentName] =\
 	[
-		'face_analyser_direction_dropdown',
+		'face_analyser_order_dropdown',
 		'face_analyser_age_dropdown',
 		'face_analyser_gender_dropdown'
 	]
-	for component_name in select_component_names:
+	for component_name in change_one_component_names:
 		component = get_ui_component(component_name)
 		if component:
-			component.select(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
+			component.change(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	change_two_component_names : List[ComponentName] =\
+	[
+		'face_detector_model_dropdown',
+		'face_detector_size_dropdown',
+		'face_detector_score_slider'
+	]
+	for component_name in change_two_component_names:
+		component = get_ui_component(component_name)
+		if component:
+			component.change(clear_and_update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
 	preview_frame_slider = get_ui_component('preview_frame_slider')
 	if preview_frame_slider:
-		preview_frame_slider.release(update_face_reference_position, outputs = REFERENCE_FACE_POSITION_GALLERY)
+		preview_frame_slider.change(update_reference_frame_number, inputs = preview_frame_slider)
+		preview_frame_slider.release(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)


-def update_face_recognition(face_recognition : FaceRecognition) -> Tuple[gradio.Gallery, gradio.Slider]:
-	if face_recognition == 'reference':
-		facefusion.globals.face_recognition = face_recognition
+def update_face_selector_mode(face_selector_mode : FaceSelectorMode) -> Tuple[gradio.Gallery, gradio.Slider]:
+	if face_selector_mode == 'reference':
+		facefusion.globals.face_selector_mode = face_selector_mode
 		return gradio.Gallery(visible = True), gradio.Slider(visible = True)
-	if face_recognition == 'many':
-		facefusion.globals.face_recognition = face_recognition
+	if face_selector_mode == 'one':
+		facefusion.globals.face_selector_mode = face_selector_mode
+		return gradio.Gallery(visible = False), gradio.Slider(visible = False)
+	if face_selector_mode == 'many':
+		facefusion.globals.face_selector_mode = face_selector_mode
 		return gradio.Gallery(visible = False), gradio.Slider(visible = False)


-def clear_and_update_face_reference_position(event: gradio.SelectData) -> gradio.Gallery:
+def clear_and_update_reference_face_position(event : gradio.SelectData) -> gradio.Gallery:
 	clear_face_reference()
-	return update_face_reference_position(event.index)
+	clear_faces_cache()
+	update_reference_face_position(event.index)
+	return update_reference_position_gallery()


-def update_face_reference_position(reference_face_position : int = 0) -> gradio.Gallery:
-	gallery_frames = []
+def update_reference_face_position(reference_face_position : int = 0) -> None:
+	facefusion.globals.reference_face_position = reference_face_position


+def update_reference_face_distance(reference_face_distance : float) -> None:
+	facefusion.globals.reference_face_distance = reference_face_distance


+def update_reference_frame_number(reference_frame_number : int) -> None:
+	facefusion.globals.reference_frame_number = reference_frame_number


+def clear_and_update_reference_position_gallery() -> gradio.Gallery:
+	clear_face_reference()
+	clear_faces_cache()
+	return update_reference_position_gallery()


+def update_reference_position_gallery() -> gradio.Gallery:
+	gallery_frames = []
 	if is_image(facefusion.globals.target_path):
 		reference_frame = read_static_image(facefusion.globals.target_path)
 		gallery_frames = extract_gallery_frames(reference_frame)
@@ -115,15 +148,11 @@ def update_face_reference_position(reference_face_position : int = 0) -> gradio.
 	return gradio.Gallery(value = None)


-def update_reference_face_distance(reference_face_distance : float) -> None:
-	facefusion.globals.reference_face_distance = reference_face_distance


 def extract_gallery_frames(reference_frame : Frame) -> List[Frame]:
 	crop_frames = []
 	faces = get_many_faces(reference_frame)
 	for face in faces:
-		start_x, start_y, end_x, end_y = map(int, face['bbox'])
+		start_x, start_y, end_x, end_y = map(int, face.bbox)
 		padding_x = int((end_x - start_x) * 0.25)
 		padding_y = int((end_y - start_y) * 0.25)
 		start_x = max(0, start_x - padding_x)
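The `extract_gallery_frames` loop above pads each detected bounding box by 25 % of its size before cropping a gallery thumbnail, clamping at the frame edges. A self-contained sketch of that box arithmetic — the clamping of the end coordinates is our assumption, since the diff is truncated after the `start_x` clamp:

```python
from typing import Tuple

Box = Tuple[float, float, float, float]

def padded_crop_box(bbox : Box, frame_width : int, frame_height : int) -> Tuple[int, int, int, int]:
	# grow the box by a quarter of its width/height on each side,
	# then clamp so the crop never leaves the frame
	start_x, start_y, end_x, end_y = map(int, bbox)
	padding_x = int((end_x - start_x) * 0.25)
	padding_y = int((end_y - start_y) * 0.25)
	start_x = max(0, start_x - padding_x)
	start_y = max(0, start_y - padding_y)
	end_x = min(frame_width, end_x + padding_x)    # assumed clamp
	end_y = min(frame_height, end_y + padding_y)   # assumed clamp
	return start_x, start_y, end_x, end_y
```

The extra margin keeps hair and chin inside the thumbnail instead of cropping tightly to the detector's box.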
47
facefusion/uis/components/frame_processors_options.py
Normal file → Executable file
@@ -5,6 +5,7 @@ import facefusion.globals
|
||||
from facefusion import wording
|
||||
from facefusion.processors.frame.core import load_frame_processor_module
|
||||
from facefusion.processors.frame import globals as frame_processors_globals, choices as frame_processors_choices
|
||||
from facefusion.processors.frame.typings import FaceSwapperModel, FaceEnhancerModel, FrameEnhancerModel, FaceDebuggerItem
|
||||
from facefusion.uis.core import get_ui_component, register_ui_component
|
||||
|
||||
FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
|
||||
@@ -12,6 +13,7 @@ FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
|
||||
FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
|
||||
FRAME_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
|
||||
FRAME_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
|
||||
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
|
||||
|
||||
|
||||
def render() -> None:
|
||||
@@ -20,6 +22,7 @@ def render() -> None:
|
||||
global FACE_ENHANCER_BLEND_SLIDER
|
||||
global FRAME_ENHANCER_MODEL_DROPDOWN
|
||||
global FRAME_ENHANCER_BLEND_SLIDER
|
||||
global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
|
||||
|
||||
FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
|
||||
label = wording.get('face_swapper_model_dropdown_label'),
|
||||
@@ -36,9 +39,9 @@ def render() -> None:
|
||||
FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
|
||||
label = wording.get('face_enhancer_blend_slider_label'),
|
||||
value = frame_processors_globals.face_enhancer_blend,
|
||||
step = 1,
|
||||
minimum = 0,
|
||||
maximum = 100,
|
||||
step = frame_processors_choices.face_enhancer_blend_range[1] - frame_processors_choices.face_enhancer_blend_range[0],
|
||||
minimum = frame_processors_choices.face_enhancer_blend_range[0],
|
||||
maximum = frame_processors_choices.face_enhancer_blend_range[-1],
|
||||
visible = 'face_enhancer' in facefusion.globals.frame_processors
|
||||
)
	FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
@@ -50,16 +53,24 @@ def render() -> None:
	FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
		label = wording.get('frame_enhancer_blend_slider_label'),
		value = frame_processors_globals.frame_enhancer_blend,
-		step = 1,
-		minimum = 0,
-		maximum = 100,
+		step = frame_processors_choices.frame_enhancer_blend_range[1] - frame_processors_choices.frame_enhancer_blend_range[0],
+		minimum = frame_processors_choices.frame_enhancer_blend_range[0],
+		maximum = frame_processors_choices.frame_enhancer_blend_range[-1],
		visible = 'frame_enhancer' in facefusion.globals.frame_processors
	)
+	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
+		label = wording.get('face_debugger_items_checkbox_group_label'),
+		choices = frame_processors_choices.face_debugger_items,
+		value = frame_processors_globals.face_debugger_items,
+		visible = 'face_debugger' in facefusion.globals.frame_processors
+	)

	register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
	register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
	register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)
	register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
	register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)
+	register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)


def listen() -> None:
@@ -68,13 +79,20 @@ def listen() -> None:
	FACE_ENHANCER_BLEND_SLIDER.change(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
	FRAME_ENHANCER_MODEL_DROPDOWN.change(update_frame_enhancer_model, inputs = FRAME_ENHANCER_MODEL_DROPDOWN, outputs = FRAME_ENHANCER_MODEL_DROPDOWN)
	FRAME_ENHANCER_BLEND_SLIDER.change(update_frame_enhancer_blend, inputs = FRAME_ENHANCER_BLEND_SLIDER)
+	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP.change(update_face_debugger_items, inputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
	frame_processors_checkbox_group = get_ui_component('frame_processors_checkbox_group')
	if frame_processors_checkbox_group:
-		frame_processors_checkbox_group.change(toggle_face_swapper_model, inputs = frame_processors_checkbox_group, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER ])
+		frame_processors_checkbox_group.change(toggle_face_swapper_model, inputs = frame_processors_checkbox_group, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER, FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP ])


-def update_face_swapper_model(face_swapper_model : str) -> gradio.Dropdown:
+def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> gradio.Dropdown:
	frame_processors_globals.face_swapper_model = face_swapper_model
	if face_swapper_model == 'blendface_256':
		facefusion.globals.face_recognizer_model = 'arcface_blendface'
	if face_swapper_model == 'inswapper_128' or face_swapper_model == 'inswapper_128_fp16':
		facefusion.globals.face_recognizer_model = 'arcface_inswapper'
	if face_swapper_model == 'simswap_256' or face_swapper_model == 'simswap_512_unofficial':
		facefusion.globals.face_recognizer_model = 'arcface_simswap'
	face_swapper_module = load_frame_processor_module('face_swapper')
	face_swapper_module.clear_frame_processor()
	face_swapper_module.set_options('model', face_swapper_module.MODELS[face_swapper_model])
@@ -83,7 +101,7 @@ def update_face_swapper_model(face_swapper_model : str) -> gradio.Dropdown:
	return gradio.Dropdown(value = face_swapper_model)
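The new if-chain in `update_face_swapper_model` keeps `face_recognizer_model` in sync with the selected swapper, since each swapper family expects embeddings from a matching ArcFace variant. The same pairing expressed as a lookup table (an illustrative alternative, not the code the diff adds):

```python
# Swapper model -> recognizer model pairing from the diff above,
# restated as a dict lookup (illustrative refactor only).
FACE_RECOGNIZER_BY_SWAPPER = {
	'blendface_256': 'arcface_blendface',
	'inswapper_128': 'arcface_inswapper',
	'inswapper_128_fp16': 'arcface_inswapper',
	'simswap_256': 'arcface_simswap',
	'simswap_512_unofficial': 'arcface_simswap'
}

def resolve_face_recognizer_model(face_swapper_model : str) -> str:
	return FACE_RECOGNIZER_BY_SWAPPER[face_swapper_model]
```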

-def update_face_enhancer_model(face_enhancer_model : str) -> gradio.Dropdown:
+def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
	frame_processors_globals.face_enhancer_model = face_enhancer_model
	face_enhancer_module = load_frame_processor_module('face_enhancer')
	face_enhancer_module.clear_frame_processor()
@@ -97,7 +115,7 @@ def update_face_enhancer_blend(face_enhancer_blend : int) -> None:
	frame_processors_globals.face_enhancer_blend = face_enhancer_blend


-def update_frame_enhancer_model(frame_enhancer_model : str) -> gradio.Dropdown:
+def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gradio.Dropdown:
	frame_processors_globals.frame_enhancer_model = frame_enhancer_model
	frame_enhancer_module = load_frame_processor_module('frame_enhancer')
	frame_enhancer_module.clear_frame_processor()
@@ -111,8 +129,13 @@ def update_frame_enhancer_blend(frame_enhancer_blend : int) -> None:
	frame_processors_globals.frame_enhancer_blend = frame_enhancer_blend


-def toggle_face_swapper_model(frame_processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
+def update_face_debugger_items(face_debugger_items : List[FaceDebuggerItem]) -> None:
+	frame_processors_globals.face_debugger_items = face_debugger_items
+
+
+def toggle_face_swapper_model(frame_processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider, gradio.CheckboxGroup]:
	has_face_swapper = 'face_swapper' in frame_processors
	has_face_enhancer = 'face_enhancer' in frame_processors
	has_frame_enhancer = 'frame_enhancer' in frame_processors
-	return gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer)
+	has_face_debugger = 'face_debugger' in frame_processors
+	return gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer), gradio.CheckboxGroup(visible = has_face_debugger)
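`toggle_face_swapper_model` now returns one component update per widget, with visibility driven by membership tests on the active frame processors. Stripped of gradio, the logic reduces to the following (plain booleans stand in for gradio component updates):

```python
# Visibility per component, keyed the way the diff wires the outputs.
def component_visibility(frame_processors):
	has_face_swapper = 'face_swapper' in frame_processors
	has_face_enhancer = 'face_enhancer' in frame_processors
	has_frame_enhancer = 'frame_enhancer' in frame_processors
	has_face_debugger = 'face_debugger' in frame_processors
	return {
		'face_swapper_model_dropdown': has_face_swapper,
		'face_enhancer_model_dropdown': has_face_enhancer,
		'face_enhancer_blend_slider': has_face_enhancer,
		'frame_enhancer_model_dropdown': has_frame_enhancer,
		'frame_enhancer_blend_slider': has_frame_enhancer,
		'face_debugger_items_checkbox_group': has_face_debugger
	}
```

Because the outputs list and the returned tuple must stay in the same order, adding the debugger checkbox group requires touching both the `.change(...)` wiring and the return value, which is exactly what the two paired hunks above do.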

@@ -2,6 +2,7 @@ from typing import Optional
import gradio

import facefusion.globals
+import facefusion.choices
from facefusion import wording

MAX_MEMORY_SLIDER : Optional[gradio.Slider] = None
@@ -12,9 +13,9 @@ def render() -> None:

	MAX_MEMORY_SLIDER = gradio.Slider(
		label = wording.get('max_memory_slider_label'),
-		step = 1,
-		minimum = 0,
-		maximum = 128
+		step = facefusion.choices.max_memory_range[1] - facefusion.choices.max_memory_range[0],
+		minimum = facefusion.choices.max_memory_range[0],
+		maximum = facefusion.choices.max_memory_range[-1]
	)
@@ -2,8 +2,8 @@ from typing import Optional, Tuple, List
import tempfile
import gradio

+import facefusion.choices
import facefusion.globals
-import facefusion.choices
from facefusion import wording
from facefusion.typing import OutputVideoEncoder
from facefusion.utilities import is_image, is_video
@@ -30,9 +30,9 @@ def render() -> None:
	OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_image_quality_slider_label'),
		value = facefusion.globals.output_image_quality,
-		step = 1,
-		minimum = 0,
-		maximum = 100,
+		step = facefusion.choices.output_image_quality_range[1] - facefusion.choices.output_image_quality_range[0],
+		minimum = facefusion.choices.output_image_quality_range[0],
+		maximum = facefusion.choices.output_image_quality_range[-1],
		visible = is_image(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
@@ -44,9 +44,9 @@ def render() -> None:
	OUTPUT_VIDEO_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_video_quality_slider_label'),
		value = facefusion.globals.output_video_quality,
-		step = 1,
-		minimum = 0,
-		maximum = 100,
+		step = facefusion.choices.output_video_quality_range[1] - facefusion.choices.output_video_quality_range[0],
+		minimum = facefusion.choices.output_video_quality_range[0],
+		maximum = facefusion.choices.output_video_quality_range[-1],
		visible = is_video(facefusion.globals.target_path)
	)
	register_ui_component('output_path_textbox', OUTPUT_PATH_TEXTBOX)
facefusion/uis/components/preview.py (Normal file → Executable file)
@@ -4,11 +4,13 @@ import gradio

import facefusion.globals
from facefusion import wording
-from facefusion.core import conditional_set_face_reference
+from facefusion.face_cache import clear_faces_cache
from facefusion.typing import Frame, Face
from facefusion.vision import get_video_frame, count_video_frame_total, normalize_frame_color, resize_frame_dimension, read_static_image
-from facefusion.face_analyser import get_one_face
-from facefusion.face_reference import get_face_reference, set_face_reference
-from facefusion.predictor import predict_frame
+from facefusion.face_analyser import get_one_face, clear_face_analyser
+from facefusion.face_reference import get_face_reference, clear_face_reference
+from facefusion.content_analyser import analyse_frame
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.utilities import is_video, is_image
from facefusion.uis.typing import ComponentName
@@ -37,7 +39,7 @@ def render() -> None:
	}
	conditional_set_face_reference()
	source_face = get_one_face(read_static_image(facefusion.globals.source_path))
-	reference_face = get_face_reference() if 'reference' in facefusion.globals.face_recognition else None
+	reference_face = get_face_reference() if 'reference' in facefusion.globals.face_selector_mode else None
	if is_image(facefusion.globals.target_path):
		target_frame = read_static_image(facefusion.globals.target_path)
		preview_frame = process_preview_frame(source_face, reference_face, target_frame)
@@ -57,34 +59,31 @@ def render() -> None:


def listen() -> None:
	PREVIEW_FRAME_SLIDER.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
-	multi_component_names : List[ComponentName] =\
+	multi_one_component_names : List[ComponentName] =\
	[
		'source_image',
		'target_image',
		'target_video'
	]
-	for component_name in multi_component_names:
+	for component_name in multi_one_component_names:
		component = get_ui_component(component_name)
		if component:
			for method in [ 'upload', 'change', 'clear' ]:
				getattr(component, method)(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
-				getattr(component, method)(update_preview_frame_slider, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_FRAME_SLIDER)
-	update_component_names : List[ComponentName] =\
+	multi_two_component_names : List[ComponentName] =\
	[
-		'face_recognition_dropdown',
-		'frame_processors_checkbox_group',
-		'face_swapper_model_dropdown',
-		'face_enhancer_model_dropdown',
-		'frame_enhancer_model_dropdown'
+		'target_image',
+		'target_video'
	]
-	for component_name in update_component_names:
+	for component_name in multi_two_component_names:
		component = get_ui_component(component_name)
		if component:
-			component.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
+			for method in [ 'upload', 'change', 'clear' ]:
+				getattr(component, method)(update_preview_frame_slider, outputs = PREVIEW_FRAME_SLIDER)
	select_component_names : List[ComponentName] =\
	[
		'reference_face_position_gallery',
-		'face_analyser_direction_dropdown',
+		'face_analyser_order_dropdown',
		'face_analyser_age_dropdown',
		'face_analyser_gender_dropdown'
	]
@@ -92,49 +91,73 @@ def listen() -> None:
		component = get_ui_component(component_name)
		if component:
			component.select(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
-	change_component_names : List[ComponentName] =\
+	change_one_component_names : List[ComponentName] =\
	[
-		'reference_face_distance_slider',
+		'frame_processors_checkbox_group',
+		'face_debugger_items_checkbox_group',
+		'face_enhancer_model_dropdown',
		'face_enhancer_blend_slider',
-		'frame_enhancer_blend_slider'
+		'frame_enhancer_model_dropdown',
+		'frame_enhancer_blend_slider',
+		'face_selector_mode_dropdown',
+		'reference_face_distance_slider',
+		'face_mask_blur_slider',
+		'face_mask_padding_top_slider',
+		'face_mask_padding_bottom_slider',
+		'face_mask_padding_left_slider',
+		'face_mask_padding_right_slider'
	]
-	for component_name in change_component_names:
+	for component_name in change_one_component_names:
		component = get_ui_component(component_name)
		if component:
			component.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
+	change_two_component_names : List[ComponentName] =\
+	[
+		'face_swapper_model_dropdown',
+		'face_detector_model_dropdown',
+		'face_detector_size_dropdown',
+		'face_detector_score_slider'
+	]
+	for component_name in change_two_component_names:
+		component = get_ui_component(component_name)
+		if component:
+			component.change(clear_and_update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)

+def clear_and_update_preview_image(frame_number : int = 0) -> gradio.Image:
+	clear_face_analyser()
+	clear_face_reference()
+	clear_faces_cache()
+	return update_preview_image(frame_number)


def update_preview_image(frame_number : int = 0) -> gradio.Image:
	conditional_set_face_reference()
	source_face = get_one_face(read_static_image(facefusion.globals.source_path))
-	reference_face = get_face_reference() if 'reference' in facefusion.globals.face_recognition else None
+	reference_face = get_face_reference() if 'reference' in facefusion.globals.face_selector_mode else None
	if is_image(facefusion.globals.target_path):
		target_frame = read_static_image(facefusion.globals.target_path)
		preview_frame = process_preview_frame(source_face, reference_face, target_frame)
		preview_frame = normalize_frame_color(preview_frame)
		return gradio.Image(value = preview_frame)
	if is_video(facefusion.globals.target_path):
-		facefusion.globals.reference_frame_number = frame_number
-		temp_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
+		temp_frame = get_video_frame(facefusion.globals.target_path, frame_number)
		preview_frame = process_preview_frame(source_face, reference_face, temp_frame)
		preview_frame = normalize_frame_color(preview_frame)
		return gradio.Image(value = preview_frame)
	return gradio.Image(value = None)


-def update_preview_frame_slider(frame_number : int = 0) -> gradio.Slider:
-	if is_image(facefusion.globals.target_path):
-		return gradio.Slider(value = None, maximum = None, visible = False)
+def update_preview_frame_slider() -> gradio.Slider:
	if is_video(facefusion.globals.target_path):
-		facefusion.globals.reference_frame_number = frame_number
		video_frame_total = count_video_frame_total(facefusion.globals.target_path)
		return gradio.Slider(maximum = video_frame_total, visible = True)
-	return gradio.Slider()
+	return gradio.Slider(value = None, maximum = None, visible = False)


def process_preview_frame(source_face : Face, reference_face : Face, temp_frame : Frame) -> Frame:
	temp_frame = resize_frame_dimension(temp_frame, 640, 640)
-	if predict_frame(temp_frame):
+	if analyse_frame(temp_frame):
		return cv2.GaussianBlur(temp_frame, (99, 99), 0)
	for frame_processor in facefusion.globals.frame_processors:
		frame_processor_module = load_frame_processor_module(frame_processor)
@@ -145,10 +168,3 @@ def process_preview_frame(source_face : Face, reference_face : Face, temp_frame
			temp_frame
		)
	return temp_frame


-def conditional_set_face_reference() -> None:
-	if 'reference' in facefusion.globals.face_recognition and not get_face_reference():
-		reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
-		reference_face = get_one_face(reference_frame, facefusion.globals.reference_face_position)
-		set_face_reference(reference_face)
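The removed `conditional_set_face_reference` is a lazy-initialisation guard: the reference face is only detected when reference mode is active and nothing is cached yet, so repeated preview updates reuse a single detection. The pattern in isolation (`detect_face` is a hypothetical stand-in for the `get_one_face` call on the reference frame):

```python
# Lazy face-reference cache, mirroring the guard in the removed function.
_FACE_REFERENCE = None

def get_face_reference():
	return _FACE_REFERENCE

def set_face_reference(face) -> None:
	global _FACE_REFERENCE
	_FACE_REFERENCE = face

def conditional_set_face_reference(face_selector_mode : str, detect_face) -> None:
	# only pay for detection once, and only in reference mode
	if 'reference' in face_selector_mode and not get_face_reference():
		set_face_reference(detect_face())
```

The flip side of caching is staleness, which is why the new `clear_and_update_preview_image` explicitly clears the analyser, the reference, and the faces cache whenever the detector or swapper model changes.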

@@ -3,6 +3,7 @@ import gradio

import facefusion.globals
from facefusion import wording
+from facefusion.face_cache import clear_faces_cache
from facefusion.face_reference import clear_face_reference
from facefusion.utilities import is_image, is_video
from facefusion.uis.core import register_ui_component
@@ -51,6 +52,7 @@ def listen() -> None:

def update(file : IO[Any]) -> Tuple[gradio.Image, gradio.Video]:
	clear_face_reference()
+	clear_faces_cache()
	if file and is_image(file.name):
		facefusion.globals.target_path = file.name
		return gradio.Image(value = file.name, visible = True), gradio.Video(value = None, visible = False)
@@ -1,8 +1,8 @@
from typing import Optional, Tuple
import gradio

+import facefusion.choices
import facefusion.globals
-import facefusion.choices
from facefusion import wording
from facefusion.typing import TempFrameFormat
from facefusion.utilities import is_video
@@ -25,9 +25,9 @@ def render() -> None:
	TEMP_FRAME_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('temp_frame_quality_slider_label'),
		value = facefusion.globals.temp_frame_quality,
-		step = 1,
-		minimum = 0,
-		maximum = 100,
+		step = facefusion.choices.temp_frame_quality_range[1] - facefusion.choices.temp_frame_quality_range[0],
+		minimum = facefusion.choices.temp_frame_quality_range[0],
+		maximum = facefusion.choices.temp_frame_quality_range[-1],
		visible = is_video(facefusion.globals.target_path)
	)
@@ -39,8 +39,9 @@ def render() -> None:
		trim_frame_end_slider_args['value'] = facefusion.globals.trim_frame_end or video_frame_total
		trim_frame_end_slider_args['maximum'] = video_frame_total
		trim_frame_end_slider_args['visible'] = True
-	TRIM_FRAME_START_SLIDER = gradio.Slider(**trim_frame_start_slider_args)
-	TRIM_FRAME_END_SLIDER = gradio.Slider(**trim_frame_end_slider_args)
+	with gradio.Row():
+		TRIM_FRAME_START_SLIDER = gradio.Slider(**trim_frame_start_slider_args)
+		TRIM_FRAME_END_SLIDER = gradio.Slider(**trim_frame_end_slider_args)


def listen() -> None:
@@ -10,7 +10,7 @@ from tqdm import tqdm

import facefusion.globals
from facefusion import wording
-from facefusion.predictor import predict_stream
+from facefusion.content_analyser import analyse_stream
from facefusion.typing import Frame, Face
from facefusion.face_analyser import get_one_face
from facefusion.processors.frame.core import get_frame_processors_modules
@@ -19,11 +19,33 @@ from facefusion.vision import normalize_frame_color, read_static_image
from facefusion.uis.typing import StreamMode, WebcamMode
from facefusion.uis.core import get_ui_component

+WEBCAM_CAPTURE : Optional[cv2.VideoCapture] = None
WEBCAM_IMAGE : Optional[gradio.Image] = None
WEBCAM_START_BUTTON : Optional[gradio.Button] = None
WEBCAM_STOP_BUTTON : Optional[gradio.Button] = None


+def get_webcam_capture() -> Optional[cv2.VideoCapture]:
+	global WEBCAM_CAPTURE
+
+	if WEBCAM_CAPTURE is None:
+		if platform.system().lower() == 'windows':
+			webcam_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
+		else:
+			webcam_capture = cv2.VideoCapture(0)
+		if webcam_capture and webcam_capture.isOpened():
+			WEBCAM_CAPTURE = webcam_capture
+	return WEBCAM_CAPTURE
+
+
+def clear_webcam_capture() -> None:
+	global WEBCAM_CAPTURE
+
+	if WEBCAM_CAPTURE:
+		WEBCAM_CAPTURE.release()
+		WEBCAM_CAPTURE = None


def render() -> None:
	global WEBCAM_IMAGE
	global WEBCAM_START_BUTTON
@@ -50,9 +72,6 @@ def listen() -> None:
	webcam_fps_slider = get_ui_component('webcam_fps_slider')
	if webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
		start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
-		webcam_mode_radio.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
-		webcam_resolution_dropdown.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
-		webcam_fps_slider.change(stop, outputs = WEBCAM_IMAGE, cancels = start_event)
		WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)
	source_image = get_ui_component('source_image')
	if source_image:
@@ -60,57 +79,53 @@ def listen() -> None:
			getattr(source_image, method)(stop, cancels = start_event)

-def start(mode: WebcamMode, resolution: str, fps: float) -> Generator[Frame, None, None]:
-	facefusion.globals.face_recognition = 'many'
+def start(mode : WebcamMode, resolution : str, fps : float) -> Generator[Frame, None, None]:
+	facefusion.globals.face_selector_mode = 'one'
+	facefusion.globals.face_analyser_order = 'large-small'
	source_face = get_one_face(read_static_image(facefusion.globals.source_path))
	stream = None
	if mode in [ 'udp', 'v4l2' ]:
		stream = open_stream(mode, resolution, fps) # type: ignore[arg-type]
-	capture = capture_webcam(resolution, fps)
-	if capture.isOpened():
-		for capture_frame in multi_process_capture(source_face, capture):
-			if stream is not None:
+	webcam_width, webcam_height = map(int, resolution.split('x'))
+	webcam_capture = get_webcam_capture()
+	if webcam_capture and webcam_capture.isOpened():
+		webcam_capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) # type: ignore[attr-defined]
+		webcam_capture.set(cv2.CAP_PROP_FRAME_WIDTH, webcam_width)
+		webcam_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, webcam_height)
+		webcam_capture.set(cv2.CAP_PROP_FPS, fps)
+		for capture_frame in multi_process_capture(source_face, webcam_capture, fps):
			if mode == 'inline':
				yield normalize_frame_color(capture_frame)
			else:
				stream.stdin.write(capture_frame.tobytes())
				yield normalize_frame_color(capture_frame)
	yield None
-def multi_process_capture(source_face: Face, capture : cv2.VideoCapture) -> Generator[Frame, None, None]:
-	progress = tqdm(desc = wording.get('processing'), unit = 'frame', dynamic_ncols = True)
-	with ThreadPoolExecutor(max_workers = facefusion.globals.execution_thread_count) as executor:
-		futures = []
-		deque_capture_frames : Deque[Frame] = deque()
-		while True:
-			_, capture_frame = capture.read()
-			if predict_stream(capture_frame):
-				return
-			future = executor.submit(process_stream_frame, source_face, capture_frame)
-			futures.append(future)
-			for future_done in [ future for future in futures if future.done() ]:
-				capture_frame = future_done.result()
-				deque_capture_frames.append(capture_frame)
-				futures.remove(future_done)
-			while deque_capture_frames:
-				yield deque_capture_frames.popleft()
-				progress.update()
+def multi_process_capture(source_face : Face, webcam_capture : cv2.VideoCapture, fps : float) -> Generator[Frame, None, None]:
+	with tqdm(desc = wording.get('processing'), unit = 'frame', ascii = ' =') as progress:
+		with ThreadPoolExecutor(max_workers = facefusion.globals.execution_thread_count) as executor:
+			futures = []
+			deque_capture_frames : Deque[Frame] = deque()
+			while webcam_capture and webcam_capture.isOpened():
+				_, capture_frame = webcam_capture.read()
+				if analyse_stream(capture_frame, fps):
+					return
+				future = executor.submit(process_stream_frame, source_face, capture_frame)
+				futures.append(future)
+				for future_done in [ future for future in futures if future.done() ]:
+					capture_frame = future_done.result()
+					deque_capture_frames.append(capture_frame)
+					futures.remove(future_done)
+				while deque_capture_frames:
+					progress.update()
+					yield deque_capture_frames.popleft()
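The rewritten `multi_process_capture` overlaps frame processing with capture: each frame is submitted to a thread pool, finished futures are drained into a deque, and results are yielded as they complete. A self-contained sketch of the same pipeline over an arbitrary iterable (note that, as in the loop above, completion order is not guaranteed to match submission order under load):

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def pipeline(frames, process, max_workers = 4):
	# submit every frame to the pool, draining completed results as we go
	with ThreadPoolExecutor(max_workers = max_workers) as executor:
		futures = []
		done_frames = deque()
		for frame in frames:
			futures.append(executor.submit(process, frame))
			for future in [ future for future in futures if future.done() ]:
				done_frames.append(future.result())
				futures.remove(future)
			while done_frames:
				yield done_frames.popleft()
		# the source is exhausted; wait for whatever is still in flight
		for future in futures:
			yield future.result()

print(sorted(pipeline(range(5), lambda frame: frame * 2)))  # [0, 2, 4, 6, 8]
```

The benefit over a sequential loop is that a slow `process` call no longer stalls capture; the cost is the looser ordering guarantee, which is acceptable for a live webcam preview.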


def stop() -> gradio.Image:
+	clear_webcam_capture()
	return gradio.Image(value = None)


-def capture_webcam(resolution : str, fps : float) -> cv2.VideoCapture:
-	width, height = resolution.split('x')
-	if platform.system().lower() == 'windows':
-		capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
-	else:
-		capture = cv2.VideoCapture(0)
-	capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) # type: ignore[attr-defined]
-	capture.set(cv2.CAP_PROP_FRAME_WIDTH, int(width))
-	capture.set(cv2.CAP_PROP_FRAME_HEIGHT, int(height))
-	capture.set(cv2.CAP_PROP_FPS, fps)
-	return capture


def process_stream_frame(source_face : Face, temp_frame : Frame) -> Frame:
	for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
		if frame_processor_module.pre_process('stream'):
@@ -2,7 +2,7 @@ from typing import Optional
import gradio

from facefusion import wording
-from facefusion.uis import choices
+from facefusion.uis import choices as uis_choices
from facefusion.uis.core import register_ui_component

WEBCAM_MODE_RADIO : Optional[gradio.Radio] = None
@@ -17,13 +17,13 @@ def render() -> None:

	WEBCAM_MODE_RADIO = gradio.Radio(
		label = wording.get('webcam_mode_radio_label'),
-		choices = choices.webcam_modes,
+		choices = uis_choices.webcam_modes,
		value = 'inline'
	)
	WEBCAM_RESOLUTION_DROPDOWN = gradio.Dropdown(
		label = wording.get('webcam_resolution_dropdown'),
-		choices = choices.webcam_resolutions,
-		value = choices.webcam_resolutions[0]
+		choices = uis_choices.webcam_resolutions,
+		value = uis_choices.webcam_resolutions[0]
	)
	WEBCAM_FPS_SLIDER = gradio.Slider(
		label = wording.get('webcam_fps_slider'),