Next (#384)

* feat/yoloface (#334)
* added yolov8 to face_detector (#323)
* added yolov8 to face_detector
* added yolov8 to face_detector
* Initial cleanup and renaming
* Update README
* refactored detect_with_yoloface (#329)
* refactored detect_with_yoloface
* apply review
* Change order again
* Restore working code
* modified code (#330)
* refactored detect_with_yoloface
* apply review
* use temp_frame in detect_with_yoloface
* reorder
* modified
* reorder models
* Tiny cleanup

---------

Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>

* include audio file functions (#336)
* Add testing for audio handlers
* Change order
* Fix naming
* Use correct typing in choices
* Update help message for arguments, Notation based wording approach (#347)
* Update help message for arguments, Notation based wording approach
* Fix installer
* Audio functions (#345)
* Update ffmpeg.py
* Create audio.py
* Update ffmpeg.py
* Update audio.py
* Update audio.py
* Update typing.py
* Update ffmpeg.py
* Update audio.py
* Rename Frame to VisionFrame (#346)
* Minor tidy up
* Introduce audio testing
* Add more todo for testing
* Add more todo for testing
* Fix indent
* Enable venv on the fly
* Enable venv on the fly
* Revert venv on the fly
* Revert venv on the fly
* Force Gradio to shut up
* Force Gradio to shut up
* Clear temp before processing
* Reduce terminal output
* include audio file functions
* Enforce output resolution on merge video
* Minor cleanups
* Add age and gender to face debugger items (#353)
* Add age and gender to face debugger items
* Rename like suggested in the code review
* Fix the output framerate vs. time
* Lip Sync (#356)
* Cli implementation of wav2lip
* - create get_first_item() - remove non gan wav2lip model - implement video memory strategy - implement get_reference_frame() - implement process_image() - rearrange crop_mask_list - implement test_cli
* Simplify testing
* Rename to lip syncer
* Fix testing
* Fix testing
* Minor cleanup
* Cuda 12 installer (#362)
* Make cuda nightly (12) the default
* Better keep legacy cuda just in case
* Use CUDA and ROCM versions
* Remove MacOS options from installer (CoreML include in default package)
* Add lip-syncer support to source component
* Add lip-syncer support to source component
* Fix the check in the source component
* Add target image check
* Introduce more helpers to suite the lip-syncer needs
* Downgrade onnxruntime as of buggy 1.17.0 release
* Revert "Downgrade onnxruntime as of buggy 1.17.0 release" This reverts commit f4a7ae6824fed87f0be50906bbc7e2d61d00617b.
* More testing and add todos
* Fix the frame processor API to at least not throw errors
* Introduce dict based frame processor inputs (#364)
* Introduce dict based frame processor inputs
* Forgot to adjust webcam
* create path payloads (#365)
* create index payload to paths for process_frames
* rename to payload_paths
* This code now is poetry
* Fix the terminal output
* Make lip-syncer work in the preview
* Remove face debugger test for now
* Reoder reference_faces, Fix testing
* Use inswapper_128 on buggy onnxruntime 1.17.0
* Undo inswapper_128_fp16 duo broken onnxruntime 1.17.0
* Undo inswapper_128_fp16 duo broken onnxruntime 1.17.0
* Fix lip_syncer occluder & region mask issue
* Fix preview once in case there was no output video fps
* fix lip_syncer custom fps
* remove unused import
* Add 68 landmark functions (#367)
* Add 68 landmark model
* Add landmark to face object
* Re-arrange and modify typing
* Rename function
* Rearrange
* Rearrange
* ignore type
* ignore type
* change type
* ignore
* name
* Some cleanup
* Some cleanup
* Opps, I broke something
* Feat/face analyser refactoring (#369)
* Restructure face analyser and start TDD
* YoloFace and Yunet testing are passing
* Remove offset from yoloface detection
* Cleanup code
* Tiny fix
* Fix get_many_faces()
* Tiny fix (again)
* Use 320x320 fallback for retinaface
* Fix merging mashup
* Upload wave2lip model
* Upload 2dfan2 model and rename internal to face_predictor
* Downgrade onnxruntime for most cases
* Update for the face debugger to render landmark 68
* Try to make detect_face_landmark_68() and detect_gender_age() more uniform
* Enable retinaface testing for 320x320
* Make detect_face_landmark_68() and detect_gender_age() as uniform as … (#370)
* Make detect_face_landmark_68() and detect_gender_age() as uniform as possible
* Revert landmark scale and translation
* Make box-mask for lip-syncer adjustable
* Add create_bbox_from_landmark()
* Remove currently unused code
* Feat/uniface (#375)
* add uniface (#373)
* Finalize UniFace implementation

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* My approach how todo it
* edit
* edit
* replace vertical blur with gaussian
* remove region mask
* Rebase against next and restore method
* Minor improvements
* Minor improvements
* rename & add forehead padding
* Adjust and host uniface model
* Use 2dfan4 model
* Rename to face landmarker
* Feat/replace bbox with bounding box (#380)
* Add landmark 68 to 5 convertion
* Add landmark 68 to 5 convertion
* Keep 5, 5/68 and 68 landmarks
* Replace kps with landmark
* Replace bbox with bounding box
* Reshape face_landmark5_list different
* Make yoloface the default
* Move convert_face_landmark_68_to_5 to face_helper
* Minor spacing issue
* Dynamic detector sizes according to model (#382)
* Dynamic detector sizes according to model
* Dynamic detector sizes according to model
* Undo false commited files
* Add lib syncer model to the UI
* fix halo (#383)
* Bump to 2.3.0
* Update README and wording
* Update README and wording
* Fix spacing
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix
* Apply _vision suffix, Move mouth mask to face_masker.py
* Apply _vision suffix
* Apply _vision suffix
* increase forehead padding

---------

Co-authored-by: tamoharu <133945583+tamoharu@users.noreply.github.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
@@ -17,7 +17,7 @@ def render() -> None:
 		link = metadata.get('url')
 	)
 	DONATE_BUTTON = gradio.Button(
-		value = wording.get('donate_button_label'),
+		value = wording.get('uis.donate_button'),
 		link = 'https://donate.facefusion.io',
 		size = 'sm'
 	)
@@ -36,7 +36,7 @@ def render() -> None:
 	global BENCHMARK_CLEAR_BUTTON
 
 	BENCHMARK_RESULTS_DATAFRAME = gradio.Dataframe(
-		label = wording.get('benchmark_results_dataframe_label'),
+		label = wording.get('uis.benchmark_results_dataframe'),
 		headers =
 		[
 			'target_path',
@@ -57,12 +57,12 @@ def render() -> None:
 		]
 	)
 	BENCHMARK_START_BUTTON = gradio.Button(
-		value = wording.get('start_button_label'),
+		value = wording.get('uis.start_button'),
 		variant = 'primary',
 		size = 'sm'
 	)
 	BENCHMARK_CLEAR_BUTTON = gradio.Button(
-		value = wording.get('clear_button_label'),
+		value = wording.get('uis.clear_button'),
 		size = 'sm'
 	)
 
@@ -14,12 +14,12 @@ def render() -> None:
 	global BENCHMARK_CYCLES_SLIDER
 
 	BENCHMARK_RUNS_CHECKBOX_GROUP = gradio.CheckboxGroup(
-		label = wording.get('benchmark_runs_checkbox_group_label'),
+		label = wording.get('uis.benchmark_runs_checkbox_group'),
 		value = list(BENCHMARKS.keys()),
 		choices = list(BENCHMARKS.keys())
 	)
 	BENCHMARK_CYCLES_SLIDER = gradio.Slider(
-		label = wording.get('benchmark_cycles_slider_label'),
+		label = wording.get('uis.benchmark_cycles_slider'),
 		value = 5,
 		step = 1,
 		minimum = 1,
@@ -19,7 +19,7 @@ def render() -> None:
 	if facefusion.globals.skip_download:
 		value.append('skip-download')
 	COMMON_OPTIONS_CHECKBOX_GROUP = gradio.Checkboxgroup(
-		label = wording.get('common_options_checkbox_group_label'),
+		label = wording.get('uis.common_options_checkbox_group'),
 		choices = uis_choices.common_options,
 		value = value
 	)
@@ -15,7 +15,7 @@ def render() -> None:
 	global EXECUTION_PROVIDERS_CHECKBOX_GROUP
 
 	EXECUTION_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
-		label = wording.get('execution_providers_checkbox_group_label'),
+		label = wording.get('uis.execution_providers_checkbox_group'),
 		choices = encode_execution_providers(onnxruntime.get_available_providers()),
 		value = encode_execution_providers(facefusion.globals.execution_providers)
 	)
@@ -12,7 +12,7 @@ def render() -> None:
 	global EXECUTION_QUEUE_COUNT_SLIDER
 
 	EXECUTION_QUEUE_COUNT_SLIDER = gradio.Slider(
-		label = wording.get('execution_queue_count_slider_label'),
+		label = wording.get('uis.execution_queue_count_slider'),
 		value = facefusion.globals.execution_queue_count,
 		step = facefusion.choices.execution_queue_count_range[1] - facefusion.choices.execution_queue_count_range[0],
 		minimum = facefusion.choices.execution_queue_count_range[0],
@@ -12,7 +12,7 @@ def render() -> None:
 	global EXECUTION_THREAD_COUNT_SLIDER
 
 	EXECUTION_THREAD_COUNT_SLIDER = gradio.Slider(
-		label = wording.get('execution_thread_count_slider_label'),
+		label = wording.get('uis.execution_thread_count_slider'),
 		value = facefusion.globals.execution_thread_count,
 		step = facefusion.choices.execution_thread_count_range[1] - facefusion.choices.execution_thread_count_range[0],
 		minimum = facefusion.choices.execution_thread_count_range[0],
@@ -1,4 +1,4 @@
-from typing import Optional
+from typing import Optional, Dict, Any
 
 import gradio
 
@@ -24,34 +24,37 @@ def render() -> None:
 	global FACE_DETECTOR_SCORE_SLIDER
 	global FACE_DETECTOR_MODEL_DROPDOWN
 
+	face_detector_size_dropdown_args : Dict[str, Any] =\
+	{
+		'label': wording.get('uis.face_detector_size_dropdown'),
+		'value': facefusion.globals.face_detector_size
+	}
+	if facefusion.globals.face_detector_size in facefusion.choices.face_detector_set[facefusion.globals.face_detector_model]:
+		face_detector_size_dropdown_args['choices'] = facefusion.choices.face_detector_set[facefusion.globals.face_detector_model]
 	with gradio.Row():
 		FACE_ANALYSER_ORDER_DROPDOWN = gradio.Dropdown(
-			label = wording.get('face_analyser_order_dropdown_label'),
+			label = wording.get('uis.face_analyser_order_dropdown'),
 			choices = facefusion.choices.face_analyser_orders,
 			value = facefusion.globals.face_analyser_order
 		)
 		FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown(
-			label = wording.get('face_analyser_age_dropdown_label'),
+			label = wording.get('uis.face_analyser_age_dropdown'),
 			choices = [ 'none' ] + facefusion.choices.face_analyser_ages,
 			value = facefusion.globals.face_analyser_age or 'none'
 		)
 		FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown(
-			label = wording.get('face_analyser_gender_dropdown_label'),
+			label = wording.get('uis.face_analyser_gender_dropdown'),
 			choices = [ 'none' ] + facefusion.choices.face_analyser_genders,
 			value = facefusion.globals.face_analyser_gender or 'none'
 		)
 		FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
-			label = wording.get('face_detector_model_dropdown_label'),
-			choices = facefusion.choices.face_detector_models,
+			label = wording.get('uis.face_detector_model_dropdown'),
+			choices = facefusion.choices.face_detector_set.keys(),
 			value = facefusion.globals.face_detector_model
 		)
-		FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(
-			label = wording.get('face_detector_size_dropdown_label'),
-			choices = facefusion.choices.face_detector_sizes,
-			value = facefusion.globals.face_detector_size
-		)
+		FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_args)
 		FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
-			label = wording.get('face_detector_score_slider_label'),
+			label = wording.get('uis.face_detector_score_slider'),
 			value = facefusion.globals.face_detector_score,
 			step = facefusion.choices.face_detector_score_range[1] - facefusion.choices.face_detector_score_range[0],
 			minimum = facefusion.choices.face_detector_score_range[0],
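The hunk above builds the size dropdown's keyword arguments as a dict, only adding `choices` when the stored size is valid for the current model, and then expands the dict with `**`. A minimal standalone sketch of that pattern, using a stand-in component class instead of `gradio.Dropdown` (the class and function names here are illustrative, not part of facefusion):

```python
# Sketch of the conditional-kwargs pattern from the diff above.
# Dropdown is a hypothetical stand-in for gradio.Dropdown.
class Dropdown:
	def __init__(self, label = None, value = None, choices = None):
		self.label = label
		self.value = value
		self.choices = choices


def build_size_dropdown(label, value, valid_choices):
	# Always pass label and value; only pass choices when the stored
	# value is actually one of the valid choices, so the component
	# falls back to its default choices otherwise.
	dropdown_args = { 'label': label, 'value': value }
	if value in valid_choices:
		dropdown_args['choices'] = valid_choices
	return Dropdown(**dropdown_args)
```

This keeps the component constructor call in one place while the argument set varies, which is the same reason the diff introduces `face_detector_size_dropdown_args`.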
@@ -69,7 +72,7 @@ def listen() -> None:
 	FACE_ANALYSER_ORDER_DROPDOWN.change(update_face_analyser_order, inputs = FACE_ANALYSER_ORDER_DROPDOWN)
 	FACE_ANALYSER_AGE_DROPDOWN.change(update_face_analyser_age, inputs = FACE_ANALYSER_AGE_DROPDOWN)
 	FACE_ANALYSER_GENDER_DROPDOWN.change(update_face_analyser_gender, inputs = FACE_ANALYSER_GENDER_DROPDOWN)
-	FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN)
+	FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN, outputs = FACE_DETECTOR_SIZE_DROPDOWN)
 	FACE_DETECTOR_SIZE_DROPDOWN.change(update_face_detector_size, inputs = FACE_DETECTOR_SIZE_DROPDOWN)
 	FACE_DETECTOR_SCORE_SLIDER.change(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)
@@ -86,8 +89,11 @@ def update_face_analyser_gender(face_analyser_gender : FaceAnalyserGender) -> No
 	facefusion.globals.face_analyser_gender = face_analyser_gender if face_analyser_gender != 'none' else None
 
 
-def update_face_detector_model(face_detector_model : FaceDetectorModel) -> None:
+def update_face_detector_model(face_detector_model : FaceDetectorModel) -> gradio.Dropdown:
 	facefusion.globals.face_detector_model = face_detector_model
+	if facefusion.globals.face_detector_size in facefusion.choices.face_detector_set[face_detector_model]:
+		return gradio.Dropdown(value = '640x640', choices = facefusion.choices.face_detector_set[face_detector_model])
+	return gradio.Dropdown(value = '640x640', choices = [ '640x640' ])
 
 
 def update_face_detector_size(face_detector_size : str) -> None:
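The decision logic of the new `update_face_detector_model()` above can be sketched without gradio: the handler always resets the size to `640x640` and only keeps the model's full size list as choices when the previously stored size is supported. A standalone version under that assumption (the detector set here is a small illustrative dict, not the real `facefusion.choices.face_detector_set`):

```python
# Illustrative subset of detector models and their supported sizes.
FACE_DETECTOR_SET = {
	'retinaface': ['320x320', '640x640'],
	'yoloface': ['640x640'],
}


def resolve_size_dropdown(face_detector_model: str, face_detector_size: str):
	# Mirrors update_face_detector_model(): the dropdown value is reset
	# to 640x640 either way; the choices stay model-specific only when
	# the stored size is valid for the newly selected model.
	sizes = FACE_DETECTOR_SET[face_detector_model]
	if face_detector_size in sizes:
		return '640x640', sizes
	return '640x640', ['640x640']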
@@ -32,13 +32,13 @@ def render() -> None:
 	has_box_mask = 'box' in facefusion.globals.face_mask_types
 	has_region_mask = 'region' in facefusion.globals.face_mask_types
 	FACE_MASK_TYPES_CHECKBOX_GROUP = gradio.CheckboxGroup(
-		label = wording.get('face_mask_types_checkbox_group_label'),
+		label = wording.get('uis.face_mask_types_checkbox_group'),
 		choices = facefusion.choices.face_mask_types,
 		value = facefusion.globals.face_mask_types
 	)
 	with gradio.Group(visible = has_box_mask) as FACE_MASK_BOX_GROUP:
 		FACE_MASK_BLUR_SLIDER = gradio.Slider(
-			label = wording.get('face_mask_blur_slider_label'),
+			label = wording.get('uis.face_mask_blur_slider'),
 			step = facefusion.choices.face_mask_blur_range[1] - facefusion.choices.face_mask_blur_range[0],
 			minimum = facefusion.choices.face_mask_blur_range[0],
 			maximum = facefusion.choices.face_mask_blur_range[-1],
@@ -46,14 +46,14 @@ def render() -> None:
 		)
 		with gradio.Row():
 			FACE_MASK_PADDING_TOP_SLIDER = gradio.Slider(
-				label = wording.get('face_mask_padding_top_slider_label'),
+				label = wording.get('uis.face_mask_padding_top_slider'),
 				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
 				value = facefusion.globals.face_mask_padding[0]
 			)
 			FACE_MASK_PADDING_RIGHT_SLIDER = gradio.Slider(
-				label = wording.get('face_mask_padding_right_slider_label'),
+				label = wording.get('uis.face_mask_padding_right_slider'),
 				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
@@ -61,14 +61,14 @@ def render() -> None:
 		)
 		with gradio.Row():
 			FACE_MASK_PADDING_BOTTOM_SLIDER = gradio.Slider(
-				label = wording.get('face_mask_padding_bottom_slider_label'),
+				label = wording.get('uis.face_mask_padding_bottom_slider'),
 				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
 				value = facefusion.globals.face_mask_padding[2]
 			)
 			FACE_MASK_PADDING_LEFT_SLIDER = gradio.Slider(
-				label = wording.get('face_mask_padding_left_slider_label'),
+				label = wording.get('uis.face_mask_padding_left_slider'),
 				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
@@ -76,7 +76,7 @@ def render() -> None:
 	)
 	with gradio.Row():
 		FACE_MASK_REGION_CHECKBOX_GROUP = gradio.CheckboxGroup(
-			label = wording.get('face_mask_region_checkbox_group_label'),
+			label = wording.get('uis.face_mask_region_checkbox_group'),
 			choices = facefusion.choices.face_mask_regions,
 			value = facefusion.globals.face_mask_regions,
 			visible = has_region_mask
@@ -9,7 +9,7 @@ from facefusion.face_store import clear_static_faces, clear_reference_faces
 from facefusion.vision import get_video_frame, read_static_image, normalize_frame_color
 from facefusion.filesystem import is_image, is_video
 from facefusion.face_analyser import get_many_faces
-from facefusion.typing import Frame, FaceSelectorMode
+from facefusion.typing import VisionFrame, FaceSelectorMode
 from facefusion.uis.core import get_ui_component, register_ui_component
 from facefusion.uis.typing import ComponentName
@@ -25,7 +25,7 @@ def render() -> None:
 
 	reference_face_gallery_args: Dict[str, Any] =\
 	{
-		'label': wording.get('reference_face_gallery_label'),
+		'label': wording.get('uis.reference_face_gallery'),
 		'object_fit': 'cover',
 		'columns': 8,
 		'allow_preview': False,
@@ -38,13 +38,13 @@ def render() -> None:
 		reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
 		reference_face_gallery_args['value'] = extract_gallery_frames(reference_frame)
 	FACE_SELECTOR_MODE_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_selector_mode_dropdown_label'),
+		label = wording.get('uis.face_selector_mode_dropdown'),
 		choices = facefusion.choices.face_selector_modes,
 		value = facefusion.globals.face_selector_mode
 	)
 	REFERENCE_FACE_POSITION_GALLERY = gradio.Gallery(**reference_face_gallery_args)
 	REFERENCE_FACE_DISTANCE_SLIDER = gradio.Slider(
-		label = wording.get('reference_face_distance_slider_label'),
+		label = wording.get('uis.reference_face_distance_slider'),
 		value = facefusion.globals.reference_face_distance,
 		step = facefusion.choices.reference_face_distance_range[1] - facefusion.choices.reference_face_distance_range[0],
 		minimum = facefusion.choices.reference_face_distance_range[0],
@@ -135,30 +135,31 @@ def clear_and_update_reference_position_gallery() -> gradio.Gallery:
 
 
 def update_reference_position_gallery() -> gradio.Gallery:
-	gallery_frames = []
+	gallery_vision_frames = []
 	if is_image(facefusion.globals.target_path):
-		reference_frame = read_static_image(facefusion.globals.target_path)
-		gallery_frames = extract_gallery_frames(reference_frame)
+		temp_vision_frame = read_static_image(facefusion.globals.target_path)
+		gallery_vision_frames = extract_gallery_frames(temp_vision_frame)
 	if is_video(facefusion.globals.target_path):
-		reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
-		gallery_frames = extract_gallery_frames(reference_frame)
-	if gallery_frames:
-		return gradio.Gallery(value = gallery_frames)
+		temp_vision_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
+		gallery_vision_frames = extract_gallery_frames(temp_vision_frame)
+	if gallery_vision_frames:
+		return gradio.Gallery(value = gallery_vision_frames)
 	return gradio.Gallery(value = None)
 
 
-def extract_gallery_frames(reference_frame : Frame) -> List[Frame]:
-	crop_frames = []
-	faces = get_many_faces(reference_frame)
+def extract_gallery_frames(temp_vision_frame : VisionFrame) -> List[VisionFrame]:
+	gallery_vision_frames = []
+	faces = get_many_faces(temp_vision_frame)
 
 	for face in faces:
-		start_x, start_y, end_x, end_y = map(int, face.bbox)
+		start_x, start_y, end_x, end_y = map(int, face.bounding_box)
 		padding_x = int((end_x - start_x) * 0.25)
 		padding_y = int((end_y - start_y) * 0.25)
 		start_x = max(0, start_x - padding_x)
 		start_y = max(0, start_y - padding_y)
 		end_x = max(0, end_x + padding_x)
 		end_y = max(0, end_y + padding_y)
-		crop_frame = reference_frame[start_y:end_y, start_x:end_x]
-		crop_frame = normalize_frame_color(crop_frame)
-		crop_frames.append(crop_frame)
-	return crop_frames
+		crop_vision_frame = temp_vision_frame[start_y:end_y, start_x:end_x]
+		crop_vision_frame = normalize_frame_color(crop_vision_frame)
+		gallery_vision_frames.append(crop_vision_frame)
+	return gallery_vision_frames
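The crop logic inside `extract_gallery_frames()` above pads each detected bounding box by 25 % per side, clamps the top-left corner at the image border, and slices the frame array. A standalone sketch of just that step, with a numpy array standing in for the `VisionFrame` type and the color normalization left out:

```python
import numpy as np


def crop_with_padding(vision_frame: np.ndarray, bounding_box) -> np.ndarray:
	# Pad the box by a quarter of its width/height on every side,
	# mirroring the loop body of extract_gallery_frames().
	start_x, start_y, end_x, end_y = map(int, bounding_box)
	padding_x = int((end_x - start_x) * 0.25)
	padding_y = int((end_y - start_y) * 0.25)
	start_x = max(0, start_x - padding_x)
	start_y = max(0, start_y - padding_y)
	end_x = max(0, end_x + padding_x)
	end_y = max(0, end_y + padding_y)
	# numpy slicing silently clips end indices past the frame edge,
	# which is why only the start coordinates need an explicit clamp.
	return vision_frame[start_y:end_y, start_x:end_x]
```

For a 20×20 box in the middle of a frame this yields a 30×30 crop (5 pixels of padding on each side).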
@@ -14,7 +14,7 @@ def render() -> None:
 	global FRAME_PROCESSORS_CHECKBOX_GROUP
 
 	FRAME_PROCESSORS_CHECKBOX_GROUP = gradio.CheckboxGroup(
-		label = wording.get('frame_processors_checkbox_group_label'),
+		label = wording.get('uis.frame_processors_checkbox_group'),
 		choices = sort_frame_processors(facefusion.globals.frame_processors),
 		value = facefusion.globals.frame_processors
 	)
@@ -5,84 +5,120 @@ import facefusion.globals
 from facefusion import wording
 from facefusion.processors.frame.core import load_frame_processor_module
 from facefusion.processors.frame import globals as frame_processors_globals, choices as frame_processors_choices
-from facefusion.processors.frame.typings import FaceSwapperModel, FaceEnhancerModel, FrameEnhancerModel, FaceDebuggerItem
+from facefusion.processors.frame.typings import FaceDebuggerItem, FaceEnhancerModel, FaceSwapperModel, FrameEnhancerModel, LipSyncerModel
 from facefusion.uis.core import get_ui_component, register_ui_component
 
-FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
 FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
+FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
 FRAME_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
 FRAME_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
-FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
+LIP_SYNCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
 
 
 def render() -> None:
-	global FACE_SWAPPER_MODEL_DROPDOWN
+	global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
 	global FACE_ENHANCER_MODEL_DROPDOWN
 	global FACE_ENHANCER_BLEND_SLIDER
+	global FACE_SWAPPER_MODEL_DROPDOWN
 	global FRAME_ENHANCER_MODEL_DROPDOWN
 	global FRAME_ENHANCER_BLEND_SLIDER
-	global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
+	global LIP_SYNCER_MODEL_DROPDOWN
 
-	FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_swapper_model_dropdown_label'),
-		choices = frame_processors_choices.face_swapper_models,
-		value = frame_processors_globals.face_swapper_model,
-		visible = 'face_swapper' in facefusion.globals.frame_processors
+	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
+		label = wording.get('uis.face_debugger_items_checkbox_group'),
+		choices = frame_processors_choices.face_debugger_items,
+		value = frame_processors_globals.face_debugger_items,
+		visible = 'face_debugger' in facefusion.globals.frame_processors
 	)
 	FACE_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
-		label = wording.get('face_enhancer_model_dropdown_label'),
+		label = wording.get('uis.face_enhancer_model_dropdown'),
 		choices = frame_processors_choices.face_enhancer_models,
 		value = frame_processors_globals.face_enhancer_model,
 		visible = 'face_enhancer' in facefusion.globals.frame_processors
 	)
 	FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
-		label = wording.get('face_enhancer_blend_slider_label'),
+		label = wording.get('uis.face_enhancer_blend_slider'),
 		value = frame_processors_globals.face_enhancer_blend,
 		step = frame_processors_choices.face_enhancer_blend_range[1] - frame_processors_choices.face_enhancer_blend_range[0],
 		minimum = frame_processors_choices.face_enhancer_blend_range[0],
 		maximum = frame_processors_choices.face_enhancer_blend_range[-1],
 		visible = 'face_enhancer' in facefusion.globals.frame_processors
 	)
+	FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
+		label = wording.get('uis.face_swapper_model_dropdown'),
+		choices = frame_processors_choices.face_swapper_models,
+		value = frame_processors_globals.face_swapper_model,
+		visible = 'face_swapper' in facefusion.globals.frame_processors
+	)
 	FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
-		label = wording.get('frame_enhancer_model_dropdown_label'),
+		label = wording.get('uis.frame_enhancer_model_dropdown'),
 		choices = frame_processors_choices.frame_enhancer_models,
 		value = frame_processors_globals.frame_enhancer_model,
 		visible = 'frame_enhancer' in facefusion.globals.frame_processors
 	)
 	FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
-		label = wording.get('frame_enhancer_blend_slider_label'),
+		label = wording.get('uis.frame_enhancer_blend_slider'),
 		value = frame_processors_globals.frame_enhancer_blend,
 		step = frame_processors_choices.frame_enhancer_blend_range[1] - frame_processors_choices.frame_enhancer_blend_range[0],
 		minimum = frame_processors_choices.frame_enhancer_blend_range[0],
 		maximum = frame_processors_choices.frame_enhancer_blend_range[-1],
 		visible = 'frame_enhancer' in facefusion.globals.frame_processors
 	)
-	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
-		label = wording.get('face_debugger_items_checkbox_group_label'),
-		choices = frame_processors_choices.face_debugger_items,
-		value = frame_processors_globals.face_debugger_items,
-		visible = 'face_debugger' in facefusion.globals.frame_processors
+	LIP_SYNCER_MODEL_DROPDOWN = gradio.Dropdown(
+		label = wording.get('uis.lip_syncer_model_dropdown'),
+		choices = frame_processors_choices.lip_syncer_models,
+		value = frame_processors_globals.lip_syncer_model,
+		visible = 'lip_syncer' in facefusion.globals.frame_processors
 	)
 
-	register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
+	register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
 	register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
 	register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)
+	register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
 	register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
 	register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)
-	register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
+	register_ui_component('lip_syncer_model_dropdown', LIP_SYNCER_MODEL_DROPDOWN)
 
 
 def listen() -> None:
-	FACE_SWAPPER_MODEL_DROPDOWN.change(update_face_swapper_model, inputs = FACE_SWAPPER_MODEL_DROPDOWN, outputs = FACE_SWAPPER_MODEL_DROPDOWN)
+	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP.change(update_face_debugger_items, inputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
 	FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = FACE_ENHANCER_MODEL_DROPDOWN)
 	FACE_ENHANCER_BLEND_SLIDER.change(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
+	FACE_SWAPPER_MODEL_DROPDOWN.change(update_face_swapper_model, inputs = FACE_SWAPPER_MODEL_DROPDOWN, outputs = FACE_SWAPPER_MODEL_DROPDOWN)
 	FRAME_ENHANCER_MODEL_DROPDOWN.change(update_frame_enhancer_model, inputs = FRAME_ENHANCER_MODEL_DROPDOWN, outputs = FRAME_ENHANCER_MODEL_DROPDOWN)
 	FRAME_ENHANCER_BLEND_SLIDER.change(update_frame_enhancer_blend, inputs = FRAME_ENHANCER_BLEND_SLIDER)
-	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP.change(update_face_debugger_items, inputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
+	LIP_SYNCER_MODEL_DROPDOWN.change(update_lip_syncer_model, inputs = LIP_SYNCER_MODEL_DROPDOWN, outputs = LIP_SYNCER_MODEL_DROPDOWN)
 	frame_processors_checkbox_group = get_ui_component('frame_processors_checkbox_group')
 	if frame_processors_checkbox_group:
-		frame_processors_checkbox_group.change(toggle_face_swapper_model, inputs = frame_processors_checkbox_group, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER, FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP ])
+		frame_processors_checkbox_group.change(update_frame_processors, inputs = frame_processors_checkbox_group, outputs = [ FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FACE_SWAPPER_MODEL_DROPDOWN, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER, LIP_SYNCER_MODEL_DROPDOWN ])
 
 
+def update_frame_processors(frame_processors : List[str]) -> Tuple[gradio.CheckboxGroup, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown]:
+	has_face_debugger = 'face_debugger' in frame_processors
+	has_face_enhancer = 'face_enhancer' in frame_processors
+	has_face_swapper = 'face_swapper' in frame_processors
+	has_frame_enhancer = 'frame_enhancer' in frame_processors
+	has_lip_syncer = 'lip_syncer' in frame_processors
+	return gradio.CheckboxGroup(visible = has_face_debugger), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer), gradio.Dropdown(visible = has_lip_syncer)
+
+
+def update_face_debugger_items(face_debugger_items : List[FaceDebuggerItem]) -> None:
+	frame_processors_globals.face_debugger_items = face_debugger_items
+
+
+def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
+	frame_processors_globals.face_enhancer_model = face_enhancer_model
+	face_enhancer_module = load_frame_processor_module('face_enhancer')
+	face_enhancer_module.clear_frame_processor()
+	face_enhancer_module.set_options('model', face_enhancer_module.MODELS[face_enhancer_model])
+	if face_enhancer_module.pre_check():
+		return gradio.Dropdown(value = face_enhancer_model)
+	return gradio.Dropdown()
+
+
+def update_face_enhancer_blend(face_enhancer_blend : int) -> None:
+	frame_processors_globals.face_enhancer_blend = face_enhancer_blend
+
+
 def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> gradio.Dropdown:
@@ -93,26 +129,14 @@ def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> gradio.D
	facefusion.globals.face_recognizer_model = 'arcface_inswapper'
	if face_swapper_model == 'simswap_256' or face_swapper_model == 'simswap_512_unofficial':
		facefusion.globals.face_recognizer_model = 'arcface_simswap'
	if face_swapper_model == 'uniface_256':
		facefusion.globals.face_recognizer_model = 'arcface_uniface'
	face_swapper_module = load_frame_processor_module('face_swapper')
	face_swapper_module.clear_frame_processor()
	face_swapper_module.set_options('model', face_swapper_module.MODELS[face_swapper_model])
	if not face_swapper_module.pre_check():
		return gradio.Dropdown()
	return gradio.Dropdown(value = face_swapper_model)


def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
	frame_processors_globals.face_enhancer_model = face_enhancer_model
	face_enhancer_module = load_frame_processor_module('face_enhancer')
	face_enhancer_module.clear_frame_processor()
	face_enhancer_module.set_options('model', face_enhancer_module.MODELS[face_enhancer_model])
	if not face_enhancer_module.pre_check():
		return gradio.Dropdown()
	return gradio.Dropdown(value = face_enhancer_model)


def update_face_enhancer_blend(face_enhancer_blend : int) -> None:
	frame_processors_globals.face_enhancer_blend = face_enhancer_blend
	if face_swapper_module.pre_check():
		return gradio.Dropdown(value = face_swapper_model)
	return gradio.Dropdown()
def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gradio.Dropdown:
@@ -120,22 +144,20 @@ def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gr
	frame_enhancer_module = load_frame_processor_module('frame_enhancer')
	frame_enhancer_module.clear_frame_processor()
	frame_enhancer_module.set_options('model', frame_enhancer_module.MODELS[frame_enhancer_model])
	if not frame_enhancer_module.pre_check():
		return gradio.Dropdown()
	return gradio.Dropdown(value = frame_enhancer_model)
	if frame_enhancer_module.pre_check():
		return gradio.Dropdown(value = frame_enhancer_model)
	return gradio.Dropdown()


def update_frame_enhancer_blend(frame_enhancer_blend : int) -> None:
	frame_processors_globals.frame_enhancer_blend = frame_enhancer_blend


def update_face_debugger_items(face_debugger_items : List[FaceDebuggerItem]) -> None:
	frame_processors_globals.face_debugger_items = face_debugger_items


def toggle_face_swapper_model(frame_processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider, gradio.CheckboxGroup]:
	has_face_swapper = 'face_swapper' in frame_processors
	has_face_enhancer = 'face_enhancer' in frame_processors
	has_frame_enhancer = 'frame_enhancer' in frame_processors
	has_face_debugger = 'face_debugger' in frame_processors
	return gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer), gradio.CheckboxGroup(visible = has_face_debugger)
def update_lip_syncer_model(lip_syncer_model : LipSyncerModel) -> gradio.Dropdown:
	frame_processors_globals.lip_syncer_model = lip_syncer_model
	lip_syncer_module = load_frame_processor_module('lip_syncer')
	lip_syncer_module.clear_frame_processor()
	lip_syncer_module.set_options('model', lip_syncer_module.MODELS[lip_syncer_model])
	if lip_syncer_module.pre_check():
		return gradio.Dropdown(value = lip_syncer_model)
	return gradio.Dropdown()


@@ -15,12 +15,12 @@ def render() -> None:
	global SYSTEM_MEMORY_LIMIT_SLIDER

	VIDEO_MEMORY_STRATEGY = gradio.Dropdown(
		label = wording.get('video_memory_strategy_dropdown_label'),
		label = wording.get('uis.video_memory_strategy_dropdown'),
		choices = facefusion.choices.video_memory_strategies,
		value = facefusion.globals.video_memory_strategy
	)
	SYSTEM_MEMORY_LIMIT_SLIDER = gradio.Slider(
		label = wording.get('system_memory_limit_slider_label'),
		label = wording.get('uis.system_memory_limit_slider'),
		step = facefusion.choices.system_memory_limit_range[1] - facefusion.choices.system_memory_limit_range[0],
		minimum = facefusion.choices.system_memory_limit_range[0],
		maximum = facefusion.choices.system_memory_limit_range[-1],
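The slider wiring above derives `step` from the spacing of the first two entries of the choices range and the bounds from its endpoints. Assuming an evenly spaced, ascending range (the concrete values below are illustrative, not the project's actual defaults), the idea reduces to:

```python
def calc_slider_args(value_range):
    # Assumes an evenly spaced, ascending range, e.g. [0, 4, 8, ..., 128];
    # the step is simply the distance between the first two entries.
    return {
        'step': value_range[1] - value_range[0],
        'minimum': value_range[0],
        'maximum': value_range[-1]
    }

system_memory_limit_range = list(range(0, 129, 4))
slider_args = calc_slider_args(system_memory_limit_range)
```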
@@ -22,19 +22,19 @@ def render() -> None:
	global OUTPUT_CLEAR_BUTTON

	OUTPUT_IMAGE = gradio.Image(
		label = wording.get('output_image_or_video_label'),
		label = wording.get('uis.output_image_or_video'),
		visible = False
	)
	OUTPUT_VIDEO = gradio.Video(
		label = wording.get('output_image_or_video_label')
		label = wording.get('uis.output_image_or_video')
	)
	OUTPUT_START_BUTTON = gradio.Button(
		value = wording.get('start_button_label'),
		value = wording.get('uis.start_button'),
		variant = 'primary',
		size = 'sm'
	)
	OUTPUT_CLEAR_BUTTON = gradio.Button(
		value = wording.get('clear_button_label'),
		value = wording.get('uis.clear_button'),
		size = 'sm'
	)
@@ -30,12 +30,12 @@ def render() -> None:
	global OUTPUT_VIDEO_FPS_SLIDER

	OUTPUT_PATH_TEXTBOX = gradio.Textbox(
		label = wording.get('output_path_textbox_label'),
		label = wording.get('uis.output_path_textbox'),
		value = facefusion.globals.output_path or tempfile.gettempdir(),
		max_lines = 1
	)
	OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_image_quality_slider_label'),
		label = wording.get('uis.output_image_quality_slider'),
		value = facefusion.globals.output_image_quality,
		step = facefusion.choices.output_image_quality_range[1] - facefusion.choices.output_image_quality_range[0],
		minimum = facefusion.choices.output_image_quality_range[0],
@@ -43,19 +43,19 @@ def render() -> None:
		visible = is_image(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
		label = wording.get('output_video_encoder_dropdown_label'),
		label = wording.get('uis.output_video_encoder_dropdown'),
		choices = facefusion.choices.output_video_encoders,
		value = facefusion.globals.output_video_encoder,
		visible = is_video(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_PRESET_DROPDOWN = gradio.Dropdown(
		label = wording.get('output_video_preset_dropdown_label'),
		label = wording.get('uis.output_video_preset_dropdown'),
		choices = facefusion.choices.output_video_presets,
		value = facefusion.globals.output_video_preset,
		visible = is_video(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('output_video_quality_slider_label'),
		label = wording.get('uis.output_video_quality_slider'),
		value = facefusion.globals.output_video_quality,
		step = facefusion.choices.output_video_quality_range[1] - facefusion.choices.output_video_quality_range[0],
		minimum = facefusion.choices.output_video_quality_range[0],
@@ -63,13 +63,13 @@ def render() -> None:
		visible = is_video(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_RESOLUTION_DROPDOWN = gradio.Dropdown(
		label = wording.get('output_video_resolution_dropdown_label'),
		label = wording.get('uis.output_video_resolution_dropdown'),
		choices = create_video_resolutions(facefusion.globals.target_path),
		value = facefusion.globals.output_video_resolution,
		visible = is_video(facefusion.globals.target_path)
	)
	OUTPUT_VIDEO_FPS_SLIDER = gradio.Slider(
		label = wording.get('output_video_fps_slider_label'),
		label = wording.get('uis.output_video_fps_slider'),
		value = facefusion.globals.output_video_fps,
		step = 0.01,
		minimum = 1,
@@ -77,6 +77,7 @@ def render() -> None:
		visible = is_video(facefusion.globals.target_path)
	)
	register_ui_component('output_path_textbox', OUTPUT_PATH_TEXTBOX)
	register_ui_component('output_video_fps_slider', OUTPUT_VIDEO_FPS_SLIDER)
def listen() -> None:
@@ -89,7 +90,6 @@ def listen() -> None:
	OUTPUT_VIDEO_FPS_SLIDER.change(update_output_video_fps, inputs = OUTPUT_VIDEO_FPS_SLIDER)
	multi_component_names : List[ComponentName] =\
	[
		'source_image',
		'target_image',
		'target_video'
	]
@@ -5,12 +5,14 @@ import gradio

import facefusion.globals
from facefusion import wording, logger
from facefusion.audio import get_audio_frame
from facefusion.common_helper import get_first
from facefusion.core import conditional_append_reference_faces
from facefusion.face_store import clear_static_faces, get_reference_faces, clear_reference_faces
from facefusion.typing import Frame, Face, FaceSet
from facefusion.vision import get_video_frame, count_video_frame_total, normalize_frame_color, resize_frame_resolution, read_static_image, read_static_images
from facefusion.filesystem import is_image, is_video
from facefusion.face_analyser import get_average_face, clear_face_analyser
from facefusion.face_store import clear_static_faces, get_reference_faces, clear_reference_faces
from facefusion.typing import Face, FaceSet, AudioFrame, VisionFrame
from facefusion.vision import get_video_frame, count_video_frame_total, normalize_frame_color, resize_frame_resolution, read_static_image, read_static_images
from facefusion.filesystem import is_image, is_video, filter_audio_paths
from facefusion.content_analyser import analyse_frame
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.uis.typing import ComponentName
@@ -26,29 +28,34 @@ def render() -> None:

	preview_image_args: Dict[str, Any] =\
	{
		'label': wording.get('preview_image_label'),
		'label': wording.get('uis.preview_image'),
		'interactive': False
	}
	preview_frame_slider_args: Dict[str, Any] =\
	{
		'label': wording.get('preview_frame_slider_label'),
		'label': wording.get('uis.preview_frame_slider'),
		'step': 1,
		'minimum': 0,
		'maximum': 100,
		'visible': False
	}
	conditional_append_reference_faces()
	reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
	source_frames = read_static_images(facefusion.globals.source_paths)
	source_face = get_average_face(source_frames)
	reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
	source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
	if source_audio_path and facefusion.globals.output_video_fps:
		source_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, facefusion.globals.reference_frame_number)
	else:
		source_audio_frame = None
	if is_image(facefusion.globals.target_path):
		target_frame = read_static_image(facefusion.globals.target_path)
		preview_frame = process_preview_frame(source_face, reference_faces, target_frame)
		preview_image_args['value'] = normalize_frame_color(preview_frame)
		target_vision_frame = read_static_image(facefusion.globals.target_path)
		preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
		preview_image_args['value'] = normalize_frame_color(preview_vision_frame)
	if is_video(facefusion.globals.target_path):
		temp_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
		preview_frame = process_preview_frame(source_face, reference_faces, temp_frame)
		preview_image_args['value'] = normalize_frame_color(preview_frame)
		temp_vision_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
		preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
		preview_image_args['value'] = normalize_frame_color(preview_vision_frame)
		preview_image_args['visible'] = True
		preview_frame_slider_args['value'] = facefusion.globals.reference_frame_number
		preview_frame_slider_args['maximum'] = count_video_frame_total(facefusion.globals.target_path)
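The get_audio_frame() call above needs the output video fps to map the reference frame number onto the audio stream. The real lookup lives in facefusion.audio and operates on preprocessed audio frames; the index arithmetic behind it can be sketched as (an illustrative sketch only, with an assumed audio frame rate parameter):

```python
def frame_timestamp(frame_number, fps):
    # A video frame number and its fps determine a timestamp in seconds.
    return frame_number / fps

def audio_frame_index(frame_number, video_fps, audio_frame_rate):
    # The timestamp then selects the nearest audio frame at the audio rate.
    return round(frame_timestamp(frame_number, video_fps) * audio_frame_rate)
```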
@@ -60,8 +67,12 @@ def render() -> None:

def listen() -> None:
	PREVIEW_FRAME_SLIDER.release(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
	reference_face_position_gallery = get_ui_component('reference_face_position_gallery')
	if reference_face_position_gallery:
		reference_face_position_gallery.select(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
	multi_one_component_names : List[ComponentName] =\
	[
		'source_audio',
		'source_image',
		'target_image',
		'target_video'
@@ -81,17 +92,6 @@ def listen() -> None:
		if component:
			for method in [ 'upload', 'change', 'clear' ]:
				getattr(component, method)(update_preview_frame_slider, outputs = PREVIEW_FRAME_SLIDER)
	select_component_names : List[ComponentName] =\
	[
		'reference_face_position_gallery',
		'face_analyser_order_dropdown',
		'face_analyser_age_dropdown',
		'face_analyser_gender_dropdown'
	]
	for component_name in select_component_names:
		component = get_ui_component(component_name)
		if component:
			component.select(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
	change_one_component_names : List[ComponentName] =\
	[
		'face_debugger_items_checkbox_group',
@@ -105,7 +105,11 @@ def listen() -> None:
		'face_mask_padding_bottom_slider',
		'face_mask_padding_left_slider',
		'face_mask_padding_right_slider',
		'face_mask_region_checkbox_group'
		'face_mask_region_checkbox_group',
		'face_analyser_order_dropdown',
		'face_analyser_age_dropdown',
		'face_analyser_gender_dropdown',
		'output_video_fps_slider'
	]
	for component_name in change_one_component_names:
		component = get_ui_component(component_name)
@@ -117,6 +121,7 @@ def listen() -> None:
		'face_enhancer_model_dropdown',
		'face_swapper_model_dropdown',
		'frame_enhancer_model_dropdown',
		'lip_syncer_model_dropdown',
		'face_detector_model_dropdown',
		'face_detector_size_dropdown',
		'face_detector_score_slider'
@@ -143,19 +148,25 @@ def update_preview_image(frame_number : int = 0) -> gradio.Image:
	sleep(0.5)
	logger.enable()
	conditional_append_reference_faces()
	reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
	source_frames = read_static_images(facefusion.globals.source_paths)
	source_face = get_average_face(source_frames)
	reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
	source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
	if source_audio_path and facefusion.globals.output_video_fps:
		source_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, facefusion.globals.reference_frame_number)
	else:
		source_audio_frame = None

	if is_image(facefusion.globals.target_path):
		target_frame = read_static_image(facefusion.globals.target_path)
		preview_frame = process_preview_frame(source_face, reference_faces, target_frame)
		preview_frame = normalize_frame_color(preview_frame)
		return gradio.Image(value = preview_frame)
		target_vision_frame = read_static_image(facefusion.globals.target_path)
		preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
		preview_vision_frame = normalize_frame_color(preview_vision_frame)
		return gradio.Image(value = preview_vision_frame)
	if is_video(facefusion.globals.target_path):
		temp_frame = get_video_frame(facefusion.globals.target_path, frame_number)
		preview_frame = process_preview_frame(source_face, reference_faces, temp_frame)
		preview_frame = normalize_frame_color(preview_frame)
		return gradio.Image(value = preview_frame)
		temp_vision_frame = get_video_frame(facefusion.globals.target_path, frame_number)
		preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
		preview_vision_frame = normalize_frame_color(preview_vision_frame)
		return gradio.Image(value = preview_vision_frame)
	return gradio.Image(value = None)
@@ -166,18 +177,20 @@ def update_preview_frame_slider() -> gradio.Slider:
	return gradio.Slider(value = None, maximum = None, visible = False)


def process_preview_frame(source_face : Face, reference_faces : FaceSet, temp_frame : Frame) -> Frame:
	temp_frame = resize_frame_resolution(temp_frame, 640, 640)
	if analyse_frame(temp_frame):
		return cv2.GaussianBlur(temp_frame, (99, 99), 0)
def process_preview_frame(reference_faces : FaceSet, source_face : Face, source_audio_frame : AudioFrame, target_vision_frame : VisionFrame) -> VisionFrame:
	target_vision_frame = resize_frame_resolution(target_vision_frame, 640, 640)
	if analyse_frame(target_vision_frame):
		return cv2.GaussianBlur(target_vision_frame, (99, 99), 0)
	for frame_processor in facefusion.globals.frame_processors:
		frame_processor_module = load_frame_processor_module(frame_processor)
		logger.disable()
		if frame_processor_module.pre_process('preview'):
			logger.enable()
			temp_frame = frame_processor_module.process_frame(
				source_face,
				reference_faces,
				temp_frame
			)
	return temp_frame
			target_vision_frame = frame_processor_module.process_frame(
			{
				'reference_faces': reference_faces,
				'source_face': source_face,
				'source_audio_frame': source_audio_frame,
				'target_vision_frame': target_vision_frame
			})
	return target_vision_frame
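With this change every frame processor receives a single dict payload instead of positional arguments, so new inputs (like the audio frame for the lip syncer) can be added without touching every call site. A stub processor illustrating the contract (the stub itself is hypothetical; real modules implement process_frame under facefusion.processors.frame):

```python
def process_frame(inputs):
    # A real processor would blend source_face / source_audio_frame into
    # the target; this stub just passes the target frame through.
    target_vision_frame = inputs['target_vision_frame']
    return target_vision_frame

payload = {
    'reference_faces': None,
    'source_face': None,
    'source_audio_frame': None,
    'target_vision_frame': 'frame-placeholder'
}
result = process_frame(payload)
```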
@@ -1,49 +1,67 @@
from typing import Optional, List
from typing import Optional, List, Tuple
import gradio

import facefusion.globals
from facefusion import wording
from facefusion.uis.typing import File
from facefusion.filesystem import are_images
from facefusion.common_helper import get_first
from facefusion.filesystem import has_audio, has_image, filter_audio_paths, filter_image_paths
from facefusion.uis.core import register_ui_component

SOURCE_FILE : Optional[gradio.File] = None
SOURCE_AUDIO : Optional[gradio.Audio] = None
SOURCE_IMAGE : Optional[gradio.Image] = None


def render() -> None:
	global SOURCE_FILE
	global SOURCE_AUDIO
	global SOURCE_IMAGE

	are_source_images = are_images(facefusion.globals.source_paths)
	has_source_audio = has_audio(facefusion.globals.source_paths)
	has_source_image = has_image(facefusion.globals.source_paths)
	SOURCE_FILE = gradio.File(
		file_count = 'multiple',
		file_types =
		[
			'.mp3',
			'.wav',
			'.png',
			'.jpg',
			'.webp'
		],
		label = wording.get('source_file_label'),
		value = facefusion.globals.source_paths if are_source_images else None
		label = wording.get('uis.source_file'),
		value = facefusion.globals.source_paths if has_source_audio or has_source_image else None
	)
	source_file_names = [ source_file_value['name'] for source_file_value in SOURCE_FILE.value ] if SOURCE_FILE.value else None
	SOURCE_IMAGE = gradio.Image(
		value = source_file_names[0] if are_source_images else None,
		visible = are_source_images,
	source_audio_path = get_first(filter_audio_paths(source_file_names))
	source_image_path = get_first(filter_image_paths(source_file_names))
	SOURCE_AUDIO = gradio.Audio(
		value = source_audio_path if has_source_audio else None,
		visible = has_source_audio,
		show_label = False
	)
	SOURCE_IMAGE = gradio.Image(
		value = source_image_path if has_source_image else None,
		visible = has_source_image,
		show_label = False
	)
	register_ui_component('source_audio', SOURCE_AUDIO)
	register_ui_component('source_image', SOURCE_IMAGE)


def listen() -> None:
	SOURCE_FILE.change(update, inputs = SOURCE_FILE, outputs = SOURCE_IMAGE)
	SOURCE_FILE.change(update, inputs = SOURCE_FILE, outputs = [ SOURCE_AUDIO, SOURCE_IMAGE ])


def update(files : List[File]) -> gradio.Image:
def update(files : List[File]) -> Tuple[gradio.Audio, gradio.Image]:
	file_names = [ file.name for file in files ] if files else None
	if are_images(file_names):
	has_source_audio = has_audio(file_names)
	has_source_image = has_image(file_names)
	if has_source_audio or has_source_image:
		source_audio_path = get_first(filter_audio_paths(file_names))
		source_image_path = get_first(filter_image_paths(file_names))
		facefusion.globals.source_paths = file_names
		return gradio.Image(value = file_names[0], visible = True)
		return gradio.Audio(value = source_audio_path, visible = has_source_audio), gradio.Image(value = source_image_path, visible = has_source_image)
	facefusion.globals.source_paths = None
	return gradio.Image(value = None, visible = False)
	return gradio.Audio(value = None, visible = False), gradio.Image(value = None, visible = False)
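The reworked source component leans on get_first plus the audio/image path filters to split mixed uploads. A hedged sketch of those helpers (the real implementations live in facefusion.common_helper and facefusion.filesystem and may differ in detail, e.g. in the exact extension lists):

```python
def get_first(items):
    # First element of a list, or None when the list is empty or None.
    return items[0] if items else None

def filter_audio_paths(paths):
    # Assumed audio extensions, matching the file_types above.
    return [ path for path in paths or [] if path.lower().endswith(('.mp3', '.wav')) ]

def filter_image_paths(paths):
    # Assumed image extensions, matching the file_types above.
    return [ path for path in paths or [] if path.lower().endswith(('.png', '.jpg', '.webp')) ]
```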
@@ -21,7 +21,7 @@ def render() -> None:
	is_target_image = is_image(facefusion.globals.target_path)
	is_target_video = is_video(facefusion.globals.target_path)
	TARGET_FILE = gradio.File(
		label = wording.get('target_file_label'),
		label = wording.get('uis.target_file'),
		file_count = 'single',
		file_types =
		[

@@ -17,13 +17,13 @@ def render() -> None:
	global TEMP_FRAME_QUALITY_SLIDER

	TEMP_FRAME_FORMAT_DROPDOWN = gradio.Dropdown(
		label = wording.get('temp_frame_format_dropdown_label'),
		label = wording.get('uis.temp_frame_format_dropdown'),
		choices = facefusion.choices.temp_frame_formats,
		value = facefusion.globals.temp_frame_format,
		visible = is_video(facefusion.globals.target_path)
	)
	TEMP_FRAME_QUALITY_SLIDER = gradio.Slider(
		label = wording.get('temp_frame_quality_slider_label'),
		label = wording.get('uis.temp_frame_quality_slider'),
		value = facefusion.globals.temp_frame_quality,
		step = facefusion.choices.temp_frame_quality_range[1] - facefusion.choices.temp_frame_quality_range[0],
		minimum = facefusion.choices.temp_frame_quality_range[0],
@@ -17,7 +17,7 @@ def render() -> None:

	trim_frame_start_slider_args : Dict[str, Any] =\
	{
		'label': wording.get('trim_frame_start_slider_label'),
		'label': wording.get('uis.trim_frame_start_slider'),
		'step': 1,
		'minimum': 0,
		'maximum': 100,
@@ -25,7 +25,7 @@ def render() -> None:
	}
	trim_frame_end_slider_args : Dict[str, Any] =\
	{
		'label': wording.get('trim_frame_end_slider_label'),
		'label': wording.get('uis.trim_frame_end_slider'),
		'step': 1,
		'minimum': 0,
		'maximum': 100,
@@ -12,7 +12,7 @@ from tqdm import tqdm
import facefusion.globals
from facefusion import logger, wording
from facefusion.content_analyser import analyse_stream
from facefusion.typing import Frame, Face, Fps
from facefusion.typing import VisionFrame, Face, Fps
from facefusion.face_analyser import get_average_face
from facefusion.processors.frame.core import get_frame_processors_modules, load_frame_processor_module
from facefusion.ffmpeg import open_ffmpeg
@@ -53,15 +53,15 @@ def render() -> None:
	global WEBCAM_STOP_BUTTON

	WEBCAM_IMAGE = gradio.Image(
		label = wording.get('webcam_image_label')
		label = wording.get('uis.webcam_image')
	)
	WEBCAM_START_BUTTON = gradio.Button(
		value = wording.get('start_button_label'),
		value = wording.get('uis.start_button'),
		variant = 'primary',
		size = 'sm'
	)
	WEBCAM_STOP_BUTTON = gradio.Button(
		value = wording.get('stop_button_label'),
		value = wording.get('uis.stop_button'),
		size = 'sm'
	)

@@ -80,6 +80,7 @@ def listen() -> None:
		'face_swapper_model_dropdown',
		'face_enhancer_model_dropdown',
		'frame_enhancer_model_dropdown',
		'lip_syncer_model_dropdown',
		'source_image'
	]
	for component_name in change_two_component_names:
@@ -88,7 +89,7 @@ def listen() -> None:
			component.change(update, cancels = start_event)


def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[Frame, None, None]:
def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
	facefusion.globals.face_selector_mode = 'one'
	facefusion.globals.face_analyser_order = 'large-small'
	source_frames = read_static_images(facefusion.globals.source_paths)
@@ -114,11 +115,11 @@ def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -
	yield None


def multi_process_capture(source_face : Face, webcam_capture : cv2.VideoCapture, webcam_fps : Fps) -> Generator[Frame, None, None]:
def multi_process_capture(source_face : Face, webcam_capture : cv2.VideoCapture, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
	with tqdm(desc = wording.get('processing'), unit = 'frame', ascii = ' =', disable = facefusion.globals.log_level in [ 'warn', 'error' ]) as progress:
		with ThreadPoolExecutor(max_workers = facefusion.globals.execution_thread_count) as executor:
			futures = []
			deque_capture_frames : Deque[Frame] = deque()
			deque_capture_frames : Deque[VisionFrame] = deque()
			while webcam_capture and webcam_capture.isOpened():
				_, capture_frame = webcam_capture.read()
				if analyse_stream(capture_frame, webcam_fps):
@@ -148,17 +149,19 @@ def stop() -> gradio.Image:
	return gradio.Image(value = None)


def process_stream_frame(source_face : Face, temp_frame : Frame) -> Frame:
def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame) -> VisionFrame:
	for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
		logger.disable()
		if frame_processor_module.pre_process('stream'):
			logger.enable()
			temp_frame = frame_processor_module.process_frame(
				source_face,
				None,
				temp_frame
			)
	return temp_frame
			target_vision_frame = frame_processor_module.process_frame(
			{
				'source_face': source_face,
				'reference_faces': None,
				'source_audio_frame': None,
				'target_vision_frame': target_vision_frame
			})
	return target_vision_frame


def open_stream(stream_mode : StreamMode, stream_resolution : str, stream_fps : Fps) -> subprocess.Popen[bytes]:
@@ -16,17 +16,17 @@ def render() -> None:
	global WEBCAM_FPS_SLIDER

	WEBCAM_MODE_RADIO = gradio.Radio(
		label = wording.get('webcam_mode_radio_label'),
		label = wording.get('uis.webcam_mode_radio'),
		choices = uis_choices.webcam_modes,
		value = 'inline'
	)
	WEBCAM_RESOLUTION_DROPDOWN = gradio.Dropdown(
		label = wording.get('webcam_resolution_dropdown'),
		label = wording.get('uis.webcam_resolution_dropdown'),
		choices = uis_choices.webcam_resolutions,
		value = uis_choices.webcam_resolutions[0]
	)
	WEBCAM_FPS_SLIDER = gradio.Slider(
		label = wording.get('webcam_fps_slider'),
		label = wording.get('uis.webcam_fps_slider'),
		value = 25,
		step = 1,
		minimum = 1,
@@ -34,6 +34,7 @@ def render() -> gradio.Blocks:
		about.render()
	with gradio.Blocks():
		frame_processors.render()
	with gradio.Blocks():
		frame_processors_options.render()
	with gradio.Blocks():
		execution.render()
@@ -60,4 +61,4 @@ def listen() -> None:


def run(ui : gradio.Blocks) -> None:
	ui.queue(concurrency_count = 2, api_open = False).launch(show_api = False)
	ui.queue(concurrency_count = 2).launch(show_api = False, quiet = True)

@@ -19,6 +19,7 @@ def render() -> gradio.Blocks:
		about.render()
	with gradio.Blocks():
		frame_processors.render()
	with gradio.Blocks():
		frame_processors_options.render()
	with gradio.Blocks():
		execution.render()
@@ -74,4 +75,4 @@ def listen() -> None:


def run(ui : gradio.Blocks) -> None:
	ui.launch(show_api = False)
	ui.launch(show_api = False, quiet = True)

@@ -19,6 +19,7 @@ def render() -> gradio.Blocks:
		about.render()
	with gradio.Blocks():
		frame_processors.render()
	with gradio.Blocks():
		frame_processors_options.render()
	with gradio.Blocks():
		execution.render()
@@ -43,4 +44,4 @@ def listen() -> None:


def run(ui : gradio.Blocks) -> None:
	ui.queue(concurrency_count = 2, api_open = False).launch(show_api = False)
	ui.queue(concurrency_count = 2).launch(show_api = False, quiet = True)
@@ -5,6 +5,7 @@ File = IO[Any]
Component = gradio.File or gradio.Image or gradio.Video or gradio.Slider
ComponentName = Literal\
[
	'source_audio',
	'source_image',
	'target_image',
	'target_video',
@@ -26,13 +27,15 @@ ComponentName = Literal\
	'face_mask_padding_right_slider',
	'face_mask_region_checkbox_group',
	'frame_processors_checkbox_group',
	'face_swapper_model_dropdown',
	'face_debugger_items_checkbox_group',
	'face_enhancer_model_dropdown',
	'face_enhancer_blend_slider',
	'face_swapper_model_dropdown',
	'frame_enhancer_model_dropdown',
	'frame_enhancer_blend_slider',
	'face_debugger_items_checkbox_group',
	'lip_syncer_model_dropdown',
	'output_path_textbox',
	'output_video_fps_slider',
	'benchmark_runs_checkbox_group',
	'benchmark_cycles_slider',
	'webcam_mode_radio',