* Rename landmark 5 variables

* Mark as NEXT

* Render tabs for multiple ui layout usage

* Allow many face detectors at once, Add face detector tweaks

* Remove face detector tweaks for now (kinda placebo)

* Fix lint issues

* Allow rendering the landmark-5 and landmark-5/68 via debugger

* Fix naming

* Convert face landmark based on confidence score

* Convert face landmark based on confidence score

* Add scrfd face detector model (#397)

* Add scrfd face detector model

* Switch to scrfd_2.5g.onnx model

* Just some renaming

* Downgrade OpenCV, Add SYSTEM_VERSION_COMPAT=0 for macOS
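
For context, a minimal sketch of the macOS environment tweak, assuming it is applied before onnxruntime (or any other library that probes the OS version) gets imported; the exact placement in the project is not shown here:

```python
import os
import platform

# Hedged sketch: SYSTEM_VERSION_COMPAT=0 must be in the environment before
# libraries that probe the macOS version (such as onnxruntime) are imported.
if platform.system().lower() == 'darwin':
    os.environ['SYSTEM_VERSION_COMPAT'] = '0'
```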

* Improve naming

* Prepare the detect frame outside of the semaphore

* Feat/process manager (#399)

* Minor naming

* Introduce process manager to start and stop

* Introduce process manager to start and stop

* Introduce process manager to start and stop

* Introduce process manager to start and stop

* Introduce process manager to start and stop

* Remove useless test for now

* Avoid useless variables

* Show stop once is_processing is True

* Allow stopping ffmpeg processing too
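
To illustrate the process manager introduced in #399 above (and the is_processing()/stop() calls that appear throughout the diff further down), here is a minimal sketch assuming a single module-level state flag; it is not the project's exact implementation:

```python
from typing import Literal

ProcessState = Literal['pending', 'processing', 'stopping']

PROCESS_STATE : ProcessState = 'pending'

def is_processing() -> bool:
    return PROCESS_STATE == 'processing'

def start() -> None:
    set_process_state('processing')

def stop() -> None:
    set_process_state('stopping')

def end() -> None:
    set_process_state('pending')

def set_process_state(process_state : ProcessState) -> None:
    # single place that mutates the module-level state
    global PROCESS_STATE
    PROCESS_STATE = process_state
```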

* Implement output image resolution (#403)

* Implement output image resolution

* Reorder code

* Simplify output logic and therefore fix bug
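
The output-resolution work in #403 leans on small helpers such as pack_resolution, which also shows up in the diff below. A minimal sketch of that idea, assuming resolutions travel as (width, height) tuples and 'WIDTHxHEIGHT' strings:

```python
from typing import Tuple

Resolution = Tuple[int, int]

def pack_resolution(resolution : Resolution) -> str:
    # (1920, 1080) -> '1920x1080'
    width, height = resolution
    return str(width) + 'x' + str(height)

def unpack_resolution(resolution : str) -> Resolution:
    # '1920x1080' -> (1920, 1080)
    width, height = map(int, resolution.split('x'))
    return width, height
```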

* Frame-enhancer-onnx (#404)

* changes

* changes

* changes

* changes

* add models

* update workflow

* Some cleanup

* Some cleanup

* Feat/frame enhancer polishing (#410)

* Some cleanup

* Polish the frame enhancer

* Frame Enhancer: Add more models, optimize processing

* Minor changes

* Improve readability of create_tile_frames and merge_tile_frames
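
A generic illustration of the tile-then-merge processing mentioned here; the project's create_tile_frames / merge_tile_frames also handle padding and overlapping borders, which this sketch leaves out:

```python
from typing import Callable
import numpy

def process_by_tiles(vision_frame : numpy.ndarray, tile_size : int, process_tile : Callable[[numpy.ndarray], numpy.ndarray]) -> numpy.ndarray:
    # split the frame into tiles, run the enhancer on each tile, then stitch
    # the results back; assumes process_tile preserves the tile shape
    output_frame = vision_frame.copy()
    height, width = vision_frame.shape[:2]
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tile = vision_frame[y:y + tile_size, x:x + tile_size]
            output_frame[y:y + tile_size, x:x + tile_size] = process_tile(tile)
    return output_frame
```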

* We don't have enough models yet

* Feat/face landmarker score (#413)

* Introduce face landmarker score

* Fix testing

* Fix testing

* Use release for score related sliders

* Reduce face landmark fallbacks

* Scores and landmarks in Face dict, Change color-theme in face debugger

* Scores and landmarks in Face dict, Change color-theme in face debugger

* Fix some naming

* Add 8K support (for whatever reason)

* Fix testing

* Using get() for face.landmarks

* Introduce statistics

* More statistics

* Limit the histogram equalization

* Enable queue() for default layout

* Improve copy_image()

* Fix error when switching detector model

* Always set UI values with globals if possible

* Use different logic for output image and output video resolutions

* Enforce re-download if file size is off
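
A hedged sketch of the size check behind "Enforce re-download if file size is off": compare the local file size with the remote Content-Length and trigger a re-download on mismatch (the helper name and URL handling are illustrative, not the project's download module):

```python
import os
import urllib.request

def is_download_valid(url : str, file_path : str) -> bool:
    # a file that is missing or whose size differs from the remote size is re-downloaded
    if not os.path.isfile(file_path):
        return False
    request = urllib.request.Request(url, method = 'HEAD')
    with urllib.request.urlopen(request) as response:
        content_length = int(response.headers.get('Content-Length', 0))
    return content_length > 0 and os.path.getsize(file_path) == content_length
```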

* Remove unused method

* Remove unused method

* Remove unused warning filter

* Improved output path normalization (#419)

* Handle some exceptions

* Handle some exceptions

* Cleanup

* Prevent countless thread locks

* Listen to user feedback

* Fix webp edge case

* Feat/cuda device detection (#424)

* Introduce cuda device detection

* Introduce cuda device detection

* It's GTX

* Move logic to run_nvidia_smi()
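
A minimal sketch of CUDA device detection via nvidia-smi; the flags used are standard nvidia-smi options, though the project's run_nvidia_smi() may query and parse different fields:

```python
import subprocess
from typing import List

def detect_nvidia_device_names() -> List[str]:
    # returns the GPU names reported by nvidia-smi, or an empty list when unavailable
    try:
        output = subprocess.check_output([ 'nvidia-smi', '--query-gpu=name', '--format=csv,noheader' ], text = True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return [ line.strip() for line in output.splitlines() if line.strip() ]
```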

* Finalize execution device naming

* Finalize execution device naming

* Merge execution_helper.py to execution.py

* Undo lowercase of values

* Undo lowercase of values

* Finalize naming

* Add missing entry to ini

* fix lip_syncer preview (#426)

* fix lip_syncer preview

* change

* Refresh preview on trim changes

* Cleanup frame enhancers and remove useless scale in merge_video() (#428)

* Keep lips over the whole video once lip syncer is enabled (#430)

* Keep lips over the whole video once lip syncer is enabled

* changes

* changes

* Fix spacing

* Use empty audio frame on silence

* Use empty audio frame on silence

* Fix ConfigParser encoding (#431)

facefusion.ini is UTF-8 encoded, but config.py does not specify an encoding, which results in corrupted entries when non-English characters are used (see the sketch below).

Affected entries:
source_paths
target_path
output_path
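
A minimal sketch of the fix: pass an explicit encoding when reading the ini so the affected path entries survive intact (the section name used here is a placeholder):

```python
from configparser import ConfigParser

config = ConfigParser()
# without encoding = 'utf-8', ConfigParser falls back to the platform default
# and non-English characters in the path entries get corrupted
config.read('facefusion.ini', encoding = 'utf-8')
target_path = config.get('general', 'target_path', fallback = None)  # 'general' is an assumed section name
```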

* Adjust spacing

* Improve the GTX 16 series detection

* Use general exception to catch ParseError

* Use general exception to catch ParseError

* Host frame enhancer models

* Use latest onnxruntime

* Minor changes in benchmark UI

* Different approach to cancel ffmpeg process

* Add support for amd amf encoders (#433)

* Add amd_amf encoders

* remove -rc cqp from amf encoder parameters

* Improve terminal output, move success messages to debug mode

* Improve terminal output, move success messages to debug mode

* Minor update

* Minor update

* onnxruntime 1.17.1 matches CUDA 12.2

* Feat/improved scaling (#435)

* Prevent useless temp upscaling, Show resolution and fps in terminal output

* Remove temp frame quality

* Remove temp frame quality

* Tiny cleanup

* Default back to png for temp frames, Remove pix_fmt from frame extraction due to mjpeg error
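
A hedged sketch of what the frame extraction command roughly looks like after this change: png temp frames and no forced pix_fmt (the paths, output pattern and fps filter are placeholders, not the project's exact ffmpeg wrapper):

```python
import subprocess

def extract_frames(target_path : str, temp_directory_path : str, video_fps : float) -> None:
    # illustrative ffmpeg invocation only; note the absence of '-pix_fmt'
    commands = [ 'ffmpeg', '-i', target_path, '-vf', 'fps=' + str(video_fps), temp_directory_path + '/%04d.png' ]
    subprocess.run(commands, check = True)
```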

* Fix inswapper fallback by onnxruntime

* Fix inswapper fallback by major onnxruntime version

* Fix inswapper fallback by major onnxruntime version

* Add testing for vision restrict methods

* Fix left / right face mask regions, add left-ear and right-ear

* Flip right and left again

* Undo ears - does not work with box mask

* Prepare next release

* Fix spacing

* 100% quality when using jpg for temp frames

* Use span_kendata_x4 as default because of its speed

* benchmark optimal tile and pad

* Undo commented out code

* Add real_esrgan_x4_fp16 model

* Be strict when using many face detectors

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: aldemoth <159712934+aldemoth@users.noreply.github.com>

Author: Henry Ruhs
Date: 2024-03-14 19:56:54 +01:00
Committed by: GitHub
Parent: dd2193cf39
Commit: 7609df6747
60 changed files with 1322 additions and 624 deletions

@@ -1,17 +1,16 @@
from typing import Any, Optional, List, Dict, Generator
import time
from time import sleep, perf_counter
import tempfile
import statistics
import gradio
import facefusion.globals
from facefusion import wording
from facefusion import process_manager, wording
from facefusion.face_store import clear_static_faces
from facefusion.processors.frame.core import get_frame_processors_modules
from facefusion.vision import count_video_frame_total, detect_video_resolution, detect_video_fps, pack_resolution
from facefusion.core import conditional_process
from facefusion.memory import limit_system_memory
from facefusion.normalizer import normalize_output_path
from facefusion.filesystem import clear_temp
from facefusion.uis.core import get_ui_component
@@ -70,6 +69,7 @@ def render() -> None:
def listen() -> None:
benchmark_runs_checkbox_group = get_ui_component('benchmark_runs_checkbox_group')
benchmark_cycles_slider = get_ui_component('benchmark_cycles_slider')
if benchmark_runs_checkbox_group and benchmark_cycles_slider:
BENCHMARK_START_BUTTON.click(start, inputs = [ benchmark_runs_checkbox_group, benchmark_cycles_slider ], outputs = BENCHMARK_RESULTS_DATAFRAME)
BENCHMARK_CLEAR_BUTTON.click(clear, outputs = BENCHMARK_RESULTS_DATAFRAME)
@@ -77,10 +77,13 @@ def listen() -> None:
def start(benchmark_runs : List[str], benchmark_cycles : int) -> Generator[List[Any], None, None]:
facefusion.globals.source_paths = [ '.assets/examples/source.jpg' ]
facefusion.globals.output_path = tempfile.gettempdir()
facefusion.globals.face_landmarker_score = 0
facefusion.globals.temp_frame_format = 'bmp'
facefusion.globals.output_video_preset = 'ultrafast'
target_paths = [ BENCHMARKS[benchmark_run] for benchmark_run in benchmark_runs if benchmark_run in BENCHMARKS ]
benchmark_results = []
target_paths = [ BENCHMARKS[benchmark_run] for benchmark_run in benchmark_runs if benchmark_run in BENCHMARKS ]
if target_paths:
pre_process()
for target_path in target_paths:
@@ -103,16 +106,16 @@ def post_process() -> None:
def benchmark(target_path : str, benchmark_cycles : int) -> List[Any]:
process_times = []
total_fps = 0.0
facefusion.globals.target_path = target_path
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
facefusion.globals.output_video_resolution = pack_resolution(output_video_resolution)
facefusion.globals.output_video_fps = detect_video_fps(facefusion.globals.target_path)
for index in range(benchmark_cycles):
facefusion.globals.target_path = target_path
facefusion.globals.output_path = normalize_output_path(facefusion.globals.source_paths, facefusion.globals.target_path, tempfile.gettempdir())
target_video_resolution = detect_video_resolution(facefusion.globals.target_path)
facefusion.globals.output_video_resolution = pack_resolution(target_video_resolution)
facefusion.globals.output_video_fps = detect_video_fps(facefusion.globals.target_path)
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
start_time = time.perf_counter()
start_time = perf_counter()
conditional_process()
end_time = time.perf_counter()
end_time = perf_counter()
process_time = end_time - start_time
total_fps += video_frame_total / process_time
process_times.append(process_time)
@@ -132,6 +135,8 @@ def benchmark(target_path : str, benchmark_cycles : int) -> List[Any]:
def clear() -> gradio.Dataframe:
while process_manager.is_processing():
sleep(0.5)
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
return gradio.Dataframe(value = None)

@@ -6,7 +6,7 @@ import facefusion.globals
from facefusion import wording
from facefusion.face_analyser import clear_face_analyser
from facefusion.processors.frame.core import clear_frame_processors_modules
from facefusion.execution_helper import encode_execution_providers, decode_execution_providers
from facefusion.execution import encode_execution_providers, decode_execution_providers
EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -28,7 +28,6 @@ def listen() -> None:
def update_execution_providers(execution_providers : List[str]) -> gradio.CheckboxGroup:
clear_face_analyser()
clear_frame_processors_modules()
if not execution_providers:
execution_providers = encode_execution_providers(onnxruntime.get_available_providers())
execution_providers = execution_providers or encode_execution_providers(onnxruntime.get_available_providers())
facefusion.globals.execution_providers = decode_execution_providers(execution_providers)
return gradio.CheckboxGroup(value = execution_providers)

@@ -11,18 +11,20 @@ from facefusion.uis.core import register_ui_component
FACE_ANALYSER_ORDER_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ANALYSER_AGE_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ANALYSER_GENDER_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_SCORE_SLIDER : Optional[gradio.Slider] = None
FACE_DETECTOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_LANDMARKER_SCORE_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FACE_ANALYSER_ORDER_DROPDOWN
global FACE_ANALYSER_AGE_DROPDOWN
global FACE_ANALYSER_GENDER_DROPDOWN
global FACE_DETECTOR_MODEL_DROPDOWN
global FACE_DETECTOR_SIZE_DROPDOWN
global FACE_DETECTOR_SCORE_SLIDER
global FACE_DETECTOR_MODEL_DROPDOWN
global FACE_LANDMARKER_SCORE_SLIDER
face_detector_size_dropdown_args : Dict[str, Any] =\
{
@@ -53,19 +55,28 @@ def render() -> None:
value = facefusion.globals.face_detector_model
)
FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_args)
FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_detector_score_slider'),
value = facefusion.globals.face_detector_score,
step = facefusion.choices.face_detector_score_range[1] - facefusion.choices.face_detector_score_range[0],
minimum = facefusion.choices.face_detector_score_range[0],
maximum = facefusion.choices.face_detector_score_range[-1]
)
with gradio.Row():
FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_detector_score_slider'),
value = facefusion.globals.face_detector_score,
step = facefusion.choices.face_detector_score_range[1] - facefusion.choices.face_detector_score_range[0],
minimum = facefusion.choices.face_detector_score_range[0],
maximum = facefusion.choices.face_detector_score_range[-1]
)
FACE_LANDMARKER_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_landmarker_score_slider'),
value = facefusion.globals.face_landmarker_score,
step = facefusion.choices.face_landmarker_score_range[1] - facefusion.choices.face_landmarker_score_range[0],
minimum = facefusion.choices.face_landmarker_score_range[0],
maximum = facefusion.choices.face_landmarker_score_range[-1]
)
register_ui_component('face_analyser_order_dropdown', FACE_ANALYSER_ORDER_DROPDOWN)
register_ui_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN)
register_ui_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN)
register_ui_component('face_detector_model_dropdown', FACE_DETECTOR_MODEL_DROPDOWN)
register_ui_component('face_detector_size_dropdown', FACE_DETECTOR_SIZE_DROPDOWN)
register_ui_component('face_detector_score_slider', FACE_DETECTOR_SCORE_SLIDER)
register_ui_component('face_landmarker_score_slider', FACE_LANDMARKER_SCORE_SLIDER)
def listen() -> None:
@@ -74,7 +85,8 @@ def listen() -> None:
FACE_ANALYSER_GENDER_DROPDOWN.change(update_face_analyser_gender, inputs = FACE_ANALYSER_GENDER_DROPDOWN)
FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN, outputs = FACE_DETECTOR_SIZE_DROPDOWN)
FACE_DETECTOR_SIZE_DROPDOWN.change(update_face_detector_size, inputs = FACE_DETECTOR_SIZE_DROPDOWN)
FACE_DETECTOR_SCORE_SLIDER.change(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)
FACE_DETECTOR_SCORE_SLIDER.release(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)
FACE_LANDMARKER_SCORE_SLIDER.release(update_face_landmarker_score, inputs = FACE_LANDMARKER_SCORE_SLIDER)
def update_face_analyser_order(face_analyser_order : FaceAnalyserOrder) -> None:
@@ -91,9 +103,10 @@ def update_face_analyser_gender(face_analyser_gender : FaceAnalyserGender) -> No
def update_face_detector_model(face_detector_model : FaceDetectorModel) -> gradio.Dropdown:
facefusion.globals.face_detector_model = face_detector_model
facefusion.globals.face_detector_size = '640x640'
if facefusion.globals.face_detector_size in facefusion.choices.face_detector_set[face_detector_model]:
return gradio.Dropdown(value = '640x640', choices = facefusion.choices.face_detector_set[face_detector_model])
return gradio.Dropdown(value = '640x640', choices = [ '640x640' ])
return gradio.Dropdown(value = facefusion.globals.face_detector_size, choices = facefusion.choices.face_detector_set[face_detector_model])
return gradio.Dropdown(value = facefusion.globals.face_detector_size, choices = [ facefusion.globals.face_detector_size ])
def update_face_detector_size(face_detector_size : str) -> None:
@@ -102,3 +115,7 @@ def update_face_detector_size(face_detector_size : str) -> None:
def update_face_detector_score(face_detector_score : float) -> None:
facefusion.globals.face_detector_score = face_detector_score
def update_face_landmarker_score(face_landmarker_score : float) -> None:
facefusion.globals.face_landmarker_score = face_landmarker_score

@@ -100,12 +100,10 @@ def listen() -> None:
def update_face_mask_type(face_mask_types : List[FaceMaskType]) -> Tuple[gradio.CheckboxGroup, gradio.Group, gradio.CheckboxGroup]:
if not face_mask_types:
face_mask_types = facefusion.choices.face_mask_types
facefusion.globals.face_mask_types = face_mask_types
facefusion.globals.face_mask_types = face_mask_types or facefusion.choices.face_mask_types
has_box_mask = 'box' in face_mask_types
has_region_mask = 'region' in face_mask_types
return gradio.CheckboxGroup(value = face_mask_types), gradio.Group(visible = has_box_mask), gradio.CheckboxGroup(visible = has_region_mask)
return gradio.CheckboxGroup(value = facefusion.globals.face_mask_types), gradio.Group(visible = has_box_mask), gradio.CheckboxGroup(visible = has_region_mask)
def update_face_mask_blur(face_mask_blur : float) -> None:
@@ -117,7 +115,5 @@ def update_face_mask_padding(face_mask_padding_top : int, face_mask_padding_righ
def update_face_mask_regions(face_mask_regions : List[FaceMaskRegion]) -> gradio.CheckboxGroup:
if not face_mask_regions:
face_mask_regions = facefusion.choices.face_mask_regions
facefusion.globals.face_mask_regions = face_mask_regions
return gradio.CheckboxGroup(value = face_mask_regions)
facefusion.globals.face_mask_regions = face_mask_regions or facefusion.choices.face_mask_regions
return gradio.CheckboxGroup(value = facefusion.globals.face_mask_regions)

@@ -23,7 +23,7 @@ def render() -> None:
global REFERENCE_FACE_POSITION_GALLERY
global REFERENCE_FACE_DISTANCE_SLIDER
reference_face_gallery_args: Dict[str, Any] =\
reference_face_gallery_args : Dict[str, Any] =\
{
'label': wording.get('uis.reference_face_gallery'),
'object_fit': 'cover',
@@ -85,7 +85,8 @@ def listen() -> None:
[
'face_detector_model_dropdown',
'face_detector_size_dropdown',
'face_detector_score_slider'
'face_detector_score_slider',
'face_landmarker_score_slider'
]
for component_name in change_two_component_names:
component = get_ui_component(component_name)
@@ -98,15 +99,15 @@ def listen() -> None:
def update_face_selector_mode(face_selector_mode : FaceSelectorMode) -> Tuple[gradio.Gallery, gradio.Slider]:
if face_selector_mode == 'reference':
facefusion.globals.face_selector_mode = face_selector_mode
return gradio.Gallery(visible = True), gradio.Slider(visible = True)
if face_selector_mode == 'one':
facefusion.globals.face_selector_mode = face_selector_mode
return gradio.Gallery(visible = False), gradio.Slider(visible = False)
if face_selector_mode == 'many':
facefusion.globals.face_selector_mode = face_selector_mode
return gradio.Gallery(visible = False), gradio.Slider(visible = False)
if face_selector_mode == 'one':
facefusion.globals.face_selector_mode = face_selector_mode
return gradio.Gallery(visible = False), gradio.Slider(visible = False)
if face_selector_mode == 'reference':
facefusion.globals.face_selector_mode = face_selector_mode
return gradio.Gallery(visible = True), gradio.Slider(visible = True)
def clear_and_update_reference_face_position(event : gradio.SelectData) -> gradio.Gallery:

@@ -32,7 +32,7 @@ def update_frame_processors(frame_processors : List[str]) -> gradio.CheckboxGrou
frame_processor_module = load_frame_processor_module(frame_processor)
if not frame_processor_module.pre_check():
return gradio.CheckboxGroup()
return gradio.CheckboxGroup(value = frame_processors, choices = sort_frame_processors(frame_processors))
return gradio.CheckboxGroup(value = facefusion.globals.frame_processors, choices = sort_frame_processors(facefusion.globals.frame_processors))
def sort_frame_processors(frame_processors : List[str]) -> list[str]:

@@ -113,7 +113,7 @@ def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradi
face_enhancer_module.clear_frame_processor()
face_enhancer_module.set_options('model', face_enhancer_module.MODELS[face_enhancer_model])
if face_enhancer_module.pre_check():
return gradio.Dropdown(value = face_enhancer_model)
return gradio.Dropdown(value = frame_processors_globals.face_enhancer_model)
return gradio.Dropdown()
@@ -135,7 +135,7 @@ def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> gradio.D
face_swapper_module.clear_frame_processor()
face_swapper_module.set_options('model', face_swapper_module.MODELS[face_swapper_model])
if face_swapper_module.pre_check():
return gradio.Dropdown(value = face_swapper_model)
return gradio.Dropdown(value = frame_processors_globals.face_swapper_model)
return gradio.Dropdown()
@@ -145,7 +145,7 @@ def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gr
frame_enhancer_module.clear_frame_processor()
frame_enhancer_module.set_options('model', frame_enhancer_module.MODELS[frame_enhancer_model])
if frame_enhancer_module.pre_check():
return gradio.Dropdown(value = frame_enhancer_model)
return gradio.Dropdown(value = frame_processors_globals.frame_enhancer_model)
return gradio.Dropdown()
@@ -159,5 +159,5 @@ def update_lip_syncer_model(lip_syncer_model : LipSyncerModel) -> gradio.Dropdow
lip_syncer_module.clear_frame_processor()
lip_syncer_module.set_options('model', lip_syncer_module.MODELS[lip_syncer_model])
if lip_syncer_module.pre_check():
return gradio.Dropdown(value = lip_syncer_model)
return gradio.Dropdown(value = frame_processors_globals.lip_syncer_model)
return gradio.Dropdown()

@@ -1,24 +1,27 @@
from typing import Tuple, Optional
from time import sleep
import gradio
import facefusion.globals
from facefusion import wording
from facefusion import process_manager, wording
from facefusion.core import conditional_process
from facefusion.memory import limit_system_memory
from facefusion.uis.core import get_ui_component
from facefusion.normalizer import normalize_output_path
from facefusion.uis.core import get_ui_component
from facefusion.filesystem import clear_temp, is_image, is_video
OUTPUT_IMAGE : Optional[gradio.Image] = None
OUTPUT_VIDEO : Optional[gradio.Video] = None
OUTPUT_START_BUTTON : Optional[gradio.Button] = None
OUTPUT_CLEAR_BUTTON : Optional[gradio.Button] = None
OUTPUT_STOP_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global OUTPUT_IMAGE
global OUTPUT_VIDEO
global OUTPUT_START_BUTTON
global OUTPUT_STOP_BUTTON
global OUTPUT_CLEAR_BUTTON
OUTPUT_IMAGE = gradio.Image(
@@ -33,6 +36,12 @@ def render() -> None:
variant = 'primary',
size = 'sm'
)
OUTPUT_STOP_BUTTON = gradio.Button(
value = wording.get('uis.stop_button'),
variant = 'primary',
size = 'sm',
visible = False
)
OUTPUT_CLEAR_BUTTON = gradio.Button(
value = wording.get('uis.clear_button'),
size = 'sm'
@@ -42,23 +51,38 @@ def render() -> None:
def listen() -> None:
output_path_textbox = get_ui_component('output_path_textbox')
if output_path_textbox:
OUTPUT_START_BUTTON.click(start, inputs = output_path_textbox, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
OUTPUT_START_BUTTON.click(start, outputs = [ OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
OUTPUT_START_BUTTON.click(process, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO, OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
OUTPUT_STOP_BUTTON.click(stop, outputs = [ OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
OUTPUT_CLEAR_BUTTON.click(clear, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
def start(output_path : str) -> Tuple[gradio.Image, gradio.Video]:
facefusion.globals.output_path = normalize_output_path(facefusion.globals.source_paths, facefusion.globals.target_path, output_path)
def start() -> Tuple[gradio.Button, gradio.Button]:
while not process_manager.is_processing():
sleep(0.5)
return gradio.Button(visible = False), gradio.Button(visible = True)
def process() -> Tuple[gradio.Image, gradio.Video, gradio.Button, gradio.Button]:
normed_output_path = normalize_output_path(facefusion.globals.target_path, facefusion.globals.output_path)
if facefusion.globals.system_memory_limit > 0:
limit_system_memory(facefusion.globals.system_memory_limit)
conditional_process()
if is_image(facefusion.globals.output_path):
return gradio.Image(value = facefusion.globals.output_path, visible = True), gradio.Video(value = None, visible = False)
if is_video(facefusion.globals.output_path):
return gradio.Image(value = None, visible = False), gradio.Video(value = facefusion.globals.output_path, visible = True)
return gradio.Image(), gradio.Video()
if is_image(normed_output_path):
return gradio.Image(value = normed_output_path, visible = True), gradio.Video(value = None, visible = False), gradio.Button(visible = True), gradio.Button(visible = False)
if is_video(normed_output_path):
return gradio.Image(value = None, visible = False), gradio.Video(value = normed_output_path, visible = True), gradio.Button(visible = True), gradio.Button(visible = False)
return gradio.Image(value = None), gradio.Video(value = None), gradio.Button(visible = True), gradio.Button(visible = False)
def stop() -> Tuple[gradio.Button, gradio.Button]:
process_manager.stop()
return gradio.Button(visible = True), gradio.Button(visible = False)
def clear() -> Tuple[gradio.Image, gradio.Video]:
while process_manager.is_processing():
sleep(0.5)
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
return gradio.Image(value = None), gradio.Video(value = None)

@@ -1,5 +1,4 @@
from typing import Optional, Tuple, List
import tempfile
import gradio
import facefusion.globals
@@ -9,10 +8,11 @@ from facefusion.typing import OutputVideoEncoder, OutputVideoPreset, Fps
from facefusion.filesystem import is_image, is_video
from facefusion.uis.typing import ComponentName
from facefusion.uis.core import get_ui_component, register_ui_component
from facefusion.vision import detect_video_fps, create_video_resolutions, detect_video_resolution, pack_resolution
from facefusion.vision import detect_image_resolution, create_image_resolutions, detect_video_fps, detect_video_resolution, create_video_resolutions, pack_resolution
OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_IMAGE_QUALITY_SLIDER : Optional[gradio.Slider] = None
OUTPUT_IMAGE_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_PRESET_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -23,15 +23,25 @@ OUTPUT_VIDEO_FPS_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global OUTPUT_PATH_TEXTBOX
global OUTPUT_IMAGE_QUALITY_SLIDER
global OUTPUT_IMAGE_RESOLUTION_DROPDOWN
global OUTPUT_VIDEO_ENCODER_DROPDOWN
global OUTPUT_VIDEO_PRESET_DROPDOWN
global OUTPUT_VIDEO_RESOLUTION_DROPDOWN
global OUTPUT_VIDEO_QUALITY_SLIDER
global OUTPUT_VIDEO_FPS_SLIDER
output_image_resolutions = []
output_video_resolutions = []
if is_image(facefusion.globals.target_path):
output_image_resolution = detect_image_resolution(facefusion.globals.target_path)
output_image_resolutions = create_image_resolutions(output_image_resolution)
if is_video(facefusion.globals.target_path):
output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
output_video_resolutions = create_video_resolutions(output_video_resolution)
facefusion.globals.output_path = facefusion.globals.output_path or '.'
OUTPUT_PATH_TEXTBOX = gradio.Textbox(
label = wording.get('uis.output_path_textbox'),
value = facefusion.globals.output_path or tempfile.gettempdir(),
value = facefusion.globals.output_path,
max_lines = 1
)
OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
@@ -42,6 +52,12 @@ def render() -> None:
maximum = facefusion.choices.output_image_quality_range[-1],
visible = is_image(facefusion.globals.target_path)
)
OUTPUT_IMAGE_RESOLUTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_image_resolution_dropdown'),
choices = output_image_resolutions,
value = facefusion.globals.output_image_resolution,
visible = is_image(facefusion.globals.target_path)
)
OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_video_encoder_dropdown'),
choices = facefusion.choices.output_video_encoders,
@@ -64,7 +80,7 @@ def render() -> None:
)
OUTPUT_VIDEO_RESOLUTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_video_resolution_dropdown'),
choices = create_video_resolutions(facefusion.globals.target_path),
choices = output_video_resolutions,
value = facefusion.globals.output_video_resolution,
visible = is_video(facefusion.globals.target_path)
)
@@ -83,6 +99,7 @@ def render() -> None:
def listen() -> None:
OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX)
OUTPUT_IMAGE_QUALITY_SLIDER.change(update_output_image_quality, inputs = OUTPUT_IMAGE_QUALITY_SLIDER)
OUTPUT_IMAGE_RESOLUTION_DROPDOWN.change(update_output_image_resolution, inputs = OUTPUT_IMAGE_RESOLUTION_DROPDOWN)
OUTPUT_VIDEO_ENCODER_DROPDOWN.change(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN)
OUTPUT_VIDEO_PRESET_DROPDOWN.change(update_output_video_preset, inputs = OUTPUT_VIDEO_PRESET_DROPDOWN)
OUTPUT_VIDEO_QUALITY_SLIDER.change(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER)
@@ -97,19 +114,22 @@ def listen() -> None:
component = get_ui_component(component_name)
if component:
for method in [ 'upload', 'change', 'clear' ]:
getattr(component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_PRESET_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER, OUTPUT_VIDEO_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_FPS_SLIDER ])
getattr(component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_IMAGE_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_PRESET_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER, OUTPUT_VIDEO_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_FPS_SLIDER ])
def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
if is_image(facefusion.globals.target_path):
return gradio.Slider(visible = True), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
output_image_resolution = detect_image_resolution(facefusion.globals.target_path)
output_image_resolutions = create_image_resolutions(output_image_resolution)
facefusion.globals.output_image_resolution = pack_resolution(output_image_resolution)
return gradio.Slider(visible = True), gradio.Dropdown(visible = True, value = facefusion.globals.output_image_resolution, choices = output_image_resolutions), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
if is_video(facefusion.globals.target_path):
target_video_resolution = detect_video_resolution(facefusion.globals.target_path)
output_video_resolution = pack_resolution(target_video_resolution)
output_video_resolutions = create_video_resolutions(facefusion.globals.target_path)
output_video_fps = detect_video_fps(facefusion.globals.target_path)
return gradio.Slider(visible = False), gradio.Dropdown(visible = True), gradio.Dropdown(visible = True), gradio.Slider(visible = True), gradio.Dropdown(visible = True, value = output_video_resolution, choices = output_video_resolutions), gradio.Slider(visible = True, value = output_video_fps)
return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
output_video_resolutions = create_video_resolutions(output_video_resolution)
facefusion.globals.output_video_resolution = pack_resolution(output_video_resolution)
facefusion.globals.output_video_fps = detect_video_fps(facefusion.globals.target_path)
return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = True), gradio.Dropdown(visible = True), gradio.Slider(visible = True), gradio.Dropdown(visible = True, value = facefusion.globals.output_video_resolution, choices = output_video_resolutions), gradio.Slider(visible = True, value = facefusion.globals.output_video_fps)
return gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
def update_output_path(output_path : str) -> None:
@@ -120,6 +140,10 @@ def update_output_image_quality(output_image_quality : int) -> None:
facefusion.globals.output_image_quality = output_image_quality
def update_output_image_resolution(output_image_resolution : str) -> None:
facefusion.globals.output_image_resolution = output_image_resolution
def update_output_video_encoder(output_video_encoder: OutputVideoEncoder) -> None:
facefusion.globals.output_video_encoder = output_video_encoder

@@ -2,10 +2,11 @@ from typing import Any, Dict, List, Optional
from time import sleep
import cv2
import gradio
import numpy
import facefusion.globals
from facefusion import wording, logger
from facefusion.audio import get_audio_frame
from facefusion.audio import get_audio_frame, create_empty_audio_frame
from facefusion.common_helper import get_first
from facefusion.core import conditional_append_reference_faces
from facefusion.face_analyser import get_average_face, clear_face_analyser
@@ -26,12 +27,12 @@ def render() -> None:
global PREVIEW_IMAGE
global PREVIEW_FRAME_SLIDER
preview_image_args: Dict[str, Any] =\
preview_image_args : Dict[str, Any] =\
{
'label': wording.get('uis.preview_image'),
'interactive': False
}
preview_frame_slider_args: Dict[str, Any] =\
preview_frame_slider_args : Dict[str, Any] =\
{
'label': wording.get('uis.preview_frame_slider'),
'step': 1,
@@ -46,6 +47,8 @@ def render() -> None:
source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
if source_audio_path and facefusion.globals.output_video_fps:
source_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, facefusion.globals.reference_frame_number)
if not numpy.any(source_audio_frame):
source_audio_frame = create_empty_audio_frame()
else:
source_audio_frame = None
if is_image(facefusion.globals.target_path):
@@ -97,6 +100,8 @@ def listen() -> None:
'face_debugger_items_checkbox_group',
'face_enhancer_blend_slider',
'frame_enhancer_blend_slider',
'trim_frame_start_slider',
'trim_frame_end_slider',
'face_selector_mode_dropdown',
'reference_face_distance_slider',
'face_mask_types_checkbox_group',
@@ -124,7 +129,8 @@ def listen() -> None:
'lip_syncer_model_dropdown',
'face_detector_model_dropdown',
'face_detector_size_dropdown',
'face_detector_score_slider'
'face_detector_score_slider',
'face_landmarker_score_slider'
]
for component_name in change_two_component_names:
component = get_ui_component(component_name)
@@ -153,10 +159,14 @@ def update_preview_image(frame_number : int = 0) -> gradio.Image:
source_face = get_average_face(source_frames)
source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
if source_audio_path and facefusion.globals.output_video_fps:
source_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, facefusion.globals.reference_frame_number)
reference_audio_frame_number = facefusion.globals.reference_frame_number
if facefusion.globals.trim_frame_start:
reference_audio_frame_number -= facefusion.globals.trim_frame_start
source_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, reference_audio_frame_number)
if not numpy.any(source_audio_frame):
source_audio_frame = create_empty_audio_frame()
else:
source_audio_frame = None
if is_image(facefusion.globals.target_path):
target_vision_frame = read_static_image(facefusion.globals.target_path)
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
@@ -178,7 +188,7 @@ def update_preview_frame_slider() -> gradio.Slider:
def process_preview_frame(reference_faces : FaceSet, source_face : Face, source_audio_frame : AudioFrame, target_vision_frame : VisionFrame) -> VisionFrame:
target_vision_frame = resize_frame_resolution(target_vision_frame, 640, 640)
target_vision_frame = resize_frame_resolution(target_vision_frame, (640, 640))
if analyse_frame(target_vision_frame):
return cv2.GaussianBlur(target_vision_frame, (99, 99), 0)
for frame_processor in facefusion.globals.frame_processors:

@@ -1,4 +1,4 @@
from typing import Optional, Tuple
from typing import Optional
import gradio
import facefusion.globals
@@ -9,12 +9,10 @@ from facefusion.filesystem import is_video
from facefusion.uis.core import get_ui_component
TEMP_FRAME_FORMAT_DROPDOWN : Optional[gradio.Dropdown] = None
TEMP_FRAME_QUALITY_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global TEMP_FRAME_FORMAT_DROPDOWN
global TEMP_FRAME_QUALITY_SLIDER
TEMP_FRAME_FORMAT_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.temp_frame_format_dropdown'),
@@ -22,34 +20,22 @@ def render() -> None:
value = facefusion.globals.temp_frame_format,
visible = is_video(facefusion.globals.target_path)
)
TEMP_FRAME_QUALITY_SLIDER = gradio.Slider(
label = wording.get('uis.temp_frame_quality_slider'),
value = facefusion.globals.temp_frame_quality,
step = facefusion.choices.temp_frame_quality_range[1] - facefusion.choices.temp_frame_quality_range[0],
minimum = facefusion.choices.temp_frame_quality_range[0],
maximum = facefusion.choices.temp_frame_quality_range[-1],
visible = is_video(facefusion.globals.target_path)
)
def listen() -> None:
TEMP_FRAME_FORMAT_DROPDOWN.change(update_temp_frame_format, inputs = TEMP_FRAME_FORMAT_DROPDOWN)
TEMP_FRAME_QUALITY_SLIDER.change(update_temp_frame_quality, inputs = TEMP_FRAME_QUALITY_SLIDER)
target_video = get_ui_component('target_video')
if target_video:
for method in [ 'upload', 'change', 'clear' ]:
getattr(target_video, method)(remote_update, outputs = [ TEMP_FRAME_FORMAT_DROPDOWN, TEMP_FRAME_QUALITY_SLIDER ])
getattr(target_video, method)(remote_update, outputs = TEMP_FRAME_FORMAT_DROPDOWN)
def remote_update() -> Tuple[gradio.Dropdown, gradio.Slider]:
def remote_update() -> gradio.Dropdown:
if is_video(facefusion.globals.target_path):
return gradio.Dropdown(visible = True), gradio.Slider(visible = True)
return gradio.Dropdown(visible = False), gradio.Slider(visible = False)
return gradio.Dropdown(visible = True)
return gradio.Dropdown(visible = False)
def update_temp_frame_format(temp_frame_format : TempFrameFormat) -> None:
facefusion.globals.temp_frame_format = temp_frame_format
def update_temp_frame_quality(temp_frame_quality : int) -> None:
facefusion.globals.temp_frame_quality = temp_frame_quality

@@ -5,7 +5,7 @@ import facefusion.globals
from facefusion import wording
from facefusion.vision import count_video_frame_total
from facefusion.filesystem import is_video
from facefusion.uis.core import get_ui_component
from facefusion.uis.core import get_ui_component, register_ui_component
TRIM_FRAME_START_SLIDER : Optional[gradio.Slider] = None
TRIM_FRAME_END_SLIDER : Optional[gradio.Slider] = None
@@ -42,6 +42,8 @@ def render() -> None:
with gradio.Row():
TRIM_FRAME_START_SLIDER = gradio.Slider(**trim_frame_start_slider_args)
TRIM_FRAME_END_SLIDER = gradio.Slider(**trim_frame_end_slider_args)
register_ui_component('trim_frame_start_slider', TRIM_FRAME_START_SLIDER)
register_ui_component('trim_frame_end_slider', TRIM_FRAME_END_SLIDER)
def listen() -> None:

@@ -11,7 +11,9 @@ from tqdm import tqdm
import facefusion.globals
from facefusion import logger, wording
from facefusion.audio import create_empty_audio_frame
from facefusion.content_analyser import analyse_stream
from facefusion.filesystem import filter_image_paths
from facefusion.typing import VisionFrame, Face, Fps
from facefusion.face_analyser import get_average_face
from facefusion.processors.frame.core import get_frame_processors_modules, load_frame_processor_module
@@ -92,9 +94,11 @@ def listen() -> None:
def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
facefusion.globals.face_selector_mode = 'one'
facefusion.globals.face_analyser_order = 'large-small'
source_frames = read_static_images(facefusion.globals.source_paths)
source_image_paths = filter_image_paths(facefusion.globals.source_paths)
source_frames = read_static_images(source_image_paths)
source_face = get_average_face(source_frames)
stream = None
if webcam_mode in [ 'udp', 'v4l2' ]:
stream = open_stream(webcam_mode, webcam_resolution, webcam_fps) # type: ignore[arg-type]
webcam_width, webcam_height = unpack_resolution(webcam_resolution)
@@ -150,6 +154,7 @@ def stop() -> gradio.Image:
def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame) -> VisionFrame:
source_audio_frame = create_empty_audio_frame()
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
logger.disable()
if frame_processor_module.pre_process('stream'):
@@ -157,8 +162,7 @@ def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame)
target_vision_frame = frame_processor_module.process_frame(
{
'source_face': source_face,
'reference_faces': None,
'source_audio_frame': None,
'source_audio_frame': source_audio_frame,
'target_vision_frame': target_vision_frame
})
return target_vision_frame