* Cleanup after age modifier PR

* Cleanup after age modifier PR

* Use OpenVINO 2024.2.0 for installer

* Prepare 3.0.0 for installer

* Fix benchmark suite, Introduce sync_item() for state manager

* Fix lint

* Render slide preview also in lower res

* Lower thread and queue count to avoid false usage

* Fix spacing

* Feat/jobs UI (#627)

* Jobs UI part1

* Change naming

* Jobs UI part2

* Jobs UI part3

* Jobs UI part4

* Jobs UI part4

* Jobs UI part5

* Jobs UI part6

* Jobs UI part7

* Jobs UI part8

* Jobs UI part9

* Jobs UI part10

* Jobs UI part11

* Jobs UI part12

* Fix rebase

* Jobs UI part13

* Jobs UI part14

* Jobs UI part15

* changes (#626)

* Remove useless ui registration

* Remove useless ui registration

* move job_list.py
replace [0] with get_first()

* optimize imports

* fix date None problem
add test job list

* Jobs UI part16

* Jobs UI part17

* Jobs UI part18

* Jobs UI part19

* Jobs UI part20

* Jobs UI part21

* Jobs UI part22

* move job_list_options

* Add label to job status checkbox group

* changes

* changes

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Update some dependencies

* UI helper to convert 'none'

* validate job (#628)

* changes

* changes

* add test

* changes

* changes

* Minor adjustments

* Replace is_json with is_file

* Handle empty and invalid json in job_list

* Handle empty and invalid json in job_list

* Handle empty and invalid json in job_list

* Work on the job manager UI

* Cosmetic changes on common helper

* Just make it work for now

* Just make it work for now

* Just make it work for now

* Streamline the step index lookups

* Hide footer

* Simplify instant runner

* Simplify instant runner UI and job manager UI

* Fix empty step choices

* Fix empty step choices

* Fix none values in UI

* Rework on benchmark (add warmup) and job list

* Improve ValueAndUnit

* Add step 1 of x output

* Cosmetic changes on the UI

* Fix invalid job file names

* Update preview

* Introducing has_step() and sorting out insert behaviour

* Introducing has_step() and sorting out insert behaviour

* Add [ none ] to some job id dropdowns

* Make updated dropdown values kinda perfect

* Make updated dropdown values kinda perfect

* Fix testing

* Minor improvement on UI

* Fix false config lookup

* Remove TensorRT as our models are not made for it

* Feat/cli commands second try rev2 (#640)

* Refactor CLI to commands

* Refactor CLI to commands part2

* Refactor CLI to commands part3

* Refactor CLI to commands part4

* Rename everything to facefusion.py

* Refactor CLI to commands part5

* Refactor CLI to commands part6

* Adjust testing

* Fix lint

* Fix lint

* Fix lint

* Refactor CLI to commands part7

* Extend State typing

* Fix false config lookup, adjust logical orders

* Move away from passing program part1

* Move away from passing program part2

* Move away from passing program part3

* Fix lint

* Move away from passing program part4

* ui-args update

* ui-args update

* ui-args update

* temporary type fix

* Move away from passing program part5

* remove unused

* creates args.py

* Move away from passing program part6

* Move away from passing program part7

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Minor optimizations

* Update commands in README

* Fix job-retry command

* Fix multi runs via UI

* add more job keys

* Cleanup codebase

* One method to create inference session (#641)

* One method to create inference session

* Remove warnings, as there are none

* Remember job id during processing

* Fix face masker config block

* Change wording

* Prevent age modifier from using CoreML

* add expression restorer (#642)

* add expression restorer

* fix import

* fix lint

* changes

* changes

* changes

* Host the final model for expression restorer

* Insert step on the given index

* UI workover (#644)

* UI workover part1

* Introduce ComponentOptions

* Only set Media components to None when visibility changes

* Clear static faces and reference faces between step processing

* Minor changes

* Minor changes

* Fix testing

* Enable test_sanitize_path_for_windows (#646)

* Dynamic download during job processing (#647)

* Fix face masker UI

* Rename run-headless to headless-run

* Feat/split frame processor UI (#649)

* Split frame processor UI

* Split frame processor UI part3, Refactor get_model_initializer

* Split frame processor UI part4

* Feat/rename frame processors (#651)

* Rename frame processors

* Rename frame processors part2

* Fix imports

 Conflicts:
	facefusion/uis/layouts/benchmark.py
	facefusion/uis/layouts/default.py

* Fix imports

* Cosmetic changes

* Fix multi threading for ROCm

* Change temp frames pattern

* Adjust terminal help

* remove expression restorer (#653)

* Expression restorer as processor (#655)

* add expression restorer

* changes

* Cleanup code

* Add TensorRT support back

* Add TensorRT support back

* Add TensorRT support back

* changes (#656)

* Change minor wording

* Fix face enhancer slider

* Add more typing

* Fix expression-restorer when using trim (#659)

* changes

* changes

* Rework/model and inference pool part2 (#660)

* Rework on model and inference pool

* Introduce inference sources and pools part1

* Introduce inference sources and pools part2

* Introduce inference sources and pools part3

* Introduce inference sources and pools part4

* Introduce inference sources and pools part5

* Introduce inference sources and pools part6

* Introduce inference sources and pools part6

* Introduce inference sources and pools part6

* Introduce inference sources and pools part7

* Introduce inference sources and pools part7

* Introduce inference sources and pools part8

* Introduce inference sources and pools part9

* Introduce inference sources and pools part10

* Introduce inference sources and pools part11

* Introduce inference sources and pools part11

* Introduce inference sources and pools part11

* Introduce inference sources and pools part12

* Reorganize the face masker UI

* Fix trim in UI

* Feat/hashed sources (#668)

* Introduce source helper

* Remove post_check() and just use process_manager

* Remove post_check() part2

* Add hash based downloads

* Add hash based downloads part2

* Add hash based downloads part3

* Add hash based downloads part4

* Add hash based downloads part5

* Add hash based downloads part6

* Add hash based downloads part7

* Add hash based downloads part7

* Add hash based downloads part8

* Remove print

* Prepare 3.0.0 release

* Fix UI

* Release the check when really done

* Update inputs for live portrait

* Update to 3.0.0 releases, extend download postfix

* Move files to the right place

* Logging for the hash and source validation

* Changing logic to handle corrupt sources

* Fix typo

* Use names over get_inputs(), Remove set_options() call

* Age modifier now works for CoreML too

* Update age_modifier.py

* Add video encoder h264_videotoolbox and hevc_videotoolbox

* Face editor add eye gaze & remove open factor sliders (#670)

* changes

* add eye gaze

* changes

* cleanup

* add eyebrow control

* changes

* changes

* Feat/terminal UI (#671)

* Introduce terminal to the UI

* Introduce terminal to the UI part2

* Introduce terminal to the UI part2

* Introduce terminal to the UI part2

* Calc range step to avoid weird values

* Use Sequence for ranges

* Use Sequence for ranges

* changes (#673)

* Use Sequence for ranges

* Finalize terminal UI

* Finalize terminal UI

* Webcam cosmetics, Fix normalize fps to accept int

* Cosmetic changes

* Finalize terminal UI

* Rename leftover typings

* Fix wording

* Fix rounding in metavar

* Fix rounding in metavar

* Rename to face classifier

* Face editor lip moves (#677)

* changes

* changes

* changes

* Fix rounding in metavar

* Rename to face classifier

* changes

* changes

* update naming

---------

Co-authored-by: henryruhs <info@henryruhs.com>

* Fix wording

* Feat/many landmarker + face analyser breakdown (#678)

* Basic multi landmarker integration

* Simplify some method names

* Break into face_detector and face_landmarker

* Fix cosmetics

* Fix testing

* Break into face_attributor and face_recognizer

* Clear them all

* Clear them all

* Rename to face classifier

* Rename to face classifier

* Fix testing

* Fix stuff

* Add face landmarker model to UI

* Add face landmarker model to UI part2

* Split the config

* Split the UI

* Improvement from code review

* Improvement from code review

* Validate args also for sub parsers

* Remove clear of processors in process step

* Allow finder control for the face editor

* Fix lint

* Improve testing performance

* Remove unused file, Clear processors from the UI before job runs

* Update the installer

* Uniform set handler for swapper and detector in the UI

* Fix example urls

* Feat/inference manager (#684)

* Introduce inference manager

* Migrate all to inference manager

* clean ini

* Introduce app context based inference pools

* Fix lint

* Fix typing

* Adjust layout

* Less border radius

* Rename app context names

* Fix/live portrait directml (#691)

* changes (#690)

* Adjust naming

* Use our assets release

* Adjust naming

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Add caches to gitignore

* Update dependencies and drop CUDA 11.8 support (#693)

* Update dependencies and drop CUDA 11.8 support

* Play safe and keep numpy 1.x.x

* Improve TensorRT optimization

* changes

* changes

* changes

* changes

* changes

* changes

* changes

* changes

* changes

* Reuse inference sessions (#696)

* Fix force-download command

* Refactor processors to forward() (#698)

* Install tensorrt when selecting cuda

* Minor changes

* Use latest numpy

* Fix limit system memory

* Implement forward() for every inference (#699)

* Implement forward() for every inference

* Implement forward() for every inference

* Implement forward() for every inference

* Implement forward() for every inference

* changes

* changes

* changes

* changes

* Feat/fairface (#710)

* Replace gender_age model with fair face (#709)

* changes

* changes

* changes

* age dropdown to range-slider

* Cleanup code

* Cleanup code

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Extend installer to set library paths for cuda and tensorrt (#707)

* Extend installer to set library paths for cuda and tensorrt

* Add refresh of conda env

* Remove invalid commands

* Set the conda env according to operating system

* Update for ROCm 6.2

* fix installer

* Update installer.py

* Add missing face selector keys

* Try to keep original LD_LIBRARY_PATH

* windows support installer

* Final touch to the installer

* Remove spaces

* Simplify collect_model_downloads()

* Fix force download for once and forever

* Housekeeping (#715)

* changes

* changes

* changes

* Fix performance part1

* Fix mixed states (#689)

* Fix mixed states

* Add missing sync for job args

* Move UnionStateXXX to base typing

* Undo

* Remove UnionStateXXX

* Fix app context performance lookup (#717)

* Restore performance for inswapper

* Move upper() to the logger

* Undo debugging

* Move TensorRT installation to docs

* Sort out log level typing, Add log level UI dropdown (#719)

* Fix inference pool part1

* Validate conda library paths existence

* Default face selector order to large-small

* Fix inference pool context according to execution provider (#720)

* Fix app context under Windows

* CUDA and TensorRT update for the installer

* Remove concept of static processor modules

* Revert false commit

* Change event order makes a difference

* Fix multi model context in inference pool (#721)

* Fix multi model context in inference pool

* Fix multi model context in inference pool part2

* Use latest gradio to avoid fastapi bug

* Rework on the Windows Installer

* Use embedding converter (#724)

* changes (#723)

* Upload models to official assets repo

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Rework on the Windows Installer part2

* Resolve subprocess calls (#726)

* Experiment

* Resolve subprocess calls to cover edge cases like broken PATH

* Adjust wording

* Simplify code

* Rework on the Windows Installer part3

* Rework on the Windows Installer part4

* Numpy fix for older onnxruntime

* changes (#729)

* Add space

* Add MacOS installer

* Use favicon

* Fix disabled logger

* Layout polishing (#731)

* Update dependencies, Adjust many face landmarker logic

* Cosmetics changes

* Should be button

* Introduce randomized action button

* Fix update of lip syncer and expression restorer

* Stop sharing the inference session, as sharing prevents flushing VRAM

* Fix test

* Fix urls

* Prepare release

* Vanish inquirer

* Sticky preview does not work on portrait images

* Sticky preview only for landscape images and videos

* remove gradio tunnel env

* Change wording and deeplinks

* increase peppa landmark score offset

* Change wording

* Graceful exit install.py

* Just adding a required

* Cannot use the exit_helper

* Rename our model

* Change color of face-landmark-68/5

* Limit liveportrait (#739)

* changes

* changes

* changes

* Cleanup

* Cleanup

---------

Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* limit expression restorer

* change expression restorer 0-100 range

* Use 256x icon

* changes

* changes

* changes

* changes

* Limit face editor rotation (#745)

* changes (#743)

* Finish euler methods

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>

* Use different coveralls badge

* Move about wording

* Shorten scope in the logger

* changes

* changes

* Shorten scope in the logger

* fix typo

* Simplify the arcface converter names

* Update preview

---------

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
This commit is contained in:
Henry Ruhs
2024-09-20 17:27:50 +02:00
committed by GitHub
parent 57016d7c77
commit 13761af044
171 changed files with 11598 additions and 5115 deletions


@@ -1,23 +1,41 @@
import random
from typing import Optional
import gradio
from facefusion import metadata, wording
ABOUT_BUTTON : Optional[gradio.HTML] = None
DONATE_BUTTON : Optional[gradio.HTML] = None
METADATA_BUTTON : Optional[gradio.Button] = None
ACTION_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global ABOUT_BUTTON
global DONATE_BUTTON
global METADATA_BUTTON
global ACTION_BUTTON
ABOUT_BUTTON = gradio.Button(
action = random.choice(
[
{
'wording': wording.get('about.become_a_member'),
'url': 'https://subscribe.facefusion.io'
},
{
'wording': wording.get('about.join_our_community'),
'url': 'https://join.facefusion.io'
},
{
'wording': wording.get('about.read_the_documentation'),
'url': 'https://docs.facefusion.io'
}
])
METADATA_BUTTON = gradio.Button(
value = metadata.get('name') + ' ' + metadata.get('version'),
variant = 'primary',
link = metadata.get('url')
)
DONATE_BUTTON = gradio.Button(
value = wording.get('uis.donate_button'),
link = 'https://donate.facefusion.io',
ACTION_BUTTON = gradio.Button(
value = action.get('wording'),
link = action.get('url'),
size = 'sm'
)


@@ -0,0 +1,63 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_float_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import AgeModifierModel
from facefusion.uis.core import get_ui_component, register_ui_component
AGE_MODIFIER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
AGE_MODIFIER_DIRECTION_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global AGE_MODIFIER_MODEL_DROPDOWN
global AGE_MODIFIER_DIRECTION_SLIDER
AGE_MODIFIER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.age_modifier_model_dropdown'),
choices = processors_choices.age_modifier_models,
value = state_manager.get_item('age_modifier_model'),
visible = 'age_modifier' in state_manager.get_item('processors')
)
AGE_MODIFIER_DIRECTION_SLIDER = gradio.Slider(
label = wording.get('uis.age_modifier_direction_slider'),
value = state_manager.get_item('age_modifier_direction'),
step = calc_float_step(processors_choices.age_modifier_direction_range),
minimum = processors_choices.age_modifier_direction_range[0],
maximum = processors_choices.age_modifier_direction_range[-1],
visible = 'age_modifier' in state_manager.get_item('processors')
)
register_ui_component('age_modifier_model_dropdown', AGE_MODIFIER_MODEL_DROPDOWN)
register_ui_component('age_modifier_direction_slider', AGE_MODIFIER_DIRECTION_SLIDER)
def listen() -> None:
AGE_MODIFIER_MODEL_DROPDOWN.change(update_age_modifier_model, inputs = AGE_MODIFIER_MODEL_DROPDOWN, outputs = AGE_MODIFIER_MODEL_DROPDOWN)
AGE_MODIFIER_DIRECTION_SLIDER.release(update_age_modifier_direction, inputs = AGE_MODIFIER_DIRECTION_SLIDER)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ AGE_MODIFIER_MODEL_DROPDOWN, AGE_MODIFIER_DIRECTION_SLIDER ])
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
has_age_modifier = 'age_modifier' in processors
return gradio.Dropdown(visible = has_age_modifier), gradio.Slider(visible = has_age_modifier)
def update_age_modifier_model(age_modifier_model : AgeModifierModel) -> gradio.Dropdown:
age_modifier_module = load_processor_module('age_modifier')
age_modifier_module.clear_inference_pool()
state_manager.set_item('age_modifier_model', age_modifier_model)
if age_modifier_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('age_modifier_model'))
return gradio.Dropdown()
def update_age_modifier_direction(age_modifier_direction : float) -> None:
state_manager.set_item('age_modifier_direction', int(age_modifier_direction))


@@ -1,20 +1,20 @@
from typing import Any, Optional, List, Dict, Generator
from time import sleep, perf_counter
import tempfile
import hashlib
import os
import statistics
import tempfile
from time import perf_counter
from typing import Any, Dict, Generator, List, Optional
import gradio
import facefusion.globals
from facefusion import process_manager, wording
from facefusion.face_store import clear_static_faces
from facefusion.processors.frame.core import get_frame_processors_modules
from facefusion.vision import count_video_frame_total, detect_video_resolution, detect_video_fps, pack_resolution
from facefusion import state_manager, wording
from facefusion.core import conditional_process
from facefusion.filesystem import is_video
from facefusion.memory import limit_system_memory
from facefusion.filesystem import clear_temp
from facefusion.uis.core import get_ui_component
from facefusion.vision import count_video_frame_total, detect_video_fps, detect_video_resolution, pack_resolution
BENCHMARK_RESULTS_DATAFRAME : Optional[gradio.Dataframe] = None
BENCHMARK_BENCHMARKS_DATAFRAME : Optional[gradio.Dataframe] = None
BENCHMARK_START_BUTTON : Optional[gradio.Button] = None
BENCHMARK_CLEAR_BUTTON : Optional[gradio.Button] = None
BENCHMARKS : Dict[str, str] =\
@@ -30,12 +30,11 @@ BENCHMARKS : Dict[str, str] =\
def render() -> None:
global BENCHMARK_RESULTS_DATAFRAME
global BENCHMARK_BENCHMARKS_DATAFRAME
global BENCHMARK_START_BUTTON
global BENCHMARK_CLEAR_BUTTON
BENCHMARK_RESULTS_DATAFRAME = gradio.Dataframe(
label = wording.get('uis.benchmark_results_dataframe'),
BENCHMARK_BENCHMARKS_DATAFRAME = gradio.Dataframe(
headers =
[
'target_path',
@@ -53,17 +52,14 @@ def render() -> None:
'number',
'number',
'number'
]
],
show_label = False
)
BENCHMARK_START_BUTTON = gradio.Button(
value = wording.get('uis.start_button'),
variant = 'primary',
size = 'sm'
)
BENCHMARK_CLEAR_BUTTON = gradio.Button(
value = wording.get('uis.clear_button'),
size = 'sm'
)
def listen() -> None:
@@ -71,46 +67,51 @@ def listen() -> None:
benchmark_cycles_slider = get_ui_component('benchmark_cycles_slider')
if benchmark_runs_checkbox_group and benchmark_cycles_slider:
BENCHMARK_START_BUTTON.click(start, inputs = [ benchmark_runs_checkbox_group, benchmark_cycles_slider ], outputs = BENCHMARK_RESULTS_DATAFRAME)
BENCHMARK_CLEAR_BUTTON.click(clear, outputs = BENCHMARK_RESULTS_DATAFRAME)
BENCHMARK_START_BUTTON.click(start, inputs = [ benchmark_runs_checkbox_group, benchmark_cycles_slider ], outputs = BENCHMARK_BENCHMARKS_DATAFRAME)
def suggest_output_path(target_path : str) -> Optional[str]:
if is_video(target_path):
_, target_extension = os.path.splitext(target_path)
return os.path.join(tempfile.gettempdir(), hashlib.sha1().hexdigest()[:8] + target_extension)
return None
def start(benchmark_runs : List[str], benchmark_cycles : int) -> Generator[List[Any], None, None]:
facefusion.globals.source_paths = [ '.assets/examples/source.jpg', '.assets/examples/source.mp3' ]
facefusion.globals.output_path = tempfile.gettempdir()
facefusion.globals.face_landmarker_score = 0
facefusion.globals.temp_frame_format = 'bmp'
facefusion.globals.output_video_preset = 'ultrafast'
state_manager.init_item('source_paths', [ '.assets/examples/source.jpg', '.assets/examples/source.mp3' ])
state_manager.init_item('face_landmarker_score', 0)
state_manager.init_item('temp_frame_format', 'bmp')
state_manager.init_item('output_video_preset', 'ultrafast')
state_manager.sync_item('execution_providers')
state_manager.sync_item('execution_thread_count')
state_manager.sync_item('execution_queue_count')
state_manager.sync_item('system_memory_limit')
benchmark_results = []
target_paths = [ BENCHMARKS[benchmark_run] for benchmark_run in benchmark_runs if benchmark_run in BENCHMARKS ]
if target_paths:
pre_process()
for target_path in target_paths:
facefusion.globals.target_path = target_path
state_manager.init_item('target_path', target_path)
state_manager.init_item('output_path', suggest_output_path(state_manager.get_item('target_path')))
benchmark_results.append(benchmark(benchmark_cycles))
yield benchmark_results
post_process()
def pre_process() -> None:
if facefusion.globals.system_memory_limit > 0:
limit_system_memory(facefusion.globals.system_memory_limit)
for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
frame_processor_module.get_frame_processor()
def post_process() -> None:
clear_static_faces()
system_memory_limit = state_manager.get_item('system_memory_limit')
if system_memory_limit and system_memory_limit > 0:
limit_system_memory(system_memory_limit)
def benchmark(benchmark_cycles : int) -> List[Any]:
process_times = []
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
facefusion.globals.output_video_resolution = pack_resolution(output_video_resolution)
facefusion.globals.output_video_fps = detect_video_fps(facefusion.globals.target_path)
video_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
output_video_resolution = detect_video_resolution(state_manager.get_item('target_path'))
state_manager.init_item('output_video_resolution', pack_resolution(output_video_resolution))
state_manager.init_item('output_video_fps', detect_video_fps(state_manager.get_item('target_path')))
conditional_process()
for index in range(benchmark_cycles):
start_time = perf_counter()
conditional_process()
@@ -123,18 +124,10 @@ def benchmark(benchmark_cycles : int) -> List[Any]:
return\
[
facefusion.globals.target_path,
state_manager.get_item('target_path'),
benchmark_cycles,
average_run,
fastest_run,
slowest_run,
relative_fps
]
def clear() -> gradio.Dataframe:
while process_manager.is_processing():
sleep(0.5)
if facefusion.globals.target_path:
clear_temp(facefusion.globals.target_path)
return gradio.Dataframe(value = None)
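The benchmark diff above migrates every `facefusion.globals` attribute to the `state_manager` API (`init_item`, `set_item`, `get_item`, `sync_item`). A minimal sketch of what such a state manager could look like — the two-context layout and the `sync_item()` semantics (copying a value seeded at CLI startup into the UI context) are assumptions for illustration, not the actual facefusion internals:

```python
# Hypothetical state manager with two app contexts; sync_item() pulls a value
# from the 'cli' context (set at startup) into the 'ui' context.
# The names mirror the diff; the internals are assumed.
from typing import Any, Dict

STATES : Dict[str, Dict[str, Any]] = { 'cli': {}, 'ui': {} }
APP_CONTEXT = 'ui'

def init_item(key : str, value : Any) -> None:
	# seed a value in the active context
	STATES[APP_CONTEXT][key] = value

def set_item(key : str, value : Any) -> None:
	STATES[APP_CONTEXT][key] = value

def get_item(key : str) -> Any:
	return STATES[APP_CONTEXT].get(key)

def sync_item(key : str) -> None:
	# make the startup value visible to the UI context
	STATES['ui'][key] = STATES['cli'][key]

STATES['cli']['execution_thread_count'] = 4
sync_item('execution_thread_count')
print(get_item('execution_thread_count'))  # 4
```

This matches how start() above syncs execution and memory settings before reading them via get_item().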


@@ -1,9 +1,10 @@
from typing import Optional
import gradio
from facefusion import wording
from facefusion.uis.core import register_ui_component
from facefusion.uis.components.benchmark import BENCHMARKS
from facefusion.uis.core import register_ui_component
BENCHMARK_RUNS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
BENCHMARK_CYCLES_SLIDER : Optional[gradio.Button] = None


@@ -1,8 +1,8 @@
from typing import Optional, List
from typing import List, Optional
import gradio
import facefusion.globals
from facefusion import wording
from facefusion import state_manager, wording
from facefusion.uis import choices as uis_choices
COMMON_OPTIONS_CHECKBOX_GROUP : Optional[gradio.Checkboxgroup] = None
@@ -11,17 +11,19 @@ COMMON_OPTIONS_CHECKBOX_GROUP : Optional[gradio.Checkboxgroup] = None
def render() -> None:
global COMMON_OPTIONS_CHECKBOX_GROUP
value = []
if facefusion.globals.keep_temp:
value.append('keep-temp')
if facefusion.globals.skip_audio:
value.append('skip-audio')
if facefusion.globals.skip_download:
value.append('skip-download')
common_options = []
if state_manager.get_item('skip_download'):
common_options.append('skip-download')
if state_manager.get_item('keep_temp'):
common_options.append('keep-temp')
if state_manager.get_item('skip_audio'):
common_options.append('skip-audio')
COMMON_OPTIONS_CHECKBOX_GROUP = gradio.Checkboxgroup(
label = wording.get('uis.common_options_checkbox_group'),
choices = uis_choices.common_options,
value = value
value = common_options
)
@@ -30,6 +32,9 @@ def listen() -> None:
def update(common_options : List[str]) -> None:
facefusion.globals.keep_temp = 'keep-temp' in common_options
facefusion.globals.skip_audio = 'skip-audio' in common_options
facefusion.globals.skip_download = 'skip-download' in common_options
skip_temp = 'skip-download' in common_options
keep_temp = 'keep-temp' in common_options
skip_audio = 'skip-audio' in common_options
state_manager.set_item('skip_download', skip_temp)
state_manager.set_item('keep_temp', keep_temp)
state_manager.set_item('skip_audio', skip_audio)


@@ -1,12 +1,11 @@
from typing import List, Optional
import gradio
import onnxruntime
import facefusion.globals
from facefusion import wording
from facefusion.face_analyser import clear_face_analyser
from facefusion.processors.frame.core import clear_frame_processors_modules
from facefusion.execution import encode_execution_providers, decode_execution_providers
import gradio
from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, state_manager, voice_extractor, wording
from facefusion.execution import get_execution_provider_choices
from facefusion.processors.core import clear_processors_modules
from facefusion.typing import ExecutionProviderKey
EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -16,8 +15,8 @@ def render() -> None:
EXECUTION_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.execution_providers_checkbox_group'),
choices = encode_execution_providers(onnxruntime.get_available_providers()),
value = encode_execution_providers(facefusion.globals.execution_providers)
choices = get_execution_provider_choices(),
value = state_manager.get_item('execution_providers')
)
@@ -25,9 +24,15 @@ def listen() -> None:
EXECUTION_PROVIDERS_CHECKBOX_GROUP.change(update_execution_providers, inputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP, outputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP)
def update_execution_providers(execution_providers : List[str]) -> gradio.CheckboxGroup:
clear_face_analyser()
clear_frame_processors_modules()
execution_providers = execution_providers or encode_execution_providers(onnxruntime.get_available_providers())
facefusion.globals.execution_providers = decode_execution_providers(execution_providers)
return gradio.CheckboxGroup(value = execution_providers)
def update_execution_providers(execution_providers : List[ExecutionProviderKey]) -> gradio.CheckboxGroup:
content_analyser.clear_inference_pool()
face_classifier.clear_inference_pool()
face_detector.clear_inference_pool()
face_landmarker.clear_inference_pool()
face_masker.clear_inference_pool()
face_recognizer.clear_inference_pool()
voice_extractor.clear_inference_pool()
clear_processors_modules(state_manager.get_item('processors'))
execution_providers = execution_providers or get_execution_provider_choices()
state_manager.set_item('execution_providers', execution_providers)
return gradio.CheckboxGroup(value = state_manager.get_item('execution_providers'))
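Note the fallback in update_execution_providers() above: clearing every checkbox would leave no provider, so an empty selection falls back to all available choices before being written to state. A small sketch of just that fallback — the provider list returned by the stand-in helper is an assumption, not the facefusion `get_execution_provider_choices()`:

```python
# Sketch of the empty-selection fallback from update_execution_providers().
# get_execution_provider_choices is a stand-in with assumed values.
from typing import List

def get_execution_provider_choices() -> List[str]:
	return [ 'cpu', 'cuda', 'tensorrt' ]  # assumed available providers

def resolve_execution_providers(selected : List[str]) -> List[str]:
	# `selected or ...` treats an empty list as "use everything"
	return selected or get_execution_provider_choices()

print(resolve_execution_providers([ 'cuda' ]))  # ['cuda']
print(resolve_execution_providers([]))          # ['cpu', 'cuda', 'tensorrt']
```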


@@ -1,9 +1,10 @@
from typing import Optional
import gradio
import facefusion.globals
import facefusion.choices
from facefusion import wording
from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
EXECUTION_QUEUE_COUNT_SLIDER : Optional[gradio.Slider] = None
@@ -13,8 +14,8 @@ def render() -> None:
EXECUTION_QUEUE_COUNT_SLIDER = gradio.Slider(
label = wording.get('uis.execution_queue_count_slider'),
value = facefusion.globals.execution_queue_count,
step = facefusion.choices.execution_queue_count_range[1] - facefusion.choices.execution_queue_count_range[0],
value = state_manager.get_item('execution_queue_count'),
step = calc_int_step(facefusion.choices.execution_queue_count_range),
minimum = facefusion.choices.execution_queue_count_range[0],
maximum = facefusion.choices.execution_queue_count_range[-1]
)
@@ -24,5 +25,5 @@ def listen() -> None:
EXECUTION_QUEUE_COUNT_SLIDER.release(update_execution_queue_count, inputs = EXECUTION_QUEUE_COUNT_SLIDER)
def update_execution_queue_count(execution_queue_count : int = 1) -> None:
facefusion.globals.execution_queue_count = execution_queue_count
def update_execution_queue_count(execution_queue_count : float) -> None:
state_manager.set_item('execution_queue_count', int(execution_queue_count))


@@ -1,9 +1,10 @@
from typing import Optional
import gradio
import facefusion.globals
import facefusion.choices
from facefusion import wording
from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
EXECUTION_THREAD_COUNT_SLIDER : Optional[gradio.Slider] = None
@@ -13,8 +14,8 @@ def render() -> None:
EXECUTION_THREAD_COUNT_SLIDER = gradio.Slider(
label = wording.get('uis.execution_thread_count_slider'),
value = facefusion.globals.execution_thread_count,
step = facefusion.choices.execution_thread_count_range[1] - facefusion.choices.execution_thread_count_range[0],
value = state_manager.get_item('execution_thread_count'),
step = calc_int_step(facefusion.choices.execution_thread_count_range),
minimum = facefusion.choices.execution_thread_count_range[0],
maximum = facefusion.choices.execution_thread_count_range[-1]
)
@@ -24,6 +25,5 @@ def listen() -> None:
EXECUTION_THREAD_COUNT_SLIDER.release(update_execution_thread_count, inputs = EXECUTION_THREAD_COUNT_SLIDER)
def update_execution_thread_count(execution_thread_count : int = 1) -> None:
facefusion.globals.execution_thread_count = execution_thread_count
def update_execution_thread_count(execution_thread_count : float) -> None:
state_manager.set_item('execution_thread_count', int(execution_thread_count))
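The two slider diffs above replace the inline `range[1] - range[0]` expression with `calc_int_step` / `calc_float_step` from `facefusion.common_helper`. A plausible reading inferred from the replaced expression — the rounding in the float variant is an assumption to suppress float artifacts, not confirmed from the helper's source:

```python
# Assumed implementations of the step helpers, derived from the inline
# expression they replace in the diff: step = range[1] - range[0].
from typing import Sequence

def calc_int_step(int_range : Sequence[int]) -> int:
	# step between the first two entries of an evenly spaced range
	return int_range[1] - int_range[0]

def calc_float_step(float_range : Sequence[float]) -> float:
	# rounded to avoid artifacts like 0.050000000000000044
	return round(float_range[1] - float_range[0], 2)

print(calc_int_step(range(0, 33)))          # 1
print(calc_float_step([ 0.0, 0.05, 0.1 ]))  # 0.05
```

Either way, the point of the refactor is that the step is derived from the choices range itself, so sliders never drift out of sync with `facefusion.choices`.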


@@ -0,0 +1,63 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_float_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import ExpressionRestorerModel
from facefusion.uis.core import get_ui_component, register_ui_component
EXPRESSION_RESTORER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
EXPRESSION_RESTORER_FACTOR_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global EXPRESSION_RESTORER_MODEL_DROPDOWN
global EXPRESSION_RESTORER_FACTOR_SLIDER
EXPRESSION_RESTORER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.expression_restorer_model_dropdown'),
choices = processors_choices.expression_restorer_models,
value = state_manager.get_item('expression_restorer_model'),
visible = 'expression_restorer' in state_manager.get_item('processors')
)
EXPRESSION_RESTORER_FACTOR_SLIDER = gradio.Slider(
label = wording.get('uis.expression_restorer_factor_slider'),
value = state_manager.get_item('expression_restorer_factor'),
step = calc_float_step(processors_choices.expression_restorer_factor_range),
minimum = processors_choices.expression_restorer_factor_range[0],
maximum = processors_choices.expression_restorer_factor_range[-1],
visible = 'expression_restorer' in state_manager.get_item('processors'),
)
register_ui_component('expression_restorer_model_dropdown', EXPRESSION_RESTORER_MODEL_DROPDOWN)
register_ui_component('expression_restorer_factor_slider', EXPRESSION_RESTORER_FACTOR_SLIDER)
def listen() -> None:
EXPRESSION_RESTORER_MODEL_DROPDOWN.change(update_expression_restorer_model, inputs = EXPRESSION_RESTORER_MODEL_DROPDOWN, outputs = EXPRESSION_RESTORER_MODEL_DROPDOWN)
EXPRESSION_RESTORER_FACTOR_SLIDER.release(update_expression_restorer_factor, inputs = EXPRESSION_RESTORER_FACTOR_SLIDER)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ EXPRESSION_RESTORER_MODEL_DROPDOWN, EXPRESSION_RESTORER_FACTOR_SLIDER ])
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
has_expression_restorer = 'expression_restorer' in processors
return gradio.Dropdown(visible = has_expression_restorer), gradio.Slider(visible = has_expression_restorer)
def update_expression_restorer_model(expression_restorer_model : ExpressionRestorerModel) -> gradio.Dropdown:
expression_restorer_module = load_processor_module('expression_restorer')
expression_restorer_module.clear_inference_pool()
state_manager.set_item('expression_restorer_model', expression_restorer_model)
if expression_restorer_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('expression_restorer_model'))
return gradio.Dropdown()
def update_expression_restorer_factor(expression_restorer_factor : float) -> None:
state_manager.set_item('expression_restorer_factor', int(expression_restorer_factor))
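The new processor UIs coordinate through `register_ui_component` and `get_ui_component` from `facefusion.uis.core`: one module renders a component, another looks it up to attach a `change` listener. A minimal sketch of such a registry, assuming components are keyed by name in a module-level dict (names and behavior here are illustrative, not the actual facefusion implementation):

```python
from typing import Any, Dict, Optional

UI_COMPONENTS : Dict[str, Any] = {}


def register_ui_component(component_name : str, component : Any) -> None:
	# store the rendered component so other UI modules can attach listeners later
	UI_COMPONENTS[component_name] = component


def get_ui_component(component_name : str) -> Optional[Any]:
	# returns None when the component was never rendered, which is why
	# callers guard with `if processors_checkbox_group:` before wiring events
	return UI_COMPONENTS.get(component_name)
```

This lookup-returns-None contract explains the guard pattern repeated in every `listen()` above.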

View File

@@ -1,123 +0,0 @@
from typing import Optional, Dict, Any, Tuple
import gradio
import facefusion.globals
import facefusion.choices
from facefusion import face_analyser, wording
from facefusion.typing import FaceAnalyserOrder, FaceAnalyserAge, FaceAnalyserGender, FaceDetectorModel
from facefusion.uis.core import register_ui_component
FACE_ANALYSER_ORDER_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ANALYSER_AGE_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ANALYSER_GENDER_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_SCORE_SLIDER : Optional[gradio.Slider] = None
FACE_LANDMARKER_SCORE_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FACE_ANALYSER_ORDER_DROPDOWN
global FACE_ANALYSER_AGE_DROPDOWN
global FACE_ANALYSER_GENDER_DROPDOWN
global FACE_DETECTOR_MODEL_DROPDOWN
global FACE_DETECTOR_SIZE_DROPDOWN
global FACE_DETECTOR_SCORE_SLIDER
global FACE_LANDMARKER_SCORE_SLIDER
face_detector_size_dropdown_args : Dict[str, Any] =\
{
'label': wording.get('uis.face_detector_size_dropdown'),
'value': facefusion.globals.face_detector_size
}
if facefusion.globals.face_detector_size in facefusion.choices.face_detector_set[facefusion.globals.face_detector_model]:
face_detector_size_dropdown_args['choices'] = facefusion.choices.face_detector_set[facefusion.globals.face_detector_model]
with gradio.Row():
FACE_ANALYSER_ORDER_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_analyser_order_dropdown'),
choices = facefusion.choices.face_analyser_orders,
value = facefusion.globals.face_analyser_order
)
FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_analyser_age_dropdown'),
choices = [ 'none' ] + facefusion.choices.face_analyser_ages,
value = facefusion.globals.face_analyser_age or 'none'
)
FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_analyser_gender_dropdown'),
choices = [ 'none' ] + facefusion.choices.face_analyser_genders,
value = facefusion.globals.face_analyser_gender or 'none'
)
FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_detector_model_dropdown'),
choices = facefusion.choices.face_detector_set.keys(),
value = facefusion.globals.face_detector_model
)
FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_args)
with gradio.Row():
FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_detector_score_slider'),
value = facefusion.globals.face_detector_score,
step = facefusion.choices.face_detector_score_range[1] - facefusion.choices.face_detector_score_range[0],
minimum = facefusion.choices.face_detector_score_range[0],
maximum = facefusion.choices.face_detector_score_range[-1]
)
FACE_LANDMARKER_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_landmarker_score_slider'),
value = facefusion.globals.face_landmarker_score,
step = facefusion.choices.face_landmarker_score_range[1] - facefusion.choices.face_landmarker_score_range[0],
minimum = facefusion.choices.face_landmarker_score_range[0],
maximum = facefusion.choices.face_landmarker_score_range[-1]
)
register_ui_component('face_analyser_order_dropdown', FACE_ANALYSER_ORDER_DROPDOWN)
register_ui_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN)
register_ui_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN)
register_ui_component('face_detector_model_dropdown', FACE_DETECTOR_MODEL_DROPDOWN)
register_ui_component('face_detector_size_dropdown', FACE_DETECTOR_SIZE_DROPDOWN)
register_ui_component('face_detector_score_slider', FACE_DETECTOR_SCORE_SLIDER)
register_ui_component('face_landmarker_score_slider', FACE_LANDMARKER_SCORE_SLIDER)
def listen() -> None:
FACE_ANALYSER_ORDER_DROPDOWN.change(update_face_analyser_order, inputs = FACE_ANALYSER_ORDER_DROPDOWN)
FACE_ANALYSER_AGE_DROPDOWN.change(update_face_analyser_age, inputs = FACE_ANALYSER_AGE_DROPDOWN)
FACE_ANALYSER_GENDER_DROPDOWN.change(update_face_analyser_gender, inputs = FACE_ANALYSER_GENDER_DROPDOWN)
FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN, outputs = [ FACE_DETECTOR_MODEL_DROPDOWN, FACE_DETECTOR_SIZE_DROPDOWN ])
FACE_DETECTOR_SIZE_DROPDOWN.change(update_face_detector_size, inputs = FACE_DETECTOR_SIZE_DROPDOWN)
FACE_DETECTOR_SCORE_SLIDER.release(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)
FACE_LANDMARKER_SCORE_SLIDER.release(update_face_landmarker_score, inputs = FACE_LANDMARKER_SCORE_SLIDER)
def update_face_analyser_order(face_analyser_order : FaceAnalyserOrder) -> None:
facefusion.globals.face_analyser_order = face_analyser_order if face_analyser_order != 'none' else None
def update_face_analyser_age(face_analyser_age : FaceAnalyserAge) -> None:
facefusion.globals.face_analyser_age = face_analyser_age if face_analyser_age != 'none' else None
def update_face_analyser_gender(face_analyser_gender : FaceAnalyserGender) -> None:
facefusion.globals.face_analyser_gender = face_analyser_gender if face_analyser_gender != 'none' else None
def update_face_detector_model(face_detector_model : FaceDetectorModel) -> Tuple[gradio.Dropdown, gradio.Dropdown]:
facefusion.globals.face_detector_model = face_detector_model
update_face_detector_size('640x640')
if face_analyser.pre_check():
if facefusion.globals.face_detector_size in facefusion.choices.face_detector_set[face_detector_model]:
return gradio.Dropdown(value = facefusion.globals.face_detector_model), gradio.Dropdown(value = facefusion.globals.face_detector_size, choices = facefusion.choices.face_detector_set[face_detector_model])
return gradio.Dropdown(value = facefusion.globals.face_detector_model), gradio.Dropdown(value = facefusion.globals.face_detector_size, choices = [ facefusion.globals.face_detector_size ])
return gradio.Dropdown(), gradio.Dropdown()
def update_face_detector_size(face_detector_size : str) -> None:
facefusion.globals.face_detector_size = face_detector_size
def update_face_detector_score(face_detector_score : float) -> None:
facefusion.globals.face_detector_score = face_detector_score
def update_face_landmarker_score(face_landmarker_score : float) -> None:
facefusion.globals.face_landmarker_score = face_landmarker_score
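The removed file above converted between the dropdown sentinel string and `None` inline (`face_analyser_age if face_analyser_age != 'none' else None`). The changelog mentions a "UI helper to convert 'none'" that centralizes this; a sketch of such a helper, with the name `convert_str_none` being an assumption rather than the confirmed facefusion API:

```python
from typing import Any, Optional


def convert_str_none(value : Any) -> Optional[Any]:
	# dropdowns use the literal string 'none' to mean "no selection";
	# the state layer expects an actual None instead
	if value == 'none':
		return None
	return value
```

A hypothetical call site would then read `state_manager.set_item('face_selector_age', convert_str_none(face_selector_age))` instead of repeating the ternary in every update function.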

View File

@@ -0,0 +1,39 @@
from typing import List, Optional
import gradio
from facefusion import state_manager, wording
from facefusion.processors import choices as processors_choices
from facefusion.processors.typing import FaceDebuggerItem
from facefusion.uis.core import get_ui_component, register_ui_component
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
def render() -> None:
global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.face_debugger_items_checkbox_group'),
choices = processors_choices.face_debugger_items,
value = state_manager.get_item('face_debugger_items'),
visible = 'face_debugger' in state_manager.get_item('processors')
)
register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
def listen() -> None:
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP.change(update_face_debugger_items, inputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
def remote_update(processors : List[str]) -> gradio.CheckboxGroup:
has_face_debugger = 'face_debugger' in processors
return gradio.CheckboxGroup(visible = has_face_debugger)
def update_face_debugger_items(face_debugger_items : List[FaceDebuggerItem]) -> None:
state_manager.set_item('face_debugger_items', face_debugger_items)

View File

@@ -0,0 +1,85 @@
from typing import Optional, Sequence, Tuple
import gradio
import facefusion.choices
from facefusion import choices, face_detector, state_manager, wording
from facefusion.common_helper import calc_float_step, get_last
from facefusion.typing import Angle, FaceDetectorModel, Score
from facefusion.uis.core import register_ui_component
from facefusion.uis.typing import ComponentOptions
FACE_DETECTOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_DETECTOR_ANGLES_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
FACE_DETECTOR_SCORE_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FACE_DETECTOR_MODEL_DROPDOWN
global FACE_DETECTOR_SIZE_DROPDOWN
global FACE_DETECTOR_ANGLES_CHECKBOX_GROUP
global FACE_DETECTOR_SCORE_SLIDER
face_detector_size_dropdown_options : ComponentOptions =\
{
'label': wording.get('uis.face_detector_size_dropdown'),
'value': state_manager.get_item('face_detector_size')
}
if state_manager.get_item('face_detector_size') in facefusion.choices.face_detector_set[state_manager.get_item('face_detector_model')]:
face_detector_size_dropdown_options['choices'] = facefusion.choices.face_detector_set[state_manager.get_item('face_detector_model')]
with gradio.Row():
FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_detector_model_dropdown'),
choices = facefusion.choices.face_detector_set.keys(),
value = state_manager.get_item('face_detector_model')
)
FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_options)
FACE_DETECTOR_ANGLES_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.face_detector_angles_checkbox_group'),
choices = facefusion.choices.face_detector_angles,
value = state_manager.get_item('face_detector_angles')
)
FACE_DETECTOR_SCORE_SLIDER = gradio.Slider(
label = wording.get('uis.face_detector_score_slider'),
value = state_manager.get_item('face_detector_score'),
step = calc_float_step(facefusion.choices.face_detector_score_range),
minimum = facefusion.choices.face_detector_score_range[0],
maximum = facefusion.choices.face_detector_score_range[-1]
)
register_ui_component('face_detector_model_dropdown', FACE_DETECTOR_MODEL_DROPDOWN)
register_ui_component('face_detector_size_dropdown', FACE_DETECTOR_SIZE_DROPDOWN)
register_ui_component('face_detector_angles_checkbox_group', FACE_DETECTOR_ANGLES_CHECKBOX_GROUP)
register_ui_component('face_detector_score_slider', FACE_DETECTOR_SCORE_SLIDER)
def listen() -> None:
FACE_DETECTOR_MODEL_DROPDOWN.change(update_face_detector_model, inputs = FACE_DETECTOR_MODEL_DROPDOWN, outputs = [ FACE_DETECTOR_MODEL_DROPDOWN, FACE_DETECTOR_SIZE_DROPDOWN ])
FACE_DETECTOR_SIZE_DROPDOWN.change(update_face_detector_size, inputs = FACE_DETECTOR_SIZE_DROPDOWN)
FACE_DETECTOR_ANGLES_CHECKBOX_GROUP.change(update_face_detector_angles, inputs = FACE_DETECTOR_ANGLES_CHECKBOX_GROUP, outputs = FACE_DETECTOR_ANGLES_CHECKBOX_GROUP)
FACE_DETECTOR_SCORE_SLIDER.release(update_face_detector_score, inputs = FACE_DETECTOR_SCORE_SLIDER)
def update_face_detector_model(face_detector_model : FaceDetectorModel) -> Tuple[gradio.Dropdown, gradio.Dropdown]:
face_detector.clear_inference_pool()
state_manager.set_item('face_detector_model', face_detector_model)
if face_detector.pre_check():
face_detector_size_choices = choices.face_detector_set.get(state_manager.get_item('face_detector_model'))
state_manager.set_item('face_detector_size', get_last(face_detector_size_choices))
return gradio.Dropdown(value = state_manager.get_item('face_detector_model')), gradio.Dropdown(value = state_manager.get_item('face_detector_size'), choices = face_detector_size_choices)
return gradio.Dropdown(), gradio.Dropdown()
def update_face_detector_size(face_detector_size : str) -> None:
state_manager.set_item('face_detector_size', face_detector_size)
def update_face_detector_angles(face_detector_angles : Sequence[Angle]) -> gradio.CheckboxGroup:
face_detector_angles = face_detector_angles or facefusion.choices.face_detector_angles
state_manager.set_item('face_detector_angles', face_detector_angles)
return gradio.CheckboxGroup(value = state_manager.get_item('face_detector_angles'))
def update_face_detector_score(face_detector_score : Score) -> None:
state_manager.set_item('face_detector_score', face_detector_score)
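`update_face_detector_model` above resets the detector size via `get_last(face_detector_size_choices)`, and the changelog notes `[0]` indexing was replaced with `get_first()`. A sketch of these empty-safe accessors, assuming they return `None` for an empty list instead of raising `IndexError` (the actual facefusion helpers may differ):

```python
from typing import Any, List, Optional


def get_first(__list__ : List[Any]) -> Optional[Any]:
	# empty-safe replacement for __list__[0]
	return next(iter(__list__), None)


def get_last(__list__ : List[Any]) -> Optional[Any]:
	# empty-safe replacement for __list__[-1]
	return next(reversed(__list__), None)
```

This makes the model-change handler robust when `face_detector_set.get(...)` returns an empty or missing choice list.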

View File

@@ -0,0 +1,271 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_float_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import FaceEditorModel
from facefusion.uis.core import get_ui_component, register_ui_component
FACE_EDITOR_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_EDITOR_EYEBROW_DIRECTION_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_EYE_OPEN_RATIO_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_LIP_OPEN_RATIO_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_GRIM_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_POUT_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_PURSE_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_SMILE_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_HEAD_PITCH_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_HEAD_YAW_SLIDER : Optional[gradio.Slider] = None
FACE_EDITOR_HEAD_ROLL_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FACE_EDITOR_MODEL_DROPDOWN
global FACE_EDITOR_EYEBROW_DIRECTION_SLIDER
global FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER
global FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER
global FACE_EDITOR_EYE_OPEN_RATIO_SLIDER
global FACE_EDITOR_LIP_OPEN_RATIO_SLIDER
global FACE_EDITOR_MOUTH_GRIM_SLIDER
global FACE_EDITOR_MOUTH_POUT_SLIDER
global FACE_EDITOR_MOUTH_PURSE_SLIDER
global FACE_EDITOR_MOUTH_SMILE_SLIDER
global FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER
global FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER
global FACE_EDITOR_HEAD_PITCH_SLIDER
global FACE_EDITOR_HEAD_YAW_SLIDER
global FACE_EDITOR_HEAD_ROLL_SLIDER
FACE_EDITOR_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_editor_model_dropdown'),
choices = processors_choices.face_editor_models,
value = state_manager.get_item('face_editor_model'),
visible = 'face_editor' in state_manager.get_item('processors')
)
FACE_EDITOR_EYEBROW_DIRECTION_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_eyebrow_direction_slider'),
value = state_manager.get_item('face_editor_eyebrow_direction'),
step = calc_float_step(processors_choices.face_editor_eyebrow_direction_range),
minimum = processors_choices.face_editor_eyebrow_direction_range[0],
maximum = processors_choices.face_editor_eyebrow_direction_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_eye_gaze_horizontal_slider'),
value = state_manager.get_item('face_editor_eye_gaze_horizontal'),
step = calc_float_step(processors_choices.face_editor_eye_gaze_horizontal_range),
minimum = processors_choices.face_editor_eye_gaze_horizontal_range[0],
maximum = processors_choices.face_editor_eye_gaze_horizontal_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_eye_gaze_vertical_slider'),
value = state_manager.get_item('face_editor_eye_gaze_vertical'),
step = calc_float_step(processors_choices.face_editor_eye_gaze_vertical_range),
minimum = processors_choices.face_editor_eye_gaze_vertical_range[0],
maximum = processors_choices.face_editor_eye_gaze_vertical_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_EYE_OPEN_RATIO_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_eye_open_ratio_slider'),
value = state_manager.get_item('face_editor_eye_open_ratio'),
step = calc_float_step(processors_choices.face_editor_eye_open_ratio_range),
minimum = processors_choices.face_editor_eye_open_ratio_range[0],
maximum = processors_choices.face_editor_eye_open_ratio_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_LIP_OPEN_RATIO_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_lip_open_ratio_slider'),
value = state_manager.get_item('face_editor_lip_open_ratio'),
step = calc_float_step(processors_choices.face_editor_lip_open_ratio_range),
minimum = processors_choices.face_editor_lip_open_ratio_range[0],
maximum = processors_choices.face_editor_lip_open_ratio_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_GRIM_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_grim_slider'),
value = state_manager.get_item('face_editor_mouth_grim'),
step = calc_float_step(processors_choices.face_editor_mouth_grim_range),
minimum = processors_choices.face_editor_mouth_grim_range[0],
maximum = processors_choices.face_editor_mouth_grim_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_POUT_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_pout_slider'),
value = state_manager.get_item('face_editor_mouth_pout'),
step = calc_float_step(processors_choices.face_editor_mouth_pout_range),
minimum = processors_choices.face_editor_mouth_pout_range[0],
maximum = processors_choices.face_editor_mouth_pout_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_PURSE_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_purse_slider'),
value = state_manager.get_item('face_editor_mouth_purse'),
step = calc_float_step(processors_choices.face_editor_mouth_purse_range),
minimum = processors_choices.face_editor_mouth_purse_range[0],
maximum = processors_choices.face_editor_mouth_purse_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_SMILE_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_smile_slider'),
value = state_manager.get_item('face_editor_mouth_smile'),
step = calc_float_step(processors_choices.face_editor_mouth_smile_range),
minimum = processors_choices.face_editor_mouth_smile_range[0],
maximum = processors_choices.face_editor_mouth_smile_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_position_horizontal_slider'),
value = state_manager.get_item('face_editor_mouth_position_horizontal'),
step = calc_float_step(processors_choices.face_editor_mouth_position_horizontal_range),
minimum = processors_choices.face_editor_mouth_position_horizontal_range[0],
maximum = processors_choices.face_editor_mouth_position_horizontal_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_mouth_position_vertical_slider'),
value = state_manager.get_item('face_editor_mouth_position_vertical'),
step = calc_float_step(processors_choices.face_editor_mouth_position_vertical_range),
minimum = processors_choices.face_editor_mouth_position_vertical_range[0],
maximum = processors_choices.face_editor_mouth_position_vertical_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_HEAD_PITCH_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_head_pitch_slider'),
value = state_manager.get_item('face_editor_head_pitch'),
step = calc_float_step(processors_choices.face_editor_head_pitch_range),
minimum = processors_choices.face_editor_head_pitch_range[0],
maximum = processors_choices.face_editor_head_pitch_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_HEAD_YAW_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_head_yaw_slider'),
value = state_manager.get_item('face_editor_head_yaw'),
step = calc_float_step(processors_choices.face_editor_head_yaw_range),
minimum = processors_choices.face_editor_head_yaw_range[0],
maximum = processors_choices.face_editor_head_yaw_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
FACE_EDITOR_HEAD_ROLL_SLIDER = gradio.Slider(
label = wording.get('uis.face_editor_head_roll_slider'),
value = state_manager.get_item('face_editor_head_roll'),
step = calc_float_step(processors_choices.face_editor_head_roll_range),
minimum = processors_choices.face_editor_head_roll_range[0],
maximum = processors_choices.face_editor_head_roll_range[-1],
visible = 'face_editor' in state_manager.get_item('processors'),
)
register_ui_component('face_editor_model_dropdown', FACE_EDITOR_MODEL_DROPDOWN)
register_ui_component('face_editor_eyebrow_direction_slider', FACE_EDITOR_EYEBROW_DIRECTION_SLIDER)
register_ui_component('face_editor_eye_gaze_horizontal_slider', FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER)
register_ui_component('face_editor_eye_gaze_vertical_slider', FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER)
register_ui_component('face_editor_eye_open_ratio_slider', FACE_EDITOR_EYE_OPEN_RATIO_SLIDER)
register_ui_component('face_editor_lip_open_ratio_slider', FACE_EDITOR_LIP_OPEN_RATIO_SLIDER)
register_ui_component('face_editor_mouth_grim_slider', FACE_EDITOR_MOUTH_GRIM_SLIDER)
register_ui_component('face_editor_mouth_pout_slider', FACE_EDITOR_MOUTH_POUT_SLIDER)
register_ui_component('face_editor_mouth_purse_slider', FACE_EDITOR_MOUTH_PURSE_SLIDER)
register_ui_component('face_editor_mouth_smile_slider', FACE_EDITOR_MOUTH_SMILE_SLIDER)
register_ui_component('face_editor_mouth_position_horizontal_slider', FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER)
register_ui_component('face_editor_mouth_position_vertical_slider', FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER)
register_ui_component('face_editor_head_pitch_slider', FACE_EDITOR_HEAD_PITCH_SLIDER)
register_ui_component('face_editor_head_yaw_slider', FACE_EDITOR_HEAD_YAW_SLIDER)
register_ui_component('face_editor_head_roll_slider', FACE_EDITOR_HEAD_ROLL_SLIDER)
def listen() -> None:
FACE_EDITOR_MODEL_DROPDOWN.change(update_face_editor_model, inputs = FACE_EDITOR_MODEL_DROPDOWN, outputs = FACE_EDITOR_MODEL_DROPDOWN)
FACE_EDITOR_EYEBROW_DIRECTION_SLIDER.release(update_face_editor_eyebrow_direction, inputs = FACE_EDITOR_EYEBROW_DIRECTION_SLIDER)
FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER.release(update_face_editor_eye_gaze_horizontal, inputs = FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER)
FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER.release(update_face_editor_eye_gaze_vertical, inputs = FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER)
FACE_EDITOR_EYE_OPEN_RATIO_SLIDER.release(update_face_editor_eye_open_ratio, inputs = FACE_EDITOR_EYE_OPEN_RATIO_SLIDER)
FACE_EDITOR_LIP_OPEN_RATIO_SLIDER.release(update_face_editor_lip_open_ratio, inputs = FACE_EDITOR_LIP_OPEN_RATIO_SLIDER)
FACE_EDITOR_MOUTH_GRIM_SLIDER.release(update_face_editor_mouth_grim, inputs = FACE_EDITOR_MOUTH_GRIM_SLIDER)
FACE_EDITOR_MOUTH_POUT_SLIDER.release(update_face_editor_mouth_pout, inputs = FACE_EDITOR_MOUTH_POUT_SLIDER)
FACE_EDITOR_MOUTH_PURSE_SLIDER.release(update_face_editor_mouth_purse, inputs = FACE_EDITOR_MOUTH_PURSE_SLIDER)
FACE_EDITOR_MOUTH_SMILE_SLIDER.release(update_face_editor_mouth_smile, inputs = FACE_EDITOR_MOUTH_SMILE_SLIDER)
FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER.release(update_face_editor_mouth_position_horizontal, inputs = FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER)
FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER.release(update_face_editor_mouth_position_vertical, inputs = FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER)
FACE_EDITOR_HEAD_PITCH_SLIDER.release(update_face_editor_head_pitch, inputs = FACE_EDITOR_HEAD_PITCH_SLIDER)
FACE_EDITOR_HEAD_YAW_SLIDER.release(update_face_editor_head_yaw, inputs = FACE_EDITOR_HEAD_YAW_SLIDER)
FACE_EDITOR_HEAD_ROLL_SLIDER.release(update_face_editor_head_roll, inputs = FACE_EDITOR_HEAD_ROLL_SLIDER)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [FACE_EDITOR_MODEL_DROPDOWN, FACE_EDITOR_EYEBROW_DIRECTION_SLIDER, FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER, FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER, FACE_EDITOR_EYE_OPEN_RATIO_SLIDER, FACE_EDITOR_LIP_OPEN_RATIO_SLIDER, FACE_EDITOR_MOUTH_GRIM_SLIDER, FACE_EDITOR_MOUTH_POUT_SLIDER, FACE_EDITOR_MOUTH_PURSE_SLIDER, FACE_EDITOR_MOUTH_SMILE_SLIDER, FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER, FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER, FACE_EDITOR_HEAD_PITCH_SLIDER, FACE_EDITOR_HEAD_YAW_SLIDER, FACE_EDITOR_HEAD_ROLL_SLIDER])
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider]:
has_face_editor = 'face_editor' in processors
return gradio.Dropdown(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor), gradio.Slider(visible = has_face_editor)
def update_face_editor_model(face_editor_model : FaceEditorModel) -> gradio.Dropdown:
face_editor_module = load_processor_module('face_editor')
face_editor_module.clear_inference_pool()
state_manager.set_item('face_editor_model', face_editor_model)
if face_editor_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('face_editor_model'))
return gradio.Dropdown()
def update_face_editor_eyebrow_direction(face_editor_eyebrow_direction : float) -> None:
state_manager.set_item('face_editor_eyebrow_direction', face_editor_eyebrow_direction)
def update_face_editor_eye_gaze_horizontal(face_editor_eye_gaze_horizontal : float) -> None:
state_manager.set_item('face_editor_eye_gaze_horizontal', face_editor_eye_gaze_horizontal)
def update_face_editor_eye_gaze_vertical(face_editor_eye_gaze_vertical : float) -> None:
state_manager.set_item('face_editor_eye_gaze_vertical', face_editor_eye_gaze_vertical)
def update_face_editor_eye_open_ratio(face_editor_eye_open_ratio : float) -> None:
state_manager.set_item('face_editor_eye_open_ratio', face_editor_eye_open_ratio)
def update_face_editor_lip_open_ratio(face_editor_lip_open_ratio : float) -> None:
state_manager.set_item('face_editor_lip_open_ratio', face_editor_lip_open_ratio)
def update_face_editor_mouth_grim(face_editor_mouth_grim : float) -> None:
state_manager.set_item('face_editor_mouth_grim', face_editor_mouth_grim)
def update_face_editor_mouth_pout(face_editor_mouth_pout : float) -> None:
state_manager.set_item('face_editor_mouth_pout', face_editor_mouth_pout)
def update_face_editor_mouth_purse(face_editor_mouth_purse : float) -> None:
state_manager.set_item('face_editor_mouth_purse', face_editor_mouth_purse)
def update_face_editor_mouth_smile(face_editor_mouth_smile : float) -> None:
state_manager.set_item('face_editor_mouth_smile', face_editor_mouth_smile)
def update_face_editor_mouth_position_horizontal(face_editor_mouth_position_horizontal : float) -> None:
state_manager.set_item('face_editor_mouth_position_horizontal', face_editor_mouth_position_horizontal)
def update_face_editor_mouth_position_vertical(face_editor_mouth_position_vertical : float) -> None:
state_manager.set_item('face_editor_mouth_position_vertical', face_editor_mouth_position_vertical)
def update_face_editor_head_pitch(face_editor_head_pitch : float) -> None:
state_manager.set_item('face_editor_head_pitch', face_editor_head_pitch)
def update_face_editor_head_yaw(face_editor_head_yaw : float) -> None:
state_manager.set_item('face_editor_head_yaw', face_editor_head_yaw)
def update_face_editor_head_roll(face_editor_head_roll : float) -> None:
state_manager.set_item('face_editor_head_roll', face_editor_head_roll)
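`remote_update` in the face editor above repeats `gradio.Slider(visible = has_face_editor)` fourteen times because gradio requires one update object per registered output. A sketch of how that fan-out could be generated instead, using a stand-in class since this is illustrative rather than the shipped implementation:

```python
from typing import List, Tuple


class Update:
	# stand-in for a gradio component update carrying only visibility
	def __init__(self, visible : bool) -> None:
		self.visible = visible


def remote_update(processors : List[str]) -> Tuple[Update, ...]:
	has_face_editor = 'face_editor' in processors
	# one dropdown update followed by fourteen slider updates
	return tuple(Update(has_face_editor) for _ in range(15))
```

The explicit tuple in the diff trades this compactness for matching gradio's per-component return types exactly.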

View File

@@ -0,0 +1,63 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import FaceEnhancerModel
from facefusion.uis.core import get_ui_component, register_ui_component

FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
	global FACE_ENHANCER_MODEL_DROPDOWN
	global FACE_ENHANCER_BLEND_SLIDER
	FACE_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
		label = wording.get('uis.face_enhancer_model_dropdown'),
		choices = processors_choices.face_enhancer_models,
		value = state_manager.get_item('face_enhancer_model'),
		visible = 'face_enhancer' in state_manager.get_item('processors')
	)
	FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
		label = wording.get('uis.face_enhancer_blend_slider'),
		value = state_manager.get_item('face_enhancer_blend'),
		step = calc_int_step(processors_choices.face_enhancer_blend_range),
		minimum = processors_choices.face_enhancer_blend_range[0],
		maximum = processors_choices.face_enhancer_blend_range[-1],
		visible = 'face_enhancer' in state_manager.get_item('processors')
	)
	register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
	register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)


def listen() -> None:
	FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = FACE_ENHANCER_MODEL_DROPDOWN)
	FACE_ENHANCER_BLEND_SLIDER.release(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
	processors_checkbox_group = get_ui_component('processors_checkbox_group')
	if processors_checkbox_group:
		processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER ])


def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
	has_face_enhancer = 'face_enhancer' in processors
	return gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer)


def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
	face_enhancer_module = load_processor_module('face_enhancer')
	face_enhancer_module.clear_inference_pool()
	state_manager.set_item('face_enhancer_model', face_enhancer_model)
	if face_enhancer_module.pre_check():
		return gradio.Dropdown(value = state_manager.get_item('face_enhancer_model'))
	return gradio.Dropdown()


def update_face_enhancer_blend(face_enhancer_blend : float) -> None:
	state_manager.set_item('face_enhancer_blend', int(face_enhancer_blend))
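The blend slider above derives its step from `calc_int_step`. Judging by the inline expressions these helpers replace elsewhere in this PR (`range[1] - range[0]`), they can be sketched as follows (an inference from the diff, not a verbatim copy of `common_helper`):

```python
from typing import List


def calc_int_step(ranges : List[int]) -> int:
	# step size is the spacing between the first two values of the range
	return ranges[1] - ranges[0]


def calc_float_step(ranges : List[float]) -> float:
	# round to avoid float artifacts like 0.050000000000000003
	return round(ranges[1] - ranges[0], 2)
```

Centralizing the step calculation is what lets every slider in this PR drop its hand-written `range[1] - range[0]` expression.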

View File

@@ -0,0 +1,50 @@
from typing import Optional
import gradio
import facefusion.choices
from facefusion import face_landmarker, state_manager, wording
from facefusion.common_helper import calc_float_step
from facefusion.typing import FaceLandmarkerModel, Score
from facefusion.uis.core import register_ui_component

FACE_LANDMARKER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_LANDMARKER_SCORE_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
	global FACE_LANDMARKER_MODEL_DROPDOWN
	global FACE_LANDMARKER_SCORE_SLIDER
	FACE_LANDMARKER_MODEL_DROPDOWN = gradio.Dropdown(
		label = wording.get('uis.face_landmarker_model_dropdown'),
		choices = facefusion.choices.face_landmarker_models,
		value = state_manager.get_item('face_landmarker_model')
	)
	FACE_LANDMARKER_SCORE_SLIDER = gradio.Slider(
		label = wording.get('uis.face_landmarker_score_slider'),
		value = state_manager.get_item('face_landmarker_score'),
		step = calc_float_step(facefusion.choices.face_landmarker_score_range),
		minimum = facefusion.choices.face_landmarker_score_range[0],
		maximum = facefusion.choices.face_landmarker_score_range[-1]
	)
	register_ui_component('face_landmarker_model_dropdown', FACE_LANDMARKER_MODEL_DROPDOWN)
	register_ui_component('face_landmarker_score_slider', FACE_LANDMARKER_SCORE_SLIDER)


def listen() -> None:
	FACE_LANDMARKER_MODEL_DROPDOWN.change(update_face_landmarker_model, inputs = FACE_LANDMARKER_MODEL_DROPDOWN, outputs = FACE_LANDMARKER_MODEL_DROPDOWN)
	FACE_LANDMARKER_SCORE_SLIDER.release(update_face_landmarker_score, inputs = FACE_LANDMARKER_SCORE_SLIDER)


def update_face_landmarker_model(face_landmarker_model : FaceLandmarkerModel) -> gradio.Dropdown:
	face_landmarker.clear_inference_pool()
	state_manager.set_item('face_landmarker_model', face_landmarker_model)
	if face_landmarker.pre_check():
		return gradio.Dropdown(value = state_manager.get_item('face_landmarker_model'))
	return gradio.Dropdown()


def update_face_landmarker_score(face_landmarker_score : Score) -> None:
	state_manager.set_item('face_landmarker_score', face_landmarker_score)

View File

@@ -1,16 +1,15 @@
-from typing import Optional, Tuple, List
+from typing import List, Optional, Tuple
 import gradio
-import facefusion.globals
 import facefusion.choices
-from facefusion import wording
-from facefusion.typing import FaceMaskType, FaceMaskRegion
+from facefusion import state_manager, wording
+from facefusion.common_helper import calc_float_step, calc_int_step
+from facefusion.typing import FaceMaskRegion, FaceMaskType
 from facefusion.uis.core import register_ui_component
 FACE_MASK_TYPES_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 FACE_MASK_BLUR_SLIDER : Optional[gradio.Slider] = None
-FACE_MASK_BOX_GROUP : Optional[gradio.Group] = None
-FACE_MASK_REGION_GROUP : Optional[gradio.Group] = None
 FACE_MASK_PADDING_TOP_SLIDER : Optional[gradio.Slider] = None
 FACE_MASK_PADDING_RIGHT_SLIDER : Optional[gradio.Slider] = None
 FACE_MASK_PADDING_BOTTOM_SLIDER : Optional[gradio.Slider] = None
@@ -20,100 +19,105 @@ FACE_MASK_REGION_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 def render() -> None:
 	global FACE_MASK_TYPES_CHECKBOX_GROUP
+	global FACE_MASK_REGION_CHECKBOX_GROUP
 	global FACE_MASK_BLUR_SLIDER
-	global FACE_MASK_BOX_GROUP
-	global FACE_MASK_REGION_GROUP
 	global FACE_MASK_PADDING_TOP_SLIDER
 	global FACE_MASK_PADDING_RIGHT_SLIDER
 	global FACE_MASK_PADDING_BOTTOM_SLIDER
 	global FACE_MASK_PADDING_LEFT_SLIDER
-	global FACE_MASK_REGION_CHECKBOX_GROUP
-	has_box_mask = 'box' in facefusion.globals.face_mask_types
-	has_region_mask = 'region' in facefusion.globals.face_mask_types
+	has_box_mask = 'box' in state_manager.get_item('face_mask_types')
+	has_region_mask = 'region' in state_manager.get_item('face_mask_types')
 	FACE_MASK_TYPES_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.face_mask_types_checkbox_group'),
 		choices = facefusion.choices.face_mask_types,
-		value = facefusion.globals.face_mask_types
+		value = state_manager.get_item('face_mask_types')
 	)
-	with gradio.Group(visible = has_box_mask) as FACE_MASK_BOX_GROUP:
-		FACE_MASK_BLUR_SLIDER = gradio.Slider(
-			label = wording.get('uis.face_mask_blur_slider'),
-			step = facefusion.choices.face_mask_blur_range[1] - facefusion.choices.face_mask_blur_range[0],
-			minimum = facefusion.choices.face_mask_blur_range[0],
-			maximum = facefusion.choices.face_mask_blur_range[-1],
-			value = facefusion.globals.face_mask_blur
-		)
+	FACE_MASK_REGION_CHECKBOX_GROUP = gradio.CheckboxGroup(
+		label = wording.get('uis.face_mask_region_checkbox_group'),
+		choices = facefusion.choices.face_mask_regions,
+		value = state_manager.get_item('face_mask_regions'),
+		visible = has_region_mask
+	)
+	FACE_MASK_BLUR_SLIDER = gradio.Slider(
+		label = wording.get('uis.face_mask_blur_slider'),
+		step = calc_float_step(facefusion.choices.face_mask_blur_range),
+		minimum = facefusion.choices.face_mask_blur_range[0],
+		maximum = facefusion.choices.face_mask_blur_range[-1],
+		value = state_manager.get_item('face_mask_blur'),
+		visible = has_box_mask
+	)
+	with gradio.Group():
 		with gradio.Row():
 			FACE_MASK_PADDING_TOP_SLIDER = gradio.Slider(
 				label = wording.get('uis.face_mask_padding_top_slider'),
-				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				step = calc_int_step(facefusion.choices.face_mask_padding_range),
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
-				value = facefusion.globals.face_mask_padding[0]
+				value = state_manager.get_item('face_mask_padding')[0],
+				visible = has_box_mask
 			)
 			FACE_MASK_PADDING_RIGHT_SLIDER = gradio.Slider(
 				label = wording.get('uis.face_mask_padding_right_slider'),
-				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				step = calc_int_step(facefusion.choices.face_mask_padding_range),
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
-				value = facefusion.globals.face_mask_padding[1]
+				value = state_manager.get_item('face_mask_padding')[1],
+				visible = has_box_mask
 			)
 		with gradio.Row():
 			FACE_MASK_PADDING_BOTTOM_SLIDER = gradio.Slider(
 				label = wording.get('uis.face_mask_padding_bottom_slider'),
-				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				step = calc_int_step(facefusion.choices.face_mask_padding_range),
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
-				value = facefusion.globals.face_mask_padding[2]
+				value = state_manager.get_item('face_mask_padding')[2],
+				visible = has_box_mask
 			)
 			FACE_MASK_PADDING_LEFT_SLIDER = gradio.Slider(
 				label = wording.get('uis.face_mask_padding_left_slider'),
-				step = facefusion.choices.face_mask_padding_range[1] - facefusion.choices.face_mask_padding_range[0],
+				step = calc_int_step(facefusion.choices.face_mask_padding_range),
 				minimum = facefusion.choices.face_mask_padding_range[0],
 				maximum = facefusion.choices.face_mask_padding_range[-1],
-				value = facefusion.globals.face_mask_padding[3]
+				value = state_manager.get_item('face_mask_padding')[3],
+				visible = has_box_mask
 			)
-	with gradio.Row():
-		FACE_MASK_REGION_CHECKBOX_GROUP = gradio.CheckboxGroup(
-			label = wording.get('uis.face_mask_region_checkbox_group'),
-			choices = facefusion.choices.face_mask_regions,
-			value = facefusion.globals.face_mask_regions,
-			visible = has_region_mask
-		)
 	register_ui_component('face_mask_types_checkbox_group', FACE_MASK_TYPES_CHECKBOX_GROUP)
+	register_ui_component('face_mask_region_checkbox_group', FACE_MASK_REGION_CHECKBOX_GROUP)
 	register_ui_component('face_mask_blur_slider', FACE_MASK_BLUR_SLIDER)
 	register_ui_component('face_mask_padding_top_slider', FACE_MASK_PADDING_TOP_SLIDER)
 	register_ui_component('face_mask_padding_right_slider', FACE_MASK_PADDING_RIGHT_SLIDER)
 	register_ui_component('face_mask_padding_bottom_slider', FACE_MASK_PADDING_BOTTOM_SLIDER)
 	register_ui_component('face_mask_padding_left_slider', FACE_MASK_PADDING_LEFT_SLIDER)
-	register_ui_component('face_mask_region_checkbox_group', FACE_MASK_REGION_CHECKBOX_GROUP)
 def listen() -> None:
-	FACE_MASK_TYPES_CHECKBOX_GROUP.change(update_face_mask_type, inputs = FACE_MASK_TYPES_CHECKBOX_GROUP, outputs = [ FACE_MASK_TYPES_CHECKBOX_GROUP, FACE_MASK_BOX_GROUP, FACE_MASK_REGION_CHECKBOX_GROUP ])
-	FACE_MASK_BLUR_SLIDER.release(update_face_mask_blur, inputs = FACE_MASK_BLUR_SLIDER)
+	FACE_MASK_TYPES_CHECKBOX_GROUP.change(update_face_mask_type, inputs = FACE_MASK_TYPES_CHECKBOX_GROUP, outputs = [ FACE_MASK_TYPES_CHECKBOX_GROUP, FACE_MASK_REGION_CHECKBOX_GROUP, FACE_MASK_BLUR_SLIDER, FACE_MASK_PADDING_TOP_SLIDER, FACE_MASK_PADDING_RIGHT_SLIDER, FACE_MASK_PADDING_BOTTOM_SLIDER, FACE_MASK_PADDING_LEFT_SLIDER ])
 	FACE_MASK_REGION_CHECKBOX_GROUP.change(update_face_mask_regions, inputs = FACE_MASK_REGION_CHECKBOX_GROUP, outputs = FACE_MASK_REGION_CHECKBOX_GROUP)
+	FACE_MASK_BLUR_SLIDER.release(update_face_mask_blur, inputs = FACE_MASK_BLUR_SLIDER)
 	face_mask_padding_sliders = [ FACE_MASK_PADDING_TOP_SLIDER, FACE_MASK_PADDING_RIGHT_SLIDER, FACE_MASK_PADDING_BOTTOM_SLIDER, FACE_MASK_PADDING_LEFT_SLIDER ]
 	for face_mask_padding_slider in face_mask_padding_sliders:
 		face_mask_padding_slider.release(update_face_mask_padding, inputs = face_mask_padding_sliders)
-def update_face_mask_type(face_mask_types : List[FaceMaskType]) -> Tuple[gradio.CheckboxGroup, gradio.Group, gradio.CheckboxGroup]:
-	facefusion.globals.face_mask_types = face_mask_types or facefusion.choices.face_mask_types
+def update_face_mask_type(face_mask_types : List[FaceMaskType]) -> Tuple[gradio.CheckboxGroup, gradio.CheckboxGroup, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider]:
+	face_mask_types = face_mask_types or facefusion.choices.face_mask_types
+	state_manager.set_item('face_mask_types', face_mask_types)
 	has_box_mask = 'box' in face_mask_types
 	has_region_mask = 'region' in face_mask_types
-	return gradio.CheckboxGroup(value = facefusion.globals.face_mask_types), gradio.Group(visible = has_box_mask), gradio.CheckboxGroup(visible = has_region_mask)
-def update_face_mask_blur(face_mask_blur : float) -> None:
-	facefusion.globals.face_mask_blur = face_mask_blur
-def update_face_mask_padding(face_mask_padding_top : int, face_mask_padding_right : int, face_mask_padding_bottom : int, face_mask_padding_left : int) -> None:
-	facefusion.globals.face_mask_padding = (face_mask_padding_top, face_mask_padding_right, face_mask_padding_bottom, face_mask_padding_left)
+	return gradio.CheckboxGroup(value = state_manager.get_item('face_mask_types')), gradio.CheckboxGroup(visible = has_region_mask), gradio.Slider(visible = has_box_mask), gradio.Slider(visible = has_box_mask), gradio.Slider(visible = has_box_mask), gradio.Slider(visible = has_box_mask), gradio.Slider(visible = has_box_mask)
 def update_face_mask_regions(face_mask_regions : List[FaceMaskRegion]) -> gradio.CheckboxGroup:
-	facefusion.globals.face_mask_regions = face_mask_regions or facefusion.choices.face_mask_regions
-	return gradio.CheckboxGroup(value = facefusion.globals.face_mask_regions)
+	face_mask_regions = face_mask_regions or facefusion.choices.face_mask_regions
+	state_manager.set_item('face_mask_regions', face_mask_regions)
+	return gradio.CheckboxGroup(value = state_manager.get_item('face_mask_regions'))
+def update_face_mask_blur(face_mask_blur : float) -> None:
+	state_manager.set_item('face_mask_blur', face_mask_blur)
+def update_face_mask_padding(face_mask_padding_top : float, face_mask_padding_right : float, face_mask_padding_bottom : float, face_mask_padding_left : float) -> None:
+	face_mask_padding = (int(face_mask_padding_top), int(face_mask_padding_right), int(face_mask_padding_bottom), int(face_mask_padding_left))
+	state_manager.set_item('face_mask_padding', face_mask_padding)

View File

@@ -1,62 +1,109 @@
-from typing import List, Optional, Tuple, Any, Dict
+from typing import List, Optional, Tuple
 import gradio
+from gradio_rangeslider import RangeSlider
-import facefusion.globals
 import facefusion.choices
-from facefusion import wording
-from facefusion.face_store import clear_static_faces, clear_reference_faces
-from facefusion.vision import get_video_frame, read_static_image, normalize_frame_color
-from facefusion.filesystem import is_image, is_video
+from facefusion import state_manager, wording
+from facefusion.common_helper import calc_float_step, calc_int_step
 from facefusion.face_analyser import get_many_faces
-from facefusion.typing import VisionFrame, FaceSelectorMode
+from facefusion.face_selector import sort_and_filter_faces
+from facefusion.face_store import clear_reference_faces, clear_static_faces
+from facefusion.filesystem import is_image, is_video
+from facefusion.typing import FaceSelectorMode, FaceSelectorOrder, Gender, Race, VisionFrame
 from facefusion.uis.core import get_ui_component, get_ui_components, register_ui_component
+from facefusion.uis.typing import ComponentOptions
+from facefusion.uis.ui_helper import convert_str_none
+from facefusion.vision import get_video_frame, normalize_frame_color, read_static_image
 FACE_SELECTOR_MODE_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_SELECTOR_ORDER_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_SELECTOR_GENDER_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_SELECTOR_RACE_DROPDOWN : Optional[gradio.Dropdown] = None
+FACE_SELECTOR_AGE_RANGE_SLIDER : Optional[RangeSlider] = None
 REFERENCE_FACE_POSITION_GALLERY : Optional[gradio.Gallery] = None
 REFERENCE_FACE_DISTANCE_SLIDER : Optional[gradio.Slider] = None
 def render() -> None:
 	global FACE_SELECTOR_MODE_DROPDOWN
+	global FACE_SELECTOR_ORDER_DROPDOWN
+	global FACE_SELECTOR_GENDER_DROPDOWN
+	global FACE_SELECTOR_RACE_DROPDOWN
+	global FACE_SELECTOR_AGE_RANGE_SLIDER
 	global REFERENCE_FACE_POSITION_GALLERY
 	global REFERENCE_FACE_DISTANCE_SLIDER
-	reference_face_gallery_args : Dict[str, Any] =\
+	reference_face_gallery_options : ComponentOptions =\
 	{
 		'label': wording.get('uis.reference_face_gallery'),
 		'object_fit': 'cover',
 		'columns': 8,
 		'allow_preview': False,
-		'visible': 'reference' in facefusion.globals.face_selector_mode
+		'visible': 'reference' in state_manager.get_item('face_selector_mode')
 	}
-	if is_image(facefusion.globals.target_path):
-		reference_frame = read_static_image(facefusion.globals.target_path)
-		reference_face_gallery_args['value'] = extract_gallery_frames(reference_frame)
-	if is_video(facefusion.globals.target_path):
-		reference_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
-		reference_face_gallery_args['value'] = extract_gallery_frames(reference_frame)
+	if is_image(state_manager.get_item('target_path')):
+		reference_frame = read_static_image(state_manager.get_item('target_path'))
+		reference_face_gallery_options['value'] = extract_gallery_frames(reference_frame)
+	if is_video(state_manager.get_item('target_path')):
+		reference_frame = get_video_frame(state_manager.get_item('target_path'), state_manager.get_item('reference_frame_number'))
+		reference_face_gallery_options['value'] = extract_gallery_frames(reference_frame)
 	FACE_SELECTOR_MODE_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.face_selector_mode_dropdown'),
 		choices = facefusion.choices.face_selector_modes,
-		value = facefusion.globals.face_selector_mode
+		value = state_manager.get_item('face_selector_mode')
 	)
-	REFERENCE_FACE_POSITION_GALLERY = gradio.Gallery(**reference_face_gallery_args)
+	REFERENCE_FACE_POSITION_GALLERY = gradio.Gallery(**reference_face_gallery_options)
+	with gradio.Group():
+		with gradio.Row():
+			FACE_SELECTOR_ORDER_DROPDOWN = gradio.Dropdown(
+				label = wording.get('uis.face_selector_order_dropdown'),
+				choices = facefusion.choices.face_selector_orders,
+				value = state_manager.get_item('face_selector_order')
+			)
+			FACE_SELECTOR_GENDER_DROPDOWN = gradio.Dropdown(
+				label = wording.get('uis.face_selector_gender_dropdown'),
+				choices = [ 'none' ] + facefusion.choices.face_selector_genders,
+				value = state_manager.get_item('face_selector_gender') or 'none'
+			)
+			FACE_SELECTOR_RACE_DROPDOWN = gradio.Dropdown(
+				label = wording.get('uis.face_selector_race_dropdown'),
+				choices = [ 'none' ] + facefusion.choices.face_selector_races,
+				value = state_manager.get_item('face_selector_race') or 'none'
+			)
+		with gradio.Row():
+			face_selector_age_start = state_manager.get_item('face_selector_age_start') or facefusion.choices.face_selector_age_range[0]
+			face_selector_age_end = state_manager.get_item('face_selector_age_end') or facefusion.choices.face_selector_age_range[-1]
+			FACE_SELECTOR_AGE_RANGE_SLIDER = RangeSlider(
+				label = wording.get('uis.face_selector_age_range_slider'),
+				minimum = facefusion.choices.face_selector_age_range[0],
+				maximum = facefusion.choices.face_selector_age_range[-1],
+				value = (face_selector_age_start, face_selector_age_end),
+				step = calc_int_step(facefusion.choices.face_selector_age_range)
+			)
 	REFERENCE_FACE_DISTANCE_SLIDER = gradio.Slider(
 		label = wording.get('uis.reference_face_distance_slider'),
-		value = facefusion.globals.reference_face_distance,
-		step = facefusion.choices.reference_face_distance_range[1] - facefusion.choices.reference_face_distance_range[0],
+		value = state_manager.get_item('reference_face_distance'),
+		step = calc_float_step(facefusion.choices.reference_face_distance_range),
 		minimum = facefusion.choices.reference_face_distance_range[0],
 		maximum = facefusion.choices.reference_face_distance_range[-1],
-		visible = 'reference' in facefusion.globals.face_selector_mode
+		visible = 'reference' in state_manager.get_item('face_selector_mode')
 	)
 	register_ui_component('face_selector_mode_dropdown', FACE_SELECTOR_MODE_DROPDOWN)
+	register_ui_component('face_selector_order_dropdown', FACE_SELECTOR_ORDER_DROPDOWN)
+	register_ui_component('face_selector_gender_dropdown', FACE_SELECTOR_GENDER_DROPDOWN)
+	register_ui_component('face_selector_race_dropdown', FACE_SELECTOR_RACE_DROPDOWN)
+	register_ui_component('face_selector_age_range_slider', FACE_SELECTOR_AGE_RANGE_SLIDER)
 	register_ui_component('reference_face_position_gallery', REFERENCE_FACE_POSITION_GALLERY)
 	register_ui_component('reference_face_distance_slider', REFERENCE_FACE_DISTANCE_SLIDER)
 def listen() -> None:
 	FACE_SELECTOR_MODE_DROPDOWN.change(update_face_selector_mode, inputs = FACE_SELECTOR_MODE_DROPDOWN, outputs = [ REFERENCE_FACE_POSITION_GALLERY, REFERENCE_FACE_DISTANCE_SLIDER ])
+	FACE_SELECTOR_ORDER_DROPDOWN.change(update_face_selector_order, inputs = FACE_SELECTOR_ORDER_DROPDOWN, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	FACE_SELECTOR_GENDER_DROPDOWN.change(update_face_selector_gender, inputs = FACE_SELECTOR_GENDER_DROPDOWN, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	FACE_SELECTOR_RACE_DROPDOWN.change(update_face_selector_race, inputs = FACE_SELECTOR_RACE_DROPDOWN, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	FACE_SELECTOR_AGE_RANGE_SLIDER.release(update_face_selector_age_range, inputs = FACE_SELECTOR_AGE_RANGE_SLIDER, outputs = REFERENCE_FACE_POSITION_GALLERY)
 	REFERENCE_FACE_POSITION_GALLERY.select(clear_and_update_reference_face_position)
 	REFERENCE_FACE_DISTANCE_SLIDER.release(update_reference_face_distance, inputs = REFERENCE_FACE_DISTANCE_SLIDER)
@@ -69,46 +116,56 @@ def listen() -> None:
 		getattr(ui_component, method)(update_reference_face_position)
 		getattr(ui_component, method)(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
-	for ui_component in get_ui_components(
-	[
-		'face_analyser_order_dropdown',
-		'face_analyser_age_dropdown',
-		'face_analyser_gender_dropdown'
-	]):
-		ui_component.change(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
 	for ui_component in get_ui_components(
 	[
 		'face_detector_model_dropdown',
-		'face_detector_size_dropdown'
+		'face_detector_size_dropdown',
+		'face_detector_angles_checkbox_group'
 	]):
 		ui_component.change(clear_and_update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
+	for ui_component in get_ui_components(
+	[
+		'face_detector_score_slider',
+		'face_landmarker_score_slider'
+	]):
+		ui_component.release(clear_and_update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
-	face_detector_score_slider = get_ui_component('face_detector_score_slider')
-	if face_detector_score_slider:
-		face_detector_score_slider.release(clear_and_update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
 	preview_frame_slider = get_ui_component('preview_frame_slider')
 	if preview_frame_slider:
-		preview_frame_slider.change(update_reference_frame_number, inputs = preview_frame_slider)
+		preview_frame_slider.release(update_reference_frame_number, inputs = preview_frame_slider)
 		preview_frame_slider.release(update_reference_position_gallery, outputs = REFERENCE_FACE_POSITION_GALLERY)
 def update_face_selector_mode(face_selector_mode : FaceSelectorMode) -> Tuple[gradio.Gallery, gradio.Slider]:
+	state_manager.set_item('face_selector_mode', face_selector_mode)
 	if face_selector_mode == 'many':
-		facefusion.globals.face_selector_mode = face_selector_mode
 		return gradio.Gallery(visible = False), gradio.Slider(visible = False)
 	if face_selector_mode == 'one':
-		facefusion.globals.face_selector_mode = face_selector_mode
 		return gradio.Gallery(visible = False), gradio.Slider(visible = False)
 	if face_selector_mode == 'reference':
-		facefusion.globals.face_selector_mode = face_selector_mode
 		return gradio.Gallery(visible = True), gradio.Slider(visible = True)
+def update_face_selector_order(face_analyser_order : FaceSelectorOrder) -> gradio.Gallery:
+	state_manager.set_item('face_selector_order', convert_str_none(face_analyser_order))
+	return update_reference_position_gallery()
+def update_face_selector_gender(face_selector_gender : Gender) -> gradio.Gallery:
+	state_manager.set_item('face_selector_gender', convert_str_none(face_selector_gender))
+	return update_reference_position_gallery()
+def update_face_selector_race(face_selector_race : Race) -> gradio.Gallery:
+	state_manager.set_item('face_selector_race', convert_str_none(face_selector_race))
+	return update_reference_position_gallery()
+def update_face_selector_age_range(face_selector_age_range : Tuple[float, float]) -> gradio.Gallery:
+	face_selector_age_start, face_selector_age_end = face_selector_age_range
+	state_manager.set_item('face_selector_age_start', int(face_selector_age_start))
+	state_manager.set_item('face_selector_age_end', int(face_selector_age_end))
+	return update_reference_position_gallery()
 def clear_and_update_reference_face_position(event : gradio.SelectData) -> gradio.Gallery:
 	clear_reference_faces()
 	clear_static_faces()
@@ -117,15 +174,15 @@ def clear_and_update_reference_face_position(event : gradio.SelectData) -> gradi
 def update_reference_face_position(reference_face_position : int = 0) -> None:
-	facefusion.globals.reference_face_position = reference_face_position
+	state_manager.set_item('reference_face_position', reference_face_position)
 def update_reference_face_distance(reference_face_distance : float) -> None:
-	facefusion.globals.reference_face_distance = reference_face_distance
+	state_manager.set_item('reference_face_distance', reference_face_distance)
 def update_reference_frame_number(reference_frame_number : int) -> None:
-	facefusion.globals.reference_frame_number = reference_frame_number
+	state_manager.set_item('reference_frame_number', reference_frame_number)
 def clear_and_update_reference_position_gallery() -> gradio.Gallery:
@@ -136,11 +193,11 @@ def clear_and_update_reference_position_gallery() -> gradio.Gallery:
 def update_reference_position_gallery() -> gradio.Gallery:
 	gallery_vision_frames = []
-	if is_image(facefusion.globals.target_path):
-		temp_vision_frame = read_static_image(facefusion.globals.target_path)
+	if is_image(state_manager.get_item('target_path')):
+		temp_vision_frame = read_static_image(state_manager.get_item('target_path'))
 		gallery_vision_frames = extract_gallery_frames(temp_vision_frame)
-	if is_video(facefusion.globals.target_path):
-		temp_vision_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
+	if is_video(state_manager.get_item('target_path')):
+		temp_vision_frame = get_video_frame(state_manager.get_item('target_path'), state_manager.get_item('reference_frame_number'))
 		gallery_vision_frames = extract_gallery_frames(temp_vision_frame)
 	if gallery_vision_frames:
 		return gradio.Gallery(value = gallery_vision_frames)
@@ -149,7 +206,7 @@ def update_reference_position_gallery() -> gradio.Gallery:
 def extract_gallery_frames(temp_vision_frame : VisionFrame) -> List[VisionFrame]:
 	gallery_vision_frames = []
-	faces = get_many_faces(temp_vision_frame)
+	faces = sort_and_filter_faces(get_many_faces([ temp_vision_frame ]))
 	for face in faces:
 		start_x, start_y, end_x, end_y = map(int, face.bounding_box)
View File

@@ -0,0 +1,63 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import get_first
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import FaceSwapperModel
from facefusion.uis.core import get_ui_component, register_ui_component

FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_SWAPPER_PIXEL_BOOST_DROPDOWN : Optional[gradio.Dropdown] = None


def render() -> None:
	global FACE_SWAPPER_MODEL_DROPDOWN
	global FACE_SWAPPER_PIXEL_BOOST_DROPDOWN
	FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
		label = wording.get('uis.face_swapper_model_dropdown'),
		choices = processors_choices.face_swapper_set.keys(),
		value = state_manager.get_item('face_swapper_model'),
		visible = 'face_swapper' in state_manager.get_item('processors')
	)
	FACE_SWAPPER_PIXEL_BOOST_DROPDOWN = gradio.Dropdown(
		label = wording.get('uis.face_swapper_pixel_boost_dropdown'),
		choices = processors_choices.face_swapper_set.get(state_manager.get_item('face_swapper_model')),
		value = state_manager.get_item('face_swapper_pixel_boost'),
		visible = 'face_swapper' in state_manager.get_item('processors')
	)
	register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
	register_ui_component('face_swapper_pixel_boost_dropdown', FACE_SWAPPER_PIXEL_BOOST_DROPDOWN)


def listen() -> None:
	FACE_SWAPPER_MODEL_DROPDOWN.change(update_face_swapper_model, inputs = FACE_SWAPPER_MODEL_DROPDOWN, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_SWAPPER_PIXEL_BOOST_DROPDOWN ])
	FACE_SWAPPER_PIXEL_BOOST_DROPDOWN.change(update_face_swapper_pixel_boost, inputs = FACE_SWAPPER_PIXEL_BOOST_DROPDOWN)
	processors_checkbox_group = get_ui_component('processors_checkbox_group')
	if processors_checkbox_group:
		processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FACE_SWAPPER_MODEL_DROPDOWN, FACE_SWAPPER_PIXEL_BOOST_DROPDOWN ])


def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Dropdown]:
	has_face_swapper = 'face_swapper' in processors
	return gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_face_swapper)


def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> Tuple[gradio.Dropdown, gradio.Dropdown]:
	face_swapper_module = load_processor_module('face_swapper')
	face_swapper_module.clear_inference_pool()
	state_manager.set_item('face_swapper_model', face_swapper_model)
	if face_swapper_module.pre_check():
		face_swapper_pixel_boost_choices = processors_choices.face_swapper_set.get(state_manager.get_item('face_swapper_model'))
		state_manager.set_item('face_swapper_pixel_boost', get_first(face_swapper_pixel_boost_choices))
		return gradio.Dropdown(value = state_manager.get_item('face_swapper_model')), gradio.Dropdown(value = state_manager.get_item('face_swapper_pixel_boost'), choices = face_swapper_pixel_boost_choices)
	return gradio.Dropdown(), gradio.Dropdown()


def update_face_swapper_pixel_boost(face_swapper_pixel_boost : str) -> None:
	state_manager.set_item('face_swapper_pixel_boost', face_swapper_pixel_boost)
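`update_face_swapper_model` above picks the default pixel boost with `get_first` (the head notes this PR replaced `[0]` indexing with `get_first()`). A minimal sketch of such a helper, assuming it returns the first element or `None` instead of raising on an empty sequence:

```python
from typing import Any, Optional, Sequence


def get_first(items : Sequence[Any]) -> Optional[Any]:
	# first element if present, None for an empty sequence
	return next(iter(items), None)
```

Compared with `items[0]`, this keeps the callback from crashing when a model has no pixel boost choices registered.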

View File

@@ -0,0 +1,77 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import FrameColorizerModel
from facefusion.uis.core import get_ui_component, register_ui_component
FRAME_COLORIZER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_COLORIZER_BLEND_SLIDER : Optional[gradio.Slider] = None
FRAME_COLORIZER_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
def render() -> None:
global FRAME_COLORIZER_MODEL_DROPDOWN
global FRAME_COLORIZER_BLEND_SLIDER
global FRAME_COLORIZER_SIZE_DROPDOWN
FRAME_COLORIZER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_model_dropdown'),
choices = processors_choices.frame_colorizer_models,
value = state_manager.get_item('frame_colorizer_model'),
visible = 'frame_colorizer' in state_manager.get_item('processors')
)
FRAME_COLORIZER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_colorizer_blend_slider'),
value = state_manager.get_item('frame_colorizer_blend'),
step = calc_int_step(processors_choices.frame_colorizer_blend_range),
minimum = processors_choices.frame_colorizer_blend_range[0],
maximum = processors_choices.frame_colorizer_blend_range[-1],
visible = 'frame_colorizer' in state_manager.get_item('processors')
)
FRAME_COLORIZER_SIZE_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_size_dropdown'),
choices = processors_choices.frame_colorizer_sizes,
value = state_manager.get_item('frame_colorizer_size'),
visible = 'frame_colorizer' in state_manager.get_item('processors')
)
register_ui_component('frame_colorizer_model_dropdown', FRAME_COLORIZER_MODEL_DROPDOWN)
register_ui_component('frame_colorizer_blend_slider', FRAME_COLORIZER_BLEND_SLIDER)
register_ui_component('frame_colorizer_size_dropdown', FRAME_COLORIZER_SIZE_DROPDOWN)
def listen() -> None:
FRAME_COLORIZER_MODEL_DROPDOWN.change(update_frame_colorizer_model, inputs = FRAME_COLORIZER_MODEL_DROPDOWN, outputs = FRAME_COLORIZER_MODEL_DROPDOWN)
FRAME_COLORIZER_BLEND_SLIDER.release(update_frame_colorizer_blend, inputs = FRAME_COLORIZER_BLEND_SLIDER)
FRAME_COLORIZER_SIZE_DROPDOWN.change(update_frame_colorizer_size, inputs = FRAME_COLORIZER_SIZE_DROPDOWN)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FRAME_COLORIZER_MODEL_DROPDOWN, FRAME_COLORIZER_BLEND_SLIDER, FRAME_COLORIZER_SIZE_DROPDOWN ])
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider, gradio.Dropdown]:
has_frame_colorizer = 'frame_colorizer' in processors
return gradio.Dropdown(visible = has_frame_colorizer), gradio.Slider(visible = has_frame_colorizer), gradio.Dropdown(visible = has_frame_colorizer)
def update_frame_colorizer_model(frame_colorizer_model : FrameColorizerModel) -> gradio.Dropdown:
frame_colorizer_module = load_processor_module('frame_colorizer')
frame_colorizer_module.clear_inference_pool()
state_manager.set_item('frame_colorizer_model', frame_colorizer_model)
if frame_colorizer_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('frame_colorizer_model'))
return gradio.Dropdown()
def update_frame_colorizer_blend(frame_colorizer_blend : float) -> None:
state_manager.set_item('frame_colorizer_blend', int(frame_colorizer_blend))
def update_frame_colorizer_size(frame_colorizer_size : str) -> None:
state_manager.set_item('frame_colorizer_size', frame_colorizer_size)
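The cross-module wiring above depends on register_ui_component() and get_ui_component() from facefusion.uis.core. A minimal sketch of such a component registry (an illustration only; the actual facefusion.uis.core implementation may differ):

```python
from typing import Any, Dict, Optional

# Hypothetical stand-in for the registry behind register_ui_component()
# and get_ui_component(); the real facefusion.uis.core may differ.
UI_COMPONENTS : Dict[str, Any] = {}

def register_ui_component(name : str, component : Any) -> None:
    # publish a rendered component under a well-known name so other modules can attach listeners
    UI_COMPONENTS[name] = component

def get_ui_component(name : str) -> Optional[Any]:
    # returns None when the component was never rendered, which is why callers
    # guard with `if processors_checkbox_group:` before wiring events
    return UI_COMPONENTS.get(name)
```

Returning None instead of raising keeps optional layouts working: a layout that omits the processors checkbox group simply skips the cross-component wiring.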


@@ -0,0 +1,63 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import FrameEnhancerModel
from facefusion.uis.core import get_ui_component, register_ui_component
FRAME_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
global FRAME_ENHANCER_MODEL_DROPDOWN
global FRAME_ENHANCER_BLEND_SLIDER
FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_enhancer_model_dropdown'),
choices = processors_choices.frame_enhancer_models,
value = state_manager.get_item('frame_enhancer_model'),
visible = 'frame_enhancer' in state_manager.get_item('processors')
)
FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_enhancer_blend_slider'),
value = state_manager.get_item('frame_enhancer_blend'),
step = calc_int_step(processors_choices.frame_enhancer_blend_range),
minimum = processors_choices.frame_enhancer_blend_range[0],
maximum = processors_choices.frame_enhancer_blend_range[-1],
visible = 'frame_enhancer' in state_manager.get_item('processors')
)
register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)
def listen() -> None:
FRAME_ENHANCER_MODEL_DROPDOWN.change(update_frame_enhancer_model, inputs = FRAME_ENHANCER_MODEL_DROPDOWN, outputs = FRAME_ENHANCER_MODEL_DROPDOWN)
FRAME_ENHANCER_BLEND_SLIDER.release(update_frame_enhancer_blend, inputs = FRAME_ENHANCER_BLEND_SLIDER)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER ])
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
has_frame_enhancer = 'frame_enhancer' in processors
return gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer)
def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gradio.Dropdown:
frame_enhancer_module = load_processor_module('frame_enhancer')
frame_enhancer_module.clear_inference_pool()
state_manager.set_item('frame_enhancer_model', frame_enhancer_model)
if frame_enhancer_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('frame_enhancer_model'))
return gradio.Dropdown()
def update_frame_enhancer_blend(frame_enhancer_blend : float) -> None:
state_manager.set_item('frame_enhancer_blend', int(frame_enhancer_blend))
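calc_int_step() replaces the inline `range[1] - range[0]` arithmetic the pre-refactor code used for slider steps (visible further down in the deleted file). A sketch under that assumption; the actual facefusion.common_helper implementation may differ:

```python
from typing import List

def calc_int_step(int_range : List[int]) -> int:
    # assumed behaviour: the slider step equals the spacing between
    # the first two entries of an evenly spaced range list
    return int_range[1] - int_range[0]
```

For a blend range built as `list(range(0, 101))` this yields a step of 1.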


@@ -1,40 +0,0 @@
from typing import List, Optional
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.processors.frame.core import load_frame_processor_module, clear_frame_processors_modules
from facefusion.filesystem import list_directory
from facefusion.uis.core import register_ui_component
FRAME_PROCESSORS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
def render() -> None:
global FRAME_PROCESSORS_CHECKBOX_GROUP
FRAME_PROCESSORS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.frame_processors_checkbox_group'),
choices = sort_frame_processors(facefusion.globals.frame_processors),
value = facefusion.globals.frame_processors
)
register_ui_component('frame_processors_checkbox_group', FRAME_PROCESSORS_CHECKBOX_GROUP)
def listen() -> None:
FRAME_PROCESSORS_CHECKBOX_GROUP.change(update_frame_processors, inputs = FRAME_PROCESSORS_CHECKBOX_GROUP, outputs = FRAME_PROCESSORS_CHECKBOX_GROUP)
def update_frame_processors(frame_processors : List[str]) -> gradio.CheckboxGroup:
facefusion.globals.frame_processors = frame_processors
clear_frame_processors_modules()
for frame_processor in frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
if not frame_processor_module.pre_check():
return gradio.CheckboxGroup()
return gradio.CheckboxGroup(value = facefusion.globals.frame_processors, choices = sort_frame_processors(facefusion.globals.frame_processors))
def sort_frame_processors(frame_processors : List[str]) -> List[str]:
available_frame_processors = list_directory('facefusion/processors/frame/modules')
return sorted(available_frame_processors, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors))
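The sort key above keeps the user-selected processors in their chosen order and places every unselected processor behind them (ties keep listing order because sorted() is stable). The same logic as a standalone sketch, with the directory listing passed in as a parameter instead of read from disk:

```python
from typing import List

def sort_frame_processors(frame_processors : List[str], available_frame_processors : List[str]) -> List[str]:
    # selected processors sort by their index in the user's list;
    # unselected ones all share the same key and stay in listing order behind them
    return sorted(available_frame_processors, key = lambda frame_processor : frame_processors.index(frame_processor) if frame_processor in frame_processors else len(frame_processors))
```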


@@ -1,216 +0,0 @@
from typing import List, Optional, Tuple
import gradio
import facefusion.globals
from facefusion import face_analyser, wording
from facefusion.processors.frame.core import load_frame_processor_module
from facefusion.processors.frame import globals as frame_processors_globals, choices as frame_processors_choices
from facefusion.processors.frame.typings import FaceDebuggerItem, FaceEnhancerModel, FaceSwapperModel, FrameColorizerModel, FrameEnhancerModel, LipSyncerModel
from facefusion.uis.core import get_ui_component, register_ui_component
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
FACE_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_COLORIZER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_COLORIZER_BLEND_SLIDER : Optional[gradio.Slider] = None
FRAME_COLORIZER_SIZE_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FRAME_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
LIP_SYNCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
def render() -> None:
global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
global FACE_ENHANCER_MODEL_DROPDOWN
global FACE_ENHANCER_BLEND_SLIDER
global FACE_SWAPPER_MODEL_DROPDOWN
global FRAME_COLORIZER_MODEL_DROPDOWN
global FRAME_COLORIZER_BLEND_SLIDER
global FRAME_COLORIZER_SIZE_DROPDOWN
global FRAME_ENHANCER_MODEL_DROPDOWN
global FRAME_ENHANCER_BLEND_SLIDER
global LIP_SYNCER_MODEL_DROPDOWN
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.face_debugger_items_checkbox_group'),
choices = frame_processors_choices.face_debugger_items,
value = frame_processors_globals.face_debugger_items,
visible = 'face_debugger' in facefusion.globals.frame_processors
)
FACE_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_enhancer_model_dropdown'),
choices = frame_processors_choices.face_enhancer_models,
value = frame_processors_globals.face_enhancer_model,
visible = 'face_enhancer' in facefusion.globals.frame_processors
)
FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.face_enhancer_blend_slider'),
value = frame_processors_globals.face_enhancer_blend,
step = frame_processors_choices.face_enhancer_blend_range[1] - frame_processors_choices.face_enhancer_blend_range[0],
minimum = frame_processors_choices.face_enhancer_blend_range[0],
maximum = frame_processors_choices.face_enhancer_blend_range[-1],
visible = 'face_enhancer' in facefusion.globals.frame_processors
)
FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_swapper_model_dropdown'),
choices = frame_processors_choices.face_swapper_models,
value = frame_processors_globals.face_swapper_model,
visible = 'face_swapper' in facefusion.globals.frame_processors
)
FRAME_COLORIZER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_model_dropdown'),
choices = frame_processors_choices.frame_colorizer_models,
value = frame_processors_globals.frame_colorizer_model,
visible = 'frame_colorizer' in facefusion.globals.frame_processors
)
FRAME_COLORIZER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_colorizer_blend_slider'),
value = frame_processors_globals.frame_colorizer_blend,
step = frame_processors_choices.frame_colorizer_blend_range[1] - frame_processors_choices.frame_colorizer_blend_range[0],
minimum = frame_processors_choices.frame_colorizer_blend_range[0],
maximum = frame_processors_choices.frame_colorizer_blend_range[-1],
visible = 'frame_colorizer' in facefusion.globals.frame_processors
)
FRAME_COLORIZER_SIZE_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_size_dropdown'),
choices = frame_processors_choices.frame_colorizer_sizes,
value = frame_processors_globals.frame_colorizer_size,
visible = 'frame_colorizer' in facefusion.globals.frame_processors
)
FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_enhancer_model_dropdown'),
choices = frame_processors_choices.frame_enhancer_models,
value = frame_processors_globals.frame_enhancer_model,
visible = 'frame_enhancer' in facefusion.globals.frame_processors
)
FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_enhancer_blend_slider'),
value = frame_processors_globals.frame_enhancer_blend,
step = frame_processors_choices.frame_enhancer_blend_range[1] - frame_processors_choices.frame_enhancer_blend_range[0],
minimum = frame_processors_choices.frame_enhancer_blend_range[0],
maximum = frame_processors_choices.frame_enhancer_blend_range[-1],
visible = 'frame_enhancer' in facefusion.globals.frame_processors
)
LIP_SYNCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.lip_syncer_model_dropdown'),
choices = frame_processors_choices.lip_syncer_models,
value = frame_processors_globals.lip_syncer_model,
visible = 'lip_syncer' in facefusion.globals.frame_processors
)
register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)
register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
register_ui_component('frame_colorizer_model_dropdown', FRAME_COLORIZER_MODEL_DROPDOWN)
register_ui_component('frame_colorizer_blend_slider', FRAME_COLORIZER_BLEND_SLIDER)
register_ui_component('frame_colorizer_size_dropdown', FRAME_COLORIZER_SIZE_DROPDOWN)
register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)
register_ui_component('lip_syncer_model_dropdown', LIP_SYNCER_MODEL_DROPDOWN)
def listen() -> None:
FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP.change(update_face_debugger_items, inputs = FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = FACE_ENHANCER_MODEL_DROPDOWN)
FACE_ENHANCER_BLEND_SLIDER.release(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
FACE_SWAPPER_MODEL_DROPDOWN.change(update_face_swapper_model, inputs = FACE_SWAPPER_MODEL_DROPDOWN, outputs = FACE_SWAPPER_MODEL_DROPDOWN)
FRAME_COLORIZER_MODEL_DROPDOWN.change(update_frame_colorizer_model, inputs = FRAME_COLORIZER_MODEL_DROPDOWN, outputs = FRAME_COLORIZER_MODEL_DROPDOWN)
FRAME_COLORIZER_BLEND_SLIDER.release(update_frame_colorizer_blend, inputs = FRAME_COLORIZER_BLEND_SLIDER)
FRAME_COLORIZER_SIZE_DROPDOWN.change(update_frame_colorizer_size, inputs = FRAME_COLORIZER_SIZE_DROPDOWN, outputs = FRAME_COLORIZER_SIZE_DROPDOWN)
FRAME_ENHANCER_MODEL_DROPDOWN.change(update_frame_enhancer_model, inputs = FRAME_ENHANCER_MODEL_DROPDOWN, outputs = FRAME_ENHANCER_MODEL_DROPDOWN)
FRAME_ENHANCER_BLEND_SLIDER.release(update_frame_enhancer_blend, inputs = FRAME_ENHANCER_BLEND_SLIDER)
LIP_SYNCER_MODEL_DROPDOWN.change(update_lip_syncer_model, inputs = LIP_SYNCER_MODEL_DROPDOWN, outputs = LIP_SYNCER_MODEL_DROPDOWN)
frame_processors_checkbox_group = get_ui_component('frame_processors_checkbox_group')
if frame_processors_checkbox_group:
frame_processors_checkbox_group.change(update_frame_processors, inputs = frame_processors_checkbox_group, outputs = [ FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP, FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FACE_SWAPPER_MODEL_DROPDOWN, FRAME_COLORIZER_MODEL_DROPDOWN, FRAME_COLORIZER_BLEND_SLIDER, FRAME_COLORIZER_SIZE_DROPDOWN, FRAME_ENHANCER_MODEL_DROPDOWN, FRAME_ENHANCER_BLEND_SLIDER, LIP_SYNCER_MODEL_DROPDOWN ])
def update_frame_processors(frame_processors : List[str]) -> Tuple[gradio.CheckboxGroup, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown]:
has_face_debugger = 'face_debugger' in frame_processors
has_face_enhancer = 'face_enhancer' in frame_processors
has_face_swapper = 'face_swapper' in frame_processors
has_frame_colorizer = 'frame_colorizer' in frame_processors
has_frame_enhancer = 'frame_enhancer' in frame_processors
has_lip_syncer = 'lip_syncer' in frame_processors
return gradio.CheckboxGroup(visible = has_face_debugger), gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Dropdown(visible = has_face_swapper), gradio.Dropdown(visible = has_frame_colorizer), gradio.Slider(visible = has_frame_colorizer), gradio.Dropdown(visible = has_frame_colorizer), gradio.Dropdown(visible = has_frame_enhancer), gradio.Slider(visible = has_frame_enhancer), gradio.Dropdown(visible = has_lip_syncer)
def update_face_debugger_items(face_debugger_items : List[FaceDebuggerItem]) -> None:
frame_processors_globals.face_debugger_items = face_debugger_items
def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
frame_processors_globals.face_enhancer_model = face_enhancer_model
face_enhancer_module = load_frame_processor_module('face_enhancer')
face_enhancer_module.clear_frame_processor()
face_enhancer_module.set_options('model', face_enhancer_module.MODELS[face_enhancer_model])
if face_enhancer_module.pre_check():
return gradio.Dropdown(value = frame_processors_globals.face_enhancer_model)
return gradio.Dropdown()
def update_face_enhancer_blend(face_enhancer_blend : int) -> None:
frame_processors_globals.face_enhancer_blend = face_enhancer_blend
def update_face_swapper_model(face_swapper_model : FaceSwapperModel) -> gradio.Dropdown:
frame_processors_globals.face_swapper_model = face_swapper_model
if face_swapper_model == 'blendswap_256':
facefusion.globals.face_recognizer_model = 'arcface_blendswap'
if face_swapper_model == 'inswapper_128' or face_swapper_model == 'inswapper_128_fp16':
facefusion.globals.face_recognizer_model = 'arcface_inswapper'
if face_swapper_model == 'simswap_256' or face_swapper_model == 'simswap_512_unofficial':
facefusion.globals.face_recognizer_model = 'arcface_simswap'
if face_swapper_model == 'uniface_256':
facefusion.globals.face_recognizer_model = 'arcface_uniface'
face_swapper_module = load_frame_processor_module('face_swapper')
face_swapper_module.clear_model_initializer()
face_swapper_module.clear_frame_processor()
face_swapper_module.set_options('model', face_swapper_module.MODELS[face_swapper_model])
if face_analyser.pre_check() and face_swapper_module.pre_check():
return gradio.Dropdown(value = frame_processors_globals.face_swapper_model)
return gradio.Dropdown()
def update_frame_colorizer_model(frame_colorizer_model : FrameColorizerModel) -> gradio.Dropdown:
frame_processors_globals.frame_colorizer_model = frame_colorizer_model
frame_colorizer_module = load_frame_processor_module('frame_colorizer')
frame_colorizer_module.clear_frame_processor()
frame_colorizer_module.set_options('model', frame_colorizer_module.MODELS[frame_colorizer_model])
if frame_colorizer_module.pre_check():
return gradio.Dropdown(value = frame_processors_globals.frame_colorizer_model)
return gradio.Dropdown()
def update_frame_colorizer_blend(frame_colorizer_blend : int) -> None:
frame_processors_globals.frame_colorizer_blend = frame_colorizer_blend
def update_frame_colorizer_size(frame_colorizer_size : str) -> gradio.Dropdown:
frame_processors_globals.frame_colorizer_size = frame_colorizer_size
return gradio.Dropdown(value = frame_processors_globals.frame_colorizer_size)
def update_frame_enhancer_model(frame_enhancer_model : FrameEnhancerModel) -> gradio.Dropdown:
frame_processors_globals.frame_enhancer_model = frame_enhancer_model
frame_enhancer_module = load_frame_processor_module('frame_enhancer')
frame_enhancer_module.clear_frame_processor()
frame_enhancer_module.set_options('model', frame_enhancer_module.MODELS[frame_enhancer_model])
if frame_enhancer_module.pre_check():
return gradio.Dropdown(value = frame_processors_globals.frame_enhancer_model)
return gradio.Dropdown()
def update_frame_enhancer_blend(frame_enhancer_blend : int) -> None:
frame_processors_globals.frame_enhancer_blend = frame_enhancer_blend
def update_lip_syncer_model(lip_syncer_model : LipSyncerModel) -> gradio.Dropdown:
frame_processors_globals.lip_syncer_model = lip_syncer_model
lip_syncer_module = load_frame_processor_module('lip_syncer')
lip_syncer_module.clear_frame_processor()
lip_syncer_module.set_options('model', lip_syncer_module.MODELS[lip_syncer_model])
if lip_syncer_module.pre_check():
return gradio.Dropdown(value = frame_processors_globals.lip_syncer_model)
return gradio.Dropdown()
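The if-chain in update_face_swapper_model() pairs each swapper model with its face recognizer. The same pairing can be written as a lookup table (model and recognizer names transcribed from the code above; returning None mirrors the if-chain leaving the current setting untouched for unknown models):

```python
from typing import Dict, Optional

# Pairings transcribed from the update_face_swapper_model() if-chain.
FACE_RECOGNIZER_BY_SWAPPER : Dict[str, str] = {
    'blendswap_256' : 'arcface_blendswap',
    'inswapper_128' : 'arcface_inswapper',
    'inswapper_128_fp16' : 'arcface_inswapper',
    'simswap_256' : 'arcface_simswap',
    'simswap_512_unofficial' : 'arcface_simswap',
    'uniface_256' : 'arcface_uniface'
}

def resolve_face_recognizer_model(face_swapper_model : str) -> Optional[str]:
    # None means "no matching recognizer", in which case the caller keeps the current setting
    return FACE_RECOGNIZER_BY_SWAPPER.get(face_swapper_model)
```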


@@ -0,0 +1,110 @@
from time import sleep
from typing import Optional, Tuple
import gradio
from facefusion import process_manager, state_manager, wording
from facefusion.args import collect_step_args
from facefusion.core import process_step
from facefusion.filesystem import is_directory, is_image, is_video
from facefusion.jobs import job_helper, job_manager, job_runner, job_store
from facefusion.temp_helper import clear_temp_directory
from facefusion.typing import Args, UiWorkflow
from facefusion.uis.core import get_ui_component
from facefusion.uis.ui_helper import suggest_output_path
INSTANT_RUNNER_WRAPPER : Optional[gradio.Row] = None
INSTANT_RUNNER_START_BUTTON : Optional[gradio.Button] = None
INSTANT_RUNNER_STOP_BUTTON : Optional[gradio.Button] = None
INSTANT_RUNNER_CLEAR_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global INSTANT_RUNNER_WRAPPER
global INSTANT_RUNNER_START_BUTTON
global INSTANT_RUNNER_STOP_BUTTON
global INSTANT_RUNNER_CLEAR_BUTTON
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
is_instant_runner = state_manager.get_item('ui_workflow') == 'instant_runner'
with gradio.Row(visible = is_instant_runner) as INSTANT_RUNNER_WRAPPER:
INSTANT_RUNNER_START_BUTTON = gradio.Button(
value = wording.get('uis.start_button'),
variant = 'primary',
size = 'sm'
)
INSTANT_RUNNER_STOP_BUTTON = gradio.Button(
value = wording.get('uis.stop_button'),
variant = 'primary',
size = 'sm',
visible = False
)
INSTANT_RUNNER_CLEAR_BUTTON = gradio.Button(
value = wording.get('uis.clear_button'),
size = 'sm'
)
def listen() -> None:
output_image = get_ui_component('output_image')
output_video = get_ui_component('output_video')
ui_workflow_dropdown = get_ui_component('ui_workflow_dropdown')
if output_image and output_video:
INSTANT_RUNNER_START_BUTTON.click(start, outputs = [ INSTANT_RUNNER_START_BUTTON, INSTANT_RUNNER_STOP_BUTTON ])
INSTANT_RUNNER_START_BUTTON.click(run, outputs = [ INSTANT_RUNNER_START_BUTTON, INSTANT_RUNNER_STOP_BUTTON, output_image, output_video ])
INSTANT_RUNNER_STOP_BUTTON.click(stop, outputs = [ INSTANT_RUNNER_START_BUTTON, INSTANT_RUNNER_STOP_BUTTON ])
INSTANT_RUNNER_CLEAR_BUTTON.click(clear, outputs = [ output_image, output_video ])
if ui_workflow_dropdown:
ui_workflow_dropdown.change(remote_update, inputs = ui_workflow_dropdown, outputs = INSTANT_RUNNER_WRAPPER)
def remote_update(ui_workflow : UiWorkflow) -> gradio.Row:
is_instant_runner = ui_workflow == 'instant_runner'
return gradio.Row(visible = is_instant_runner)
def start() -> Tuple[gradio.Button, gradio.Button]:
while not process_manager.is_processing():
sleep(0.5)
return gradio.Button(visible = False), gradio.Button(visible = True)
def run() -> Tuple[gradio.Button, gradio.Button, gradio.Image, gradio.Video]:
step_args = collect_step_args()
output_path = step_args.get('output_path')
if is_directory(step_args.get('output_path')):
step_args['output_path'] = suggest_output_path(step_args.get('output_path'), state_manager.get_item('target_path'))
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
create_and_run_job(step_args)
state_manager.set_item('output_path', output_path)
if is_image(step_args.get('output_path')):
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Image(value = step_args.get('output_path'), visible = True), gradio.Video(value = None, visible = False)
if is_video(step_args.get('output_path')):
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Image(value = None, visible = False), gradio.Video(value = step_args.get('output_path'), visible = True)
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Image(value = None), gradio.Video(value = None)
def create_and_run_job(step_args : Args) -> bool:
job_id = job_helper.suggest_job_id('ui')
for key in job_store.get_job_keys():
state_manager.sync_item(key) #type:ignore
return job_manager.create_job(job_id) and job_manager.add_step(job_id, step_args) and job_manager.submit_job(job_id) and job_runner.run_job(job_id, process_step)
def stop() -> Tuple[gradio.Button, gradio.Button]:
process_manager.stop()
return gradio.Button(visible = True), gradio.Button(visible = False)
def clear() -> Tuple[gradio.Image, gradio.Video]:
while process_manager.is_processing():
sleep(0.5)
if state_manager.get_item('target_path'):
clear_temp_directory(state_manager.get_item('target_path'))
return gradio.Image(value = None), gradio.Video(value = None)
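create_and_run_job() chains create_job(), add_step(), submit_job() and run_job() with `and`, so the first stage that returns False short-circuits everything after it. The pattern in isolation, with stages as plain callables (a sketch, not FaceFusion API):

```python
from typing import Callable, List

def run_pipeline(stages : List[Callable[[], bool]]) -> bool:
    # equivalent to chaining the stages with `and`: stop at the first failure
    for stage in stages:
        if not stage():
            return False
    return True
```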


@@ -0,0 +1,50 @@
from typing import List, Optional
import gradio
import facefusion.choices
from facefusion import state_manager, wording
from facefusion.common_helper import get_first
from facefusion.jobs import job_list, job_manager
from facefusion.typing import JobStatus
from facefusion.uis.core import get_ui_component
JOB_LIST_JOBS_DATAFRAME : Optional[gradio.Dataframe] = None
JOB_LIST_REFRESH_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global JOB_LIST_JOBS_DATAFRAME
global JOB_LIST_REFRESH_BUTTON
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
job_status = get_first(facefusion.choices.job_statuses)
job_headers, job_contents = job_list.compose_job_list(job_status)
JOB_LIST_JOBS_DATAFRAME = gradio.Dataframe(
headers = job_headers,
value = job_contents,
datatype = [ 'str', 'number', 'date', 'date', 'str' ],
show_label = False
)
JOB_LIST_REFRESH_BUTTON = gradio.Button(
value = wording.get('uis.refresh_button'),
variant = 'primary',
size = 'sm'
)
def listen() -> None:
job_list_job_status_checkbox_group = get_ui_component('job_list_job_status_checkbox_group')
if job_list_job_status_checkbox_group:
job_list_job_status_checkbox_group.change(update_job_dataframe, inputs = job_list_job_status_checkbox_group, outputs = JOB_LIST_JOBS_DATAFRAME)
JOB_LIST_REFRESH_BUTTON.click(update_job_dataframe, inputs = job_list_job_status_checkbox_group, outputs = JOB_LIST_JOBS_DATAFRAME)
def update_job_dataframe(job_statuses : List[JobStatus]) -> gradio.Dataframe:
all_job_contents = []
for job_status in job_statuses:
_, job_contents = job_list.compose_job_list(job_status)
all_job_contents.extend(job_contents)
return gradio.Dataframe(value = all_job_contents)
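update_job_dataframe() rebuilds the table by concatenating the rows that job_list.compose_job_list() yields for each selected status. The aggregation in isolation, with a hypothetical in-memory stand-in for compose_job_list():

```python
from typing import Any, Dict, List, Tuple

# Hypothetical job store; the real rows come from job files on disk.
JOBS : Dict[str, List[List[Any]]] = {
    'drafted' : [ [ 'job-a', 1, None, None, 'drafted' ] ],
    'queued' : [ [ 'job-b', 2, None, None, 'queued' ] ]
}

def compose_job_list(job_status : str) -> Tuple[List[str], List[List[Any]]]:
    # headers are assumed for illustration; the real module defines its own
    headers = [ 'job id', 'steps', 'date created', 'date updated', 'job status' ]
    return headers, JOBS.get(job_status, [])

def collect_job_contents(job_statuses : List[str]) -> List[List[Any]]:
    # mirrors update_job_dataframe(): one compose_job_list() call per selected status
    all_job_contents = []
    for job_status in job_statuses:
        _, job_contents = compose_job_list(job_status)
        all_job_contents.extend(job_contents)
    return all_job_contents
```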


@@ -0,0 +1,35 @@
from typing import List, Optional
import gradio
import facefusion.choices
from facefusion import state_manager, wording
from facefusion.common_helper import get_first
from facefusion.jobs import job_manager
from facefusion.typing import JobStatus
from facefusion.uis.core import register_ui_component
JOB_LIST_JOB_STATUS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
def render() -> None:
global JOB_LIST_JOB_STATUS_CHECKBOX_GROUP
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
job_status = get_first(facefusion.choices.job_statuses)
JOB_LIST_JOB_STATUS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.job_list_status_checkbox_group'),
choices = facefusion.choices.job_statuses,
value = job_status
)
register_ui_component('job_list_job_status_checkbox_group', JOB_LIST_JOB_STATUS_CHECKBOX_GROUP)
def listen() -> None:
JOB_LIST_JOB_STATUS_CHECKBOX_GROUP.change(update_job_status_checkbox_group, inputs = JOB_LIST_JOB_STATUS_CHECKBOX_GROUP, outputs = JOB_LIST_JOB_STATUS_CHECKBOX_GROUP)
def update_job_status_checkbox_group(job_statuses : List[JobStatus]) -> gradio.CheckboxGroup:
job_statuses = job_statuses or facefusion.choices.job_statuses
return gradio.CheckboxGroup(value = job_statuses)


@@ -0,0 +1,184 @@
from typing import List, Optional, Tuple
import gradio
from facefusion import logger, state_manager, wording
from facefusion.args import collect_step_args
from facefusion.common_helper import get_first, get_last
from facefusion.filesystem import is_directory
from facefusion.jobs import job_manager
from facefusion.typing import UiWorkflow
from facefusion.uis import choices as uis_choices
from facefusion.uis.core import get_ui_component
from facefusion.uis.typing import JobManagerAction
from facefusion.uis.ui_helper import convert_int_none, convert_str_none, suggest_output_path
JOB_MANAGER_WRAPPER : Optional[gradio.Column] = None
JOB_MANAGER_JOB_ACTION_DROPDOWN : Optional[gradio.Dropdown] = None
JOB_MANAGER_JOB_ID_TEXTBOX : Optional[gradio.Textbox] = None
JOB_MANAGER_JOB_ID_DROPDOWN : Optional[gradio.Dropdown] = None
JOB_MANAGER_STEP_INDEX_DROPDOWN : Optional[gradio.Dropdown] = None
JOB_MANAGER_APPLY_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global JOB_MANAGER_WRAPPER
global JOB_MANAGER_JOB_ACTION_DROPDOWN
global JOB_MANAGER_JOB_ID_TEXTBOX
global JOB_MANAGER_JOB_ID_DROPDOWN
global JOB_MANAGER_STEP_INDEX_DROPDOWN
global JOB_MANAGER_APPLY_BUTTON
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
is_job_manager = state_manager.get_item('ui_workflow') == 'job_manager'
drafted_job_ids = job_manager.find_job_ids('drafted') or [ 'none' ]
with gradio.Column(visible = is_job_manager) as JOB_MANAGER_WRAPPER:
JOB_MANAGER_JOB_ACTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.job_manager_job_action_dropdown'),
choices = uis_choices.job_manager_actions,
value = get_first(uis_choices.job_manager_actions)
)
JOB_MANAGER_JOB_ID_TEXTBOX = gradio.Textbox(
label = wording.get('uis.job_manager_job_id_dropdown'),
max_lines = 1,
interactive = True
)
JOB_MANAGER_JOB_ID_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.job_manager_job_id_dropdown'),
choices = drafted_job_ids,
value = get_last(drafted_job_ids),
interactive = True,
visible = False
)
JOB_MANAGER_STEP_INDEX_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.job_manager_step_index_dropdown'),
choices = [ 'none' ],
value = 'none',
interactive = True,
visible = False
)
JOB_MANAGER_APPLY_BUTTON = gradio.Button(
value = wording.get('uis.apply_button'),
variant = 'primary',
size = 'sm'
)
def listen() -> None:
JOB_MANAGER_JOB_ACTION_DROPDOWN.change(update, inputs = [ JOB_MANAGER_JOB_ACTION_DROPDOWN, JOB_MANAGER_JOB_ID_DROPDOWN ], outputs = [ JOB_MANAGER_JOB_ID_TEXTBOX, JOB_MANAGER_JOB_ID_DROPDOWN, JOB_MANAGER_STEP_INDEX_DROPDOWN ])
JOB_MANAGER_JOB_ID_DROPDOWN.change(update_step_index, inputs = JOB_MANAGER_JOB_ID_DROPDOWN, outputs = JOB_MANAGER_STEP_INDEX_DROPDOWN)
JOB_MANAGER_APPLY_BUTTON.click(apply, inputs = [ JOB_MANAGER_JOB_ACTION_DROPDOWN, JOB_MANAGER_JOB_ID_TEXTBOX, JOB_MANAGER_JOB_ID_DROPDOWN, JOB_MANAGER_STEP_INDEX_DROPDOWN ], outputs = [ JOB_MANAGER_JOB_ACTION_DROPDOWN, JOB_MANAGER_JOB_ID_TEXTBOX, JOB_MANAGER_JOB_ID_DROPDOWN, JOB_MANAGER_STEP_INDEX_DROPDOWN ])
ui_workflow_dropdown = get_ui_component('ui_workflow_dropdown')
if ui_workflow_dropdown:
ui_workflow_dropdown.change(remote_update, inputs = ui_workflow_dropdown, outputs = [ JOB_MANAGER_WRAPPER, JOB_MANAGER_JOB_ACTION_DROPDOWN, JOB_MANAGER_JOB_ID_TEXTBOX, JOB_MANAGER_JOB_ID_DROPDOWN, JOB_MANAGER_STEP_INDEX_DROPDOWN ])
def remote_update(ui_workflow : UiWorkflow) -> Tuple[gradio.Column, gradio.Dropdown, gradio.Textbox, gradio.Dropdown, gradio.Dropdown]:
is_job_manager = ui_workflow == 'job_manager'
return gradio.Column(visible = is_job_manager), gradio.Dropdown(value = get_first(uis_choices.job_manager_actions)), gradio.Textbox(value = None, visible = True), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False)
def apply(job_action : JobManagerAction, created_job_id : str, selected_job_id : str, selected_step_index : int) -> Tuple[gradio.Dropdown, gradio.Textbox, gradio.Dropdown, gradio.Dropdown]:
created_job_id = convert_str_none(created_job_id)
selected_job_id = convert_str_none(selected_job_id)
selected_step_index = convert_int_none(selected_step_index)
step_args = collect_step_args()
output_path = step_args.get('output_path')
if is_directory(step_args.get('output_path')):
step_args['output_path'] = suggest_output_path(step_args.get('output_path'), state_manager.get_item('target_path'))
if job_action == 'job-create':
if created_job_id and job_manager.create_job(created_job_id):
updated_job_ids = job_manager.find_job_ids('drafted') or [ 'none' ]
logger.info(wording.get('job_created').format(job_id = created_job_id), __name__)
return gradio.Dropdown(value = 'job-add-step'), gradio.Textbox(visible = False), gradio.Dropdown(value = created_job_id, choices = updated_job_ids, visible = True), gradio.Dropdown()
else:
logger.error(wording.get('job_not_created').format(job_id = created_job_id), __name__)
if job_action == 'job-submit':
if selected_job_id and job_manager.submit_job(selected_job_id):
updated_job_ids = job_manager.find_job_ids('drafted') or [ 'none' ]
logger.info(wording.get('job_submitted').format(job_id = selected_job_id), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids, visible = True), gradio.Dropdown()
else:
logger.error(wording.get('job_not_submitted').format(job_id = selected_job_id), __name__)
if job_action == 'job-delete':
if selected_job_id and job_manager.delete_job(selected_job_id):
updated_job_ids = job_manager.find_job_ids('drafted') + job_manager.find_job_ids('queued') + job_manager.find_job_ids('failed') + job_manager.find_job_ids('completed') or [ 'none' ]
logger.info(wording.get('job_deleted').format(job_id = selected_job_id), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids, visible = True), gradio.Dropdown()
else:
logger.error(wording.get('job_not_deleted').format(job_id = selected_job_id), __name__)
if job_action == 'job-add-step':
if selected_job_id and job_manager.add_step(selected_job_id, step_args):
state_manager.set_item('output_path', output_path)
logger.info(wording.get('job_step_added').format(job_id = selected_job_id), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(visible = True), gradio.Dropdown(visible = False)
else:
state_manager.set_item('output_path', output_path)
logger.error(wording.get('job_step_not_added').format(job_id = selected_job_id), __name__)
if job_action == 'job-remix-step':
if selected_job_id and job_manager.has_step(selected_job_id, selected_step_index) and job_manager.remix_step(selected_job_id, selected_step_index, step_args):
updated_step_choices = get_step_choices(selected_job_id) or [ 'none' ] #type:ignore[list-item]
state_manager.set_item('output_path', output_path)
logger.info(wording.get('job_remix_step_added').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(visible = True), gradio.Dropdown(value = get_last(updated_step_choices), choices = updated_step_choices, visible = True)
else:
state_manager.set_item('output_path', output_path)
logger.error(wording.get('job_remix_step_not_added').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
if job_action == 'job-insert-step':
if selected_job_id and job_manager.has_step(selected_job_id, selected_step_index) and job_manager.insert_step(selected_job_id, selected_step_index, step_args):
updated_step_choices = get_step_choices(selected_job_id) or [ 'none' ] #type:ignore[list-item]
state_manager.set_item('output_path', output_path)
logger.info(wording.get('job_step_inserted').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(visible = True), gradio.Dropdown(value = get_last(updated_step_choices), choices = updated_step_choices, visible = True)
else:
state_manager.set_item('output_path', output_path)
logger.error(wording.get('job_step_not_inserted').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
if job_action == 'job-remove-step':
if selected_job_id and job_manager.has_step(selected_job_id, selected_step_index) and job_manager.remove_step(selected_job_id, selected_step_index):
updated_step_choices = get_step_choices(selected_job_id) or [ 'none' ] #type:ignore[list-item]
logger.info(wording.get('job_step_removed').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(visible = True), gradio.Dropdown(value = get_last(updated_step_choices), choices = updated_step_choices, visible = True)
else:
logger.error(wording.get('job_step_not_removed').format(job_id = selected_job_id, step_index = selected_step_index), __name__)
return gradio.Dropdown(), gradio.Textbox(), gradio.Dropdown(), gradio.Dropdown()
def get_step_choices(job_id : str) -> List[int]:
steps = job_manager.get_steps(job_id)
return [ index for index, _ in enumerate(steps) ]
def update(job_action : JobManagerAction, selected_job_id : str) -> Tuple[gradio.Textbox, gradio.Dropdown, gradio.Dropdown]:
if job_action == 'job-create':
return gradio.Textbox(value = None, visible = True), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False)
if job_action == 'job-delete':
updated_job_ids = job_manager.find_job_ids('drafted') + job_manager.find_job_ids('queued') + job_manager.find_job_ids('failed') + job_manager.find_job_ids('completed') or [ 'none' ]
updated_job_id = selected_job_id if selected_job_id in updated_job_ids else get_last(updated_job_ids)
return gradio.Textbox(visible = False), gradio.Dropdown(value = updated_job_id, choices = updated_job_ids, visible = True), gradio.Dropdown(visible = False)
if job_action in [ 'job-submit', 'job-add-step' ]:
updated_job_ids = job_manager.find_job_ids('drafted') or [ 'none' ]
updated_job_id = selected_job_id if selected_job_id in updated_job_ids else get_last(updated_job_ids)
return gradio.Textbox(visible = False), gradio.Dropdown(value = updated_job_id, choices = updated_job_ids, visible = True), gradio.Dropdown(visible = False)
if job_action in [ 'job-remix-step', 'job-insert-step', 'job-remove-step' ]:
updated_job_ids = job_manager.find_job_ids('drafted') or [ 'none' ]
updated_job_id = selected_job_id if selected_job_id in updated_job_ids else get_last(updated_job_ids)
updated_step_choices = get_step_choices(updated_job_id) or [ 'none' ] #type:ignore[list-item]
return gradio.Textbox(visible = False), gradio.Dropdown(value = updated_job_id, choices = updated_job_ids, visible = True), gradio.Dropdown(value = get_last(updated_step_choices), choices = updated_step_choices, visible = True)
return gradio.Textbox(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False)
def update_step_index(job_id : str) -> gradio.Dropdown:
step_choices = get_step_choices(job_id) or [ 'none' ] #type:ignore[list-item]
return gradio.Dropdown(value = get_last(step_choices), choices = step_choices)
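The job manager UI above round-trips a literal `'none'` entry through its dropdowns and preselects the newest job or step with `get_last`. A minimal sketch of the helpers it imports — the names come from the imports above, the bodies are assumptions based on how they are called:

```python
from typing import Any, List, Optional, Union

def convert_str_none(value : str) -> Optional[str]:
	# the dropdowns use the literal string 'none' as a placeholder entry
	return None if value == 'none' else value

def convert_int_none(value : Union[int, str]) -> Optional[int]:
	# step index dropdowns fall back to the same placeholder
	return None if value == 'none' else int(value)

def get_last(sequence : List[Any]) -> Optional[Any]:
	# used to preselect the most recent job id or step index
	return sequence[-1] if sequence else None
```

This is why every choice list is built as `find_job_ids(...) or [ 'none' ]`: Gradio dropdowns need at least one entry, and the sentinel is converted back to a real `None` before it reaches the job manager.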


@@ -0,0 +1,136 @@
from time import sleep
from typing import Optional, Tuple
import gradio
from facefusion import logger, process_manager, state_manager, wording
from facefusion.common_helper import get_first, get_last
from facefusion.core import process_step
from facefusion.jobs import job_manager, job_runner, job_store
from facefusion.typing import UiWorkflow
from facefusion.uis import choices as uis_choices
from facefusion.uis.core import get_ui_component
from facefusion.uis.typing import JobRunnerAction
from facefusion.uis.ui_helper import convert_str_none
JOB_RUNNER_WRAPPER : Optional[gradio.Column] = None
JOB_RUNNER_JOB_ACTION_DROPDOWN : Optional[gradio.Dropdown] = None
JOB_RUNNER_JOB_ID_DROPDOWN : Optional[gradio.Dropdown] = None
JOB_RUNNER_START_BUTTON : Optional[gradio.Button] = None
JOB_RUNNER_STOP_BUTTON : Optional[gradio.Button] = None
def render() -> None:
global JOB_RUNNER_WRAPPER
global JOB_RUNNER_JOB_ACTION_DROPDOWN
global JOB_RUNNER_JOB_ID_DROPDOWN
global JOB_RUNNER_START_BUTTON
global JOB_RUNNER_STOP_BUTTON
if job_manager.init_jobs(state_manager.get_item('jobs_path')):
is_job_runner = state_manager.get_item('ui_workflow') == 'job_runner'
queued_job_ids = job_manager.find_job_ids('queued') or [ 'none' ]
with gradio.Column(visible = is_job_runner) as JOB_RUNNER_WRAPPER:
JOB_RUNNER_JOB_ACTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.job_runner_job_action_dropdown'),
choices = uis_choices.job_runner_actions,
value = get_first(uis_choices.job_runner_actions)
)
JOB_RUNNER_JOB_ID_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.job_runner_job_id_dropdown'),
choices = queued_job_ids,
value = get_last(queued_job_ids)
)
with gradio.Row():
JOB_RUNNER_START_BUTTON = gradio.Button(
value = wording.get('uis.start_button'),
variant = 'primary',
size = 'sm'
)
JOB_RUNNER_STOP_BUTTON = gradio.Button(
value = wording.get('uis.stop_button'),
variant = 'primary',
size = 'sm',
visible = False
)
def listen() -> None:
JOB_RUNNER_JOB_ACTION_DROPDOWN.change(update_job_action, inputs = JOB_RUNNER_JOB_ACTION_DROPDOWN, outputs = JOB_RUNNER_JOB_ID_DROPDOWN)
JOB_RUNNER_START_BUTTON.click(start, outputs = [ JOB_RUNNER_START_BUTTON, JOB_RUNNER_STOP_BUTTON ])
JOB_RUNNER_START_BUTTON.click(run, inputs = [ JOB_RUNNER_JOB_ACTION_DROPDOWN, JOB_RUNNER_JOB_ID_DROPDOWN ], outputs = [ JOB_RUNNER_START_BUTTON, JOB_RUNNER_STOP_BUTTON, JOB_RUNNER_JOB_ID_DROPDOWN ])
JOB_RUNNER_STOP_BUTTON.click(stop, outputs = [ JOB_RUNNER_START_BUTTON, JOB_RUNNER_STOP_BUTTON ])
ui_workflow_dropdown = get_ui_component('ui_workflow_dropdown')
if ui_workflow_dropdown:
ui_workflow_dropdown.change(remote_update, inputs = ui_workflow_dropdown, outputs = [ JOB_RUNNER_WRAPPER, JOB_RUNNER_JOB_ACTION_DROPDOWN, JOB_RUNNER_JOB_ID_DROPDOWN ])
def remote_update(ui_workflow : UiWorkflow) -> Tuple[gradio.Column, gradio.Dropdown, gradio.Dropdown]:
is_job_runner = ui_workflow == 'job_runner'
queued_job_ids = job_manager.find_job_ids('queued') or [ 'none' ]
return gradio.Column(visible = is_job_runner), gradio.Dropdown(value = get_first(uis_choices.job_runner_actions), choices = uis_choices.job_runner_actions), gradio.Dropdown(value = get_last(queued_job_ids), choices = queued_job_ids)
def start() -> Tuple[gradio.Button, gradio.Button]:
while not process_manager.is_processing():
sleep(0.5)
return gradio.Button(visible = False), gradio.Button(visible = True)
def run(job_action : JobRunnerAction, job_id : str) -> Tuple[gradio.Button, gradio.Button, gradio.Dropdown]:
job_id = convert_str_none(job_id)
for key in job_store.get_job_keys():
state_manager.sync_item(key) #type:ignore
if job_action == 'job-run':
logger.info(wording.get('running_job').format(job_id = job_id), __name__)
if job_id and job_runner.run_job(job_id, process_step):
logger.info(wording.get('processing_job_succeed').format(job_id = job_id), __name__)
else:
logger.info(wording.get('processing_job_failed').format(job_id = job_id), __name__)
updated_job_ids = job_manager.find_job_ids('queued') or [ 'none' ]
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids)
if job_action == 'job-run-all':
logger.info(wording.get('running_jobs'), __name__)
if job_runner.run_jobs(process_step):
logger.info(wording.get('processing_jobs_succeed'), __name__)
else:
logger.info(wording.get('processing_jobs_failed'), __name__)
if job_action == 'job-retry':
logger.info(wording.get('retrying_job').format(job_id = job_id), __name__)
if job_id and job_runner.retry_job(job_id, process_step):
logger.info(wording.get('processing_job_succeed').format(job_id = job_id), __name__)
else:
logger.info(wording.get('processing_job_failed').format(job_id = job_id), __name__)
updated_job_ids = job_manager.find_job_ids('failed') or [ 'none' ]
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids)
if job_action == 'job-retry-all':
logger.info(wording.get('retrying_jobs'), __name__)
if job_runner.retry_jobs(process_step):
logger.info(wording.get('processing_jobs_succeed'), __name__)
else:
logger.info(wording.get('processing_jobs_failed'), __name__)
return gradio.Button(visible = True), gradio.Button(visible = False), gradio.Dropdown()
def stop() -> Tuple[gradio.Button, gradio.Button]:
process_manager.stop()
return gradio.Button(visible = True), gradio.Button(visible = False)
def update_job_action(job_action : JobRunnerAction) -> gradio.Dropdown:
if job_action == 'job-run':
updated_job_ids = job_manager.find_job_ids('queued') or [ 'none' ]
return gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids, visible = True)
if job_action == 'job-retry':
updated_job_ids = job_manager.find_job_ids('failed') or [ 'none' ]
return gradio.Dropdown(value = get_last(updated_job_ids), choices = updated_job_ids, visible = True)
return gradio.Dropdown(visible = False)
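Before running a job, `run()` above calls `state_manager.sync_item(key)` for every registered job key. A minimal sketch of a state manager with `sync_item()`, assuming a two-layer state (program defaults plus UI overrides) — the function names follow the calls above, the storage layout is an assumption:

```python
from typing import Any, Dict

# two state layers: 'core' holds the launch-time defaults,
# 'uis' holds whatever the user changed in the UI
STATE : Dict[str, Dict[str, Any]] =\
{
	'core': {},
	'uis': {}
}

def set_item(key : str, value : Any) -> None:
	# UI callbacks write into the UI layer only
	STATE['uis'][key] = value

def get_item(key : str) -> Any:
	# reads prefer the UI layer, then fall back to the core layer
	return STATE['uis'].get(key, STATE['core'].get(key))

def sync_item(key : str) -> None:
	# copy the current UI value into the core layer so a job run,
	# which may re-read or reset state, sees the on-screen settings
	STATE['core'][key] = get_item(key)
```

Syncing only the keys in `job_store.get_job_keys()` keeps transient UI state out of the persisted job arguments.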


@@ -0,0 +1,46 @@
from typing import List, Optional
import gradio
from facefusion import state_manager, wording
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.typing import LipSyncerModel
from facefusion.uis.core import get_ui_component, register_ui_component
LIP_SYNCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
def render() -> None:
global LIP_SYNCER_MODEL_DROPDOWN
LIP_SYNCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.lip_syncer_model_dropdown'),
choices = processors_choices.lip_syncer_models,
value = state_manager.get_item('lip_syncer_model'),
visible = 'lip_syncer' in state_manager.get_item('processors')
)
register_ui_component('lip_syncer_model_dropdown', LIP_SYNCER_MODEL_DROPDOWN)
def listen() -> None:
LIP_SYNCER_MODEL_DROPDOWN.change(update_lip_syncer_model, inputs = LIP_SYNCER_MODEL_DROPDOWN, outputs = LIP_SYNCER_MODEL_DROPDOWN)
processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = LIP_SYNCER_MODEL_DROPDOWN)
def remote_update(processors : List[str]) -> gradio.Dropdown:
has_lip_syncer = 'lip_syncer' in processors
return gradio.Dropdown(visible = has_lip_syncer)
def update_lip_syncer_model(lip_syncer_model : LipSyncerModel) -> gradio.Dropdown:
lip_syncer_module = load_processor_module('lip_syncer')
lip_syncer_module.clear_inference_pool()
state_manager.set_item('lip_syncer_model', lip_syncer_model)
if lip_syncer_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('lip_syncer_model'))
return gradio.Dropdown()
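The `register_ui_component` / `get_ui_component` pair used above lets modules wire up listeners across files without importing each other. A minimal sketch of such a registry, assuming a plain module-level dict (the real implementation may differ):

```python
from typing import Any, Dict, Optional

UI_COMPONENTS : Dict[str, Any] = {}

def register_ui_component(name : str, component : Any) -> None:
	# components register themselves at render() time under a string key
	UI_COMPONENTS[name] = component

def get_ui_component(name : str) -> Optional[Any]:
	# lookups degrade gracefully when a component was never rendered
	return UI_COMPONENTS.get(name)
```

The `if processors_checkbox_group:` guard in `listen()` exists precisely because `get_ui_component` returns `None` for layouts that omit a component.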


@@ -1,10 +1,11 @@
from typing import Optional
import gradio
-import facefusion.globals
import facefusion.choices
+from facefusion import state_manager, wording
+from facefusion.common_helper import calc_int_step
from facefusion.typing import VideoMemoryStrategy
-from facefusion import wording
VIDEO_MEMORY_STRATEGY_DROPDOWN : Optional[gradio.Dropdown] = None
SYSTEM_MEMORY_LIMIT_SLIDER : Optional[gradio.Slider] = None
@@ -17,14 +18,14 @@ def render() -> None:
VIDEO_MEMORY_STRATEGY_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.video_memory_strategy_dropdown'),
choices = facefusion.choices.video_memory_strategies,
-value = facefusion.globals.video_memory_strategy
+value = state_manager.get_item('video_memory_strategy')
)
SYSTEM_MEMORY_LIMIT_SLIDER = gradio.Slider(
label = wording.get('uis.system_memory_limit_slider'),
-step =facefusion.choices.system_memory_limit_range[1] - facefusion.choices.system_memory_limit_range[0],
+step = calc_int_step(facefusion.choices.system_memory_limit_range),
minimum = facefusion.choices.system_memory_limit_range[0],
maximum = facefusion.choices.system_memory_limit_range[-1],
-value = facefusion.globals.system_memory_limit
+value = state_manager.get_item('system_memory_limit')
)
@@ -34,8 +35,8 @@ def listen() -> None:
def update_video_memory_strategy(video_memory_strategy : VideoMemoryStrategy) -> None:
-facefusion.globals.video_memory_strategy = video_memory_strategy
+state_manager.set_item('video_memory_strategy', video_memory_strategy)
-def update_system_memory_limit(system_memory_limit : int) -> None:
-facefusion.globals.system_memory_limit = system_memory_limit
+def update_system_memory_limit(system_memory_limit : float) -> None:
+state_manager.set_item('system_memory_limit', int(system_memory_limit))
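The diff above replaces the repeated `range[1] - range[0]` slider-step arithmetic with `calc_int_step`. A sketch of what that helper presumably does, given that the choice ranges are evenly spaced lists:

```python
from typing import List

def calc_int_step(ranges : List[int]) -> int:
	# the range lists are evenly spaced, so the distance between the
	# first two entries is the slider step
	return ranges[1] - ranges[0]

# hypothetical example range: 0..100 in steps of 4
system_memory_limit_range = list(range(0, 101, 4))
```

Factoring this out means a range's spacing only has to be defined once, in `facefusion.choices`.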


@@ -1,29 +1,28 @@
-from typing import Tuple, Optional
-from time import sleep
+import tempfile
+from typing import Optional
import gradio
-import facefusion.globals
-from facefusion import process_manager, wording
-from facefusion.core import conditional_process
-from facefusion.memory import limit_system_memory
-from facefusion.normalizer import normalize_output_path
-from facefusion.uis.core import get_ui_component
-from facefusion.filesystem import clear_temp, is_image, is_video
+from facefusion import state_manager, wording
+from facefusion.uis.core import register_ui_component
+OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_IMAGE : Optional[gradio.Image] = None
OUTPUT_VIDEO : Optional[gradio.Video] = None
-OUTPUT_START_BUTTON : Optional[gradio.Button] = None
-OUTPUT_CLEAR_BUTTON : Optional[gradio.Button] = None
-OUTPUT_STOP_BUTTON : Optional[gradio.Button] = None
def render() -> None:
+global OUTPUT_PATH_TEXTBOX
global OUTPUT_IMAGE
global OUTPUT_VIDEO
-global OUTPUT_START_BUTTON
-global OUTPUT_STOP_BUTTON
-global OUTPUT_CLEAR_BUTTON
+if not state_manager.get_item('output_path'):
+state_manager.set_item('output_path', tempfile.gettempdir())
+OUTPUT_PATH_TEXTBOX = gradio.Textbox(
+label = wording.get('uis.output_path_textbox'),
+value = state_manager.get_item('output_path'),
+max_lines = 1
+)
OUTPUT_IMAGE = gradio.Image(
label = wording.get('uis.output_image_or_video'),
visible = False
@@ -31,58 +30,13 @@ def render() -> None:
OUTPUT_VIDEO = gradio.Video(
label = wording.get('uis.output_image_or_video')
)
-OUTPUT_START_BUTTON = gradio.Button(
-value = wording.get('uis.start_button'),
-variant = 'primary',
-size = 'sm'
-)
-OUTPUT_STOP_BUTTON = gradio.Button(
-value = wording.get('uis.stop_button'),
-variant = 'primary',
-size = 'sm',
-visible = False
-)
-OUTPUT_CLEAR_BUTTON = gradio.Button(
-value = wording.get('uis.clear_button'),
-size = 'sm'
-)
def listen() -> None:
-output_path_textbox = get_ui_component('output_path_textbox')
-if output_path_textbox:
-OUTPUT_START_BUTTON.click(start, outputs = [ OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
-OUTPUT_START_BUTTON.click(process, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO, OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
-OUTPUT_STOP_BUTTON.click(stop, outputs = [ OUTPUT_START_BUTTON, OUTPUT_STOP_BUTTON ])
-OUTPUT_CLEAR_BUTTON.click(clear, outputs = [ OUTPUT_IMAGE, OUTPUT_VIDEO ])
+OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX)
+register_ui_component('output_image', OUTPUT_IMAGE)
+register_ui_component('output_video', OUTPUT_VIDEO)
-def start() -> Tuple[gradio.Button, gradio.Button]:
-while not process_manager.is_processing():
-sleep(0.5)
-return gradio.Button(visible = False), gradio.Button(visible = True)
-def process() -> Tuple[gradio.Image, gradio.Video, gradio.Button, gradio.Button]:
-normed_output_path = normalize_output_path(facefusion.globals.target_path, facefusion.globals.output_path)
-if facefusion.globals.system_memory_limit > 0:
-limit_system_memory(facefusion.globals.system_memory_limit)
-conditional_process()
-if is_image(normed_output_path):
-return gradio.Image(value = normed_output_path, visible = True), gradio.Video(value = None, visible = False), gradio.Button(visible = True), gradio.Button(visible = False)
-if is_video(normed_output_path):
-return gradio.Image(value = None, visible = False), gradio.Video(value = normed_output_path, visible = True), gradio.Button(visible = True), gradio.Button(visible = False)
-return gradio.Image(value = None), gradio.Video(value = None), gradio.Button(visible = True), gradio.Button(visible = False)
-def stop() -> Tuple[gradio.Button, gradio.Button]:
-process_manager.stop()
-return gradio.Button(visible = True), gradio.Button(visible = False)
-def clear() -> Tuple[gradio.Image, gradio.Video]:
-while process_manager.is_processing():
-sleep(0.5)
-if facefusion.globals.target_path:
-clear_temp(facefusion.globals.target_path)
-return gradio.Image(value = None), gradio.Video(value = None)
+def update_output_path(output_path : str) -> None:
+state_manager.set_item('output_path', output_path)


@@ -1,17 +1,18 @@
from typing import Optional, Tuple
import gradio
-import facefusion.globals
import facefusion.choices
-from facefusion import wording
-from facefusion.typing import OutputVideoEncoder, OutputVideoPreset, Fps
+from facefusion import state_manager, wording
+from facefusion.common_helper import calc_int_step
from facefusion.filesystem import is_image, is_video
+from facefusion.typing import Fps, OutputAudioEncoder, OutputVideoEncoder, OutputVideoPreset
from facefusion.uis.core import get_ui_components, register_ui_component
-from facefusion.vision import detect_image_resolution, create_image_resolutions, detect_video_fps, detect_video_resolution, create_video_resolutions, pack_resolution
+from facefusion.vision import create_image_resolutions, create_video_resolutions, detect_image_resolution, detect_video_fps, detect_video_resolution, pack_resolution
-OUTPUT_PATH_TEXTBOX : Optional[gradio.Textbox] = None
OUTPUT_IMAGE_QUALITY_SLIDER : Optional[gradio.Slider] = None
OUTPUT_IMAGE_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
+OUTPUT_AUDIO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_ENCODER_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_PRESET_DROPDOWN : Optional[gradio.Dropdown] = None
OUTPUT_VIDEO_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -20,9 +21,9 @@ OUTPUT_VIDEO_FPS_SLIDER : Optional[gradio.Slider] = None
def render() -> None:
-global OUTPUT_PATH_TEXTBOX
global OUTPUT_IMAGE_QUALITY_SLIDER
global OUTPUT_IMAGE_RESOLUTION_DROPDOWN
+global OUTPUT_AUDIO_ENCODER_DROPDOWN
global OUTPUT_VIDEO_ENCODER_DROPDOWN
global OUTPUT_VIDEO_PRESET_DROPDOWN
global OUTPUT_VIDEO_RESOLUTION_DROPDOWN
@@ -31,74 +32,73 @@ def render() -> None:
output_image_resolutions = []
output_video_resolutions = []
-if is_image(facefusion.globals.target_path):
-output_image_resolution = detect_image_resolution(facefusion.globals.target_path)
+if is_image(state_manager.get_item('target_path')):
+output_image_resolution = detect_image_resolution(state_manager.get_item('target_path'))
output_image_resolutions = create_image_resolutions(output_image_resolution)
-if is_video(facefusion.globals.target_path):
-output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
+if is_video(state_manager.get_item('target_path')):
+output_video_resolution = detect_video_resolution(state_manager.get_item('target_path'))
output_video_resolutions = create_video_resolutions(output_video_resolution)
-facefusion.globals.output_path = facefusion.globals.output_path or '.'
-OUTPUT_PATH_TEXTBOX = gradio.Textbox(
-label = wording.get('uis.output_path_textbox'),
-value = facefusion.globals.output_path,
-max_lines = 1
-)
OUTPUT_IMAGE_QUALITY_SLIDER = gradio.Slider(
label = wording.get('uis.output_image_quality_slider'),
-value = facefusion.globals.output_image_quality,
-step = facefusion.choices.output_image_quality_range[1] - facefusion.choices.output_image_quality_range[0],
+value = state_manager.get_item('output_image_quality'),
+step = calc_int_step(facefusion.choices.output_image_quality_range),
minimum = facefusion.choices.output_image_quality_range[0],
maximum = facefusion.choices.output_image_quality_range[-1],
-visible = is_image(facefusion.globals.target_path)
+visible = is_image(state_manager.get_item('target_path'))
)
OUTPUT_IMAGE_RESOLUTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_image_resolution_dropdown'),
choices = output_image_resolutions,
-value = facefusion.globals.output_image_resolution,
-visible = is_image(facefusion.globals.target_path)
+value = state_manager.get_item('output_image_resolution'),
+visible = is_image(state_manager.get_item('target_path'))
)
+OUTPUT_AUDIO_ENCODER_DROPDOWN = gradio.Dropdown(
+label = wording.get('uis.output_audio_encoder_dropdown'),
+choices = facefusion.choices.output_audio_encoders,
+value = state_manager.get_item('output_audio_encoder'),
+visible = is_video(state_manager.get_item('target_path'))
+)
OUTPUT_VIDEO_ENCODER_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_video_encoder_dropdown'),
choices = facefusion.choices.output_video_encoders,
-value = facefusion.globals.output_video_encoder,
-visible = is_video(facefusion.globals.target_path)
+value = state_manager.get_item('output_video_encoder'),
+visible = is_video(state_manager.get_item('target_path'))
)
OUTPUT_VIDEO_PRESET_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_video_preset_dropdown'),
choices = facefusion.choices.output_video_presets,
-value = facefusion.globals.output_video_preset,
-visible = is_video(facefusion.globals.target_path)
+value = state_manager.get_item('output_video_preset'),
+visible = is_video(state_manager.get_item('target_path'))
)
OUTPUT_VIDEO_QUALITY_SLIDER = gradio.Slider(
label = wording.get('uis.output_video_quality_slider'),
-value = facefusion.globals.output_video_quality,
-step = facefusion.choices.output_video_quality_range[1] - facefusion.choices.output_video_quality_range[0],
+value = state_manager.get_item('output_video_quality'),
+step = calc_int_step(facefusion.choices.output_video_quality_range),
minimum = facefusion.choices.output_video_quality_range[0],
maximum = facefusion.choices.output_video_quality_range[-1],
-visible = is_video(facefusion.globals.target_path)
+visible = is_video(state_manager.get_item('target_path'))
)
OUTPUT_VIDEO_RESOLUTION_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.output_video_resolution_dropdown'),
choices = output_video_resolutions,
-value = facefusion.globals.output_video_resolution,
-visible = is_video(facefusion.globals.target_path)
+value = state_manager.get_item('output_video_resolution'),
+visible = is_video(state_manager.get_item('target_path'))
)
OUTPUT_VIDEO_FPS_SLIDER = gradio.Slider(
label = wording.get('uis.output_video_fps_slider'),
-value = facefusion.globals.output_video_fps,
+value = state_manager.get_item('output_video_fps'),
step = 0.01,
minimum = 1,
maximum = 60,
-visible = is_video(facefusion.globals.target_path)
+visible = is_video(state_manager.get_item('target_path'))
)
-register_ui_component('output_path_textbox', OUTPUT_PATH_TEXTBOX)
register_ui_component('output_video_fps_slider', OUTPUT_VIDEO_FPS_SLIDER)
def listen() -> None:
-OUTPUT_PATH_TEXTBOX.change(update_output_path, inputs = OUTPUT_PATH_TEXTBOX)
OUTPUT_IMAGE_QUALITY_SLIDER.release(update_output_image_quality, inputs = OUTPUT_IMAGE_QUALITY_SLIDER)
OUTPUT_IMAGE_RESOLUTION_DROPDOWN.change(update_output_image_resolution, inputs = OUTPUT_IMAGE_RESOLUTION_DROPDOWN)
+OUTPUT_AUDIO_ENCODER_DROPDOWN.change(update_output_audio_encoder, inputs = OUTPUT_AUDIO_ENCODER_DROPDOWN)
OUTPUT_VIDEO_ENCODER_DROPDOWN.change(update_output_video_encoder, inputs = OUTPUT_VIDEO_ENCODER_DROPDOWN)
OUTPUT_VIDEO_PRESET_DROPDOWN.change(update_output_video_preset, inputs = OUTPUT_VIDEO_PRESET_DROPDOWN)
OUTPUT_VIDEO_QUALITY_SLIDER.release(update_output_video_quality, inputs = OUTPUT_VIDEO_QUALITY_SLIDER)
@@ -111,51 +111,51 @@ def listen() -> None:
'target_video'
]):
for method in [ 'upload', 'change', 'clear' ]:
-getattr(ui_component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_IMAGE_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_PRESET_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER, OUTPUT_VIDEO_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_FPS_SLIDER ])
+getattr(ui_component, method)(remote_update, outputs = [ OUTPUT_IMAGE_QUALITY_SLIDER, OUTPUT_IMAGE_RESOLUTION_DROPDOWN, OUTPUT_AUDIO_ENCODER_DROPDOWN, OUTPUT_VIDEO_ENCODER_DROPDOWN, OUTPUT_VIDEO_PRESET_DROPDOWN, OUTPUT_VIDEO_QUALITY_SLIDER, OUTPUT_VIDEO_RESOLUTION_DROPDOWN, OUTPUT_VIDEO_FPS_SLIDER ])
-def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
-if is_image(facefusion.globals.target_path):
-output_image_resolution = detect_image_resolution(facefusion.globals.target_path)
+def remote_update() -> Tuple[gradio.Slider, gradio.Dropdown, gradio.Dropdown, gradio.Dropdown, gradio.Dropdown, gradio.Slider, gradio.Dropdown, gradio.Slider]:
+if is_image(state_manager.get_item('target_path')):
+output_image_resolution = detect_image_resolution(state_manager.get_item('target_path'))
output_image_resolutions = create_image_resolutions(output_image_resolution)
-facefusion.globals.output_image_resolution = pack_resolution(output_image_resolution)
-return gradio.Slider(visible = True), gradio.Dropdown(visible = True, value = facefusion.globals.output_image_resolution, choices = output_image_resolutions), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
-if is_video(facefusion.globals.target_path):
-output_video_resolution = detect_video_resolution(facefusion.globals.target_path)
+state_manager.set_item('output_image_resolution', pack_resolution(output_image_resolution))
+return gradio.Slider(visible = True), gradio.Dropdown(value = state_manager.get_item('output_image_resolution'), choices = output_image_resolutions, visible = True), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False)
+if is_video(state_manager.get_item('target_path')):
+output_video_resolution = detect_video_resolution(state_manager.get_item('target_path'))
output_video_resolutions = create_video_resolutions(output_video_resolution)
-facefusion.globals.output_video_resolution = pack_resolution(output_video_resolution)
-facefusion.globals.output_video_fps = detect_video_fps(facefusion.globals.target_path)
-return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = True), gradio.Dropdown(visible = True), gradio.Slider(visible = True), gradio.Dropdown(visible = True, value = facefusion.globals.output_video_resolution, choices = output_video_resolutions), gradio.Slider(visible = True, value = facefusion.globals.output_video_fps)
-return gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False, value = None, choices = None), gradio.Slider(visible = False, value = None)
+state_manager.set_item('output_video_resolution', pack_resolution(output_video_resolution))
+state_manager.set_item('output_video_fps', detect_video_fps(state_manager.get_item('target_path')))
+return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = True), gradio.Dropdown(visible = True), gradio.Dropdown(visible = True), gradio.Slider(visible = True), gradio.Dropdown(value = state_manager.get_item('output_video_resolution'), choices = output_video_resolutions, visible = True), gradio.Slider(value = state_manager.get_item('output_video_fps'), visible = True)
+return gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False), gradio.Dropdown(visible = False), gradio.Slider(visible = False)
-def update_output_path(output_path : str) -> None:
-facefusion.globals.output_path = output_path
-def update_output_image_quality(output_image_quality : int) -> None:
-facefusion.globals.output_image_quality = output_image_quality
+def update_output_image_quality(output_image_quality : float) -> None:
+state_manager.set_item('output_image_quality', int(output_image_quality))
def update_output_image_resolution(output_image_resolution : str) -> None:
-facefusion.globals.output_image_resolution = output_image_resolution
+state_manager.set_item('output_image_resolution', output_image_resolution)
-def update_output_video_encoder(output_video_encoder: OutputVideoEncoder) -> None:
-facefusion.globals.output_video_encoder = output_video_encoder
+def update_output_audio_encoder(output_audio_encoder : OutputAudioEncoder) -> None:
+state_manager.set_item('output_audio_encoder', output_audio_encoder)
+def update_output_video_encoder(output_video_encoder : OutputVideoEncoder) -> None:
+state_manager.set_item('output_video_encoder', output_video_encoder)
def update_output_video_preset(output_video_preset : OutputVideoPreset) -> None:
-facefusion.globals.output_video_preset = output_video_preset
+state_manager.set_item('output_video_preset', output_video_preset)
-def update_output_video_quality(output_video_quality : int) -> None:
-facefusion.globals.output_video_quality = output_video_quality
+def update_output_video_quality(output_video_quality : float) -> None:
+state_manager.set_item('output_video_quality', int(output_video_quality))
def update_output_video_resolution(output_video_resolution : str) -> None:
-facefusion.globals.output_video_resolution = output_video_resolution
+state_manager.set_item('output_video_resolution', output_video_resolution)
def update_output_video_fps(output_video_fps : Fps) -> None:
-facefusion.globals.output_video_fps = output_video_fps
+state_manager.set_item('output_video_fps', output_video_fps)


@@ -1,22 +1,23 @@
-from typing import Any, Dict, Optional
+from time import sleep
+from typing import Optional
import cv2
import gradio
import numpy
-import facefusion.globals
-from facefusion import logger, wording
-from facefusion.audio import get_audio_frame, create_empty_audio_frame
+from facefusion import logger, process_manager, state_manager, wording
+from facefusion.audio import create_empty_audio_frame, get_audio_frame
+from facefusion.common_helper import get_first
-from facefusion.core import conditional_append_reference_faces
-from facefusion.face_analyser import get_average_face, clear_face_analyser
-from facefusion.face_store import clear_static_faces, get_reference_faces, clear_reference_faces
-from facefusion.typing import Face, FaceSet, AudioFrame, VisionFrame
-from facefusion.vision import get_video_frame, count_video_frame_total, normalize_frame_color, resize_frame_resolution, read_static_image, read_static_images
-from facefusion.filesystem import is_image, is_video, filter_audio_paths
-from facefusion.content_analyser import analyse_frame
-from facefusion.processors.frame.core import load_frame_processor_module
+from facefusion.core import conditional_append_reference_faces
+from facefusion.face_analyser import get_average_face, get_many_faces
+from facefusion.face_store import clear_reference_faces, clear_static_faces, get_reference_faces
+from facefusion.filesystem import filter_audio_paths, is_image, is_video
+from facefusion.processors.core import get_processors_modules
+from facefusion.typing import AudioFrame, Face, FaceSet, VisionFrame
+from facefusion.uis.core import get_ui_component, get_ui_components, register_ui_component
+from facefusion.uis.typing import ComponentOptions
+from facefusion.vision import count_video_frame_total, detect_frame_orientation, get_video_frame, normalize_frame_color, read_static_image, read_static_images, resize_frame_resolution
PREVIEW_IMAGE : Optional[gradio.Image] = None
PREVIEW_FRAME_SLIDER : Optional[gradio.Slider] = None
@@ -26,12 +27,11 @@ def render() -> None:
global PREVIEW_IMAGE
global PREVIEW_FRAME_SLIDER
preview_image_args : Dict[str, Any] =\
preview_image_options : ComponentOptions =\
{
'label': wording.get('uis.preview_image'),
'interactive': False
'label': wording.get('uis.preview_image')
}
preview_frame_slider_args : Dict[str, Any] =\
preview_frame_slider_options : ComponentOptions =\
{
'label': wording.get('uis.preview_frame_slider'),
'step': 1,
@@ -40,34 +40,42 @@ def render() -> None:
'visible': False
}
conditional_append_reference_faces()
reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
source_frames = read_static_images(facefusion.globals.source_paths)
source_face = get_average_face(source_frames)
source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
reference_faces = get_reference_faces() if 'reference' in state_manager.get_item('face_selector_mode') else None
source_frames = read_static_images(state_manager.get_item('source_paths'))
source_faces = get_many_faces(source_frames)
source_face = get_average_face(source_faces)
source_audio_path = get_first(filter_audio_paths(state_manager.get_item('source_paths')))
source_audio_frame = create_empty_audio_frame()
if source_audio_path and facefusion.globals.output_video_fps and facefusion.globals.reference_frame_number:
temp_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, facefusion.globals.reference_frame_number)
if source_audio_path and state_manager.get_item('output_video_fps') and state_manager.get_item('reference_frame_number'):
temp_audio_frame = get_audio_frame(source_audio_path, state_manager.get_item('output_video_fps'), state_manager.get_item('reference_frame_number'))
if numpy.any(temp_audio_frame):
source_audio_frame = temp_audio_frame
if is_image(facefusion.globals.target_path):
target_vision_frame = read_static_image(facefusion.globals.target_path)
if is_image(state_manager.get_item('target_path')):
target_vision_frame = read_static_image(state_manager.get_item('target_path'))
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
preview_image_args['value'] = normalize_frame_color(preview_vision_frame)
if is_video(facefusion.globals.target_path):
temp_vision_frame = get_video_frame(facefusion.globals.target_path, facefusion.globals.reference_frame_number)
preview_image_options['value'] = normalize_frame_color(preview_vision_frame)
preview_image_options['elem_classes'] = [ 'image-preview', 'is-' + detect_frame_orientation(preview_vision_frame) ]
if is_video(state_manager.get_item('target_path')):
temp_vision_frame = get_video_frame(state_manager.get_item('target_path'), state_manager.get_item('reference_frame_number'))
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
preview_image_args['value'] = normalize_frame_color(preview_vision_frame)
preview_image_args['visible'] = True
preview_frame_slider_args['value'] = facefusion.globals.reference_frame_number
preview_frame_slider_args['maximum'] = count_video_frame_total(facefusion.globals.target_path)
preview_frame_slider_args['visible'] = True
PREVIEW_IMAGE = gradio.Image(**preview_image_args)
PREVIEW_FRAME_SLIDER = gradio.Slider(**preview_frame_slider_args)
preview_image_options['value'] = normalize_frame_color(preview_vision_frame)
preview_image_options['elem_classes'] = [ 'image-preview', 'is-' + detect_frame_orientation(preview_vision_frame) ]
preview_image_options['visible'] = True
preview_frame_slider_options['value'] = state_manager.get_item('reference_frame_number')
preview_frame_slider_options['maximum'] = count_video_frame_total(state_manager.get_item('target_path'))
preview_frame_slider_options['visible'] = True
PREVIEW_IMAGE = gradio.Image(**preview_image_options)
PREVIEW_FRAME_SLIDER = gradio.Slider(**preview_frame_slider_options)
register_ui_component('preview_frame_slider', PREVIEW_FRAME_SLIDER)
def listen() -> None:
PREVIEW_FRAME_SLIDER.release(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
PREVIEW_FRAME_SLIDER.release(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE, show_progress = 'hidden')
PREVIEW_FRAME_SLIDER.change(slide_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE, show_progress = 'hidden')
reference_face_position_gallery = get_ui_component('reference_face_position_gallery')
if reference_face_position_gallery:
reference_face_position_gallery.select(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
@@ -94,23 +102,34 @@ def listen() -> None:
[
'face_debugger_items_checkbox_group',
'frame_colorizer_size_dropdown',
'face_selector_mode_dropdown',
'face_mask_types_checkbox_group',
'face_mask_region_checkbox_group',
'face_analyser_order_dropdown',
'face_analyser_age_dropdown',
'face_analyser_gender_dropdown'
'face_mask_region_checkbox_group'
]):
ui_component.change(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
for ui_component in get_ui_components(
[
'age_modifier_direction_slider',
'expression_restorer_factor_slider',
'face_editor_eyebrow_direction_slider',
'face_editor_eye_gaze_horizontal_slider',
'face_editor_eye_gaze_vertical_slider',
'face_editor_eye_open_ratio_slider',
'face_editor_lip_open_ratio_slider',
'face_editor_mouth_grim_slider',
'face_editor_mouth_pout_slider',
'face_editor_mouth_purse_slider',
'face_editor_mouth_smile_slider',
'face_editor_mouth_position_horizontal_slider',
'face_editor_mouth_position_vertical_slider',
'face_editor_head_pitch_slider',
'face_editor_head_yaw_slider',
'face_editor_head_roll_slider',
'face_enhancer_blend_slider',
'frame_colorizer_blend_slider',
'frame_enhancer_blend_slider',
'trim_frame_start_slider',
'trim_frame_end_slider',
'reference_face_distance_slider',
'face_selector_age_range_slider',
'face_mask_blur_slider',
'face_mask_padding_top_slider',
'face_mask_padding_bottom_slider',
@@ -122,14 +141,24 @@ def listen() -> None:
for ui_component in get_ui_components(
[
'frame_processors_checkbox_group',
'age_modifier_model_dropdown',
'expression_restorer_model_dropdown',
'processors_checkbox_group',
'face_editor_model_dropdown',
'face_enhancer_model_dropdown',
'face_swapper_model_dropdown',
'face_swapper_pixel_boost_dropdown',
'frame_colorizer_model_dropdown',
'frame_enhancer_model_dropdown',
'lip_syncer_model_dropdown',
'face_selector_mode_dropdown',
'face_selector_order_dropdown',
'face_selector_gender_dropdown',
'face_selector_race_dropdown',
'face_detector_model_dropdown',
'face_detector_size_dropdown'
'face_detector_size_dropdown',
'face_detector_angles_checkbox_group',
'face_landmarker_model_dropdown'
]):
ui_component.change(clear_and_update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)
@@ -142,66 +171,75 @@ def listen() -> None:
def clear_and_update_preview_image(frame_number : int = 0) -> gradio.Image:
clear_face_analyser()
clear_reference_faces()
clear_static_faces()
return update_preview_image(frame_number)
def update_preview_image(frame_number : int = 0) -> gradio.Image:
for frame_processor in facefusion.globals.frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
while not frame_processor_module.post_check():
logger.disable()
sleep(0.5)
logger.enable()
conditional_append_reference_faces()
reference_faces = get_reference_faces() if 'reference' in facefusion.globals.face_selector_mode else None
source_frames = read_static_images(facefusion.globals.source_paths)
source_face = get_average_face(source_frames)
source_audio_path = get_first(filter_audio_paths(facefusion.globals.source_paths))
source_audio_frame = create_empty_audio_frame()
if source_audio_path and facefusion.globals.output_video_fps and facefusion.globals.reference_frame_number:
reference_audio_frame_number = facefusion.globals.reference_frame_number
if facefusion.globals.trim_frame_start:
reference_audio_frame_number -= facefusion.globals.trim_frame_start
temp_audio_frame = get_audio_frame(source_audio_path, facefusion.globals.output_video_fps, reference_audio_frame_number)
if numpy.any(temp_audio_frame):
source_audio_frame = temp_audio_frame
if is_image(facefusion.globals.target_path):
target_vision_frame = read_static_image(facefusion.globals.target_path)
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
preview_vision_frame = normalize_frame_color(preview_vision_frame)
return gradio.Image(value = preview_vision_frame)
if is_video(facefusion.globals.target_path):
temp_vision_frame = get_video_frame(facefusion.globals.target_path, frame_number)
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
preview_vision_frame = normalize_frame_color(preview_vision_frame)
def slide_preview_image(frame_number : int = 0) -> gradio.Image:
if is_video(state_manager.get_item('target_path')):
preview_vision_frame = normalize_frame_color(get_video_frame(state_manager.get_item('target_path'), frame_number))
preview_vision_frame = resize_frame_resolution(preview_vision_frame, (1024, 1024))
return gradio.Image(value = preview_vision_frame)
return gradio.Image(value = None)
def update_preview_image(frame_number : int = 0) -> gradio.Image:
while process_manager.is_checking():
sleep(0.5)
conditional_append_reference_faces()
reference_faces = get_reference_faces() if 'reference' in state_manager.get_item('face_selector_mode') else None
source_frames = read_static_images(state_manager.get_item('source_paths'))
source_faces = get_many_faces(source_frames)
source_face = get_average_face(source_faces)
source_audio_path = get_first(filter_audio_paths(state_manager.get_item('source_paths')))
source_audio_frame = create_empty_audio_frame()
if source_audio_path and state_manager.get_item('output_video_fps') and state_manager.get_item('reference_frame_number'):
reference_audio_frame_number = state_manager.get_item('reference_frame_number')
if state_manager.get_item('trim_frame_start'):
reference_audio_frame_number -= state_manager.get_item('trim_frame_start')
temp_audio_frame = get_audio_frame(source_audio_path, state_manager.get_item('output_video_fps'), reference_audio_frame_number)
if numpy.any(temp_audio_frame):
source_audio_frame = temp_audio_frame
if is_image(state_manager.get_item('target_path')):
target_vision_frame = read_static_image(state_manager.get_item('target_path'))
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
preview_vision_frame = normalize_frame_color(preview_vision_frame)
return gradio.Image(value = preview_vision_frame, elem_classes = [ 'image-preview', 'is-' + detect_frame_orientation(preview_vision_frame) ])
if is_video(state_manager.get_item('target_path')):
temp_vision_frame = get_video_frame(state_manager.get_item('target_path'), frame_number)
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
preview_vision_frame = normalize_frame_color(preview_vision_frame)
return gradio.Image(value = preview_vision_frame, elem_classes = [ 'image-preview', 'is-' + detect_frame_orientation(preview_vision_frame) ])
return gradio.Image(value = None, elem_classes = None)
def update_preview_frame_slider() -> gradio.Slider:
if is_video(facefusion.globals.target_path):
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
if is_video(state_manager.get_item('target_path')):
video_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
return gradio.Slider(maximum = video_frame_total, visible = True)
return gradio.Slider(value = None, maximum = None, visible = False)
return gradio.Slider(value = 0, visible = False)
def process_preview_frame(reference_faces : FaceSet, source_face : Face, source_audio_frame : AudioFrame, target_vision_frame : VisionFrame) -> VisionFrame:
target_vision_frame = resize_frame_resolution(target_vision_frame, (640, 640))
target_vision_frame = resize_frame_resolution(target_vision_frame, (1024, 1024))
source_vision_frame = target_vision_frame.copy()
if analyse_frame(target_vision_frame):
return cv2.GaussianBlur(target_vision_frame, (99, 99), 0)
for frame_processor in facefusion.globals.frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
for processor_module in get_processors_modules(state_manager.get_item('processors')):
logger.disable()
if frame_processor_module.pre_process('preview'):
logger.enable()
target_vision_frame = frame_processor_module.process_frame(
if processor_module.pre_process('preview'):
target_vision_frame = processor_module.process_frame(
{
'reference_faces': reference_faces,
'source_face': source_face,
'source_audio_frame': source_audio_frame,
'source_vision_frame': source_vision_frame,
'target_vision_frame': target_vision_frame
})
logger.enable()
return target_vision_frame
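The preview path above caps frames at (1024, 1024) via `resize_frame_resolution` and tags the image with an `is-landscape`/`is-portrait` CSS class via `detect_frame_orientation`. A sketch of the sizing and orientation logic, assuming the usual aspect-preserving downscale — the real helpers operate on numpy frames through OpenCV, while this version works on plain width/height pairs:

```python
from typing import Tuple

def calc_resize_resolution(resolution : Tuple[int, int], max_resolution : Tuple[int, int]) -> Tuple[int, int]:
	# Downscale to fit within max_resolution, keep the aspect ratio, never upscale
	width, height = resolution
	max_width, max_height = max_resolution
	if width > max_width or height > max_height:
		scale = min(max_width / width, max_height / height)
		return int(width * scale), int(height * scale)
	return width, height

def detect_orientation(resolution : Tuple[int, int]) -> str:
	# Drives the 'is-landscape' / 'is-portrait' elem_classes on the preview image
	width, height = resolution
	return 'landscape' if width > height else 'portrait'
```

For example, a 1920x1080 frame is reduced to 1024x576 for the preview, while a 640x480 frame passes through untouched.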

View File

@@ -0,0 +1,40 @@
from typing import List, Optional
import gradio
from facefusion import state_manager, wording
from facefusion.filesystem import list_directory
from facefusion.processors.core import clear_processors_modules, get_processors_modules
from facefusion.uis.core import register_ui_component
PROCESSORS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
def render() -> None:
global PROCESSORS_CHECKBOX_GROUP
PROCESSORS_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.processors_checkbox_group'),
choices = sort_processors(state_manager.get_item('processors')),
value = state_manager.get_item('processors')
)
register_ui_component('processors_checkbox_group', PROCESSORS_CHECKBOX_GROUP)
def listen() -> None:
PROCESSORS_CHECKBOX_GROUP.change(update_processors, inputs = PROCESSORS_CHECKBOX_GROUP, outputs = PROCESSORS_CHECKBOX_GROUP)
def update_processors(processors : List[str]) -> gradio.CheckboxGroup:
clear_processors_modules(state_manager.get_item('processors'))
state_manager.set_item('processors', processors)
for processor_module in get_processors_modules(state_manager.get_item('processors')):
if not processor_module.pre_check():
return gradio.CheckboxGroup()
return gradio.CheckboxGroup(value = state_manager.get_item('processors'), choices = sort_processors(state_manager.get_item('processors')))
def sort_processors(processors : List[str]) -> List[str]:
available_processors = list_directory('facefusion/processors/modules')
return sorted(available_processors, key = lambda processor : processors.index(processor) if processor in processors else len(processors))
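`sort_processors` orders everything found on disk by the user's current selection, pushing unselected processors to the end. The same key function as a standalone sketch, with the directory listing passed in instead of read from `facefusion/processors/modules`:

```python
from typing import List

def sort_processors(processors : List[str], available_processors : List[str]) -> List[str]:
	# Selected processors keep their selection order; unselected ones all share
	# the key len(processors), so the stable sort leaves them in directory order
	return sorted(available_processors, key = lambda processor : processors.index(processor) if processor in processors else len(processors))
```

This is what keeps the checkbox group stable: reordering the selection reorders the choices, while deselecting everything falls back to the plain directory listing.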

View File

@@ -1,12 +1,12 @@
from typing import Optional, List, Tuple
from typing import List, Optional, Tuple
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.uis.typing import File
from facefusion import state_manager, wording
from facefusion.common_helper import get_first
from facefusion.filesystem import has_audio, has_image, filter_audio_paths, filter_image_paths
from facefusion.filesystem import filter_audio_paths, filter_image_paths, has_audio, has_image
from facefusion.uis.core import register_ui_component
from facefusion.uis.typing import File
SOURCE_FILE : Optional[gradio.File] = None
SOURCE_AUDIO : Optional[gradio.Audio] = None
@@ -18,22 +18,19 @@ def render() -> None:
global SOURCE_AUDIO
global SOURCE_IMAGE
has_source_audio = has_audio(facefusion.globals.source_paths)
has_source_image = has_image(facefusion.globals.source_paths)
has_source_audio = has_audio(state_manager.get_item('source_paths'))
has_source_image = has_image(state_manager.get_item('source_paths'))
SOURCE_FILE = gradio.File(
file_count = 'multiple',
file_types =
[
'.mp3',
'.wav',
'.png',
'.jpg',
'.webp'
'audio',
'image'
],
label = wording.get('uis.source_file'),
value = facefusion.globals.source_paths if has_source_audio or has_source_image else None
value = state_manager.get_item('source_paths') if has_source_audio or has_source_image else None
)
source_file_names = [ source_file_value['name'] for source_file_value in SOURCE_FILE.value ] if SOURCE_FILE.value else None
source_file_names = [ source_file_value.get('path') for source_file_value in SOURCE_FILE.value ] if SOURCE_FILE.value else None
source_audio_path = get_first(filter_audio_paths(source_file_names))
source_image_path = get_first(filter_image_paths(source_file_names))
SOURCE_AUDIO = gradio.Audio(
@@ -61,7 +58,7 @@ def update(files : List[File]) -> Tuple[gradio.Audio, gradio.Image]:
if has_source_audio or has_source_image:
source_audio_path = get_first(filter_audio_paths(file_names))
source_image_path = get_first(filter_image_paths(file_names))
facefusion.globals.source_paths = file_names
state_manager.set_item('source_paths', file_names)
return gradio.Audio(value = source_audio_path, visible = has_source_audio), gradio.Image(value = source_image_path, visible = has_source_image)
facefusion.globals.source_paths = None
state_manager.clear_item('source_paths')
return gradio.Audio(value = None, visible = False), gradio.Image(value = None, visible = False)
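The source component replaces bare `[0]` indexing with `get_first`, which tolerates empty and `None` lists. A sketch of the helpers involved — the extension tuple is an assumption for illustration; the real `filter_audio_paths` delegates to the filesystem helpers rather than matching extensions directly:

```python
from typing import Any, List, Optional

AUDIO_EXTENSIONS = ('.mp3', '.wav', '.flac')  # assumed subset

def get_first(__list__ : Any) -> Optional[Any]:
	# Safe replacement for [0]: returns None for empty or None input
	return next(iter(__list__), None) if __list__ else None

def filter_audio_paths(paths : Optional[List[str]]) -> List[str]:
	if paths:
		return [ path for path in paths if path.lower().endswith(AUDIO_EXTENSIONS) ]
	return []
```

Chaining the two, `get_first(filter_audio_paths(source_paths))` yields the first audio path or `None`, without any guard clauses at the call site.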

View File

@@ -1,12 +1,12 @@
from typing import Tuple, Optional
from typing import Optional, Tuple
import gradio
import facefusion.globals
from facefusion import wording
from facefusion.face_store import clear_static_faces, clear_reference_faces
from facefusion.uis.typing import File
from facefusion import state_manager, wording
from facefusion.face_store import clear_reference_faces, clear_static_faces
from facefusion.filesystem import get_file_size, is_image, is_video
from facefusion.uis.core import register_ui_component
from facefusion.uis.typing import ComponentOptions, File
from facefusion.vision import get_video_frame, normalize_frame_color
FILE_SIZE_LIMIT = 512 * 1024 * 1024
@@ -21,44 +21,41 @@ def render() -> None:
global TARGET_IMAGE
global TARGET_VIDEO
is_target_image = is_image(facefusion.globals.target_path)
is_target_video = is_video(facefusion.globals.target_path)
is_target_image = is_image(state_manager.get_item('target_path'))
is_target_video = is_video(state_manager.get_item('target_path'))
TARGET_FILE = gradio.File(
label = wording.get('uis.target_file'),
file_count = 'single',
file_types =
[
'.png',
'.jpg',
'.webp',
'.webm',
'.mp4'
'image',
'video'
],
value = facefusion.globals.target_path if is_target_image or is_target_video else None
value = state_manager.get_item('target_path') if is_target_image or is_target_video else None
)
target_image_args =\
target_image_options : ComponentOptions =\
{
'show_label': False,
'visible': False
}
target_video_args =\
target_video_options : ComponentOptions =\
{
'show_label': False,
'visible': False
}
if is_target_image:
target_image_args['value'] = TARGET_FILE.value['name']
target_image_args['visible'] = True
target_image_options['value'] = TARGET_FILE.value.get('path')
target_image_options['visible'] = True
if is_target_video:
if get_file_size(facefusion.globals.target_path) > FILE_SIZE_LIMIT:
preview_vision_frame = normalize_frame_color(get_video_frame(facefusion.globals.target_path))
target_image_args['value'] = preview_vision_frame
target_image_args['visible'] = True
if get_file_size(state_manager.get_item('target_path')) > FILE_SIZE_LIMIT:
preview_vision_frame = normalize_frame_color(get_video_frame(state_manager.get_item('target_path')))
target_image_options['value'] = preview_vision_frame
target_image_options['visible'] = True
else:
target_video_args['value'] = TARGET_FILE.value['name']
target_video_args['visible'] = True
TARGET_IMAGE = gradio.Image(**target_image_args)
TARGET_VIDEO = gradio.Video(**target_video_args)
target_video_options['value'] = TARGET_FILE.value.get('path')
target_video_options['visible'] = True
TARGET_IMAGE = gradio.Image(**target_image_options)
TARGET_VIDEO = gradio.Video(**target_video_options)
register_ui_component('target_image', TARGET_IMAGE)
register_ui_component('target_video', TARGET_VIDEO)
@@ -71,13 +68,13 @@ def update(file : File) -> Tuple[gradio.Image, gradio.Video]:
clear_reference_faces()
clear_static_faces()
if file and is_image(file.name):
facefusion.globals.target_path = file.name
state_manager.set_item('target_path', file.name)
return gradio.Image(value = file.name, visible = True), gradio.Video(value = None, visible = False)
if file and is_video(file.name):
facefusion.globals.target_path = file.name
state_manager.set_item('target_path', file.name)
if get_file_size(file.name) > FILE_SIZE_LIMIT:
preview_vision_frame = normalize_frame_color(get_video_frame(file.name))
return gradio.Image(value = preview_vision_frame, visible = True), gradio.Video(value = None, visible = False)
return gradio.Image(value = None, visible = False), gradio.Video(value = file.name, visible = True)
facefusion.globals.target_path = None
state_manager.clear_item('target_path')
return gradio.Image(value = None, visible = False), gradio.Video(value = None, visible = False)
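The target component only hands a video to the gradio player when it fits under `FILE_SIZE_LIMIT`; larger files fall back to a single preview frame rendered as an image. The gating logic sketched standalone — `get_file_size` here is a minimal stand-in for the filesystem helper:

```python
import os
from typing import Optional

FILE_SIZE_LIMIT = 512 * 1024 * 1024  # 512 MB

def get_file_size(file_path : Optional[str]) -> int:
	if file_path and os.path.isfile(file_path):
		return os.path.getsize(file_path)
	return 0

def should_use_static_preview(file_path : Optional[str]) -> bool:
	# Oversized videos are shown as one image frame instead of a video player
	return get_file_size(file_path) > FILE_SIZE_LIMIT
```

This avoids shipping multi-gigabyte videos through the browser just to display a thumbnail-sized preview.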

View File

@@ -1,11 +1,11 @@
from typing import Optional
import gradio
import facefusion.globals
import facefusion.choices
from facefusion import wording
from facefusion.typing import TempFrameFormat
from facefusion import state_manager, wording
from facefusion.filesystem import is_video
from facefusion.typing import TempFrameFormat
from facefusion.uis.core import get_ui_component
TEMP_FRAME_FORMAT_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -17,13 +17,14 @@ def render() -> None:
TEMP_FRAME_FORMAT_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.temp_frame_format_dropdown'),
choices = facefusion.choices.temp_frame_formats,
value = facefusion.globals.temp_frame_format,
visible = is_video(facefusion.globals.target_path)
value = state_manager.get_item('temp_frame_format'),
visible = is_video(state_manager.get_item('target_path'))
)
def listen() -> None:
TEMP_FRAME_FORMAT_DROPDOWN.change(update_temp_frame_format, inputs = TEMP_FRAME_FORMAT_DROPDOWN)
target_video = get_ui_component('target_video')
if target_video:
for method in [ 'upload', 'change', 'clear' ]:
@@ -31,11 +32,11 @@ def listen() -> None:
def remote_update() -> gradio.Dropdown:
if is_video(facefusion.globals.target_path):
if is_video(state_manager.get_item('target_path')):
return gradio.Dropdown(visible = True)
return gradio.Dropdown(visible = False)
def update_temp_frame_format(temp_frame_format : TempFrameFormat) -> None:
facefusion.globals.temp_frame_format = temp_frame_format
state_manager.set_item('temp_frame_format', temp_frame_format)

View File

@@ -0,0 +1,82 @@
import io
import logging
import math
import os
from typing import Optional
import gradio
from tqdm import tqdm
from facefusion import logger, state_manager, wording
from facefusion.choices import log_level_set
from facefusion.typing import LogLevel
LOG_LEVEL_DROPDOWN : Optional[gradio.Dropdown] = None
TERMINAL_TEXTBOX : Optional[gradio.Textbox] = None
LOG_BUFFER = io.StringIO()
LOG_HANDLER = logging.StreamHandler(LOG_BUFFER)
TQDM_UPDATE = tqdm.update
def render() -> None:
global LOG_LEVEL_DROPDOWN
global TERMINAL_TEXTBOX
LOG_LEVEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.log_level_dropdown'),
choices = log_level_set.keys(),
value = state_manager.get_item('log_level')
)
TERMINAL_TEXTBOX = gradio.Textbox(
label = wording.get('uis.terminal_textbox'),
value = read_logs,
lines = 8,
max_lines = 8,
every = 0.5,
show_copy_button = True
)
def listen() -> None:
global LOG_LEVEL_DROPDOWN
LOG_LEVEL_DROPDOWN.change(update_log_level, inputs = LOG_LEVEL_DROPDOWN)
logger.get_package_logger().addHandler(LOG_HANDLER)
tqdm.update = tqdm_update
def update_log_level(log_level : LogLevel) -> None:
state_manager.set_item('log_level', log_level)
logger.init(state_manager.get_item('log_level'))
def tqdm_update(self : tqdm, n : int = 1) -> None:
TQDM_UPDATE(self, n)
output = create_tqdm_output(self)
if output:
LOG_BUFFER.seek(0)
log_buffer = LOG_BUFFER.read()
lines = log_buffer.splitlines()
if lines and lines[-1].startswith(self.desc):
position = log_buffer.rfind(lines[-1])
LOG_BUFFER.seek(position)
else:
LOG_BUFFER.seek(0, os.SEEK_END)
LOG_BUFFER.write(output + os.linesep)
LOG_BUFFER.flush()
def create_tqdm_output(self : tqdm) -> Optional[str]:
if not self.disable and self.desc and self.total:
percentage = math.floor(self.n / self.total * 100)
return self.desc + wording.get('colon') + ' ' + str(percentage) + '% (' + str(self.n) + '/' + str(self.total) + ')'
if not self.disable and self.desc and self.unit:
return self.desc + wording.get('colon') + ' ' + str(self.n) + ' ' + self.unit
return None
def read_logs() -> str:
LOG_BUFFER.seek(0)
logs = LOG_BUFFER.read().rstrip()
return logs
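`tqdm_update` funnels progress output into a shared `StringIO` buffer and, when the latest buffered line belongs to the same bar, rewinds and rewrites it in place so the terminal textbox shows one moving line per bar instead of one line per tick. The same idea as a standalone function — a `truncate()` is added here as a safety net for shrinking lines, an assumption the hooked version above can do without because its progress lines only grow:

```python
import io
import os

def write_progress(log_buffer : io.StringIO, desc : str, output : str) -> None:
	log_buffer.seek(0)
	content = log_buffer.read()
	lines = content.splitlines()
	if lines and lines[-1].startswith(desc):
		# Rewind to the start of the previous progress line and overwrite it
		log_buffer.seek(content.rfind(lines[-1]))
		log_buffer.truncate()
	else:
		log_buffer.seek(0, os.SEEK_END)
	log_buffer.write(output + os.linesep)
	log_buffer.flush()
```

Matching on the bar's `desc` prefix is what distinguishes "same bar, update in place" from "new bar, append a fresh line".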

View File

@@ -1,79 +1,62 @@
from typing import Any, Dict, Tuple, Optional
import gradio
from typing import Optional, Tuple
import facefusion.globals
from facefusion import wording
from gradio_rangeslider import RangeSlider
from facefusion import state_manager, wording
from facefusion.face_store import clear_static_faces
from facefusion.vision import count_video_frame_total
from facefusion.filesystem import is_video
from facefusion.uis.core import get_ui_components, register_ui_component
from facefusion.uis.core import get_ui_components
from facefusion.uis.typing import ComponentOptions
from facefusion.vision import count_video_frame_total
TRIM_FRAME_START_SLIDER : Optional[gradio.Slider] = None
TRIM_FRAME_END_SLIDER : Optional[gradio.Slider] = None
TRIM_FRAME_RANGE_SLIDER : Optional[RangeSlider] = None
def render() -> None:
global TRIM_FRAME_START_SLIDER
global TRIM_FRAME_END_SLIDER
global TRIM_FRAME_RANGE_SLIDER
trim_frame_start_slider_args : Dict[str, Any] =\
trim_frame_range_slider_options : ComponentOptions =\
{
'label': wording.get('uis.trim_frame_start_slider'),
'step': 1,
'label': wording.get('uis.trim_frame_slider'),
'minimum': 0,
'maximum': 100,
'step': 1,
'visible': False
}
trim_frame_end_slider_args : Dict[str, Any] =\
{
'label': wording.get('uis.trim_frame_end_slider'),
'step': 1,
'minimum': 0,
'maximum': 100,
'visible': False
}
if is_video(facefusion.globals.target_path):
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
trim_frame_start_slider_args['value'] = facefusion.globals.trim_frame_start or 0
trim_frame_start_slider_args['maximum'] = video_frame_total
trim_frame_start_slider_args['visible'] = True
trim_frame_end_slider_args['value'] = facefusion.globals.trim_frame_end or video_frame_total
trim_frame_end_slider_args['maximum'] = video_frame_total
trim_frame_end_slider_args['visible'] = True
with gradio.Row():
TRIM_FRAME_START_SLIDER = gradio.Slider(**trim_frame_start_slider_args)
TRIM_FRAME_END_SLIDER = gradio.Slider(**trim_frame_end_slider_args)
register_ui_component('trim_frame_start_slider', TRIM_FRAME_START_SLIDER)
register_ui_component('trim_frame_end_slider', TRIM_FRAME_END_SLIDER)
if is_video(state_manager.get_item('target_path')):
video_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
trim_frame_start = state_manager.get_item('trim_frame_start') or 0
trim_frame_end = state_manager.get_item('trim_frame_end') or video_frame_total
trim_frame_range_slider_options['maximum'] = video_frame_total
trim_frame_range_slider_options['value'] = (trim_frame_start, trim_frame_end)
trim_frame_range_slider_options['visible'] = True
TRIM_FRAME_RANGE_SLIDER = RangeSlider(**trim_frame_range_slider_options)
def listen() -> None:
TRIM_FRAME_START_SLIDER.release(update_trim_frame_start, inputs = TRIM_FRAME_START_SLIDER)
TRIM_FRAME_END_SLIDER.release(update_trim_frame_end, inputs = TRIM_FRAME_END_SLIDER)
TRIM_FRAME_RANGE_SLIDER.release(update_trim_frame, inputs = TRIM_FRAME_RANGE_SLIDER)
for ui_component in get_ui_components(
[
'target_image',
'target_video'
]):
for method in [ 'upload', 'change', 'clear' ]:
getattr(ui_component, method)(remote_update, outputs = [ TRIM_FRAME_START_SLIDER, TRIM_FRAME_END_SLIDER ])
getattr(ui_component, method)(remote_update, outputs = [ TRIM_FRAME_RANGE_SLIDER ])
def remote_update() -> Tuple[gradio.Slider, gradio.Slider]:
if is_video(facefusion.globals.target_path):
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
facefusion.globals.trim_frame_start = None
facefusion.globals.trim_frame_end = None
return gradio.Slider(value = 0, maximum = video_frame_total, visible = True), gradio.Slider(value = video_frame_total, maximum = video_frame_total, visible = True)
return gradio.Slider(value = None, maximum = None, visible = False), gradio.Slider(value = None, maximum = None, visible = False)
def remote_update() -> RangeSlider:
if is_video(state_manager.get_item('target_path')):
video_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
state_manager.clear_item('trim_frame_start')
state_manager.clear_item('trim_frame_end')
return RangeSlider(value = (0, video_frame_total), maximum = video_frame_total, visible = True)
return RangeSlider(visible = False)
def update_trim_frame_start(trim_frame_start : int) -> None:
def update_trim_frame(trim_frame : Tuple[float, float]) -> None:
clear_static_faces()
facefusion.globals.trim_frame_start = trim_frame_start if trim_frame_start > 0 else None
def update_trim_frame_end(trim_frame_end : int) -> None:
clear_static_faces()
video_frame_total = count_video_frame_total(facefusion.globals.target_path)
facefusion.globals.trim_frame_end = trim_frame_end if trim_frame_end < video_frame_total else None
trim_frame_start, trim_frame_end = trim_frame
video_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
trim_frame_start = int(trim_frame_start) if trim_frame_start > 0 else None
trim_frame_end = int(trim_frame_end) if trim_frame_end < video_frame_total else None
state_manager.set_item('trim_frame_start', trim_frame_start)
state_manager.set_item('trim_frame_end', trim_frame_end)
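The two trim sliders collapse into one `RangeSlider`, and `update_trim_frame` normalizes its float pair back into the optional ints the pipeline expects: a start of 0 and an end equal to the frame total both mean "no trimming" and are stored as `None`. That normalization in isolation:

```python
from typing import Optional, Tuple

def normalize_trim_frame(trim_frame : Tuple[float, float], video_frame_total : int) -> Tuple[Optional[int], Optional[int]]:
	trim_frame_start, trim_frame_end = trim_frame
	# Boundary values mean "no trimming" and are stored as None
	trim_frame_start = int(trim_frame_start) if trim_frame_start > 0 else None
	trim_frame_end = int(trim_frame_end) if trim_frame_end < video_frame_total else None
	return trim_frame_start, trim_frame_end
```

Storing `None` rather than the boundary values lets downstream code skip the trim step entirely instead of trimming to a no-op range.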

View File

@@ -0,0 +1,21 @@
from typing import Optional
import gradio
import facefusion.choices
from facefusion import state_manager, wording
from facefusion.uis.core import register_ui_component
UI_WORKFLOW_DROPDOWN : Optional[gradio.Dropdown] = None
def render() -> None:
global UI_WORKFLOW_DROPDOWN
UI_WORKFLOW_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.ui_workflow'),
choices = facefusion.choices.ui_workflows,
value = state_manager.get_item('ui_workflow'),
interactive = True
)
register_ui_component('ui_workflow_dropdown', UI_WORKFLOW_DROPDOWN)


@@ -1,26 +1,25 @@
-from typing import Optional, Generator, Deque
 import os
 import subprocess
+from collections import deque
+from concurrent.futures import ThreadPoolExecutor
+from typing import Deque, Generator, Optional
 import cv2
 import gradio
-from time import sleep
-from concurrent.futures import ThreadPoolExecutor
-from collections import deque
 from tqdm import tqdm
-import facefusion.globals
-from facefusion import logger, wording
+from facefusion import logger, state_manager, wording
 from facefusion.audio import create_empty_audio_frame
 from facefusion.common_helper import is_windows
 from facefusion.content_analyser import analyse_stream
-from facefusion.filesystem import filter_image_paths
-from facefusion.typing import VisionFrame, Face, Fps
-from facefusion.face_analyser import get_average_face
-from facefusion.processors.frame.core import get_frame_processors_modules, load_frame_processor_module
+from facefusion.face_analyser import get_average_face, get_many_faces
 from facefusion.ffmpeg import open_ffmpeg
-from facefusion.vision import normalize_frame_color, read_static_images, unpack_resolution
+from facefusion.filesystem import filter_image_paths
+from facefusion.processors.core import get_processors_modules
+from facefusion.typing import Face, Fps, VisionFrame
-from facefusion.uis.core import get_ui_component
 from facefusion.uis.typing import StreamMode, WebcamMode
+from facefusion.uis.core import get_ui_component, get_ui_components
+from facefusion.vision import normalize_frame_color, read_static_images, unpack_resolution

 WEBCAM_CAPTURE : Optional[cv2.VideoCapture] = None
 WEBCAM_IMAGE : Optional[gradio.Image] = None
@@ -69,32 +68,21 @@ def render() -> None:

 def listen() -> None:
 	start_event = None
 	webcam_mode_radio = get_ui_component('webcam_mode_radio')
 	webcam_resolution_dropdown = get_ui_component('webcam_resolution_dropdown')
 	webcam_fps_slider = get_ui_component('webcam_fps_slider')
 	if webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
 		start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
+	WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)
-	for ui_component in get_ui_components(
-	[
-		'frame_processors_checkbox_group',
-		'face_swapper_model_dropdown',
-		'face_enhancer_model_dropdown',
-		'frame_enhancer_model_dropdown',
-		'lip_syncer_model_dropdown',
-		'source_image'
-	]):
-		ui_component.change(update, cancels = start_event)
-	WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)


 def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
-	facefusion.globals.face_selector_mode = 'one'
-	facefusion.globals.face_analyser_order = 'large-small'
-	source_image_paths = filter_image_paths(facefusion.globals.source_paths)
+	state_manager.set_item('face_selector_mode', 'one')
+	source_image_paths = filter_image_paths(state_manager.get_item('source_paths'))
 	source_frames = read_static_images(source_image_paths)
-	source_face = get_average_face(source_frames)
+	source_faces = get_many_faces(source_frames)
+	source_face = get_average_face(source_faces)
 	stream = None

 	if webcam_mode in [ 'udp', 'v4l2' ]:
@@ -118,34 +106,33 @@ def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -

 def multi_process_capture(source_face : Face, webcam_capture : cv2.VideoCapture, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
-	with tqdm(desc = wording.get('processing'), unit = 'frame', ascii = ' =', disable = facefusion.globals.log_level in [ 'warn', 'error' ]) as progress:
-		with ThreadPoolExecutor(max_workers = facefusion.globals.execution_thread_count) as executor:
-			deque_capture_frames: Deque[VisionFrame] = deque()
+	with tqdm(desc = wording.get('processing'), unit = 'frame', ascii = ' =', disable = state_manager.get_item('log_level') in [ 'warn', 'error' ]) as progress:
+		progress.set_postfix(
+		{
+			'execution_providers': state_manager.get_item('execution_providers'),
+			'execution_thread_count': state_manager.get_item('execution_thread_count')
+		})
+		with ThreadPoolExecutor(max_workers = state_manager.get_item('execution_thread_count')) as executor:
 			futures = []
+			deque_capture_frames : Deque[VisionFrame] = deque()
 			while webcam_capture and webcam_capture.isOpened():
 				_, capture_frame = webcam_capture.read()
 				if analyse_stream(capture_frame, webcam_fps):
 					return
 				future = executor.submit(process_stream_frame, source_face, capture_frame)
 				futures.append(future)
 				for future_done in [ future for future in futures if future.done() ]:
 					capture_frame = future_done.result()
 					deque_capture_frames.append(capture_frame)
 					futures.remove(future_done)
 				while deque_capture_frames:
 					progress.update()
 					yield deque_capture_frames.popleft()


-def update() -> None:
-	for frame_processor in facefusion.globals.frame_processors:
-		frame_processor_module = load_frame_processor_module(frame_processor)
-		while not frame_processor_module.post_check():
-			logger.disable()
-			sleep(0.5)
-		logger.enable()


 def stop() -> gradio.Image:
 	clear_webcam_capture()
 	return gradio.Image(value = None)
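The capture loop above submits each frame to a thread pool, harvests whichever futures have finished so far into a deque, and yields from that deque. That drain pattern can be sketched standalone, with a stand-in work function in place of process_stream_frame (function and parameter names here are illustrative):

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor
from typing import Deque, List


def process_items(items : List[int], max_workers : int = 4) -> List[int]:
	results : Deque[int] = deque()
	futures = []

	with ThreadPoolExecutor(max_workers = max_workers) as executor:
		for item in items:
			# Stand-in for process_stream_frame(): square the value
			futures.append(executor.submit(lambda value : value * value, item))

			# Harvest whatever has finished so far, like the webcam loop does
			for future_done in [ future for future in futures if future.done() ]:
				results.append(future_done.result())
				futures.remove(future_done)

		# Drain the stragglers once submission stops; result() blocks until done
		for future in futures:
			results.append(future.result())
	return list(results)
```

Note that harvesting on completion means results can arrive out of submission order, which is acceptable for a live preview but would need reordering for file output.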
@@ -153,16 +140,16 @@ def stop() -> gradio.Image:

 def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame) -> VisionFrame:
 	source_audio_frame = create_empty_audio_frame()
-	for frame_processor_module in get_frame_processors_modules(facefusion.globals.frame_processors):
+	for processor_module in get_processors_modules(state_manager.get_item('processors')):
 		logger.disable()
-		if frame_processor_module.pre_process('stream'):
-			logger.enable()
-			target_vision_frame = frame_processor_module.process_frame(
+		if processor_module.pre_process('stream'):
+			target_vision_frame = processor_module.process_frame(
 			{
 				'source_face': source_face,
 				'source_audio_frame': source_audio_frame,
 				'target_vision_frame': target_vision_frame
 			})
+		logger.enable()
 	return target_vision_frame

@@ -176,5 +163,5 @@ def open_stream(stream_mode : StreamMode, stream_resolution : str, stream_fps :
 		if device_name:
 			commands.extend([ '-f', 'v4l2', '/dev/' + device_name ])
 	except FileNotFoundError:
-		logger.error(wording.get('stream_not_loaded').format(stream_mode = stream_mode), __name__.upper())
+		logger.error(wording.get('stream_not_loaded').format(stream_mode = stream_mode), __name__)
 	return open_ffmpeg(commands)
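open_stream assembles an FFmpeg argument list before handing it to open_ffmpeg; only the v4l2 branch is visible in the hunk above. A hedged sketch of that assembly for the two webcam stream modes (the input flags and UDP target are assumptions, not taken from the PR):

```python
from typing import List


def build_stream_commands(stream_mode : str, device_name : str = '') -> List[str]:
	# Read raw frames from stdin; these input flags are assumptions
	commands = [ '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-i', '-' ]

	if stream_mode == 'udp':
		# Hypothetical UDP target for a local MPEG-TS stream
		commands.extend([ '-f', 'mpegts', 'udp://localhost:27000' ])
	if stream_mode == 'v4l2' and device_name:
		# Mirror the hunk: v4l2 output targets /dev/<device_name>
		commands.extend([ '-f', 'v4l2', '/dev/' + device_name ])
	return commands
```

Building the command as a list (rather than a shell string) is what lets open_ffmpeg pass it straight to a subprocess without quoting issues.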


@@ -1,4 +1,5 @@
 from typing import Optional
 import gradio
 from facefusion import wording