3.1.0 (#839)

* Replace audio whenever set via source
* add H264_qsv&HEVC_qsv (#768)
* Update ffmpeg.py
* Update choices.py
* Update typing.py
* Fix spaces and newlines
* Fix return type
* Introduce hififace swapper
* Disable stream for expression restorer
* Webcam polishing part1 (#796)
* Cosmetics on ignore comments
* Testing for replace audio
* Testing for restore audio
* Testing for restore audio
* Fix replace_audio()
* Remove shortest and use fixed video duration
* Remove shortest and use fixed video duration
* Prevent duplicate entries to local PATH
* Do hard exit on invalid args
* Need for Python 3.10
* Fix state of face selector
* Fix OpenVINO by aliasing GPU.0 to GPU
* Fix OpenVINO by aliasing GPU.0 to GPU
* Fix/age modifier styleganex 512 (#798)
* fix
* styleganex template
* changes
* changes
* fix occlusion mask
* add age modifier scale
* change
* change
* hardcode
* Cleanup
* Use model_sizes and model_templates variables
* No need for prepare when just 2 lines of code
* Someone used spaces over tabs
* Revert back [0][0]

  Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>

* Feat/update gradio5 (#799)
* Update to Gradio 5
* Remove overrides for Gradio
* Fix dark mode for Gradio
* Polish errors
* More styles for tabs and co
* Make slider inputs and reset like a unit
* Make slider inputs and reset like a unit
* Adjust naming
* Improved color matching (#800)
* aura fix
* fix import
* move to vision.py
* changes
* changes
* changes
* changes
* further reduction
* add test
* better test
* change name
* Minor cleanup
* Minor cleanup
* Minor cleanup
* changes (#801)
* Switch to official assets repo
* Add __pycache__ to gitignore
* Gradio pinned python-multipart to 0.0.12
* Update dependencies
* Feat/temp path second try (#802)
* Terminate base directory from temp helper
* Partial adjust program codebase
* Move arguments around
* Make `-j` absolete
* Resolve args
* Fix job register keys
* Adjust date test
* Finalize temp path
* Update onnxruntime
* Update dependencies
* Adjust color for checkboxes
* Revert due terrible performance
* Fix/enforce vp9 for webm (#805)
* Simple fix to enforce vp9 for webm
* Remove suggest methods from program helper
* Cleanup ffmpeg.py a bit
* Update onnxruntime (second try)
* Update onnxruntime (second try)
* Remove cudnn_conv_algo_search tweaks
* Remove cudnn_conv_algo_search tweaks
* changes
* add both mask instead of multiply
* adaptive color correction
* changes
* remove model size requirement
* changes
* add to facefusion.ini
* changes
* changes
* changes
* Add namespace for dfm creators
* Release five frame enhancer models
* Remove vendor from model name
* Remove vendor from model name
* changes
* changes
* changes
* changes
* Feat/download providers (#809)
* Introduce download providers
* update processors download method
* add ui
* Fix CI
* Adjust UI component order, Use download resolver for benchmark
* Remove is_download_done()
* Introduce download provider set, Remove choices method from execution, cast all dict keys() via list()
* Fix spacing

  Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>

* Fix model paths for 3.1.0
* Introduce bulk-run (#810)
* Introduce bulk-run
* Make bulk run bullet proof
* Integration test for bulk-run
* new alignment
* Add safer global named resolve_file_pattern() (#811)
* Allow bulk runner with target pattern only
* changes
* changes
* Update Python to 3.12 for CI (#813)
* changes
* Improve NVIDIA device lookups
* Rename template key to deepfacelive
* Fix name
* Improve resolve download
* Rename bulk-run to batch-run
* Make deep swapper inputs universal
* Add more deepfacelive models
* Use different morph value
* Feat/simplify hashes sources download (#814)
* Extract download directory path from assets path
* Fix lint
* Fix force-download command, Fix urls in frame enhancer
* changes
* fix warp_face_by_bounding_box dtype error
* DFM Morph (#816)
* changes
* Improve wording, Replace [None], SideQuest: clean forward() of age modifier
* SideQuest: clean forward() of face enhancer

  Co-authored-by: henryruhs <info@henryruhs.com>

* Fix preview refresh after slide
* Add more deepfacelive models (#817)
* Add more deepfacelive models
* Add more deepfacelive models
* Fix deep swapper sizes
* Kill accent colors, Number input styles for Chrome
* Simplify thumbnail-item looks
* Fix first black screen
* Introduce model helper
* ci.yml: Add macOS on ARM64 to the testing (#818)
* ci.yml: Add macOS on ARM64 to the testing
* ci.yml: uses: AnimMouse/setup-ffmpeg@v1
* ci.yml: strategy: matrix: os: macos-latest,
* - name: Set up FFmpeg
* Update .github/workflows/ci.yml
* Update ci.yml

  Co-authored-by: Henry Ruhs <info@henryruhs.com>

* Show/hide morph slider for deep swapper (#822)
* remove dfl_head and update dfl_whole_face template
* Add deep swapper models by Mats
* Add deep swapper models by Druuzil
* Add deep swapper models by Rumateus
* Implement face enhancer weight for codeformer, Side Quest: has proces… (#823)
* Implement face enhancer weight for codeformer, Side Quest: has processor checks
* Fix typo
* Fix face enhancer blend in UI
* Use static model set creation
* Add deep swapper models by Jen
* Introduce create_static_model_set() everywhere (#824)
* Move clear over to the UI (#825)
* Fix model key
* Undo restore_audio()
* Switch to latest XSeg
* Switch to latest XSeg
* Switch to latest XSeg
* Use resolve_download_url() everywhere, Vanish --skip-download flag
* Fix resolve_download_url
* Fix space
* Kill resolve_execution_provider_keys() and move fallbacks where they belong
* Kill resolve_execution_provider_keys() and move fallbacks where they belong
* Remove as this does not work
* Change TempFrameFormat order
* Fix CoreML partially
* Remove duplicates (Rumateus is the creator)
* Add deep swapper models by Edel
* Introduce download scopes (#826)
* Introduce download scopes
* Limit download scopes to force-download command
* Change source-paths behaviour
* Fix space
* Update README
* Rename create_log_level_program to create_misc_program
* Fix wording
* Fix wording
* Update dependencies
* Use tolerant for video_memory_strategy in benchmark
* Feat/ffmpeg with progress (#827)
* FFmpeg with progress bar
* Fix typing
* FFmpeg with progress bar part2
* Restore streaming wording
* Change order in choices and typing
* Introduce File using list_directory() (#830)
* Feat/local deep swapper models (#832)
* Local model support for deep swapper
* Local model support for deep swapper part2
* Local model support for deep swapper part3
* Update yet another dfm by Druuzil
* Refactor/choices and naming (#833)
* Refactor choices, imports and naming
* Refactor choices, imports and naming
* Fix styles for tabs, Restore toast
* Update yet another dfm by Druuzil
* Feat/face masker models (#834)
* Introduce face masker models
* Introduce face masker models
* Introduce face masker models
* Register needed step keys
* Provide different XSeg models
* Simplify model context
* Fix out of range for trim frame, Fix ffmpeg extraction count (#836)
* Fix out of range for trim frame, Fix ffmpeg extraction count
* Move restrict of trim frame to the core, Make sure all values are within the range
* Fix and merge testing
* Fix typing
* Add region mask for deep swapper
* Adjust wording
* Move FACE_MASK_REGIONS to choices
* Update dependencies
* Feat/download provider fallback (#837)
* Introduce download providers fallback, Use CURL everywhre
* Fix CI
* Use readlines() over readline() to avoid while
* Use readlines() over readline() to avoid while
* Use readlines() over readline() to avoid while
* Use communicate() over wait()
* Minor updates for testing
* Stop webcam on source image change
* Feat/webcam improvements (#838)
* Detect available webcams
* Fix CI, Move webcam id dropdown to the sidebar, Disable warnings
* Fix CI
* Remove signal on hard_exit() to prevent exceptions
* Fix border color in toast timer
* Prepare release
* Update preview
* Update preview
* Hotfix progress bar

Co-authored-by: DDXDB <38449595+DDXDB@users.noreply.github.com>
Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: Christian Clauss <cclauss@me.com>
@@ -17,11 +17,12 @@ def render() -> None:
 	global AGE_MODIFIER_MODEL_DROPDOWN
 	global AGE_MODIFIER_DIRECTION_SLIDER
 
+	has_age_modifier = 'age_modifier' in state_manager.get_item('processors')
 	AGE_MODIFIER_MODEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.age_modifier_model_dropdown'),
 		choices = processors_choices.age_modifier_models,
 		value = state_manager.get_item('age_modifier_model'),
-		visible = 'age_modifier' in state_manager.get_item('processors')
+		visible = has_age_modifier
 	)
 	AGE_MODIFIER_DIRECTION_SLIDER = gradio.Slider(
 		label = wording.get('uis.age_modifier_direction_slider'),
@@ -29,7 +30,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.age_modifier_direction_range),
 		minimum = processors_choices.age_modifier_direction_range[0],
 		maximum = processors_choices.age_modifier_direction_range[-1],
-		visible = 'age_modifier' in state_manager.get_item('processors')
+		visible = has_age_modifier
 	)
 	register_ui_component('age_modifier_model_dropdown', AGE_MODIFIER_MODEL_DROPDOWN)
 	register_ui_component('age_modifier_direction_slider', AGE_MODIFIER_DIRECTION_SLIDER)
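The refactor above hoists a repeated membership test into a single local flag so every component's `visible` kwarg stays in sync. A minimal standalone sketch of the same pattern, without Gradio (the function and dict keys are illustrative, not part of facefusion):

```python
def build_visibility(processors: list) -> dict:
	# compute the membership test once and reuse the flag everywhere,
	# mirroring the has_age_modifier variable introduced in the diff
	has_age_modifier = 'age_modifier' in processors
	return {
		'age_modifier_model_dropdown': has_age_modifier,
		'age_modifier_direction_slider': has_age_modifier,
	}
```

The payoff is that a later rename of the processor key only touches one line instead of one line per component.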
@@ -16,8 +16,8 @@ def render() -> None:
 
 	BENCHMARK_RUNS_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.benchmark_runs_checkbox_group'),
-		value = list(BENCHMARKS.keys()),
-		choices = list(BENCHMARKS.keys())
+		choices = list(BENCHMARKS.keys()),
+		value = list(BENCHMARKS.keys())
 	)
 	BENCHMARK_CYCLES_SLIDER = gradio.Slider(
 		label = wording.get('uis.benchmark_cycles_slider'),
@@ -13,8 +13,6 @@ def render() -> None:
 
 	common_options = []
 
-	if state_manager.get_item('skip_download'):
-		common_options.append('skip-download')
 	if state_manager.get_item('keep_temp'):
 		common_options.append('keep-temp')
 	if state_manager.get_item('skip_audio'):
@@ -32,9 +30,7 @@ def listen() -> None:
 
 
 def update(common_options : List[str]) -> None:
-	skip_temp = 'skip-download' in common_options
 	keep_temp = 'keep-temp' in common_options
 	skip_audio = 'skip-audio' in common_options
-	state_manager.set_item('skip_download', skip_temp)
 	state_manager.set_item('keep_temp', keep_temp)
 	state_manager.set_item('skip_audio', skip_audio)
facefusion/uis/components/deep_swapper_options.py (new executable file, 65 lines)
@@ -0,0 +1,65 @@
+from typing import List, Optional, Tuple
+
+import gradio
+
+from facefusion import state_manager, wording
+from facefusion.common_helper import calc_int_step
+from facefusion.processors import choices as processors_choices
+from facefusion.processors.core import load_processor_module
+from facefusion.processors.modules.deep_swapper import has_morph_input
+from facefusion.processors.typing import DeepSwapperModel
+from facefusion.uis.core import get_ui_component, register_ui_component
+
+DEEP_SWAPPER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
+DEEP_SWAPPER_MORPH_SLIDER : Optional[gradio.Slider] = None
+
+
+def render() -> None:
+	global DEEP_SWAPPER_MODEL_DROPDOWN
+	global DEEP_SWAPPER_MORPH_SLIDER
+
+	has_deep_swapper = 'deep_swapper' in state_manager.get_item('processors')
+	DEEP_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
+		label = wording.get('uis.deep_swapper_model_dropdown'),
+		choices = processors_choices.deep_swapper_models,
+		value = state_manager.get_item('deep_swapper_model'),
+		visible = has_deep_swapper
+	)
+	DEEP_SWAPPER_MORPH_SLIDER = gradio.Slider(
+		label = wording.get('uis.deep_swapper_morph_slider'),
+		value = state_manager.get_item('deep_swapper_morph'),
+		step = calc_int_step(processors_choices.deep_swapper_morph_range),
+		minimum = processors_choices.deep_swapper_morph_range[0],
+		maximum = processors_choices.deep_swapper_morph_range[-1],
+		visible = has_deep_swapper and has_morph_input()
+	)
+	register_ui_component('deep_swapper_model_dropdown', DEEP_SWAPPER_MODEL_DROPDOWN)
+	register_ui_component('deep_swapper_morph_slider', DEEP_SWAPPER_MORPH_SLIDER)
+
+
+def listen() -> None:
+	DEEP_SWAPPER_MODEL_DROPDOWN.change(update_deep_swapper_model, inputs = DEEP_SWAPPER_MODEL_DROPDOWN, outputs = [ DEEP_SWAPPER_MODEL_DROPDOWN, DEEP_SWAPPER_MORPH_SLIDER ])
+	DEEP_SWAPPER_MORPH_SLIDER.release(update_deep_swapper_morph, inputs = DEEP_SWAPPER_MORPH_SLIDER)
+
+	processors_checkbox_group = get_ui_component('processors_checkbox_group')
+	if processors_checkbox_group:
+		processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ DEEP_SWAPPER_MODEL_DROPDOWN, DEEP_SWAPPER_MORPH_SLIDER ])
+
+
+def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
+	has_deep_swapper = 'deep_swapper' in processors
+	return gradio.Dropdown(visible = has_deep_swapper), gradio.Slider(visible = has_deep_swapper and has_morph_input())
+
+
+def update_deep_swapper_model(deep_swapper_model : DeepSwapperModel) -> Tuple[gradio.Dropdown, gradio.Slider]:
+	deep_swapper_module = load_processor_module('deep_swapper')
+	deep_swapper_module.clear_inference_pool()
+	state_manager.set_item('deep_swapper_model', deep_swapper_model)
+
+	if deep_swapper_module.pre_check():
+		return gradio.Dropdown(value = state_manager.get_item('deep_swapper_model')), gradio.Slider(visible = has_morph_input())
+	return gradio.Dropdown(), gradio.Slider()
+
+
+def update_deep_swapper_morph(deep_swapper_morph : int) -> None:
+	state_manager.set_item('deep_swapper_morph', deep_swapper_morph)
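The visibility logic in `remote_update()` and `update_deep_swapper_model()` reduces to two boolean conditions. A sketch without Gradio, where `model_has_morph` stands in for `has_morph_input()` (which is assumed to inspect the selected model):

```python
def dropdown_visible(processors: list) -> bool:
	# the model dropdown shows whenever the deep swapper processor is enabled
	return 'deep_swapper' in processors

def morph_slider_visible(processors: list, model_has_morph: bool) -> bool:
	# the morph slider additionally requires a model that declares a morph input
	return 'deep_swapper' in processors and model_has_morph
```

This matches the "Show/hide morph slider for deep swapper" commit: toggling the processor or switching to a morph-less model hides the slider.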
facefusion/uis/components/download.py (new file, 48 lines)
@@ -0,0 +1,48 @@
+from typing import List, Optional
+
+import gradio
+
+import facefusion.choices
+from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, state_manager, voice_extractor, wording
+from facefusion.filesystem import list_directory
+from facefusion.processors.core import get_processors_modules
+from facefusion.typing import DownloadProvider
+
+DOWNLOAD_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
+
+
+def render() -> None:
+	global DOWNLOAD_PROVIDERS_CHECKBOX_GROUP
+
+	DOWNLOAD_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
+		label = wording.get('uis.download_providers_checkbox_group'),
+		choices = facefusion.choices.download_providers,
+		value = state_manager.get_item('download_providers')
+	)
+
+
+def listen() -> None:
+	DOWNLOAD_PROVIDERS_CHECKBOX_GROUP.change(update_download_providers, inputs = DOWNLOAD_PROVIDERS_CHECKBOX_GROUP, outputs = DOWNLOAD_PROVIDERS_CHECKBOX_GROUP)
+
+
+def update_download_providers(download_providers : List[DownloadProvider]) -> gradio.CheckboxGroup:
+	common_modules =\
+	[
+		content_analyser,
+		face_classifier,
+		face_detector,
+		face_landmarker,
+		face_recognizer,
+		face_masker,
+		voice_extractor
+	]
+	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	processor_modules = get_processors_modules(available_processors)
+
+	for module in common_modules + processor_modules:
+		if hasattr(module, 'create_static_model_set'):
+			module.create_static_model_set.cache_clear()
+
+	download_providers = download_providers or facefusion.choices.download_providers
+	state_manager.set_item('download_providers', download_providers)
+	return gradio.CheckboxGroup(value = state_manager.get_item('download_providers'))
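`update_download_providers()` calls `cache_clear()` on each module's `create_static_model_set`, which implies those factories are memoised, presumably via `functools.lru_cache`. A self-contained sketch of that invalidation pattern (the function body and the `calls` counter are illustrative, not the real facefusion implementation):

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize = None)
def create_static_model_set(download_scope: str) -> dict:
	# in facefusion this would resolve model URLs against the active
	# download providers; the counter just proves the memoisation
	calls.append(download_scope)
	return { 'scope': download_scope }

create_static_model_set('full')
create_static_model_set('full')        # second call is served from the cache
assert len(calls) == 1
create_static_model_set.cache_clear()  # what update_download_providers() triggers
create_static_model_set('full')        # recomputed under the new providers
assert len(calls) == 2
```

Clearing the cache on a provider change forces every model set to re-resolve its download URLs against the newly selected providers.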
@@ -3,9 +3,10 @@ from typing import List, Optional
 import gradio
 
 from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, state_manager, voice_extractor, wording
-from facefusion.execution import get_execution_provider_choices
-from facefusion.processors.core import clear_processors_modules
-from facefusion.typing import ExecutionProviderKey
+from facefusion.execution import get_available_execution_providers
+from facefusion.filesystem import list_directory
+from facefusion.processors.core import get_processors_modules
+from facefusion.typing import ExecutionProvider
 
 EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 
@@ -15,7 +16,7 @@ def render() -> None:
 
 	EXECUTION_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.execution_providers_checkbox_group'),
-		choices = get_execution_provider_choices(),
+		choices = get_available_execution_providers(),
 		value = state_manager.get_item('execution_providers')
 	)
 
@@ -24,15 +25,24 @@ def listen() -> None:
 	EXECUTION_PROVIDERS_CHECKBOX_GROUP.change(update_execution_providers, inputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP, outputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP)
 
 
-def update_execution_providers(execution_providers : List[ExecutionProviderKey]) -> gradio.CheckboxGroup:
-	content_analyser.clear_inference_pool()
-	face_classifier.clear_inference_pool()
-	face_detector.clear_inference_pool()
-	face_landmarker.clear_inference_pool()
-	face_masker.clear_inference_pool()
-	face_recognizer.clear_inference_pool()
-	voice_extractor.clear_inference_pool()
-	clear_processors_modules(state_manager.get_item('processors'))
-	execution_providers = execution_providers or get_execution_provider_choices()
+def update_execution_providers(execution_providers : List[ExecutionProvider]) -> gradio.CheckboxGroup:
+	common_modules =\
+	[
+		content_analyser,
+		face_classifier,
+		face_detector,
+		face_landmarker,
+		face_masker,
+		face_recognizer,
+		voice_extractor
+	]
+	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	processor_modules = get_processors_modules(available_processors)
+
+	for module in common_modules + processor_modules:
+		if hasattr(module, 'clear_inference_pool'):
+			module.clear_inference_pool()
+
+	execution_providers = execution_providers or get_available_execution_providers()
 	state_manager.set_item('execution_providers', execution_providers)
 	return gradio.CheckboxGroup(value = state_manager.get_item('execution_providers'))
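Two small Python idioms carry this hunk: duck-typed teardown via `hasattr()`, and falling back to the full provider list when the checkbox group is emptied. A minimal sketch with stand-in classes (not the real facefusion modules):

```python
class WithPool:
	# stand-in for a module that owns an inference pool
	def __init__(self):
		self.cleared = False
	def clear_inference_pool(self):
		self.cleared = True

class WithoutPool:
	# stand-in for a module without one
	pass

modules = [ WithPool(), WithoutPool() ]
for module in modules:
	# only call the hook where it exists, as the diff does with hasattr()
	if hasattr(module, 'clear_inference_pool'):
		module.clear_inference_pool()

def resolve_providers(selected: list, available: list) -> list:
	# an empty selection falls back to every available provider
	return selected or available
```

Iterating one heterogeneous list replaces the eight hand-written `clear_inference_pool()` calls the diff removes, so newly added modules are torn down without touching this function again.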
@@ -17,11 +17,12 @@ def render() -> None:
 	global EXPRESSION_RESTORER_MODEL_DROPDOWN
 	global EXPRESSION_RESTORER_FACTOR_SLIDER
 
+	has_expression_restorer = 'expression_restorer' in state_manager.get_item('processors')
 	EXPRESSION_RESTORER_MODEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.expression_restorer_model_dropdown'),
 		choices = processors_choices.expression_restorer_models,
 		value = state_manager.get_item('expression_restorer_model'),
-		visible = 'expression_restorer' in state_manager.get_item('processors')
+		visible = has_expression_restorer
 	)
 	EXPRESSION_RESTORER_FACTOR_SLIDER = gradio.Slider(
 		label = wording.get('uis.expression_restorer_factor_slider'),
@@ -29,7 +30,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.expression_restorer_factor_range),
 		minimum = processors_choices.expression_restorer_factor_range[0],
 		maximum = processors_choices.expression_restorer_factor_range[-1],
-		visible = 'expression_restorer' in state_manager.get_item('processors'),
+		visible = has_expression_restorer
 	)
 	register_ui_component('expression_restorer_model_dropdown', EXPRESSION_RESTORER_MODEL_DROPDOWN)
 	register_ui_component('expression_restorer_factor_slider', EXPRESSION_RESTORER_FACTOR_SLIDER)
@@ -13,11 +13,12 @@ FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 def render() -> None:
 	global FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP
 
+	has_face_debugger = 'face_debugger' in state_manager.get_item('processors')
 	FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.face_debugger_items_checkbox_group'),
 		choices = processors_choices.face_debugger_items,
 		value = state_manager.get_item('face_debugger_items'),
-		visible = 'face_debugger' in state_manager.get_item('processors')
+		visible = has_face_debugger
 	)
 	register_ui_component('face_debugger_items_checkbox_group', FACE_DEBUGGER_ITEMS_CHECKBOX_GROUP)
@@ -3,7 +3,7 @@ from typing import Optional, Sequence, Tuple
 import gradio
 
 import facefusion.choices
-from facefusion import choices, face_detector, state_manager, wording
+from facefusion import face_detector, state_manager, wording
 from facefusion.common_helper import calc_float_step, get_last
 from facefusion.typing import Angle, FaceDetectorModel, Score
 from facefusion.uis.core import register_ui_component
@@ -31,7 +31,7 @@ def render() -> None:
 	with gradio.Row():
 		FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
 			label = wording.get('uis.face_detector_model_dropdown'),
-			choices = facefusion.choices.face_detector_set.keys(),
+			choices = facefusion.choices.face_detector_models,
 			value = state_manager.get_item('face_detector_model')
 		)
 		FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_options)
@@ -65,7 +65,7 @@ def update_face_detector_model(face_detector_model : FaceDetectorModel) -> Tuple
 	state_manager.set_item('face_detector_model', face_detector_model)
 
 	if face_detector.pre_check():
-		face_detector_size_choices = choices.face_detector_set.get(state_manager.get_item('face_detector_model'))
+		face_detector_size_choices = facefusion.choices.face_detector_set.get(state_manager.get_item('face_detector_model'))
 		state_manager.set_item('face_detector_size', get_last(face_detector_size_choices))
 		return gradio.Dropdown(value = state_manager.get_item('face_detector_model')), gradio.Dropdown(value = state_manager.get_item('face_detector_size'), choices = face_detector_size_choices)
 	return gradio.Dropdown(), gradio.Dropdown()
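The size lookup above maps each detector model to its supported sizes and defaults to the last (typically largest) entry via `get_last()`. A standalone miniature of that flow; the model names and size lists are illustrative values, not the contents of `facefusion.choices.face_detector_set`:

```python
# hypothetical model-to-sizes mapping in the shape of face_detector_set
face_detector_set = {
	'retinaface': [ '320x320', '640x640' ],
	'yoloface': [ '640x640' ],
}

def get_last(items):
	# mirrors common_helper.get_last(): last entry, or None when empty/missing
	return items[-1] if items else None

def update_face_detector_model(model: str):
	# on a model change, default the size to the last entry for that model
	face_detector_size_choices = face_detector_set.get(model)
	return get_last(face_detector_size_choices), face_detector_size_choices
```

Keeping the choices and the default in one place is why the diff routes both through `face_detector_set.get(...)` instead of duplicating the size lists.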
@@ -43,11 +43,12 @@ def render() -> None:
 	global FACE_EDITOR_HEAD_YAW_SLIDER
 	global FACE_EDITOR_HEAD_ROLL_SLIDER
 
+	has_face_editor = 'face_editor' in state_manager.get_item('processors')
 	FACE_EDITOR_MODEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.face_editor_model_dropdown'),
 		choices = processors_choices.face_editor_models,
 		value = state_manager.get_item('face_editor_model'),
-		visible = 'face_editor' in state_manager.get_item('processors')
+		visible = has_face_editor
 	)
 	FACE_EDITOR_EYEBROW_DIRECTION_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_eyebrow_direction_slider'),
@@ -55,7 +56,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_eyebrow_direction_range),
 		minimum = processors_choices.face_editor_eyebrow_direction_range[0],
 		maximum = processors_choices.face_editor_eyebrow_direction_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_EYE_GAZE_HORIZONTAL_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_eye_gaze_horizontal_slider'),
@@ -63,7 +64,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_eye_gaze_horizontal_range),
 		minimum = processors_choices.face_editor_eye_gaze_horizontal_range[0],
 		maximum = processors_choices.face_editor_eye_gaze_horizontal_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_EYE_GAZE_VERTICAL_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_eye_gaze_vertical_slider'),
@@ -71,7 +72,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_eye_gaze_vertical_range),
 		minimum = processors_choices.face_editor_eye_gaze_vertical_range[0],
 		maximum = processors_choices.face_editor_eye_gaze_vertical_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_EYE_OPEN_RATIO_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_eye_open_ratio_slider'),
@@ -79,7 +80,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_eye_open_ratio_range),
 		minimum = processors_choices.face_editor_eye_open_ratio_range[0],
 		maximum = processors_choices.face_editor_eye_open_ratio_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_LIP_OPEN_RATIO_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_lip_open_ratio_slider'),
@@ -87,7 +88,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_lip_open_ratio_range),
 		minimum = processors_choices.face_editor_lip_open_ratio_range[0],
 		maximum = processors_choices.face_editor_lip_open_ratio_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_GRIM_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_grim_slider'),
@@ -95,7 +96,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_grim_range),
 		minimum = processors_choices.face_editor_mouth_grim_range[0],
 		maximum = processors_choices.face_editor_mouth_grim_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_POUT_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_pout_slider'),
@@ -103,7 +104,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_pout_range),
 		minimum = processors_choices.face_editor_mouth_pout_range[0],
 		maximum = processors_choices.face_editor_mouth_pout_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_PURSE_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_purse_slider'),
@@ -111,7 +112,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_purse_range),
 		minimum = processors_choices.face_editor_mouth_purse_range[0],
 		maximum = processors_choices.face_editor_mouth_purse_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_SMILE_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_smile_slider'),
@@ -119,7 +120,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_smile_range),
 		minimum = processors_choices.face_editor_mouth_smile_range[0],
 		maximum = processors_choices.face_editor_mouth_smile_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_POSITION_HORIZONTAL_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_position_horizontal_slider'),
@@ -127,7 +128,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_position_horizontal_range),
 		minimum = processors_choices.face_editor_mouth_position_horizontal_range[0],
 		maximum = processors_choices.face_editor_mouth_position_horizontal_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_MOUTH_POSITION_VERTICAL_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_mouth_position_vertical_slider'),
@@ -135,7 +136,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_mouth_position_vertical_range),
 		minimum = processors_choices.face_editor_mouth_position_vertical_range[0],
 		maximum = processors_choices.face_editor_mouth_position_vertical_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_HEAD_PITCH_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_head_pitch_slider'),
@@ -143,7 +144,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_head_pitch_range),
 		minimum = processors_choices.face_editor_head_pitch_range[0],
 		maximum = processors_choices.face_editor_head_pitch_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_HEAD_YAW_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_head_yaw_slider'),
@@ -151,7 +152,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_head_yaw_range),
 		minimum = processors_choices.face_editor_head_yaw_range[0],
 		maximum = processors_choices.face_editor_head_yaw_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	FACE_EDITOR_HEAD_ROLL_SLIDER = gradio.Slider(
 		label = wording.get('uis.face_editor_head_roll_slider'),
@@ -159,7 +160,7 @@ def render() -> None:
 		step = calc_float_step(processors_choices.face_editor_head_roll_range),
 		minimum = processors_choices.face_editor_head_roll_range[0],
 		maximum = processors_choices.face_editor_head_roll_range[-1],
-		visible = 'face_editor' in state_manager.get_item('processors'),
+		visible = has_face_editor
 	)
 	register_ui_component('face_editor_model_dropdown', FACE_EDITOR_MODEL_DROPDOWN)
 	register_ui_component('face_editor_eyebrow_direction_slider', FACE_EDITOR_EYEBROW_DIRECTION_SLIDER)
@@ -3,25 +3,29 @@ from typing import List, Optional, Tuple
import gradio

from facefusion import state_manager, wording
from facefusion.common_helper import calc_int_step
from facefusion.common_helper import calc_float_step, calc_int_step
from facefusion.processors import choices as processors_choices
from facefusion.processors.core import load_processor_module
from facefusion.processors.modules.face_enhancer import has_weight_input
from facefusion.processors.typing import FaceEnhancerModel
from facefusion.uis.core import get_ui_component, register_ui_component

FACE_ENHANCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_ENHANCER_BLEND_SLIDER : Optional[gradio.Slider] = None
FACE_ENHANCER_WEIGHT_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
global FACE_ENHANCER_MODEL_DROPDOWN
global FACE_ENHANCER_BLEND_SLIDER
global FACE_ENHANCER_WEIGHT_SLIDER

has_face_enhancer = 'face_enhancer' in state_manager.get_item('processors')
FACE_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_enhancer_model_dropdown'),
choices = processors_choices.face_enhancer_models,
value = state_manager.get_item('face_enhancer_model'),
visible = 'face_enhancer' in state_manager.get_item('processors')
visible = has_face_enhancer
)
FACE_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.face_enhancer_blend_slider'),
@@ -29,35 +33,50 @@ def render() -> None:
step = calc_int_step(processors_choices.face_enhancer_blend_range),
minimum = processors_choices.face_enhancer_blend_range[0],
maximum = processors_choices.face_enhancer_blend_range[-1],
visible = 'face_enhancer' in state_manager.get_item('processors')
visible = has_face_enhancer
)
FACE_ENHANCER_WEIGHT_SLIDER = gradio.Slider(
label = wording.get('uis.face_enhancer_weight_slider'),
value = state_manager.get_item('face_enhancer_weight'),
step = calc_float_step(processors_choices.face_enhancer_weight_range),
minimum = processors_choices.face_enhancer_weight_range[0],
maximum = processors_choices.face_enhancer_weight_range[-1],
visible = has_face_enhancer and has_weight_input()
)
register_ui_component('face_enhancer_model_dropdown', FACE_ENHANCER_MODEL_DROPDOWN)
register_ui_component('face_enhancer_blend_slider', FACE_ENHANCER_BLEND_SLIDER)
register_ui_component('face_enhancer_weight_slider', FACE_ENHANCER_WEIGHT_SLIDER)


def listen() -> None:
FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = FACE_ENHANCER_MODEL_DROPDOWN)
FACE_ENHANCER_MODEL_DROPDOWN.change(update_face_enhancer_model, inputs = FACE_ENHANCER_MODEL_DROPDOWN, outputs = [ FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_WEIGHT_SLIDER ])
FACE_ENHANCER_BLEND_SLIDER.release(update_face_enhancer_blend, inputs = FACE_ENHANCER_BLEND_SLIDER)
FACE_ENHANCER_WEIGHT_SLIDER.release(update_face_enhancer_weight, inputs = FACE_ENHANCER_WEIGHT_SLIDER)

processors_checkbox_group = get_ui_component('processors_checkbox_group')
if processors_checkbox_group:
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER ])
processors_checkbox_group.change(remote_update, inputs = processors_checkbox_group, outputs = [ FACE_ENHANCER_MODEL_DROPDOWN, FACE_ENHANCER_BLEND_SLIDER, FACE_ENHANCER_WEIGHT_SLIDER ])


def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider]:
def remote_update(processors : List[str]) -> Tuple[gradio.Dropdown, gradio.Slider, gradio.Slider]:
has_face_enhancer = 'face_enhancer' in processors
return gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer)
return gradio.Dropdown(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer), gradio.Slider(visible = has_face_enhancer and has_weight_input())


def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> gradio.Dropdown:
def update_face_enhancer_model(face_enhancer_model : FaceEnhancerModel) -> Tuple[gradio.Dropdown, gradio.Slider]:
face_enhancer_module = load_processor_module('face_enhancer')
face_enhancer_module.clear_inference_pool()
state_manager.set_item('face_enhancer_model', face_enhancer_model)

if face_enhancer_module.pre_check():
return gradio.Dropdown(value = state_manager.get_item('face_enhancer_model'))
return gradio.Dropdown()
return gradio.Dropdown(value = state_manager.get_item('face_enhancer_model')), gradio.Slider(visible = has_weight_input())
return gradio.Dropdown(), gradio.Slider()


def update_face_enhancer_blend(face_enhancer_blend : float) -> None:
state_manager.set_item('face_enhancer_blend', int(face_enhancer_blend))


def update_face_enhancer_weight(face_enhancer_weight : float) -> None:
state_manager.set_item('face_enhancer_weight', face_enhancer_weight)


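The face_enhancer diff above applies one pattern that recurs throughout these UI changes: compute a `has_*` visibility flag once from the selected processors, then return one component update per registered output. A minimal standalone sketch of that flag logic (function and key names here are illustrative, not FaceFusion API; the real `remote_update` returns `gradio.Dropdown`/`gradio.Slider` updates instead of booleans):

```python
from typing import Dict, List


def enhancer_visibility(processors : List[str], has_weight_input : bool) -> Dict[str, bool]:
	# the model dropdown and blend slider follow the processor toggle;
	# the weight slider additionally requires a model with a weight input
	has_face_enhancer = 'face_enhancer' in processors
	return\
	{
		'model_dropdown': has_face_enhancer,
		'blend_slider': has_face_enhancer,
		'weight_slider': has_face_enhancer and has_weight_input
	}
```

Keeping the flag in one local variable is what lets the diff replace three repeated `'face_enhancer' in state_manager.get_item('processors')` expressions with a single `has_face_enhancer`.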
@@ -3,11 +3,13 @@ from typing import List, Optional, Tuple
import gradio

import facefusion.choices
from facefusion import state_manager, wording
from facefusion import face_masker, state_manager, wording
from facefusion.common_helper import calc_float_step, calc_int_step
from facefusion.typing import FaceMaskRegion, FaceMaskType
from facefusion.typing import FaceMaskRegion, FaceMaskType, FaceOccluderModel, FaceParserModel
from facefusion.uis.core import register_ui_component

FACE_OCCLUDER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_PARSER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
FACE_MASK_TYPES_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
FACE_MASK_REGIONS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
FACE_MASK_BLUR_SLIDER : Optional[gradio.Slider] = None
@@ -18,6 +20,8 @@ FACE_MASK_PADDING_LEFT_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
global FACE_OCCLUDER_MODEL_DROPDOWN
global FACE_PARSER_MODEL_DROPDOWN
global FACE_MASK_TYPES_CHECKBOX_GROUP
global FACE_MASK_REGIONS_CHECKBOX_GROUP
global FACE_MASK_BLUR_SLIDER
@@ -28,6 +32,17 @@ def render() -> None:

has_box_mask = 'box' in state_manager.get_item('face_mask_types')
has_region_mask = 'region' in state_manager.get_item('face_mask_types')
with gradio.Row():
FACE_OCCLUDER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_occluder_model_dropdown'),
choices = facefusion.choices.face_occluder_models,
value = state_manager.get_item('face_occluder_model')
)
FACE_PARSER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_parser_model_dropdown'),
choices = facefusion.choices.face_parser_models,
value = state_manager.get_item('face_parser_model')
)
FACE_MASK_TYPES_CHECKBOX_GROUP = gradio.CheckboxGroup(
label = wording.get('uis.face_mask_types_checkbox_group'),
choices = facefusion.choices.face_mask_types,
@@ -82,6 +97,8 @@ def render() -> None:
value = state_manager.get_item('face_mask_padding')[3],
visible = has_box_mask
)
register_ui_component('face_occluder_model_dropdown', FACE_OCCLUDER_MODEL_DROPDOWN)
register_ui_component('face_parser_model_dropdown', FACE_PARSER_MODEL_DROPDOWN)
register_ui_component('face_mask_types_checkbox_group', FACE_MASK_TYPES_CHECKBOX_GROUP)
register_ui_component('face_mask_regions_checkbox_group', FACE_MASK_REGIONS_CHECKBOX_GROUP)
register_ui_component('face_mask_blur_slider', FACE_MASK_BLUR_SLIDER)
@@ -92,6 +109,8 @@ def render() -> None:


def listen() -> None:
FACE_OCCLUDER_MODEL_DROPDOWN.change(update_face_occluder_model, inputs = FACE_OCCLUDER_MODEL_DROPDOWN)
FACE_PARSER_MODEL_DROPDOWN.change(update_face_parser_model, inputs = FACE_PARSER_MODEL_DROPDOWN)
FACE_MASK_TYPES_CHECKBOX_GROUP.change(update_face_mask_types, inputs = FACE_MASK_TYPES_CHECKBOX_GROUP, outputs = [ FACE_MASK_TYPES_CHECKBOX_GROUP, FACE_MASK_REGIONS_CHECKBOX_GROUP, FACE_MASK_BLUR_SLIDER, FACE_MASK_PADDING_TOP_SLIDER, FACE_MASK_PADDING_RIGHT_SLIDER, FACE_MASK_PADDING_BOTTOM_SLIDER, FACE_MASK_PADDING_LEFT_SLIDER ])
FACE_MASK_REGIONS_CHECKBOX_GROUP.change(update_face_mask_regions, inputs = FACE_MASK_REGIONS_CHECKBOX_GROUP, outputs = FACE_MASK_REGIONS_CHECKBOX_GROUP)
FACE_MASK_BLUR_SLIDER.release(update_face_mask_blur, inputs = FACE_MASK_BLUR_SLIDER)
@@ -100,6 +119,24 @@ def listen() -> None:
face_mask_padding_slider.release(update_face_mask_padding, inputs = face_mask_padding_sliders)


def update_face_occluder_model(face_occluder_model : FaceOccluderModel) -> gradio.Dropdown:
face_masker.clear_inference_pool()
state_manager.set_item('face_occluder_model', face_occluder_model)

if face_masker.pre_check():
return gradio.Dropdown(value = state_manager.get_item('face_occluder_model'))
return gradio.Dropdown()


def update_face_parser_model(face_parser_model : FaceParserModel) -> gradio.Dropdown:
face_masker.clear_inference_pool()
state_manager.set_item('face_parser_model', face_parser_model)

if face_masker.pre_check():
return gradio.Dropdown(value = state_manager.get_item('face_parser_model'))
return gradio.Dropdown()


def update_face_mask_types(face_mask_types : List[FaceMaskType]) -> Tuple[gradio.CheckboxGroup, gradio.CheckboxGroup, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider]:
face_mask_types = face_mask_types or facefusion.choices.face_mask_types
state_manager.set_item('face_mask_types', face_mask_types)

@@ -17,17 +17,18 @@ def render() -> None:
global FACE_SWAPPER_MODEL_DROPDOWN
global FACE_SWAPPER_PIXEL_BOOST_DROPDOWN

has_face_swapper = 'face_swapper' in state_manager.get_item('processors')
FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_swapper_model_dropdown'),
choices = processors_choices.face_swapper_set.keys(),
choices = processors_choices.face_swapper_models,
value = state_manager.get_item('face_swapper_model'),
visible = 'face_swapper' in state_manager.get_item('processors')
visible = has_face_swapper
)
FACE_SWAPPER_PIXEL_BOOST_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.face_swapper_pixel_boost_dropdown'),
choices = processors_choices.face_swapper_set.get(state_manager.get_item('face_swapper_model')),
value = state_manager.get_item('face_swapper_pixel_boost'),
visible = 'face_swapper' in state_manager.get_item('processors')
visible = has_face_swapper
)
register_ui_component('face_swapper_model_dropdown', FACE_SWAPPER_MODEL_DROPDOWN)
register_ui_component('face_swapper_pixel_boost_dropdown', FACE_SWAPPER_PIXEL_BOOST_DROPDOWN)

@@ -19,17 +19,18 @@ def render() -> None:
global FRAME_COLORIZER_SIZE_DROPDOWN
global FRAME_COLORIZER_BLEND_SLIDER

has_frame_colorizer = 'frame_colorizer' in state_manager.get_item('processors')
FRAME_COLORIZER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_model_dropdown'),
choices = processors_choices.frame_colorizer_models,
value = state_manager.get_item('frame_colorizer_model'),
visible = 'frame_colorizer' in state_manager.get_item('processors')
visible = has_frame_colorizer
)
FRAME_COLORIZER_SIZE_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_colorizer_size_dropdown'),
choices = processors_choices.frame_colorizer_sizes,
value = state_manager.get_item('frame_colorizer_size'),
visible = 'frame_colorizer' in state_manager.get_item('processors')
visible = has_frame_colorizer
)
FRAME_COLORIZER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_colorizer_blend_slider'),
@@ -37,7 +38,7 @@ def render() -> None:
step = calc_int_step(processors_choices.frame_colorizer_blend_range),
minimum = processors_choices.frame_colorizer_blend_range[0],
maximum = processors_choices.frame_colorizer_blend_range[-1],
visible = 'frame_colorizer' in state_manager.get_item('processors')
visible = has_frame_colorizer
)
register_ui_component('frame_colorizer_model_dropdown', FRAME_COLORIZER_MODEL_DROPDOWN)
register_ui_component('frame_colorizer_size_dropdown', FRAME_COLORIZER_SIZE_DROPDOWN)

@@ -17,11 +17,12 @@ def render() -> None:
global FRAME_ENHANCER_MODEL_DROPDOWN
global FRAME_ENHANCER_BLEND_SLIDER

has_frame_enhancer = 'frame_enhancer' in state_manager.get_item('processors')
FRAME_ENHANCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.frame_enhancer_model_dropdown'),
choices = processors_choices.frame_enhancer_models,
value = state_manager.get_item('frame_enhancer_model'),
visible = 'frame_enhancer' in state_manager.get_item('processors')
visible = has_frame_enhancer
)
FRAME_ENHANCER_BLEND_SLIDER = gradio.Slider(
label = wording.get('uis.frame_enhancer_blend_slider'),
@@ -29,7 +30,7 @@ def render() -> None:
step = calc_int_step(processors_choices.frame_enhancer_blend_range),
minimum = processors_choices.frame_enhancer_blend_range[0],
maximum = processors_choices.frame_enhancer_blend_range[-1],
visible = 'frame_enhancer' in state_manager.get_item('processors')
visible = has_frame_enhancer
)
register_ui_component('frame_enhancer_model_dropdown', FRAME_ENHANCER_MODEL_DROPDOWN)
register_ui_component('frame_enhancer_blend_slider', FRAME_ENHANCER_BLEND_SLIDER)

@@ -14,11 +14,12 @@ LIP_SYNCER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
def render() -> None:
global LIP_SYNCER_MODEL_DROPDOWN

has_lip_syncer = 'lip_syncer' in state_manager.get_item('processors')
LIP_SYNCER_MODEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.lip_syncer_model_dropdown'),
choices = processors_choices.lip_syncer_models,
value = state_manager.get_item('lip_syncer_model'),
visible = 'lip_syncer' in state_manager.get_item('processors')
visible = has_lip_syncer
)
register_ui_component('lip_syncer_model_dropdown', LIP_SYNCER_MODEL_DROPDOWN)


@@ -11,6 +11,7 @@ from facefusion.common_helper import get_first
from facefusion.content_analyser import analyse_frame
from facefusion.core import conditional_append_reference_faces
from facefusion.face_analyser import get_average_face, get_many_faces
from facefusion.face_selector import sort_faces_by_order
from facefusion.face_store import clear_reference_faces, clear_static_faces, get_reference_faces
from facefusion.filesystem import filter_audio_paths, is_image, is_video
from facefusion.processors.core import get_processors_modules
@@ -74,7 +75,7 @@ def render() -> None:

def listen() -> None:
PREVIEW_FRAME_SLIDER.release(update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE, show_progress = 'hidden')
PREVIEW_FRAME_SLIDER.change(slide_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE, show_progress = 'hidden')
PREVIEW_FRAME_SLIDER.change(slide_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE, show_progress = 'hidden', trigger_mode = 'once')

reference_face_position_gallery = get_ui_component('reference_face_position_gallery')
if reference_face_position_gallery:
@@ -110,6 +111,7 @@ def listen() -> None:
for ui_component in get_ui_components(
[
'age_modifier_direction_slider',
'deep_swapper_morph_slider',
'expression_restorer_factor_slider',
'face_editor_eyebrow_direction_slider',
'face_editor_eye_gaze_horizontal_slider',
@@ -126,6 +128,7 @@ def listen() -> None:
'face_editor_head_yaw_slider',
'face_editor_head_roll_slider',
'face_enhancer_blend_slider',
'face_enhancer_weight_slider',
'frame_colorizer_blend_slider',
'frame_enhancer_blend_slider',
'reference_face_distance_slider',
@@ -142,6 +145,7 @@ def listen() -> None:
for ui_component in get_ui_components(
[
'age_modifier_model_dropdown',
'deep_swapper_model_dropdown',
'expression_restorer_model_dropdown',
'processors_checkbox_group',
'face_editor_model_dropdown',
@@ -158,7 +162,9 @@ def listen() -> None:
'face_detector_model_dropdown',
'face_detector_size_dropdown',
'face_detector_angles_checkbox_group',
'face_landmarker_model_dropdown'
'face_landmarker_model_dropdown',
'face_occluder_model_dropdown',
'face_parser_model_dropdown'
]):
ui_component.change(clear_and_update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)

@@ -190,7 +196,13 @@ def update_preview_image(frame_number : int = 0) -> gradio.Image:
conditional_append_reference_faces()
reference_faces = get_reference_faces() if 'reference' in state_manager.get_item('face_selector_mode') else None
source_frames = read_static_images(state_manager.get_item('source_paths'))
source_faces = get_many_faces(source_frames)
source_faces = []

for source_frame in source_frames:
temp_faces = get_many_faces([ source_frame ])
temp_faces = sort_faces_by_order(temp_faces, 'large-small')
if temp_faces:
source_faces.append(get_first(temp_faces))
source_face = get_average_face(source_faces)
source_audio_path = get_first(filter_audio_paths(state_manager.get_item('source_paths')))
source_audio_frame = create_empty_audio_frame()

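The preview diff above replaces one bulk `get_many_faces(source_frames)` call with a per-frame loop: sort each frame's detections large-to-small, keep only the first, and skip frames with no detection, before averaging. A standalone sketch of that selection step, with detected faces stood in by simple `(name, area)` tuples (the names and the area-based sort key are illustrative; the real code uses `sort_faces_by_order(temp_faces, 'large-small')`):

```python
from typing import List, Tuple

Face = Tuple[str, float] # (name, area) stand-in for a detected face


def pick_primary_faces(frames_faces : List[List[Face]]) -> List[Face]:
	# per frame, order faces large-to-small and keep only the first;
	# frames without a detection contribute nothing to the average
	primary_faces = []

	for faces in frames_faces:
		ordered_faces = sorted(faces, key = lambda face : face[1], reverse = True)
		if ordered_faces:
			primary_faces.append(ordered_faces[0])
	return primary_faces
```

Picking one face per source frame keeps a background face in a crowded source image from polluting the averaged source embedding.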
@@ -4,7 +4,7 @@ import gradio

from facefusion import state_manager, wording
from facefusion.filesystem import list_directory
from facefusion.processors.core import clear_processors_modules, get_processors_modules
from facefusion.processors.core import get_processors_modules
from facefusion.uis.core import register_ui_component

PROCESSORS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -26,15 +26,18 @@ def listen() -> None:


def update_processors(processors : List[str]) -> gradio.CheckboxGroup:
clear_processors_modules(state_manager.get_item('processors'))
state_manager.set_item('processors', processors)

for processor_module in get_processors_modules(state_manager.get_item('processors')):
if hasattr(processor_module, 'clear_inference_pool'):
processor_module.clear_inference_pool()

for processor_module in get_processors_modules(processors):
if not processor_module.pre_check():
return gradio.CheckboxGroup()

state_manager.set_item('processors', processors)
return gradio.CheckboxGroup(value = state_manager.get_item('processors'), choices = sort_processors(state_manager.get_item('processors')))


def sort_processors(processors : List[str]) -> List[str]:
available_processors = list_directory('facefusion/processors/modules')
available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
return sorted(available_processors, key = lambda processor : processors.index(processor) if processor in processors else len(processors))

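The `sort_processors` change above orders all available processors by their position in the user's current selection, pushing unselected processors to the end. Because `sorted` is stable and every unselected item shares the same key (`len(processors)`), the unselected items keep their original relative order. A minimal standalone sketch of that sort key (the function name is illustrative, not FaceFusion API):

```python
from typing import List


def sort_by_selection(available : List[str], selected : List[str]) -> List[str]:
	# selected items keep their selection order via index();
	# everything else shares the key len(selected) and sorts after them
	return sorted(available, key = lambda item : selected.index(item) if item in selected else len(selected))
```

For example, `sort_by_selection([ 'face_enhancer', 'face_swapper', 'lip_syncer' ], [ 'face_swapper' ])` puts `face_swapper` first and leaves the two unselected processors in their original order behind it.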
@@ -7,8 +7,8 @@ from typing import Optional
import gradio
from tqdm import tqdm

import facefusion.choices
from facefusion import logger, state_manager, wording
from facefusion.choices import log_level_set
from facefusion.typing import LogLevel

LOG_LEVEL_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -24,7 +24,7 @@ def render() -> None:

LOG_LEVEL_DROPDOWN = gradio.Dropdown(
label = wording.get('uis.log_level_dropdown'),
choices = log_level_set.keys(),
choices = facefusion.choices.log_levels,
value = state_manager.get_item('log_level')
)
TERMINAL_TEXTBOX = gradio.Textbox(

@@ -2,7 +2,7 @@ import os
import subprocess
from collections import deque
from concurrent.futures import ThreadPoolExecutor
from typing import Deque, Generator, Optional
from typing import Deque, Generator, List, Optional

import cv2
import gradio
@@ -10,7 +10,7 @@ from tqdm import tqdm

from facefusion import logger, state_manager, wording
from facefusion.audio import create_empty_audio_frame
from facefusion.common_helper import is_windows
from facefusion.common_helper import get_first, is_windows
from facefusion.content_analyser import analyse_stream
from facefusion.face_analyser import get_average_face, get_many_faces
from facefusion.ffmpeg import open_ffmpeg
@@ -27,14 +27,17 @@ WEBCAM_START_BUTTON : Optional[gradio.Button] = None
WEBCAM_STOP_BUTTON : Optional[gradio.Button] = None


def get_webcam_capture() -> Optional[cv2.VideoCapture]:
def get_webcam_capture(webcam_device_id : int) -> Optional[cv2.VideoCapture]:
global WEBCAM_CAPTURE

if WEBCAM_CAPTURE is None:
cv2.setLogLevel(0)
if is_windows():
webcam_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
webcam_capture = cv2.VideoCapture(webcam_device_id, cv2.CAP_DSHOW)
else:
webcam_capture = cv2.VideoCapture(0)
webcam_capture = cv2.VideoCapture(webcam_device_id)
cv2.setLogLevel(3)

if webcam_capture and webcam_capture.isOpened():
WEBCAM_CAPTURE = webcam_capture
return WEBCAM_CAPTURE
@@ -43,7 +46,7 @@ def get_webcam_capture() -> Optional[cv2.VideoCapture]:
def clear_webcam_capture() -> None:
global WEBCAM_CAPTURE

if WEBCAM_CAPTURE:
if WEBCAM_CAPTURE and WEBCAM_CAPTURE.isOpened():
WEBCAM_CAPTURE.release()
WEBCAM_CAPTURE = None

@@ -68,32 +71,42 @@ def render() -> None:


def listen() -> None:
webcam_device_id_dropdown = get_ui_component('webcam_device_id_dropdown')
webcam_mode_radio = get_ui_component('webcam_mode_radio')
webcam_resolution_dropdown = get_ui_component('webcam_resolution_dropdown')
webcam_fps_slider = get_ui_component('webcam_fps_slider')
source_image = get_ui_component('source_image')

if webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)
if webcam_device_id_dropdown and webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_device_id_dropdown, webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
WEBCAM_STOP_BUTTON.click(stop, cancels = start_event, outputs = WEBCAM_IMAGE)

if source_image:
source_image.change(stop, cancels = start_event, outputs = WEBCAM_IMAGE)


def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
def start(webcam_device_id : int, webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
state_manager.set_item('face_selector_mode', 'one')
source_image_paths = filter_image_paths(state_manager.get_item('source_paths'))
source_frames = read_static_images(source_image_paths)
source_faces = get_many_faces(source_frames)
source_face = get_average_face(source_faces)
stream = None
webcam_capture = None

if webcam_mode in [ 'udp', 'v4l2' ]:
stream = open_stream(webcam_mode, webcam_resolution, webcam_fps) #type:ignore[arg-type]
webcam_width, webcam_height = unpack_resolution(webcam_resolution)
webcam_capture = get_webcam_capture()

if isinstance(webcam_device_id, int):
webcam_capture = get_webcam_capture(webcam_device_id)

if webcam_capture and webcam_capture.isOpened():
webcam_capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) #type:ignore[attr-defined]
webcam_capture.set(cv2.CAP_PROP_FRAME_WIDTH, webcam_width)
webcam_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, webcam_height)
webcam_capture.set(cv2.CAP_PROP_FPS, webcam_fps)

for capture_frame in multi_process_capture(source_face, webcam_capture, webcam_fps):
if webcam_mode == 'inline':
yield normalize_frame_color(capture_frame)
@@ -107,19 +120,15 @@ def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -

def multi_process_capture(source_face : Face, webcam_capture : cv2.VideoCapture, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
deque_capture_frames: Deque[VisionFrame] = deque()
with tqdm(desc = wording.get('processing'), unit = 'frame', ascii = ' =', disable = state_manager.get_item('log_level') in [ 'warn', 'error' ]) as progress:
progress.set_postfix(
{
'execution_providers': state_manager.get_item('execution_providers'),
'execution_thread_count': state_manager.get_item('execution_thread_count')
})

with tqdm(desc = wording.get('streaming'), unit = 'frame', disable = state_manager.get_item('log_level') in [ 'warn', 'error' ]) as progress:
with ThreadPoolExecutor(max_workers = state_manager.get_item('execution_thread_count')) as executor:
futures = []

while webcam_capture and webcam_capture.isOpened():
_, capture_frame = webcam_capture.read()
if analyse_stream(capture_frame, webcam_fps):
return
yield None
future = executor.submit(process_stream_frame, source_face, capture_frame)
futures.append(future)

@@ -140,6 +149,7 @@ def stop() -> gradio.Image:

def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame) -> VisionFrame:
source_audio_frame = create_empty_audio_frame()

for processor_module in get_processors_modules(state_manager.get_item('processors')):
logger.disable()
if processor_module.pre_process('stream'):
@@ -155,13 +165,27 @@ def process_stream_frame(source_face : Face, target_vision_frame : VisionFrame)

def open_stream(stream_mode : StreamMode, stream_resolution : str, stream_fps : Fps) -> subprocess.Popen[bytes]:
commands = [ '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-s', stream_resolution, '-r', str(stream_fps), '-i', '-']

if stream_mode == 'udp':
commands.extend([ '-b:v', '2000k', '-f', 'mpegts', 'udp://localhost:27000?pkt_size=1316' ])
if stream_mode == 'v4l2':
try:
device_name = os.listdir('/sys/devices/virtual/video4linux')[0]
device_name = get_first(os.listdir('/sys/devices/virtual/video4linux'))
if device_name:
commands.extend([ '-f', 'v4l2', '/dev/' + device_name ])
except FileNotFoundError:
logger.error(wording.get('stream_not_loaded').format(stream_mode = stream_mode), __name__)
return open_ffmpeg(commands)


def get_available_webcam_ids(webcam_id_start : int, webcam_id_end : int) -> List[int]:
available_webcam_ids = []

for index in range(webcam_id_start, webcam_id_end):
webcam_capture = get_webcam_capture(index)

if webcam_capture and webcam_capture.isOpened():
available_webcam_ids.append(index)
clear_webcam_capture()

return available_webcam_ids

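The new `get_available_webcam_ids` above discovers devices by brute force: try to open each index in a range, record the ones that answer, and release the capture before moving on. A standalone sketch of the same probe loop with the device opener injected as a callable, so it runs without camera hardware or OpenCV (the function name and `try_open` parameter are illustrative; in the real code the opener is `cv2.VideoCapture(index)` plus an `isOpened()` check):

```python
from typing import Callable, List


def probe_device_ids(id_start : int, id_end : int, try_open : Callable[[int], bool]) -> List[int]:
	# try_open(index) attempts the device and reports success;
	# with OpenCV this would wrap cv2.VideoCapture(index).isOpened()
	# followed by a release() of the capture
	return [ index for index in range(id_start, id_end) if try_open(index) ]
```

Releasing each probe capture matters in the real code: `get_webcam_capture` caches the capture in a module global, so `clear_webcam_capture()` must run between probes or every index after the first hit would reuse the cached device.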
@@ -3,19 +3,29 @@ from typing import Optional
import gradio

from facefusion import wording
from facefusion.common_helper import get_first
from facefusion.uis import choices as uis_choices
from facefusion.uis.components.webcam import get_available_webcam_ids
from facefusion.uis.core import register_ui_component

WEBCAM_DEVICE_ID_DROPDOWN : Optional[gradio.Dropdown] = None
WEBCAM_MODE_RADIO : Optional[gradio.Radio] = None
WEBCAM_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
WEBCAM_FPS_SLIDER : Optional[gradio.Slider] = None


def render() -> None:
global WEBCAM_DEVICE_ID_DROPDOWN
global WEBCAM_MODE_RADIO
global WEBCAM_RESOLUTION_DROPDOWN
global WEBCAM_FPS_SLIDER

available_webcam_ids = get_available_webcam_ids(0, 10) or [ 'none' ] #type:ignore[list-item]
WEBCAM_DEVICE_ID_DROPDOWN = gradio.Dropdown(
value = get_first(available_webcam_ids),
label = wording.get('uis.webcam_device_id_dropdown'),
choices = available_webcam_ids
)
WEBCAM_MODE_RADIO = gradio.Radio(
label = wording.get('uis.webcam_mode_radio'),
choices = uis_choices.webcam_modes,
@@ -33,6 +43,7 @@ def render() -> None:
minimum = 1,
maximum = 60
)
register_ui_component('webcam_device_id_dropdown', WEBCAM_DEVICE_ID_DROPDOWN)
register_ui_component('webcam_mode_radio', WEBCAM_MODE_RADIO)
register_ui_component('webcam_resolution_dropdown', WEBCAM_RESOLUTION_DROPDOWN)
register_ui_component('webcam_fps_slider', WEBCAM_FPS_SLIDER)
