3.0.0 (#748)
* Cleanup after age modifier PR
* Cleanup after age modifier PR
* Use OpenVino 2024.2.0 for installer
* Prepare 3.0.0 for installer
* Fix benchmark suite, Introduce sync_item() for state manager
* Fix lint
* Render slide preview also in lower res
* Lower thread and queue count to avoid false usage
* Fix spacing
* Feat/jobs UI (#627)
* Jobs UI part1
* Change naming
* Jobs UI part2
* Jobs UI part3
* Jobs UI part4
* Jobs UI part4
* Jobs UI part5
* Jobs UI part6
* Jobs UI part7
* Jobs UI part8
* Jobs UI part9
* Jobs UI part10
* Jobs UI part11
* Jobs UI part12
* Fix rebase
* Jobs UI part13
* Jobs UI part14
* Jobs UI part15
* changes (#626)
* Remove useless ui registration
* Remove useless ui registration
* move job_list.py replace [0] with get_first()
* optimize imports
* fix date None problem add test job list
* Jobs UI part16
* Jobs UI part17
* Jobs UI part18
* Jobs UI part19
* Jobs UI part20
* Jobs UI part21
* Jobs UI part22
* move job_list_options
* Add label to job status checkbox group
* changes
* changes
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Update some dependencies
* UI helper to convert 'none'
* validate job (#628)
* changes
* changes
* add test
* changes
* changes
* Minor adjustments
* Replace is_json with is_file
* Handle empty and invalid json in job_list
* Handle empty and invalid json in job_list
* Handle empty and invalid json in job_list
* Work on the job manager UI
* Cosmetic changes on common helper
* Just make it work for now
* Just make it work for now
* Just make it work for now
* Streamline the step index lookups
* Hide footer
* Simplify instant runner
* Simplify instant runner UI and job manager UI
* Fix empty step choices
* Fix empty step choices
* Fix none values in UI
* Rework on benchmark (add warmup) and job list
* Improve ValueAndUnit
* Add step 1 of x output
* Cosmetic changes on the UI
* Fix invalid job file names
* Update preview
* Introducing has_step() and sorting out insert behaviour
* Introducing has_step() and sorting out insert behaviour
* Add [ none ] to some job id dropdowns
* Make updated dropdown values kinda perfect
* Make updated dropdown values kinda perfect
* Fix testing
* Minor improvement on UI
* Fix false config lookup
* Remove TensorRT as our models are not made for it
* Feat/cli commands second try rev2 (#640)
* Refactor CLI to commands
* Refactor CLI to commands part2
* Refactor CLI to commands part3
* Refactor CLI to commands part4
* Rename everything to facefusion.py
* Refactor CLI to commands part5
* Refactor CLI to commands part6
* Adjust testing
* Fix lint
* Fix lint
* Fix lint
* Refactor CLI to commands part7
* Extend State typing
* Fix false config lookup, adjust logical orders
* Move away from passing program part1
* Move away from passing program part2
* Move away from passing program part3
* Fix lint
* Move away from passing program part4
* ui-args update
* ui-args update
* ui-args update
* temporary type fix
* Move away from passing program part5
* remove unused
* creates args.py
* Move away from passing program part6
* Move away from passing program part7
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Minor optimizations
* Update commands in README
* Fix job-retry command
* Fix multi runs via UI
* add more job keys
* Cleanup codebase
* One method to create inference session (#641)
* One method to create inference session
* Remove warnings, as there are none
* Remember job id during processing
* Fix face masker config block
* Change wording
* Prevent age modifier from using CoreML
* add expression restorer (#642)
* add expression restorer
* fix import
* fix lint
* changes
* changes
* changes
* Host the final model for expression restorer
* Insert step on the given index
* UI workover (#644)
* UI workover part1
* Introduce ComponentOptions
* Only set Media components to None when visibility changes
* Clear static faces and reference faces between step processing
* Minor changes
* Minor changes
* Fix testing
* Enable test_sanitize_path_for_windows (#646)
* Dynamic download during job processing (#647)
* Fix face masker UI
* Rename run-headless to headless-run
* Feat/split frame processor UI (#649)
* Split frame processor UI
* Split frame processor UI part3, Refactor get_model_initializer
* Split frame processor UI part4
* Feat/rename frame processors (#651)
* Rename frame processors
* Rename frame processors part2
* Fix imports
  Conflicts: facefusion/uis/layouts/benchmark.py facefusion/uis/layouts/default.py
* Fix imports
* Cosmetic changes
* Fix multi threading for ROCm
* Change temp frames pattern
* Adjust terminal help
* remove expression restorer (#653)
* Expression restorer as processor (#655)
* add expression restorer
* changes
* Cleanup code
* Add TensorRT support back
* Add TensorRT support back
* Add TensorRT support back
* changes (#656)
* Change minor wording
* Fix face enhancer slider
* Add more typing
* Fix expression-restorer when using trim (#659)
* changes
* changes
* Rework/model and inference pool part2 (#660)
* Rework on model and inference pool
* Introduce inference sources and pools part1
* Introduce inference sources and pools part2
* Introduce inference sources and pools part3
* Introduce inference sources and pools part4
* Introduce inference sources and pools part5
* Introduce inference sources and pools part6
* Introduce inference sources and pools part6
* Introduce inference sources and pools part6
* Introduce inference sources and pools part7
* Introduce inference sources and pools part7
* Introduce inference sources and pools part8
* Introduce inference sources and pools part9
* Introduce inference sources and pools part10
* Introduce inference sources and pools part11
* Introduce inference sources and pools part11
* Introduce inference sources and pools part11
* Introduce inference sources and pools part12
* Reorganize the face masker UI
* Fix trim in UI
* Feat/hashed sources (#668)
* Introduce source helper
* Remove post_check() and just use process_manager
* Remove post_check() part2
* Add hash based downloads
* Add hash based downloads part2
* Add hash based downloads part3
* Add hash based downloads part4
* Add hash based downloads part5
* Add hash based downloads part6
* Add hash based downloads part7
* Add hash based downloads part7
* Add hash based downloads part8
* Remove print
* Prepare 3.0.0 release
* Fix UI
* Release the check when really done
* Update inputs for live portrait
* Update to 3.0.0 releases, extend download postfix
* Move files to the right place
* Logging for the hash and source validation
* Changing logic to handle corrupt sources
* Fix typo
* Use names over get_inputs(), Remove set_options() call
* Age modifier now works for CoreML too
* Update age_modifier.py
* Add video encoder h264_videotoolbox and hevc_videotoolbox
* Face editor add eye gaze & remove open factor sliders (#670)
* changes
* add eye gaze
* changes
* cleanup
* add eyebrow control
* changes
* changes
* Feat/terminal UI (#671)
* Introduce terminal to the UI
* Introduce terminal to the UI part2
* Introduce terminal to the UI part2
* Introduce terminal to the UI part2
* Calc range step to avoid weird values
* Use Sequence for ranges
* Use Sequence for ranges
* changes (#673)
* Use Sequence for ranges
* Finalize terminal UI
* Finalize terminal UI
* Webcam cosmetics, Fix normalize fps to accept int
* Cosmetic changes
* Finalize terminal UI
* Rename leftover typings
* Fix wording
* Fix rounding in metavar
* Fix rounding in metavar
* Rename to face classifier
* Face editor lip moves (#677)
* changes
* changes
* changes
* Fix rounding in metavar
* Rename to face classifier
* changes
* changes
* update naming
  Co-authored-by: henryruhs <info@henryruhs.com>
* Fix wording
* Feat/many landmarker + face analyser breakdown (#678)
* Basic multi landmarker integration
* Simplify some method names
* Break into face_detector and face_landmarker
* Fix cosmetics
* Fix testing
* Break into face_attributor and face_recognizer
* Clear them all
* Clear them all
* Rename to face classifier
* Rename to face classifier
* Fix testing
* Fix stuff
* Add face landmarker model to UI
* Add face landmarker model to UI part2
* Split the config
* Split the UI
* Improvement from code review
* Improvement from code review
* Validate args also for sub parsers
* Remove clear of processors in process step
* Allow finder control for the face editor
* Fix lint
* Improve testing performance
* Remove unused file, Clear processors from the UI before job runs
* Update the installer
* Uniform set handler for swapper and detector in the UI
* Fix example urls
* Feat/inference manager (#684)
* Introduce inference manager
* Migrate all to inference manager
* clean ini
* Introduce app context based inference pools
* Fix lint
* Fix typing
* Adjust layout
* Less border radius
* Rename app context names
* Fix/live portrait directml (#691)
* changes (#690)
* Adjust naming
* Use our assets release
* Adjust naming
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Add caches to gitignore
* Update dependencies and drop CUDA 11.8 support (#693)
* Update dependencies and drop CUDA 11.8 support
* Play safe and keep numpy 1.x.x
* Improve TensorRT optimization
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* changes
* Reuse inference sessions (#696)
* Fix force-download command
* Refactor processors to forward() (#698)
* Install tensorrt when selecting cuda
* Minor changes
* Use latest numpy
* Fix limit system memory
* Implement forward() for every inference (#699)
* Implement forward() for every inference
* Implement forward() for every inference
* Implement forward() for every inference
* Implement forward() for every inference
* changes
* changes
* changes
* changes
* Feat/fairface (#710)
* Replace gender_age model with fair face (#709)
* changes
* changes
* changes
* age dropdown to range-slider
* Cleanup code
* Cleanup code
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Extend installer to set library paths for cuda and tensorrt (#707)
* Extend installer to set library paths for cuda and tensorrt
* Add refresh of conda env
* Remove invalid commands
* Set the conda env according to operating system
* Update for ROCm 6.2
* fix installer
* Update installer.py
* Add missing face selector keys
* Try to keep original LD_LIBRARY_PATH
* windows support installer
* Final touch to the installer
* Remove spaces
* Simplify collect_model_downloads()
* Fix force download for once and forever
* Housekeeping (#715)
* changes
* changes
* changes
* Fix performance part1
* Fix mixed states (#689)
* Fix mixed states
* Add missing sync for job args
* Move UnionStateXXX to base typing
* Undo
* Remove UnionStateXXX
* Fix app context performance lookup (#717)
* Restore performance for inswapper
* Move upper() to the logger
* Undo debugging
* Move TensorRT installation to docs
* Sort out log level typing, Add log level UI dropdown (#719)
* Fix inference pool part1
* Validate conda library paths existence
* Default face selector order to large-small
* Fix inference pool context according to execution provider (#720)
* Fix app context under Windows
* CUDA and TensorRT update for the installer
* Remove concept of static processor modules
* Revert false commit
* Change event order makes a difference
* Fix multi model context in inference pool (#721)
* Fix multi model context in inference pool
* Fix multi model context in inference pool part2
* Use latest gradio to avoid fastapi bug
* Rework on the Windows Installer
* Use embedding converter (#724)
* changes (#723)
* Upload models to official assets repo
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Rework on the Windows Installer part2
* Resolve subprocess calls (#726)
* Experiment
* Resolve subprocess calls to cover edge cases like broken PATH
* Adjust wording
* Simplify code
* Rework on the Windows Installer part3
* Rework on the Windows Installer part4
* Numpy fix for older onnxruntime
* changes (#729)
* Add space
* Add MacOS installer
* Use favicon
* Fix disabled logger
* Layout polishing (#731)
* Update dependencies, Adjust many face landmarker logic
* Cosmetics changes
* Should be button
* Introduce randomized action button
* Fix update of lip syncer and expression restorer
* Stop sharing inference session this prevents flushing VRAM
* Fix test
* Fix urls
* Prepare release
* Vanish inquirer
* Sticky preview does not work on portrait images
* Sticky preview only for landscape images and videos
* remove gradio tunnel env
* Change wording and deeplinks
* increase peppa landmark score offset
* Change wording
* Graceful exit install.py
* Just adding a required
* Cannot use the exit_helper
* Rename our model
* Change color of face-landmark-68/5
* Limit liveportrait (#739)
* changes
* changes
* changes
* Cleanup
* Cleanup
  Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* limit expression restorer
* change expression restorer 0-100 range
* Use 256x icon
* changes
* changes
* changes
* changes
* Limit face editor rotation (#745)
* changes (#743)
* Finish euler methods
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
* Use different coveralls badge
* Move about wording
* Shorten scope in the logger
* changes
* changes
* Shorten scope in the logger
* fix typo
* Simplify the arcface converter names
* Update preview
  Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
  Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
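Several commits above ("Add hash based downloads", "Logging for the hash and source validation", "Changing logic to handle corrupt sources") describe hash-validated downloads whose messages ('validating_hash_succeed', 'deleting_corrupt_source') appear in the wording diff below. A minimal sketch of how such validation could work; the function names `compute_hash` and `validate_source` and the choice of SHA-256 are assumptions for illustration, not the project's actual implementation:

```python
import hashlib
from pathlib import Path

def compute_hash(file_path : str) -> str:
	# stream the file in chunks so large model files never load into memory at once
	hasher = hashlib.sha256()
	with open(file_path, 'rb') as file:
		for chunk in iter(lambda : file.read(1 << 20), b''):
			hasher.update(chunk)
	return hasher.hexdigest()

def validate_source(source_path : str, expected_hash : str) -> bool:
	# a corrupt source is deleted so the next run triggers a fresh download
	if not Path(source_path).is_file():
		return False
	if compute_hash(source_path) == expected_hash:
		return True
	Path(source_path).unlink()
	return False
```

The key property is that a failed validation leaves no partial file behind, which matches the "Deleting corrupt source" wording added in this release.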
@@ -4,6 +4,7 @@ WORDING : Dict[str, Any] =\
{
'conda_not_activated': 'Conda is not activated',
'python_not_supported': 'Python version is not supported, upgrade to {version} or higher',
'curl_not_installed': 'CURL is not installed',
'ffmpeg_not_installed': 'FFMpeg is not installed',
'creating_temp': 'Creating temporary resources',
'extracting_frames': 'Extracting frames with a resolution of {resolution} and {fps} frames per second',
@@ -31,19 +32,54 @@ WORDING : Dict[str, Any] =\
'processing_image_failed': 'Processing to image failed',
'processing_video_succeed': 'Processing to video succeeded in {seconds} seconds',
'processing_video_failed': 'Processing to video failed',
'model_download_not_done': 'Download of the model is not done',
'model_file_not_present': 'File of the model is not present',
'select_image_source': 'Select an image for source path',
'select_audio_source': 'Select an audio for source path',
'select_video_target': 'Select a video for target path',
'select_image_or_video_target': 'Select an image or video for target path',
'select_file_or_directory_output': 'Select a file or directory for output path',
'choose_image_source': 'Choose an image for the source',
'choose_audio_source': 'Choose an audio for the source',
'choose_video_target': 'Choose a video for the target',
'choose_image_or_video_target': 'Choose an image or video for the target',
'specify_image_or_video_output': 'Specify the output image or video within a directory',
'match_target_and_output_extension': 'Match the target and output extension',
'no_source_face_detected': 'No source face detected',
'frame_processor_not_loaded': 'Frame processor {frame_processor} could not be loaded',
'frame_processor_not_implemented': 'Frame processor {frame_processor} not implemented correctly',
'processor_not_loaded': 'Processor {processor} could not be loaded',
'processor_not_implemented': 'Processor {processor} not implemented correctly',
'ui_layout_not_loaded': 'UI layout {ui_layout} could not be loaded',
'ui_layout_not_implemented': 'UI layout {ui_layout} not implemented correctly',
'stream_not_loaded': 'Stream {stream_mode} could not be loaded',
'job_created': 'Job {job_id} created',
'job_not_created': 'Job {job_id} not created',
'job_submitted': 'Job {job_id} submitted',
'job_not_submitted': 'Job {job_id} not submitted',
'job_all_submitted': 'Jobs submitted',
'job_all_not_submitted': 'Jobs not submitted',
'job_deleted': 'Job {job_id} deleted',
'job_not_deleted': 'Job {job_id} not deleted',
'job_all_deleted': 'Jobs deleted',
'job_all_not_deleted': 'Jobs not deleted',
'job_step_added': 'Step added to job {job_id}',
'job_step_not_added': 'Step not added to job {job_id}',
'job_remix_step_added': 'Step {step_index} remixed from job {job_id}',
'job_remix_step_not_added': 'Step {step_index} not remixed from job {job_id}',
'job_step_inserted': 'Step {step_index} inserted to job {job_id}',
'job_step_not_inserted': 'Step {step_index} not inserted to job {job_id}',
'job_step_removed': 'Step {step_index} removed from job {job_id}',
'job_step_not_removed': 'Step {step_index} not removed from job {job_id}',
'running_job': 'Running queued job {job_id}',
'running_jobs': 'Running all queued jobs',
'retrying_job': 'Retrying failed job {job_id}',
'retrying_jobs': 'Retrying all failed jobs',
'processing_job_succeed': 'Processing of job {job_id} succeeded',
'processing_jobs_succeed': 'Processing of all jobs succeeded',
'processing_job_failed': 'Processing of job {job_id} failed',
'processing_jobs_failed': 'Processing of all jobs failed',
'processing_step': 'Processing step {step_current} of {step_total}',
'validating_hash_succeed': 'Validating hash for {hash_file_name} succeeded',
'validating_hash_failed': 'Validating hash for {hash_file_name} failed',
'validating_source_succeed': 'Validating source for {source_file_name} succeeded',
'validating_source_failed': 'Validating source for {source_file_name} failed',
'deleting_corrupt_source': 'Deleting corrupt source for {source_file_name}',
'time_ago_now': 'just now',
'time_ago_minutes': '{minutes} minutes ago',
'time_ago_hours': '{hours} hours and {minutes} minutes ago',
'time_ago_days': '{days} days, {hours} hours and {minutes} minutes ago',
'point': '.',
'comma': ',',
'colon': ':',
@@ -52,36 +88,28 @@ WORDING : Dict[str, Any] =\
'help':
{
# installer
'install_dependency': 'select the variant of {dependency} to install',
'install_dependency': 'choose the variant of {dependency} to install',
'skip_conda': 'skip the conda environment check',
# general
'config': 'choose the config file to override defaults',
'source': 'choose single or multiple source images or audios',
'target': 'choose single target image or video',
'output': 'specify the output file or directory',
# misc
'force_download': 'force automate downloads and exit',
'skip_download': 'omit automate downloads and remote lookups',
'headless': 'run the program without a user interface',
'log_level': 'adjust the message severity displayed in the terminal',
# execution
'execution_device_id': 'specify the device used for processing',
'execution_providers': 'accelerate the model inference using different providers (choices: {choices}, ...)',
'execution_thread_count': 'specify the amount of parallel threads while processing',
'execution_queue_count': 'specify the amount of frames each thread is processing',
# memory
'video_memory_strategy': 'balance fast frame processing and low VRAM usage',
'system_memory_limit': 'limit the available RAM that can be used while processing',
'config_path': 'choose the config file to override defaults',
'source_paths': 'choose single or multiple source images or audios',
'target_path': 'choose single target image or video',
'output_path': 'specify the output image or video within a directory',
'jobs_path': 'specify the directory to store jobs',
# face analyser
'face_analyser_order': 'specify the order in which the face analyser detects faces',
'face_analyser_age': 'filter the detected faces based on their age',
'face_analyser_gender': 'filter the detected faces based on their gender',
'face_detector_model': 'choose the model responsible for detecting the face',
'face_detector_model': 'choose the model responsible for detecting the faces',
'face_detector_size': 'specify the size of the frame provided to the face detector',
'face_detector_angles': 'specify the angles to rotate the frame before detecting faces',
'face_detector_score': 'filter the detected faces based on the confidence score',
'face_landmarker_score': 'filter the detected landmarks based on the confidence score',
'face_landmarker_model': 'choose the model responsible for detecting the face landmarks',
'face_landmarker_score': 'filter the detected face landmarks based on the confidence score',
# face selector
'face_selector_mode': 'use reference based tracking or simple matching',
'face_selector_order': 'specify the order of the detected faces',
'face_selector_gender': 'filter the detected faces based on their gender',
'face_selector_age_start': 'filter the detected faces based on their age start',
'face_selector_age_end': 'filter the detected faces based on their age end',
'face_selector_race': 'filter the detected faces based on their race',
'reference_face_position': 'specify the position used to create the reference face',
'reference_face_distance': 'specify the desired similarity between the reference face and target face',
'reference_frame_number': 'specify the frame used to create the reference face',
@@ -98,18 +126,39 @@ WORDING : Dict[str, Any] =\
# output creation
'output_image_quality': 'specify the image quality which translates to the compression factor',
'output_image_resolution': 'specify the image output resolution based on the target image',
'output_video_encoder': 'specify the encoder used for the video compression',
'output_audio_encoder': 'specify the encoder used for the audio output',
'output_video_encoder': 'specify the encoder used for the video output',
'output_video_preset': 'balance fast video processing and video file size',
'output_video_quality': 'specify the video quality which translates to the compression factor',
'output_video_resolution': 'specify the video output resolution based on the target video',
'output_video_fps': 'specify the video output fps based on the target video',
'skip_audio': 'omit the audio from the target video',
# frame processors
'frame_processors': 'load a single or multiple frame processors. (choices: {choices}, ...)',
'face_debugger_items': 'load a single or multiple frame processors (choices: {choices})',
# processors
'processors': 'load a single or multiple processors. (choices: {choices}, ...)',
'age_modifier_model': 'choose the model responsible for aging the face',
'age_modifier_direction': 'specify the direction in which the age should be modified',
'expression_restorer_model': 'choose the model responsible for restoring the expression',
'expression_restorer_factor': 'restore factor of expression from target face',
'face_debugger_items': 'load a single or multiple processors (choices: {choices})',
'face_editor_model': 'choose the model responsible for editing the face',
'face_editor_eyebrow_direction': 'specify the eyebrow direction',
'face_editor_eye_gaze_horizontal': 'specify the horizontal eye gaze',
'face_editor_eye_gaze_vertical': 'specify the vertical eye gaze',
'face_editor_eye_open_ratio': 'specify the ratio of eye opening',
'face_editor_lip_open_ratio': 'specify the ratio of lip opening',
'face_editor_mouth_grim': 'specify the mouth grim amount',
'face_editor_mouth_pout': 'specify the mouth pout amount',
'face_editor_mouth_purse': 'specify the mouth purse amount',
'face_editor_mouth_smile': 'specify the mouth smile amount',
'face_editor_mouth_position_horizontal': 'specify the mouth position horizontal amount',
'face_editor_mouth_position_vertical': 'specify the mouth position vertical amount',
'face_editor_head_pitch': 'specify the head pitch amount',
'face_editor_head_yaw': 'specify the head yaw amount',
'face_editor_head_roll': 'specify the head roll amount',
'face_enhancer_model': 'choose the model responsible for enhancing the face',
'face_enhancer_blend': 'blend the enhanced into the previous face',
'face_swapper_model': 'choose the model responsible for swapping the face',
'face_swapper_pixel_boost': 'choose the pixel boost resolution for the face swapper',
'frame_colorizer_model': 'choose the model responsible for colorizing the frame',
'frame_colorizer_blend': 'blend the colorized into the previous frame',
'frame_colorizer_size': 'specify the size of the frame provided to the frame colorizer',
@@ -118,94 +167,144 @@ WORDING : Dict[str, Any] =\
|
||||
'lip_syncer_model': 'choose the model responsible for syncing the lips',
|
||||
# uis
|
||||
'open_browser': 'open the browser once the program is ready',
|
||||
'ui_layouts': 'launch a single or multiple UI layouts (choices: {choices}, ...)'
|
||||
'ui_layouts': 'launch a single or multiple UI layouts (choices: {choices}, ...)',
|
||||
'ui_workflow': 'choose the ui workflow',
|
||||
# execution
|
||||
'execution_device_id': 'specify the device used for processing',
|
||||
'execution_providers': 'accelerate the model inference using different providers (choices: {choices}, ...)',
|
||||
'execution_thread_count': 'specify the amount of parallel threads while processing',
|
||||
'execution_queue_count': 'specify the amount of frames each thread is processing',
|
||||
# memory
|
||||
'video_memory_strategy': 'balance fast processing and low VRAM usage',
|
||||
'system_memory_limit': 'limit the available RAM that can be used while processing',
|
||||
# misc
|
||||
'skip_download': 'omit downloads and remote lookups',
|
||||
'log_level': 'adjust the message severity displayed in the terminal',
|
||||
# run
|
||||
'run': 'run the program',
|
||||
'headless_run': 'run the program in headless mode',
|
||||
'force_download': 'force automate downloads and exit',
|
||||
# jobs
|
||||
'job_id': 'specify the job id',
|
||||
'step_index': 'specify the step index',
|
||||
# job manager
|
||||
'job_create': 'create a drafted job',
|
||||
'job_submit': 'submit a drafted job to become a queued job',
|
||||
'job_submit_all': 'submit all drafted jobs to become a queued jobs',
|
||||
'job_delete': 'delete a drafted, queued, failed or completed job',
|
||||
'job_delete_all': 'delete all drafted, queued, failed and completed jobs',
|
||||
'job_list': 'list jobs by status',
|
||||
'job_add_step': 'add a step to a drafted job',
|
||||
'job_remix_step': 'remix a previous step from a drafted job',
|
||||
'job_insert_step': 'insert a step to a drafted job',
|
||||
'job_remove_step': 'remove a step from a drafted job',
|
||||
# job runner
|
||||
'job_run': 'run a queued job',
|
||||
'job_run_all': 'run all queued jobs',
|
||||
'job_retry': 'retry a failed job',
|
||||
'job_retry_all': 'retry all failed jobs'
|
||||
},
|
||||
'about':
|
||||
{
|
||||
'become_a_member': 'become a member',
|
||||
'join_our_community': 'join our community',
|
||||
'read_the_documentation': 'read the documentation'
|
||||
},
|
||||
'uis':
|
||||
{
|
||||
# general
|
||||
'start_button': 'START',
|
||||
'stop_button': 'STOP',
|
||||
'clear_button': 'CLEAR',
|
||||
# about
|
||||
'donate_button': 'DONATE',
|
||||
# benchmark
|
||||
'benchmark_results_dataframe': 'BENCHMARK RESULTS',
|
||||
# benchmark options
|
||||
'benchmark_runs_checkbox_group': 'BENCHMARK RUNS',
|
||||
'age_modifier_direction_slider': 'AGE MODIFIER DIRECTION',
|
||||
'age_modifier_model_dropdown': 'AGE MODIFIER MODEL',
|
||||
'apply_button': 'APPLY',
|
||||
'benchmark_cycles_slider': 'BENCHMARK CYCLES',
|
||||
# common options
|
||||
'benchmark_runs_checkbox_group': 'BENCHMARK RUNS',
|
||||
'clear_button': 'CLEAR',
|
||||
'common_options_checkbox_group': 'OPTIONS',
|
||||
# execution
|
||||
'execution_providers_checkbox_group': 'EXECUTION PROVIDERS',
|
||||
# execution queue count
|
||||
'execution_queue_count_slider': 'EXECUTION QUEUE COUNT',
|
||||
# execution thread count
|
||||
'execution_thread_count_slider': 'EXECUTION THREAD COUNT',
|
||||
# face analyser
|
||||
'face_analyser_order_dropdown': 'FACE ANALYSER ORDER',
|
||||
'face_analyser_age_dropdown': 'FACE ANALYSER AGE',
|
||||
'face_analyser_gender_dropdown': 'FACE ANALYSER GENDER',
|
||||
'expression_restorer_factor_slider': 'EXPRESSION RESTORER FACTOR',
|
||||
'expression_restorer_model_dropdown': 'EXPRESSION RESTORER MODEL',
|
||||
'face_debugger_items_checkbox_group': 'FACE DEBUGGER ITEMS',
|
||||
'face_detector_angles_checkbox_group': 'FACE DETECTOR ANGLES',
|
||||
'face_detector_model_dropdown': 'FACE DETECTOR MODEL',
|
||||
'face_detector_size_dropdown': 'FACE DETECTOR SIZE',
|
||||
'face_detector_score_slider': 'FACE DETECTOR SCORE',
|
||||
'face_detector_size_dropdown': 'FACE DETECTOR SIZE',
|
||||
'face_editor_model_dropdown': 'FACE EDITOR MODEL',
|
||||
'face_editor_eye_gaze_horizontal_slider': 'FACE EDITOR EYE GAZE HORIZONTAL',
|
||||
'face_editor_eye_gaze_vertical_slider': 'FACE EDITOR EYE GAZE VERTICAL',
|
||||
'face_editor_eye_open_ratio_slider': 'FACE EDITOR EYE OPEN RATIO',
|
||||
'face_editor_eyebrow_direction_slider': 'FACE EDITOR EYEBROW DIRECTION',
|
||||
'face_editor_lip_open_ratio_slider': 'FACE EDITOR LIP OPEN RATIO',
|
||||
'face_editor_mouth_grim_slider': 'FACE EDITOR MOUTH GRIM',
|
||||
'face_editor_mouth_pout_slider': 'FACE EDITOR MOUTH POUT',
|
||||
'face_editor_mouth_purse_slider': 'FACE EDITOR MOUTH PURSE',
|
||||
'face_editor_mouth_smile_slider': 'FACE EDITOR MOUTH SMILE',
|
||||
'face_editor_mouth_position_horizontal_slider': 'FACE EDITOR MOUTH POSITION HORIZONTAL',
|
||||
'face_editor_mouth_position_vertical_slider': 'FACE EDITOR MOUTH POSITION VERTICAL',
|
||||
'face_editor_head_pitch_slider': 'FACE EDITOR HEAD PITCH',
|
||||
'face_editor_head_yaw_slider': 'FACE EDITOR HEAD YAW',
|
||||
'face_editor_head_roll_slider': 'FACE EDITOR HEAD ROLL',
|
||||
'face_enhancer_blend_slider': 'FACE ENHANCER BLEND',
|
||||
'face_enhancer_model_dropdown': 'FACE ENHANCER MODEL',
|
||||
'face_landmarker_model_dropdown': 'FACE LANDMARKER MODEL',
|
||||
'face_landmarker_score_slider': 'FACE LANDMARKER SCORE',
|
||||
# face masker
|
||||
'face_mask_types_checkbox_group': 'FACE MASK TYPES',
|
||||
'face_mask_blur_slider': 'FACE MASK BLUR',
|
||||
'face_mask_padding_top_slider': 'FACE MASK PADDING TOP',
|
||||
'face_mask_padding_right_slider': 'FACE MASK PADDING RIGHT',
|
||||
'face_mask_padding_bottom_slider': 'FACE MASK PADDING BOTTOM',
|
||||
'face_mask_padding_left_slider': 'FACE MASK PADDING LEFT',
|
||||
'face_mask_padding_right_slider': 'FACE MASK PADDING RIGHT',
|
||||
'face_mask_padding_top_slider': 'FACE MASK PADDING TOP',
|
||||
'face_mask_region_checkbox_group': 'FACE MASK REGIONS',
|
||||
# face selector
'face_selector_gender_dropdown': 'FACE SELECTOR GENDER',
'face_selector_race_dropdown': 'FACE SELECTOR RACE',
'face_selector_age_range_slider': 'FACE SELECTOR AGE',
'face_selector_mode_dropdown': 'FACE SELECTOR MODE',
'reference_face_gallery': 'REFERENCE FACE',
'reference_face_distance_slider': 'REFERENCE FACE DISTANCE',
# frame processors
'frame_processors_checkbox_group': 'FRAME PROCESSORS',
# frame processors options
'face_debugger_items_checkbox_group': 'FACE DEBUGGER ITEMS',
'face_selector_order_dropdown': 'FACE SELECTOR ORDER',
'face_swapper_model_dropdown': 'FACE SWAPPER MODEL',
'face_swapper_pixel_boost_dropdown': 'FACE SWAPPER PIXEL BOOST',
'frame_colorizer_model_dropdown': 'FRAME COLORIZER MODEL',
'frame_colorizer_blend_slider': 'FRAME COLORIZER BLEND',
'frame_colorizer_size_dropdown': 'FRAME COLORIZER SIZE',
'frame_enhancer_model_dropdown': 'FRAME ENHANCER MODEL',
'frame_enhancer_blend_slider': 'FRAME ENHANCER BLEND',
'job_list_status_checkbox_group': 'JOB STATUS',
'job_manager_job_action_dropdown': 'JOB ACTION',
'job_manager_job_id_dropdown': 'JOB ID',
'job_manager_step_index_dropdown': 'STEP INDEX',
'job_runner_job_action_dropdown': 'JOB ACTION',
'job_runner_job_id_dropdown': 'JOB ID',
'lip_syncer_model_dropdown': 'LIP SYNCER MODEL',
# memory
'video_memory_strategy_dropdown': 'VIDEO MEMORY STRATEGY',
'system_memory_limit_slider': 'SYSTEM MEMORY LIMIT',
# output
'log_level_dropdown': 'LOG LEVEL',
'output_audio_encoder_dropdown': 'OUTPUT AUDIO ENCODER',
'output_image_or_video': 'OUTPUT',
# output options
'output_path_textbox': 'OUTPUT PATH',
'output_image_quality_slider': 'OUTPUT IMAGE QUALITY',
'output_image_resolution_dropdown': 'OUTPUT IMAGE RESOLUTION',
'output_video_encoder_dropdown': 'OUTPUT VIDEO ENCODER',
'output_video_fps_slider': 'OUTPUT VIDEO FPS',
'output_video_preset_dropdown': 'OUTPUT VIDEO PRESET',
'output_video_quality_slider': 'OUTPUT VIDEO QUALITY',
'output_video_resolution_dropdown': 'OUTPUT VIDEO RESOLUTION',
# preview
'preview_image': 'PREVIEW',
'preview_frame_slider': 'PREVIEW FRAME',
'processors_checkbox_group': 'PROCESSORS',
'refresh_button': 'REFRESH',
# source
'source_file': 'SOURCE',
'start_button': 'START',
'stop_button': 'STOP',
# target
'target_file': 'TARGET',
# temp frame
'temp_frame_format_dropdown': 'TEMP FRAME FORMAT',
'terminal_textbox': 'TERMINAL',
# trim frame
'trim_frame_start_slider': 'TRIM FRAME START',
'trim_frame_end_slider': 'TRIM FRAME END',
'trim_frame_slider': 'TRIM FRAME',
'ui_workflow': 'UI WORKFLOW',
# webcam
'webcam_image': 'WEBCAM',
# webcam options
'webcam_mode_radio': 'WEBCAM MODE',
'webcam_resolution_dropdown': 'WEBCAM RESOLUTION',
'webcam_fps_slider': 'WEBCAM FPS'
}
}
@@ -213,8 +312,8 @@ WORDING : Dict[str, Any] =\
 def get(key : str) -> Optional[str]:
 	if '.' in key:
 		section, name = key.split('.')
-		if section in WORDING and name in WORDING[section]:
-			return WORDING[section][name]
+		if section in WORDING and name in WORDING.get(section):
+			return WORDING.get(section).get(name)
 	if key in WORDING:
-		return WORDING[key]
+		return WORDING.get(key)
 	return None
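
For context, the reworked `get()` helper resolves dotted keys against a section before falling back to a flat top-level lookup. A minimal standalone sketch, with `WORDING` abbreviated to a single assumed entry:

```python
from typing import Any, Dict, Optional

# Abbreviated stand-in for the full WORDING table (one assumed entry)
WORDING : Dict[str, Any] =\
{
	'uis':
	{
		'start_button': 'START'
	}
}


def get(key : str) -> Optional[str]:
	# dotted keys address an entry inside a section, e.g. 'uis.start_button'
	if '.' in key:
		section, name = key.split('.')
		if section in WORDING and name in WORDING.get(section):
			return WORDING.get(section).get(name)
	# flat keys address the top level
	if key in WORDING:
		return WORDING.get(key)
	return None


print(get('uis.start_button'))  # START
print(get('uis.missing_key'))   # None
```

Note that the tuple unpacking `section, name = key.split('.')` raises a ValueError for keys containing more than one dot, so callers are expected to pass at most one level of nesting.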