* Rename calcXXX to calculateXXX
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Use True for the flags
* Add migraphx support
* add face-swapper-weight
* add face-swapper-weight to facefusion.ini
* changes
* change choice
* Fix typing for xxxWeight
* Feat/log inference session (#906)
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Mark as NEXT
* Follow industry standard x1, x2, y1 and y2
* Follow industry standard x1, x2, y1 and y2
* Follow industry standard in terms of naming (#908)
* Follow industry standard in terms of naming
* Improve xxx_embedding naming
* Fix norm vs. norms
* Reduce timeout to 5
* Sort out voice_extractor once again
* changes
* Introduce many to the occlusion mask (#910)
* Introduce many to the occlusion mask
* Then we use minimum
* Add support for wmv
* Run platform tests before has_execution_provider (#911)
* Add support for wmv
* Introduce benchmark mode (#912)
* Honestly makes no difference to me
* Honestly makes no difference to me
* Fix wording
* Bring back YuNet (#922)
* Reintroduce YuNet without cv2 dependency
* Fix variable naming
* Avoid RGB to YUV colorshift using libx264rgb
* Avoid RGB to YUV colorshift using libx264rgb
* Make libx264 the default again
* Make libx264 the default again
* Fix types in ffmpeg builder
* Fix quality stuff in ffmpeg builder
* Fix quality stuff in ffmpeg builder
* Add libx264rgb to test
* Revamp Processors (#923)
* Introduce new concept of pure target frames
* Radical refactoring of process flow
* Introduce new concept of pure target frames
* Fix webcam
* Minor improvements
* Minor improvements
* Use deque for video processing
* Use deque for video processing
* Extend the video manager
* Polish deque
* Polish deque
* Deque is not even used
* Improve speed with multiple futures
* Fix temp frame mutation and
* Fix RAM usage
* Remove old types and manage method
* Remove execution_queue_count
* Use init_state for benchmarker to avoid issues
* add voice extractor option
* Change the order of voice extractor in code
* Use official download urls
* Use official download urls
* add gui
* fix preview
* Add remote updates for voice extractor
* fix crash on headless-run
* update test_job_helper.py
* Fix it for good
* Remove pointless method
* Fix types and unused imports
* Revamp reference (#925)
* Initial revamp of face references
* Initial revamp of face references
* Initial revamp of face references
* Terminate find_similar_faces
* Improve find mutant faces
* Improve find mutant faces
* Move sort where it belongs
* Forward reference vision frame
* Forward reference vision frame also in preview
* Fix reference selection
* Use static video frame
* Fix CI
* Remove reference type from frame processors
* Improve some naming
* Fix types and unused imports
* Fix find mutant faces
* Fix find mutant faces
* Fix imports
* Correct naming
* Correct naming
* simplify pad
* Improve webcam performance on highres
* Camera manager (#932)
* Introduce webcam manager
* Fix order
* Rename to camera manager, improve video manager
* Fix CI
* Remove optional
* Fix naming in webcam options
* Avoid using temp faces (#933)
* output video scale
* Fix imports
* output image scale
* upscale fix (not limiter)
* add unit test scale_resolution & remove unused methods
* fix and add test
* fix
* change pack_resolution
* fix tests
* Simplify output scale testing
* Fix benchmark UI
* Fix benchmark UI
* Update dependencies
* Introduce REAL multi gpu support using multi dimensional inference pool (#935)
* Introduce REAL multi gpu support using multi dimensional inference pool
* Remove the MULTI:GPU flag
* Restore "processing stop"
* Restore "processing stop"
* Remove old templates
* Go fill in with caching
* add expression restorer areas
* re-arrange
* rename method
* Fix stop for extract frames and merge video
* Replace arcface_converter models with latest crossface models
* Replace arcface_converter models with latest crossface models
* Move module logs to debug mode
* Refactor/streamer (#938)
* Introduce webcam manager
* Fix order
* Rename to camera manager, improve video manager
* Fix CI
* Fix naming in webcam options
* Move logic over to streamer
* Fix streamer, improve webcam experience
* Improve webcam experience
* Revert method
* Revert method
* Improve webcam again
* Use release on capture instead
* Only forward valid frames
* Fix resolution logging
* Add AVIF support
* Add AVIF support
* Limit avif to unix systems
* Drop avif
* Drop avif
* Drop avif
* Default to Documents in the UI if output path is not set
* Update wording.py (#939) — "succeed" is grammatically incorrect in the given context. To succeed is the infinitive form of the verb. Correct would be either "succeeded" or alternatively a form involving the noun "success".
* Fix more grammar issue
* Fix more grammar issue
* Sort out caching
* Move webcam choices back to UI
* Move preview options to own file (#940)
* Fix Migraphx execution provider
* Fix benchmark
* Reuse blend frame method
* Fix CI
* Fix CI
* Fix CI
* Hotfix missing check in face debugger, Enable logger for preview
* Fix reference selection (#942)
* Fix reference selection
* Fix reference selection
* Fix reference selection
* Fix reference selection
* Side by side preview (#941)
* Initial side by side preview
* More work on preview, remove UI only stuff from vision.py
* Improve more
* Use fit frame
* Add different fit methods for vision
* Improve preview part2
* Improve preview part3
* Improve preview part4
* Remove none as choice
* Remove useless methods
* Fix CI
* Fix naming
* use 1024 as preview resolution default
* Fix fit_cover_frame
* Uniform fit_xxx_frame methods
* Add back disabled logger
* Use ui choices alias
* Extract select face logic from processors (#943)
* Extract select face logic from processors to use it for face by face in preview
* Fix order
* Remove old code
* Merge methods
* Refactor face debugger (#944)
* Refactor huge method of face debugger
* Remove text metrics from face debugger
* Remove useless copy of temp frame
* Resort methods
* Fix spacing
* Remove old method
* Fix hard exit to work without signals
* Prevent upscaling for face-by-face
* Switch to version
* Improve exiting

---------

Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: Rafael Tappe Maestro <rafael@tappemaestro.com>
170 lines
9.3 KiB
Python
import subprocess

import pytest

from facefusion.download import conditional_download
from facefusion.vision import calculate_histogram_difference, count_trim_frame_total, count_video_frame_total, detect_image_resolution, detect_video_duration, detect_video_fps, detect_video_resolution, match_frame_color, normalize_resolution, pack_resolution, predict_video_frame_total, read_image, read_video_frame, restrict_image_resolution, restrict_trim_frame, restrict_video_fps, restrict_video_resolution, scale_resolution, unpack_resolution, write_image

from .helper import get_test_example_file, get_test_examples_directory, get_test_output_file, prepare_test_output_directory

@pytest.fixture(scope = 'module', autouse = True)
def before_all() -> None:
	conditional_download(get_test_examples_directory(),
	[
		'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/source.jpg',
		'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-240p.mp4',
		'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-1080p.mp4'
	])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vframes', '1', get_test_example_file('target-240p.jpg') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vframes', '1', get_test_example_file('目标-240p.webp') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-1080p.mp4'), '-vframes', '1', get_test_example_file('target-1080p.jpg') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vframes', '1', '-vf', 'hue=s=0', get_test_example_file('target-240p-0sat.jpg') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vframes', '1', '-vf', 'transpose=0', get_test_example_file('target-240p-90deg.jpg') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-1080p.mp4'), '-vframes', '1', '-vf', 'transpose=0', get_test_example_file('target-1080p-90deg.jpg') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'fps=25', get_test_example_file('target-240p-25fps.mp4') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'fps=30', get_test_example_file('target-240p-30fps.mp4') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'fps=60', get_test_example_file('target-240p-60fps.mp4') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'transpose=0', get_test_example_file('target-240p-90deg.mp4') ])
	subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-1080p.mp4'), '-vf', 'transpose=0', get_test_example_file('target-1080p-90deg.mp4') ])


@pytest.fixture(scope = 'function', autouse = True)
def before_each() -> None:
	prepare_test_output_directory()

def test_read_image() -> None:
	assert read_image(get_test_example_file('target-240p.jpg')).shape == (226, 426, 3)
	assert read_image(get_test_example_file('目标-240p.webp')).shape == (226, 426, 3)
	assert read_image('invalid') is None


def test_write_image() -> None:
	vision_frame = read_image(get_test_example_file('target-240p.jpg'))

	assert write_image(get_test_output_file('target-240p.jpg'), vision_frame) is True
	assert write_image(get_test_output_file('目标-240p.webp'), vision_frame) is True

def test_detect_image_resolution() -> None:
	assert detect_image_resolution(get_test_example_file('target-240p.jpg')) == (426, 226)
	assert detect_image_resolution(get_test_example_file('target-240p-90deg.jpg')) == (226, 426)
	assert detect_image_resolution(get_test_example_file('target-1080p.jpg')) == (2048, 1080)
	assert detect_image_resolution(get_test_example_file('target-1080p-90deg.jpg')) == (1080, 2048)
	assert detect_image_resolution('invalid') is None


def test_restrict_image_resolution() -> None:
	assert restrict_image_resolution(get_test_example_file('target-1080p.jpg'), (426, 226)) == (426, 226)
	assert restrict_image_resolution(get_test_example_file('target-1080p.jpg'), (2048, 1080)) == (2048, 1080)
	assert restrict_image_resolution(get_test_example_file('target-1080p.jpg'), (4096, 2160)) == (2048, 1080)

def test_read_video_frame() -> None:
	assert hasattr(read_video_frame(get_test_example_file('target-240p-25fps.mp4')), '__array_interface__')
	assert read_video_frame('invalid') is None


def test_count_video_frame_total() -> None:
	assert count_video_frame_total(get_test_example_file('target-240p-25fps.mp4')) == 270
	assert count_video_frame_total(get_test_example_file('target-240p-30fps.mp4')) == 324
	assert count_video_frame_total(get_test_example_file('target-240p-60fps.mp4')) == 648
	assert count_video_frame_total('invalid') == 0


def test_predict_video_frame_total() -> None:
	assert predict_video_frame_total(get_test_example_file('target-240p-25fps.mp4'), 12.5, 0, 100) == 50
	assert predict_video_frame_total(get_test_example_file('target-240p-25fps.mp4'), 25, 0, 100) == 100
	assert predict_video_frame_total(get_test_example_file('target-240p-25fps.mp4'), 25, 0, 200) == 200
	assert predict_video_frame_total('invalid', 25, 0, 100) == 0

def test_detect_video_fps() -> None:
	assert detect_video_fps(get_test_example_file('target-240p-25fps.mp4')) == 25.0
	assert detect_video_fps(get_test_example_file('target-240p-30fps.mp4')) == 30.0
	assert detect_video_fps(get_test_example_file('target-240p-60fps.mp4')) == 60.0
	assert detect_video_fps('invalid') is None


def test_restrict_video_fps() -> None:
	assert restrict_video_fps(get_test_example_file('target-1080p.mp4'), 20.0) == 20.0
	assert restrict_video_fps(get_test_example_file('target-1080p.mp4'), 25.0) == 25.0
	assert restrict_video_fps(get_test_example_file('target-1080p.mp4'), 60.0) == 25.0

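The fps tests above expect exact rational frame rates. A plausible sketch of such a detector, assuming `ffprobe` is on the PATH and that the probe output is a rational string — the real `facefusion.vision` implementation may work differently, and `parse_frame_rate` is a hypothetical helper introduced here only to make the parsing testable:

```python
import subprocess


def parse_frame_rate(frame_rate : str) -> float:
	# ffprobe reports r_frame_rate as a rational like '30000/1001'
	numerator, _, denominator = frame_rate.partition('/')
	return float(numerator) / float(denominator or 1)


def detect_video_fps(video_path : str) -> float:
	# ask ffprobe for the frame rate of the first video stream
	output = subprocess.check_output(
	[
		'ffprobe', '-v', 'error', '-select_streams', 'v:0',
		'-show_entries', 'stream=r_frame_rate', '-of', 'csv=p=0', video_path
	]).decode().strip()
	return parse_frame_rate(output)
```

Keeping the rational parsing separate from the subprocess call lets it be unit-tested without any video file on disk.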
def test_detect_video_duration() -> None:
	assert detect_video_duration(get_test_example_file('target-240p.mp4')) == 10.8
	assert detect_video_duration('invalid') == 0


def test_count_trim_frame_total() -> None:
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 0, 200) == 200
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 70, 270) == 200
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), -10, None) == 270
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, -10) == 0
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 280, None) == 0
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, 280) == 270
	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, None) == 270


def test_restrict_trim_frame() -> None:
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 0, 200) == (0, 200)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 70, 270) == (70, 270)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), -10, None) == (0, 270)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, -10) == (0, 0)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 280, None) == (270, 270)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, 280) == (0, 270)
	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, None) == (0, 270)

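The trim expectations above imply a policy: `None` means "unset", both ends are clamped into `[0, frame_total]`, and the restricted end never precedes the start. A minimal sketch under that reading — simplified to take the frame total directly, whereas the real helpers take a video path and count the frames themselves:

```python
from typing import Optional, Tuple


def restrict_trim_frame(frame_total : int, start_frame : Optional[int], end_frame : Optional[int]) -> Tuple[int, int]:
	# None means 'unset'; both ends are clamped into [0, frame_total]
	start = 0 if start_frame is None else min(max(start_frame, 0), frame_total)
	end = frame_total if end_frame is None else min(max(end_frame, 0), frame_total)
	return start, max(start, end)


def count_trim_frame_total(frame_total : int, start_frame : Optional[int], end_frame : Optional[int]) -> int:
	# the trimmed total is just the width of the restricted range
	start, end = restrict_trim_frame(frame_total, start_frame, end_frame)
	return end - start
```

With `frame_total = 270` this reproduces every expected value in the two tests above, including the degenerate `(None, -10) -> (0, 0)` and `(280, None) -> (270, 270)` cases.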
def test_detect_video_resolution() -> None:
	assert detect_video_resolution(get_test_example_file('target-240p.mp4')) == (426, 226)
	assert detect_video_resolution(get_test_example_file('target-240p-90deg.mp4')) == (226, 426)
	assert detect_video_resolution(get_test_example_file('target-1080p.mp4')) == (2048, 1080)
	assert detect_video_resolution(get_test_example_file('target-1080p-90deg.mp4')) == (1080, 2048)
	assert detect_video_resolution('invalid') is None


def test_restrict_video_resolution() -> None:
	assert restrict_video_resolution(get_test_example_file('target-1080p.mp4'), (426, 226)) == (426, 226)
	assert restrict_video_resolution(get_test_example_file('target-1080p.mp4'), (2048, 1080)) == (2048, 1080)
	assert restrict_video_resolution(get_test_example_file('target-1080p.mp4'), (4096, 2160)) == (2048, 1080)

def test_scale_resolution() -> None:
	assert scale_resolution((426, 226), 0.5) == (212, 112)
	assert scale_resolution((2048, 1080), 1.0) == (2048, 1080)
	assert scale_resolution((4096, 2160), 2.0) == (8192, 4320)


def test_normalize_resolution() -> None:
	assert normalize_resolution((2.5, 2.5)) == (2, 2)
	assert normalize_resolution((3.0, 3.0)) == (4, 4)
	assert normalize_resolution((6.5, 6.5)) == (6, 6)


def test_pack_resolution() -> None:
	assert pack_resolution((1, 1)) == '0x0'
	assert pack_resolution((2, 2)) == '2x2'


def test_unpack_resolution() -> None:
	assert unpack_resolution('0x0') == (0, 0)
	assert unpack_resolution('2x2') == (2, 2)

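The expected values above pin down the normalization rule: each dimension is snapped to the nearest even integer via `round(x / 2) * 2`, where Python's banker's rounding explains why `(3.0, 3.0)` becomes `(4, 4)` while `(2.5, 2.5)` becomes `(2, 2)`, and why `pack_resolution((1, 1))` yields `'0x0'`. A hypothetical reimplementation consistent with every assertion — not necessarily facefusion's actual code:

```python
from typing import Tuple


def normalize_resolution(resolution : Tuple[float, float]) -> Tuple[int, int]:
	width, height = resolution
	# snap each dimension to the nearest even integer; round() uses
	# banker's rounding, so 1.5 -> 2 (giving 4) but 1.25 -> 1 (giving 2)
	return round(width / 2) * 2, round(height / 2) * 2


def scale_resolution(resolution : Tuple[int, int], scale : float) -> Tuple[int, int]:
	width, height = resolution
	# scale first, then normalize to even dimensions
	return normalize_resolution((width * scale, height * scale))


def pack_resolution(resolution : Tuple[int, int]) -> str:
	width, height = normalize_resolution(resolution)
	return str(width) + 'x' + str(height)


def unpack_resolution(resolution : str) -> Tuple[int, int]:
	width, height = map(int, resolution.split('x'))
	return width, height
```

Even dimensions matter downstream because common video encoders (e.g. libx264 in yuv420p) reject odd widths and heights.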
def test_calculate_histogram_difference() -> None:
	source_vision_frame = read_image(get_test_example_file('target-240p.jpg'))
	target_vision_frame = read_image(get_test_example_file('target-240p-0sat.jpg'))

	assert calculate_histogram_difference(source_vision_frame, source_vision_frame) == 1.0
	assert calculate_histogram_difference(source_vision_frame, target_vision_frame) < 0.5


def test_match_frame_color() -> None:
	source_vision_frame = read_image(get_test_example_file('target-240p.jpg'))
	target_vision_frame = read_image(get_test_example_file('target-240p-0sat.jpg'))
	output_vision_frame = match_frame_color(source_vision_frame, target_vision_frame)

	assert calculate_histogram_difference(source_vision_frame, output_vision_frame) > 0.5
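The histogram tests treat 1.0 as "identical" and penalize the desaturated frame, which points at a correlation-style metric. A numpy-only stand-in in the spirit of OpenCV's `HISTCMP_CORREL` comparison — an assumed approach for illustration, not facefusion's actual implementation:

```python
import numpy as np


def calculate_histogram_difference(source_vision_frame : np.ndarray, target_vision_frame : np.ndarray) -> float:
	# Pearson correlation of per-channel 256-bin histograms:
	# identical frames score 1.0, dissimilar distributions score lower
	source_histogram = np.concatenate([ np.bincount(source_vision_frame[..., channel].ravel(), minlength = 256) for channel in range(3) ]).astype(np.float64)
	target_histogram = np.concatenate([ np.bincount(target_vision_frame[..., channel].ravel(), minlength = 256) for channel in range(3) ]).astype(np.float64)
	source_delta = source_histogram - source_histogram.mean()
	target_delta = target_histogram - target_histogram.mean()
	return float((source_delta * target_delta).sum() / np.sqrt((source_delta * source_delta).sum() * (target_delta * target_delta).sum()))
```

Because the score depends only on value distributions, `match_frame_color` can raise it above 0.5 without restoring the original pixels, which is exactly what the last test relies on.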