Next (#945)

* Rename calcXXX to calculateXXX
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Add migraphx support
* Use True for the flags
* Add migraphx support
* add face-swapper-weight
* add face-swapper-weight to facefusion.ini
* changes
* change choice
* Fix typing for xxxWeight
* Feat/log inference session (#906)
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Log inference session, Introduce time helper
* Mark as NEXT
* Follow industry standard x1, x2, y1 and y2
* Follow industry standard x1, x2, y1 and y2
* Follow industry standard in terms of naming (#908)
* Follow industry standard in terms of naming
* Improve xxx_embedding naming
* Fix norm vs. norms
* Reduce timeout to 5
* Sort out voice_extractor once again
* changes
* Introduce many to the occlusion mask (#910)
* Introduce many to the occlusion mask
* Then we use minimum
* Add support for wmv
* Run platform tests before has_execution_provider (#911)
* Add support for wmv
* Introduce benchmark mode (#912)
* Honestly makes no difference to me
* Honestly makes no difference to me
* Fix wording
* Bring back YuNet (#922)
* Reintroduce YuNet without cv2 dependency
* Fix variable naming
* Avoid RGB to YUV colorshift using libx264rgb
* Avoid RGB to YUV colorshift using libx264rgb
* Make libx264 the default again
* Make libx264 the default again
* Fix types in ffmpeg builder
* Fix quality stuff in ffmpeg builder
* Fix quality stuff in ffmpeg builder
* Add libx264rgb to test
* Revamp Processors (#923)
* Introduce new concept of pure target frames
* Radical refactoring of process flow
* Introduce new concept of pure target frames
* Fix webcam
* Minor improvements
* Minor improvements
* Use deque for video processing
* Use deque for video processing
* Extend the video manager
* Polish deque
* Polish deque
* Deque is not even used
* Improve speed with multiple futures
* Fix temp frame mutation and
* Fix RAM usage
* Remove old types and manage method
* Remove execution_queue_count
* Use init_state for benchmarker to avoid issues
* add voice extractor option
* Change the order of voice extractor in code
* Use official download urls
* Use official download urls
* add gui
* fix preview
* Add remote updates for voice extractor
* fix crash on headless-run
* update test_job_helper.py
* Fix it for good
* Remove pointless method
* Fix types and unused imports
* Revamp reference (#925)
* Initial revamp of face references
* Initial revamp of face references
* Initial revamp of face references
* Terminate find_similar_faces
* Improve find mutant faces
* Improve find mutant faces
* Move sort where it belongs
* Forward reference vision frame
* Forward reference vision frame also in preview
* Fix reference selection
* Use static video frame
* Fix CI
* Remove reference type from frame processors
* Improve some naming
* Fix types and unused imports
* Fix find mutant faces
* Fix find mutant faces
* Fix imports
* Correct naming
* Correct naming
* simplify pad
* Improve webcam performance on highres
* Camera manager (#932)
* Introduce webcam manager
* Fix order
* Rename to camera manager, improve video manager
* Fix CI
* Remove optional
* Fix naming in webcam options
* Avoid using temp faces (#933)
* output video scale
* Fix imports
* output image scale
* upscale fix (not limiter)
* add unit test scale_resolution & remove unused methods
* fix and add test
* fix
* change pack_resolution
* fix tests
* Simplify output scale testing
* Fix benchmark UI
* Fix benchmark UI
* Update dependencies
* Introduce REAL multi gpu support using multi dimensional inference pool (#935)
* Introduce REAL multi gpu support using multi dimensional inference pool
* Remove the MULTI:GPU flag
* Restore "processing stop"
* Restore "processing stop"
* Remove old templates
* Go fill in with caching
* add expression restorer areas
* re-arrange
* rename method
* Fix stop for extract frames and merge video
* Replace arcface_converter models with latest crossface models
* Replace arcface_converter models with latest crossface models
* Move module logs to debug mode
* Refactor/streamer (#938)
* Introduce webcam manager
* Fix order
* Rename to camera manager, improve video manager
* Fix CI
* Fix naming in webcam options
* Move logic over to streamer
* Fix streamer, improve webcam experience
* Improve webcam experience
* Revert method
* Revert method
* Improve webcam again
* Use release on capture instead
* Only forward valid frames
* Fix resolution logging
* Add AVIF support
* Add AVIF support
* Limit avif to unix systems
* Drop avif
* Drop avif
* Drop avif
* Default to Documents in the UI if output path is not set
* Update wording.py (#939): "succeed" is grammatically incorrect in the given context. To succeed is the infinitive form of the verb. Correct would be either "succeeded" or alternatively a form involving the noun "success".
* Fix more grammar issue
* Fix more grammar issue
* Sort out caching
* Move webcam choices back to UI
* Move preview options to own file (#940)
* Fix Migraphx execution provider
* Fix benchmark
* Reuse blend frame method
* Fix CI
* Fix CI
* Fix CI
* Hotfix missing check in face debugger, Enable logger for preview
* Fix reference selection (#942)
* Fix reference selection
* Fix reference selection
* Fix reference selection
* Fix reference selection
* Side by side preview (#941)
* Initial side by side preview
* More work on preview, remove UI only stuff from vision.py
* Improve more
* Use fit frame
* Add different fit methods for vision
* Improve preview part2
* Improve preview part3
* Improve preview part4
* Remove none as choice
* Remove useless methods
* Fix CI
* Fix naming
* use 1024 as preview resolution default
* Fix fit_cover_frame
* Uniform fit_xxx_frame methods
* Add back disabled logger
* Use ui choices alias
* Extract select face logic from processors (#943)
* Extract select face logic from processors to use it for face by face in preview
* Fix order
* Remove old code
* Merge methods
* Refactor face debugger (#944)
* Refactor huge method of face debugger
* Remove text metrics from face debugger
* Remove useless copy of temp frame
* Resort methods
* Fix spacing
* Remove old method
* Fix hard exit to work without signals
* Prevent upscaling for face-by-face
* Switch to version
* Improve exiting

---------

Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: Rafael Tappe Maestro <rafael@tappemaestro.com>
New file: tests/test_cli_output_scale.py (57 lines)
@@ -0,0 +1,57 @@
import subprocess
import sys

import pytest

from facefusion.download import conditional_download
from facefusion.jobs.job_manager import clear_jobs, init_jobs
from facefusion.types import Resolution, Scale
from facefusion.vision import detect_image_resolution, detect_video_resolution
from .helper import get_test_example_file, get_test_examples_directory, get_test_jobs_directory, get_test_output_file, prepare_test_output_directory


@pytest.fixture(scope = 'module', autouse = True)
def before_all() -> None:
    conditional_download(get_test_examples_directory(),
    [
        'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/source.jpg',
        'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-240p.mp4'
    ])
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vframes', '1', get_test_example_file('target-240p.jpg') ])


@pytest.fixture(scope = 'function', autouse = True)
def before_each() -> None:
    clear_jobs(get_test_jobs_directory())
    init_jobs(get_test_jobs_directory())
    prepare_test_output_directory()


@pytest.mark.parametrize('output_image_scale, output_image_resolution',
[
    (0.5, (212, 112)),
    (1.0, (426, 226)),
    (2.0, (852, 452)),
    (8.0, (3408, 1808))
])
def test_output_image_scale(output_image_scale : Scale, output_image_resolution : Resolution) -> None:
    output_file_path = get_test_output_file('test-output-image-scale-' + str(output_image_scale) + '.jpg')
    commands = [ sys.executable, 'facefusion.py', 'headless-run', '--jobs-path', get_test_jobs_directory(), '--processors', 'frame_enhancer', '-t', get_test_example_file('target-240p.jpg'), '-o', output_file_path, '--output-image-scale', str(output_image_scale) ]

    assert subprocess.run(commands).returncode == 0
    assert detect_image_resolution(output_file_path) == output_image_resolution


@pytest.mark.parametrize('output_video_scale, output_video_resolution',
[
    (0.5, (212, 112)),
    (1.0, (426, 226)),
    (2.0, (852, 452)),
    (8.0, (3408, 1808))
])
def test_output_video_scale(output_video_scale : Scale, output_video_resolution : Resolution) -> None:
    output_file_path = get_test_output_file('test-output-video-scale-' + str(output_video_scale) + '.mp4')
    commands = [ sys.executable, 'facefusion.py', 'headless-run', '--jobs-path', get_test_jobs_directory(), '--processors', 'frame_enhancer', '-t', get_test_example_file('target-240p.mp4'), '-o', output_file_path, '--trim-frame-end', '1', '--output-video-scale', str(output_video_scale) ]

    assert subprocess.run(commands).returncode == 0
    assert detect_video_resolution(output_file_path) == output_video_resolution
@@ -1,4 +1,4 @@
from facefusion.common_helper import calc_float_step, calc_int_step, create_float_metavar, create_float_range, create_int_metavar, create_int_range
from facefusion.common_helper import calculate_float_step, calculate_int_step, create_float_metavar, create_float_range, create_int_metavar, create_int_range


def test_create_int_metavar() -> None:
@@ -20,8 +20,8 @@ def test_create_float_range() -> None:


def test_calc_int_step() -> None:
    assert calc_int_step([ 0, 1 ]) == 1
    assert calculate_int_step([0, 1]) == 1


def test_calc_float_step() -> None:
    assert calc_float_step([ 0.1, 0.2 ]) == 0.1
    assert calculate_float_step([0.1, 0.2]) == 0.1

@@ -4,8 +4,7 @@ import pytest

from facefusion import face_classifier, face_detector, face_landmarker, face_recognizer, state_manager
from facefusion.download import conditional_download
from facefusion.face_analyser import get_many_faces, get_one_face
from facefusion.types import Face
from facefusion.face_analyser import get_many_faces
from facefusion.vision import read_static_image
from .helper import get_test_example_file, get_test_examples_directory

@@ -19,7 +18,7 @@ def before_all() -> None:
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('source.jpg'), '-vf', 'crop=iw*0.8:ih*0.8', get_test_example_file('source-80crop.jpg') ])
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('source.jpg'), '-vf', 'crop=iw*0.7:ih*0.7', get_test_example_file('source-70crop.jpg') ])
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('source.jpg'), '-vf', 'crop=iw*0.6:ih*0.6', get_test_example_file('source-60crop.jpg') ])
    state_manager.init_item('execution_device_id', '0')
    state_manager.init_item('execution_device_ids', [ '0' ])
    state_manager.init_item('execution_providers', [ 'cpu' ])
    state_manager.init_item('download_providers', [ 'github' ])
    state_manager.init_item('face_detector_angles', [ 0 ])
@@ -56,9 +55,8 @@ def test_get_one_face_with_retinaface() -> None:
    for source_path in source_paths:
        source_frame = read_static_image(source_path)
        many_faces = get_many_faces([ source_frame ])
        face = get_one_face(many_faces)

        assert isinstance(face, Face)
        assert len(many_faces) == 1


def test_get_one_face_with_scrfd() -> None:
@@ -77,9 +75,8 @@ def test_get_one_face_with_scrfd() -> None:
    for source_path in source_paths:
        source_frame = read_static_image(source_path)
        many_faces = get_many_faces([ source_frame ])
        face = get_one_face(many_faces)

        assert isinstance(face, Face)
        assert len(many_faces) == 1


def test_get_one_face_with_yoloface() -> None:
@@ -98,9 +95,28 @@ def test_get_one_face_with_yoloface() -> None:
    for source_path in source_paths:
        source_frame = read_static_image(source_path)
        many_faces = get_many_faces([ source_frame ])
        face = get_one_face(many_faces)

        assert isinstance(face, Face)
        assert len(many_faces) == 1


def test_get_one_face_with_yunet() -> None:
    state_manager.init_item('face_detector_model', 'yunet')
    state_manager.init_item('face_detector_size', '640x640')
    face_detector.pre_check()

    source_paths =\
    [
        get_test_example_file('source.jpg'),
        get_test_example_file('source-80crop.jpg'),
        get_test_example_file('source-70crop.jpg'),
        get_test_example_file('source-60crop.jpg')
    ]

    for source_path in source_paths:
        source_frame = read_static_image(source_path)
        many_faces = get_many_faces([ source_frame ])

        assert len(many_faces) == 1


def test_get_many_faces() -> None:
@@ -108,6 +124,4 @@ def test_get_many_faces() -> None:
    source_frame = read_static_image(source_path)
    many_faces = get_many_faces([ source_frame, source_frame, source_frame ])

    assert isinstance(many_faces[0], Face)
    assert isinstance(many_faces[1], Face)
    assert isinstance(many_faces[2], Face)
    assert len(many_faces) == 3

@@ -28,7 +28,7 @@ def before_all() -> None:
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'fps=30', get_test_example_file('target-240p-30fps.mp4') ])
    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('target-240p.mp4'), '-vf', 'fps=60', get_test_example_file('target-240p-60fps.mp4') ])

    for output_video_format in [ 'avi', 'm4v', 'mkv', 'mov', 'mp4', 'webm' ]:
    for output_video_format in [ 'avi', 'm4v', 'mkv', 'mov', 'mp4', 'webm', 'wmv' ]:
        subprocess.run([ 'ffmpeg', '-i', get_test_example_file('source.mp3'), '-i', get_test_example_file('target-240p.mp4'), '-ar', '16000', get_test_example_file('target-240p-16khz.' + output_video_format) ])

    subprocess.run([ 'ffmpeg', '-i', get_test_example_file('source.mp3'), '-i', get_test_example_file('target-240p.mp4'), '-ar', '48000', get_test_example_file('target-240p-48khz.mp4') ])
@@ -84,7 +84,7 @@ def test_extract_frames() -> None:
    for target_path, trim_frame_start, trim_frame_end, frame_total in test_set:
        create_temp_directory(target_path)

        assert extract_frames(target_path, '452x240', 30.0, trim_frame_start, trim_frame_end) is True
        assert extract_frames(target_path, (452, 240), 30.0, trim_frame_start, trim_frame_end) is True
        assert len(resolve_temp_frame_paths(target_path)) == frame_total

        clear_temp_directory(target_path)
@@ -98,7 +98,8 @@ def test_merge_video() -> None:
        get_test_example_file('target-240p-16khz.mkv'),
        get_test_example_file('target-240p-16khz.mp4'),
        get_test_example_file('target-240p-16khz.mov'),
        get_test_example_file('target-240p-16khz.webm')
        get_test_example_file('target-240p-16khz.webm'),
        get_test_example_file('target-240p-16khz.wmv')
    ]
    output_video_encoders = get_available_encoder_set().get('video')

@@ -106,9 +107,9 @@ def test_merge_video() -> None:
    for output_video_encoder in output_video_encoders:
        state_manager.init_item('output_video_encoder', output_video_encoder)
        create_temp_directory(target_path)
        extract_frames(target_path, '452x240', 25.0, 0, 1)
        extract_frames(target_path, (452, 240), 25.0, 0, 1)

        assert merge_video(target_path, 25.0, '452x240', 25.0, 0, 1) is True
        assert merge_video(target_path, 25.0, (452, 240), 25.0, 0, 1) is True

        clear_temp_directory(target_path)

@@ -141,7 +142,8 @@ def test_restore_audio() -> None:
        (get_test_example_file('target-240p-16khz.mov'), get_test_output_file('target-240p-16khz.mov')),
        (get_test_example_file('target-240p-16khz.mp4'), get_test_output_file('target-240p-16khz.mp4')),
        (get_test_example_file('target-240p-48khz.mp4'), get_test_output_file('target-240p-48khz.mp4')),
        (get_test_example_file('target-240p-16khz.webm'), get_test_output_file('target-240p-16khz.webm'))
        (get_test_example_file('target-240p-16khz.webm'), get_test_output_file('target-240p-16khz.webm')),
        (get_test_example_file('target-240p-16khz.wmv'), get_test_output_file('target-240p-16khz.wmv'))
    ]
    output_audio_encoders = get_available_encoder_set().get('audio')

@@ -51,6 +51,9 @@ def test_set_video_quality() -> None:
    assert set_video_quality('libx264', 0) == [ '-crf', '51' ]
    assert set_video_quality('libx264', 50) == [ '-crf', '26' ]
    assert set_video_quality('libx264', 100) == [ '-crf', '0' ]
    assert set_video_quality('libx264rgb', 0) == [ '-crf', '51' ]
    assert set_video_quality('libx264rgb', 50) == [ '-crf', '26' ]
    assert set_video_quality('libx264rgb', 100) == [ '-crf', '0' ]
    assert set_video_quality('libx265', 0) == [ '-crf', '51' ]
    assert set_video_quality('libx265', 50) == [ '-crf', '26' ]
    assert set_video_quality('libx265', 100) == [ '-crf', '0' ]

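The libx264, libx264rgb and libx265 asserts above all share one quality-to-CRF mapping: quality 0 yields CRF 51, quality 50 yields CRF 26, and quality 100 yields CRF 0. A minimal sketch of that mapping, assuming a simple linear interpolation (the helper name is hypothetical, not facefusion's actual implementation):

```python
def video_quality_to_crf(output_video_quality : int) -> int:
    # Map quality 0..100 linearly onto the CRF range 51..0
    # (lower CRF means better quality for libx264/libx265).
    return round(51 - output_video_quality * 51 / 100)
```

Note the inversion: CRF is an inverse quality scale, so the maximum quality of 100 maps to the lossless CRF 0.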
@@ -9,7 +9,7 @@ from facefusion.inference_manager import INFERENCE_POOL_SET, get_inference_pool

@pytest.fixture(scope = 'module', autouse = True)
def before_all() -> None:
    state_manager.init_item('execution_device_id', '0')
    state_manager.init_item('execution_device_ids', [ '0' ])
    state_manager.init_item('execution_providers', [ 'cpu' ])
    state_manager.init_item('download_providers', [ 'github' ])
    content_analyser.pre_check()

@@ -6,3 +6,4 @@ from facefusion.jobs.job_helper import get_step_output_path
def test_get_step_output_path() -> None:
    assert get_step_output_path('test-job', 0, 'test.mp4') == 'test-test-job-0.mp4'
    assert get_step_output_path('test-job', 0, 'test/test.mp4') == os.path.join('test', 'test-test-job-0.mp4')
    assert get_step_output_path('test-job', 0, 'invalid') is None

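The asserts above suggest that get_step_output_path injects the job id and step index before the file extension and rejects a path that has no extension at all. A rough re-implementation consistent with those three asserts (the function body here is an assumption, not facefusion's actual code):

```python
import os
from typing import Optional

def step_output_path(job_id : str, step_index : int, output_path : str) -> Optional[str]:
    # Split off directory and extension, then suffix the file name
    # with the job id and the step index.
    output_directory_path, file_name = os.path.split(output_path)
    file_name, file_extension = os.path.splitext(file_name)
    if not file_extension:
        # a path without an extension is considered invalid
        return None
    step_file_name = file_name + '-' + job_id + '-' + str(step_index) + file_extension
    return os.path.join(output_directory_path, step_file_name)
```

With this shape, `step_output_path('test-job', 0, 'test/test.mp4')` preserves the `test/` directory prefix while only the file name gains the `-test-job-0` suffix.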
@@ -1,6 +1,6 @@
from datetime import datetime, timedelta

from facefusion.date_helper import describe_time_ago
from facefusion.time_helper import describe_time_ago


def get_time_ago(days : int, hours : int, minutes : int) -> datetime:
@@ -3,7 +3,7 @@ import subprocess
import pytest

from facefusion.download import conditional_download
from facefusion.vision import calc_histogram_difference, count_trim_frame_total, count_video_frame_total, create_image_resolutions, create_video_resolutions, detect_image_resolution, detect_video_duration, detect_video_fps, detect_video_resolution, match_frame_color, normalize_resolution, pack_resolution, predict_video_frame_total, read_image, read_video_frame, restrict_image_resolution, restrict_trim_frame, restrict_video_fps, restrict_video_resolution, unpack_resolution, write_image
from facefusion.vision import calculate_histogram_difference, count_trim_frame_total, count_video_frame_total, detect_image_resolution, detect_video_duration, detect_video_fps, detect_video_resolution, match_frame_color, normalize_resolution, pack_resolution, predict_video_frame_total, read_image, read_video_frame, restrict_image_resolution, restrict_trim_frame, restrict_video_fps, restrict_video_resolution, scale_resolution, unpack_resolution, write_image
from .helper import get_test_example_file, get_test_examples_directory, get_test_output_file, prepare_test_output_directory


@@ -60,14 +60,6 @@ def test_restrict_image_resolution() -> None:
    assert restrict_image_resolution(get_test_example_file('target-1080p.jpg'), (4096, 2160)) == (2048, 1080)


def test_create_image_resolutions() -> None:
    assert create_image_resolutions((426, 226)) == [ '106x56', '212x112', '320x170', '426x226', '640x340', '852x452', '1064x564', '1278x678', '1492x792', '1704x904' ]
    assert create_image_resolutions((226, 426)) == [ '56x106', '112x212', '170x320', '226x426', '340x640', '452x852', '564x1064', '678x1278', '792x1492', '904x1704' ]
    assert create_image_resolutions((2048, 1080)) == [ '512x270', '1024x540', '1536x810', '2048x1080', '3072x1620', '4096x2160', '5120x2700', '6144x3240', '7168x3780', '8192x4320' ]
    assert create_image_resolutions((1080, 2048)) == [ '270x512', '540x1024', '810x1536', '1080x2048', '1620x3072', '2160x4096', '2700x5120', '3240x6144', '3780x7168', '4320x8192' ]
    assert create_image_resolutions(None) == []


def test_read_video_frame() -> None:
    assert hasattr(read_video_frame(get_test_example_file('target-240p-25fps.mp4')), '__array_interface__')
    assert read_video_frame('invalid') is None
@@ -139,12 +131,10 @@ def test_restrict_video_resolution() -> None:
    assert restrict_video_resolution(get_test_example_file('target-1080p.mp4'), (4096, 2160)) == (2048, 1080)


def test_create_video_resolutions() -> None:
    assert create_video_resolutions((426, 226)) == [ '426x226', '452x240', '678x360', '904x480', '1018x540', '1358x720', '2036x1080', '2714x1440', '4072x2160', '8144x4320' ]
    assert create_video_resolutions((226, 426)) == [ '226x426', '240x452', '360x678', '480x904', '540x1018', '720x1358', '1080x2036', '1440x2714', '2160x4072', '4320x8144' ]
    assert create_video_resolutions((2048, 1080)) == [ '456x240', '682x360', '910x480', '1024x540', '1366x720', '2048x1080', '2730x1440', '4096x2160', '8192x4320' ]
    assert create_video_resolutions((1080, 2048)) == [ '240x456', '360x682', '480x910', '540x1024', '720x1366', '1080x2048', '1440x2730', '2160x4096', '4320x8192' ]
    assert create_video_resolutions(None) == []
def test_scale_resolution() -> None:
    assert scale_resolution((426, 226), 0.5) == (212, 112)
    assert scale_resolution((2048, 1080), 1.0) == (2048, 1080)
    assert scale_resolution((4096, 2160), 2.0) == (8192, 4320)


def test_normalize_resolution() -> None:
@@ -167,8 +157,8 @@ def test_calc_histogram_difference() -> None:
    source_vision_frame = read_image(get_test_example_file('target-240p.jpg'))
    target_vision_frame = read_image(get_test_example_file('target-240p-0sat.jpg'))

    assert calc_histogram_difference(source_vision_frame, source_vision_frame) == 1.0
    assert calc_histogram_difference(source_vision_frame, target_vision_frame) < 0.5
    assert calculate_histogram_difference(source_vision_frame, source_vision_frame) == 1.0
    assert calculate_histogram_difference(source_vision_frame, target_vision_frame) < 0.5


def test_match_frame_color() -> None:
@@ -176,4 +166,4 @@ def test_match_frame_color() -> None:
    target_vision_frame = read_image(get_test_example_file('target-240p-0sat.jpg'))
    output_vision_frame = match_frame_color(source_vision_frame, target_vision_frame)

    assert calc_histogram_difference(source_vision_frame, output_vision_frame) > 0.5
    assert calculate_histogram_difference(source_vision_frame, output_vision_frame) > 0.5

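The test_scale_resolution asserts imply that the new scale_resolution helper multiplies both dimensions by the scale factor and snaps the result to even numbers: 426 x 0.5 = 213, yet the expected width is 212. A sketch consistent with those asserted values, assuming truncation to the next lower even number (the exact rounding strategy and the function body are assumptions, not facefusion's actual implementation):

```python
from typing import Tuple

Resolution = Tuple[int, int]

def scale_resolution_sketch(resolution : Resolution, scale : float) -> Resolution:
    width, height = resolution
    # Scale each dimension, then truncate to an even number,
    # since video encoders commonly require even frame dimensions.
    return int(width * scale) // 2 * 2, int(height * scale) // 2 * 2
```

Keeping both dimensions even explains why the CLI output-scale tests above expect (212, 112) rather than (213, 113) for a 0.5x scale of 426x226.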