Next (#216)
* Simplify bbox access
* Code cleanup
* Simplify bbox access
* Move code to face helper
* Swap and paste back without insightface
* Swap and paste back without insightface
* Remove semaphore where possible
* Improve paste back performance
* Cosmetic changes
* Move the predictor to ONNX to avoid tensorflow, use video ranges for prediction
* Make CI happy
* Move template and size to the options
* Fix different color on box
* Uniform model handling for predictor
* Uniform frame handling for predictor
* Pass kps direct to warp_face
* Fix urllib
* Analyse based on matches
* Analyse based on rate
* Fix CI
* ROCM and OpenVINO mapping for torch backends
* Fix the paste back speed
* Fix import
* Replace retinaface with yunet (#168)
* Remove insightface dependency
* Fix urllib
* Some fixes
* Analyse based on matches
* Analyse based on rate
* Fix CI
* Migrate to Yunet
* Something is off here
* We indeed need semaphore for yunet
* Normalize the normed_embedding
* Fix download of models
* Fix download of models
* Fix download of models
* Add score and improve affine_matrix
* Temp fix for bbox out of frame
* Temp fix for bbox out of frame
* ROCM and OpenVINO mapping for torch backends
* Normalize bbox
* Implement gender age
* Cosmetics on cli args
* Prevent face jumping
* Fix the paste back speed
* Fix import
* Introduce detection size
* Cosmetics on face analyser ARGS and globals
* Temp fix for shaking face
* Accurate event handling
* Accurate event handling
* Accurate event handling
* Set the reference_frame_number in face_selector component
* Simswap model (#171)
* Add simswap models
* Add ghost models
* Introduce normed template
* Conditional prepare and normalize for ghost
* Conditional prepare and normalize for ghost
* Get simswap working
* Get simswap working
* Fix refresh of swapper model
* Refine face selection and detection (#174)
* Refine face selection and detection
* Update README.md
* Fix some face analyser UI
* Fix some face analyser UI
* Introduce range handling for CLI arguments
* Introduce range handling for CLI arguments
* Fix some spacings
* Disable onnxruntime warnings
* Use cv2.blur over cv2.GaussianBlur for better performance
* Revert "Use cv2.blur over cv2.GaussianBlur for better performance" (reverts commit bab666d6f9216a9f24faa84ead2d006b76f30159)
* Prepare universal face detection
* Prepare universal face detection part2
* Reimplement retinaface
* Introduce cached anchors creation
* Restore filtering to enhance performance
* Minor changes
* Minor changes
* More code but easier to understand
* Minor changes
* Rename predictor to content analyser
* Change detection/recognition to detector/recognizer
* Fix crop frame borders
* Fix spacing
* Allow normalize output without a source
* Improve conditional set face reference
* Update dependencies
* Add timeout for get_download_size
* Fix performance due to disorder
* Move models to assets repository, adjust namings
* Refactor face analyser
* Rename models once again
* Fix spacing
* Highres simswap (#192)
* Introduce highres simswap
* Fix simswap 256 color issue (#191)
* Fix simswap 256 color issue
* Update face_swapper.py
* Normalize models and host in our repo
* Normalize models and host in our repo
* Rename face analyser direction to face analyser order
* Improve the UI for face selector
* Add best-worst, worst-best detector ordering
* Clear as needed and fix zero score bug
* Fix linter
* Improve startup time by multi thread remote download size
* Just some cosmetics
* Normalize swapper source input, add blendface_256 (unfinished)
* New paste back (#195)
* Add new paste_back (#194)
* Add new paste_back
* Update face_helper.py
* Update face_helper.py
* Add commandline arguments and gui
* Fix conflict
* Update face_mask.py
* Type fix
* Clean some wording and typing
* Clean more names, use blur range approach
* Add blur padding range
* Change the padding order
* Fix yunet filename
* Introduce face debugger
* Use percent for mask padding
* Ignore this
* Ignore this
* Simplify debugger output
* Implement blendface (#198)
* Clean up after the genius
* Add gpen_bfr_256
* Cosmetics
* Ignore face_mask_padding on face enhancer
* Update face_debugger.py (#202)
* Shrink debug_face() to a minimum
* Mark as 2.0.0 release
* Remove unused (#204)
* Apply NMS (#205)
* Apply NMS
* Apply NMS part2
* Fix restoreformer url
* Add debugger cli and gui components (#206)
* Add debugger cli and gui components
* Update
* Polishing the types
* Fix usage in README.md
* Update onnxruntime
* Support for webp
* Rename paste-back to face-mask
* Add license to README
* Add license to README
* Extend face selector mode by one
* Update utilities.py (#212)
* Stop inline camera on stream
* Minor webcam updates
* Gracefully start and stop webcam
* Rename capture to video_capture
* Make get webcam capture pure
* Check webcam to not be None
* Remove some is not None
* Use index 0 for webcam
* Remove memory lookup within progress bar
* Less progress bar updates
* Uniform progress bar
* Use classic progress bar
* Fix image and video validation
* Use different hash for cache
* Use best-worst order for webcam
* Normalize padding like CSS
* Update preview
* Fix max memory
* Move disclaimer and license to the docs
* Update wording in README
* Add LICENSE.md
* Fix argument in README

Co-authored-by: Harisreedhar <46858047+harisreedhar@users.noreply.github.com>
Co-authored-by: alex00ds <31631959+alex00ds@users.noreply.github.com>
facefusion/content_analyser.py (new file, 102 lines)
@@ -0,0 +1,102 @@
from typing import Any, Dict
from functools import lru_cache
import threading
import cv2
import numpy
import onnxruntime
from tqdm import tqdm

import facefusion.globals
from facefusion import wording
from facefusion.typing import Frame, ModelValue
from facefusion.vision import get_video_frame, count_video_frame_total, read_image, detect_fps
from facefusion.utilities import resolve_relative_path, conditional_download

CONTENT_ANALYSER = None
THREAD_LOCK : threading.Lock = threading.Lock()
MODELS : Dict[str, ModelValue] =\
{
	'open_nsfw':
	{
		'url': 'https://github.com/facefusion/facefusion-assets/releases/download/models/open_nsfw.onnx',
		'path': resolve_relative_path('../.assets/models/open_nsfw.onnx')
	}
}
MAX_PROBABILITY = 0.80
MAX_RATE = 5
STREAM_COUNTER = 0


def get_content_analyser() -> Any:
	global CONTENT_ANALYSER

	with THREAD_LOCK:
		if CONTENT_ANALYSER is None:
			model_path = MODELS.get('open_nsfw').get('path')
			CONTENT_ANALYSER = onnxruntime.InferenceSession(model_path, providers = facefusion.globals.execution_providers)
	return CONTENT_ANALYSER


def clear_content_analyser() -> None:
	global CONTENT_ANALYSER

	CONTENT_ANALYSER = None


def pre_check() -> bool:
	if not facefusion.globals.skip_download:
		download_directory_path = resolve_relative_path('../.assets/models')
		model_url = MODELS.get('open_nsfw').get('url')
		conditional_download(download_directory_path, [ model_url ])
	return True


def analyse_stream(frame : Frame, fps : float) -> bool:
	global STREAM_COUNTER

	STREAM_COUNTER = STREAM_COUNTER + 1
	if STREAM_COUNTER % int(fps) == 0:
		return analyse_frame(frame)
	return False
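The stream path above throttles the classifier: every webcam frame increments STREAM_COUNTER, but analyse_frame only fires once per int(fps) frames, i.e. roughly once per second of footage. A minimal standalone sketch of that throttling, with a hypothetical stub in place of the real ONNX classifier:

```python
# Hypothetical stub standing in for analyse_frame(); counts how often
# the (expensive) classifier would actually run.
calls = 0

def fake_analyse_frame(frame):
    global calls
    calls += 1
    return False

STREAM_COUNTER = 0

def analyse_stream_sketch(frame, fps):
    # Same throttling as analyse_stream(): run the classifier
    # once every int(fps) calls, otherwise report False.
    global STREAM_COUNTER
    STREAM_COUNTER += 1
    if STREAM_COUNTER % int(fps) == 0:
        return fake_analyse_frame(frame)
    return False

# Feed 90 webcam frames at 30 fps: the classifier runs only 3 times.
for _ in range(90):
    analyse_stream_sketch(None, 30.0)
print(calls)  # 3
```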


def prepare_frame(frame : Frame) -> Frame:
	frame = cv2.resize(frame, (224, 224)).astype(numpy.float32)
	frame -= numpy.array([ 104, 117, 123 ]).astype(numpy.float32)
	frame = numpy.expand_dims(frame, axis = 0)
	return frame
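prepare_frame follows the usual open_nsfw input recipe: resize to 224x224, subtract the per-channel BGR means (104, 117, 123), and add a batch dimension. A numpy-only sketch of those steps, where the constant dummy frame is an assumption for illustration and a nearest-neighbour index trick stands in for cv2.resize:

```python
import numpy

# Dummy 480x640 BGR frame standing in for a decoded video frame (assumption).
frame = numpy.full((480, 640, 3), 127, dtype = numpy.uint8)

# Nearest-neighbour stand-in for cv2.resize(frame, (224, 224)).
rows = numpy.arange(224) * frame.shape[0] // 224
cols = numpy.arange(224) * frame.shape[1] // 224
prepared = frame[rows][:, cols].astype(numpy.float32)

# Subtract the per-channel BGR means, then add a batch dimension.
prepared -= numpy.array([ 104, 117, 123 ]).astype(numpy.float32)
prepared = numpy.expand_dims(prepared, axis = 0)

print(prepared.shape)  # (1, 224, 224, 3)
```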


def analyse_frame(frame : Frame) -> bool:
	content_analyser = get_content_analyser()
	frame = prepare_frame(frame)
	probability = content_analyser.run(None,
	{
		'input:0': frame
	})[0][0][1]
	return probability > MAX_PROBABILITY


@lru_cache(maxsize = None)
def analyse_image(image_path : str) -> bool:
	frame = read_image(image_path)
	return analyse_frame(frame)


@lru_cache(maxsize = None)
def analyse_video(video_path : str, start_frame : int, end_frame : int) -> bool:
	video_frame_total = count_video_frame_total(video_path)
	fps = detect_fps(video_path)
	frame_range = range(start_frame or 0, end_frame or video_frame_total)
	rate = 0.0
	counter = 0
	with tqdm(total = len(frame_range), desc = wording.get('analysing'), unit = 'frame', ascii = ' =') as progress:
		for frame_number in frame_range:
			if frame_number % int(fps) == 0:
				frame = get_video_frame(video_path, frame_number)
				if analyse_frame(frame):
					counter += 1
			rate = counter * int(fps) / len(frame_range) * 100
			progress.update()
			progress.set_postfix(rate = rate)
	return rate > MAX_RATE
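analyse_video samples one frame per second of video (frame_number % int(fps) == 0), counts the flagged samples, and scales the count by int(fps) to express it as a percentage of the whole frame range; the clip is rejected once that rate exceeds MAX_RATE. A pure-Python sketch of the bookkeeping, using a hypothetical list of classifier flags in place of real analyse_frame() results:

```python
MAX_RATE = 5  # percent, as in the module above

def sampled_rate(flags, fps, frame_total):
    # flags holds one boolean per sampled frame (one sample per int(fps)
    # frames), a hypothetical stand-in for analyse_frame() results.
    counter = sum(1 for flag in flags if flag)
    return counter * int(fps) / frame_total * 100

# 300 frames at 25 fps -> 12 sampled frames; a single positive sample
# already pushes the rate past MAX_RATE.
rate = sampled_rate([ True ] + [ False ] * 11, 25.0, 300)
print(round(rate, 2), rate > MAX_RATE)  # 8.33 True
```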