ComfyUI-AdvancedLivePortrait

AdvancedLivePortrait Node Documentation

Overview

The AdvancedLivePortrait node is part of the ComfyUI-AdvancedLivePortrait extension. It works with photos, videos, and animations, tracking facial expressions, letting you edit them, and applying them to animated output with a fast preview. The node can add facial expressions to videos, build animations from a sequence of facial expressions, and generate videos from static images.

Features

  • Real-Time Preview: Provides a fast and real-time preview of the output.
  • Facial Expression Manipulation: Enables the editing of facial expressions in both static images and videos.
  • Animation Creation: Users can create animations by manipulating facial expressions over time.
  • Expression Data Storage: Save and load expression data for reuse.
  • Non-Video Animation: Generate animations without a source video.
  • Model Integration: Uses the underlying LivePortrait models for facial feature extraction and motion retargeting.

Inputs

  1. Retargeting Options:

    • retargeting_eyes: A weight controlling how strongly eye motion from the driving source is retargeted to the output.
    • retargeting_mouth: A weight controlling how strongly mouth motion from the driving source is retargeted to the output (a conceptual sketch of how these weights act as blend factors appears after this list).
  2. Settings:

    • crop_factor: Controls how much of the image to crop around detected faces. Adjusting this can improve tracking accuracy.
  3. Boolean Toggles:

    • turn_on: Activates or deactivates the node.
    • tracking_src_vid: Enables or disables face tracking from source videos.
    • animate_without_vid: Allows animation to proceed without an input driving video.
  4. Command Input:

    • command: A multiline command string defining expression sequences and timings (see Command Syntax under Special Features and Considerations).
  5. Image Inputs:

    • src_images: Source images for generating expressions.
    • motion_link: An optional link from expression-editing nodes (such as ExpressionEditor) that supplies expression and motion data to this node.
    • driving_images: Images used to drive facial motion.
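
The retargeting weights are applied internally by the extension's LivePortrait-based pipeline, but conceptually they act as blend factors between the source face and the driving motion. The sketch below is a minimal illustration of that idea, not the extension's actual code; the keypoint layout and the blend_motion helper are assumptions made for this example.

```python
import numpy as np

def blend_motion(src_keypoints, drv_keypoints, retargeting_eyes, retargeting_mouth,
                 eye_idx, mouth_idx):
    """Blend driving keypoint offsets into the source keypoints.

    Eye and mouth offsets are scaled by their retargeting weights, so a weight
    of 0 keeps that region still and 1 follows the driving motion fully.
    """
    out = src_keypoints.copy()
    delta = drv_keypoints - src_keypoints
    out[eye_idx] += retargeting_eyes * delta[eye_idx]
    out[mouth_idx] += retargeting_mouth * delta[mouth_idx]
    return out

# Illustration only: 10 facial keypoints, indices 0-3 around the eyes, 6-9 around the mouth.
src = np.random.rand(10, 2)
drv = src + 0.05
blended = blend_motion(src, drv, retargeting_eyes=0.8, retargeting_mouth=0.3,
                       eye_idx=slice(0, 4), mouth_idx=slice(6, 10))
```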

Outputs

  • Image Sequence: The node outputs animated images based on the processed expressions and motions, returned as a single tensor containing frames of the animation.
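
In ComfyUI, IMAGE outputs are batched float tensors with shape [frames, height, width, channels] and values in the 0..1 range. As a minimal sketch of post-processing such a batch outside a workflow (for example, dumping numbered PNG frames for a separate encoding step), something like the following would work; the frames variable stands in for the node's output.

```python
import os

import numpy as np
import torch
from PIL import Image

def save_frames(frames: torch.Tensor, out_dir: str = "alp_frames") -> None:
    """Write a ComfyUI IMAGE batch ([N, H, W, C], float 0..1) as numbered PNGs."""
    os.makedirs(out_dir, exist_ok=True)
    array = (frames.clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
    for i, frame in enumerate(array):
        Image.fromarray(frame).save(f"{out_dir}/frame_{i:05d}.png")

# frames = <IMAGE tensor produced by AdvancedLivePortrait>
# save_frames(frames)
```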

Usage in ComfyUI Workflows

The AdvancedLivePortrait node is used in workflows where users need to animate facial expressions seamlessly. Within ComfyUI, this node can be integrated with a variety of other nodes to create compelling animated visuals:

  • Expression Modification Workflows: Users can integrate AdvancedLivePortrait with ExpressionEditor nodes to modify expressions dynamically.
  • Animation Creation: By linking the node with other graphical nodes, users can create expressive animations that react to source images or predefined commands.
  • Expression Data Management: Use with LoadExpData and SaveExpData nodes for managing saved expression datasets, enabling batch processing and reusable expressions.
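
When driving ComfyUI programmatically through its /prompt HTTP API, a workflow is submitted as a JSON graph of nodes. The fragment below is a hypothetical sketch of wiring an AdvancedLivePortrait node to an image loader; the input names follow the sockets listed above, but the exact graph layout, values, and socket names depend on the installed version of the extension.

```python
import json
import urllib.request

# Hypothetical API-format graph: node "2" animates the image loaded by node "1".
workflow = {
    "1": {"class_type": "LoadImage", "inputs": {"image": "portrait.png"}},
    "2": {
        "class_type": "AdvancedLivePortrait",
        "inputs": {
            "src_images": ["1", 0],        # take output 0 of node "1"
            "retargeting_eyes": 0.0,
            "retargeting_mouth": 0.0,
            "crop_factor": 1.7,            # illustrative value
            "turn_on": True,
            "tracking_src_vid": False,
            "animate_without_vid": True,
            "command": "",                 # expression/timing script (see Command Syntax)
        },
    },
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```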

Special Features and Considerations

  • Command Syntax: The command input requires a specific syntax to describe motion and expression sequences accurately; a hypothetical parsing sketch follows this list.
  • System Resource Usage: The advanced features of this node, such as real-time processing and model-based expression modification, may require significant computational resources. Ensure adequate system capabilities for optimal performance.
  • Model Dependencies: Proper functioning may depend on pre-trained models, which need to be present or downloaded from specified sources.
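
The grammar of the command string is defined by the extension itself; consult its README for the authoritative format. Purely as an illustration of how such a multiline script could be parsed into timed expression segments, the sketch below assumes a made-up "index=duration:strength" form, one segment per line.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    expression_index: int   # which saved/edited expression to apply
    duration: int           # how many frames the segment lasts
    strength: float         # blend weight for the expression

def parse_command(command: str) -> list[Segment]:
    """Parse a multiline script of the assumed form "index=duration:strength"."""
    segments = []
    for line in command.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        index, rest = line.split("=")
        duration, strength = rest.split(":")
        segments.append(Segment(int(index), int(duration), float(strength)))
    return segments

print(parse_command("1=20:1.0\n2=15:0.6"))
```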

In summary, the AdvancedLivePortrait node in ComfyUI is a powerful tool for precisely editing and animating facial expressions from static images or videos, making it valuable for graphic designers and digital artists producing high-quality motion graphics.