The "Upper Body Tracking From Pose Keypoints (InstanceDiffusion)" node is a part of the ComfyUI's ControlNet Auxiliary Preprocessors. This node is specifically designed to track and generate bounding boxes for upper body parts based on pose keypoints data. It is particularly useful for applications where detailed tracking of body parts is required, such as animation, augmented reality, or advanced graphic design.
This node focuses exclusively on tracking a person's upper body. It takes a set of pose keypoints, identifies specific parts (the head, neck, shoulders, torso, arms, and forearms), and calculates a bounding box around each one, converting the raw keypoints into box data that downstream nodes can consume.
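To make the calculation concrete, here is a minimal sketch of the idea, assuming the node simply centers a box of the requested size on each part's keypoint; the function name and the clamping behavior are illustrative, not the node's actual code.

```python
# Minimal sketch, NOT the node's actual implementation: center a box of the
# requested size on a part's keypoint and clamp it to the canvas.

def box_around_keypoint(x, y, box_w, box_h, canvas_w, canvas_h):
    """Return (x1, y1, x2, y2) for a box_w x box_h box centered on (x, y)."""
    x1 = max(0.0, x - box_w / 2)
    y1 = max(0.0, y - box_h / 2)
    x2 = min(float(canvas_w), x + box_w / 2)
    y2 = min(float(canvas_h), y + box_h / 2)
    return (x1, y1, x2, y2)

# A 200x300 "torso" box around a keypoint at (256, 300) on a 512x512 canvas:
print(box_around_keypoint(256, 300, 200, 300, 512, 512))
# -> (156.0, 150.0, 356.0, 450.0)
```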
The node accepts the following inputs:
Pose Keypoints: An input of the "POSE_KEYPOINT" type, holding the keypoint data produced by a pose-estimation node. The data must include the canvas dimensions and the per-person keypoint coordinates.
ID Include: A string input that specifies which people (if more than one appear in the pose data) should be tracked, given as comma-separated identifiers.
Part Width and Height: For each tracked body part (head, neck, shoulders, torso, arms, and forearms), you can specify the desired box size as a string in the format "WIDTH, HEIGHT". A sketch of the expected inputs follows below.
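For orientation, here is a hedged sketch of what the inputs can look like. The POSE_KEYPOINT layout shown follows the OpenPose JSON convention (one dictionary per frame); the exact field names, widget labels, and variable names below are assumptions and may differ between versions of the node pack.

```python
# Hedged sketch of the expected inputs; field and widget names are assumptions.
pose_keypoints = [
    {
        "canvas_width": 512,
        "canvas_height": 768,
        "people": [
            {
                # Flat [x, y, confidence] triplets, one per keypoint
                # (nose, neck, shoulders, ...), in canvas coordinates.
                "pose_keypoints_2d": [256.0, 120.0, 0.98,
                                      256.0, 180.0, 0.95],  # truncated
            }
        ],
    }
]

# Illustrative widget values: track the first two people, with a 100x100
# head box and an 80x160 torso box ("WIDTH, HEIGHT" strings).
id_include = "0, 1"
head = "100, 100"
torso = "80, 160"
```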
The node produces the following outputs:
Tracking: Outputs a detailed dictionary that holds the tracking data for each specified body part. It maps each frame's part coordinates and dimensions according to the input pose keypoints. This data can be useful for drawing or processing in other nodes downstream.
Prompt: A string output intended for nodes that consume textual prompts for image generation or editing. It provides a prompt string for each tracked part, linked to that part's identifier. Both outputs are sketched below.
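Because the exact output layout can vary between versions, the following is only one plausible shape consistent with the description above; every key and the prompt format should be treated as an assumption, and you should inspect the node's actual output in your workflow to confirm.

```python
# One plausible (assumed) shape for the two outputs, per the description above.
tracking = {
    "head": {                        # one entry per tracked part
        0: [                         # keyed by person identifier
            (156, 20, 356, 220),     # frame 0: (x1, y1, x2, y2)
            (160, 22, 360, 222),     # frame 1
        ],
    },
    "torso": {
        0: [(176, 200, 336, 520),
            (180, 202, 340, 522)],
    },
}

# The prompt output links each tracked part to its identifier so that
# text-driven nodes downstream can address individual instances:
prompt = "0.head, 0.torso"
```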
In ComfyUI workflows, this node can be exceptionally useful in several scenarios:
Character Animation: By generating bounding boxes around upper body parts, this node can assist in animating characters based on real-world motion capture data, enhancing interactive experiences such as virtual avatars.
Augmented Reality (AR): Applications in AR can leverage this data to overlay virtual objects over specific body parts, offering realistic and dynamic user interactions.
Motion Analysis: When used in conjunction with other analytical nodes, it can aid in studying body posture and movement patterns, useful in sports science or medical fields.
Multi-Person Handling: The node can process data for multiple people from a single input, but you must supply identifiers via ID Include to restrict tracking to specific individuals.
Customizable Tracking: Users have control over the size of tracking boxes for each part, allowing for tailored outputs according to specific use cases or visual styles.
Integration-Friendly: With outputs formatted as tracking data and prompts, the node is designed to integrate smoothly with other nodes within the ComfyUI ecosystem, enabling complex workflows involving pose recognition and processing.
When integrating this node into your ComfyUI setup, make sure the input data is correctly formatted and that you have specified identifiers for the individuals you wish to track, so the node can map keypoints to the right people and produce the expected tracking results.