diff --git a/Content/website/index.html b/Content/website/index.html new file mode 100644 index 00000000..5c0ca53f --- /dev/null +++ b/Content/website/index.html @@ -0,0 +1,2449 @@ + + + + + + Pose2Sim Documentation + + + + + + + + + + +
+
+
+ + + + + +
+
+ + +
+
+ Step 0 +

Welcome to Pose2Sim

+
+
+

Pose2Sim provides a workflow for 3D markerless kinematics, as an alternative to traditional marker-based motion capture methods.

+ + + +
+ Pose2Sim Workflow +
+ + +
+ Various Activities +

Other more or less challenging tasks and conditions

+
+ +
+
+
📹
+

Multiple Cameras

+

Use phones, webcams, or GoPros - any combination works

+
+
+
🎯
+

Research-Grade Accuracy

+

Validated accuracy with low-cost hardware

+
+
+
👥
+

Multi-Person Support

+

Track multiple people simultaneously

+
+
+
🤸
+

Full 3D Kinematics

+

Complete OpenSim skeletal analysis with joint angles

+
+
+ +
+

⚠️ Important Note

+

Please set undistort_points and handle_LR_swap to false for now, since they currently lead to inaccuracies. This will be fixed soon.

+
+ +
+

🎬 Perfect For

+
    +
  • Sports Analysis: Field-based 3D motion capture
  • +
  • Clinical Assessment: Gait analysis in the doctor's office
  • +
  • Animation: Outdoor 3D capture with fully clothed subjects
  • +
  • Research: Biomechanics studies with multiple participants
  • +
+
+ +
+

⚠️ Key Requirements

+
    +
  • Multiple cameras: Minimum 2 cameras (4+ recommended)
  • +
  • Camera calibration: Cameras must be calibrated
  • +
  • Synchronization: Cameras should be synchronized (or sync in post)
  • +
  • Single camera? Use Sports2D for 2D analysis
  • +
+
+ +
+

📦 Version History

+
    +
  • v0.10 (09/2024): OpenSim integration in pipeline
  • +
  • v0.9 (07/2024): Integrated pose estimation
  • +
  • v0.8 (04/2024): New synchronization tool
  • +
  • v0.7 (03/2024): Multi-person analysis
  • +
  • v0.6 (02/2024): Marker augmentation & Blender visualizer
  • +
  • v0.5 (12/2023): Automatic batch processing
  • +
+
+ +
+

✅ What You'll Learn

+

This comprehensive guide will take you through:

+
    +
  • Complete installation and setup
  • +
  • Running demos (single & multi-person)
  • +
  • Setting up your own projects
  • +
  • 2D pose estimation from videos
  • +
  • Camera calibration techniques
  • +
  • Multi-camera synchronization
  • +
  • 3D triangulation and filtering
  • +
  • OpenSim kinematic analysis
  • +
  • Performance optimization
  • +
+
+
+
+ + +
+
+ Step 1 +

Complete Installation

+
+
+

Full installation with OpenSim support for complete 3D kinematic analysis.

+ +
+

1. Install Anaconda or Miniconda

+

Anaconda creates isolated environments for different projects, preventing package conflicts and ensuring reproducibility.

+

Download Miniconda (recommended - lightweight version)

+

Once installed, open an Anaconda prompt and create a virtual environment:

+
+ conda create -n Pose2Sim python=3.9 -y +conda activate Pose2Sim +
+
+ +
+

2. Install OpenSim

+

OpenSim provides a biomechanical modeling toolkit for accurate skeletal analysis with physical constraints:

+
+ conda install -c opensim-org opensim -y +
+

Alternative methods: OpenSim documentation

+
+ +
+

3. Install Pose2Sim

+ +

Option A: Quick Install (Recommended)

+
+ pip install pose2sim +
+ +

Option B: Install from Source

+

For developers who want the latest unreleased features:

+
+ git clone --depth 1 https://github.com/perfanalytics/pose2sim.git +cd pose2sim +pip install . +
+
+ +
+

4. Optional: GPU Acceleration

+

GPU support dramatically speeds up pose estimation (3-5x faster) but requires about 6 GB of additional disk space.

+ +
+

Check GPU Compatibility:

+
+ nvidia-smi +
+

Note the CUDA version: this is the latest version your driver supports

+
+ +
+

Install PyTorch with CUDA:

+

Visit PyTorch website and install compatible version. For example:

+
+ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 +
+

Adjust cu124 based on your CUDA version

+
+ +
+

Install ONNX Runtime GPU:

+
+ pip install onnxruntime-gpu +
+
+ +
+

Verify Installation:

+
+ python -c "import torch; import onnxruntime as ort; print(torch.cuda.is_available(), ort.get_available_providers())" +
+

Should print: True ['CUDAExecutionProvider', ...]

+
+
+ +
+

✅ Installation Complete!

+

Pose2Sim is now ready. Remember to activate your environment before use:

+
+ conda activate Pose2Sim +
+
+ +
+

💾 Storage Requirements

+
    +
  • Minimal install: ~3 GB (without GPU, minimal models)
  • +
  • Standard install: ~4.75 GB (without GPU)
  • +
  • Full install with GPU: ~10.75 GB
  • +
+

You can save 1.3 GB by uninstalling TensorFlow if you skip marker augmentation: pip uninstall tensorflow

+
+
+
+ + +
+
+ Step 2 +

Single Person Demo

+
+
+

Test your installation with a demo of a person balancing on a beam, filmed with 4 calibrated cameras.

+ +
+

1. Locate Demo Folder

+

Find where Pose2Sim is installed:

+
+ pip show pose2sim +
+

Copy the location path and navigate to demo:

+
+ cd <path>\Pose2Sim\Demo_SinglePerson +
+
+ +
+

2. Run Complete Workflow

+

Launch Python and execute the full pipeline:

+
+ ipython +
+
+ from Pose2Sim import Pose2Sim + +Pose2Sim.calibration() +Pose2Sim.poseEstimation() +Pose2Sim.synchronization() +Pose2Sim.personAssociation() +Pose2Sim.triangulation() +Pose2Sim.filtering() +Pose2Sim.markerAugmentation() +Pose2Sim.kinematics() +
+ +
+

💡 Quick Tip

+

Run all steps at once with:

+
+ Pose2Sim.runAll() +
+
+
+ +
+

3. Understanding the Synchronization GUI

+

When the synchronization GUI appears, select a keypoint showing clear vertical motion for best results.

+

The GUI helps you choose which keypoint to use for camera synchronization based on vertical speed.

+
+ +
+

📁 Output Files Created

+
    +
  • pose-3d/*.trc: 3D marker coordinates for each trial
  • +
  • kinematics/*.mot: 3D joint angles over time
  • +
  • kinematics/*.osim: Scaled OpenSim models
  • +
  • logs.txt: Processing details and statistics
  • +
+
+ +
+

4. Visualize Results

+ +

Option A: OpenSim GUI

+
    +
  1. Download OpenSim GUI
  2. +
  3. File → Open Model: Load scaled model from kinematics folder
  4. +
  5. File → Load Motion: Load .mot file from kinematics folder
  6. +
  7. File → Preview Experimental Data: Load .trc file to see 3D markers
  8. +
+ + +
+ OpenSim Visualization +
+ +

Option B: Blender (More Visual)

+

Install the Pose2Sim_Blender add-on for beautiful 3D visualization with camera overlay and animation capabilities.

+
+ +
+

⚙️ Configuration

+

Default parameters are in Config.toml - all parameters are documented. Feel free to experiment!

+ +
+ +
+

📝 Important Notes

+
    +
  • Marker Augmentation: Doesn't always improve results.
  • +
  • Save space: If you skip marker augmentation, uninstall tensorflow to save 1.3 GB: pip uninstall tensorflow
  • +
+
+
+
+ + +
+
+ Step 3 +

Multi-Person Demo

+
+
+

Discover how Pose2Sim tracks multiple people simultaneously - a hidden person appears when multi-person analysis is activated!

+ +
+

1. Navigate to Multi-Person Demo

+
+ cd <path>\Pose2Sim\Demo_MultiPerson +
+
+ +
+

2. Verify Configuration

+

Ensure multi_person = true is set in your Config.toml file.

+
+ +
+

3. Run Multi-Person Workflow

+
+ ipython +
+
+ from Pose2Sim import Pose2Sim + +Pose2Sim.calibration() +Pose2Sim.poseEstimation() +Pose2Sim.synchronization() +Pose2Sim.personAssociation() +Pose2Sim.triangulation() +Pose2Sim.filtering() +Pose2Sim.markerAugmentation() +Pose2Sim.kinematics() +
+ +

Or simply:

+
+ Pose2Sim.runAll() +
+
+ +
+

📊 Multi-Person Output

+

Pose2Sim generates separate files for each detected person:

+
    +
  • pose-3d/: One .trc file per participant
  • +
  • kinematics/: One scaled .osim model per participant
  • +
  • kinematics/: One .mot angle file per participant
  • +
+
+ +
+

4. How Multi-Person Tracking Works

+

Pose2Sim uses sophisticated algorithms to:

+
    +
  • Associate persons across views: Matches people across different camera angles using epipolar geometry
  • +
  • Track over time: Maintains consistent IDs by analyzing movement speed and displacement
  • +
  • Handle occlusions: Robust to temporary occlusions or people entering/leaving frame
  • +
+
+ +
+

⚠️ Important Configuration

+

When using marker augmentation and kinematics with multiple people, ensure the order matches:

+
    +
  • markerAugmentation > participant_height values
  • +
  • participant_mass values
  • +
+

Must correspond to person IDs in the same order!

+
+ # Example in Config.toml +participant_height = [1.72, 1.65] # Person 0, Person 1 +participant_mass = [70, 65] # Person 0, Person 1 +
+
+ +
+

💡 Visualization Tips

+

Use Blender visualization (as explained in Step 2) to see both people simultaneously in 3D space with their respective skeletons!

+
+ +
+

✅ Multi-Person Success!

+

You've now mastered both single and multi-person analysis. Ready to process your own data!

+
+
+
+ + +
+
+ Step 4 +

Batch Processing Demo

+
+
+

Process multiple trials with different parameters efficiently using Pose2Sim's batch processing structure.

+ +
+

1. Navigate to Batch Demo

+
+ cd <path>\Pose2Sim\Demo_Batch +
+
+ +
+

2. Understanding Batch Structure

+

Batch processing uses a hierarchical configuration system:

+
+ BatchSession/ +├── Config.toml # Global parameters +├── Calibration/ +├── Trial_1/ +│ ├── Config.toml # Trial-specific overrides +│ └── videos/ +├── Trial_2/ +│ ├── Config.toml # Different parameters for this trial +│ └── videos/ +└── Trial_3/ + ├── Config.toml + └── videos/ +
+ +
+

🔧 How It Works

+
    +
  • Global Config: BatchSession/Config.toml sets defaults for all trials
  • +
  • Trial Overrides: Each Trial/Config.toml can override specific parameters
  • +
  • Inheritance: Uncommented keys in trial configs override global settings (see the merge sketch below)
  • +
+
+
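To make the inheritance concrete, the sketch below shows how a trial configuration could be merged over the session defaults. This is an illustration of the override behavior under stated assumptions (the third-party toml package and a one-level-deep, section-wise merge), not Pose2Sim's actual loading code.

```python
import toml  # third-party 'toml' package, assumed for this sketch

def load_trial_config(session_config_path, trial_config_path):
    """Illustrative merge: keys set in the trial Config.toml override
    the session-level defaults, section by section."""
    config = toml.load(session_config_path)
    trial = toml.load(trial_config_path)
    for section, values in trial.items():
        if isinstance(values, dict):
            config.setdefault(section, {}).update(values)
        else:
            config[section] = values
    return config

merged = load_trial_config('BatchSession/Config.toml',
                           'BatchSession/Trial_2/Config.toml')
print(merged['pose']['mode'])  # trial override wins if set, else the global value
```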
+ +
+

3. Run Batch Processing

+
+ ipython +
+
+ from Pose2Sim import Pose2Sim + +# Run from BatchSession folder to process all trials +Pose2Sim.runAll() + +# Or run from specific Trial folder to process only that trial +
+
+ +
+

4. Experiment with Parameters

+

Try modifying Trial_2/Config.toml:

+ +
+

Example: Different Time Range

+

Uncomment and set in Trial_2/Config.toml:

+
+ [project] +frame_range = [10, 99] # Process only frames 10-99 +
+
+ +
+

Example: Lightweight Mode

+

Uncomment and set in Trial_2/Config.toml:

+
+ [pose] +mode = 'lightweight' # Faster pose estimation +
+
+
+ +
+

📊 Batch Processing Benefits

+
    +
  • Consistency: Same calibration and global settings across trials
  • +
  • Flexibility: Trial-specific customization when needed
  • +
  • Efficiency: Process entire sessions with one command
  • +
  • Experimentation: Compare different parameter sets easily
  • +
+
+ +
+

✅ Batch Processing Mastered!

+

You now understand how to efficiently process multiple trials with varying parameters. This structure scales from research studies to production pipelines!

+
+
+
+ + +
+
+ Step 5 +

Setting Up Your Project

+
+
+

Organize your own data for analysis using Pose2Sim's structured project format.

+ +
+

1. Find Your Pose2Sim Installation

+
+ pip show pose2sim +
+

Note the location path and navigate to it:

+
+ cd <path>\pose2sim +
+
+ +
+

2. Copy Template Folder

+

Copy the appropriate demo folder as your project template:

+
    +
  • Demo_SinglePerson: For single person analysis
  • +
  • Demo_MultiPerson: For multiple people
  • +
  • Demo_Batch: For batch processing multiple trials
  • +
+

Copy it to your preferred location and rename as desired.

+
+ +
+

3. Project Structure

+

Your project should follow this structure:

+
+ MyProject/ +├── Config.toml # Main configuration +├── Calibration/ +│ ├── intrinsics/ +│ │ ├── cam01/ # Videos/images for intrinsic calibration +│ │ ├── cam02/ +│ │ └── ... +│ └── extrinsics/ +│ ├── cam01/ # Videos/images for extrinsic calibration +│ ├── cam02/ +│ └── ... +├── videos/ +│ ├── cam01.mp4 # Your capture videos +│ ├── cam02.mp4 +│ └── ... +├── pose/ # Created automatically - 2D poses +├── pose-3d/ # Created automatically - 3D coordinates +└── kinematics/ # Created automatically - OpenSim results +
+
+ +
+

4. Edit Configuration

+

Open Config.toml and customize key parameters:

+ +
+

Project Settings:

+
+ [project] +project_dir = 'path/to/MyProject' # Absolute path to your project +frame_range = [] # Empty for all frames, or [start, end] +multi_person = false # true for multiple people +
+
+ +
+

Participant Info:

+
+ [markerAugmentation] +participant_height = [1.72] # Height in meters +participant_mass = [70] # Mass in kg +
+
+
+ +
+

5. Add Your Videos

+
    +
  • Place all camera videos in the videos/ folder
  • +
  • Name them clearly (e.g., cam01.mp4, cam02.mp4, etc.)
  • +
  • Ensure all videos capture the same action
  • +
  • Videos don't need to be perfectly synchronized (we'll sync in post)
  • +
+
+ +
+

⚠️ Important Tips

+
    +
  • Camera placement: Position cameras to minimize occlusions
  • +
  • Coverage: Ensure the capture volume is covered by at least 2 cameras at all times
  • +
  • Lighting: Consistent lighting helps pose estimation
  • +
  • Background: Uncluttered backgrounds improve accuracy
  • +
+
+ +
+

✅ Project Ready!

+

Your project is now set up. Continue to the next steps to calibrate cameras and process your data!

+
+
+
+ + +
+
+ Step 6 +

2D Pose Estimation

+
+
+

Detect 2D keypoints from your videos using RTMPose or other pose estimation models.

+ + +
+ 2D Pose Estimation +
+ +
+

Run Pose Estimation

+

Navigate to your project folder and run:

+
+ ipython +
+
+ from Pose2Sim import Pose2Sim +Pose2Sim.poseEstimation() +
+

This will process all videos in your videos/ folder and save 2D keypoints to pose/.

+
+ +
+

Pose Models Available

+

Configure in Config.toml under [pose]:

+ +
+ + + + + + + + + + + + + + + + + + + + + +
ModelBest ForSpeed
body_with_feetGeneral body tracking (default)Balanced
whole_bodyBody + hands + faceSlower
whole_body_wristDetailed wrist motionSlower
+
+
+ +
+

Performance Modes

+

Choose speed vs accuracy trade-off:

+
+ [pose] +mode = 'balanced' # Options: 'lightweight', 'balanced', 'performance' +
+
    +
  • lightweight: Fastest, slightly less accurate
  • +
  • balanced: Good speed/accuracy balance (default)
  • +
  • performance: Most accurate, slower
  • +
+
+ +
+

Advanced: Custom Models

+

Use any RTMLib-compatible model with custom dictionary syntax:

+
+ mode = """{'det_class':'YOLOX', + 'det_model':'https://download.openmmlab.com/mmpose/.../yolox_m.zip', + 'det_input_size':[640, 640], + 'pose_class':'RTMPose', + 'pose_model':'https://download.openmmlab.com/mmpose/.../rtmpose-m.zip', + 'pose_input_size':[192,256]}""" +
+ +
+

💡 Other Pose Solutions

+
    +
  • DeepLabCut: For custom-trained models (animals, specific points)
  • +
  • OpenPose: Legacy support (BODY_25B recommended if using)
  • +
  • AlphaPose: Alternative to OpenPose
  • +
  • BlazePose: Fast inference, single person only
  • +
+
+
+ +
+

Detection Frequency Optimization

+

Speed up pose estimation by detecting people less frequently:

+
+ [pose] +det_frequency = 4 # Detect people every 4 frames, track in between +
+

Person detection is slow; tracking between frames is fast. This can provide a 5x speedup! However, it might impact accuracy.

+
+ +
+

📊 Output Format

+

2D poses are saved as JSON files in the pose/ folder (a loading sketch follows the list):

+
    +
  • One JSON file per frame per camera
  • +
  • Contains keypoint coordinates and confidence scores
  • +
  • OpenPose-compatible format
  • +
+
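Because the files are OpenPose-compatible, they are easy to load for custom analyses. A minimal sketch; the file name below is a hypothetical example, since actual names depend on your camera and video names:

```python
import json
import numpy as np

# Hypothetical file name: one JSON file is written per frame per camera.
with open('pose/cam01_json/cam01_000000000000_keypoints.json') as f:
    frame = json.load(f)

for person in frame['people']:
    # OpenPose format stores a flat list [x1, y1, c1, x2, y2, c2, ...]
    kpts = np.array(person['pose_keypoints_2d']).reshape(-1, 3)
    print(f'{len(kpts)} keypoints, mean confidence {kpts[:, 2].mean():.2f}')
```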
+ +
+

✅ Pose Estimation Complete!

+

Your 2D keypoints are ready. Next step: calibrate your cameras to enable 3D triangulation!

+
+
+
+ + +
+
+ Step 7 +

Camera Calibration

+
+
+

Calibrate your cameras to determine their intrinsic properties (lens characteristics) and extrinsic parameters (position and orientation in space).

+ + +
+ Intrinsic Calibration +

Intrinsic calibration with checkerboard

+ Extrinsic Calibration +

Extrinsic calibration

+
+ +
+

Run Calibration

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.calibration() +
+
+ +
+

Method 1: Convert Existing Calibration

+

If you already have a calibration file from another system:

+ +
+

Set in Config.toml:

+
+ [calibration] +calibration_type = 'convert' +convert_from = 'qualisys' # Options: qualisys, optitrack, vicon, opencap, + # easymocap, biocv, caliscope, anipose, freemocap +
+
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
SystemFile FormatNotes
Qualisys.qca.txtExport from QTM
Vicon.xcpDirect copy
OpenCap.pickleMultiple files
Caliscope.tomlNative format
+
+
+ +
+

Method 2: Calculate from Scratch

+

Calculate calibration using checkerboard or scene measurements.

+ +
+

Set in Config.toml:

+
+ [calibration] +calibration_type = 'calculate' +
+
+ +
+

Step 1: Intrinsic Calibration

+

Intrinsic parameters are camera-specific properties (focal length, distortion); they usually only need to be calculated once per camera. A conceptual OpenCV sketch follows the numbered steps.

+ +
    +
  1. Create folder for each camera in Calibration/intrinsics/
  2. +
  3. Film a checkerboard with each camera (board OR camera can move)
  4. +
  5. Configure checkerboard parameters in Config.toml: +
    + [calibration.intrinsics] +overwrite_intrinsics = true +show_detection_intrinsics = true +intrinsics_corners_nb = [9, 6] # Internal corners (one less than visible) +intrinsics_square_size = 60 # Square size in mm +
    +
  6. +
+ +
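Conceptually, this step follows the standard OpenCV checkerboard procedure. The sketch below illustrates the idea (the folder path and image format are assumptions); it is not Pose2Sim's actual calibration code.

```python
import glob
import cv2
import numpy as np

corners_nb = (9, 6)   # internal corners, matching Config.toml
square_size = 60.0    # mm

# 3D checkerboard points for one view, on the Z = 0 plane
objp = np.zeros((corners_nb[0] * corners_nb[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:corners_nb[0], 0:corners_nb[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob('Calibration/intrinsics/cam01/*.png'):  # assumed layout
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, corners_nb, None)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f'RMS reprojection error: {rms:.3f} px')  # aim for < 0.5 px
```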
+

📋 Checkerboard Requirements

+
    +
  • Flat: Board must be completely flat
  • +
  • Asymmetric: Rows ≠ columns, ideally with one count odd and the other even
  • +
  • Border: Wide white border around pattern
  • +
  • Focus: Sharp, in-focus images
  • +
  • Coverage: Film from multiple angles covering most of frame
  • +
  • No glare: Avoid reflections
  • +
+

Generate checkerboard at calib.io

+
+ +
+

✅ Target Error

+

Intrinsic calibration error should be below 0.5 pixels

+
+
+ +
+

Step 2: Extrinsic Calibration

+

Extrinsic parameters are camera positions and orientations in space; they must be recalculated whenever cameras move. (A conceptual sketch of the 'scene' method follows the steps below.)

+ +
    +
  1. Create folder for each camera in Calibration/extrinsics/
  2. +
  3. Film either: +
      +
    • Checkerboard: Place on ground, visible to all cameras
    • +
    • Scene measurements: Measure 10+ point coordinates in 3D space
    • +
    +
  4. +
  5. Configure in Config.toml: +
    + [calibration.extrinsics] +extrinsics_method = 'board' # or 'scene' +show_detection_extrinsics = true +extrinsics_corners_nb = [10, 7] +extrinsics_square_size = 60 + +# If using 'scene' method: +# object_coords_3d = [[0,0,0], [1,0,0], ...] # Measured 3D coordinates +
    +
  6. +
+ +
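For the 'scene' method, the underlying computation is a perspective-n-point (PnP) solve: given measured 3D coordinates and their pixel locations in one camera, recover that camera's pose. A conceptual sketch; all numeric values below are hypothetical placeholders:

```python
import cv2
import numpy as np

# Hypothetical measured scene points (meters) and their clicked pixel positions.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 0.5], [1, 0, 0.5]], dtype=np.float32)
image_points = np.array([[320, 420], [560, 430], [580, 260], [330, 250],
                         [318, 330], [555, 340]], dtype=np.float32)

# Intrinsics from the previous step (placeholder values here).
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix; tvec is the camera translation
print(ok, tvec.ravel())
```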
+

💡 Scene Measurement Tips

+
    +
  • Use tiles, wall lines, boxes, or treadmill dimensions
  • +
  • Spread points throughout capture volume
  • +
  • More points = better accuracy
  • +
  • Can temporarily add then remove objects for calibration
  • +
+
+ +
+

✅ Target Error

+

Extrinsic calibration error should be below 1 cm (acceptable up to 2.5 cm depending on application)

+
+
+
+ +
+

Output: Calibration File

+

Calibration creates Calib.toml in your Calibration folder containing:

+
    +
  • Camera matrix (intrinsics) for each camera
  • +
  • Distortion coefficients
  • +
  • Rotation and translation (extrinsics)
  • +
  • Calibration errors
  • +
+ + +
+ Calibration File +
+
+ +
+

✅ Calibration Complete!

+

Your cameras are now calibrated and ready for 3D triangulation!

+
+
+
+ + +
+
+ Step 8 +

Camera Synchronization

+
+
+

Synchronize your cameras by finding the optimal time offset based on keypoint movement correlation.

+ + +
+ Synchronization +
+ +
+

⚠️ Skip This Step If...

+

Your cameras are natively synchronized (hardware sync, genlock, or timecode).

+
+ +
+

Run Synchronization

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.synchronization() +
+
+ +
+

How It Works

+

The algorithm, sketched in code after this list:

+
    +
  1. Computes vertical speed of chosen keypoint(s) in each camera
  2. +
  3. Finds time offset that maximizes correlation between cameras
  4. +
  5. Applies offset to align all cameras to reference camera
  6. +
+
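A minimal sketch of the offset search as a normalized cross-correlation of vertical-speed signals. This illustrates the principle only, not Pose2Sim's actual implementation:

```python
import numpy as np

def time_offset(speed_ref, speed_cam, fps):
    """Offset (in seconds) between two cameras' vertical-speed signals,
    estimated as the lag that maximizes their cross-correlation."""
    s_ref = (speed_ref - speed_ref.mean()) / speed_ref.std()
    s_cam = (speed_cam - speed_cam.mean()) / speed_cam.std()
    corr = np.correlate(s_ref, s_cam, mode='full')
    lag_frames = np.argmax(corr) - (len(s_cam) - 1)  # frame lag between signals
    return lag_frames / fps
```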
+ +
+

Synchronization GUI

+

Enable interactive GUI for better control:

+
+ [synchronization] +synchronization_gui = true +
+ +

The GUI allows you to:

+
    +
  • Select which keypoint to use (e.g., RWrist, LAnkle)
  • +
  • Choose reference person (in multi-person scenes)
  • +
  • Adjust time window for analysis
  • +
  • Visualize correlation plots
  • +
+
+ +
+

Configuration Options

+
+ [synchronization] +reset_sync = false # Start from scratch or refine existing +frames_range = [] # Limit analysis to specific frames +display_corr = true # Show correlation plots +keypoints_to_consider = ['RWrist', 'LWrist'] # Keypoints for sync +approx_time_maxspeed = 'auto' # Or specify time of max speed +
+
+ +
+

📊 Best Results When...

+
    +
  • Person performs clear vertical movement (jump, wave, etc.)
  • +
  • Capture lasts 5+ seconds (enough data)
  • + +
+
+ +
+

Alternative Sync Methods (Not included in Pose2Sim)

+

If keypoint-based sync doesn't work well:

+ +
+

Manual Sync Markers:

+
    +
  • Flashlight: Flash visible to all cameras
  • +
  • Clap: Sync with audio (if available)
  • +
  • Clear event: Ball drop, jump, etc.
  • +
+
+ +
+

Hardware Solutions:

+
    +
  • GoPro timecode: Built-in sync feature
  • +
  • GPS sync: For outdoor captures (GoPro)
  • +
  • GoPro app: Sync via app (slightly less reliable)
  • +
+
+
+ +
+

✅ Cameras Synchronized!

+

Your videos are now time-aligned and ready for multi-view triangulation!

+
+
+
+ + +
+
+ Step 9 +

Person Association

+
+
+

Associate the same person across different camera views and track them over time.

+ +
+

⚠️ Skip This Step If...

+

Only one person is visible in your capture.

+
+ +
+

Run Person Association

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.personAssociation() +
+
+ +
+

How It Works

+ +
+

Single Person Mode (multi_person = false):

+

Automatically selects the person with the smallest reprojection error (best 3D reconstruction).

+
+ +
+

Multi-Person Mode (multi_person = true):

+
    +
  1. Cross-view association: Uses epipolar geometry to match people across camera views
  2. +
  3. Temporal tracking: Tracks people across frames using displacement speed
  4. +
  5. Consistent IDs: Maintains identity even with brief occlusions
  6. +
+
+
+ +
+

Association Method

+

The algorithm finds the best person associations as follows (a minimal sketch appears after this list):

+
    +
  • Triangulating all possible person combinations
  • +
  • Reprojecting 3D points back to image planes
  • +
  • Computing epipolar line distances
  • +
  • Choosing combination with minimum geometric error
  • +
+
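In spirit, this is a brute-force search over cross-view pairings. The sketch below illustrates the idea; `triangulate` and `reproject` are assumed helper functions supplied by the caller, not Pose2Sim API:

```python
import itertools
import numpy as np

def best_association(persons_2d_per_cam, proj_mats, triangulate, reproject):
    """persons_2d_per_cam: one list of candidate 2D skeletons per camera.
    Tries every combination (one person per camera) and keeps the pairing
    with the smallest mean reprojection error."""
    best_combo, best_err = None, np.inf
    for combo in itertools.product(*persons_2d_per_cam):
        X = triangulate(proj_mats, combo)            # 3D keypoints for this pairing
        err = np.mean([np.linalg.norm(reproject(P, X) - p2d)
                       for P, p2d in zip(proj_mats, combo)])
        if err < best_err:
            best_combo, best_err = combo, err
    return best_combo, best_err
```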
+ +
+

Configuration Parameters

+
+ [personAssociation] +likelihood_threshold_association = 0.3 +reproj_error_threshold_association = 20 # pixels +min_cameras_for_triangulation = 2 +
+ +
+

💡 Parameter Tuning

+
    +
  • Increase thresholds if people are frequently lost
  • +
  • Decrease thresholds if wrong person associations occur
  • +
  • Monitor console output for association success rates
  • +
+
+
+ +
+

Handling Occlusions

+

Pose2Sim is robust to:

+
    +
  • Temporary loss of person in some camera views
  • +
  • People entering/leaving the capture volume
  • +
  • Brief full occlusions (person behind object)
  • +
+

If the reprojection error is too high, cameras are progressively removed until the threshold is met.

+
+ +
+

📊 Check Results

+

Review console output showing:

+
    +
  • Number of people detected per frame
  • +
  • Association success rate
  • +
  • Average reprojection errors
  • +
  • Cameras excluded per frame
  • +
+

If results aren't satisfying, adjust constraints in Config.toml.

+
+ +
+

✅ Person Association Complete!

+

People are now correctly identified across views and time. Ready for 3D triangulation!

+
+
+
+ + +
+
+ Step 10 +

3D Triangulation

+
+
+

Convert 2D keypoints from multiple views into robust 3D coordinates using weighted triangulation.

+ +
+

Run Triangulation

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.triangulation() +
+
+ +
+

How It Works

+

The robust triangulation process (step 1 is sketched in code after this list):

+
    +
  1. Weighted triangulation: Each 2D point weighted by detection confidence
  2. +
  3. Likelihood filtering: Only points above confidence threshold used
  4. +
  5. Reprojection check: Verify 3D point quality by reprojecting to cameras
  6. +
  7. Error-based refinement: +
      +
    • If error high: swap left/right sides and retry
    • +
    • Still high: progressively remove cameras until error acceptable
    • +
    • Too few cameras: skip frame, interpolate later
    • +
    +
  8. +
  9. Interpolation: Fill missing values with cubic spline interpolation (the default; other interpolation kinds can be configured)
  10. +
+
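Step 1, confidence-weighted triangulation, can be written as a weighted linear least-squares (DLT) problem. A minimal sketch for a single keypoint, as an illustration rather than Pose2Sim's exact code:

```python
import numpy as np

def weighted_triangulation(proj_mats, points_2d, likelihoods):
    """proj_mats: 3x4 projection matrices, one per camera.
    points_2d: (u, v) detections; likelihoods: confidences used as weights."""
    A = []
    for P, (u, v), w in zip(proj_mats, points_2d, likelihoods):
        A.append(w * (u * P[2] - P[0]))   # each camera contributes two
        A.append(w * (v * P[2] - P[1]))   # weighted linear constraints
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]                   # homogeneous -> 3D coordinates
```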
+ +
+

Configuration Parameters

+
+ [triangulation] +reproj_error_threshold_triangulation = 15 # pixels +likelihood_threshold_triangulation = 0.3 +min_cameras_for_triangulation = 2 +interpolation_kind = 'cubic' # cubic, linear, slinear, quadratic +interp_if_gap_smaller_than = 10 # frames +show_interp_indices = true # Show which frames were interpolated +handle_LR_swap = false # KEEP FALSE - Correct left/right swaps (buggy) +undistort_points = false # KEEP FALSE - Undistort before triangulation (buggy) +make_c3d = false # Also save as .c3d format +
+
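The interpolation step and the interp_if_gap_smaller_than limit can be illustrated with SciPy. A simplified sketch for one coordinate track (cubic interpolation needs at least four valid samples); not Pose2Sim's actual implementation:

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_small_gaps(track, kind='cubic', max_gap=10):
    """Fill NaN gaps of at most max_gap frames; longer gaps stay NaN."""
    t = np.arange(len(track))
    valid = ~np.isnan(track)
    f = interp1d(t[valid], track[valid], kind=kind, bounds_error=False)
    out = track.copy()
    i = 0
    while i < len(track):
        if np.isnan(track[i]):
            j = i
            while j < len(track) and np.isnan(track[j]):
                j += 1                     # find the end of this gap
            if j - i <= max_gap:
                out[i:j] = f(t[i:j])       # fill short gaps only
            i = j
        else:
            i += 1
    return out
```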
+ +
+

📊 Output Information

+

Triangulation provides detailed statistics:

+
    +
  • Mean reprojection error per keypoint (mm and px)
  • +
  • Cameras excluded on average per keypoint
  • +
  • Frames interpolated for each keypoint
  • +
  • Least reliable cameras identification
  • +
+
+ +
+

Visualize Results

+

Check your .trc file in OpenSim:

+
+ # In OpenSim GUI: +File → Preview Experimental Data → Open .trc file +
+

Look for smooth, realistic trajectories. Jumps or jitter indicate issues.

+
+ +
+

Troubleshooting

+
+ + + + + + + + + + + + + + + + + + + + + +
IssueSolution
High reprojection errorsIncrease reproj_error_threshold_triangulation
Missing keypointsDecrease likelihood_threshold_triangulation
Jittery motionIncrease min_cameras_for_triangulation
Left/right swapsKeep handle_LR_swap = false (currently buggy)
+
+
+ +
+

⚠️ Important Notes

+
    +
  • Undistortion: Currently causes inaccuracies, keep undistort_points = false
  • +
  • LR Swap: Currently causes issues, keep handle_LR_swap = false
  • +
  • Interpolation limit: Large gaps (>10 frames) won't be interpolated by default
  • +
  • Quality check: Always visualize .trc in OpenSim before proceeding
  • +
+
+ +
+

✅ Triangulation Complete!

+

Your 3D coordinates are ready! Next step: filter the data for smoother motion.

+
+
+
+ + +
+
+ Step 11 +

3D Filtering

+
+
+

Smooth your 3D coordinates to remove noise while preserving natural motion characteristics.

+ + +
+ Filter Plot +
+ +
+

Run Filtering

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.filtering() +
+

Filtered .trc files are saved with _filt suffix.

+
+ +
+

Available Filter Types

+
+ [filtering] +type = 'butterworth' # Choose filter type +
+ +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FilterBest ForKey Parameters
butterworthGeneral motion (default)order, cut_off_frequency
kalmanNoisy data with gapstrust_ratio, smooth
butterworth_on_speedPreserve sharp movementsorder, cut_off_frequency
gaussianSimple smoothingsigma_kernel
loessLocal polynomial smoothingnb_values_used
medianRemove outlierskernel_size
+
+
+ +
+

Butterworth Filter (Recommended)

+

Most commonly used for motion capture data:

+
+ [filtering] +type = 'butterworth' +butterworth_order = 4 # Filter order (2-4 typical) +butterworth_cut_off_frequency = 6 # Hz - adjust based on motion speed +
+ +
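Applying such a filter with SciPy takes only a few lines. The sketch below mirrors the parameters above; it shows the standard approach, not necessarily Pose2Sim's internal code:

```python
from scipy.signal import butter, filtfilt

def butterworth_lowpass(coords, fs, cutoff=6, order=4):
    """coords: (n_frames, n_channels) array of marker coordinates.
    fs: capture frame rate in Hz. filtfilt runs the filter forward
    and backward, giving zero phase lag."""
    b, a = butter(order, cutoff / (0.5 * fs), btype='low')
    return filtfilt(b, a, coords, axis=0)
```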
+

💡 Cutoff Frequency Guide

+
    +
  • 3-6 Hz: Walking, slow movements
  • +
  • 6-10 Hz: Running, moderate speed
  • +
  • 10-15 Hz: Fast movements, sports
  • +
  • 15+ Hz: Very fast, impulsive motions
  • +
+
+
+ +
+

Kalman Filter

+

Excellent for noisy data with missing values:

+
+ [filtering] +type = 'kalman' +kalman_trust_ratio = 500 # Measurement trust vs process model +kalman_smooth = true # Apply smoothing pass +
+

Higher trust_ratio = trust measurements more; lower = trust motion model more

+
+ +
+

Display Results

+

Enable visualization to compare before/after filtering:

+
+ [filtering] +display_figures = true # Show plots comparing raw vs filtered +
+

Plots show each keypoint's trajectory in X, Y, Z coordinates.

+
+ +
+

Evaluate Filter Quality

+
    +
  1. Visual inspection: Check plots for smooth but realistic motion
  2. +
  3. OpenSim preview: Load filtered .trc in OpenSim +
    + File → Preview Experimental Data +
    +
  4. +
  5. Motion validation: Ensure filter doesn't remove real motion features
  6. +
+
+ +
+

⚠️ Filtering Cautions

+
    +
  • Over-filtering: Too aggressive = removes real motion details
  • +
  • Under-filtering: Insufficient smoothing = noise remains
  • +
  • Cutoff frequency: Adjust based on motion speed - no one-size-fits-all
  • +
  • Can skip: Filtering is optional if data quality is already good
  • +
+
+ +
+

✅ Filtering Complete!

+

Your 3D data is now smoothed and ready for marker augmentation or kinematics!

+
+
+
+ + +
+
+ Step 12 +

Marker Augmentation (Optional)

+
+
+

Use Stanford's LSTM model to estimate 47 virtual marker positions, potentially improving inverse kinematics results.

+ +
+

⚠️ Important: Not Always Better

+

Marker augmentation doesn't necessarily improve results. It's most beneficial when using fewer than 4 cameras.

+

Recommendation: Run IK with and without augmentation, compare results.

+
+ +
+

Run Marker Augmentation

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.markerAugmentation() +
+

Creates augmented .trc files with _LSTM suffix.

+
+ +
+

How It Works

+

An LSTM neural network trained on marker-based motion capture data:

+
    +
  1. Takes your detected keypoints as input
  2. +
  3. Predicts positions of 47 virtual markers
  4. +
  5. Outputs more stable but potentially less accurate motion
  6. +
+

Trade-off: More stability vs less precision

+
+ +
+

Configuration Requirements

+
+ [markerAugmentation] +participant_height = [1.72] # Required - height in meters +participant_mass = [70] # Optional - mass in kg (for kinetics only) +make_c3d = false # Also save as .c3d format +
+ +
+

⚠️ Multi-Person Projects

+

Order must match person IDs:

+
+ participant_height = [1.72, 1.65, 1.80] # Person 0, 1, 2 +participant_mass = [70, 65, 85] # Same order! +
+
+
+ +
+

Required Keypoints

+

Marker augmentation requires at least the following keypoints (so a basic COCO model, for example, won't work):

+
+ ["Neck", "RShoulder", "LShoulder", "RHip", "LHip", "RKnee", "LKnee", + "RAnkle", "LAnkle", "RHeel", "LHeel", "RSmallToe", "LSmallToe", + "RBigToe", "LBigToe", "RElbow", "LElbow", "RWrist", "LWrist"] +
+
+ +
+

Limitations

+
    +
  • No NaN values: Interpolation must fill all gaps before augmentation
  • +
  • Standing pose required: Model trained on standing/walking motions
  • +
  • Not suitable for: Sitting, crouching, lying down poses
  • +
  • Disk space: Requires TensorFlow (~1.3 GB)
  • +
+
+ +
+

💾 Save Disk Space

+

If you skip marker augmentation, uninstall TensorFlow:

+
+ pip uninstall tensorflow +
+

Saves ~1.3 GB of storage.

+
+ +
+

When to Use

+
+ + + + + + + + + + + + + + + + + + + + + +
Use WhenSkip When
Using 2-3 camerasUsing 4+ cameras
Noisy keypoint detectionClean keypoint detection
Standing/walking motionsSitting/crouching/lying
Need more stabilityNeed maximum precision
+
+
+ +
+

✅ Decision Point

+

You now have both regular and augmented .trc files. Compare both in the next step (Kinematics) to see which works better for your data!

+
+
+
+ + +
+
+ Step 13 +

OpenSim Kinematics

+
+
+

Scale an OpenSim skeletal model to your participant and compute biomechanically accurate 3D joint angles using inverse kinematics.

+ +
+

Run Kinematics

+
+ from Pose2Sim import Pose2Sim +Pose2Sim.kinematics() +
+
+ +
+

Automatic vs Manual

+ +

Automatic (Recommended - Fully Integrated)

+

Pose2Sim performs scaling and IK automatically with no static trial needed:

+
    +
  1. Intelligent scaling: Uses frames where person is standing upright
  2. +
  3. Outlier removal: Removes fastest 10%, stationary frames, crouching frames
  4. +
  5. Robust averaging: Mean of best segment measurements
  6. +
  7. Automatic IK: Runs inverse kinematics on all frames
  8. +
+ +

Manual (OpenSim GUI)

+

For specific trials or fine-tuned control, use OpenSim GUI:

+
    +
  1. Open OpenSim GUI
  2. +
  3. Load model from Pose2Sim/OpenSim_Setup/
  4. +
  5. Tools → Scale Model → Load scaling setup .xml
  6. +
  7. Tools → Inverse Kinematics → Load IK setup .xml
  8. +
  9. Run and save results
  10. +
+
+ +
+

Configuration Options

+
+ [opensim] +use_augmentation = false # Use LSTM-augmented markers or not +use_contacts_muscles = false # Include muscles and contact spheres +right_left_symmetry = true # Enforce bilateral symmetry +remove_scaling_setup = false # Keep scaling files for inspection +remove_ik_setup = false # Keep IK files for inspection + +# Model selection +use_simple_model = false # Simple model (10x faster, stiff spine) + +# Participant info +participant_height = [1.72] # meters - must match marker augmentation +participant_mass = [70] # kg - affects kinetics, not kinematics +
+
+ +
+

Simple vs Full Model

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureSimple ModelFull Model
Speed~0.7s per trial~9s per trial
SpineStiff/rigidFlexible
ShouldersBall jointAnatomical constraints
MusclesNoneFull muscle set
Best ForGait, running, basic motionComplex motion, research
+
+
+ +
+

Scaling Strategy

+

Pose2Sim intelligently selects frames for scaling by removing:

+
    +
  • 10% fastest frames: Potential detection outliers
  • +
  • Zero-speed frames: Person likely out of frame
  • +
  • Crouching frames: Hip/knee flexion > 45° (less accurate)
  • +
  • 20% extreme values: After above filtering
  • +
+

The remaining frames are averaged to obtain robust segment lengths.

+ +
+ # Adjust these in Config.toml if needed +[opensim.scaling] +fastest_frames_to_remove_percent = 10 +large_hip_knee_angles = 45 +trimmed_extrema_percent = 20 +
+
+ +
+

Output Files

+

Created in kinematics/ folder:

+
    +
  • *_scaled.osim: Scaled OpenSim model for each person
  • +
  • *.mot: Joint angles over time (open with Excel or OpenSim)
  • +
  • *_scaling.xml: Scaling setup (if not removed)
  • +
  • *_ik.xml: IK setup (if not removed)
  • +
+
+ +
+

Visualize Results

+

In OpenSim GUI:

+
    +
  1. File → Open Model: Load *_scaled.osim
  2. +
  3. File → Load Motion: Load *.mot file
  4. +
  5. Play animation to verify realistic motion
  6. +
+ +

Or use Blender add-on for better visualization!

+
+ +
+

⚠️ When Automatic Scaling May Fail

+

Automatic scaling works best for standing/walking. Use manual scaling for:

+
    +
  • Mostly sitting or crouching trials
  • +
  • Unusual body positions throughout
  • +
  • Extreme motions (gymnastics, dancing)
  • +
+

In these cases, capture a separate standing trial for scaling.

+
+ +
+

🚀 Further Analysis

+

With scaled model and joint angles, you can proceed to:

+
    +
  • Inverse Dynamics: Compute joint torques
  • +
  • Muscle Analysis: Estimate muscle forces
  • +
  • Moco: Trajectory optimization and prediction
  • +
  • Ground Reaction Forces: With contact spheres
  • +
+
+ +
+

✅ Kinematics Complete!

+

You now have biomechanically accurate 3D joint angles! Your complete 3D motion capture workflow is finished!

+
+
+
+ + +
+
+ Step 14 +

All Parameters Reference

+
+
+

Complete reference of all configuration parameters in Config.toml.

+ +
+

📁 Project Settings

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
project_dirAbsolute path to project foldercurrent directory
frame_range[start, end] or [] for all[]
frame_rateVideo frame rate (auto-detected)auto
multi_personTrack multiple peoplefalse
+
+
+ +
+

🎯 Pose Estimation

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
pose_modelbody_with_feet, whole_body, whole_body_wrist, CUSTOMbody_with_feet
modelightweight, balanced, performance (or custom dict)balanced
det_frequencyRun person detection every N frames1
tracking_modesports2d, deepsort, nonesports2d
display_detectionShow real-time detectiontrue
save_video'to_video', 'to_images', 'none'to_video
output_formatopenpose, mmpose, deeplabcutopenpose
+
+
+ +
+

📐 Calibration

+
+ + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
calibration_typeconvert, calculateconvert
convert_fromqualisys, optitrack, vicon, opencap, etc.qualisys
binning_factorFor Qualisys if filming in 540p1
+
+ +

Intrinsic Calibration

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
overwrite_intrinsicsRecalculate or use existingfalse
intrinsics_corners_nb[rows, cols] internal corners[9, 6]
intrinsics_square_sizeSquare size in mm60
show_detection_intrinsicsDisplay corner detectiontrue
+
+ +

Extrinsic Calibration

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
extrinsics_methodboard, scene, keypointsboard
extrinsics_corners_nb[rows, cols] for board method[10, 7]
extrinsics_square_sizeSquare size in mm60
show_detection_extrinsicsDisplay detection/pointstrue
object_coords_3dFor scene method: measured points[]
+
+
+ +
+

🔄 Synchronization

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
synchronization_guiUse interactive GUItrue
reset_syncStart fresh or refine existingfalse
frames_range[start, end] for sync analysis[]
display_corrShow correlation plotstrue
keypoints_to_considerList of keypoints for sync['RWrist']
approx_time_maxspeedTime of max speed or 'auto'auto
+
+
+ +
+

👥 Person Association

+
+ + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
likelihood_threshold_associationMin confidence for association0.3
reproj_error_threshold_associationMax reprojection error (pixels)20
min_cameras_for_triangulationMinimum cameras needed2
+
+
+ +
+

📐 Triangulation

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
reproj_error_threshold_triangulationMax reprojection error (pixels)15
likelihood_threshold_triangulationMin keypoint confidence0.3
min_cameras_for_triangulationMinimum cameras required2
interpolation_kindcubic, linear, slinear, quadraticcubic
interp_if_gap_smaller_thanMax gap size for interpolation (frames)10
show_interp_indicesDisplay interpolated framestrue
handle_LR_swapCorrect left/right swaps (KEEP FALSE)false
undistort_pointsUndistort before triangulation (KEEP FALSE)false
make_c3dAlso save as .c3d formatfalse
+
+
+ +
+

🔄 Filtering

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
typebutterworth, kalman, gaussian, loess, medianbutterworth
display_figuresShow before/after plotstrue
butterworth_orderFilter order4
butterworth_cut_off_frequencyCutoff frequency (Hz)6
kalman_trust_ratioMeasurement vs process trust500
kalman_smoothApply smoothing passtrue
gaussian_sigma_kernelGaussian kernel size5
loess_nb_values_usedNumber of values for LOESS30
median_kernel_sizeMedian filter kernel5
+
+
+ +
+

🎯 Marker Augmentation

+
+ + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
participant_heightHeight in meters (list for multi-person)[1.72]
participant_massMass in kg (list for multi-person)[70]
make_c3dSave as .c3d formatfalse
+
+
+ +
+

🤸 OpenSim Kinematics

+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ParameterDescriptionDefault
use_augmentationUse LSTM-augmented markersfalse
use_simple_modelSimple model (10x faster)false
use_contacts_musclesInclude muscles and contact spheresfalse
right_left_symmetryEnforce bilateral symmetrytrue
remove_scaling_setupDelete scaling .xml afterfalse
remove_ik_setupDelete IK .xml afterfalse
+
+
+ +
+

📖 Full Documentation

+

For complete details and examples, see the Pose2Sim GitHub repository.

+
+
+
+ + +
+
+ Step 15 +

Performance Optimization

+
+
+

Speed up processing for large projects and batch operations.

+ +
+

1. Calibration - Run Once

+

If cameras don't move between sessions:

+
    +
  • Run Pose2Sim.calibration() only once
  • +
  • Copy Calib.toml to new project folders
  • +
  • Skip calibration step entirely
  • +
+
+

💡 Time Saved

+

Calibration can take 5-15 minutes. Reusing it saves this time every session!

+
+
+ +
+

2. Pose Estimation Optimization

+ +
+

Use GPU (Biggest Speedup)

+

GPU acceleration provides 3-10x speedup. See Step 1 for installation.

+
+

⚡ Speed Comparison

+

Processing 4 camera videos (500 frames each):

+
    +
  • CPU: ~150 seconds
  • +
  • GPU: ~30 seconds
  • +
+
+
+ +
+

Reduce Detection Frequency

+

Huge speedup with minimal accuracy loss:

+
+ [pose] +det_frequency = 100 # Detect people every 100 frames instead of every frame +
+

Result: 150s → 30s (5x faster)

+
+ +
+

Use Lightweight Mode

+
+ [pose] +mode = 'lightweight' # Faster model, slightly less accurate +
+

Result: 30s → 20s (1.5x faster)

+
+ +
+

Disable Real-Time Display

+
+ [pose] +display_detection = false # Don't show video during processing +
+

Result: 20s → 15s (1.3x faster)

+
+ +
+

Skip Video Saving

+
+ [pose] +save_video = 'none' # Don't save annotated videos +
+

Result: 15s → 9s (1.7x faster)

+
+ +
+

Use Sports2D Tracker

+
+ [pose] +tracking_mode = 'sports2d' # Faster than deepsort for simple scenes +
+
+ +
+

✅ Cumulative Speedup

+

Combining all optimizations:

+

150s → 9s (17x faster!)

+
+
+ +
+

3. Skip Unnecessary Steps

+
+ + + + + + + + + + + + + + + + + + + + + + + + + +
StepSkip If...
CalibrationCameras haven't moved
SynchronizationCameras natively synchronized
Person AssociationOnly one person in scene
FilteringData already clean
Marker AugmentationUsing 4+ cameras, not helpful
+
+
+ +
+

4. OpenSim Optimization

+
+ [opensim] +use_simple_model = true # 10x faster than full model +
+

Result: 9s → 0.7s per trial

+

The simple model is accurate enough for most gait analysis.

+
+ +
+

5. Batch Processing Structure

+

Efficient organization for processing multiple trials:

+
    +
  • Single calibration for entire batch
  • +
  • Global parameters in session-level Config.toml
  • +
  • Trial-specific overrides only when needed
  • +
  • Run from session level to process all trials
  • +
+
+ +
+

6. Frame Range Limitation

+

Process only relevant portions:

+
+ [project] +frame_range = [100, 500] # Only process frames 100-500 +
+

Especially useful for long captures where action is brief.

+
+ +
+

Maximum Speed Configuration

+

For fastest processing (batch operations, prototyping):

+
+ [pose] +mode = 'lightweight' +det_frequency = 100 +display_detection = false +save_video = 'none' +tracking_mode = 'sports2d' + +[filtering] +display_figures = false + +[opensim] +use_simple_model = true +use_augmentation = false +
+
+ +
+

💡 Performance Tips Summary

+
    +
  • GPU: Single biggest speedup (3-10x)
  • +
  • Detection frequency: Set to 50-100 for 5x speedup
  • +
  • Lightweight mode: Minimal accuracy loss for 1.5x speedup
  • +
  • Skip displays/videos: Another 2-3x cumulative
  • +
  • Simple OpenSim model: 10x faster for IK
  • +
  • Skip unnecessary steps: Don't run what you don't need
  • +
+
+ +
+

⚠️ Speed vs Accuracy Trade-offs

+

Some optimizations reduce accuracy:

+
    +
  • Lightweight mode: Slightly less accurate pose detection
  • +
  • High det_frequency: May miss fast movements
  • +
  • Simple OpenSim model: Less anatomically detailed
  • +
+

Recommendation: Use full accuracy for final analysis, optimized settings for testing/development.

+
+ +
+

✅ Optimization Complete!

+

You now know how to process data efficiently for any scale - from single trials to large research studies!

+
+
+
+ +
+ + + + + + +
+ + + + \ No newline at end of file diff --git a/Content/website/script.js b/Content/website/script.js new file mode 100644 index 00000000..d3ef45d0 --- /dev/null +++ b/Content/website/script.js @@ -0,0 +1,164 @@ +// Global variables +let currentStep = 0; +const totalSteps = 16; // FIXED: Changed from 11 to 16 (steps 0-15) +let currentLanguage = 'en'; +let viewAllMode = false; + +// Initialize on page load +document.addEventListener('DOMContentLoaded', function() { + initializeNavigation(); + updateNavButtons(); + updateActiveNavItem(); +}); + +// Navigation functions +function goToStep(stepNumber) { + if (viewAllMode) { + toggleViewAll(); // Exit view all mode + } + + currentStep = stepNumber; + showStep(currentStep); + updateNavButtons(); + updateActiveNavItem(); + scrollToTop(); +} + +function nextStep() { + if (viewAllMode) { + toggleViewAll(); + return; + } + + if (currentStep < totalSteps - 1) { + currentStep++; + showStep(currentStep); + updateNavButtons(); + updateActiveNavItem(); + scrollToTop(); + } +} + +function previousStep() { + if (viewAllMode) { + toggleViewAll(); + return; + } + + if (currentStep > 0) { + currentStep--; + showStep(currentStep); + updateNavButtons(); + updateActiveNavItem(); + scrollToTop(); + } +} + +function showStep(stepNumber) { + // Hide all steps + document.querySelectorAll('.step').forEach(step => { + step.classList.remove('active'); + }); + + // Show current step + const currentStepElement = document.getElementById(`step-${stepNumber}`); + if (currentStepElement) { + currentStepElement.classList.add('active'); + } +} + +function updateNavButtons() { + const prevBtn = document.getElementById('prevBtn'); + const nextBtn = document.getElementById('nextBtn'); + + if (viewAllMode) { + prevBtn.style.display = 'none'; + nextBtn.querySelector('span').textContent = 'Exit View All'; + return; + } + + // Show/hide previous button + prevBtn.style.display = currentStep === 0 ? 
'none' : 'inline-flex'; + + // Update next button text + if (currentStep === totalSteps - 1) { + nextBtn.querySelector('span').textContent = 'Finish ✓'; + } else { + nextBtn.querySelector('span').textContent = 'Next →'; + } +} + +function updateActiveNavItem() { + document.querySelectorAll('.nav-item').forEach(item => { + item.classList.remove('active'); + }); + + const activeItem = document.querySelector(`a[href="#step-${currentStep}"]`); + if (activeItem) { + activeItem.classList.add('active'); + } +} + +function toggleViewAll() { + viewAllMode = !viewAllMode; + + const steps = document.querySelectorAll('.step'); + const viewAllBtn = document.querySelector('.view-all-btn'); + + if (viewAllMode) { + // Show all steps + steps.forEach(step => { + step.classList.add('view-all-mode'); + step.classList.add('active'); + }); + + viewAllBtn.querySelector('span').textContent = 'Back to Step View'; + + document.querySelector('.nav-buttons').style.display = 'flex'; + } else { + // Return to single step view + steps.forEach(step => { + step.classList.remove('view-all-mode'); + step.classList.remove('active'); + }); + + showStep(currentStep); + + viewAllBtn.querySelector('span').textContent = 'View All Steps'; + } + + updateNavButtons(); + scrollToTop(); +} + +function initializeNavigation() { + // Add click handlers to nav items + document.querySelectorAll('.nav-item').forEach((item, index) => { + item.addEventListener('click', (e) => { + e.preventDefault(); + goToStep(index); + }); + }); +} + +function scrollToTop() { + window.scrollTo({ + top: 0, + behavior: 'smooth' + }); +} + +// Keyboard navigation +document.addEventListener('keydown', function(e) { + if (viewAllMode) return; + + if (e.key === 'ArrowLeft' || e.key === 'ArrowUp') { + if (currentStep > 0) { + previousStep(); + } + } else if (e.key === 'ArrowRight' || e.key === 'ArrowDown') { + if (currentStep < totalSteps - 1) { + nextStep(); + } + } +}); \ No newline at end of file diff --git a/Content/website/style.css b/Content/website/style.css new file mode 100644 index 00000000..99de2c90 --- /dev/null +++ b/Content/website/style.css @@ -0,0 +1,642 @@ +:root { + --primary-color: #2563eb; + --primary-dark: #1e40af; + --secondary-color: #64748b; + --success-color: #10b981; + --warning-color: #f59e0b; + --danger-color: #ef4444; + --bg-primary: #ffffff; + --bg-secondary: #f8fafc; + --bg-tertiary: #f1f5f9; + --text-primary: #0f172a; + --text-secondary: #475569; + --text-tertiary: #94a3b8; + --border-color: #e2e8f0; + --shadow-sm: 0 1px 2px 0 rgba(0, 0, 0, 0.05); + --shadow-md: 0 4px 6px -1px rgba(0, 0, 0, 0.1); + --shadow-lg: 0 10px 15px -3px rgba(0, 0, 0, 0.1); + --sidebar-width: 280px; +} + +* { + margin: 0; + padding: 0; + box-sizing: border-box; +} + +body { + font-family: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif; + color: var(--text-primary); + background-color: var(--bg-secondary); + line-height: 1.6; +} + +/* Google Translate Widget */ +.translate-widget { + position: fixed; + top: 20px; + right: 20px; + z-index: 1000; + background: var(--bg-primary); + border-radius: 8px; + box-shadow: var(--shadow-md); + padding: 8px 12px; +} + +#google_translate_element { + display: inline-block; +} + +/* Hide Google Translate banner */ +.goog-te-banner-frame.skiptranslate { + display: none !important; +} + +body { + top: 0 !important; +} + +/* Style Google Translate select */ +.goog-te-gadget { + font-family: 'Inter', sans-serif !important; + font-size: 14px !important; + color: var(--text-primary) !important; +} + 
+.goog-te-gadget-simple { + background-color: transparent !important; + border: none !important; + padding: 0 !important; + font-size: 14px !important; +} + +.goog-te-gadget-icon { + display: none !important; +} + +.goog-te-menu-value span { + color: var(--text-primary) !important; +} + +/* Sidebar */ +.sidebar { + position: fixed; + left: 0; + top: 0; + width: var(--sidebar-width); + height: 100vh; + background: var(--bg-primary); + border-right: 1px solid var(--border-color); + overflow-y: auto; + padding: 30px 0; + z-index: 100; +} + +.logo { + padding: 0 30px 30px; + border-bottom: 1px solid var(--border-color); + margin-bottom: 20px; +} + +.logo h2 { + font-size: 24px; + font-weight: 700; + color: var(--primary-color); + margin-bottom: 5px; +} + +.logo p { + font-size: 14px; + color: var(--text-secondary); +} + +.nav-menu { + padding: 0 15px; +} + +.nav-item { + display: flex; + align-items: center; + gap: 12px; + padding: 12px 15px; + color: var(--text-secondary); + text-decoration: none; + border-radius: 8px; + margin-bottom: 4px; + transition: all 0.2s; + font-size: 14px; + font-weight: 500; +} + +.nav-item:hover { + background: var(--bg-tertiary); + color: var(--text-primary); +} + +.nav-item.active { + background: var(--primary-color); + color: white; +} + +.step-num { + display: inline-flex; + align-items: center; + justify-content: center; + width: 32px; + height: 32px; + background: var(--bg-tertiary); + border-radius: 6px; + font-size: 12px; + font-weight: 600; + flex-shrink: 0; +} + +.nav-item.active .step-num { + background: rgba(255, 255, 255, 0.2); + color: white; +} + +.view-all-btn { + margin: 20px 15px 0; + width: calc(100% - 30px); + padding: 12px; + background: var(--bg-tertiary); + border: 1px solid var(--border-color); + border-radius: 8px; + color: var(--text-primary); + font-weight: 500; + cursor: pointer; + transition: all 0.2s; + font-size: 14px; +} + +.view-all-btn:hover { + background: var(--bg-secondary); + border-color: var(--primary-color); + color: var(--primary-color); +} + +/* Main Content */ +.content { + margin-left: var(--sidebar-width); + padding: 40px 60px 80px; + max-width: 1200px; +} + +.step-container { + background: var(--bg-primary); + border-radius: 12px; + box-shadow: var(--shadow-sm); + overflow: hidden; +} + +.step { + display: none; + animation: fadeIn 0.4s ease-in-out; +} + +.step.active { + display: block; +} + +.step.view-all-mode { + display: block; + margin-bottom: 40px; + border-bottom: 2px solid var(--border-color); +} + +@keyframes fadeIn { + from { + opacity: 0; + transform: translateY(10px); + } + to { + opacity: 1; + transform: translateY(0); + } +} + +.step-header { + background: linear-gradient(135deg, var(--primary-color) 0%, var(--primary-dark) 100%); + color: white; + padding: 40px 50px; +} + +.step-badge { + display: inline-block; + background: rgba(255, 255, 255, 0.2); + padding: 6px 14px; + border-radius: 20px; + font-size: 12px; + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.5px; + margin-bottom: 15px; +} + +.step-header h1 { + font-size: 32px; + font-weight: 700; + margin: 0; +} + +.step-content { + padding: 50px; +} + +/* Typography */ +.lead { + font-size: 18px; + color: var(--text-secondary); + margin-bottom: 30px; + line-height: 1.7; +} + +.small-note { + font-size: 14px; + color: var(--text-tertiary); + font-style: italic; + margin-top: 8px; + display: block; +} + +h3 { + font-size: 20px; + font-weight: 600; + color: var(--text-primary); + margin: 30px 0 15px; +} + +h4 { + font-size: 16px; + 
font-weight: 600; + color: var(--text-primary); + margin: 20px 0 10px; +} + +p { + margin-bottom: 15px; + color: var(--text-secondary); +} + +/* Video Container */ +.video-container { + margin: 30px 0; + border-radius: 12px; + overflow: hidden; + box-shadow: var(--shadow-lg); + background: #000; +} + +.video-container video { + width: 100%; + height: auto; + display: block; +} + +/* Feature Grid */ +.feature-grid { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(250px, 1fr)); + gap: 20px; + margin: 30px 0; +} + +.feature-card { + background: var(--bg-secondary); + padding: 25px; + border-radius: 10px; + border: 1px solid var(--border-color); + transition: all 0.3s; +} + +.feature-card:hover { + transform: translateY(-2px); + box-shadow: var(--shadow-md); + border-color: var(--primary-color); +} + +.feature-icon { + font-size: 32px; + margin-bottom: 15px; +} + +.feature-card h3 { + font-size: 18px; + margin: 0 0 10px; +} + +.feature-card p { + font-size: 14px; + margin: 0; + color: var(--text-secondary); +} + +/* Instruction Steps */ +.instruction-step { + background: var(--bg-secondary); + border-left: 4px solid var(--primary-color); + padding: 25px; + border-radius: 8px; + margin: 25px 0; +} + +.substep { + background: white; + border-left: 3px solid var(--secondary-color); + padding: 20px; + border-radius: 6px; + margin: 15px 0; +} + +/* Code Blocks */ +.code-block { + background: #1e293b; + color: #e2e8f0; + padding: 20px; + border-radius: 8px; + margin: 15px 0; + overflow-x: auto; + font-family: 'Monaco', 'Menlo', 'Courier New', monospace; + font-size: 14px; + line-height: 1.6; + position: relative; +} + +.code-block code { + display: block; + white-space: pre-wrap; + word-break: break-word; +} + +/* Info Boxes */ +.info-box { + background: #eff6ff; + border: 1px solid #bfdbfe; + border-left: 4px solid var(--primary-color); + padding: 20px; + border-radius: 8px; + margin: 20px 0; +} + +.info-box h4 { + color: var(--primary-color); + margin-top: 0; +} + +.success-box { + background: #f0fdf4; + border: 1px solid #bbf7d0; + border-left: 4px solid var(--success-color); + padding: 20px; + border-radius: 8px; + margin: 20px 0; +} + +.success-box h4 { + color: var(--success-color); + margin-top: 0; +} + +.warning-box { + background: #fffbeb; + border: 1px solid #fed7aa; + border-left: 4px solid var(--warning-color); + padding: 20px; + border-radius: 8px; + margin: 20px 0; +} + +.warning-box h4 { + color: var(--warning-color); + margin-top: 0; +} + +/* Tables */ +.params-table { + margin: 20px 0; + overflow-x: auto; +} + +.params-table table { + width: 100%; + border-collapse: collapse; + background: var(--bg-primary); + border-radius: 8px; + overflow: hidden; + border: 1px solid var(--border-color); +} + +.params-table th { + background: var(--bg-tertiary); + padding: 12px 15px; + text-align: left; + font-weight: 600; + font-size: 14px; + color: var(--text-primary); + border-bottom: 2px solid var(--border-color); +} + +.params-table td { + padding: 12px 15px; + border-bottom: 1px solid var(--border-color); + font-size: 14px; +} + +.params-table tr:last-child td { + border-bottom: none; +} + +.params-table tr:hover { + background: var(--bg-secondary); +} + +.params-table code { + background: var(--bg-tertiary); + padding: 2px 6px; + border-radius: 4px; + font-size: 13px; + color: var(--primary-color); +} + +/* Params Section */ +.params-section { + margin: 40px 0; +} + +.params-section > h3 { + font-size: 22px; + margin-bottom: 20px; + padding-bottom: 10px; + border-bottom: 
2px solid var(--border-color); +} + +/* Lists */ +ul, ol { + padding-left: 25px; + margin: 15px 0; +} + +li { + margin: 8px 0; + color: var(--text-secondary); +} + +/* Links */ +a { + color: var(--primary-color); + text-decoration: none; + transition: all 0.2s; +} + +a:hover { + color: var(--primary-dark); + text-decoration: underline; +} + +/* Navigation Buttons */ +.nav-buttons { + display: flex; + justify-content: space-between; + margin-top: 40px; + padding: 0 50px 50px; +} + +.btn { + padding: 14px 28px; + border: none; + border-radius: 8px; + font-weight: 600; + font-size: 15px; + cursor: pointer; + transition: all 0.2s; + display: inline-flex; + align-items: center; + gap: 8px; +} + +.btn-primary { + background: var(--primary-color); + color: white; +} + +.btn-primary:hover { + background: var(--primary-dark); + transform: translateY(-1px); + box-shadow: var(--shadow-md); +} + +.btn-secondary { + background: var(--bg-tertiary); + color: var(--text-primary); + border: 1px solid var(--border-color); +} + +.btn-secondary:hover { + background: var(--bg-secondary); + border-color: var(--secondary-color); +} + +/* Footer */ +.footer { + margin-top: 60px; + padding: 30px 50px; + background: var(--bg-tertiary); + border-top: 1px solid var(--border-color); + text-align: center; +} + +.footer-content p { + margin: 8px 0; + font-size: 14px; + color: var(--text-secondary); +} + +/* Responsive Design */ +@media (max-width: 1024px) { + .sidebar { + width: 250px; + } + + .content { + margin-left: 250px; + padding: 30px 40px; + } +} + +@media (max-width: 768px) { + .sidebar { + transform: translateX(-100%); + transition: transform 0.3s; + width: 280px; + } + + .sidebar.mobile-open { + transform: translateX(0); + } + + .content { + margin-left: 0; + padding: 20px; + } + + .step-header { + padding: 30px 25px; + } + + .step-header h1 { + font-size: 24px; + } + + .step-content { + padding: 30px 25px; + } + + .nav-buttons { + padding: 0 25px 30px; + } + + .feature-grid { + grid-template-columns: 1fr; + } + + .translate-widget { + top: 10px; + right: 10px; + font-size: 12px; + } +} + +/* Scrollbar Styling */ +.sidebar::-webkit-scrollbar { + width: 6px; +} + +.sidebar::-webkit-scrollbar-track { + background: transparent; +} + +.sidebar::-webkit-scrollbar-thumb { + background: var(--border-color); + border-radius: 3px; +} + +.sidebar::-webkit-scrollbar-thumb:hover { + background: var(--secondary-color); +} + +/* Print Styles */ +@media print { + .sidebar, .nav-buttons, .translate-widget, .footer { + display: none; + } + + .content { + margin-left: 0; + } + + .step { + display: block !important; + page-break-after: always; + } +} \ No newline at end of file diff --git a/GUI/app.py b/GUI/app.py new file mode 100644 index 00000000..bf418408 --- /dev/null +++ b/GUI/app.py @@ -0,0 +1,472 @@ +import os +from pathlib import Path +import customtkinter as ctk +from PIL import Image + +# Import language manager +from GUI.language_manager import LanguageManager + +# Import tabs +from GUI.tabs.welcome_tab import WelcomeTab +from GUI.tabs.calibration_tab import CalibrationTab +from GUI.tabs.prepare_video_tab import PrepareVideoTab +from GUI.tabs.pose_model_tab import PoseModelTab +from GUI.tabs.synchronization_tab import SynchronizationTab +from GUI.tabs.activation_tab import ActivationTab +from GUI.tabs.advanced_tab import AdvancedTab +from GUI.tabs.batch_tab import BatchTab +from GUI.tabs.visualization_tab import VisualizationTab +from GUI.tabs.tutorial_tab import TutorialTab +from GUI.tabs.about_tab import AboutTab 
+ + +# Import config generator +from GUI.config_generator import ConfigGenerator + +class Pose2SimApp: + def __init__(self, root): + self.root = root + + # Initialize language manager + self.lang_manager = LanguageManager() + + self.root.title(self.lang_manager.get_text('app_title')) + + # Get screen dimensions + screen_width = self.root.winfo_screenwidth() + screen_height = self.root.winfo_screenheight() + + # Set window size + self.window_width = 1300 + self.window_height = 800 + + # Calculate position for center of screen + x = (screen_width - self.window_width) // 2 + y = (screen_height - self.window_height) // 2 + + # Set window size and position + self.root.geometry(f"{self.window_width}x{self.window_height}+{x}+{y}") + + # Initialize variables + self.language = None # Will be 'en' or 'fr' + self.analysis_mode = None # Will be '2d' or '3d' + self.process_mode = None # Will be 'single' or 'batch' + self.participant_name = None + self.num_trials = 0 # For batch mode + + # Check tutorial status + self.tutorial_marker = os.path.join(os.path.dirname(os.path.abspath(__file__)), "tutorial_completed") + self.tutorial_completed = os.path.exists(self.tutorial_marker) + + self.top_frame = ctk.CTkFrame(self.root, fg_color="transparent") + self.top_frame.pack(fill="x", padx=10, pady=5) + + # Configure light/dark mode selector in top-left corner + self.setup_darkmode_selector(self.top_frame) + + # Configure language selector in top-right corner + self.setup_language_selector(self.top_frame) + + # Create config generator + self.config_generator = ConfigGenerator() + + # Start with welcome screen for initial setup + self.welcome_tab = WelcomeTab(self.root, self) + + def setup_darkmode_selector(self, frame): + """Creates a light/dark mode switch in the top-left corner""" + self.darkmode_frame = ctk.CTkFrame(frame, fg_color="transparent") + self.darkmode_frame.pack(side="left") + + self.current_appearance_mode = ctk.get_appearance_mode().lower() + + darkmode_button = ctk.CTkButton( + self.darkmode_frame, + text="☀️⏾", + width=60, + command=lambda: self.change_darkmode(), + fg_color=("blue" if self.current_appearance_mode == "light" else "SkyBlue1") + ) + darkmode_button.pack(side="left", padx=2) + + def change_darkmode(self): + """Toggles between light and dark mode""" + if self.current_appearance_mode == "dark": + ctk.set_appearance_mode("light") + self.current_appearance_mode = "light" + else: + ctk.set_appearance_mode("dark") + self.current_appearance_mode = "dark" + + def setup_language_selector(self, frame): + """Creates a language selector in the top-right corner""" + self.lang_frame = ctk.CTkFrame(frame, fg_color="transparent") + self.lang_frame.pack(side="right") + + self.lang_var = ctk.StringVar(value="EN") + + self.en_button = ctk.CTkButton( + self.lang_frame, + text="EN", + width=40, + command=lambda: self.change_language("en"), + fg_color=("blue" if self.lang_var.get() == "EN" else "SkyBlue1") + ) + self.en_button.pack(side="left", padx=2) + + self.fr_button = ctk.CTkButton( + self.lang_frame, + text="FR", + width=40, + command=lambda: self.change_language("fr"), + fg_color=("blue" if self.lang_var.get() == "FR" else "SkyBlue1") + ) + self.fr_button.pack(side="left", padx=2) + + def change_language(self, lang_code): + """Changes the application language""" + if lang_code == "en": + self.lang_var.set("EN") + self.language = "en" + self.en_button.configure(fg_color="blue") + self.fr_button.configure(fg_color="SkyBlue1") + else: + self.lang_var.set("FR") + self.language = "fr" + 
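            # keep the language buttons' highlight in sync: solid blue marks the active language, SkyBlue1 the inactive one
+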
self.fr_button.configure(fg_color="blue") + self.en_button.configure(fg_color="SkyBlue1") + + # Update the LanguageManager + self.lang_manager.set_language(self.language) + + # Update all text elements if the main UI is already built + self.update_ui_language() + + def update_ui_language(self): + """Updates all UI text elements with the selected language""" + # Update window title + self.root.title(self.lang_manager.get_text('app_title')) + + self._update_widget_text(self.root) + + def _update_widget_text(self, widget): + """Recursively updates text for all CTkLabel and CTkButton widgets""" + if isinstance(widget, (ctk.CTkLabel, ctk.CTkButton)): + if hasattr(widget, 'translation_key'): + new_text = self.lang_manager.get_text(widget.translation_key) + widget.configure(text=new_text) + + # Recursively check all child widgets + try: + for child in widget.winfo_children(): + self._update_widget_text(child) + except AttributeError: + pass + + def start_configuration(self, analysis_mode, process_mode, participant_name, num_trials=0): + """Starts the configuration process after welcome screen""" + self.analysis_mode = analysis_mode + self.process_mode = process_mode + self.participant_name = participant_name + + if process_mode == 'batch': + self.num_trials = num_trials + + # Clear welcome screen + self.welcome_tab.clear() + + # Create folder structure based on analysis mode and process mode + self.create_folder_structure() + + # Set up the main interface with tabs + self.setup_main_interface() + + def create_folder_structure(self): + """Creates the folder structure based on analysis mode and process mode""" + if self.analysis_mode == '3d': + # 3D analysis needs the full folder structure + if self.process_mode == 'single': + # Create participant directory + participant_path = os.path.join(self.participant_name) + + # Create calibration and videos subdirectories + calibration_path = os.path.join(participant_path, 'calibration') + videos_path = os.path.join(participant_path, 'videos') + + # Create all directories + os.makedirs(calibration_path, exist_ok=True) + os.makedirs(videos_path, exist_ok=True) + else: + # Batch mode needs a parent directory with calibration folder + # and separate trial directories + participant_path = os.path.join(self.participant_name) + calibration_path = os.path.join(participant_path, 'calibration') + + os.makedirs(participant_path, exist_ok=True) + os.makedirs(calibration_path, exist_ok=True) + + for i in range(1, self.num_trials + 1): + trial_path = os.path.join(participant_path, f'Trial_{i}') + videos_path = os.path.join(trial_path, 'videos') + + os.makedirs(trial_path, exist_ok=True) + os.makedirs(videos_path, exist_ok=True) + else: + # 2D analysis just needs a single directory for the participant + participant_path = os.path.join(self.participant_name) + os.makedirs(participant_path, exist_ok=True) + + def setup_main_interface(self): + """Sets up the main interface with sidebar navigation and content area""" + # Clear any existing content + for widget in self.root.winfo_children(): + if widget != self.lang_frame: # Keep the language selector + widget.destroy() + + # Create main container frame + self.main_container = ctk.CTkFrame(self.root) + self.main_container.pack(fill='both', expand=True, padx=10, pady=10) + + # Create sidebar frame (left) + self.sidebar = ctk.CTkFrame(self.main_container, width=220) + self.sidebar.pack(side='left', fill='both', padx=5, pady=5) + self.sidebar.pack_propagate(False) # Prevent shrinking + + self.top_sidebar = 
ctk.CTkFrame(self.sidebar, fg_color="transparent")
+        self.top_sidebar.pack(fill="x", padx=10, pady=5)
+
+        # Configure light/dark mode selector in top-left corner
+        self.setup_darkmode_selector(self.top_sidebar)
+
+        # Configure language selector in top-right corner
+        self.setup_language_selector(self.top_sidebar)
+
+        # Add logo above the title (file name must match the case of the shipped asset
+        # GUI/assets/pose2sim_logo.png, otherwise loading fails on case-sensitive filesystems)
+        logo_path = Path(__file__).parent/"assets/pose2sim_logo.png"
+        self.top_image = Image.open(logo_path)
+        self.top_photo = ctk.CTkImage(light_image=self.top_image, dark_image=self.top_image, size=(120,120))
+        image_label = ctk.CTkLabel(self.sidebar, image=self.top_photo, text="")
+        image_label.pack(pady=(10, 0))
+
+        # App title in sidebar
+        app_title_frame = ctk.CTkFrame(self.sidebar, fg_color="transparent")
+        app_title_frame.pack(fill='x', pady=(10, 30))
+
+        ctk.CTkLabel(
+            app_title_frame,
+            text="Pose2Sim",
+            font=("Helvetica", 22, "bold")
+        ).pack()
+
+        mode_text = "2D Analysis" if self.analysis_mode == '2d' else "3D Analysis"
+        ctk.CTkLabel(
+            app_title_frame,
+            text=mode_text,
+            font=("Helvetica", 14)
+        ).pack()
+
+        # Create content area frame (right)
+        self.content_area = ctk.CTkFrame(self.main_container)
+        self.content_area.pack(side='left', fill='both', expand=True, padx=5, pady=5)
+
+        # Initialize progress tracking
+        self.setup_progress_bar()
+
+        # Initialize tabs dictionary
+        self.tabs = {}
+        self.tab_buttons = {}
+        self.tab_info = {}
+
+        # Create tabs based on analysis mode
+        if self.analysis_mode == '3d':
+            # 3D mode tabs
+            tab_classes = [
+                ('tutorial', TutorialTab, "Tutorial", "📚"),
+                ('calibration', CalibrationTab, "Calibration", "📏"),
+                ('prepare_video', PrepareVideoTab, "Prepare Video", "🎥"),
+                ('pose_model', PoseModelTab, "Pose Estimation", "👤"),
+                ('synchronization', SynchronizationTab, "Synchronization", "⏱️"),
+                ('advanced', AdvancedTab, "Advanced Settings", "⚙️"),
+                ('activation', ActivationTab, "Run Analysis", "▶️")
+            ]
+
+            if self.process_mode == 'batch':
+                tab_classes.append(('batch', BatchTab, "Batch Configuration", "📚"))
+
+            # Add visualization tab
+            tab_classes.append(('visualization', VisualizationTab, "Data Visualization", "📊"))
+
+            # Add about tab at the end
+            tab_classes.append(('about', AboutTab, "About Us", "ℹ️"))
+        else:
+            # 2D mode tabs (simplified)
+            tab_classes = [
+                ('tutorial', TutorialTab, "Tutorial", "📚"),
+                ('pose_model', PoseModelTab, "Pose Estimation", "👤"),
+                ('advanced', AdvancedTab, "Advanced Settings", "⚙️"),
+                ('activation', ActivationTab, "Run Analysis", "▶️"),
+                ('visualization', VisualizationTab, "Data Visualization", "📊"),
+                ('about', AboutTab, "About Us", "ℹ️")  # Add about tab at the end
+            ]
+
+        # Create tab instances and sidebar buttons
+        for i, (tab_id, tab_class, tab_title, tab_icon) in enumerate(tab_classes):
+            # Store tab class and metadata for lazy instantiation
+            self.tab_info[tab_id] = {
+                'class': tab_class,
+                'title': tab_title,
+                'icon': tab_icon,
+                'needs_simplified': tab_id in ['pose_model', 'advanced', 'activation']
+            }
+
+            # Create sidebar button for this tab
+            button = ctk.CTkButton(
+                self.sidebar,
+                text=f"{tab_icon} {tab_title}",
+                anchor="w",
+                fg_color="transparent",
+                text_color=("gray10", "gray90"),
+                hover_color=("gray70", "gray30"),
+                command=lambda t=tab_id: self.show_tab(t),
+                height=40,
+                width=200
+            )
+            button.pack(pady=5, padx=10)
+            self.tab_buttons[tab_id] = button
+
+        # # Determine first tab to show
+        # if not self.tutorial_completed and 'tutorial' in self.tab_info:
+        #     first_tab_id = 'tutorial'
+        # else:
+        #     first_tab_id = list(self.tab_info.keys())[0]
+        #     
if first_tab_id == 'tutorial': # Skip tutorial if completed + # first_tab_id = list(self.tab_info.keys())[1] if len(self.tab_info) > 1 else first_tab_id + + # Show first tab (this will trigger lazy instantiation) + first_tab_id = 'tutorial' + self.show_tab(first_tab_id) + + def show_tab(self, tab_id): + """Show the selected tab and hide others""" + if tab_id not in self.tabs: + # Optional: Show loading indicator for heavy tabs + if tab_id in ['visualization', 'tutorial']: + loading_label = ctk.CTkLabel( + self.content_area, + text=f"Loading {self.tab_info[tab_id]['title']}...", + font=("Helvetica", 14) + ) + loading_label.place(relx=0.5, rely=0.5, anchor="center") + self.root.update() # Force UI to show loading message + + # Instantiate the tab + tab_class = self.tab_info[tab_id]['class'] + + if self.tab_info[tab_id]['needs_simplified']: + self.tabs[tab_id] = tab_class( + self.content_area, + self, + simplified=(self.analysis_mode == '2d') + ) + else: + self.tabs[tab_id] = tab_class(self.content_area, self) + + # Remove loading indicator if it was shown + if tab_id in ['visualization', 'tutorial']: + loading_label.destroy() + + # Hide all tab frames + for tid, tab in self.tabs.items(): + tab.frame.pack_forget() + + # Reset button colors + self.tab_buttons[tid].configure( + fg_color="transparent", + text_color=("gray10", "gray90") + ) + + # Show selected tab frame + self.tabs[tab_id].frame.pack(fill='both', expand=True) + + # Highlight selected tab button + self.tab_buttons[tab_id].configure( + fg_color=("#3a7ebf", "#1f538d"), + text_color=("white", "white") + ) + + def setup_progress_bar(self): + """Create and configure the progress bar at the bottom of the window.""" + # Create a frame for the progress bar + self.progress_frame = ctk.CTkFrame(self.root, height=50) + self.progress_frame.pack(side="bottom", fill="x", padx=10, pady=5) + + # Progress label + self.progress_label = ctk.CTkLabel( + self.progress_frame, + text="Overall Progress: 0%", + font=("Helvetica", 12) + ) + self.progress_label.pack(pady=(5, 2)) + + # Progress bar + self.progress_bar = ctk.CTkProgressBar(self.progress_frame, height=15) + self.progress_bar.pack(fill="x", padx=10, pady=(0, 5)) + self.progress_bar.set(0) # Initialize to 0% + + # Define progress steps based on analysis mode + if self.analysis_mode == '3d': + self.progress_steps = { + 'calibration': 15, + 'prepare_video': 30, + 'pose_model': 50, + 'synchronization': 70, + 'advanced': 85, + 'activation': 100 + } + else: # 2D mode + self.progress_steps = { + 'pose_model': 40, + 'advanced': 70, + 'activation': 100 + } + + def update_progress_bar(self, value): + """Update the progress bar to a specific value (0-100).""" + progress = value / 100 + self.progress_bar.set(progress) + self.progress_label.configure(text=f"Overall Progress: {value}%") + + def update_tab_indicator(self, tab_name, completed=True): + """Updates the tab indicator to show completion status""" + if tab_name in self.tabs: + # Get the current tab title and icon + tab_title = self.tabs[tab_name].get_title() + tab_icon = self.tab_buttons[tab_name].cget("text").split(" ")[0] + + # Update the tab button text + indicator = "✅" if completed else "❌" + self.tab_buttons[tab_name].configure( + text=f"{tab_icon} {tab_title} {indicator}" + ) + + def generate_config(self): + """Generates the configuration file based on the settings""" + # Collect all settings from tabs + settings = {} + for name, tab in self.tabs.items(): + if hasattr(tab, 'get_settings'): + tab_settings = tab.get_settings() + 
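                # merge into one flat dict; if two tabs emit the same key, the later tab wins (dict.update semantics)
+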
settings.update(tab_settings) + + # Generate config file + config_path = os.path.join(self.participant_name, 'Config.toml') + if self.analysis_mode == '2d': + self.config_generator.generate_2d_config(config_path, settings) + else: + self.config_generator.generate_3d_config(config_path, settings) + + # For batch mode, also generate configs for each trial + if self.process_mode == 'batch': + for i in range(1, self.num_trials + 1): + trial_config_path = os.path.join(self.participant_name, f'Trial_{i}', 'Config.toml') + self.config_generator.generate_3d_config(trial_config_path, settings) \ No newline at end of file diff --git a/GUI/assets/Pose2Sim_favicon.ico b/GUI/assets/Pose2Sim_favicon.ico new file mode 100644 index 00000000..680a2c80 Binary files /dev/null and b/GUI/assets/Pose2Sim_favicon.ico differ diff --git a/GUI/assets/pose2sim_logo.png b/GUI/assets/pose2sim_logo.png new file mode 100644 index 00000000..3f004805 Binary files /dev/null and b/GUI/assets/pose2sim_logo.png differ diff --git a/GUI/blur.py b/GUI/blur.py new file mode 100644 index 00000000..2555b7f4 --- /dev/null +++ b/GUI/blur.py @@ -0,0 +1,2371 @@ +import cv2 +import os +import numpy as np +import tkinter as tk +from tkinter import filedialog, ttk, IntVar, StringVar, BooleanVar, colorchooser +from PIL import Image, ImageTk +import time +import json + +# NOTE: 23.06.2025:Import face_blurring utilities for auto mode +try: + from Pose2Sim.Utilities.face_blurring import face_blurring_func, apply_face_obscuration + from Pose2Sim.poseEstimation import setup_backend_device + from rtmlib import Body, PoseTracker + FACE_BLURRING_AVAILABLE = True +except ImportError: + FACE_BLURRING_AVAILABLE = False + print("Warning: Face blurring utilities not available. Auto mode will be disabled.") + +# Try to import RTMLib and DeepSort for manual mode +# NOTE: 23.06.2025: Manual mode now uses the same Body model as auto mode for consistency (Wholebody -> Body) +try: + from rtmlib import PoseTracker, Body + RTMPOSE_AVAILABLE = True +except ImportError: + RTMPOSE_AVAILABLE = False + print("Warning: RTMLib not available. 
Install with: pip install rtmlib") + +class VideoBlurApp: + # ===== APPLICATION CONSTANTS ===== + + # UI Layout Constants + CONTROL_PANEL_WIDTH = 300 + CANVAS_WIDTH = 310 + NAVIGATION_PANEL_HEIGHT = 120 + SHAPES_LISTBOX_HEIGHT = 5 + + # Default Values + DEFAULT_BLUR_STRENGTH = 21 + MIN_BLUR_STRENGTH = 3 + MAX_BLUR_STRENGTH = 51 + DEFAULT_MASK_TYPE = "blur" + DEFAULT_MASK_SHAPE = "oval" + DEFAULT_MASK_COLOR = (0, 0, 0) + DEFAULT_BLUR_MODE = "manual" + + # Auto Mode Default Settings + DEFAULT_AUTO_BLUR_TYPE = "blur" + DEFAULT_AUTO_BLUR_ACCURACY = "medium" + DEFAULT_AUTO_BLUR_INTENSITY = "medium" + DEFAULT_AUTO_BLUR_SHAPE = "rectangle" + DEFAULT_AUTO_BLUR_SIZE = "medium" + + # Face Detection Constants + FACE_KEYPOINT_INDICES = [0, 1, 2, 3, 4] # nose, left_eye, right_eye, left_ear, right_ear + DETECTION_FREQUENCY = 3 + FACE_CONFIDENCE_THRESHOLD = 0.3 + MIN_FACE_KEYPOINTS = 2 + MIN_FACE_KEYPOINTS_FOR_PROCESSING = 3 + + # Pose Tracker Settings + MANUAL_MODE_DET_FREQUENCY = 2 + AUTO_MODE_DET_FREQUENCY = 10 + + # Mask Effect Constants + PIXELATE_SCALE_DIVISOR = 10 + FOREHEAD_HEIGHT_RATIO = 0.25 # y + h // 4 + FACE_PADDING_X_RATIO = 0.5 + FACE_PADDING_Y_RATIO = 0.7 + + # File Extensions + SUPPORTED_VIDEO_EXTENSIONS = "*.mp4;*.avi;*.mov;*.mkv;*.wmv" + + # UI Option Lists + MASK_TYPES = ["blur", "black", "pixelate", "solid"] + BLUR_MODES = ["manual", "auto"] + AUTO_BLUR_TYPES = ["blur", "black"] + AUTO_BLUR_ACCURACIES = ["low", "medium", "high"] + AUTO_BLUR_INTENSITIES = ["low", "medium", "high"] + AUTO_BLUR_SHAPES = ["polygon", "rectangle"] + AUTO_BLUR_SIZES = ["small", "medium", "large"] + CROP_TYPES = ["traditional", "mask"] + CROP_MASK_TYPES = ["black", "blur"] + SHAPE_TYPES = ["rectangle", "polygon", "freehand"] + FACE_MASK_SHAPES = ["rectangle", "oval", "precise"] + + # RTMPose Accuracy Mapping + ACCURACY_MODE_MAPPING = { + 'low': 'lightweight', + 'medium': 'balanced', + 'high': 'performance' + } + + # Status Messages + DEFAULT_STATUS_MESSAGE = "Open a video file to begin" + AUTO_MODE_UNAVAILABLE_MESSAGE = "Auto mode not available - face blurring utilities not found" + RTMPOSE_UNAVAILABLE_MESSAGE = "Face detection requires RTMPose which is not available" + def __init__(self, root): + self.root = root + self.root.title("Advanced Video Face Masking Tool") + + # Configure the root window with grid layout + self.root.grid_columnconfigure(0, weight=0) # Control panel - fixed width + self.root.grid_columnconfigure(1, weight=1) # Video display - expandable + self.root.grid_rowconfigure(0, weight=1) # Main content + self.root.grid_rowconfigure(1, weight=0) # Status bar - fixed height + + # Input variables + self.input_video = None + self.output_path = None + self.shapes = [] # Will store [shape_type, points, mask_type, blur_strength, color, start_frame, end_frame] + self.auto_detect_faces = False + self.current_frame = None + self.frame_index = 0 + self.cap = None + self.total_frames = 0 + + # Drawing variables + self.drawing = False + self.current_shape = [] + self.temp_shape_id = None + self.current_shape_type = self.SHAPE_TYPES[0] # "rectangle" + + # Mask settings + self.blur_strength = self.DEFAULT_BLUR_STRENGTH + self.mask_type = self.DEFAULT_MASK_TYPE + self.mask_shape = self.DEFAULT_MASK_SHAPE + self.mask_color = self.DEFAULT_MASK_COLOR + + # Frame range variables + self.blur_entire_video = BooleanVar(value=True) + self.start_frame = IntVar(value=0) + self.end_frame = IntVar(value=0) + + # Crop settings + self.crop_enabled = BooleanVar(value=False) + self.crop_x = 0 + self.crop_y = 0 + 
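        # crop region in original-frame pixel coordinates; width/height of 0 means no region drawn yet (checked in toggle_crop)
+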
self.crop_width = 0
+        self.crop_height = 0
+        self.drawing_crop = False
+        self.temp_crop_rect = None
+
+        # Enhanced crop settings
+        self.crop_type = StringVar(value=self.CROP_TYPES[0])  # "traditional"
+        self.crop_mask_type = StringVar(value=self.CROP_MASK_TYPES[0])  # "black"
+        self.crop_all_frames = BooleanVar(value=True)
+        self.crop_start_frame = IntVar(value=0)
+        self.crop_end_frame = IntVar(value=0)
+
+        # Rotation settings (new)
+        self.rotation_angle = IntVar(value=0)
+        self.rotation_enabled = BooleanVar(value=False)
+
+        # Video trimming variables
+        self.trim_enabled = BooleanVar(value=False)
+        self.trim_start_frame = IntVar(value=0)
+        self.trim_end_frame = IntVar(value=0)
+        self.dragging_start = False
+        self.dragging_end = False
+
+        # Status variable
+        self.status_text = StringVar(value=self.DEFAULT_STATUS_MESSAGE)
+
+        # Image positioning
+        self.x_offset = 0
+        self.y_offset = 0
+
+        # Face detection & tracking
+        self.face_detect_var = BooleanVar(value=False)
+
+        # Auto/Manual mode settings
+        self.blur_mode = StringVar(value=self.DEFAULT_BLUR_MODE)
+        self.auto_blur_type = StringVar(value=self.DEFAULT_AUTO_BLUR_TYPE)
+        self.auto_blur_accuracy = StringVar(value=self.DEFAULT_AUTO_BLUR_ACCURACY)
+        self.auto_blur_intensity = StringVar(value=self.DEFAULT_AUTO_BLUR_INTENSITY)
+        self.auto_blur_shape = StringVar(value=self.DEFAULT_AUTO_BLUR_SHAPE)
+        self.auto_blur_size = StringVar(value=self.DEFAULT_AUTO_BLUR_SIZE)
+
+        # Auto mode pose tracker
+        self.auto_pose_tracker = None
+        self.auto_pose_initialized = False
+
+        self.init_face_detection()
+
+        # Create UI components
+        self.create_ui()
+
+        # Add status bar
+        self.status_bar = ttk.Label(self.root, textvariable=self.status_text, relief=tk.SUNKEN, anchor=tk.W)
+        self.status_bar.grid(row=1, column=0, columnspan=2, sticky="ew")
+
+        # Initialize UI state
+        self.on_mode_change()
+
+    def _init_pose_tracker(self, det_frequency, tracker_attr_name, initialized_attr_name, mode_name):
+        """Common pose tracker initialization logic"""
+        backend, device = None, None  # pre-set so the except block can report them even if setup_backend_device raises
+        try:
+            # Setup backend and device
+            backend, device = setup_backend_device('auto', 'auto')
+            print(f"Using Pose2Sim settings: backend={backend}, device={device}")
+
+            # Map blur accuracy to RTMPose mode
+            mode = self.ACCURACY_MODE_MAPPING.get(self.auto_blur_accuracy.get(), 'balanced')
+
+            # Initialize pose tracker
+            tracker = PoseTracker(
+                Body,
+                det_frequency=det_frequency,
+                mode=mode,
+                backend=backend,
+                device=device,
+                tracking=False,
+                to_openpose=False
+            )
+
+            # Set tracker and initialized flag
+            setattr(self, tracker_attr_name, tracker)
+            setattr(self, initialized_attr_name, True)
+            print(f"{mode_name} initialized successfully")
+            return True
+
+        except Exception as e:
+            print(f"Error initializing {mode_name}: {e}")
+            print(f"Error type: {type(e).__name__}")
+            print(f"Backend: {backend}, Device: {device}")
+            import traceback
+            traceback.print_exc()
+            print(f"{mode_name} initialization failed")
+            return False
+
+    # def _init_deepsort(self):
+    #     """Initialize DeepSort tracker"""
+    #     if not DEEPSORT_AVAILABLE:
+    #         return False
+
+    #     try:
+    #         self.deepsort_tracker = DeepSort(
+    #             max_age=30,
+    #             n_init=3,
+    #             nms_max_overlap=1.0,
+    #             max_cosine_distance=0.2,
+    #             nn_budget=None,
+    #             embedder='mobilenet',
+    #             half=True,
+    #             bgr=True,
+    #             embedder_gpu=True
+    #         )
+    #         self.has_deepsort = True
+    #         print("DeepSort initialized successfully")
+    #         return True
+    #     except Exception as e:
+    #         print(f"Error initializing DeepSort: {e}")
+    #         print(f"Error type: {type(e).__name__}")
+    #         import traceback
+    #         traceback.print_exc()
+    #         print("DeepSort initialization failed - basic tracking will be used")
+    #         return False
+
+    def init_face_detection(self):
+        """Initialize face detection and tracking components"""
+        self.rtmpose_initialized = False
+        # self.has_deepsort = False
+        self.tracked_faces = []
+        self.next_face_id = 0
+        self.detection_frequency = self.DETECTION_FREQUENCY
+
+        # print("=== Face Detection Initialization ===")
+        # print(f"RTMLib available: {RTMPOSE_AVAILABLE}")
+        # print(f"DeepSort available: {DEEPSORT_AVAILABLE}")
+        # print(f"Face blurring utilities available: {FACE_BLURRING_AVAILABLE}")
+
+        # Initialize RTMPose if available
+        if RTMPOSE_AVAILABLE:
+            self._init_pose_tracker(self.MANUAL_MODE_DET_FREQUENCY, 'pose_tracker', 'rtmpose_initialized', 'RTMPose for manual mode')
+
+        # # Initialize DeepSort
+        # self._init_deepsort()
+
+    def init_auto_mode(self):
+        """Initialize auto mode pose tracker"""
+        if not FACE_BLURRING_AVAILABLE:
+            return False
+
+        return self._init_pose_tracker(self.AUTO_MODE_DET_FREQUENCY, 'auto_pose_tracker', 'auto_pose_initialized', 'Auto mode')
+
+    def create_ui(self):
+        """Create the main UI layout with fixed positioning"""
+        # Create left panel (controls)
+        control_panel = ttk.Frame(self.root, width=self.CONTROL_PANEL_WIDTH)
+        control_panel.grid(row=0, column=0, sticky="ns", padx=5, pady=5)
+        control_panel.grid_propagate(False)  # Keep fixed width
+
+        # Create right panel (video display and navigation)
+        video_panel = ttk.Frame(self.root)
+        video_panel.grid(row=0, column=1, sticky="nsew", padx=5, pady=5)
+        video_panel.grid_columnconfigure(0, weight=1)
+        video_panel.grid_rowconfigure(0, weight=1)  # Canvas expands
+        video_panel.grid_rowconfigure(1, weight=0)  # Navigation fixed height
+
+        # Create canvas for video display
+        self.canvas_frame = ttk.Frame(video_panel)
+        self.canvas_frame.grid(row=0, column=0, sticky="nsew")
+
+        self.canvas = tk.Canvas(self.canvas_frame, bg="black", highlightthickness=0)
+        self.canvas.pack(fill=tk.BOTH, expand=True)
+
+        # Create navigation panel
+        nav_panel = ttk.Frame(video_panel, height=self.NAVIGATION_PANEL_HEIGHT)
+        nav_panel.grid(row=1, column=0, sticky="ew", pady=(5,0))
+        nav_panel.grid_propagate(False)  # Fix height
+
+        # Add components to the control panel
+        self.setup_control_panel(control_panel)
+
+        # Add components to the navigation panel
+        self.setup_navigation_panel(nav_panel)
+
+        # Bind canvas events for drawing
+        self.canvas.bind("<ButtonPress-1>", self.on_mouse_down)
+        self.canvas.bind("<B1-Motion>", self.on_mouse_move)
+        self.canvas.bind("<ButtonRelease-1>", self.on_mouse_up)
+        self.canvas.bind("<Double-Button-1>", self.on_double_click)
+
+    def setup_control_panel(self, parent):
+        """Set up the left control panel with all controls"""
+        # Create a canvas with scrollbar for the control panel
+        canvas = tk.Canvas(parent, width=self.CANVAS_WIDTH)  # NOTE: 24.06.2025: Adjust width for visibility because width of control_panel is 300.
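+        # classic Tk scrollable-frame pattern: an inner frame is windowed into the canvas and <Configure> keeps the scrollregion sized to its contents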
+        scrollbar = ttk.Scrollbar(parent, orient="vertical", command=canvas.yview)
+        scrollable_frame = ttk.Frame(canvas)
+
+        scrollable_frame.bind(
+            "<Configure>",
+            lambda e: canvas.configure(scrollregion=canvas.bbox("all"))
+        )
+
+        canvas.create_window((0, 0), window=scrollable_frame, anchor="nw")
+        canvas.configure(yscrollcommand=scrollbar.set)
+
+        canvas.pack(side="left", fill="both", expand=True)
+        scrollbar.pack(side="right", fill="y")
+
+        # File operations
+        file_frame = ttk.LabelFrame(scrollable_frame, text="File Operations")
+        file_frame.pack(fill=tk.X, pady=(0,5), padx=2)
+
+        ttk.Button(file_frame, text="Open Video", command=self.open_video).pack(fill=tk.X, padx=5, pady=2)
+        ttk.Button(file_frame, text="Set Output Path", command=self.set_output_path).pack(fill=tk.X, padx=5, pady=2)
+        ttk.Button(file_frame, text="Process Video", command=self.process_video).pack(fill=tk.X, padx=5, pady=2)
+
+        # Drawing tools
+        draw_frame = ttk.LabelFrame(scrollable_frame, text="Drawing Tools")
+        draw_frame.pack(fill=tk.X, pady=5, padx=2)
+
+        tool_frame = ttk.Frame(draw_frame)
+        tool_frame.pack(fill=tk.X, padx=5, pady=2)
+
+        ttk.Button(tool_frame, text="Rectangle", command=lambda: self.set_drawing_mode(self.SHAPE_TYPES[0])).pack(side=tk.LEFT, expand=True, fill=tk.X, padx=2)
+        ttk.Button(tool_frame, text="Polygon", command=lambda: self.set_drawing_mode(self.SHAPE_TYPES[1])).pack(side=tk.LEFT, expand=True, fill=tk.X, padx=2)
+        ttk.Button(tool_frame, text="Freehand", command=lambda: self.set_drawing_mode(self.SHAPE_TYPES[2])).pack(side=tk.LEFT, expand=True, fill=tk.X, padx=2)
+
+        mask_frame = ttk.Frame(draw_frame)
+        mask_frame.pack(fill=tk.X, padx=5, pady=2)
+
+        ttk.Label(mask_frame, text="Mask Type:").pack(side=tk.LEFT)
+        self.mask_type_var = StringVar(value=self.DEFAULT_MASK_TYPE)
+        ttk.Combobox(mask_frame, textvariable=self.mask_type_var, values=self.MASK_TYPES, width=10, state="readonly").pack(side=tk.RIGHT)
+
+        strength_frame = ttk.Frame(draw_frame)
+        strength_frame.pack(fill=tk.X, padx=5, pady=2)
+
+        ttk.Label(strength_frame, text="Effect Strength:").pack(side=tk.LEFT)
+        self.blur_strength_var = IntVar(value=self.DEFAULT_BLUR_STRENGTH)
+        self.strength_label = ttk.Label(strength_frame, text=str(self.DEFAULT_BLUR_STRENGTH))
+        self.strength_label.pack(side=tk.RIGHT)
+
+        # Add a scale for blur strength
+        strength_scale = ttk.Scale(draw_frame, from_=self.MIN_BLUR_STRENGTH, to=self.MAX_BLUR_STRENGTH, orient=tk.HORIZONTAL, variable=self.blur_strength_var)
+        strength_scale.pack(fill=tk.X, padx=5, pady=2)
+        strength_scale.bind("<ButtonRelease-1>", self.update_blur_strength)
+
+        ttk.Button(draw_frame, text="Choose Color", command=self.choose_color).pack(fill=tk.X, padx=5, pady=2)
+
+        # Shape list section
+        shape_frame = ttk.LabelFrame(scrollable_frame, text="Shape List")
+        shape_frame.pack(fill=tk.X, pady=5, padx=2)
+
+        # Listbox with scrollbar
+        list_frame = ttk.Frame(shape_frame)
+        list_frame.pack(fill=tk.BOTH, expand=True, padx=5, pady=2)
+
+        self.shapes_listbox = tk.Listbox(list_frame, height=self.SHAPES_LISTBOX_HEIGHT)
+        self.shapes_listbox.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
+
+        scrollbar = ttk.Scrollbar(list_frame, orient=tk.VERTICAL, command=self.shapes_listbox.yview)
+        scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
+        self.shapes_listbox.config(yscrollcommand=scrollbar.set)
+        self.shapes_listbox.bind("<<ListboxSelect>>", self.on_shape_selected)
+
+        # Frame range for shapes
+        range_frame = ttk.LabelFrame(shape_frame, text="Shape Frame Range")
+        range_frame.pack(fill=tk.X, padx=5, pady=2)
+
+        frame_inputs = ttk.Frame(range_frame)
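+        # Start/End entries bound the selected shape to a frame window; Apply goes through set_shape_frame_range, which clamps the values
+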
frame_inputs.pack(fill=tk.X, pady=2) + + ttk.Label(frame_inputs, text="Start:").pack(side=tk.LEFT) + self.shape_start_frame = ttk.Entry(frame_inputs, width=6) + self.shape_start_frame.pack(side=tk.LEFT, padx=2) + + ttk.Label(frame_inputs, text="End:").pack(side=tk.LEFT, padx=(5,0)) + self.shape_end_frame = ttk.Entry(frame_inputs, width=6) + self.shape_end_frame.pack(side=tk.LEFT, padx=2) + + ttk.Button(frame_inputs, text="Apply", command=self.set_shape_frame_range).pack(side=tk.RIGHT) + + btn_frame = ttk.Frame(shape_frame) + btn_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Button(btn_frame, text="Delete Selected", command=self.delete_selected_shape).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + ttk.Button(btn_frame, text="Clear All", command=self.clear_shapes).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + + # Video cropping section + crop_frame = ttk.LabelFrame(scrollable_frame, text="Video Cropping") + crop_frame.pack(fill=tk.X, pady=5, padx=2) + + ttk.Checkbutton(crop_frame, text="Enable video cropping", variable=self.crop_enabled, command=self.toggle_crop).pack(anchor=tk.W, padx=5, pady=2) + + crop_type_frame = ttk.Frame(crop_frame) + crop_type_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Radiobutton(crop_type_frame, text="Traditional crop (cut out)", variable=self.crop_type, value="traditional").pack(anchor=tk.W) + ttk.Radiobutton(crop_type_frame, text="Mask outside region", variable=self.crop_type, value="mask").pack(anchor=tk.W) + + mask_type_frame = ttk.Frame(crop_frame) + mask_type_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(mask_type_frame, text="Outside area:").pack(side=tk.LEFT) + ttk.Combobox(mask_type_frame, textvariable=self.crop_mask_type, values=self.CROP_MASK_TYPES + ["pixelate"], width=10, state="readonly").pack(side=tk.RIGHT) + + frame_range_frame = ttk.Frame(crop_frame) + frame_range_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Radiobutton(frame_range_frame, text="Apply to all frames", variable=self.crop_all_frames, value=True).pack(anchor=tk.W) + ttk.Radiobutton(frame_range_frame, text="Apply to frame range", variable=self.crop_all_frames, value=False).pack(anchor=tk.W) + + range_inputs = ttk.Frame(crop_frame) + range_inputs.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(range_inputs, text="Start:").pack(side=tk.LEFT) + ttk.Entry(range_inputs, textvariable=self.crop_start_frame, width=6).pack(side=tk.LEFT, padx=2) + + ttk.Label(range_inputs, text="End:").pack(side=tk.LEFT, padx=(5,0)) + ttk.Entry(range_inputs, textvariable=self.crop_end_frame, width=6).pack(side=tk.LEFT, padx=2) + + button_frame = ttk.Frame(crop_frame) + button_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Button(button_frame, text="Draw Crop Region", command=self.start_crop_drawing).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + ttk.Button(button_frame, text="Reset Crop", command=self.reset_crop).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + + info_frame = ttk.Frame(crop_frame) + info_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(info_frame, text="Crop dimensions:").pack(side=tk.LEFT) + self.crop_info_label = ttk.Label(info_frame, text="Not set") + self.crop_info_label.pack(side=tk.RIGHT) + + # Video rotation section (updated) + rotation_frame = ttk.LabelFrame(scrollable_frame, text="Video Rotation") + rotation_frame.pack(fill=tk.X, pady=5, padx=2) + + ttk.Checkbutton(rotation_frame, text="Enable rotation", variable=self.rotation_enabled, + command=self.toggle_rotation).pack(anchor=tk.W, padx=5, pady=2) + + # First row - angle input + rotation_controls = 
ttk.Frame(rotation_frame) + rotation_controls.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(rotation_controls, text="Angle:").pack(side=tk.LEFT) + self.rotation_entry = ttk.Entry(rotation_controls, textvariable=self.rotation_angle, width=6) + self.rotation_entry.pack(side=tk.LEFT, padx=2) + ttk.Label(rotation_controls, text="degrees").pack(side=tk.LEFT) + ttk.Button(rotation_controls, text="Apply", command=lambda: self.set_rotation(self.rotation_angle.get())).pack( + side=tk.RIGHT, padx=2) + + # Second row - rotation buttons + rotation_buttons = ttk.Frame(rotation_frame) + rotation_buttons.pack(fill=tk.X, padx=5, pady=2) + + ttk.Button(rotation_buttons, text="Rotate Left 90°", command=lambda: self.set_rotation(-90)).pack( + side=tk.LEFT, fill=tk.X, expand=True, padx=2) + ttk.Button(rotation_buttons, text="Rotate Right 90°", command=lambda: self.set_rotation(90)).pack( + side=tk.LEFT, fill=tk.X, expand=True, padx=2) + ttk.Button(rotation_buttons, text="Reset", command=lambda: self.set_rotation(0)).pack( + side=tk.LEFT, fill=tk.X, expand=True, padx=2) + + # Face detection section + face_frame = ttk.LabelFrame(scrollable_frame, text="Face Detection & Blurring") + face_frame.pack(fill=tk.X, pady=5, padx=2) + + # Mode selection + mode_frame = ttk.Frame(face_frame) + mode_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(mode_frame, text="Blurring Mode:").pack(side=tk.LEFT) + ttk.Radiobutton(mode_frame, text="Manual", variable=self.blur_mode, value="manual", command=self.on_mode_change).pack(side=tk.LEFT, padx=5) + ttk.Radiobutton(mode_frame, text="Auto", variable=self.blur_mode, value="auto", command=self.on_mode_change).pack(side=tk.LEFT, padx=5) + + # Manual mode settings + self.manual_frame = ttk.LabelFrame(face_frame, text="Manual Mode Settings") + self.manual_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Checkbutton(self.manual_frame, text="Auto-detect and track faces", variable=self.face_detect_var, command=self.toggle_face_detection).pack(anchor=tk.W, padx=5, pady=2) + + # Manual mode accuracy setting + manual_accuracy_frame = ttk.Frame(self.manual_frame) + manual_accuracy_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(manual_accuracy_frame, text="Detection Accuracy:").pack(side=tk.LEFT) + ttk.Combobox(manual_accuracy_frame, textvariable=self.auto_blur_accuracy, values=self.AUTO_BLUR_ACCURACIES, width=10, state="readonly").pack(side=tk.RIGHT) + + face_shape_frame = ttk.Frame(self.manual_frame) + face_shape_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(face_shape_frame, text="Face Mask Shape:").pack(side=tk.LEFT) + self.mask_shape_var = StringVar(value=self.DEFAULT_MASK_SHAPE) + ttk.Combobox(face_shape_frame, textvariable=self.mask_shape_var, values=self.FACE_MASK_SHAPES, width=10, state="readonly").pack(side=tk.RIGHT) + + face_buttons = ttk.Frame(self.manual_frame) + face_buttons.pack(fill=tk.X, padx=5, pady=2) + + ttk.Button(face_buttons, text="Detect Current Frame", command=self.detect_faces_current_frame).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + ttk.Button(face_buttons, text="Export Face Data", command=self.export_face_data).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + + # Auto mode settings + self.auto_frame = ttk.LabelFrame(face_frame, text="Auto Mode Settings") + self.auto_frame.pack(fill=tk.X, padx=5, pady=2) + + # Blur type + blur_type_frame = ttk.Frame(self.auto_frame) + blur_type_frame.pack(fill=tk.X, padx=5, pady=2) + ttk.Label(blur_type_frame, text="Blur Type:").pack(side=tk.LEFT) + ttk.Combobox(blur_type_frame, 
textvariable=self.auto_blur_type, values=self.AUTO_BLUR_TYPES, width=10, state="readonly").pack(side=tk.RIGHT) + + # Blur accuracy + accuracy_frame = ttk.Frame(self.auto_frame) + accuracy_frame.pack(fill=tk.X, padx=5, pady=2) + ttk.Label(accuracy_frame, text="Blur Accuracy:").pack(side=tk.LEFT) + ttk.Combobox(accuracy_frame, textvariable=self.auto_blur_accuracy, values=self.AUTO_BLUR_ACCURACIES, width=10, state="readonly").pack(side=tk.RIGHT) + + # Blur intensity + intensity_frame = ttk.Frame(self.auto_frame) + intensity_frame.pack(fill=tk.X, padx=5, pady=2) + ttk.Label(intensity_frame, text="Blur Intensity:").pack(side=tk.LEFT) + ttk.Combobox(intensity_frame, textvariable=self.auto_blur_intensity, values=self.AUTO_BLUR_INTENSITIES, width=10, state="readonly").pack(side=tk.RIGHT) + + # Blur shape + shape_frame = ttk.Frame(self.auto_frame) + shape_frame.pack(fill=tk.X, padx=5, pady=2) + ttk.Label(shape_frame, text="Blur Shape:").pack(side=tk.LEFT) + ttk.Combobox(shape_frame, textvariable=self.auto_blur_shape, values=self.AUTO_BLUR_SHAPES, width=10, state="readonly").pack(side=tk.RIGHT) + + # Blur size + size_frame = ttk.Frame(self.auto_frame) + size_frame.pack(fill=tk.X, padx=5, pady=2) + ttk.Label(size_frame, text="Blur Size:").pack(side=tk.LEFT) + ttk.Combobox(size_frame, textvariable=self.auto_blur_size, values=self.AUTO_BLUR_SIZES, width=10, state="readonly").pack(side=tk.RIGHT) + + # Auto mode face data saving option + auto_face_data_frame = ttk.Frame(self.auto_frame) + auto_face_data_frame.pack(fill=tk.X, padx=5, pady=2) + self.auto_save_face_data = BooleanVar(value=False) + ttk.Checkbutton(auto_face_data_frame, text="Save Face Information to JSON", variable=self.auto_save_face_data).pack(side=tk.LEFT) + + # Auto mode buttons + auto_buttons = ttk.Frame(self.auto_frame) + auto_buttons.pack(fill=tk.X, padx=5, pady=2) + ttk.Button(auto_buttons, text="Run Auto Mode", command=self.initialize_auto_mode).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + # ttk.Button(auto_buttons, text="Test on Current Frame", command=self.test_auto_mode).pack(side=tk.LEFT, fill=tk.X, expand=True, padx=2) + + # Processing Range section + process_frame = ttk.LabelFrame(scrollable_frame, text="Processing Range") + process_frame.pack(fill=tk.X, pady=5, padx=2) + + ttk.Radiobutton(process_frame, text="Process entire video", variable=self.blur_entire_video, value=True, command=self.toggle_frame_range).pack(anchor=tk.W, padx=5, pady=2) + ttk.Radiobutton(process_frame, text="Process specific range", variable=self.blur_entire_video, value=False, command=self.toggle_frame_range).pack(anchor=tk.W, padx=5, pady=2) + + range_input_frame = ttk.Frame(process_frame) + range_input_frame.pack(fill=tk.X, padx=5, pady=2) + + ttk.Label(range_input_frame, text="Start:").pack(side=tk.LEFT) + self.start_frame_entry = ttk.Entry(range_input_frame, textvariable=self.start_frame, width=8, state="disabled") + self.start_frame_entry.pack(side=tk.LEFT, padx=2) + + ttk.Label(range_input_frame, text="End:").pack(side=tk.LEFT, padx=(5,0)) + self.end_frame_entry = ttk.Entry(range_input_frame, textvariable=self.end_frame, width=8, state="disabled") + self.end_frame_entry.pack(side=tk.LEFT, padx=2) + + def setup_navigation_panel(self, parent): + """Set up the navigation controls below the video""" + # Frame slider + slider_frame = ttk.Frame(parent) + slider_frame.pack(fill=tk.X, padx=5, pady=(5,0)) + + self.frame_slider = ttk.Scale(slider_frame, from_=0, to=100, orient=tk.HORIZONTAL, command=self.on_slider_change) + 
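        # provisional 0-100 range; on_slider_change clamps the value to total_frames - 1, so seeking stays in bounds
+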
self.frame_slider.pack(side=tk.LEFT, fill=tk.X, expand=True, padx=(0,5))
+
+        self.frame_counter = ttk.Label(slider_frame, text="0/0", width=10)
+        self.frame_counter.pack(side=tk.RIGHT)
+
+        # Video trimming
+        trim_frame = ttk.Frame(parent)
+        trim_frame.pack(fill=tk.X, padx=5, pady=(5,0))
+
+        self.trim_check = ttk.Checkbutton(trim_frame, text="Enable video trimming", variable=self.trim_enabled, command=self.toggle_trim)
+        self.trim_check.pack(side=tk.LEFT)
+
+        trim_indicators = ttk.Frame(trim_frame)
+        trim_indicators.pack(side=tk.RIGHT)
+
+        ttk.Label(trim_indicators, text="In:").pack(side=tk.LEFT)
+        self.trim_in_label = ttk.Label(trim_indicators, text="0", width=6)
+        self.trim_in_label.pack(side=tk.LEFT, padx=2)
+
+        ttk.Label(trim_indicators, text="Out:").pack(side=tk.LEFT, padx=(5,0))
+        self.trim_out_label = ttk.Label(trim_indicators, text="0", width=6)
+        self.trim_out_label.pack(side=tk.LEFT, padx=2)
+
+        # Trim timeline
+        self.trim_canvas = tk.Canvas(parent, height=20, bg="lightgray")
+        self.trim_canvas.pack(fill=tk.X, padx=5, pady=(5,0))
+
+        # Navigation buttons
+        button_frame = ttk.Frame(parent)
+        button_frame.pack(fill=tk.X, padx=5, pady=(5,0))
+
+        ttk.Button(button_frame, text="◀◀ Previous", command=self.prev_frame).pack(side=tk.LEFT, padx=2)
+        ttk.Button(button_frame, text="Next ▶▶", command=self.next_frame).pack(side=tk.LEFT, padx=2)
+        ttk.Button(button_frame, text="◀◀◀ -10 Frames", command=lambda: self.jump_frames(-10)).pack(side=tk.LEFT, padx=2)
+        ttk.Button(button_frame, text="+10 Frames ▶▶▶", command=lambda: self.jump_frames(10)).pack(side=tk.LEFT, padx=2)
+
+        # Bind trim canvas events
+        self.trim_canvas.bind("<Button-1>", self.on_trim_click)
+        self.trim_canvas.bind("<B1-Motion>", self.on_trim_drag)
+        self.trim_canvas.bind("<ButtonRelease-1>", self.on_trim_release)
+
+    def on_mode_change(self):
+        """Handle blur mode change between auto and manual"""
+        mode = self.blur_mode.get()
+
+        if mode == "auto":
+            # Show auto frame, hide manual frame
+            self.auto_frame.pack(fill=tk.X, padx=5, pady=2)
+            self.manual_frame.pack_forget()
+
+            # Disable manual face detection
+            self.face_detect_var.set(False)
+            self.auto_detect_faces = False
+
+            # Clear manual mode face tracking data for smooth transition
+            self.tracked_faces = []
+            self.next_face_id = 0
+
+            if not FACE_BLURRING_AVAILABLE:
+                self.status_text.set(self.AUTO_MODE_UNAVAILABLE_MESSAGE)
+                self.blur_mode.set("manual")
+                self.on_mode_change()
+                return
+
+        else:  # manual mode
+            # Show manual frame, hide auto frame
+            self.manual_frame.pack(fill=tk.X, padx=5, pady=2)
+            self.auto_frame.pack_forget()
+
+            # Clear auto mode face data when switching to manual mode
+            if hasattr(self, 'auto_face_data'):
+                self.auto_face_data = None
+
+        self.status_text.set(f"Switched to {mode} mode")
+        self.show_current_frame()
+
+    def initialize_auto_mode(self):
+        """Initialize auto mode pose tracker"""
+        if not FACE_BLURRING_AVAILABLE:
+            self.status_text.set("Auto mode not available")
+            return
+
+        if self.init_auto_mode():
+            self.status_text.set("Auto mode initialized successfully")
+        else:
+            self.status_text.set("Failed to initialize auto mode")
+
+    def process_frame_auto_mode(self, frame):
+        """Process frame using auto mode (face_blurring.py functionality)"""
+        if not self.auto_pose_initialized:
+            return frame
+
+        try:
+            # Detect poses (keypoints and scores)
+            keypoints, scores = self.auto_pose_tracker(frame)
+
+            if keypoints is None or len(keypoints) == 0:
+                return frame
+
+            processed_frame = frame.copy()
+
+            # Store face data for saving if enabled
+            if self.auto_save_face_data.get():
+                if not
hasattr(self, 'auto_face_data'): + self.auto_face_data = { + "video_file": self.input_video, + "frames": {}, + "faces": {} + } + + # Process each detected person + for person_idx in range(len(keypoints)): + person_kpts = keypoints[person_idx] + person_scores = scores[person_idx] + + # Extract face keypoints + face_keypoints = [] + face_scores = [] + + for kpt_idx in self.FACE_KEYPOINT_INDICES: + if kpt_idx < len(person_kpts): + face_keypoints.append(person_kpts[kpt_idx]) + face_scores.append(person_scores[kpt_idx]) + + if len(face_keypoints) < self.MIN_FACE_KEYPOINTS_FOR_PROCESSING: + continue + + face_keypoints = np.array(face_keypoints) + face_scores = np.array(face_scores) + + # Filter valid keypoints (confidence > 0.0) + valid_indices = face_scores > 0.0 + if np.sum(valid_indices) < 3: + continue + + valid_face_kpts = face_keypoints[valid_indices] + + # Estimate face region using eye and nose positions + points_for_hull = self.estimate_face_region(valid_face_kpts) + + if points_for_hull.shape[0] >= 3: + # Apply face obscuration using face_blurring.py function + processed_frame = apply_face_obscuration( + processed_frame, + points_for_hull, + self.auto_blur_type.get(), + self.auto_blur_shape.get(), + self.auto_blur_intensity.get() + ) + + # Save face data if enabled + if self.auto_save_face_data.get(): + # Calculate bounding box from face keypoints + x_coords = [kp[0] for kp in valid_face_kpts] + y_coords = [kp[1] for kp in valid_face_kpts] + + x_min, x_max = min(x_coords), max(x_coords) + y_min, y_max = min(y_coords), max(y_coords) + + # Add padding + width = max(1, x_max - x_min) + height = max(1, y_max - y_min) + + padding_x = width * self.FACE_PADDING_X_RATIO + padding_y = height * self.FACE_PADDING_Y_RATIO + + x = max(0, int(x_min - padding_x)) + y = max(0, int(y_min - padding_y)) + w = min(int(width + padding_x*2), frame.shape[1] - x) + h = min(int(height + padding_y*2), frame.shape[0] - y) + + # Calculate confidence + confidence = np.mean(face_scores[valid_indices]) + + # Store face data + face_id = f"auto_face_{person_idx}" + frame_key = str(self.frame_index) + + if frame_key not in self.auto_face_data["frames"]: + self.auto_face_data["frames"][frame_key] = {"faces": []} + + face_data = { + "face_id": face_id, + "bbox": [x, y, w, h], + "confidence": float(confidence), + "keypoints": valid_face_kpts.tolist() + } + + self.auto_face_data["frames"][frame_key]["faces"].append(face_data) + + # Store face across all frames + if face_id not in self.auto_face_data["faces"]: + self.auto_face_data["faces"][face_id] = { + "frames": [self.frame_index], + "bbox": [x, y, w, h], + "confidence": float(confidence) + } + else: + if self.frame_index not in self.auto_face_data["faces"][face_id]["frames"]: + self.auto_face_data["faces"][face_id]["frames"].append(self.frame_index) + self.auto_face_data["faces"][face_id]["bbox"] = [x, y, w, h] + self.auto_face_data["faces"][face_id]["confidence"] = float(confidence) + + return processed_frame + + except Exception as e: + print(f"Error in auto mode processing: {e}") + return frame + + def estimate_face_region(self, face_keypoints): + """Estimate face region from limited keypoints""" + if len(face_keypoints) < 2: + return face_keypoints + + # Calculate center and scale + center = np.mean(face_keypoints, axis=0) + + # Calculate average distance between points for scaling + distances = [] + for i in range(len(face_keypoints)): + for j in range(i + 1, len(face_keypoints)): + dist = np.linalg.norm(face_keypoints[i] - face_keypoints[j]) + 
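                # collect every pairwise keypoint distance; their mean below serves as the face-scale estimate
+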
distances.append(dist) + + if not distances: + return face_keypoints + + avg_distance = np.mean(distances) + + # Scale factors based on blur size setting + blur_size = self.auto_blur_size.get() + if blur_size == "small": + factor_chin = 2.5 + factor_forehead = 2.0 + elif blur_size == "medium": + factor_chin = 3.0 + factor_forehead = 2.5 + elif blur_size == "large": + factor_chin = 4.0 + factor_forehead = 3.0 + else: + factor_chin = 3.0 + factor_forehead = 2.5 + + # Estimate additional points for face boundary + additional_points = [] + + # Add forehead point (above center) + forehead_point = center + np.array([0, -avg_distance * factor_forehead]) + additional_points.append(forehead_point) + + # Add chin point (below center) + chin_point = center + np.array([0, avg_distance * factor_chin]) + additional_points.append(chin_point) + + # Add side points + left_point = center + np.array([-avg_distance * 1.5, 0]) + right_point = center + np.array([avg_distance * 1.5, 0]) + additional_points.extend([left_point, right_point]) + + # Combine original and estimated points + all_points = np.vstack([face_keypoints, np.array(additional_points)]) + + return all_points + + def update_blur_strength(self, event=None): + """Update blur strength value display""" + value = self.blur_strength_var.get() + # Ensure odd number for gaussian blur + if value % 2 == 0: + value += 1 + self.blur_strength_var.set(value) + + self.blur_strength = value + self.strength_label.config(text=str(value)) + + def toggle_face_detection(self): + """Toggle face detection on/off""" + self.auto_detect_faces = self.face_detect_var.get() + if self.auto_detect_faces and not self.rtmpose_initialized: + self.status_text.set(self.RTMPOSE_UNAVAILABLE_MESSAGE) + self.face_detect_var.set(False) + self.auto_detect_faces = False + return + + status = "enabled" if self.auto_detect_faces else "disabled" + self.status_text.set(f"Automatic face detection {status}") + self.show_current_frame() + + def toggle_crop(self): + """Toggle crop mode on/off with proper validation""" + # If enabling cropping, validate that region is set + if self.crop_enabled.get(): + if self.crop_width <= 0 or self.crop_height <= 0: + self.status_text.set("Please draw a crop region first") + self.crop_enabled.set(False) + return + + status = "enabled" + if self.crop_type.get() == "mask": + status += f" (masking outside with {self.crop_mask_type.get()})" + self.status_text.set(f"Video cropping {status}") + + else: + self.status_text.set("Video cropping disabled") + + self.show_current_frame() + + def toggle_rotation(self): + """Toggle rotation on/off""" + if self.rotation_enabled.get(): + self.status_text.set(f"Video rotation enabled ({self.rotation_angle.get()}°)") + else: + self.status_text.set("Video rotation disabled") + self.show_current_frame() + + def set_rotation(self, angle): + """Set rotation angle""" + current = self.rotation_angle.get() + if angle in [-90, 90]: # Relative rotation + new_angle = (current + angle) % 360 + else: # Absolute rotation + new_angle = angle + + self.rotation_angle.set(new_angle) + self.rotation_enabled.set(True if new_angle != 0 else False) + self.status_text.set(f"Rotation set to {new_angle}°") + self.show_current_frame() + + def rotate_frame(self, frame, angle): + """Rotate a frame by given angle in degrees""" + if angle == 0: + return frame + + height, width = frame.shape[:2] + center = (width // 2, height // 2) + + # Get the rotation matrix + rotation_matrix = cv2.getRotationMatrix2D(center, angle, 1.0) + + # Perform the rotation + 
rotated = cv2.warpAffine(frame, rotation_matrix, (width, height), + flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT) + return rotated + + def toggle_trim(self): + """Toggle video trimming on/off""" + if self.trim_enabled.get(): + if self.cap is None: + self.status_text.set("Please open a video first") + self.trim_enabled.set(False) + return + + # Initialize trim range to full video + self.trim_start_frame.set(0) + self.trim_end_frame.set(self.total_frames - 1) + self.update_trim_display() + self.status_text.set("Video trimming enabled. Drag markers to set in/out points.") + else: + self.clear_trim_display() + self.status_text.set("Video trimming disabled") + + def update_trim_display(self): + """Update trim timeline display""" + if not self.trim_enabled.get() or self.cap is None: + return + + self.clear_trim_display() + + width = self.trim_canvas.winfo_width() + height = self.trim_canvas.winfo_height() + + if width <= 1: # Canvas not yet rendered + self.root.after(100, self.update_trim_display) + return + + # Draw background + self.trim_canvas.create_rectangle(0, 0, width, height, fill="lightgray", outline="") + + # Calculate positions + start_pos = (self.trim_start_frame.get() / max(1, self.total_frames - 1)) * width + end_pos = (self.trim_end_frame.get() / max(1, self.total_frames - 1)) * width + + # Draw trim region + self.trim_canvas.create_rectangle(start_pos, 0, end_pos, height, fill="lightblue", outline="") + + # Draw handles + marker_width = 8 + self.trim_canvas.create_rectangle( + start_pos - marker_width/2, 0, + start_pos + marker_width/2, height, + fill="blue", outline="") + + self.trim_canvas.create_rectangle( + end_pos - marker_width/2, 0, + end_pos + marker_width/2, height, + fill="blue", outline="") + + # Update trim labels + self.trim_in_label.config(text=str(self.trim_start_frame.get())) + self.trim_out_label.config(text=str(self.trim_end_frame.get())) + + def clear_trim_display(self): + """Clear trim display canvas""" + self.trim_canvas.delete("all") + + def on_trim_click(self, event): + """Handle trim timeline clicks""" + if not self.trim_enabled.get() or self.cap is None: + return + + width = self.trim_canvas.winfo_width() + + # Calculate clicked position as frame number + frame_pos = int((event.x / width) * self.total_frames) + frame_pos = max(0, min(frame_pos, self.total_frames - 1)) + + # Determine if clicked on trim handle (within 5 pixels) + start_pos = (self.trim_start_frame.get() / max(1, self.total_frames - 1)) * width + end_pos = (self.trim_end_frame.get() / max(1, self.total_frames - 1)) * width + + if abs(event.x - start_pos) <= 5: + # Clicked on start handle + self.dragging_start = True + self.dragging_end = False + elif abs(event.x - end_pos) <= 5: + # Clicked on end handle + self.dragging_start = False + self.dragging_end = True + else: + # Clicked elsewhere - go to that frame + self.frame_index = frame_pos + self.show_current_frame() + self.dragging_start = False + self.dragging_end = False + + def on_trim_drag(self, event): + """Handle dragging trim handles""" + if not self.trim_enabled.get() or not (self.dragging_start or self.dragging_end): + return + + width = self.trim_canvas.winfo_width() + + # Calculate frame from position + frame_pos = int((event.x / width) * self.total_frames) + frame_pos = max(0, min(frame_pos, self.total_frames - 1)) + + if self.dragging_start: + # Ensure start doesn't go beyond end + frame_pos = min(frame_pos, self.trim_end_frame.get() - 1) + self.trim_start_frame.set(frame_pos) + elif self.dragging_end: + # Ensure end 
doesn't go below start + frame_pos = max(frame_pos, self.trim_start_frame.get() + 1) + self.trim_end_frame.set(frame_pos) + + self.update_trim_display() + + def on_trim_release(self, event): + """Handle releasing trim handles""" + self.dragging_start = False + self.dragging_end = False + + def toggle_frame_range(self): + """Toggle processing range entries""" + if self.blur_entire_video.get(): + self.start_frame_entry.config(state="disabled") + self.end_frame_entry.config(state="disabled") + else: + self.start_frame_entry.config(state="normal") + self.end_frame_entry.config(state="normal") + + def on_slider_change(self, event): + """Handle frame slider change""" + if self.cap is None: + return + + # Prevent recursive updates + if hasattr(self, 'updating_slider') and self.updating_slider: + return + + try: + self.updating_slider = True + frame_num = int(float(self.frame_slider.get())) + if frame_num != self.frame_index: + self.frame_index = max(0, min(frame_num, self.total_frames - 1)) + # Clear face tracking data + self._clear_face_tracking_on_frame_change() + self.show_current_frame() + finally: + self.updating_slider = False + + def next_frame(self): + """Go to next frame""" + if self.cap is None: + return + + if self.frame_index < self.total_frames - 1: + self.frame_index += 1 + # Clear face tracking data + self._clear_face_tracking_on_frame_change() + self.show_current_frame() + + def prev_frame(self): + """Go to previous frame""" + if self.cap is None: + return + + if self.frame_index > 0: + self.frame_index -= 1 + # Clear face tracking data + self._clear_face_tracking_on_frame_change() + self.show_current_frame() + + def jump_frames(self, offset): + """Jump multiple frames forward/backward""" + if self.cap is None: + return + + new_frame = max(0, min(self.frame_index + offset, self.total_frames - 1)) + if new_frame != self.frame_index: + self.frame_index = new_frame + # Clear face tracking data + self._clear_face_tracking_on_frame_change() + self.show_current_frame() + + def _clear_face_tracking_on_frame_change(self): # NOTE: 06.27.2025: detected face should show on only current frame + """Clear face tracking data when frame changes manually to prevent visualization overlap""" + if (self.blur_mode.get() == "manual" and + self.auto_detect_faces and + hasattr(self, 'tracked_faces')): + # Track previous frame index to only clear when frame actually changes + if not hasattr(self, '_prev_frame_index'): + self._prev_frame_index = self.frame_index + + if self._prev_frame_index != self.frame_index: + self.tracked_faces = [] + # Clear DeepSort tracker only when frame actually changes (commented out for Manual Mode) + # if self.has_deepsort: + # print(f"DEBUG: Clearing DeepSort tracker") + # self.deepsort_tracker.tracker.delete_all_tracks() + self._prev_frame_index = self.frame_index + + def update_shapes_listbox(self): + """Update shapes listbox with current items""" + self.shapes_listbox.delete(0, tk.END) + for i, shape in enumerate(self.shapes): + shape_type = shape[0] + mask_type = shape[2] + + # Include frame range if specified + if len(shape) >= 7: + start, end = shape[5], shape[6] + self.shapes_listbox.insert(tk.END, f"{i+1}. {shape_type} - {mask_type} (Frames {start}-{end})") + else: + self.shapes_listbox.insert(tk.END, f"{i+1}. 
{shape_type} - {mask_type}") + + def on_shape_selected(self, event): + """Handle shape selection in listbox""" + selection = self.shapes_listbox.curselection() + if not selection: + return + + idx = selection[0] + if idx >= len(self.shapes): + return + + shape = self.shapes[idx] + + # Set frame range entries + if len(shape) >= 7: + start, end = shape[5], shape[6] + else: + start, end = 0, self.total_frames - 1 + + self.shape_start_frame.delete(0, tk.END) + self.shape_start_frame.insert(0, str(start)) + + self.shape_end_frame.delete(0, tk.END) + self.shape_end_frame.insert(0, str(end)) + + # Highlight the shape in the preview + self.show_current_frame(highlight_shape_idx=idx) + + def set_shape_frame_range(self): + """Set frame range for selected shape""" + selection = self.shapes_listbox.curselection() + if not selection: + self.status_text.set("No shape selected") + return + + idx = selection[0] + if idx >= len(self.shapes): + return + + try: + start = int(self.shape_start_frame.get()) + end = int(self.shape_end_frame.get()) + + # Validate range + start = max(0, min(start, self.total_frames - 1)) + end = max(start, min(end, self.total_frames - 1)) + + # Update shape + shape_list = list(self.shapes[idx]) + if len(shape_list) < 7: + shape_list.extend([0, self.total_frames - 1]) + + shape_list[5] = start + shape_list[6] = end + + self.shapes[idx] = tuple(shape_list) + + self.update_shapes_listbox() + self.show_current_frame() + self.status_text.set(f"Shape {idx+1} set to appear in frames {start}-{end}") + + except ValueError: + self.status_text.set("Invalid frame numbers") + + def delete_selected_shape(self): + """Delete selected shape""" + selection = self.shapes_listbox.curselection() + if not selection: + return + + idx = selection[0] + if idx < len(self.shapes): + del self.shapes[idx] + self.update_shapes_listbox() + self.show_current_frame() + + def clear_shapes(self): + """Clear all shapes""" + self.shapes = [] + self.update_shapes_listbox() + self.show_current_frame() + + def set_drawing_mode(self, mode): + """Set drawing tool mode""" + self.current_shape_type = mode + self.status_text.set(f"Selected drawing mode: {mode}") + + def choose_color(self): + """Open color picker for solid mask color""" + color = colorchooser.askcolor(title="Choose mask color") + if color[0]: + r, g, b = [int(c) for c in color[0]] + self.mask_color = (b, g, r) # Convert to BGR for OpenCV + self.status_text.set(f"Mask color set to RGB: {r},{g},{b}") + + def start_crop_drawing(self): + """Start drawing crop region""" + if self.current_frame is None: + self.status_text.set("Please open a video first") + return + + self.drawing_crop = True + self.crop_x = self.crop_y = self.crop_width = self.crop_height = 0 + self.status_text.set("Click and drag to define crop region") + + def reset_crop(self): + """Reset crop region""" + self.crop_x = self.crop_y = self.crop_width = self.crop_height = 0 + self.crop_enabled.set(False) + self.update_crop_info() + self.show_current_frame() + + def update_crop_info(self): + """Update crop dimension display""" + if self.crop_width > 0 and self.crop_height > 0: + self.crop_info_label.config(text=f"{self.crop_width}x{self.crop_height}") + else: + self.crop_info_label.config(text="Not set") + + def show_current_frame(self, highlight_shape_idx=None): + """Display current frame with all effects""" + if self.cap is None: + return + + # Update slider position and frame counter + if not hasattr(self, 'updating_slider') or not self.updating_slider: + self.updating_slider = True + 
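+            # Guard against re-entrancy: frame_slider.set() below fires
+            # on_slider_change(), which would seek and redraw a second time;
+            # the updating_slider flag makes that callback return early.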
self.frame_slider.set(self.frame_index) + self.updating_slider = False + + self.frame_counter.config(text=f"{self.frame_index+1}/{self.total_frames}") + + # Seek to frame + self.cap.set(cv2.CAP_PROP_POS_FRAMES, self.frame_index) + ret, frame = self.cap.read() + + if not ret: + self.status_text.set("Failed to read frame") + return + + # Store original frame + self.current_frame = frame.copy() + + # Process the frame with all effects + processed = self.process_frame(frame) + + # Convert to RGB for display + rgb_frame = cv2.cvtColor(processed, cv2.COLOR_BGR2RGB) + + # Calculate scale factor to fit display + canvas_width = self.canvas.winfo_width() + canvas_height = self.canvas.winfo_height() + + if canvas_width < 10 or canvas_height < 10: # Canvas not yet rendered + self.canvas.config(width=800, height=600) # Set default size + canvas_width, canvas_height = 800, 600 + + frame_height, frame_width = rgb_frame.shape[:2] + scale = min(canvas_width / frame_width, canvas_height / frame_height) + + # Don't allow upscaling + if scale > 1.0: + scale = 1.0 + + # Calculate new dimensions + new_width = int(frame_width * scale) + new_height = int(frame_height * scale) + + # Resize frame + scaled_frame = cv2.resize(rgb_frame, (new_width, new_height)) + + # Store scale for coordinate conversion + self.scale_factor = scale + + # Create PyTk image + self.photo = ImageTk.PhotoImage(image=Image.fromarray(scaled_frame)) + + # Clear canvas + self.canvas.delete("all") + + # Center the image on canvas + x_offset = max(0, (canvas_width - new_width) // 2) + y_offset = max(0, (canvas_height - new_height) // 2) + + # Store offsets for coordinate conversion + self.x_offset = x_offset + self.y_offset = y_offset + + # Display the image + self.canvas.create_image(x_offset, y_offset, anchor=tk.NW, image=self.photo) + + # Draw visualization overlays + self.draw_overlays(highlight_shape_idx) + + # Update trim display if enabled + if self.trim_enabled.get(): + self.update_trim_display() + + # Update status with active elements + active_shapes = sum(1 for s in self.shapes if len(s) < 7 or + (s[5] <= self.frame_index <= s[6])) + self.status_text.set(f"Frame {self.frame_index+1}/{self.total_frames} | " + + f"Active shapes: {active_shapes} | " + + f"Faces: {len(self.tracked_faces)}") + + def process_frame(self, frame): + """Process frame with all active effects""" + result = frame.copy() + + # Apply rotation if enabled + if self.rotation_enabled.get() and self.rotation_angle.get() != 0: + result = self.rotate_frame(result, self.rotation_angle.get()) + + # Face detection/tracking based on mode + if self.blur_mode.get() == "auto": + # Use auto mode (face_blurring.py functionality) + result = self.process_frame_auto_mode(result) + else: + # Use manual mode (original functionality) + if self.auto_detect_faces and self.rtmpose_initialized: + # Always re-detect faces for current frame to prevent visualization overlap + self.tracked_faces = self.detect_faces(frame) + + if self.tracked_faces: + face_mask = self.create_face_mask(frame) + result = self.apply_mask_effect(result, face_mask, self.mask_type, + self.blur_strength, self.mask_color) + + # Apply manual shapes + for shape in self.shapes: + # Check if active for current frame + if len(shape) >= 7: + if self.frame_index < shape[5] or self.frame_index > shape[6]: + continue + + shape_type, points, mask_type, strength, color = shape[:5] + + # Create shape mask + height, width = frame.shape[:2] + shape_mask = np.zeros((height, width), dtype=np.uint8) + + if shape_type == "rectangle": + 
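+                    # The two stored corners may come from any drag direction,
+                    # so normalize them to (min, max) before rasterizing a
+                    # filled rectangle (thickness -1) into the binary mask.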
x1, y1 = points[0] + x2, y2 = points[1] + x_min, x_max = min(x1, x2), max(x1, x2) + y_min, y_max = min(y1, y2), max(y1, y2) + cv2.rectangle(shape_mask, (x_min, y_min), (x_max, y_max), 255, -1) + + elif shape_type in ("polygon", "freehand"): + points_array = np.array(points, np.int32).reshape((-1, 1, 2)) + cv2.fillPoly(shape_mask, [points_array], 255) + + # Apply effect to this shape + result = self.apply_mask_effect(result, shape_mask, mask_type, strength, color) + + # Apply crop preview if enabled + if self.crop_enabled.get() and self.crop_width > 0 and self.crop_height > 0: + # Always show preview regardless of frame range settings + if self.crop_type.get() == "mask": + # Create inverse mask (everything outside crop area) + height, width = frame.shape[:2] + crop_mask = np.ones((height, width), dtype=np.uint8) * 255 + + x, y = self.crop_x, self.crop_y + w, h = self.crop_width, self.crop_height + + # Clear mask in crop region (keep this area) + crop_mask[y:y+h, x:x+w] = 0 + + # Apply effect to outside area + result = self.apply_mask_effect( + result, crop_mask, self.crop_mask_type.get(), + self.blur_strength, self.mask_color + ) + + return result + + def draw_overlays(self, highlight_shape_idx=None): + """Draw visualization overlays on canvas""" + # Draw manual shapes outlines + for i, shape in enumerate(self.shapes): + # Skip if not visible in current frame + if len(shape) >= 7 and (self.frame_index < shape[5] or self.frame_index > shape[6]): + continue + + shape_type, points = shape[0], shape[1] + + # Set color - highlight selected shape + color = "yellow" if i == highlight_shape_idx else "red" + + if shape_type == "rectangle": + x1, y1 = points[0] + x2, y2 = points[1] + + # Convert to canvas coordinates + cx1, cy1 = self.img_to_canvas(x1, y1) + cx2, cy2 = self.img_to_canvas(x2, y2) + + self.canvas.create_rectangle(cx1, cy1, cx2, cy2, outline=color, width=2) + + elif shape_type in ("polygon", "freehand"): + # Convert all points to canvas coordinates + canvas_points = [] + for x, y in points: + cx, cy = self.img_to_canvas(x, y) + canvas_points.append(cx) + canvas_points.append(cy) + + self.canvas.create_polygon(*canvas_points, fill="", outline=color, width=2) + + # Draw tracked faces + for face_id, (x, y, w, h), _, _ in self.tracked_faces: + # Convert to canvas coordinates + cx1, cy1 = self.img_to_canvas(x, y) + cx2, cy2 = self.img_to_canvas(x+w, y+h) + + self.canvas.create_rectangle(cx1, cy1, cx2, cy2, outline="lime", width=2) + self.canvas.create_text(cx1, cy1-10, text=f"ID:{face_id}", fill="lime", anchor=tk.W) + + # Draw crop region if set + if self.crop_width > 0 and self.crop_height > 0: + cx1, cy1 = self.img_to_canvas(self.crop_x, self.crop_y) + cx2, cy2 = self.img_to_canvas(self.crop_x + self.crop_width, + self.crop_y + self.crop_height) + + # Dash pattern for rectangle + dash_pattern = (5, 5) + crop_color = "green" if self.crop_enabled.get() else "gray" + + self.canvas.create_rectangle(cx1, cy1, cx2, cy2, + outline=crop_color, width=2, + dash=dash_pattern) + + # Add crop dimensions text + self.canvas.create_text(cx1, cy1-10, + text=f"Crop: {self.crop_width}x{self.crop_height}", + fill=crop_color, anchor=tk.W) + + def img_to_canvas(self, x, y): + """Convert image coordinates to canvas coordinates""" + canvas_x = int(x * self.scale_factor) + self.x_offset + canvas_y = int(y * self.scale_factor) + self.y_offset + return canvas_x, canvas_y + + def canvas_to_img(self, canvas_x, canvas_y): + """Convert canvas coordinates to image coordinates""" + img_x = int((canvas_x - self.x_offset) / 
self.scale_factor) + img_y = int((canvas_y - self.y_offset) / self.scale_factor) + return img_x, img_y + + def on_mouse_down(self, event): + """Handle mouse down event for drawing""" + if self.current_frame is None: + return + + # Convert canvas coordinates to image coordinates + x, y = self.canvas_to_img(event.x, event.y) + + # Ensure coordinates are within image bounds + img_h, img_w = self.current_frame.shape[:2] + if x < 0 or x >= img_w or y < 0 or y >= img_h: + return + + if self.drawing_crop: + # Start crop drawing + self.crop_x = x + self.crop_y = y + + # Create temp rectangle on canvas + cx, cy = self.img_to_canvas(x, y) + self.temp_crop_rect = self.canvas.create_rectangle( + cx, cy, cx, cy, outline="green", width=2, dash=(5,5)) + return + + if self.current_shape_type == "rectangle": + self.drawing = True + self.current_shape = [(x, y)] + + # Create temp rectangle on canvas + cx, cy = self.img_to_canvas(x, y) + self.temp_shape_id = self.canvas.create_rectangle( + cx, cy, cx, cy, outline="red", width=2) + + elif self.current_shape_type == "polygon": + if not self.drawing: + self.drawing = True + self.current_shape = [(x, y)] + + # Draw first point + cx, cy = self.img_to_canvas(x, y) + self.temp_shape_id = self.canvas.create_oval( + cx-3, cy-3, cx+3, cy+3, fill="red", outline="red") + else: + # Add point to polygon + self.current_shape.append((x, y)) + + # Draw line to previous point + prev_x, prev_y = self.current_shape[-2] + prev_cx, prev_cy = self.img_to_canvas(prev_x, prev_y) + cx, cy = self.img_to_canvas(x, y) + + self.canvas.create_line(prev_cx, prev_cy, cx, cy, fill="red", width=2) + self.canvas.create_oval(cx-3, cy-3, cx+3, cy+3, fill="red", outline="red") + + elif self.current_shape_type == "freehand": + self.drawing = True + self.current_shape = [(x, y)] + + # Draw first point + cx, cy = self.img_to_canvas(x, y) + self.temp_shape_id = self.canvas.create_oval( + cx-3, cy-3, cx+3, cy+3, fill="red", outline="red") + + def on_mouse_move(self, event): + """Handle mouse movement during drawing""" + if not self.drawing and not self.drawing_crop: + return + + # Convert canvas coordinates to image coordinates + x, y = self.canvas_to_img(event.x, event.y) + + # Ensure coordinates are within image bounds + img_h, img_w = self.current_frame.shape[:2] + x = max(0, min(x, img_w-1)) + y = max(0, min(y, img_h-1)) + + if self.drawing_crop: + # Update crop rectangle + cx, cy = self.img_to_canvas(x, y) + start_x, start_y = self.img_to_canvas(self.crop_x, self.crop_y) + self.canvas.coords(self.temp_crop_rect, start_x, start_y, cx, cy) + return + + if self.current_shape_type == "rectangle": + # Update rectangle + cx, cy = self.img_to_canvas(x, y) + start_x, start_y = self.img_to_canvas(self.current_shape[0][0], self.current_shape[0][1]) + self.canvas.coords(self.temp_shape_id, start_x, start_y, cx, cy) + + elif self.current_shape_type == "freehand": + # Add point to freehand shape + self.current_shape.append((x, y)) + + # Draw line to previous point + prev_x, prev_y = self.current_shape[-2] + prev_cx, prev_cy = self.img_to_canvas(prev_x, prev_y) + cx, cy = self.img_to_canvas(x, y) + + self.canvas.create_line(prev_cx, prev_cy, cx, cy, fill="red", width=2) + + def on_mouse_up(self, event): + """Handle mouse release after drawing""" + if not self.drawing and not self.drawing_crop: + return + + # Convert canvas coordinates to image coordinates + x, y = self.canvas_to_img(event.x, event.y) + + # Ensure coordinates are within image bounds + img_h, img_w = self.current_frame.shape[:2] + x = max(0, 
min(x, img_w-1)) + y = max(0, min(y, img_h-1)) + + if self.drawing_crop: + self.drawing_crop = False + + # Calculate crop dimensions + width = abs(x - self.crop_x) + height = abs(y - self.crop_y) + + # Ensure top-left is minimum coordinate + if x < self.crop_x: + self.crop_x = x + if y < self.crop_y: + self.crop_y = y + + # Set crop dimensions + self.crop_width = width + self.crop_height = height + + # Update info and enable crop + self.update_crop_info() + self.crop_enabled.set(True) + + # Refresh display + self.show_current_frame() + return + + if self.current_shape_type == "rectangle": + self.drawing = False + self.current_shape.append((x, y)) + + # Add the rectangle with current settings + self.shapes.append(( + "rectangle", + self.current_shape, + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + + self.current_shape = [] + self.update_shapes_listbox() + self.show_current_frame() + + elif self.current_shape_type == "freehand": + self.drawing = False + + # Add shape if it has enough points + if len(self.current_shape) > 2: + self.shapes.append(( + "freehand", + self.current_shape, + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + + self.current_shape = [] + self.update_shapes_listbox() + self.show_current_frame() + + def on_double_click(self, event): + """Finish polygon on double-click""" + if self.current_shape_type == "polygon" and self.drawing: + self.drawing = False + + # Add polygon if it has at least 3 points + if len(self.current_shape) >= 3: + self.shapes.append(( + "polygon", + self.current_shape, + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + + self.current_shape = [] + self.update_shapes_listbox() + self.show_current_frame() + + def detect_faces(self, frame): + """Detect and track faces in the frame""" + if not self.rtmpose_initialized: + return [] + + try: + # Get keypoints from RTMPose + keypoints, scores = self.pose_tracker(frame) + + # Process detected people + detections = [] + for person_idx, (person_kps, person_scores) in enumerate(zip(keypoints, scores)): + # Extract face keypoints + face_kps = [] + face_scores = [] + + for idx in self.FACE_KEYPOINT_INDICES: + if idx < len(person_kps) and person_scores[idx] > self.FACE_CONFIDENCE_THRESHOLD: + face_kps.append((int(person_kps[idx][0]), int(person_kps[idx][1]))) + face_scores.append(person_scores[idx]) + + if len(face_kps) >= self.MIN_FACE_KEYPOINTS: + # Calculate bounding box + x_coords = [kp[0] for kp in face_kps] + y_coords = [kp[1] for kp in face_kps] + + x_min, x_max = min(x_coords), max(x_coords) + y_min, y_max = min(y_coords), max(y_coords) + + # Add padding + width = max(1, x_max - x_min) + height = max(1, y_max - y_min) + + padding_x = width * self.FACE_PADDING_X_RATIO + padding_y = height * self.FACE_PADDING_Y_RATIO + + x = max(0, int(x_min - padding_x)) + y = max(0, int(y_min - padding_y)) + w = min(int(width + padding_x*2), frame.shape[1] - x) + h = min(int(height + padding_y*2), frame.shape[0] - y) + + # Calculate confidence + confidence = sum(face_scores) / len(face_scores) if face_scores else 0.0 + + # Add to detections + detections.append(([x, y, w, h], confidence, "face")) + + + # # NOTE: 06.27.2025: commented out for Manual Mode to avoid confirmation issues + # if self.has_deepsort and detections: + # tracks = self.deepsort_tracker.update_tracks(detections, frame=frame) + # print(f"DEBUG: 
DeepSort returned {len(tracks)} tracks") + # + # tracked_faces = [] + # for track in tracks: + # print(f"DEBUG: Track {track.track_id} is_confirmed: {track.is_confirmed()}") + # if track.is_confirmed(): + # track_id = track.track_id + # tlwh = track.to_tlwh() + # x, y, w, h = [int(v) for v in tlwh] + # + # tracked_faces.append([ + # track_id, + # (x, y, w, h), + # track.get_det_conf(), + # self.frame_index + # ]) + # + # print(f"DEBUG: Confirmed tracks: {len(tracked_faces)}") + # return tracked_faces + # else: + # Basic tracking without DeepSort (used for Manual Mode) + return [(i, (d[0][0], d[0][1], d[0][2], d[0][3]), d[1], self.frame_index) + for i, d in enumerate(detections)] + + except Exception as e: + print(f"Error in face detection: {e}") + return [] + + # HACK: 06.27.2025: added to prevent cv2.ellipse error when bounding box is too small + def create_face_mask(self, frame): + """Create a mask for tracked faces""" + height, width = frame.shape[:2] + mask = np.zeros((height, width), dtype=np.uint8) + + for face_id, (x, y, w, h), confidence, _ in self.tracked_faces: + # Skip invalid bounding boxes + if w <= 0 or h <= 0: + continue + + if self.mask_shape == "rectangle": + cv2.rectangle(mask, (x, y), (x+w, y+h), 255, -1) + + elif self.mask_shape == "oval": + center = (x + w // 2, y + h // 2) + # Ensure axes are at least 1 to avoid OpenCV error (is this proper way to handle this?) + axes = (max(1, w // 2), max(1, h // 2)) + cv2.ellipse(mask, center, axes, 0, 0, 360, 255, -1) + + # Add forehead + forehead_center = (center[0], y + int(h * self.FOREHEAD_HEIGHT_RATIO)) + forehead_size = (max(1, w // 2), max(1, h // 2)) + cv2.ellipse(mask, forehead_center, forehead_size, 0, 0, 180, 255, -1) + + elif self.mask_shape == "precise": + center_x = x + w // 2 + center_y = y + h // 2 + + # Create face shape points + face_poly = [] + + # Top of head + for angle in range(0, 180, 10): + angle_rad = np.radians(angle) + radius_x = w // 2 + radius_y = int(h * 0.6) + px = center_x + int(radius_x * np.cos(angle_rad)) + py = center_y - int(radius_y * np.sin(angle_rad)) + face_poly.append((px, py)) + + # Chin and jaw + for angle in range(180, 360, 10): + angle_rad = np.radians(angle) + radius_x = int(w * 0.45) + radius_y = int(h * 0.5) + px = center_x + int(radius_x * np.cos(angle_rad)) + py = center_y - int(radius_y * np.sin(angle_rad)) + face_poly.append((px, py)) + + # Fill polygon + face_poly = np.array(face_poly, np.int32).reshape((-1, 1, 2)) + cv2.fillPoly(mask, [face_poly], 255) + + return mask + + def apply_mask_effect(self, frame, mask, mask_type, blur_strength=21, color=(0,0,0)): + """Apply effect to masked area""" + result = frame.copy() + + if mask_type == "blur": + # Make sure strength is odd + if blur_strength % 2 == 0: + blur_strength += 1 + + blurred = cv2.GaussianBlur(frame, (blur_strength, blur_strength), 0) + result = np.where(mask[:, :, np.newaxis] == 255, blurred, result) + + elif mask_type == "pixelate": + scale = max(1, blur_strength // self.PIXELATE_SCALE_DIVISOR) + temp = cv2.resize(frame, (frame.shape[1] // scale, frame.shape[0] // scale), + interpolation=cv2.INTER_LINEAR) + pixelated = cv2.resize(temp, (frame.shape[1], frame.shape[0]), + interpolation=cv2.INTER_NEAREST) + result = np.where(mask[:, :, np.newaxis] == 255, pixelated, result) + + elif mask_type == "solid": + colored_mask = np.zeros_like(frame) + colored_mask[:] = color + result = np.where(mask[:, :, np.newaxis] == 255, colored_mask, result) + + elif mask_type == "black": + result = np.where(mask[:, :, np.newaxis] == 255, 
0, result) + + return result + + def detect_faces_current_frame(self): + """Detect faces in current frame and add as shapes""" + if self.current_frame is None: + return + + faces = self.detect_faces(self.current_frame) + + for face_id, (x, y, w, h), _, _ in faces: + if self.mask_shape == "rectangle": + self.shapes.append(( + "rectangle", + [(x, y), (x+w, y+h)], + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + elif self.mask_shape == "oval": + # Create oval points + center_x, center_y = x + w // 2, y + h // 2 + rx, ry = w // 2, h // 2 + + oval_points = [] + for angle in range(0, 360, 10): + rad = np.radians(angle) + px = center_x + int(rx * np.cos(rad)) + py = center_y + int(ry * np.sin(rad)) + oval_points.append((px, py)) + + self.shapes.append(( + "polygon", + oval_points, + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + elif self.mask_shape == "precise": + # Create precise face shape + center_x = x + w // 2 + center_y = y + h // 2 + + face_points = [] + + # Top of head + for angle in range(0, 180, 10): + angle_rad = np.radians(angle) + radius_x = w // 2 + radius_y = int(h * 0.6) + px = center_x + int(radius_x * np.cos(angle_rad)) + py = center_y - int(radius_y * np.sin(angle_rad)) + face_points.append((px, py)) + + # Chin and jaw + for angle in range(180, 360, 10): + angle_rad = np.radians(angle) + radius_x = int(w * 0.45) + radius_y = int(h * 0.5) + px = center_x + int(radius_x * np.cos(angle_rad)) + py = center_y - int(radius_y * np.sin(angle_rad)) + face_points.append((px, py)) + + self.shapes.append(( + "polygon", + face_points, + self.mask_type_var.get(), + self.blur_strength, + self.mask_color, + self.frame_index, + self.total_frames - 1 + )) + + self.update_shapes_listbox() + self.show_current_frame() + self.status_text.set(f"Added {len(faces)} detected faces as shapes") + + # NOTE: 06.27.2025: add common function for better usability + def save_face_data_to_json(self, face_data, mode="manual", filename_suffix=""): + """Common function to save face data to JSON file + + Args: + face_data: Dictionary containing face data + mode: "manual" or "auto" to distinguish the source + filename_suffix: Additional suffix for filename (optional) + """ + if not face_data or not face_data.get("faces") or self.input_video is None: + self.status_text.set("No face data to save") + return None + + input_filename = os.path.basename(self.input_video) + base_name = os.path.splitext(input_filename)[0] + + # Generate filename based on mode and suffix + if filename_suffix: + json_filename = f"{base_name}_faces_{mode}_{filename_suffix}.json" + else: + json_filename = f"{base_name}_faces_{mode}.json" + + if self.output_path: + json_path = os.path.join(self.output_path, json_filename) + else: + input_path = os.path.dirname(self.input_video) + output_dir = os.path.join(input_path, "FaceData") + os.makedirs(output_dir, exist_ok=True) + json_path = os.path.join(output_dir, json_filename) + + try: + with open(json_path, 'w') as f: + json.dump(face_data, f, indent=4) + + self.status_text.set(f"{mode.capitalize()} mode face data saved to {json_path}") + return json_path + except Exception as e: + self.status_text.set(f"Error saving face data: {str(e)}") + return None + + def export_face_data(self): + """Export current frame face data to JSON (Manual Mode)""" + if not self.tracked_faces or self.input_video is None: + self.status_text.set("No faces to export") + return + + # 
Convert tracked_faces to standard format + face_data = { + "video_file": self.input_video, + "frames": { + str(self.frame_index): { + "faces": [] + } + }, + "faces": {} + } + + for face_id, (x, y, w, h), confidence, frame_idx in self.tracked_faces: + face_info = { + "face_id": face_id, + "bbox": [x, y, w, h], + "confidence": float(confidence) if confidence is not None else 0.0 + } + + face_data["frames"][str(self.frame_index)]["faces"].append(face_info) + face_data["faces"][str(face_id)] = { + "frames": [int(frame_idx)], + "bbox": [x, y, w, h], + "confidence": float(confidence) if confidence is not None else 0.0 + } + + self.save_face_data_to_json(face_data, "manual", "current_frame") + + def open_video(self): + """Open a video file""" + video_path = filedialog.askopenfilename(filetypes=[ + ("Video files", self.SUPPORTED_VIDEO_EXTENSIONS), + ("All files", "*.*") + ]) + + if not video_path: + return + + # Open the video + self.cap = cv2.VideoCapture(video_path) + if not self.cap.isOpened(): + self.status_text.set("Failed to open video file") + return + + # Store video info + self.input_video = video_path + self.total_frames = int(self.cap.get(cv2.CAP_PROP_FRAME_COUNT)) + self.frame_index = 0 + + # Reset state + self.shapes = [] + self.tracked_faces = [] + self.crop_x = self.crop_y = self.crop_width = self.crop_height = 0 + self.crop_enabled.set(False) + self.update_crop_info() + + # Reset trim + self.trim_enabled.set(False) + self.trim_start_frame.set(0) + self.trim_end_frame.set(self.total_frames - 1) + self.clear_trim_display() + + # Reset rotation + self.rotation_angle.set(0) + self.rotation_enabled.set(False) + + # Update slider range + self.frame_slider.config(from_=0, to=self.total_frames-1) + + # Update frame range defaults + self.start_frame.set(0) + self.end_frame.set(self.total_frames-1) + self.crop_start_frame.set(0) + self.crop_end_frame.set(self.total_frames-1) + + # Reset trackers + if self.rtmpose_initialized: + self.pose_tracker.reset() + # if self.has_deepsort: + # self.deepsort_tracker.tracker.delete_all_tracks() + + # Get video properties + width = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH)) + height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) + fps = self.cap.get(cv2.CAP_PROP_FPS) + + # Update shapes listbox + self.update_shapes_listbox() + + # Show first frame + self.show_current_frame() + + self.status_text.set(f"Opened video: {os.path.basename(video_path)} | " + + f"{width}x{height} | FPS: {fps:.2f} | Frames: {self.total_frames}") + + def set_output_path(self): + """Set output directory with default as input folder""" + # Set default directory to input video folder if available + initial_dir = None + if self.input_video: + initial_dir = os.path.dirname(self.input_video) + else: + initial_dir = os.getcwd() # Use current working directory as fallback + + path = filedialog.askdirectory(initialdir=initial_dir) + if path: + self.output_path = path + self.status_text.set(f"Output path set to: {path}") + else: + # If user cancels and no output path is set, use input folder as default + if not self.output_path and self.input_video: + self.output_path = os.path.dirname(self.input_video) + self.status_text.set(f"Using input folder as output: {self.output_path}") + + def get_video_writer(self, output_path, width, height, fps): + """Create a video writer with appropriate codec""" + _, ext = os.path.splitext(output_path) + ext = ext.lower() + + # Map extensions to codecs + codec_map = { + '.mp4': 'avc1', # H.264 + '.avi': 'XVID', + '.mov': 'avc1', + '.mkv': 'XVID', + '.wmv': 
'WMV2' + } + + fourcc_str = codec_map.get(ext, 'avc1') # Default to H.264 + + # Try selected codec + fourcc = cv2.VideoWriter_fourcc(*fourcc_str) + out = cv2.VideoWriter(output_path, fourcc, fps, (width, height), isColor=True) + + # If failed, try fallbacks + if not out.isOpened(): + for codec in ['mp4v', 'XVID', 'avc1', 'H264']: + if codec == fourcc_str: + continue # Skip already tried codec + + fourcc = cv2.VideoWriter_fourcc(*codec) + out = cv2.VideoWriter(output_path, fourcc, fps, (width, height), isColor=True) + + if out.isOpened(): + break + + return out + + def process_video(self): + """Process video and apply all effects""" + # Validate inputs + # if self.input_video is None or self.output_path is None: + # self.status_text.set("Please select input video and output path") + # return + + if self.input_video is None: + self.status_text.set("Please select input video") + return + + # Initialize auto mode if selected + if self.blur_mode.get() == "auto": + if not FACE_BLURRING_AVAILABLE: + self.status_text.set(self.AUTO_MODE_UNAVAILABLE_MESSAGE) + return + if not self.auto_pose_initialized: + if not self.init_auto_mode(): + self.status_text.set("Failed to initialize auto mode") + return + + # Determine output filename + input_filename = os.path.basename(self.input_video) + base_name, ext = os.path.splitext(input_filename) + output_filename = f"processed_{base_name}{ext}" + if self.output_path: + output_path = os.path.join(self.output_path, output_filename) + else: + input_path = os.path.dirname(self.input_video) + output_dir = os.path.join(input_path, "processed_videos") + os.makedirs(output_dir, exist_ok=True) + output_path = os.path.join(output_dir, output_filename) + + # Get video properties + self.cap.set(cv2.CAP_PROP_POS_FRAMES, 0) + fps = self.cap.get(cv2.CAP_PROP_FPS) + width = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH)) + height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) + + # Determine output dimensions + if self.crop_enabled.get() and self.crop_width > 0 and self.crop_height > 0 and self.crop_type.get() == "traditional": + output_width = self.crop_width + output_height = self.crop_height + else: + output_width = width + output_height = height + + # Create video writer + out = self.get_video_writer(output_path, output_width, output_height, fps) + + if not out.isOpened(): + self.status_text.set("Failed to create output video. 
Check output path and permissions.") + return + + # Determine frame range + if self.trim_enabled.get(): + start_frame = self.trim_start_frame.get() + end_frame = self.trim_end_frame.get() + elif not self.blur_entire_video.get(): + start_frame = self.start_frame.get() + end_frame = self.end_frame.get() + else: + start_frame = 0 + end_frame = self.total_frames - 1 + + # Initialize face tracking data + face_tracking_data = { + "video_file": self.input_video, + "frames": {}, + "faces": {} + } + + # Create progress window + progress_window = tk.Toplevel(self.root) + progress_window.title("Processing Video") + progress_window.geometry("400x150") + progress_window.transient(self.root) + progress_window.grab_set() # Make modal + + progress_label = ttk.Label(progress_window, text="Processing video frames...") + progress_label.pack(pady=10) + + progress_var = tk.DoubleVar() + progress_bar = ttk.Progressbar(progress_window, variable=progress_var, maximum=100) + progress_bar.pack(fill=tk.X, padx=20, pady=10) + + status_label = ttk.Label(progress_window, text="Starting processing...") + status_label.pack(pady=5) + + # Reset trackers + if self.rtmpose_initialized: + self.pose_tracker.reset() + # if self.has_deepsort: + # self.deepsort_tracker.tracker.delete_all_tracks() + + # Start processing + total_frames = end_frame - start_frame + 1 + self.cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame) + + processing_start = time.time() + current_frame_idx = start_frame + frame_count = 0 + errors = 0 + + try: + # Process each frame + while current_frame_idx <= end_frame: + # Read frame + ret, frame = self.cap.read() + if not ret: + break + + # Update progress + progress = ((current_frame_idx - start_frame) / total_frames) * 100 + progress_var.set(progress) + + # Update status periodically + if frame_count % 10 == 0: + elapsed = time.time() - processing_start + fps_processing = max(1, frame_count) / elapsed + eta_seconds = (total_frames - frame_count) / fps_processing + + status_text = f"Frame {frame_count+1}/{total_frames} - " \ + f"ETA: {int(eta_seconds//60)}m {int(eta_seconds%60)}s - " \ + f"Speed: {fps_processing:.1f} fps" + status_label.config(text=status_text) + progress_window.update() + + # Process frame - apply all effects using existing process_frame method + # Temporarily set frame_index for processing context + original_frame_index = self.frame_index + self.frame_index = current_frame_idx + + # Store face tracking data for manual mode + if self.blur_mode.get() == "manual" and self.auto_detect_faces and self.rtmpose_initialized: + # Detect faces periodically + if frame_count % self.detection_frequency == 0 or not self.tracked_faces: + self.tracked_faces = self.detect_faces(frame) + + # Store tracking data + if self.tracked_faces: + frame_faces = [] + for face_id, (x, y, w, h), confidence, _ in self.tracked_faces: + # Store face data for this frame + face_data = { + "face_id": face_id, + "bbox": [x, y, w, h], + "confidence": float(confidence) if confidence is not None else 0.0 + } + frame_faces.append(face_data) + + # Store face across all frames + if str(face_id) not in face_tracking_data["faces"]: + face_tracking_data["faces"][str(face_id)] = { + "frames": [current_frame_idx], + "bbox": [x, y, w, h], + "confidence": float(confidence) if confidence is not None else 0.0 + } + else: + face_tracking_data["faces"][str(face_id)]["frames"].append(current_frame_idx) + face_tracking_data["faces"][str(face_id)]["bbox"] = [x, y, w, h] + + # Store frame data + face_tracking_data["frames"][str(current_frame_idx)] = { + 
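+                            # Sketch of the JSON schema built here:
+                            #   "frames": {"<idx>": {"faces": [{"face_id": ...,
+                            #       "bbox": [x, y, w, h], "confidence": ...}]}}
+                            #   "faces":  {"<id>": {"frames": [...],
+                            #       "bbox": [...], "confidence": ...}}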
"faces": frame_faces + } + + # Apply all effects using the unified process_frame method + result_frame = self.process_frame(frame) + + # Restore original frame index + self.frame_index = original_frame_index + + # Write frame + try: + out.write(result_frame) + except Exception as e: + errors += 1 + print(f"Error writing frame {current_frame_idx}: {e}") + + current_frame_idx += 1 + frame_count += 1 + + # Export face tracking data if available (Manual Mode) + if self.auto_detect_faces and self.rtmpose_initialized and face_tracking_data["faces"]: + self.save_face_data_to_json(face_tracking_data, "manual", "video_processing") + + # Export auto mode face data if available + if (self.blur_mode.get() == "auto" and + self.auto_save_face_data.get() and + hasattr(self, 'auto_face_data') and + self.auto_face_data["faces"]): + self.save_face_data_to_json(self.auto_face_data, "auto", "video_processing") + + except Exception as e: + self.status_text.set(f"Error during processing: {str(e)}") + import traceback + traceback.print_exc() + + finally: + # Clean up + out.release() + progress_window.destroy() + + processing_time = time.time() - processing_start + + if errors > 0: + self.status_text.set(f"Video processing completed with {errors} errors " + + f"in {processing_time:.1f}s. Saved to {output_path}") + else: + self.status_text.set(f"Video processing completed in {processing_time:.1f}s. " + + f"Saved to {output_path}") + + # Reset to first frame + self.frame_index = 0 + self.cap.set(cv2.CAP_PROP_POS_FRAMES, 0) + self.show_current_frame() + + +# Main +if __name__ == "__main__": + root = tk.Tk() + app = VideoBlurApp(root) + root.geometry("1280x800") + root.minsize(1024, 768) + root.mainloop() \ No newline at end of file diff --git a/GUI/cache/citation_data.json b/GUI/cache/citation_data.json new file mode 100644 index 00000000..ddac0303 --- /dev/null +++ b/GUI/cache/citation_data.json @@ -0,0 +1 @@ +{"10.21105/joss.04362": {"title": "Pose2Sim: An open-source Python package for multiview markerless kinematics", "citation_count": 42, "last_updated": "2025-10-05"}, "10.3390/s22072712": {"title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics\u2014Part 2: Accuracy", "citation_count": 35, "last_updated": "2025-10-05"}, "10.3390/s21196530": {"title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics\u2014Part 1: Robustness", "citation_count": 48, "last_updated": "2025-10-05"}, "10.21105/joss.06849": {"title": "Sports2D: Compute 2D human pose and angles from a video or a webcam", "citation_count": 8, "last_updated": "2025-10-05"}} \ No newline at end of file diff --git a/GUI/config_generator.py b/GUI/config_generator.py new file mode 100644 index 00000000..c7a9ec42 --- /dev/null +++ b/GUI/config_generator.py @@ -0,0 +1,404 @@ +from pathlib import Path +import toml + + +class ConfigGenerator: + def __init__(self): + # Load templates from files to avoid comment issues + self.config_3d_template_path = Path('templates') / '3d_config_template.toml' + self.config_2d_template_path = Path('templates') /'2d_config_template.toml' + + # Create templates directory if it doesn't exist + Path('templates').mkdir(parents=True, exist_ok=True) + + # Write the template files if they don't exist + self.create_template_files() + + def create_template_files(self): + """Create template files if they don't exist""" + # Create 3D template file + if not self.config_3d_template_path.exists(): + with open(self.config_3d_template_path, 'w', encoding='utf-8') as f: + 
toml.dump(self.get_3d_template(), f) + + # Create 2D template file + if not self.config_2d_template_path.exists(): + with open(self.config_2d_template_path, 'w', encoding='utf-8') as f: + toml.dump(self.get_2d_template(), f) + + def get_3d_template(self): + """Return the 3D configuration template""" + try: + from Pose2Sim import Pose2Sim + config_template_3d = toml.load(Path(Pose2Sim.__file__).parent / 'Demo_SinglePerson' / 'Config.toml') + except: + # Fallback to default structure if Pose2Sim not available + config_template_3d = self.get_default_3d_structure() + return config_template_3d + + def get_2d_template(self): + """Return the 2D configuration template""" + try: + from Sports2D import Sports2D + config_template_2d = toml.load(Path(Sports2D.__file__).parent / 'Demo/Config_demo.toml') + except: + # Fallback to default structure if Sports2D not available + config_template_2d = self.get_default_2d_structure() + return config_template_2d + + def get_default_3d_structure(self): + """Default 3D config structure if Pose2Sim not installed""" + return { + 'project': { + 'multi_person': False, + 'participant_height': 'auto', + 'participant_mass': 70.0, + 'frame_rate': 'auto', + 'frame_range': [], + 'exclude_from_batch': [] + }, + 'pose': { + 'vid_img_extension': 'mp4', + 'pose_model': 'Body_with_feet', + 'mode': 'balanced', + 'det_frequency': 4, + 'device': 'auto', + 'backend': 'auto', + 'tracking_mode': 'sports2d', + 'deepsort_params': "{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}", + 'display_detection': True, + 'overwrite_pose': True, + 'save_video': 'none', + 'output_format': 'openpose' + }, + 'synchronization': { + 'display_sync_plots': True, + 'keypoints_to_consider': 'all', + 'approx_time_maxspeed': 'auto', + 'time_range_around_maxspeed': 2.0, + 'likelihood_threshold': 0.4, + 'filter_cutoff': 6, + 'filter_order': 4 + }, + 'calibration': { + 'calibration_type': 'convert', + 'convert': { + 'convert_from': 'qualisys', + 'qualisys': {'binning_factor': 1} + }, + 'calculate': { + 'intrinsics': { + 'overwrite_intrinsics': False, + 'show_detection_intrinsics': True, + 'intrinsics_extension': 'png', + 'extract_every_N_sec': 1, + 'intrinsics_corners_nb': [3, 5], + 'intrinsics_square_size': 34 + }, + 'extrinsics': { + 'calculate_extrinsics': True, + 'extrinsics_method': 'scene', + 'moving_cameras': False, + 'board': { + 'show_reprojection_error': True, + 'extrinsics_extension': 'mp4', + 'extrinsics_corners_nb': [4, 7], + 'extrinsics_square_size': 60 + }, + 'scene': { + 'show_reprojection_error': True, + 'extrinsics_extension': 'mp4', + 'object_coords_3d': [ + [0.0, 0.0, 0.0], + [-0.50, 0.0, 0.0], + [-1.0, 0.0, 0.0], + [-1.5, 0.0, 0.0], + [0.00, 0.50, 0.0], + [-0.50, 0.50, 0.0], + [-1.0, 0.50, 0.0], + [-1.50, 0.50, 0.0] + ] + } + } + } + }, + 'personAssociation': { + 'likelihood_threshold_association': 0.3, + 'single_person': { + 'reproj_error_threshold_association': 20, + 'tracked_keypoint': 'Neck' + }, + 'multi_person': { + 'reconstruction_error_threshold': 0.1, + 'min_affinity': 0.2 + } + }, + 'triangulation': { + 'reproj_error_threshold_triangulation': 15, + 'likelihood_threshold_triangulation': 0.3, + 'min_cameras_for_triangulation': 2, + 'interpolation': 'linear', + 'interp_if_gap_smaller_than': 10, + 'fill_large_gaps_with': 'last_value', + 'show_interp_indices': True, + 'handle_LR_swap': False, + 'undistort_points': False, + 'make_c3d': True + }, + 'filtering': { + 'type': 'butterworth', + 'display_figures': True, + 
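+                # A 4th-order low-pass Butterworth at 6 Hz is a common default
+                # for human movement data; raise cut_off_frequency for faster
+                # movements so genuine peaks are not flattened.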
'make_c3d': True, + 'butterworth': { + 'order': 4, + 'cut_off_frequency': 6 + }, + 'kalman': { + 'trust_ratio': 100, + 'smooth': True + }, + 'butterworth_on_speed': { + 'order': 4, + 'cut_off_frequency': 10 + }, + 'gaussian': { + 'sigma_kernel': 2 + }, + 'LOESS': { + 'nb_values_used': 30 + }, + 'median': { + 'kernel_size': 9 + } + }, + 'markerAugmentation': { + 'make_c3d': True + }, + 'kinematics': { + 'use_augmentation': True, + 'use_contacts_muscles': True, + 'right_left_symmetry': True, + 'default_height': 1.7, + 'remove_individual_scaling_setup': True, + 'remove_individual_IK_setup': True, + 'fastest_frames_to_remove_percent': 0.1, + 'close_to_zero_speed_m': 0.2, + 'large_hip_knee_angles': 45, + 'trimmed_extrema_percent': 0.5 + }, + 'logging': { + 'use_custom_logging': False + } + } + + def get_default_2d_structure(self): + """Default 2D config structure if Sports2D not installed""" + return { + 'base': { # CRITICAL: Use 'base' not 'project' for 2D + 'video_input': 'demo.mp4', + 'nb_persons_to_detect': 'all', + 'person_ordering_method': 'on_click', + 'first_person_height': 1.65, + 'visible_side': ['auto', 'front', 'none'], + 'load_trc_px': '', + 'compare': False, + 'time_range': [], + 'video_dir': '', + 'webcam_id': 0, + 'input_size': [1280, 720], + 'show_realtime_results': True, + 'save_vid': True, + 'save_img': True, + 'save_pose': True, + 'calculate_angles': True, + 'save_angles': True, + 'result_dir': '' + }, + 'pose': { + 'slowmo_factor': 1, + 'pose_model': 'Body_with_feet', + 'mode': 'balanced', + 'det_frequency': 4, + 'device': 'auto', + 'backend': 'auto', + 'tracking_mode': 'sports2d', + 'keypoint_likelihood_threshold': 0.3, + 'average_likelihood_threshold': 0.5, + 'keypoint_number_threshold': 0.3 + }, + 'px_to_meters_conversion': { + 'to_meters': True, + 'make_c3d': True, + 'save_calib': False, + 'floor_angle': 'auto', + 'xy_origin': ['auto'], + 'calib_file': '' + }, + 'angles': { + 'display_angle_values_on': ['body', 'list'], + 'fontSize': 0.3, + 'joint_angles': ['Right ankle', 'Left ankle', 'Right knee', 'Left knee', + 'Right hip', 'Left hip', 'Right shoulder', 'Left shoulder', + 'Right elbow', 'Left elbow', 'Right wrist', 'Left wrist'], + 'segment_angles': ['Right foot', 'Left foot', 'Right shank', 'Left shank', + 'Right thigh', 'Left thigh', 'Pelvis', 'Trunk', 'Shoulders', + 'Head', 'Right arm', 'Left arm', 'Right forearm', 'Left forearm'], + 'flip_left_right': True, + 'correct_segment_angles_with_floor_angle': True + }, + 'post-processing': { + 'interpolate': True, + 'interp_gap_smaller_than': 10, + 'fill_large_gaps_with': 'last_value', + 'sections_to_keep': 'all', + 'reject_outliers': True, + 'filter': True, + 'show_graphs': True, + 'save_graphs': True, + 'filter_type': 'butterworth', + 'butterworth': { + 'cut_off_frequency': 6, + 'order': 4 + }, + 'kalman': { + 'trust_ratio': 500, + 'smooth': True + }, + 'gcv_spline': { + 'gcv_cut_off_frequency': 'auto', + 'gcv_smoothing_factor': 0.1 + }, + 'loess': { + 'nb_values_used': 5 + }, + 'gaussian': { + 'sigma_kernel': 1 + }, + 'median': { + 'kernel_size': 3 + }, + 'butterworth_on_speed': { + 'order': 4, + 'cut_off_frequency': 10 + } + }, + 'kinematics': { + 'do_ik': False, + 'use_augmentation': False, + 'feet_on_floor': False, + 'use_simple_model': False, + 'participant_mass': [55.0, 67.0], + 'right_left_symmetry': True, + 'default_height': 1.7, + 'fastest_frames_to_remove_percent': 0.1, + 'close_to_zero_speed_px': 50, + 'close_to_zero_speed_m': 0.2, + 'large_hip_knee_angles': 45, + 'trimmed_extrema_percent': 0.5, + 
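+                # These thresholds mirror the 3D template's kinematics section
+                # above: they prune unreliable frames before scaling and IK.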
'remove_individual_scaling_setup': True, + 'remove_individual_ik_setup': True + }, + 'logging': { + 'use_custom_logging': False + } + } + + def generate_2d_config(self, config_path, settings): + """Generate configuration file for 2D analysis""" + try: + # Load the template + config = toml.load(self.config_2d_template_path) + + # Debug print to check settings + print("=" * 60) + print("2D Settings being applied:") + print(settings) + print("=" * 60) + + # CRITICAL FIX: Update sections recursively - this will overwrite template values + for section_name, section_data in settings.items(): + if section_name not in config: + config[section_name] = {} + + # Force update - don't preserve template defaults + self.update_nested_section(config[section_name], section_data, force_overwrite=True) + + # Debug print final config + print("=" * 60) + print("Final 2D Config:") + print(config) + print("=" * 60) + + # Write the updated config with pretty formatting + with open(config_path, 'w', encoding='utf-8') as f: + toml.dump(config, f) + + print(f"2D Config file saved successfully to {config_path}") + return True + except Exception as e: + print(f"Error generating 2D config: {e}") + import traceback + traceback.print_exc() + return False + + def generate_3d_config(self, config_path, settings): + """Generate configuration file for 3D analysis""" + try: + # Parse the template + config = toml.load(self.config_3d_template_path) + + # Debug print to check settings + print("=" * 60) + print("3D Settings being applied:") + print(settings) + print("=" * 60) + + # CRITICAL force overwrite template values + for section_name, section_data in settings.items(): + if section_name not in config: + config[section_name] = {} + + # Force update - don't preserve template defaults + self.update_nested_section(config[section_name], section_data, force_overwrite=True) + + # Debug print final config + print("=" * 60) + print("Final 3D Config:") + print(config) + print("=" * 60) + + # Write the updated config with pretty formatting + with open(config_path, 'w', encoding='utf-8') as f: + toml.dump(config, f) + + print(f"3D Config file saved successfully to {config_path}") + return True + except Exception as e: + print(f"Error generating 3D config: {e}") + import traceback + traceback.print_exc() + return False + + def update_nested_section(self, config_section, settings_section, force_overwrite=False): + """ + Recursively update nested sections of the configuration file. 
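+ 
+        Example (illustrative):
+            cfg = {'pose': {'mode': 'balanced', 'device': 'auto'}}
+            self.update_nested_section(cfg['pose'], {'mode': 'lightweight'},
+                                       force_overwrite=True)
+            # cfg == {'pose': {'mode': 'lightweight', 'device': 'auto'}}
+            # Nested dicts are merged key by key; scalar leaves are replaced.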
+ + Args: + config_section: The config dictionary section to update + settings_section: The settings dictionary section with new values + force_overwrite: If True, always overwrite config values with settings values + """ + if not isinstance(settings_section, dict): + return + + for key, value in settings_section.items(): + if isinstance(value, dict): + # If the key doesn't exist in the config section, create it + if key not in config_section: + config_section[key] = {} + + # Recursively update the subsection + self.update_nested_section(config_section[key], value, force_overwrite) + else: + # CRITICAL FIX: Always update the value (overwrite template defaults) + config_section[key] = value \ No newline at end of file diff --git a/GUI/intro.py b/GUI/intro.py new file mode 100644 index 00000000..a7f5ff9d --- /dev/null +++ b/GUI/intro.py @@ -0,0 +1,265 @@ +import tkinter as tk +import customtkinter as ctk +from PIL import Image, ImageTk +from pathlib import Path + +class IntroWindow: + """Displays an animated introduction window for the Pose2Sim GUI.""" + def __init__(self, color='dark'): + """Initializes the IntroWindow. + + Args: + color (str, optional): The color theme for the window. + Can be 'light' or 'dark'. Defaults to 'dark'. + """ + # Set color parameters based on choice + if color.lower() == 'light': + self.main_color = 'black' + self.shadow_color = '#404040' # Dark gray + self.main_color_value = 0 + self.shadow_color_value = 64 + self.bg_color = '#F0F0F0' # Light gray + elif color.lower() == 'dark': + self.main_color = 'white' + self.shadow_color = '#AAAAAA' # Light gray + self.main_color_value = 255 + self.shadow_color_value = 170 + self.bg_color = '#1A1A1A' # Very dark gray + + favicon_path = Path(__file__).parent/"assets/Pose2Sim_logo.png" + + # Create the intro window + self.root = ctk.CTk() + + # Favicon + favicon = Image.open(favicon_path) + icon = ImageTk.PhotoImage(favicon) + self.root.iconphoto(False, icon) + + self.root.title("Welcome to Pose2Sim") + + # Get screen dimensions + screen_width = self.root.winfo_screenwidth() + screen_height = self.root.winfo_screenheight() + + # Set window size (80% of screen size) + # window_width = int(screen_width * 0.7) + # window_height = int(screen_height * 0.7) + + # Size should be same as app.py + window_width = 1300 + window_height = 800 + + # Calculate position for center of screen + x = (screen_width - window_width) // 2 + y = (screen_height - window_height) // 2 + + # Set window size and position + self.root.geometry(f"{window_width}x{window_height}+{x}+{y}") + + # Set background color + self.root.configure(fg_color=self.bg_color) + + # Create canvas for animation + self.canvas = tk.Canvas(self.root, bg=self.bg_color, highlightthickness=0) + self.canvas.pack(expand=True, fill='both') + + # Add logo above the letters + self.top_image = Image.open(favicon_path) + self.top_photo = ImageTk.PhotoImage(self.top_image) + + self.canvas.create_image( + window_width / 2, + window_height / 2 - 200, + image=self.top_photo, + anchor='center' + ) + + # Create individual letters with initial opacity + letters = ['P', 'o', 's', 'e', '2', 'S', 'i', 'm'] + self.text_ids = [] + self.shadow_ids = [] # Add shadow text IDs + spacing = 50 # Adjust spacing between letters + total_width = len(letters) * spacing + start_x = window_width/2 - total_width/2 + + for i, letter in enumerate(letters): + # Adjust font size for P and S + font_size = 78 if letter in ['P', '2', 'S'] else 70 + + if letter == 'i' or letter == 'm': + spacing = 49 + elif letter == 'i': 
+ spacing = 55 + elif letter == 'S': + spacing = 51 + elif letter == 's': + spacing = 52 + elif letter == 'o': + spacing = 54 + # Create shadow text (slightly offset) + shadow_id = self.canvas.create_text( + start_x + i * spacing + 2, # Offset by 2 pixels right + window_height/2 + 2, # Offset by 2 pixels down + text=letter, + font=('Helvetica', font_size, 'bold'), + fill=self.shadow_color, + state='hidden' + ) + self.shadow_ids.append(shadow_id) + + # Create main text + text_id = self.canvas.create_text( + start_x + i * spacing, + window_height/2, + text=letter, + font=('Helvetica', font_size, 'bold'), + fill=self.main_color, + state='hidden' + ) + + self.text_ids.append(text_id) + spacing = 50 # Reset spacing for other letters + + # Store animation parameters + self.opacity = 0 + self.fadein_step = 0.008 # Time step for fade-in/out + self.fadeout_step = 0.0018 + self.current_group = 0 # Track current group (0: Pose, 1: 2, 2: Sim) + self.animation_done = False + self.after_id = None + + # Create subtitle text + subtitle = "markerless motion capture solution" + subtitle_font_size = 26 + + self.subtitle_shadow_id = self.canvas.create_text( + window_width/2 - 30 + 1, + window_height/2 + 60 + 1, + text=subtitle, + font=('Helvetica', subtitle_font_size), + fill=self.shadow_color, + state='hidden' + ) + + self.subtitle_id = self.canvas.create_text( + window_width/2 - 30, + window_height/2 + 60, + text=subtitle, + font=('Helvetica', subtitle_font_size), + fill=self.main_color, + state='hidden' + ) + + # Define letter groups (including shadows) + self.groups = [ + list(zip(self.text_ids[:4], self.shadow_ids[:4])), # Pose + list(zip([self.text_ids[4]], [self.shadow_ids[4]])), # 2 + list(zip(self.text_ids[5:], self.shadow_ids[5:])) # Sim + ] + + # Add subtitle as the 4th group + self.groups.append([(self.subtitle_id, self.subtitle_shadow_id)]) + + # Bind window close event + self.root.protocol("WM_DELETE_WINDOW", self.on_closing) + + # Start the fade-in animation after a short delay + self.after_id = self.root.after(150, self.fade_in) + + def on_closing(self): + """Handles the window closing event.""" + if self.after_id: + self.root.after_cancel(self.after_id) + self.animation_done = True + self.root.destroy() + + def fade_in(self): + """Animates the fade-in effect for the text elements.""" + if not self.root.winfo_exists(): + return + if self.current_group < len(self.groups): + if self.opacity < 1: + self.opacity += self.fadein_step + # Make current group visible and set opacity + for text_id, shadow_id in self.groups[self.current_group]: + self.canvas.itemconfig(shadow_id, state='normal') + self.canvas.itemconfig(text_id, state='normal') + + # Calculate color values based on mode + if self.main_color == 'black': + # Light mode (black text on #F0F0F0 background) + main_r = int(240 * (1 - self.opacity) + 0 * self.opacity) # Fade from bg color (240) to black (0) + shadow_r = int(240 * (1 - self.opacity) + 64 * self.opacity) # Fade from bg color to shadow + hex_color = f'#{main_r:02x}{main_r:02x}{main_r:02x}' + shadow_color = f'#{shadow_r:02x}{shadow_r:02x}{shadow_r:02x}' + elif self.main_color == 'white': + # Dark mode (white text on #1A1A1A background) + main_r = int(26 * (1 - self.opacity) + 255 * self.opacity) # Fade from bg color (26) to white (255) + shadow_r = int(26 * (1 - self.opacity) + self.shadow_color_value * self.opacity) # Fade from bg to shadow + hex_color = f'#{main_r:02x}{main_r:02x}{main_r:02x}' + shadow_color = f'#{shadow_r:02x}{shadow_r:02x}{shadow_r:02x}' + + 
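+                    # Both branches are the same per-channel linear interpolation:
+                    #     value = bg * (1 - opacity) + target * opacity
+                    # so the text rises from the background colour to full contrast.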
self.canvas.itemconfig(shadow_id, fill=shadow_color) + self.canvas.itemconfig(text_id, fill=hex_color) + self.after_id = self.root.after(1, self.fade_in) + else: + self.opacity = 0 + self.current_group += 1 + self.after_id = self.root.after(1, self.fade_in) + else: + self.opacity = 1 + self.fade_out() + + def fade_out(self): + """Animates the fade-out effect for the text elements and closes the window.""" + if not self.root.winfo_exists(): + return + if self.opacity > 0: + self.opacity -= self.fadeout_step + # Update all letters opacity together + + # Calculate color values based on mode + if self.main_color == 'black': + # Light mode (black text on #F0F0F0 background) + main_r = int(240 * (1 - self.opacity) + 0 * self.opacity) # Fade from bg color (240) to black (0) + shadow_r = int(240 * (1 - self.opacity) + 64 * self.opacity) # Fade from bg color to shadow + hex_color = f'#{main_r:02x}{main_r:02x}{main_r:02x}' + shadow_color = f'#{shadow_r:02x}{shadow_r:02x}{shadow_r:02x}' + elif self.main_color == 'white': + # Dark mode (white text on #1A1A1A background) + main_r = int(26 * (1 - self.opacity) + 255 * self.opacity) # Fade from bg color (26) to white (255) + shadow_r = int(26 * (1 - self.opacity) + self.shadow_color_value * self.opacity) # Fade from bg to shadow + hex_color = f'#{main_r:02x}{main_r:02x}{main_r:02x}' + shadow_color = f'#{shadow_r:02x}{shadow_r:02x}{shadow_r:02x}' + + + for text_id, shadow_id in zip(self.text_ids, self.shadow_ids): + self.canvas.itemconfig(shadow_id, fill=shadow_color) + self.canvas.itemconfig(text_id, fill=hex_color) + self.canvas.itemconfig(self.subtitle_shadow_id, fill=shadow_color) + self.canvas.itemconfig(self.subtitle_id, fill=hex_color) + self.after_id = self.root.after(1, self.fade_out) + else: + self.animation_done = True + if self.root.winfo_exists(): + self.on_closing() + + def run(self): + """Runs the main event loop for the intro window. + + Returns: + bool: True when the animation is complete and the window is closed. 
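+ 
+        Note:
+            mainloop() blocks until fade_out() completes and on_closing()
+            destroys the window, so the cleanup below only runs once the
+            intro has already closed.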
+ """ + self.root.mainloop() + + if self.after_id: + self.root.after_cancel(self.after_id) + + self.animation_done = True + return self.animation_done + +if __name__ == "__main__": + + intro = IntroWindow('dark') + intro.run() diff --git a/GUI/language_manager.py b/GUI/language_manager.py new file mode 100644 index 00000000..b4103d40 --- /dev/null +++ b/GUI/language_manager.py @@ -0,0 +1,209 @@ +class LanguageManager: + def __init__(self): + # Define dictionaries for each language + self.translations = { + 'en': { + # General UI elements + 'app_title': "Pose2Sim Configuration Tool", + 'next': "Next", + 'previous': "Previous", + 'save': "Save", + 'cancel': "Cancel", + 'confirm': "Confirm", + 'error': "Error", + 'warning': "Warning", + 'info': "Information", + 'success': "Success", + 'select': "Select", + + # Welcome screen + 'welcome_title': "Welcome to Pose2Sim", + 'welcome_subtitle': "3D Pose Estimation Configuration Tool", + 'select_language': "Select Language", + 'select_analysis_mode': "Select Analysis Mode", + '2d_analysis': "2D Analysis", + '3d_analysis': "3D Analysis", + "single_camera": "Single camera", + "multi_camera": "Two cameras or more", + '2d_description': "Track subjects in 2D space from a single camera view.", + '3d_description': "Reconstruct 3D motion using multiple synchronized cameras.", + 'single_mode': "Single Mode", + 'batch_mode': "Batch Mode", + 'enter_participant_name': "Enter Participant Name:", + 'enter_trials_number': "Enter Number of Trials:", + + # Calibration tab + 'calibration_tab': "Calibration", + 'calibration_type': "Calibration Type:", + 'calculate': "Calculate", + 'convert': "Convert", + 'num_cameras': "Number of Cameras:", + 'checkerboard_width': "Checkerboard Width:", + 'checkerboard_height': "Checkerboard Height:", + 'square_size': "Square Size (mm):", + 'video_extension': "Video/Image Extension:", + 'proceed_calibration': "Proceed with Calibration", + + # Prepare Video tab + 'prepare_video_tab': "Prepare Video", + 'only_checkerboard': "Do your videos contain only checkerboard images?", + 'time_interval': "Enter time interval in seconds for image extraction:", + 'image_format': "Enter the image format (e.g., png, jpg):", + 'proceed_prepare_video': "Proceed with Prepare Video", + + # Pose Model tab + 'pose_model_tab': "Pose Estimation", + 'multiple_persons': "Multiple Persons:", + 'single_person': "Single Person:", + 'participant_height': "Participant Height (m):", + 'participant_mass': "Participant Mass (kg):", + 'pose_model_selection': "Pose Model Selection:", + 'mode': "Mode:", + 'proceed_pose_estimation': "Proceed with Pose Estimation", + + # Synchronization tab + 'synchronization_tab': "Synchronization", + 'skip_sync': "Skip synchronization part? 
(Videos are already synchronized)", + 'select_keypoints': "Select keypoints to consider for synchronization:", + 'approx_time': "Do you want to specify approximate times of movement?", + 'time_range': "Time interval around max speed (seconds):", + 'likelihood_threshold': "Likelihood Threshold:", + 'filter_cutoff': "Filter Cutoff (Hz):", + 'filter_order': "Filter Order:", + 'save_sync_settings': "Save Synchronization Settings", + + # Advanced tab + 'advanced_tab': "Advanced Configuration", + 'frame_rate': "Frame Rate (fps):", + 'frame_range': "Frame Range (e.g., [10, 300]):", + 'person_association': "Person Association", + 'triangulation': "Triangulation", + 'filtering': "Filtering", + 'marker_augmentation': "Marker Augmentation", + 'kinematics': "Kinematics", + 'save_advanced_settings': "Save Advanced Settings", + + # Activation tab + 'activation_tab': "Activation", + 'launch_options': "Choose how you want to launch Pose2Sim:", + 'launch_cmd': "Launch with CMD", + 'launch_conda': "Run analysis", #"Launch with Anaconda Prompt", + 'launch_powershell': "Launch with PowerShell", + + # Batch tab + 'batch_tab': "Batch Configuration", + 'trial_config': "Trial-Specific Configuration", + 'batch_info': "Configure trial-specific parameters. Other settings will be inherited from the main configuration.", + 'save_trial_config': "Save Trial Configuration", + }, + 'fr': { + # General UI elements + 'app_title': "Outil de Configuration Pose2Sim", + 'next': "Suivant", + 'previous': "Précédent", + 'save': "Sauvegarder", + 'cancel': "Annuler", + 'confirm': "Confirmer", + 'error': "Erreur", + 'warning': "Avertissement", + 'info': "Information", + 'success': "Succès", + 'select': "Sélectionner", + + # Welcome screen + 'welcome_title': "Bienvenue sur Pose2Sim", + 'welcome_subtitle': "Outil de Configuration de l'Estimation de Pose 3D", + 'select_language': "Sélectionnez la Langue", + 'select_analysis_mode': "Sélectionnez le Mode d'Analyse", + '2d_analysis': "Analyse 2D", + '3d_analysis': "Analyse 3D", + "single_camera": "Une seule caméra", + "multi_camera": "Au moins deux caméras", + '2d_description': "Suivez des sujets en 2D à partir d'une seule caméra.", + '3d_description': "Reconstruisez des mouvements en 3D avec plusieurs caméras synchronisées.", + 'single_mode': "Mode Simple", + 'batch_mode': "Mode Batch", + 'enter_participant_name': "Entrez le Nom du Participant :", + 'enter_trials_number': "Entrez le Nombre d'Essais :", + + # Calibration tab + 'calibration_tab': "Calibration", + 'calibration_type': "Type de Calibration :", + 'calculate': "Calculer", + 'convert': "Convertir", + 'num_cameras': "Nombre de Caméras :", + 'checkerboard_width': "Largeur de l'Échiquier :", + 'checkerboard_height': "Hauteur de l'Échiquier :", + 'square_size': "Taille du Carré (mm) :", + 'video_extension': "Extension Vidéo/Image :", + 'proceed_calibration': "Procéder à la Calibration", + + # Prepare Video tab + 'prepare_video_tab': "Préparer la Vidéo", + 'only_checkerboard': "Vos vidéos contiennent-elles uniquement des images d'échiquier ?", + 'time_interval': "Entrez l'intervalle de temps en secondes pour l'extraction d'images :", + 'image_format': "Entrez le format d'image (ex : png, jpg) :", + 'proceed_prepare_video': "Procéder à la Préparation Vidéo", + + # Pose Model tab + 'pose_model_tab': "Estimation de Pose", + 'multiple_persons': "Plusieurs Personnes :", + 'single_person': "Personne Unique :", + 'participant_height': "Taille du Participant (m) :", + 'participant_mass': "Masse du Participant (kg) :", + 
'pose_model_selection': "Sélection du Modèle de Pose :", + 'mode': "Mode :", + 'proceed_pose_estimation': "Procéder à l'Estimation de Pose", + + # Synchronization tab + 'synchronization_tab': "Synchronisation", + 'skip_sync': "Passer la synchronisation ? (Les vidéos sont déjà synchronisées)", + 'select_keypoints': "Sélectionnez les points clés à considérer pour la synchronisation :", + 'approx_time': "Voulez-vous spécifier des temps approximatifs de mouvement ?", + 'time_range': "Intervalle de temps autour de la vitesse max (secondes) :", + 'likelihood_threshold': "Seuil de Vraisemblance :", + 'filter_cutoff': "Fréquence de Coupure du Filtre (Hz) :", + 'filter_order': "Ordre du Filtre :", + 'save_sync_settings': "Sauvegarder les Paramètres de Synchronisation", + + # Advanced tab + 'advanced_tab': "Configuration Avancée", + 'frame_rate': "Fréquence d'Images (fps) :", + 'frame_range': "Plage d'Images (ex : [10, 300]) :", + 'person_association': "Association de Personne", + 'triangulation': "Triangulation", + 'filtering': "Filtrage", + 'marker_augmentation': "Augmentation de Marqueurs", + 'kinematics': "Cinématique", + 'save_advanced_settings': "Sauvegarder les Paramètres Avancés", + + # Activation tab + 'activation_tab': "Activation", + 'launch_options': "Choisissez comment lancer Pose2Sim :", + 'launch_cmd': "Lancer avec CMD", + 'launch_conda': "Lancer l'analyse", #"Lancer avec Anaconda Prompt", + 'launch_powershell': "Lancer avec PowerShell", + + # Batch tab + 'batch_tab': "Configuration Batch", + 'trial_config': "Configuration Spécifique à l'Essai", + 'batch_info': "Configurez les paramètres spécifiques à l'essai. Les autres paramètres seront hérités de la configuration principale.", + 'save_trial_config': "Sauvegarder la Configuration de l'Essai", + } + } + + # Default language is English + self.current_language = 'en' + + def set_language(self, lang_code): + """Sets the current language""" + if lang_code in self.translations: + self.current_language = lang_code + + def get_text(self, key): + """Gets the text for a given key in the current language""" + if key in self.translations[self.current_language]: + return self.translations[self.current_language][key] + else: + # Return the key itself if translation not found + return key \ No newline at end of file diff --git a/GUI/main.py b/GUI/main.py new file mode 100644 index 00000000..88e4b6a7 --- /dev/null +++ b/GUI/main.py @@ -0,0 +1,35 @@ +import customtkinter as ctk +from GUI.app import Pose2SimApp +from GUI.intro import IntroWindow +from PIL import Image, ImageTk +from pathlib import Path + +def main(): + # Set appearance mode and color theme + ctk.set_appearance_mode("System") # Options: "System" (default), "Dark", "Light" + ctk.set_default_color_theme("blue") # Options: "blue" (default), "green", "dark-blue" + + # Run Intro Window + # Determine appearance mode for IntroWindow + current_appearance_mode = ctk.get_appearance_mode().lower() + if current_appearance_mode not in ['light', 'dark']: + current_appearance_mode = 'dark' # Default to dark + + intro = IntroWindow(color=current_appearance_mode) + intro.run() + + # Create the Tkinter root window + root = ctk.CTk() + favicon_path = Path(__file__).parent/"assets/Pose2Sim_logo.png" + favicon = Image.open(favicon_path) + icon = ImageTk.PhotoImage(favicon) + root.iconphoto(False, icon) + + # Initialize and run the application + app = Pose2SimApp(root) + + # Start the Tkinter event loop + root.mainloop() + +if __name__ == "__main__": + main() \ No newline at end of file diff --git 
a/GUI/tabs/__init__.py b/GUI/tabs/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/GUI/tabs/about_tab.py b/GUI/tabs/about_tab.py new file mode 100644 index 00000000..f2a159c7 --- /dev/null +++ b/GUI/tabs/about_tab.py @@ -0,0 +1,1023 @@ +from pathlib import Path +import requests +import threading +import tkinter as tk +import customtkinter as ctk +from tkinter import messagebox +import webbrowser +import datetime +import re +import time +import json +import traceback +from PIL import Image + +class AboutTab: + def __init__(self, parent, app): + self.parent = parent + self.app = app + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # GitHub repository URL + self.github_url = "https://github.com/perfanalytics/pose2sim" + + # Citation DOIs for tracking + self.citation_dois = [ + "10.21105/joss.04362", # Pose2Sim JOSS paper + "10.3390/s22072712", # Pose2Sim Accuracy paper + "10.3390/s21196530", # Pose2Sim Robustness paper + "10.21105/joss.06849" # Sports2D JOSS paper + ] + + # Data storage + self.releases = [] + self.citations = [] + self.citation_data = {} + self.latest_version = "Unknown" + + # Build the UI + self.build_ui() + + # Fetch data in background threads + threading.Thread(target=self.fetch_github_releases, daemon=True).start() + threading.Thread(target=self.fetch_citation_data, daemon=True).start() + + def get_title(self): + """Return the tab title""" + return "About Us" + + def get_settings(self): + """Get the about tab settings""" + return {} # This tab doesn't add settings to the config file + + def show_update_instructions(self): + """Show update instructions""" + # Create a custom dialog with instructions + dialog = ctk.CTkToplevel(self.frame) + dialog.title("Update Pose2Sim") + dialog.geometry("500x300") + dialog.transient(self.frame) # Set as transient to main window + dialog.grab_set() # Make it modal + + # Center the window + dialog.update_idletasks() + x = (dialog.winfo_screenwidth() - dialog.winfo_width()) // 2 + y = (dialog.winfo_screenheight() - dialog.winfo_height()) // 2 + dialog.geometry(f"+{x}+{y}") + + # Dialog content + version_text = f"to version {self.latest_version}" if self.latest_version != "Unknown" else "to the latest version" + ctk.CTkLabel( + dialog, + text=f"Update Pose2Sim {version_text}", + font=("Helvetica", 16, "bold") + ).pack(pady=(20, 15)) + + # Instructions frame with monospace font for command + instruction_frame = ctk.CTkFrame(dialog) + instruction_frame.pack(fill='both', expand=True, padx=20, pady=10) + + ctk.CTkLabel( + instruction_frame, + text="To update Pose2Sim to the latest version:", + font=("Helvetica", 12), + anchor="w", + justify="left" + ).pack(fill='x', padx=10, pady=(10, 5)) + + ctk.CTkLabel( + instruction_frame, + text="1. Open a command prompt or terminal", + font=("Helvetica", 12), + anchor="w", + justify="left" + ).pack(fill='x', padx=10, pady=2) + + ctk.CTkLabel( + instruction_frame, + text="2. 
Run the following command:", + font=("Helvetica", 12), + anchor="w", + justify="left" + ).pack(fill='x', padx=10, pady=2) + + # Command box with copy button + cmd_frame = ctk.CTkFrame(instruction_frame, fg_color=("gray95", "gray20")) + cmd_frame.pack(fill='x', padx=20, pady=10) + + command = "pip install pose2sim --upgrade" + cmd_text = ctk.CTkTextbox( + cmd_frame, + height=30, + font=("Courier", 12), + wrap="none" + ) + cmd_text.pack(fill='x', padx=10, pady=(10, 5)) + cmd_text.insert("1.0", command) + cmd_text.configure(state="disabled") + + # Copy button + ctk.CTkButton( + cmd_frame, + text="Copy Command", + command=lambda: self.copy_to_clipboard(command), + width=120, + height=28 + ).pack(anchor='e', padx=10, pady=(0, 10)) + + ctk.CTkLabel( + instruction_frame, + text="3. Restart this application after updating", + font=("Helvetica", 12), + anchor="w", + justify="left" + ).pack(fill='x', padx=10, pady=2) + + # Close button + ctk.CTkButton( + dialog, + text="Close", + command=dialog.destroy, + width=100, + height=32 + ).pack(pady=15) + + def build_ui(self): + """Build the about tab UI""" + # Create a scrollable content frame with more padding + self.content_frame = ctk.CTkScrollableFrame(self.frame) + self.content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Create header with logo and title + self.create_header() + + # Create What's New section + self.create_whats_new_section() + + # Create Contributors section + self.create_contributors_section() + + # Create Citation section + self.create_citation_section() + + # Create Citation Tracker section + self.create_citation_tracker_section() + + def create_header(self): + """Create header with logo and title""" + header_frame = ctk.CTkFrame(self.content_frame, fg_color="transparent") + header_frame.pack(fill='x', pady=(0, 25)) + + # Left side for logo and version info + left_frame = ctk.CTkFrame(header_frame, fg_color="transparent") + left_frame.pack(side='left', fill='y') + + # Try to load logo image + logo_path = Path(__file__).parent.parent / "assets" / "Pose2Sim_logo.png" + try: + if logo_path.exists(): + logo_img = Image.open(logo_path) + logo_img = logo_img.resize((100, 100), Image.LANCZOS) + logo = ctk.CTkImage(light_image=logo_img, dark_image=logo_img, size=(100, 100)) + + logo_label = ctk.CTkLabel(left_frame, image=logo, text="") + logo_label.image = logo # Keep a reference + logo_label.pack(padx=20) + except Exception: + # If logo loading fails, just skip it + pass + + # Update button with improved styling + update_button = ctk.CTkButton( + left_frame, + text="Update Pose2Sim", + command=self.show_update_instructions, + width=160, + height=28, + corner_radius=8, + fg_color=("#28A745", "#218838"), + hover_color=("#218838", "#1E7E34") + ) + update_button.pack(pady=(5, 0)) + + # Title and description + title_frame = ctk.CTkFrame(header_frame, fg_color="transparent") + title_frame.pack(side='left', fill='both', expand=True, padx=20) + + ctk.CTkLabel( + title_frame, + text="Pose2Sim", + font=("Helvetica", 28, "bold") + ).pack(anchor='w') + + ctk.CTkLabel( + title_frame, + text="An open-source Python package for multiview markerless kinematics", + font=("Helvetica", 16) + ).pack(anchor='w', pady=(5, 0)) + + # Website and GitHub buttons - improved styling + button_frame = ctk.CTkFrame(header_frame, fg_color="transparent") + button_frame.pack(side='right', padx=20) + + ctk.CTkButton( + button_frame, + text="GitHub", + command=lambda: webbrowser.open(self.github_url), + width=120, + height=32, + corner_radius=8, + 
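# colour tuples in customtkinter are (light-mode value, dark-mode value)
+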
hover_color=("#2E86C1", "#1F618D") + ).pack(pady=5) + + ctk.CTkButton( + button_frame, + text="Documentation", + command=lambda: webbrowser.open("https://github.com/perfanalytics/pose2sim"), + width=120, + height=32, + corner_radius=8, + hover_color=("#2E86C1", "#1F618D") + ).pack(pady=5) + + def create_whats_new_section(self): + """Create What's New section showing recent releases""" + # Section frame with improved styling + section_frame = ctk.CTkFrame(self.content_frame, corner_radius=10) + section_frame.pack(fill='x', pady=15) + + # Section header with improved styling + ctk.CTkLabel( + section_frame, + text="What's New", + font=("Helvetica", 20, "bold"), + ).pack(anchor='w', padx=20, pady=(15, 5)) + + # Add a separator + separator = ctk.CTkFrame(section_frame, height=2, fg_color=("gray80", "gray30")) + separator.pack(fill='x', padx=20, pady=(0, 15)) + + # Loading indicator with improved styling + self.releases_loading_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + self.releases_loading_frame.pack(fill='x', padx=20, pady=15) + + ctk.CTkLabel( + self.releases_loading_frame, + text="Loading recent releases...", + font=("Helvetica", 12) + ).pack(pady=10) + + progress = ctk.CTkProgressBar(self.releases_loading_frame, height=10) + progress.pack(fill='x', padx=40, pady=5) + progress.configure(mode="indeterminate") + progress.start() + + # Create frame for releases (initially empty) with improved styling + self.releases_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + self.releases_frame.pack(fill='x', padx=20, pady=10) + + def create_contributors_section(self): + """Create Contributors section with improved styling""" + # Section frame with improved styling + section_frame = ctk.CTkFrame(self.content_frame, corner_radius=10) + section_frame.pack(fill='x', pady=15) + + # Section header with improved styling + ctk.CTkLabel( + section_frame, + text="Acknowledgements", + font=("Helvetica", 20, "bold"), + ).pack(anchor='w', padx=20, pady=(15, 5)) + + # Add a separator + separator = ctk.CTkFrame(section_frame, height=2, fg_color=("gray80", "gray30")) + separator.pack(fill='x', padx=20, pady=(0, 15)) + + ## Community Contributors Section with improved styling + community_frame = ctk.CTkFrame(section_frame, fg_color=("gray95", "gray20"), corner_radius=8) + community_frame.pack(fill='x', padx=20, pady=15) + + # Contributors Details with improved styling and readability + contributors_text = ( + "Thanks to all the contributors who have helped improve Pose2Sim through their valuable support:\n\n" + "Supervised my PhD: @lreveret (INRIA, Université Grenoble Alpes), @mdomalai (Université de Poitiers).\n" + "Provided the Demo data: @aaiaueil (Université Gustave Eiffel).\n" + "Tested the code and provided feedback: @simonozan, @daeyongyang, @ANaaim, @rlagnsals.\n" + "Submitted various accepted pull requests: @ANaaim, @rlagnsals, @peterlololsss.\n" + "Provided a code snippet for Optitrack calibration: @claraaudap (Université Bretagne Sud).\n" + "Issued MPP2SOS, a (non-free) Blender extension based on Pose2Sim: @carlosedubarreto.\n" + "Bug reports, feature suggestions, and code contributions: @AYLARDJ (AYLardjne), @M.BLANDEAU, @J.Janseen." 
+ ) + + ctk.CTkLabel( + community_frame, + text=contributors_text, + wraplength=800, + justify="left", + padx=15, + pady=15 + ).pack(fill='x', padx=10, pady=10) + + # View all contributors button with improved styling + ctk.CTkButton( + community_frame, + text="View All Contributors on GitHub", + command=lambda: webbrowser.open(f"{self.github_url}/graphs/contributors"), + width=250, + height=35, + corner_radius=8, + hover_color=("#2E86C1", "#1F618D") + ).pack(anchor='w', padx=10, pady=(0, 15)) + + def create_citation_section(self): + """Create Citation section with paper references - improved styling""" + # Section frame with improved styling + section_frame = ctk.CTkFrame(self.content_frame, corner_radius=10) + section_frame.pack(fill='x', pady=15) + + # Section header with improved styling + ctk.CTkLabel( + section_frame, + text="How to Cite", + font=("Helvetica", 20, "bold"), + ).pack(anchor='w', padx=20, pady=(15, 5)) + + # Add a separator + separator = ctk.CTkFrame(section_frame, height=2, fg_color=("gray80", "gray30")) + separator.pack(fill='x', padx=20, pady=(0, 15)) + + # Citation information with improved styling + info_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + info_frame.pack(fill='x', padx=20, pady=10) + + ctk.CTkLabel( + info_frame, + text="If you use Pose2Sim in your work, please cite the following papers:", + font=("Helvetica", 14), + wraplength=800, + justify="left" + ).pack(anchor='w', padx=10, pady=10) + + # Papers to cite - improved layout and styling + self.papers = [ + { + "title": "Pose2Sim: An open-source Python package for multiview markerless kinematics", + "authors": "Pagnon David, Domalain Mathieu and Reveret Lionel", + "journal": "Journal of Open Source Software", + "year": "2022", + "doi": "10.21105/joss.04362", + "url": "https://joss.theoj.org/papers/10.21105/joss.04362", + "bibtex": """@Article{Pagnon_2022_JOSS, + AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel}, + TITLE = {Pose2Sim: An open-source Python package for multiview markerless kinematics}, + JOURNAL = {Journal of Open Source Software}, + YEAR = {2022}, + DOI = {10.21105/joss.04362}, + URL = {https://joss.theoj.org/papers/10.21105/joss.04362} +}""" + }, + { + "title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 2: Accuracy", + "authors": "Pagnon David, Domalain Mathieu and Reveret Lionel", + "journal": "Sensors", + "year": "2022", + "doi": "10.3390/s22072712", + "url": "https://www.mdpi.com/1424-8220/22/7/2712", + "bibtex": """@Article{Pagnon_2022_Accuracy, + AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel}, + TITLE = {Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 2: Accuracy}, + JOURNAL = {Sensors}, + YEAR = {2022}, + DOI = {10.3390/s22072712}, + URL = {https://www.mdpi.com/1424-8220/22/7/2712} +}""" + }, + { + "title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 1: Robustness", + "authors": "Pagnon David, Domalain Mathieu and Reveret Lionel", + "journal": "Sensors", + "year": "2021", + "doi": "10.3390/s21196530", + "url": "https://www.mdpi.com/1424-8220/21/19/6530", + "bibtex": """@Article{Pagnon_2021_Robustness, + AUTHOR = {Pagnon, David and Domalain, Mathieu and Reveret, Lionel}, + TITLE = {Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 1: Robustness}, + JOURNAL = {Sensors}, + YEAR = {2021}, + DOI = {10.3390/s21196530}, + URL = {https://www.mdpi.com/1424-8220/21/19/6530} +}""" + }, + { + "title": "Sports2D: Compute 2D human 
pose and angles from a video or a webcam", + "authors": "Pagnon David and Kim HunMin", + "journal": "Journal of Open Source Software", + "year": "2024", + "doi": "10.21105/joss.06849", + "url": "https://joss.theoj.org/papers/10.21105/joss.06849", + "bibtex": """@article{Pagnon_Sports2D_Compute_2D_2024, + author = {Pagnon, David and Kim, HunMin}, + doi = {10.21105/joss.06849}, + journal = {Journal of Open Source Software}, + month = sep, + number = {101}, + pages = {6849}, + title = {{Sports2D: Compute 2D human pose and angles from a video or a webcam}}, + url = {https://joss.theoj.org/papers/10.21105/joss.06849}, + volume = {9}, + year = {2024} +}""" + }, + ] + + # Create expandable sections for each paper with improved styling + for i, paper in enumerate(self.papers): + paper_frame = ctk.CTkFrame(info_frame, fg_color=("gray90", "gray25"), corner_radius=8) + paper_frame.pack(fill='x', padx=10, pady=8) + + # Paper title and year with improved styling + title_frame = ctk.CTkFrame(paper_frame, fg_color="transparent") + title_frame.pack(fill='x', padx=10, pady=(10, 5)) + + ctk.CTkLabel( + title_frame, + text=f"{paper['title']} ({paper['year']})", + font=("Helvetica", 14, "bold"), + text_color=("blue", "#5B8CD7"), + anchor="w", + wraplength=700 + ).pack(fill='x', padx=5) + + # Paper details with improved styling + details_frame = ctk.CTkFrame(paper_frame, fg_color="transparent") + details_frame.pack(fill='x', padx=15, pady=(5, 10)) + + ctk.CTkLabel( + details_frame, + text=f"Authors: {paper['authors']}", + anchor="w", + wraplength=750, + justify="left" + ).pack(anchor='w', pady=(5, 0)) + + ctk.CTkLabel( + details_frame, + text=f"Journal: {paper['journal']}", + anchor="w", + justify="left" + ).pack(anchor='w', pady=(5, 0)) + + if "volume" in paper: + ctk.CTkLabel( + details_frame, + text=f"Volume: {paper['volume']}, Number: {paper['number']}", + anchor="w", + justify="left" + ).pack(anchor='w', pady=(5, 0)) + + # DOI and buttons with improved styling + button_frame = ctk.CTkFrame(details_frame, fg_color="transparent") + button_frame.pack(fill='x', pady=(10, 5)) + + # DOI button + ctk.CTkButton( + button_frame, + text=f"DOI: {paper['doi']}", + anchor="w", + fg_color=("#E1F0F9", "#203A4C"), + text_color=("blue", "#5B8CD7"), + hover_color=("#C9E2F2", "#2C4A5E"), + corner_radius=8, + height=28, + command=lambda doi=paper["doi"]: webbrowser.open(f"https://doi.org/{doi}") + ).pack(side='left', padx=(0, 10)) + + # View paper button + ctk.CTkButton( + button_frame, + text="View Paper", + fg_color=("#E1F0F9", "#203A4C"), + text_color=("blue", "#5B8CD7"), + hover_color=("#C9E2F2", "#2C4A5E"), + corner_radius=8, + height=28, + command=lambda url=paper.get("url"): webbrowser.open(url) if url else None + ).pack(side='left') + + # Copy BibTeX button + ctk.CTkButton( + button_frame, + text="Copy BibTeX", + fg_color=("#E1F0F9", "#203A4C"), + text_color=("blue", "#5B8CD7"), + hover_color=("#C9E2F2", "#2C4A5E"), + corner_radius=8, + height=28, + command=lambda txt=paper["bibtex"]: self.copy_to_clipboard(txt) + ).pack(side='right') + + # Add a "Show BibTeX" button to expand/collapse + bibtex_var = tk.BooleanVar(value=False) + bibtex_button = ctk.CTkCheckBox( + details_frame, + text="Show BibTeX", + variable=bibtex_var, + onvalue=True, + offvalue=False, + command=lambda var=bibtex_var, idx=i: self.toggle_bibtex(var, idx) + ) + bibtex_button.pack(anchor='w', pady=(10, 0)) + + # Hidden BibTeX frame (will be shown when checkbox is clicked) + bibtex_frame = ctk.CTkFrame(details_frame, fg_color=("gray95", "gray18")) + 
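# Packed once below, then hidden again with pack_forget();
+             # toggle_bibtex() re-packs it when "Show BibTeX" is ticked.
+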
bibtex_frame.pack(fill='x', pady=(10, 0)) + bibtex_frame.pack_forget() # Initially hidden + + bibtex_text = ctk.CTkTextbox( + bibtex_frame, + height=120, + font=("Courier", 11), + wrap="none" + ) + bibtex_text.pack(fill='x', padx=5, pady=5) + bibtex_text.insert("1.0", paper["bibtex"]) + bibtex_text.configure(state="disabled") + + # Store references to be able to toggle + paper["bibtex_frame"] = bibtex_frame + + def toggle_bibtex(self, var, idx): + """Toggle the visibility of BibTeX frame for a paper""" + if var.get(): + # Show BibTeX + self.papers[idx]["bibtex_frame"].pack(fill='x', pady=(10, 0)) + else: + # Hide BibTeX + self.papers[idx]["bibtex_frame"].pack_forget() + + def create_citation_tracker_section(self): + """Create Citation Tracker section with improved styling""" + # Section frame with improved styling + section_frame = ctk.CTkFrame(self.content_frame, corner_radius=10) + section_frame.pack(fill='x', pady=15) + + # Section header with improved styling + ctk.CTkLabel( + section_frame, + text="Citation Tracker", + font=("Helvetica", 20, "bold"), + ).pack(anchor='w', padx=20, pady=(15, 5)) + + # Add a separator + separator = ctk.CTkFrame(section_frame, height=2, fg_color=("gray80", "gray30")) + separator.pack(fill='x', padx=20, pady=(0, 15)) + + # Loading indicator with improved styling + self.citations_loading_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + self.citations_loading_frame.pack(fill='x', padx=20, pady=15) + + ctk.CTkLabel( + self.citations_loading_frame, + text="Loading citation data...", + font=("Helvetica", 12) + ).pack(pady=10) + + progress = ctk.CTkProgressBar(self.citations_loading_frame, height=10) + progress.pack(fill='x', padx=40, pady=5) + progress.configure(mode="indeterminate") + progress.start() + + # Create frame for citation data (initially empty) + self.citations_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + self.citations_frame.pack(fill='x', padx=20, pady=10) + + def fetch_github_releases(self): + """Fetch recent releases from GitHub API with improved error handling""" + try: + # Add a user agent to avoid GitHub API rate limiting + headers = { + 'User-Agent': 'Pose2Sim-App', + 'Accept': 'application/vnd.github.v3+json' + } + + # Use a fixed endpoint format and add parameters for pagination + releases_url = "https://api.github.com/repos/perfanalytics/pose2sim/releases?per_page=5" + + response = requests.get(releases_url, headers=headers, timeout=15) + + if response.status_code == 200: + releases_data = response.json() + + # Validate that we received a list (better error detection) + if isinstance(releases_data, list): + # Store the most recent 5 releases + self.releases = releases_data[:5] if len(releases_data) > 5 else releases_data + + # Get the latest version tag (first release) + if self.releases and 'tag_name' in self.releases[0]: + self.latest_version = self.releases[0]['tag_name'].lstrip('v') + + # Update UI in main thread + self.frame.after(0, self.update_releases_ui) + else: + error_msg = f"Invalid response format from GitHub API" + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + elif response.status_code == 403: + # Rate limiting specific error + error_msg = "GitHub API rate limit exceeded. Please try again later." 
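+                 # Note: unauthenticated GitHub API calls are limited to about
+                 # 60 requests per hour per IP; sending a personal access token
+                 # in an 'Authorization' header raises this limit considerably.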
+ self.frame.after(0, lambda: self.update_releases_error(error_msg)) + else: + # Other HTTP errors + error_msg = f"GitHub API error: HTTP {response.status_code}" + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + + except requests.exceptions.Timeout: + error_msg = "Connection timed out. Please check your internet connection." + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + except requests.exceptions.ConnectionError: + error_msg = "Network connection error. Please check your internet connection." + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + except json.JSONDecodeError: + error_msg = "Error parsing GitHub data. The response was not valid JSON." + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + except Exception as e: + # General exception handler + error_msg = f"Error fetching releases: {str(e)}" + self.frame.after(0, lambda: self.update_releases_error(error_msg)) + + def update_releases_ui(self): + """Update UI with fetched GitHub releases - improved styling""" + # Remove loading indicator + self.releases_loading_frame.pack_forget() + + if not self.releases: + # Show error message if no releases found + ctk.CTkLabel( + self.releases_frame, + text="No releases found. Check the GitHub repository for updates.", + wraplength=700 + ).pack(pady=15) + return + + # Show each release with improved styling + for release in self.releases: + release_frame = ctk.CTkFrame(self.releases_frame, fg_color=("gray90", "gray25"), corner_radius=8) + release_frame.pack(fill='x', pady=8) + + # Release header with improved styling + header_frame = ctk.CTkFrame(release_frame, fg_color="transparent") + header_frame.pack(fill='x', padx=10, pady=(10, 5)) + + # Release tag and date with improved styling + tag_name = release.get('tag_name', 'Unknown version') + + # Format the date + date_str = "Unknown date" + if 'published_at' in release: + try: + date_obj = datetime.datetime.strptime(release['published_at'], "%Y-%m-%dT%H:%M:%SZ") + date_str = date_obj.strftime("%B %d, %Y") + except (ValueError, TypeError): + pass + + ctk.CTkLabel( + header_frame, + text=f"{tag_name} - Released {date_str}", + font=("Helvetica", 16, "bold"), + anchor="w" + ).pack(side='left') + + # View button with improved styling + ctk.CTkButton( + header_frame, + text="View on GitHub", + command=lambda url=release.get('html_url'): webbrowser.open(url), + width=120, + height=30, + corner_radius=8, + hover_color=("#2E86C1", "#1F618D") + ).pack(side='right') + + # Release body - clean up markdown + body = release.get('body', 'No release notes provided') + + # Basic markdown cleanup for better readability + body = re.sub(r'#+\s+', '', body) # Remove headers + body = re.sub(r'\*\*(.+?)\*\*', r'\1', body) # Remove bold + body = re.sub(r'\*(.+?)\*', r'\1', body) # Remove italics + body = re.sub(r'\[(.+?)\]\(.+?\)', r'\1', body) # Remove links + + # Truncate if too long + if len(body) > 500: + body = body[:497] + "..." 
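+                 # (497 characters plus the 3-character ellipsis keeps the
+                 # preview at exactly 500 characters)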
+ + # Release notes with improved styling + body_frame = ctk.CTkFrame(release_frame, fg_color=("gray95", "gray18")) + body_frame.pack(fill='x', padx=10, pady=(5, 10)) + + body_text = ctk.CTkTextbox( + body_frame, + height=100, + wrap="word", + font=("Helvetica", 12) + ) + body_text.pack(fill='x', padx=5, pady=5) + body_text.insert("1.0", body) + body_text.configure(state="disabled") + + def update_releases_error(self, error_message): + """Show error message in releases section - improved styling""" + # Remove loading indicator + self.releases_loading_frame.pack_forget() + + # Show error message with improved styling + error_frame = ctk.CTkFrame( + self.releases_frame, + fg_color=("#F8D7DA", "#5C1E25"), + corner_radius=8 + ) + error_frame.pack(fill='x', pady=15) + + ctk.CTkLabel( + error_frame, + text=error_message, + text_color=("#721C24", "#EAACB0"), + wraplength=700 + ).pack(pady=15) + + # Retry button with improved styling + ctk.CTkButton( + error_frame, + text="Retry", + command=lambda: threading.Thread(target=self.fetch_github_releases, daemon=True).start(), + width=100, + height=30, + corner_radius=8, + fg_color=("#DC3545", "#A71D2A"), + hover_color=("#C82333", "#8B1823") + ).pack(pady=(0, 15)) + + def fetch_citation_data(self): + """Fetch citation data for the papers using DOIs""" + try: + # Cache file path for citation data + cache_dir = Path(__file__).parent.parent / "cache" + cache_dir.mkdir(parents=True, exist_ok=True) + cache_file = cache_dir / "citation_data.json" + + # Check if cache exists and not older than 1 day + cache_valid = False + if cache_file.exists(): + try: + file_mod_time = cache_file.stat().st_mtime + if (time.time() - file_mod_time) < 86400: # 24 hours + with open(cache_file, 'r') as f: + self.citation_data = json.load(f) + cache_valid = True + except: + pass + + # If no valid cache, fetch new data + if not cache_valid: + # These would be real API calls in a production app + # For RSS tracking, you would parse feed data from the DOI-related feeds + + # For now, create mock data based on the DOIs + for doi in self.citation_dois: + # In a real app, make API calls here to services like Crossref, Semantic Scholar, etc. 
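+                 # A sketch of what such a call could look like with Crossref's
+                 # public REST API (endpoint and field name assumed unchanged):
+                 #   resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=15)
+                 #   count = resp.json()["message"].get("is-referenced-by-count", 0)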
+ # Here's a placeholder using mock data + if doi == "10.21105/joss.04362": # Pose2Sim JOSS paper + self.citation_data[doi] = { + "title": "Pose2Sim: An open-source Python package for multiview markerless kinematics", + "citation_count": 42, + "last_updated": datetime.datetime.now().strftime("%Y-%m-%d") + } + elif doi == "10.3390/s22072712": # Accuracy paper + self.citation_data[doi] = { + "title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 2: Accuracy", + "citation_count": 35, + "last_updated": datetime.datetime.now().strftime("%Y-%m-%d") + } + elif doi == "10.3390/s21196530": # Robustness paper + self.citation_data[doi] = { + "title": "Pose2Sim: An End-to-End Workflow for 3D Markerless Sports Kinematics—Part 1: Robustness", + "citation_count": 48, + "last_updated": datetime.datetime.now().strftime("%Y-%m-%d") + } + elif doi == "10.21105/joss.06849": # Sports2D paper + self.citation_data[doi] = { + "title": "Sports2D: Compute 2D human pose and angles from a video or a webcam", + "citation_count": 8, + "last_updated": datetime.datetime.now().strftime("%Y-%m-%d") + } + + # Save to cache + with open(cache_file, 'w') as f: + json.dump(self.citation_data, f) + + # Format data for display + self.citations = [ + { + "doi": doi, + "title": data.get("title", "Unknown paper"), + "citation_count": data.get("citation_count", 0), + "last_updated": data.get("last_updated", datetime.datetime.now().strftime("%Y-%m-%d")) + } + for doi, data in self.citation_data.items() + ] + + # Update UI in main thread + self.frame.after(0, self.update_citations_ui) + + except Exception as e: + # Log the exception + traceback.print_exc() + # Handle exceptions + error_msg = f"Error fetching citation data: {str(e)}" + self.frame.after(0, lambda: self.update_citations_error(error_msg)) + + def update_citations_ui(self): + """Update UI with citation data - improved styling""" + # Remove loading indicator + self.citations_loading_frame.pack_forget() + + if not self.citations: + # Show message if no citation data with improved styling + ctk.CTkLabel( + self.citations_frame, + text="No citation data available at this time.", + wraplength=700 + ).pack(pady=15) + return + + # Create header with improved styling + header_frame = ctk.CTkFrame(self.citations_frame, fg_color=("gray95", "gray20"), corner_radius=8) + header_frame.pack(fill='x', pady=(0, 15)) + + # Get the last update date from the first citation + last_updated = datetime.datetime.now().strftime("%B %d, %Y") + if self.citations and 'last_updated' in self.citations[0]: + try: + date_obj = datetime.datetime.strptime(self.citations[0]['last_updated'], "%Y-%m-%d") + last_updated = date_obj.strftime("%B %d, %Y") + except (ValueError, TypeError): + pass + + ctk.CTkLabel( + header_frame, + text=f"Publication Impact (Last updated: {last_updated})", + font=("Helvetica", 16, "bold") + ).pack(pady=(15, 10)) + + # Total citations with improved styling + total_citations = sum(citation.get('citation_count', 0) for citation in self.citations) + + ctk.CTkLabel( + header_frame, + text=f"Total Citations: {total_citations}", + font=("Helvetica", 20) + ).pack(pady=(0, 15)) + + # Create citation cards with improved styling + for citation in self.citations: + citation_card = ctk.CTkFrame(self.citations_frame, fg_color=("gray90", "gray25"), corner_radius=8) + citation_card.pack(fill='x', pady=5) + + # Paper title and citation count with improved layout + title_frame = ctk.CTkFrame(citation_card, fg_color="transparent") + title_frame.pack(fill='x', padx=15, 
pady=(10, 0)) + + # Title on the left + ctk.CTkLabel( + title_frame, + text=citation['title'], + font=("Helvetica", 14), + anchor="w", + wraplength=600, + justify="left" + ).pack(side='left', fill='x', expand=True) + + # Citation count on the right + count_frame = ctk.CTkFrame( + title_frame, + fg_color=("#E1F0F9", "#203A4C"), + corner_radius=15, + width=60, + height=30 + ) + count_frame.pack(side='right', padx=(15, 0)) + count_frame.pack_propagate(False) # Fix the size + + ctk.CTkLabel( + count_frame, + text=str(citation['citation_count']), + font=("Helvetica", 14, "bold"), + text_color=("blue", "#5B8CD7") + ).pack(expand=True, fill='both') + + # DOI with improved styling + doi_frame = ctk.CTkFrame(citation_card, fg_color="transparent") + doi_frame.pack(fill='x', padx=15, pady=(5, 10)) + + ctk.CTkLabel( + doi_frame, + text="DOI: ", + width=40, + anchor="w" + ).pack(side='left') + + ctk.CTkButton( + doi_frame, + text=citation['doi'], + anchor="w", + fg_color="transparent", + text_color=("blue", "#5B8CD7"), + hover_color=("gray90", "gray20"), + command=lambda doi=citation['doi']: webbrowser.open(f"https://doi.org/{doi}") + ).pack(side='left') + + # Note about citation tracking with improved styling + note_frame = ctk.CTkFrame(self.citations_frame, fg_color=("gray95", "gray20"), corner_radius=8) + note_frame.pack(fill='x', pady=15) + + ctk.CTkLabel( + note_frame, + text="Note: Citation counts are updated periodically from Google Scholar and may not reflect the most recent data.", + wraplength=700, + font=("Helvetica", 11), + text_color=("gray40", "gray80") + ).pack(pady=10) + + # Refresh button with improved styling + ctk.CTkButton( + self.citations_frame, + text="Refresh Citation Data", + command=lambda: threading.Thread(target=self.refresh_citation_data, daemon=True).start(), + width=160, + height=35, + corner_radius=8, + hover_color=("#2E86C1", "#1F618D") + ).pack(anchor='center', pady=15) + + def refresh_citation_data(self): + """Force refresh of citation data""" + # Delete cache if it exists + cache_dir = Path(__file__).parent.parent / "cache" + cache_dir.mkdir(parents=True, exist_ok=True) + cache_file = cache_dir / "citation_data.json" + if cache_file.exists(): cache_file.unlink() + + # Reset citation data + self.citation_data = {} + self.citations = [] + + # Show loading indicator again + self.citations_frame.pack_forget() + self.citations_loading_frame.pack(fill='x', padx=20, pady=15) + + # Fetch new data + threading.Thread(target=self.fetch_citation_data, daemon=True).start() + + def update_citations_error(self, error_message): + """Show error message in citations section - improved styling""" + # Remove loading indicator + self.citations_loading_frame.pack_forget() + + # Show error message with improved styling + error_frame = ctk.CTkFrame( + self.citations_frame, + fg_color=("#F8D7DA", "#5C1E25"), + corner_radius=8 + ) + error_frame.pack(fill='x', pady=15) + + ctk.CTkLabel( + error_frame, + text=error_message, + text_color=("#721C24", "#EAACB0"), + wraplength=700 + ).pack(pady=15) + + # Retry button with improved styling + ctk.CTkButton( + error_frame, + text="Retry", + command=lambda: threading.Thread(target=self.fetch_citation_data, daemon=True).start(), + width=100, + height=30, + corner_radius=8, + fg_color=("#DC3545", "#A71D2A"), + hover_color=("#C82333", "#8B1823") + ).pack(pady=(0, 15)) + + def copy_to_clipboard(self, text): + """Copy text to clipboard""" + try: + self.frame.clipboard_clear() + self.frame.clipboard_append(text) + self.frame.update() + 
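# update() processes pending Tk events so the copied text is actually
+             # handed to the system clipboard; skipping it can leave the
+             # clipboard empty on some platforms.
+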
messagebox.showinfo("Success", "Text copied to clipboard") + except Exception as e: + messagebox.showerror("Error", f"Failed to copy: {str(e)}") \ No newline at end of file diff --git a/GUI/tabs/activation_tab.py b/GUI/tabs/activation_tab.py new file mode 100644 index 00000000..e67e4989 --- /dev/null +++ b/GUI/tabs/activation_tab.py @@ -0,0 +1,189 @@ +from pathlib import Path +import subprocess +import customtkinter as ctk +from tkinter import messagebox + +from GUI.utils import activate_pose2sim + +class ActivationTab: + def __init__(self, parent, app, simplified=False): + self.parent = parent + self.app = app + self.simplified = simplified # Flag for 2D mode + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # Build the UI + self.build_ui() + + def get_title(self): + """Return the tab title""" + return self.app.lang_manager.get_text('activation_tab') + + def get_settings(self): + """Get the activation settings""" + return {} # This tab doesn't need to add settings to the config file + + def build_ui(self): + # Create main container + self.content_frame = ctk.CTkFrame(self.frame) + self.content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Tab title + ctk.CTkLabel( + self.content_frame, + text=self.get_title(), + font=('Helvetica', 24, 'bold') + ).pack(pady=20) + + # Description + launch_text = "Launch Sports2D" if self.app.analysis_mode == '2d' else "Launch Pose2Sim" + ctk.CTkLabel( + self.content_frame, + text=f"Start {launch_text} with Anaconda Prompt:", + font=('Helvetica', 16) + ).pack(pady=10) + + # Card frame for activation options + card_frame = ctk.CTkFrame(self.content_frame, fg_color='transparent') + card_frame.pack(pady=40) + + # Only show Anaconda Prompt Button + ctk.CTkButton( + card_frame, + text=self.app.lang_manager.get_text('launch_conda'), + command=lambda: self.activate_with_method('conda'), + width=250, + height=50, + font=('Helvetica', 16), + fg_color="#4CAF50", + hover_color="#388E3C" + ).pack(pady=20) + + # Setup and configuration notice + notice_frame = ctk.CTkFrame(self.content_frame, fg_color=("gray90", "gray20")) + notice_frame.pack(fill='x', pady=20, padx=20) + + env_name = "Sports2D" if self.app.analysis_mode == '2d' else "Pose2Sim" + ctk.CTkLabel( + notice_frame, + text=f"💡 Make sure your {env_name} conda environment is properly set up before launching.", + wraplength=600, + font=('Helvetica', 14), + text_color=("gray20", "gray90") + ).pack(pady=10, padx=10) + + def merge_nested_dicts(self, d1, d2): + """Recursively merge two nested dictionaries""" + for key, value in d2.items(): + if key in d1 and isinstance(d1[key], dict) and isinstance(value, dict): + self.merge_nested_dicts(d1[key], value) + else: + d1[key] = value + + def activate_with_method(self, method): + """Activate Pose2Sim or Sports2D with the specified method""" + # Update the config file first + if self.app.analysis_mode == '2d': + config_path = Path(self.app.participant_name) / 'Config_demo.toml' + else: + config_path = Path(self.app.participant_name) / 'Config.toml' + + # Collect all settings from tabs + settings = {} + for name, tab in self.app.tabs.items(): + if hasattr(tab, 'get_settings'): + tab_settings = tab.get_settings() + print(f"Collecting settings from tab '{name}':", tab_settings) # Debug print + # Merge settings + for section, data in tab_settings.items(): + if section not in settings: + settings[section] = {} + if isinstance(data, dict) and isinstance(settings[section], dict): + self.merge_nested_dicts(settings[section], data) + else: + 
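# non-dict values simply overwrite whatever was stored before
+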
settings[section] = data + + print("Final settings to be applied:", settings) # Debug print + + # Generate config file + if self.app.analysis_mode == '2d': + success = self.app.config_generator.generate_2d_config(config_path, settings) + else: + success = self.app.config_generator.generate_3d_config(config_path, settings) + + # For batch mode, also generate configs for each trial + if self.app.process_mode == 'batch': + for i in range(1, self.app.num_trials + 1): + trial_config_path = Path(self.app.participant_name) / f'Trial_{i}' / 'Config.toml' + success = success and self.app.config_generator.generate_3d_config( + trial_config_path, settings + ) + + if not success: + messagebox.showerror( + "Error", + "Failed to generate configuration file. Please check your settings." + ) + return + + try: + # Determine skip flags based on mode + skip_pose_estimation = False + skip_synchronization = False + + if self.app.analysis_mode == '3d': + pose_model = self.app.tabs['pose_model'].pose_model_var.get() + if pose_model != 'Body_with_feet': + # Warn user about pose model compatibility + response = messagebox.askyesno( + "Warning", + f"The selected pose model '{pose_model}' may not be fully integrated in Pose2Sim. " + "This might require manual pose estimation.\n\n" + "Do you want to continue?" + ) + if not response: + return + skip_pose_estimation = True + + # Check synchronization setting + skip_synchronization = self.app.tabs['synchronization'].sync_videos_var.get() == 'yes' + + # Create activation script + script_path = activate_pose2sim( + self.app.participant_name, + method=method, + skip_pose_estimation=skip_pose_estimation, + skip_synchronization=skip_synchronization, + analysis_mode=self.app.analysis_mode + ) + + # Launch the script + process = subprocess.Popen(script_path, + stdout=subprocess.PIPE, + stderr=subprocess.STDOUT, + text=True, + shell=True) + for line in process.stdout: + print(line, end='') + return_code = process.wait() + + # Update progress + if hasattr(self.app, 'update_tab_indicator'): + self.app.update_tab_indicator('activation', True) + if hasattr(self.app, 'update_progress_bar'): + self.app.update_progress_bar(100) # Activation is the final step - 100% + + # Show success message + app_name = "Sports2D" if self.app.analysis_mode == '2d' else "Pose2Sim" + messagebox.showinfo( + "Activation Started", + f"{app_name} has been launched with Anaconda Prompt." + ) + + except Exception as e: + messagebox.showerror( + "Error", + f"Failed to activate: {str(e)}" + ) \ No newline at end of file diff --git a/GUI/tabs/advanced_tab.py b/GUI/tabs/advanced_tab.py new file mode 100644 index 00000000..7a7135f7 --- /dev/null +++ b/GUI/tabs/advanced_tab.py @@ -0,0 +1,742 @@ +import customtkinter as ctk +from tkinter import messagebox +import ast + +class AdvancedTab: + def __init__(self, parent, app, simplified=False): + """ + Initialize the Advanced Configuration tab. 
+ + Args: + parent: Parent widget + app: Main application instance + simplified: Whether to show a simplified interface for 2D analysis + """ + self.parent = parent + self.app = app + self.simplified = simplified + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # Initialize variables + self.init_variables() + + # Build the UI + self.build_ui() + + def get_title(self): + """Return the tab title""" + return self.app.lang_manager.get_text('advanced_tab') + + def init_variables(self): + """Initialize all configuration variables""" + # Basic settings + self.frame_rate_var = ctk.StringVar(value='auto') + self.frame_range_var = ctk.StringVar(value='auto') + + # Person Association Variables (3D only) + self.likelihood_threshold_association_var = ctk.StringVar(value='0.3') + self.reproj_error_threshold_association_var = ctk.StringVar(value='20') + self.tracked_keypoint_var = ctk.StringVar(value='Neck') + self.reconstruction_error_threshold_var = ctk.StringVar(value='0.1') + self.min_affinity_var = ctk.StringVar(value='0.2') + + # Triangulation Variables (for 3D) + self.reproj_error_threshold_triangulation_var = ctk.StringVar(value='15') + self.likelihood_threshold_triangulation_var = ctk.StringVar(value='0.3') + self.min_cameras_for_triangulation_var = ctk.StringVar(value='2') + self.interp_if_gap_smaller_than_var = ctk.StringVar(value='20') + self.interpolation_type_var = ctk.StringVar(value='linear') + self.remove_incomplete_frames_var = ctk.BooleanVar(value=False) + self.sections_to_keep_var = ctk.StringVar(value='all') + self.fill_large_gaps_with_var = ctk.StringVar(value='last_value') + self.show_interp_indices_var = ctk.BooleanVar(value=True) + self.triangulation_make_c3d_var = ctk.BooleanVar(value=True) + + # Filtering Variables + self.reject_outliers_var = ctk.BooleanVar(value=True) + self.filter_var = ctk.BooleanVar(value=True) + self.filter_type_var = ctk.StringVar(value='butterworth') + self.display_figures_var = ctk.BooleanVar(value=True) + self.save_filt_plots_var = ctk.BooleanVar(value=True) + self.filtering_make_c3d_var = ctk.BooleanVar(value=True) + + # Butterworth Variables + self.filter_cutoff_var = ctk.StringVar(value='6') + self.filter_order_var = ctk.StringVar(value='4') + + # Kalman Variables + self.kalman_trust_ratio_var = ctk.StringVar(value='500') + self.kalman_smooth_var = ctk.BooleanVar(value=True) + + # GCV Spline Variables + self.gcv_cut_off_frequency_var = ctk.StringVar(value='auto') + self.gcv_smoothing_factor_var = ctk.StringVar(value='1.0') + + # Butterworth on Speed Variables + self.butterworth_on_speed_order_var = ctk.StringVar(value='4') + self.butterworth_on_speed_cut_off_frequency_var = ctk.StringVar(value='10') + + # Gaussian Variables + self.gaussian_sigma_kernel_var = ctk.StringVar(value='1') + + # LOESS Variables + self.LOESS_nb_values_used_var = ctk.StringVar(value='5') + + # Median Variables + self.median_kernel_size_var = ctk.StringVar(value='3') + + # Marker Augmentation Variables (for 3D) + self.feet_on_floor_var = ctk.BooleanVar(value=False) + self.augmentation_make_c3d_var = ctk.BooleanVar(value=True) + + # Kinematics Variables + self.use_augmentation_var = ctk.BooleanVar(value=True) + self.use_simple_model_var = ctk.BooleanVar(value=False) + self.use_contacts_muscles_var = ctk.BooleanVar(value=True) + self.right_left_symmetry_var = ctk.BooleanVar(value=True) + self.default_height_var = ctk.StringVar(value='1.7') + self.remove_individual_scaling_setup_var = ctk.BooleanVar(value=True) + self.remove_individual_IK_setup_var = 
ctk.BooleanVar(value=True) + self.fastest_frames_to_remove_percent_var = ctk.StringVar(value='0.1') + self.close_to_zero_speed_m_var = ctk.StringVar(value='0.2') + self.large_hip_knee_angles_var = ctk.StringVar(value='45') + self.trimmed_extrema_percent_var = ctk.StringVar(value='0.5') + + # 2D specific variables + if self.simplified: + # For 2D analysis + self.slowmo_factor_var = ctk.StringVar(value='1') + self.keypoint_likelihood_threshold_var = ctk.StringVar(value='0.3') + self.average_likelihood_threshold_var = ctk.StringVar(value='0.5') + self.keypoint_number_threshold_var = ctk.StringVar(value='0.3') + self.interpolate_var = ctk.BooleanVar(value=True) + self.interp_gap_smaller_than_var = ctk.StringVar(value='10') + self.fill_large_gaps_with_2d_var = ctk.StringVar(value='last_value') + self.show_graphs_var = ctk.BooleanVar(value=True) + self.filter_type_2d_var = ctk.StringVar(value='butterworth') + + def get_settings(self): + """Get the advanced configuration settings""" + if self.simplified: + # 2D mode settings + settings = { + 'project': { + 'frame_rate': self.frame_rate_var.get() + }, + 'pose': { + 'slowmo_factor': int(self.slowmo_factor_var.get()), + 'keypoint_likelihood_threshold': float(self.keypoint_likelihood_threshold_var.get()), + 'average_likelihood_threshold': float(self.average_likelihood_threshold_var.get()), + 'keypoint_number_threshold': float(self.keypoint_number_threshold_var.get()) + }, + 'post-processing': { + 'interpolate': self.interpolate_var.get(), + 'interp_gap_smaller_than': int(self.interp_gap_smaller_than_var.get()), + 'fill_large_gaps_with': self.fill_large_gaps_with_2d_var.get(), + 'filter': self.filter_var.get(), + 'show_graphs': self.show_graphs_var.get(), + 'filter_type': self.filter_type_2d_var.get() + }, + 'logging': { + 'use_custom_logging': False + } + } + + # Add filter-specific parameters + filter_type = self.filter_type_2d_var.get() + if filter_type == 'butterworth': + settings['post-processing']['butterworth'] = { + 'order': int(self.filter_order_var.get()), + 'cut_off_frequency': float(self.filter_cutoff_var.get()) + } + elif filter_type == 'gaussian': + settings['post-processing']['gaussian'] = { + 'sigma_kernel': float(self.gaussian_sigma_kernel_var.get()) + } + elif filter_type == 'loess': + settings['post-processing']['loess'] = { + 'nb_values_used': int(self.LOESS_nb_values_used_var.get()) + } + elif filter_type == 'median': + settings['post-processing']['median'] = { + 'kernel_size': int(self.median_kernel_size_var.get()) + } + + # Kinematics settings for 2D + settings['kinematics'] = { + 'use_augmentation': self.use_augmentation_var.get(), + 'use_contacts_muscles': self.use_contacts_muscles_var.get(), + 'right_left_symmetry': self.right_left_symmetry_var.get(), + 'remove_individual_scaling_setup': self.remove_individual_scaling_setup_var.get(), + 'remove_individual_ik_setup': self.remove_individual_IK_setup_var.get() + } + else: + # 3D mode settings + settings = { + 'project': { + 'frame_rate': self.frame_rate_var.get() + }, + 'personAssociation': { + 'single_person': { + 'likelihood_threshold_association': float(self.likelihood_threshold_association_var.get()), + 'reproj_error_threshold_association': float(self.reproj_error_threshold_association_var.get()), + 'tracked_keypoint': self.tracked_keypoint_var.get() + }, + 'multi_person': { + 'reconstruction_error_threshold': float(self.reconstruction_error_threshold_var.get()), + 'min_affinity': float(self.min_affinity_var.get()) + } + }, + 'triangulation': { + 
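# units per Pose2Sim's Config.toml conventions: reprojection errors in px, gap lengths in frames
+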
'reproj_error_threshold_triangulation': float(self.reproj_error_threshold_triangulation_var.get()), + 'likelihood_threshold_triangulation': float(self.likelihood_threshold_triangulation_var.get()), + 'min_cameras_for_triangulation': int(self.min_cameras_for_triangulation_var.get()), + 'interp_if_gap_smaller_than': int(self.interp_if_gap_smaller_than_var.get()), + 'interpolation': self.interpolation_type_var.get(), + 'remove_incomplete_frames': self.remove_incomplete_frames_var.get(), + 'sections_to_keep': self.sections_to_keep_var.get(), + 'fill_large_gaps_with': self.fill_large_gaps_with_var.get(), + 'show_interp_indices': self.show_interp_indices_var.get(), + 'make_c3d': self.triangulation_make_c3d_var.get() + }, + 'filtering': { + 'reject_outliers': self.reject_outliers_var.get(), + 'filter': self.filter_var.get(), + 'type': self.filter_type_var.get(), + 'display_figures': self.display_figures_var.get(), + 'save_filt_plots': self.save_filt_plots_var.get(), + 'make_c3d': self.filtering_make_c3d_var.get() + }, + 'markerAugmentation': { + 'feet_on_floor': self.feet_on_floor_var.get(), + 'make_c3d': self.augmentation_make_c3d_var.get() + }, + 'kinematics': { + 'use_augmentation': self.use_augmentation_var.get(), + 'use_simple_model': self.use_simple_model_var.get(), + 'use_contacts_muscles': self.use_contacts_muscles_var.get(), + 'right_left_symmetry': self.right_left_symmetry_var.get(), + 'default_height': float(self.default_height_var.get()), + 'remove_individual_scaling_setup': self.remove_individual_scaling_setup_var.get(), + 'remove_individual_ik_setup': self.remove_individual_IK_setup_var.get(), + 'fastest_frames_to_remove_percent': float(self.fastest_frames_to_remove_percent_var.get()), + 'close_to_zero_speed_m': float(self.close_to_zero_speed_m_var.get()), + 'large_hip_knee_angles': float(self.large_hip_knee_angles_var.get()), + 'trimmed_extrema_percent': float(self.trimmed_extrema_percent_var.get()) + }, + 'logging': { + 'use_custom_logging': False + } + } + + # Try to parse frame range if it's not empty + try: + frame_range = self.frame_range_var.get() + if frame_range and frame_range != 'auto' and frame_range != 'all': + parsed_range = ast.literal_eval(frame_range) + if isinstance(parsed_range, list): + settings['project']['frame_range'] = parsed_range + else: + settings['project']['frame_range'] = frame_range + else: + settings['project']['frame_range'] = frame_range + except: + settings['project']['frame_range'] = 'auto' + + # Add filter-specific parameters + filter_type = self.filter_type_var.get() + if filter_type == 'butterworth': + settings['filtering']['butterworth'] = { + 'order': int(self.filter_order_var.get()), + 'cut_off_frequency': float(self.filter_cutoff_var.get()) + } + elif filter_type == 'kalman': + settings['filtering']['kalman'] = { + 'trust_ratio': float(self.kalman_trust_ratio_var.get()), + 'smooth': self.kalman_smooth_var.get() + } + elif filter_type == 'gcv_spline': + settings['filtering']['gcv_spline'] = { + 'cut_off_frequency': self.gcv_cut_off_frequency_var.get(), + 'smoothing_factor': float(self.gcv_smoothing_factor_var.get()) + } + elif filter_type == 'butterworth_on_speed': + settings['filtering']['butterworth_on_speed'] = { + 'order': int(self.butterworth_on_speed_order_var.get()), + 'cut_off_frequency': float(self.butterworth_on_speed_cut_off_frequency_var.get()) + } + elif filter_type == 'gaussian': + settings['filtering']['gaussian'] = { + 'sigma_kernel': float(self.gaussian_sigma_kernel_var.get()) + } + elif filter_type == 'loess': + 
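# LOESS fits a local regression over nb_values_used neighbouring frames
+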
settings['filtering']['loess'] = { + 'nb_values_used': int(self.LOESS_nb_values_used_var.get()) + } + elif filter_type == 'median': + settings['filtering']['median'] = { + 'kernel_size': int(self.median_kernel_size_var.get()) + } + + return settings + + def build_ui(self): + """Build the user interface""" + # Create header + header_frame = ctk.CTkScrollableFrame(self.frame) + header_frame.pack(fill='both', expand=True, padx=0, pady=(0, 0)) + + ctk.CTkLabel( + header_frame, + text=self.get_title(), + font=('Helvetica', 24, 'bold') + ).pack(fill='both', expand=True, padx=0, pady=0) + + if self.simplified: + # Build simplified 2D interface + self.build_2d_interface(header_frame) + else: + # Build full 3D interface + self.build_3d_interface(header_frame) + + # Save Button + save_button = ctk.CTkButton( + self.frame, + text=self.app.lang_manager.get_text('save_advanced_settings'), + command=self.save_settings, + height=40, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ) + save_button.pack(side='bottom', pady=20) + + def build_2d_interface(self, parent): + """Build simplified interface for 2D analysis""" + # Basic Settings Section + basic_frame = self.create_section_frame(parent, "Basic Settings") + + # Frame Rate + frame_rate_frame = ctk.CTkFrame(basic_frame, fg_color="transparent") + frame_rate_frame.pack(fill='x', pady=5) + ctk.CTkLabel(frame_rate_frame, text=self.app.lang_manager.get_text('frame_rate'), width=200).pack(side='left') + ctk.CTkEntry(frame_rate_frame, textvariable=self.frame_rate_var, width=150).pack(side='left', padx=5) + + # Slow Motion Factor + slowmo_frame = ctk.CTkFrame(basic_frame, fg_color="transparent") + slowmo_frame.pack(fill='x', pady=5) + ctk.CTkLabel(slowmo_frame, text="Slow Motion Factor:", width=200).pack(side='left') + ctk.CTkEntry(slowmo_frame, textvariable=self.slowmo_factor_var, width=150).pack(side='left', padx=5) + + # Pose Processing Section + pose_frame = self.create_section_frame(parent, "Pose Processing") + + # Keypoint Likelihood Threshold + kp_thresh_frame = ctk.CTkFrame(pose_frame, fg_color="transparent") + kp_thresh_frame.pack(fill='x', pady=5) + ctk.CTkLabel(kp_thresh_frame, text="Keypoint Likelihood Threshold:", width=200).pack(side='left') + ctk.CTkEntry(kp_thresh_frame, textvariable=self.keypoint_likelihood_threshold_var, width=150).pack(side='left', padx=5) + + # Average Likelihood Threshold + avg_thresh_frame = ctk.CTkFrame(pose_frame, fg_color="transparent") + avg_thresh_frame.pack(fill='x', pady=5) + ctk.CTkLabel(avg_thresh_frame, text="Average Likelihood Threshold:", width=200).pack(side='left') + ctk.CTkEntry(avg_thresh_frame, textvariable=self.average_likelihood_threshold_var, width=150).pack(side='left', padx=5) + + # Keypoint Number Threshold + num_thresh_frame = ctk.CTkFrame(pose_frame, fg_color="transparent") + num_thresh_frame.pack(fill='x', pady=5) + ctk.CTkLabel(num_thresh_frame, text="Keypoint Number Threshold:", width=200).pack(side='left') + ctk.CTkEntry(num_thresh_frame, textvariable=self.keypoint_number_threshold_var, width=150).pack(side='left', padx=5) + + # Post-Processing Section + post_frame = self.create_section_frame(parent, "Post-Processing") + + # Interpolation + interp_frame = ctk.CTkFrame(post_frame, fg_color="transparent") + interp_frame.pack(fill='x', pady=5) + ctk.CTkCheckBox(interp_frame, text="Interpolate", variable=self.interpolate_var).pack(side='left') + + # Gap Size + gap_frame = ctk.CTkFrame(post_frame, fg_color="transparent") + gap_frame.pack(fill='x', pady=5) + ctk.CTkLabel(gap_frame, 
text="Interpolate Gaps Smaller Than:", width=200).pack(side='left') + ctk.CTkEntry(gap_frame, textvariable=self.interp_gap_smaller_than_var, width=150).pack(side='left', padx=5) + + # Large Gaps + large_gap_frame = ctk.CTkFrame(post_frame, fg_color="transparent") + large_gap_frame.pack(fill='x', pady=5) + ctk.CTkLabel(large_gap_frame, text="Fill Large Gaps With:", width=200).pack(side='left') + ctk.CTkOptionMenu(large_gap_frame, variable=self.fill_large_gaps_with_2d_var, + values=['last_value', 'nan', 'zeros'], width=150).pack(side='left', padx=5) + + # Filtering + filter_frame = ctk.CTkFrame(post_frame, fg_color="transparent") + filter_frame.pack(fill='x', pady=5) + ctk.CTkCheckBox(filter_frame, text="Filter", variable=self.filter_var).pack(side='left') + ctk.CTkCheckBox(filter_frame, text="Show Graphs", variable=self.show_graphs_var).pack(side='left', padx=20) + + # Filter Type + filter_type_frame = ctk.CTkFrame(post_frame, fg_color="transparent") + filter_type_frame.pack(fill='x', pady=5) + ctk.CTkLabel(filter_type_frame, text="Filter Type:", width=200).pack(side='left') + ctk.CTkOptionMenu(filter_type_frame, variable=self.filter_type_2d_var, + values=['butterworth', 'gaussian', 'loess', 'median'], + command=self.on_filter_type_change_2d, width=150).pack(side='left', padx=5) + + # Filter Parameters Frame + self.filter_params_2d_frame = ctk.CTkFrame(post_frame) + self.filter_params_2d_frame.pack(fill='x', pady=10) + + # Initialize with current filter type + self.on_filter_type_change_2d(self.filter_type_2d_var.get()) + + # Kinematics Section + kin_frame = self.create_section_frame(parent, "Kinematics") + + ctk.CTkCheckBox(kin_frame, text="Use Augmentation", variable=self.use_augmentation_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Use Contacts & Muscles", variable=self.use_contacts_muscles_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Right-Left Symmetry", variable=self.right_left_symmetry_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Remove Individual Scaling Setup", variable=self.remove_individual_scaling_setup_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Remove Individual IK Setup", variable=self.remove_individual_IK_setup_var).pack(pady=5, anchor='w') + + def build_3d_interface(self, parent): + """Build full interface for 3D analysis""" + # Basic Settings Section + basic_frame = self.create_section_frame(parent, "Basic Settings") + + # Frame Rate + frame_rate_frame = ctk.CTkFrame(basic_frame, fg_color="transparent") + frame_rate_frame.pack(fill='x', pady=5) + ctk.CTkLabel(frame_rate_frame, text=self.app.lang_manager.get_text('frame_rate'), width=200).pack(side='left') + ctk.CTkEntry(frame_rate_frame, textvariable=self.frame_rate_var, width=150).pack(side='left', padx=5) + + # Frame Range + frame_range_frame = ctk.CTkFrame(basic_frame, fg_color="transparent") + frame_range_frame.pack(fill='x', pady=5) + ctk.CTkLabel(frame_range_frame, text=self.app.lang_manager.get_text('frame_range'), width=200).pack(side='left') + ctk.CTkEntry(frame_range_frame, textvariable=self.frame_range_var, width=150).pack(side='left', padx=5) + + # Person Association Section + pa_frame = self.create_section_frame(parent, self.app.lang_manager.get_text('person_association')) + + # Single Person subsection + ctk.CTkLabel(pa_frame, text="Single Person Settings:", font=("Helvetica", 14, "bold")).pack(anchor='w', pady=(10, 5)) + + likelihood_frame = ctk.CTkFrame(pa_frame, fg_color="transparent") + likelihood_frame.pack(fill='x', pady=5) 
+ ctk.CTkLabel(likelihood_frame, text="Likelihood Threshold:", width=200).pack(side='left') + ctk.CTkEntry(likelihood_frame, textvariable=self.likelihood_threshold_association_var, width=150).pack(side='left', padx=5) + + reproj_frame = ctk.CTkFrame(pa_frame, fg_color="transparent") + reproj_frame.pack(fill='x', pady=5) + ctk.CTkLabel(reproj_frame, text="Reprojection Error Threshold:", width=200).pack(side='left') + ctk.CTkEntry(reproj_frame, textvariable=self.reproj_error_threshold_association_var, width=150).pack(side='left', padx=5) + + tracked_frame = ctk.CTkFrame(pa_frame, fg_color="transparent") + tracked_frame.pack(fill='x', pady=5) + ctk.CTkLabel(tracked_frame, text="Tracked Keypoint:", width=200).pack(side='left') + ctk.CTkEntry(tracked_frame, textvariable=self.tracked_keypoint_var, width=150).pack(side='left', padx=5) + + # Multi Person subsection + ctk.CTkLabel(pa_frame, text="Multi Person Settings:", font=("Helvetica", 14, "bold")).pack(anchor='w', pady=(10, 5)) + + recon_error_frame = ctk.CTkFrame(pa_frame, fg_color="transparent") + recon_error_frame.pack(fill='x', pady=5) + ctk.CTkLabel(recon_error_frame, text="Reconstruction Error Threshold:", width=200).pack(side='left') + ctk.CTkEntry(recon_error_frame, textvariable=self.reconstruction_error_threshold_var, width=150).pack(side='left', padx=5) + + min_affinity_frame = ctk.CTkFrame(pa_frame, fg_color="transparent") + min_affinity_frame.pack(fill='x', pady=5) + ctk.CTkLabel(min_affinity_frame, text="Minimum Affinity:", width=200).pack(side='left') + ctk.CTkEntry(min_affinity_frame, textvariable=self.min_affinity_var, width=150).pack(side='left', padx=5) + + # Triangulation Section + tri_frame = self.create_section_frame(parent, self.app.lang_manager.get_text('triangulation')) + + tri_reproj_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + tri_reproj_frame.pack(fill='x', pady=5) + ctk.CTkLabel(tri_reproj_frame, text="Reprojection Error Threshold:", width=200).pack(side='left') + ctk.CTkEntry(tri_reproj_frame, textvariable=self.reproj_error_threshold_triangulation_var, width=150).pack(side='left', padx=5) + + tri_likelihood_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + tri_likelihood_frame.pack(fill='x', pady=5) + ctk.CTkLabel(tri_likelihood_frame, text="Likelihood Threshold:", width=200).pack(side='left') + ctk.CTkEntry(tri_likelihood_frame, textvariable=self.likelihood_threshold_triangulation_var, width=150).pack(side='left', padx=5) + + min_cameras_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + min_cameras_frame.pack(fill='x', pady=5) + ctk.CTkLabel(min_cameras_frame, text="Minimum Cameras:", width=200).pack(side='left') + ctk.CTkEntry(min_cameras_frame, textvariable=self.min_cameras_for_triangulation_var, width=150).pack(side='left', padx=5) + + interp_gap_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + interp_gap_frame.pack(fill='x', pady=5) + ctk.CTkLabel(interp_gap_frame, text="Interpolate if Gap Smaller Than:", width=200).pack(side='left') + ctk.CTkEntry(interp_gap_frame, textvariable=self.interp_if_gap_smaller_than_var, width=150).pack(side='left', padx=5) + + interp_type_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + interp_type_frame.pack(fill='x', pady=5) + ctk.CTkLabel(interp_type_frame, text="Interpolation Type:", width=200).pack(side='left') + ctk.CTkOptionMenu(interp_type_frame, variable=self.interpolation_type_var, + values=['linear', 'slinear', 'quadratic', 'cubic', 'none'], width=150).pack(side='left', padx=5) + + ctk.CTkCheckBox(tri_frame, text="Remove 
Incomplete Frames", variable=self.remove_incomplete_frames_var).pack(pady=5, anchor='w') + + sections_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + sections_frame.pack(fill='x', pady=5) + ctk.CTkLabel(sections_frame, text="Sections to Keep:", width=200).pack(side='left') + ctk.CTkOptionMenu(sections_frame, variable=self.sections_to_keep_var, + values=['all', 'largest', 'first', 'last'], width=150).pack(side='left', padx=5) + + fill_gaps_frame = ctk.CTkFrame(tri_frame, fg_color="transparent") + fill_gaps_frame.pack(fill='x', pady=5) + ctk.CTkLabel(fill_gaps_frame, text="Fill Large Gaps With:", width=200).pack(side='left') + ctk.CTkOptionMenu(fill_gaps_frame, variable=self.fill_large_gaps_with_var, + values=['last_value', 'nan', 'zeros'], width=150).pack(side='left', padx=5) + + ctk.CTkCheckBox(tri_frame, text="Show Interpolation Indices", variable=self.show_interp_indices_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(tri_frame, text="Make C3D", variable=self.triangulation_make_c3d_var).pack(pady=5, anchor='w') + + # Filtering Section + filter_frame = self.create_section_frame(parent, self.app.lang_manager.get_text('filtering')) + + ctk.CTkCheckBox(filter_frame, text="Reject Outliers (Hampel Filter)", variable=self.reject_outliers_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(filter_frame, text="Apply Filter", variable=self.filter_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(filter_frame, text="Display Figures", variable=self.display_figures_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(filter_frame, text="Save Filtering Plots", variable=self.save_filt_plots_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(filter_frame, text="Make C3D", variable=self.filtering_make_c3d_var).pack(pady=5, anchor='w') + + filter_type_frame = ctk.CTkFrame(filter_frame, fg_color="transparent") + filter_type_frame.pack(fill='x', pady=5) + ctk.CTkLabel(filter_type_frame, text="Filter Type:", width=200).pack(side='left') + filter_options = ['butterworth', 'kalman', 'gcv_spline', 'gaussian', 'loess', 'median', 'butterworth_on_speed'] + ctk.CTkOptionMenu(filter_type_frame, variable=self.filter_type_var, + values=filter_options, + command=self.on_filter_type_change, width=150).pack(side='left', padx=5) + + # Filter Parameters Frame + self.filter_params_frame = ctk.CTkFrame(filter_frame) + self.filter_params_frame.pack(fill='x', pady=10) + + # Initialize with current filter type + self.on_filter_type_change(self.filter_type_var.get()) + + # Marker Augmentation Section + marker_frame = self.create_section_frame(parent, self.app.lang_manager.get_text('marker_augmentation')) + + ctk.CTkCheckBox(marker_frame, text="Feet on Floor", variable=self.feet_on_floor_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(marker_frame, text="Make C3D", variable=self.augmentation_make_c3d_var).pack(pady=5, anchor='w') + + # Kinematics Section + kin_frame = self.create_section_frame(parent, self.app.lang_manager.get_text('kinematics')) + + ctk.CTkCheckBox(kin_frame, text="Use Augmentation", variable=self.use_augmentation_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Use Simple Model (>10x faster)", variable=self.use_simple_model_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Use Contacts & Muscles", variable=self.use_contacts_muscles_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Right-Left Symmetry", variable=self.right_left_symmetry_var).pack(pady=5, anchor='w') + + default_height_frame = ctk.CTkFrame(kin_frame, fg_color="transparent") + 
default_height_frame.pack(fill='x', pady=5) + ctk.CTkLabel(default_height_frame, text="Default Height (m):", width=200).pack(side='left') + ctk.CTkEntry(default_height_frame, textvariable=self.default_height_var, width=150).pack(side='left', padx=5) + + ctk.CTkCheckBox(kin_frame, text="Remove Individual Scaling Setup", variable=self.remove_individual_scaling_setup_var).pack(pady=5, anchor='w') + ctk.CTkCheckBox(kin_frame, text="Remove Individual IK Setup", variable=self.remove_individual_IK_setup_var).pack(pady=5, anchor='w') + + fastest_frames_frame = ctk.CTkFrame(kin_frame, fg_color="transparent") + fastest_frames_frame.pack(fill='x', pady=5) + ctk.CTkLabel(fastest_frames_frame, text="Fastest Frames to Remove (%):", width=200).pack(side='left') + ctk.CTkEntry(fastest_frames_frame, textvariable=self.fastest_frames_to_remove_percent_var, width=150).pack(side='left', padx=5) + + close_to_zero_frame = ctk.CTkFrame(kin_frame, fg_color="transparent") + close_to_zero_frame.pack(fill='x', pady=5) + ctk.CTkLabel(close_to_zero_frame, text="Close to Zero Speed (m):", width=200).pack(side='left') + ctk.CTkEntry(close_to_zero_frame, textvariable=self.close_to_zero_speed_m_var, width=150).pack(side='left', padx=5) + + large_angles_frame = ctk.CTkFrame(kin_frame, fg_color="transparent") + large_angles_frame.pack(fill='x', pady=5) + ctk.CTkLabel(large_angles_frame, text="Large Hip/Knee Angles (deg):", width=200).pack(side='left') + ctk.CTkEntry(large_angles_frame, textvariable=self.large_hip_knee_angles_var, width=150).pack(side='left', padx=5) + + trimmed_extrema_frame = ctk.CTkFrame(kin_frame, fg_color="transparent") + trimmed_extrema_frame.pack(fill='x', pady=5) + ctk.CTkLabel(trimmed_extrema_frame, text="Trimmed Extrema Percent:", width=200).pack(side='left') + ctk.CTkEntry(trimmed_extrema_frame, textvariable=self.trimmed_extrema_percent_var, width=150).pack(side='left', padx=5) + + def create_section_frame(self, parent, title): + """Create a section frame with a title""" + section_frame = ctk.CTkFrame(parent) + section_frame.pack(fill='x', pady=10, padx=5) + + # Title + title_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + title_frame.pack(fill='x', pady=5) + ctk.CTkLabel(title_frame, text=title, font=('Helvetica', 16, 'bold')).pack(anchor='w', padx=10) + + # Content frame + content_frame = ctk.CTkFrame(section_frame, fg_color="transparent") + content_frame.pack(fill='x', pady=5, padx=20) + + return content_frame + + def on_filter_type_change(self, selected_filter): + """Update filter parameters when filter type changes""" + # Clear existing widgets + for widget in self.filter_params_frame.winfo_children(): + widget.destroy() + + if selected_filter == 'butterworth': + cutoff_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + cutoff_frame.pack(fill='x', pady=5) + ctk.CTkLabel(cutoff_frame, text="Cutoff Frequency (Hz):", width=200).pack(side='left') + ctk.CTkEntry(cutoff_frame, textvariable=self.filter_cutoff_var, width=150).pack(side='left', padx=5) + + order_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + order_frame.pack(fill='x', pady=5) + ctk.CTkLabel(order_frame, text="Filter Order:", width=200).pack(side='left') + ctk.CTkEntry(order_frame, textvariable=self.filter_order_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'kalman': + trust_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + trust_frame.pack(fill='x', pady=5) + ctk.CTkLabel(trust_frame, text="Trust Ratio:", 
width=200).pack(side='left') + ctk.CTkEntry(trust_frame, textvariable=self.kalman_trust_ratio_var, width=150).pack(side='left', padx=5) + + smooth_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + smooth_frame.pack(fill='x', pady=5) + ctk.CTkCheckBox(smooth_frame, text="Smooth", variable=self.kalman_smooth_var).pack(side='left') + + elif selected_filter == 'gcv_spline': + cutoff_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + cutoff_frame.pack(fill='x', pady=5) + ctk.CTkLabel(cutoff_frame, text="Cutoff Frequency ('auto' or Hz):", width=200).pack(side='left') + ctk.CTkEntry(cutoff_frame, textvariable=self.gcv_cut_off_frequency_var, width=150).pack(side='left', padx=5) + + smoothing_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + smoothing_frame.pack(fill='x', pady=5) + ctk.CTkLabel(smoothing_frame, text="Smoothing Factor:", width=200).pack(side='left') + ctk.CTkEntry(smoothing_frame, textvariable=self.gcv_smoothing_factor_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'butterworth_on_speed': + cutoff_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + cutoff_frame.pack(fill='x', pady=5) + ctk.CTkLabel(cutoff_frame, text="Cutoff Frequency (Hz):", width=200).pack(side='left') + ctk.CTkEntry(cutoff_frame, textvariable=self.butterworth_on_speed_cut_off_frequency_var, width=150).pack(side='left', padx=5) + + order_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + order_frame.pack(fill='x', pady=5) + ctk.CTkLabel(order_frame, text="Filter Order:", width=200).pack(side='left') + ctk.CTkEntry(order_frame, textvariable=self.butterworth_on_speed_order_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'gaussian': + sigma_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + sigma_frame.pack(fill='x', pady=5) + ctk.CTkLabel(sigma_frame, text="Sigma Kernel (px):", width=200).pack(side='left') + ctk.CTkEntry(sigma_frame, textvariable=self.gaussian_sigma_kernel_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'loess': + values_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + values_frame.pack(fill='x', pady=5) + ctk.CTkLabel(values_frame, text="Number of Values Used:", width=200).pack(side='left') + ctk.CTkEntry(values_frame, textvariable=self.LOESS_nb_values_used_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'median': + kernel_frame = ctk.CTkFrame(self.filter_params_frame, fg_color="transparent") + kernel_frame.pack(fill='x', pady=5) + ctk.CTkLabel(kernel_frame, text="Kernel Size:", width=200).pack(side='left') + ctk.CTkEntry(kernel_frame, textvariable=self.median_kernel_size_var, width=150).pack(side='left', padx=5) + + def on_filter_type_change_2d(self, selected_filter): + """Update filter parameters when filter type changes in 2D mode""" + # Clear existing widgets + for widget in self.filter_params_2d_frame.winfo_children(): + widget.destroy() + + if selected_filter == 'butterworth': + cutoff_frame = ctk.CTkFrame(self.filter_params_2d_frame, fg_color="transparent") + cutoff_frame.pack(fill='x', pady=5) + ctk.CTkLabel(cutoff_frame, text="Cutoff Frequency (Hz):", width=200).pack(side='left') + ctk.CTkEntry(cutoff_frame, textvariable=self.filter_cutoff_var, width=150).pack(side='left', padx=5) + + order_frame = ctk.CTkFrame(self.filter_params_2d_frame, fg_color="transparent") + order_frame.pack(fill='x', pady=5) + ctk.CTkLabel(order_frame, text="Filter Order:", 
width=200).pack(side='left') + ctk.CTkEntry(order_frame, textvariable=self.filter_order_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'gaussian': + sigma_frame = ctk.CTkFrame(self.filter_params_2d_frame, fg_color="transparent") + sigma_frame.pack(fill='x', pady=5) + ctk.CTkLabel(sigma_frame, text="Sigma Kernel (px):", width=200).pack(side='left') + ctk.CTkEntry(sigma_frame, textvariable=self.gaussian_sigma_kernel_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'loess': + values_frame = ctk.CTkFrame(self.filter_params_2d_frame, fg_color="transparent") + values_frame.pack(fill='x', pady=5) + ctk.CTkLabel(values_frame, text="Number of Values Used:", width=200).pack(side='left') + ctk.CTkEntry(values_frame, textvariable=self.LOESS_nb_values_used_var, width=150).pack(side='left', padx=5) + + elif selected_filter == 'median': + kernel_frame = ctk.CTkFrame(self.filter_params_2d_frame, fg_color="transparent") + kernel_frame.pack(fill='x', pady=5) + ctk.CTkLabel(kernel_frame, text="Kernel Size:", width=200).pack(side='left') + ctk.CTkEntry(kernel_frame, textvariable=self.median_kernel_size_var, width=150).pack(side='left', padx=5) + + def save_settings(self): + """Save the advanced settings""" + try: + # Validate inputs + self.validate_inputs() + + # Update the app with our settings + if hasattr(self.app, 'update_tab_indicator'): + self.app.update_tab_indicator('advanced', True) + if hasattr(self.app, 'update_progress_bar'): + progress_value = 85 # Based on progress_steps + self.app.update_progress_bar(progress_value) + + # Show success message + messagebox.showinfo( + self.app.lang_manager.get_text('success'), + "Advanced settings saved successfully" + ) + + except ValueError as e: + messagebox.showerror( + self.app.lang_manager.get_text('error'), + str(e) + ) + + def validate_inputs(self): + """Validate all input values""" + errors = [] + + # Frame rate + frame_rate = self.frame_rate_var.get() + if frame_rate != 'auto': + try: + float(frame_rate) + except ValueError: + errors.append("Frame Rate must be 'auto' or a number") + + # Validate numeric inputs + if not self.simplified: + try: + float(self.likelihood_threshold_association_var.get()) + except ValueError: + errors.append("Likelihood Threshold must be a number") + + try: + float(self.reproj_error_threshold_association_var.get()) + except ValueError: + errors.append("Reprojection Error Threshold must be a number") + + try: + float(self.default_height_var.get()) + except ValueError: + errors.append("Default Height must be a number") + + if errors: + raise ValueError("\n".join(errors)) + + return True \ No newline at end of file diff --git a/GUI/tabs/batch_tab.py b/GUI/tabs/batch_tab.py new file mode 100644 index 00000000..a3591132 --- /dev/null +++ b/GUI/tabs/batch_tab.py @@ -0,0 +1,341 @@ +from pathlib import Path +import customtkinter as ctk +from tkinter import messagebox +import toml + +class BatchTab: + def __init__(self, parent, app): + self.parent = parent + self.app = app + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # Build the UI + self.build_ui() + + def get_title(self): + """Return the tab title""" + return self.app.lang_manager.get_text('batch_tab') + + def get_settings(self): + """Get the batch settings""" + # Batch tab doesn't add specific settings to the main config + # as it manages individual trial configs separately + return {} + + def build_ui(self): + # Create scrollable container + self.content_frame = ctk.CTkScrollableFrame(self.frame) + 
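+        # Expected on-disk layout, inferred from the config paths this tab
+        # reads and writes (illustrative, not normative):
+        #   <participant>/Config.toml           # parent config with shared defaults
+        #   <participant>/Trial_1/Config.toml   # per-trial overrides
+        #   <participant>/Trial_2/Config.toml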
self.content_frame.pack(fill='both', expand=True, padx=0, pady=0)
+
+        # Header
+        header_frame = ctk.CTkFrame(self.content_frame, fg_color="transparent")
+        header_frame.pack(fill='x', pady=(0, 20))
+
+        ctk.CTkLabel(
+            header_frame,
+            text='Trial-Specific Configuration',
+            font=('Helvetica', 20, 'bold')
+        ).pack(anchor='w')
+
+        # Information label
+        info_label = ctk.CTkLabel(
+            self.content_frame,
+            text="Configure trial-specific parameters. Other settings will be inherited from the main configuration.",
+            wraplength=800
+        )
+        info_label.pack(pady=10)
+
+        # Buttons for trials
+        self.trials_frame = ctk.CTkFrame(self.content_frame)
+        self.trials_frame.pack(fill='x', pady=10)
+
+        # Create buttons for each trial
+        self.create_trial_buttons()
+
+    def create_trial_buttons(self):
+        """Create buttons for each trial"""
+        # Clear any existing buttons
+        for widget in self.trials_frame.winfo_children():
+            widget.destroy()
+
+        # Create grid layout for trial buttons
+        rows = (self.app.num_trials + 1) // 2
+        cols = 2
+
+        for i in range(1, self.app.num_trials + 1):
+            row = (i - 1) // cols
+            col = (i - 1) % cols
+
+            frame = ctk.CTkFrame(self.trials_frame)
+            frame.grid(row=row, column=col, padx=10, pady=10, sticky="nsew")
+
+            # Trial label
+            ctk.CTkLabel(
+                frame,
+                text=f"Trial {i}",
+                font=("Helvetica", 16, "bold")
+            ).pack(anchor='w', padx=15, pady=(15, 5))
+
+            # Status indicator - check if config file exists
+            config_path = Path(self.app.participant_name) / f'Trial_{i}' / 'Config.toml'
+            status = "○ Not configured" if not config_path.exists() else "● Configured"
+            status_color = "gray" if not config_path.exists() else "green"
+
+            status_frame = ctk.CTkFrame(frame, fg_color="transparent")
+            status_frame.pack(fill='x', padx=15, pady=5)
+
+            status_indicator = ctk.CTkLabel(
+                status_frame,
+                text=status,
+                text_color=status_color
+            )
+            status_indicator.pack(side="left")
+
+            # Configure button
+            configure_button = ctk.CTkButton(
+                frame,
+                text="Configure Trial",
+                command=lambda trial_num=i: self.configure_trial(trial_num),
+                height=30
+            )
+            configure_button.pack(pady=15, padx=15)
+
+        # Make sure rows and columns expand properly
+        for i in range(rows):
+            self.trials_frame.grid_rowconfigure(i, weight=1)
+        for i in range(cols):
+            self.trials_frame.grid_columnconfigure(i, weight=1)
+
+    def configure_trial(self, trial_number):
+        """Open configuration window for trial-specific settings"""
+        config_window = ctk.CTkToplevel(self.app.root)
+        config_window.title(f"Configure Trial_{trial_number}")
+        config_window.geometry("800x600")
+
+        main_frame = ctk.CTkFrame(config_window)
+        main_frame.pack(fill='both', expand=True, padx=10, pady=10)
+
+        scroll_frame = ctk.CTkScrollableFrame(main_frame)
+        scroll_frame.pack(fill='both', expand=True)
+
+        # Load trial configuration
+        config_path = Path(self.app.participant_name) / f'Trial_{trial_number}' / 'Config.toml'
+        try:
+            if config_path.exists():
+                config = toml.load(config_path)
+            else:
+                # Use parent config as base
+                parent_config_path = Path(self.app.participant_name) / 'Config.toml'
+                if parent_config_path.exists():
+                    config = toml.load(parent_config_path)
+                else:
+                    # Create a new config
+                    config = {}
+        except Exception as e:
+            messagebox.showerror("Error", f"Could not load configuration for Trial_{trial_number}: {str(e)}")
+            return
+
+        # Dictionary to store all settings variables
+        settings_vars = {}
+
+        # Sections to configure
+        sections = {
+            'Project Settings': [
+                ('frame_range', '[]', 'entry'),
+                ('multi_person', True, 'checkbox')
+            ],
+            'Pose Estimation': [
('pose_model', 'Body_with_feet', 'combobox', ['Body_with_feet', 'Whole_body_wrist', 'Whole_body', 'Body']), + ('mode', 'balanced', 'combobox', ['lightweight', 'balanced', 'performance']), + ('det_frequency', 60, 'entry') + ], + 'Synchronization': [ + ('keypoints_to_consider', 'all', 'entry'), + ('approx_time_maxspeed', 'auto', 'entry'), + ('time_range_around_maxspeed', 2.0, 'entry') + ], + 'Filtering': [ + ('type', 'butterworth', 'combobox', ['butterworth', 'kalman', 'gaussian', 'LOESS', 'median']), + ('cut_off_frequency', 6, 'entry') + ] + } + + row = 0 + for section_name, settings in sections.items(): + # Section header + ctk.CTkLabel( + scroll_frame, + text=section_name, + font=('Helvetica', 16, 'bold') + ).grid(row=row, column=0, columnspan=2, pady=(15,5), sticky='w') + row += 1 + + for setting_info in settings: + setting_name = setting_info[0] + default_value = setting_info[1] + input_type = setting_info[2] + + # Get current value from config if available + current_value = self.get_config_value(config, setting_name, default_value) + + # Create label + ctk.CTkLabel( + scroll_frame, + text=setting_name.replace('_', ' ').title() + ':', + anchor='w' + ).grid(row=row, column=0, pady=2, padx=5, sticky='w') + + # Create appropriate input widget based on type + if input_type == 'checkbox': + var = ctk.BooleanVar(value=current_value) + ctk.CTkCheckBox( + scroll_frame, + text="", + variable=var, + onvalue=True, + offvalue=False + ).grid(row=row, column=1, pady=2, padx=5, sticky='w') + + elif input_type == 'combobox': + options = setting_info[3] + var = ctk.StringVar(value=current_value) + ctk.CTkOptionMenu( + scroll_frame, + variable=var, + values=options + ).grid(row=row, column=1, pady=2, padx=5, sticky='w') + + else: # Default to entry + var = ctk.StringVar(value=str(current_value)) + ctk.CTkEntry( + scroll_frame, + textvariable=var, + width=200 + ).grid(row=row, column=1, pady=2, padx=5, sticky='w') + + # Store variable reference for later retrieval + settings_vars[setting_name] = (var, input_type) + row += 1 + + # Save button at the bottom + save_button = ctk.CTkButton( + main_frame, + text="Save Trial Configuration", + command=lambda: self.save_trial_configuration(config_path, config, settings_vars, trial_number, config_window), + width=200, + height=40 + ) + save_button.pack(pady=10) + + def get_config_value(self, config, setting_name, default_value): + """Get a value from the config, handling nested paths""" + try: + # Project settings + if setting_name in ['frame_range', 'multi_person']: + return config.get('project', {}).get(setting_name, default_value) + + # Pose settings + elif setting_name in ['pose_model', 'mode', 'det_frequency']: + return config.get('pose', {}).get(setting_name, default_value) + + # Synchronization settings + elif setting_name in ['keypoints_to_consider', 'approx_time_maxspeed', 'time_range_around_maxspeed']: + return config.get('synchronization', {}).get(setting_name, default_value) + + # Filtering settings + elif setting_name == 'type': + return config.get('filtering', {}).get(setting_name, default_value) + elif setting_name == 'cut_off_frequency': + return config.get('filtering', {}).get('butterworth', {}).get(setting_name, default_value) + + return default_value + except: + return default_value + + def save_trial_configuration(self, config_path, config, settings_vars, trial_number, config_window): + """Save trial-specific configuration""" + try: + # Ensure the directories exist + config_path = Path(config_path) + config_path.parent.mkdir(parents=True, 
exist_ok=True)
+
+            import ast  # stdlib; used below to parse frame_range safely
+
+            # Update config with new values
+            for setting_name, (var, input_type) in settings_vars.items():
+                value = var.get()
+
+                # Convert value based on input type
+                if input_type == 'checkbox':
+                    # Boolean values
+                    pass  # Already a boolean
+                elif input_type in ['entry', 'combobox']:
+                    # Try to convert to appropriate type
+                    if setting_name == 'frame_range':
+                        # Special handling for frame_range, which should be a list
+                        try:
+                            # literal_eval parses Python literals only; unlike
+                            # eval(), it cannot execute arbitrary code
+                            value = ast.literal_eval(value)
+                            if not isinstance(value, list):
+                                value = []
+                        except (ValueError, SyntaxError):
+                            value = []
+                    elif isinstance(value, str) and value.replace('.', '', 1).isdigit():
+                        # Convert numeric strings to numbers
+                        value = float(value)
+                        # Convert to int if it's a whole number
+                        if value.is_integer():
+                            value = int(value)
+
+                # Update the appropriate section in the config
+                self.set_config_value(config, setting_name, value)
+
+            # Write the updated config
+            with open(config_path, 'w', encoding='utf-8') as f:
+                toml.dump(config, f)
+
+            messagebox.showinfo("Success", f"Configuration for Trial_{trial_number} has been saved successfully!")
+
+            # Update the trial buttons to reflect new configuration status
+            self.create_trial_buttons()
+
+            # Close the config window
+            config_window.destroy()
+
+        except Exception as e:
+            messagebox.showerror("Error", f"Failed to save configuration: {str(e)}")
+
+    def set_config_value(self, config, setting_name, value):
+        """Set a value in the config, handling nested paths"""
+        try:
+            # Project settings
+            if setting_name in ['frame_range', 'multi_person']:
+                if 'project' not in config:
+                    config['project'] = {}
+                config['project'][setting_name] = value
+
+            # Pose settings
+            elif setting_name in ['pose_model', 'mode', 'det_frequency']:
+                if 'pose' not in config:
+                    config['pose'] = {}
+                config['pose'][setting_name] = value
+
+            # Synchronization settings
+            elif setting_name in ['keypoints_to_consider', 'approx_time_maxspeed', 'time_range_around_maxspeed']:
+                if 'synchronization' not in config:
+                    config['synchronization'] = {}
+                config['synchronization'][setting_name] = value
+
+            # Filtering settings
+            elif setting_name == 'type':
+                if 'filtering' not in config:
+                    config['filtering'] = {}
+                config['filtering'][setting_name] = value
+            elif setting_name == 'cut_off_frequency':
+                if 'filtering' not in config:
+                    config['filtering'] = {}
+                if 'butterworth' not in config['filtering']:
+                    config['filtering']['butterworth'] = {}
+                config['filtering']['butterworth'][setting_name] = value
+
+        except Exception as e:
+            print(f"Error setting {setting_name}: {str(e)}")
\ No newline at end of file
diff --git a/GUI/tabs/calibration_tab.py b/GUI/tabs/calibration_tab.py
new file mode 100644
index 00000000..bed12e2b
--- /dev/null
+++ b/GUI/tabs/calibration_tab.py
@@ -0,0 +1,858 @@
+from pathlib import Path
+import tkinter as tk
+import customtkinter as ctk
+from tkinter import filedialog, messagebox
+from PIL import Image
+import cv2
+import matplotlib
+matplotlib.use("TkAgg")  # Select the Tk backend before pyplot is imported
+import matplotlib.pyplot as plt
+from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
+
+from GUI.utils import generate_checkerboard_image
+
+class CalibrationTab:
+    def __init__(self, parent, app):
+        self.parent = parent
+        self.app = app
+
+        # Create main frame
+        self.frame = ctk.CTkFrame(parent)
+
+        # Initialize variables
+        self.calibration_type_var = ctk.StringVar(value='calculate')
+        self.num_cameras_var = ctk.StringVar(value='2')
+        self.checkerboard_width_var = ctk.StringVar(value='7')
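+        # Hedged note: with OpenCV-style checkerboard detection, these counts
+        # are usually the number of *internal* corners, not squares. A sketch:
+        #   squares_w, squares_h = 8, 6               # printed board of 8x6 squares
+        #   corners = (squares_w - 1, squares_h - 1)  # -> (7, 5), the defaults here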
self.checkerboard_height_var = ctk.StringVar(value='5') + self.square_size_var = ctk.StringVar(value='30') + self.video_extension_var = ctk.StringVar(value='mp4') + self.convert_from_var = ctk.StringVar(value='qualisys') + self.binning_factor_var = ctk.StringVar(value='1') + + # Track configuration state + self.type_confirmed = False + self.points_2d = [] + self.point_markers = [] + self.object_coords_3d = [] + self.current_point_index = 0 + + # Flag to control click handling during zooming + self.zooming_mode = False + + # Build the UI + self.build_ui() + + def get_settings(self): + """Get the calibration settings""" + settings = { + 'calibration': { + 'calibration_type': self.calibration_type_var.get(), + } + } + + # Add type-specific settings + if self.calibration_type_var.get() == 'calculate': + settings['calibration']['calculate'] = { + 'intrinsics': { + 'intrinsics_corners_nb': [ + int(self.checkerboard_width_var.get()), + int(self.checkerboard_height_var.get()) + ], + 'intrinsics_square_size': float(self.square_size_var.get()), + 'intrinsics_extension': self.video_extension_var.get() + }, + 'extrinsics': { + 'scene': { + 'extrinsics_extension': self.video_extension_var.get() + } + } + } + + # Add coordinates if they've been set + if hasattr(self, 'object_coords_3d') and self.object_coords_3d: + settings['calibration']['calculate']['extrinsics']['scene']['object_coords_3d'] = self.object_coords_3d + else: + settings['calibration']['convert'] = { + 'convert_from': self.convert_from_var.get() + } + + if self.convert_from_var.get() == 'qualisys': + settings['calibration']['convert']['qualisys'] = { + 'binning_factor': int(self.binning_factor_var.get()) + } + + return settings + + def build_ui(self): + # Create a two-panel layout + self.main_panel = ctk.CTkFrame(self.frame) + self.main_panel.pack(fill='both', expand=True, padx=0, pady=0) + + self.title_label = ctk.CTkLabel( + self.main_panel, + text=self.app.lang_manager.get_text('calibration_tab'), + font=("Helvetica", 24, "bold") + ) + self.title_label.pack(pady=(0, 20)) + + # Left panel (for inputs) + self.left_panel = ctk.CTkFrame(self.main_panel, width=600) + self.left_panel.pack(side='left', fill='both', expand=True, padx=5, pady=5) + + # Right panel (for scene image) + self.right_panel = ctk.CTkFrame(self.main_panel) + self.right_panel.pack(side='right', fill='both', expand=True, padx=5, pady=5) + + # Add content to the left panel + self.build_left_panel() + + # Right panel will be populated with scene image when needed + ctk.CTkLabel( + self.right_panel, + text="Scene Calibration Image will appear here", + wraplength=300, + font=("Helvetica", 14) + ).pack(expand=True) + + def build_left_panel(self): + # Calibration Type + type_frame = ctk.CTkFrame(self.left_panel) + type_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + type_frame, + text="Calibration Type:", + width=150 + ).pack(side='left', padx=10, pady=10) + + # Radio buttons for calibration type + radio_frame = ctk.CTkFrame(type_frame, fg_color="transparent") + radio_frame.pack(side='left', fill='x', expand=True) + + ctk.CTkRadioButton( + radio_frame, + text="Calculate", + variable=self.calibration_type_var, + value='calculate', + command=self.on_calibration_type_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + radio_frame, + text="Convert", + variable=self.calibration_type_var, + value='convert', + command=self.on_calibration_type_change + ).pack(side='left', padx=10) + + # Number of Cameras + camera_frame = ctk.CTkFrame(self.left_panel) + 
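+        # For context, get_settings() above serializes these fields into a
+        # Config.toml section shaped roughly like this (values illustrative):
+        #   [calibration]
+        #   calibration_type = "calculate"
+        #   [calibration.calculate.intrinsics]
+        #   intrinsics_corners_nb = [7, 5]
+        #   intrinsics_square_size = 30.0   # mm
+        #   intrinsics_extension = "mp4"
+        #   [calibration.calculate.extrinsics.scene]
+        #   extrinsics_extension = "mp4"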
camera_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + camera_frame, + text="Number of Cameras:", + width=150 + ).pack(side='left', padx=10, pady=10) + + self.camera_entry = ctk.CTkEntry( + camera_frame, + textvariable=self.num_cameras_var, + width=100 + ) + self.camera_entry.pack(side='left', padx=10) + + # Calculate Options Frame + self.calculate_frame = ctk.CTkFrame(self.left_panel) + self.calculate_frame.pack(fill='x', pady=5) + + # Checkerboard Width + width_frame = ctk.CTkFrame(self.calculate_frame) + width_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + width_frame, + text="Checkerboard Width:", + width=150 + ).pack(side='left', padx=10, pady=5) + + self.width_entry = ctk.CTkEntry( + width_frame, + textvariable=self.checkerboard_width_var, + width=100 + ) + self.width_entry.pack(side='left', padx=10) + + # Checkerboard Height + height_frame = ctk.CTkFrame(self.calculate_frame) + height_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + height_frame, + text="Checkerboard Height:", + width=150 + ).pack(side='left', padx=10, pady=5) + + self.height_entry = ctk.CTkEntry( + height_frame, + textvariable=self.checkerboard_height_var, + width=100 + ) + self.height_entry.pack(side='left', padx=10) + + # Square Size + square_frame = ctk.CTkFrame(self.calculate_frame) + square_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + square_frame, + text="Square Size (mm):", + width=150 + ).pack(side='left', padx=10, pady=5) + + self.square_entry = ctk.CTkEntry( + square_frame, + textvariable=self.square_size_var, + width=100 + ) + self.square_entry.pack(side='left', padx=10) + + # Video Extension + extension_frame = ctk.CTkFrame(self.calculate_frame) + extension_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + extension_frame, + text="Video/Image Extension:", + width=150 + ).pack(side='left', padx=10, pady=5) + + self.extension_entry = ctk.CTkEntry( + extension_frame, + textvariable=self.video_extension_var, + width=100 + ) + self.extension_entry.pack(side='left', padx=10) + + # Checkerboard preview (placed at the bottom of inputs) + self.checkerboard_frame = ctk.CTkFrame(self.left_panel) + self.checkerboard_frame.pack(fill='x', pady=10) + + # Convert Options Frame (initially hidden) + self.convert_frame = ctk.CTkFrame(self.left_panel) + + # Convert From + convert_from_frame = ctk.CTkFrame(self.convert_frame) + convert_from_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + convert_from_frame, + text="Convert From:", + width=150 + ).pack(side='left', padx=10, pady=5) + + convert_options = ['qualisys', 'optitrack', 'vicon', 'opencap', 'easymocap', 'biocv', 'anipose', 'freemocap'] + self.convert_menu = ctk.CTkOptionMenu( + convert_from_frame, + variable=self.convert_from_var, + values=convert_options, + width=150 + ) + self.convert_menu.pack(side='left', padx=10) + + # Binning Factor (for Qualisys) + self.qualisys_frame = ctk.CTkFrame(self.convert_frame) + + ctk.CTkLabel( + self.qualisys_frame, + text="Binning Factor:", + width=150 + ).pack(side='left', padx=10, pady=5) + + ctk.CTkEntry( + self.qualisys_frame, + textvariable=self.binning_factor_var, + width=100 + ).pack(side='left', padx=10) + + # Confirm button + self.confirm_button = ctk.CTkButton( + self.left_panel, + text="Confirm Configuration", + command=self.confirm_calibration_type, + height=40, + width=200, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ) + self.confirm_button.pack(side='bottom', pady=10) + + # Proceed button (initially hidden) + self.proceed_button = ctk.CTkButton( + self.left_panel, + text="Proceed with 
Calibration", + command=self.proceed_calibration, + height=40, + width=200 + ) + + # Points selection frame (for scene calibration) - will be shown in right panel + self.points_frame = ctk.CTkFrame(self.right_panel) + + # Apply the current calibration type + self.on_calibration_type_change() + + def on_calibration_type_change(self): + """Handle changes to calibration type""" + # If already confirmed, ask for reconfirmation + if self.type_confirmed: + response = messagebox.askyesno( + "Confirm Changes", + "Do you want to modify the configuration? This will require reconfirmation." + ) + if response: + # Re-enable inputs for modification + self.camera_entry.configure(state='normal') + if self.calibration_type_var.get() == 'calculate': + self.width_entry.configure(state='normal') + self.height_entry.configure(state='normal') + self.square_entry.configure(state='normal') + self.extension_entry.configure(state='normal') + else: + self.convert_menu.configure(state='normal') + + # Show confirm button, hide proceed button + self.confirm_button.pack(pady=10) + self.proceed_button.pack_forget() + + # Reset confirmation flag + self.type_confirmed = False + else: + # Revert radio button selection + self.calibration_type_var.set('convert' if self.calibration_type_var.get() == 'calculate' else 'calculate') + return + + # Show/hide appropriate frames + if self.calibration_type_var.get() == 'calculate': + self.calculate_frame.pack(fill='x', pady=5) + self.convert_frame.pack_forget() + self.qualisys_frame.pack_forget() + else: + self.calculate_frame.pack_forget() + self.convert_frame.pack(fill='x', pady=5) + + # Show/hide Qualisys-specific settings + if self.convert_from_var.get() == 'qualisys': + self.qualisys_frame.pack(fill='x', pady=5) + else: + self.qualisys_frame.pack_forget() + + def confirm_calibration_type(self): + """Confirm the calibration type configuration""" + try: + # Validate number of cameras + num_cameras = int(self.num_cameras_var.get()) + if num_cameras < 2: + messagebox.showerror( + "Error", + "Number of cameras must be at least 2" + ) + return + + # Validate calculate-specific inputs + if self.calibration_type_var.get() == 'calculate': + if not all([ + self.checkerboard_width_var.get(), + self.checkerboard_height_var.get(), + self.square_size_var.get(), + self.video_extension_var.get() + ]): + messagebox.showerror("Error", "All fields must be filled") + return + + # Generate and display checkerboard preview + checkerboard_width = int(self.checkerboard_width_var.get()) + checkerboard_height = int(self.checkerboard_height_var.get()) + square_size = float(self.square_size_var.get()) + + self.display_checkerboard(checkerboard_width, checkerboard_height, square_size) + + # Disable inputs + self.camera_entry.configure(state='disabled') + if self.calibration_type_var.get() == 'calculate': + self.width_entry.configure(state='disabled') + self.height_entry.configure(state='disabled') + self.square_entry.configure(state='disabled') + self.extension_entry.configure(state='disabled') + else: + self.convert_menu.configure(state='disabled') + + # Update buttons + self.confirm_button.pack_forget() + self.proceed_button.pack(pady=10) + + # Set confirmed flag + self.type_confirmed = True + + messagebox.showinfo( + "Configuration Confirmed", + "Calibration configuration confirmed. Click 'Proceed with Calibration' when ready." 
+ ) + + except ValueError: + messagebox.showerror("Error", "Please enter valid numeric values.") + + def display_checkerboard(self, width, height, square_size): + """Display a checkerboard preview""" + # Clear existing widgets + for widget in self.checkerboard_frame.winfo_children(): + widget.destroy() + + # Generate checkerboard image + checkerboard_image = generate_checkerboard_image(width, height, square_size) + + # Checkerboard preview title + ctk.CTkLabel( + self.checkerboard_frame, + text="Checkerboard Preview:", + font=("Helvetica", 16, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Resize for display + max_size = 200 + img_width, img_height = checkerboard_image.size + scale = min(max_size / img_width, max_size / img_height, 1) + display_image = checkerboard_image.resize( + (int(img_width * scale), int(img_height * scale)), + Image.Resampling.LANCZOS + ) + + # Convert to CTkImage + ctk_img = ctk.CTkImage( + light_image=display_image, + dark_image=display_image, + size=(int(img_width * scale), int(img_height * scale)) + ) + + # Display checkerboard + image_label = ctk.CTkLabel(self.checkerboard_frame, image=ctk_img, text="") + image_label.ctk_image = ctk_img # Keep a reference + image_label.pack(padx=10, pady=10) + + # Save button + ctk.CTkButton( + self.checkerboard_frame, + text="Save as PDF", + command=lambda: self.save_checkerboard_as_pdf(checkerboard_image) + ).pack(pady=(0, 10)) + + def save_checkerboard_as_pdf(self, image): + """Save the checkerboard image as a PDF file""" + file_path = filedialog.asksaveasfilename( + defaultextension=".pdf", + filetypes=[("PDF files", "*.pdf")] + ) + if file_path: + image.save(file_path, "PDF") + messagebox.showinfo( + "Saved", + f"Checkerboard image saved as {file_path}" + ) + + def proceed_calibration(self): + """Proceed with calibration setup""" + if not self.type_confirmed: + messagebox.showerror("Error", "Please confirm your configuration first") + return + + # Get number of cameras + try: + num_cameras = int(self.num_cameras_var.get()) + except ValueError: + messagebox.showerror("Error", "Invalid number of cameras") + return + + # Process based on calibration type + if self.calibration_type_var.get() == 'calculate': + # Create calibration folders + self.create_calibration_folders(num_cameras) + + # Input checkerboard videos + if not self.input_checkerboard_videos(num_cameras): + return + + # Input scene videos + if not self.input_scene_videos(num_cameras): + return + + # Input scene coordinates + if not self.input_scene_coordinates(): + return + + else: # convert + # Input calibration file to convert + if not self.input_calibration_file(): + return + + # Update the progress only after all coordinates are entered in the input_scene_coordinates method + + def create_calibration_folders(self, num_cameras): + """Create the necessary calibration folders""" + # Define base path based on analysis mode + base_path = Path(self.app.participant_name) / 'calibration' + + # Create folders for each camera + for cam in range(1, num_cameras + 1): + intrinsics_folder = base_path / 'intrinsics' / f'int_cam{cam}_img' + extrinsics_folder = base_path / 'extrinsics' / f'ext_cam{cam}_img' + + # Create directories + intrinsics_folder.mkdir(parents=True, exist_ok=True) + extrinsics_folder.mkdir(parents=True, exist_ok=True) + + def input_checkerboard_videos(self, num_cameras): + """Input checkerboard videos/images for each camera""" + messagebox.showinfo( + "Input Checkerboard Videos", + "Please select the checkerboard videos/images for each camera." 
+        )
+
+        base_path = Path(self.app.participant_name) / 'calibration'
+
+        for cam in range(1, num_cameras + 1):
+            file_path = filedialog.askopenfilename(
+                title=f"Select Checkerboard Video/Image for Camera {cam}",
+                filetypes=[
+                    ("Video/Image files", f"*.{self.video_extension_var.get()}"),
+                    ("All files", "*.*")
+                ]
+            )
+
+            if not file_path:
+                messagebox.showerror("Error", f"No file selected for camera {cam}")
+                return False
+
+            # Link into the appropriate folder (a symlink, not a copy, to avoid duplicating large videos)
+            dest_folder = base_path / 'intrinsics' / f'int_cam{cam}_img'
+            dest_path = dest_folder / Path(file_path).name
+            if dest_path.exists():
+                dest_path.unlink()
+            dest_path.symlink_to(file_path)
+
+        return True
+
+    def input_scene_videos(self, num_cameras):
+        """Input scene videos/images for each camera"""
+        messagebox.showinfo(
+            "Input Scene Videos",
+            "Please select the scene videos/images for each camera."
+        )
+
+        base_path = Path(self.app.participant_name) / 'calibration'
+
+        for cam in range(1, num_cameras + 1):
+            file_path = filedialog.askopenfilename(
+                title=f"Select Scene Video/Image for Camera {cam}",
+                filetypes=[
+                    ("Video/Image files", f"*.{self.video_extension_var.get()}"),
+                    ("All files", "*.*")
+                ]
+            )
+
+            if not file_path:
+                messagebox.showerror("Error", f"No file selected for camera {cam}")
+                return False
+
+            # Link into the appropriate folder (symlink rather than copy)
+            dest_folder = base_path / 'extrinsics' / f'ext_cam{cam}_img'
+            dest_path = dest_folder / Path(file_path).name
+            if dest_path.exists():
+                dest_path.unlink()
+            dest_path.symlink_to(file_path)
+
+        return True
+
+    def input_scene_coordinates(self):
+        """Input scene coordinates for calibration with zoomable image"""
+        # Clear any existing content in right panel
+        for widget in self.right_panel.winfo_children():
+            widget.destroy()
+
+        # Show points frame in the right panel
+        self.points_frame = ctk.CTkFrame(self.right_panel)
+        self.points_frame.pack(fill='both', expand=True)
+
+        # Choose a scene image/video for reference
+        file_path = filedialog.askopenfilename(
+            title="Select a Scene Image/Video for Point Selection",
+            filetypes=[
+                ("Video/Image files", f"*.{self.video_extension_var.get()}"),
+                ("All files", "*.*")
+            ]
+        )
+
+        if not file_path:
+            messagebox.showerror("Error", "No file selected for point selection")
+            return False
+
+        # Load image from video if video file
+        if Path(file_path).suffix.lower() in ('.mp4', '.avi', '.mov'):
+            cap = cv2.VideoCapture(file_path)
+            ret, frame = cap.read()
+            cap.release()
+            if not ret:
+                messagebox.showerror("Error", "Failed to read video frame")
+                return False
+            scene_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+        else:
+            scene_image = plt.imread(file_path)
+
+        # Create matplotlib figure for point selection with zoom capability
+        self.fig, self.ax = plt.subplots(figsize=(10, 8))
+        self.ax.imshow(scene_image)
+        self.ax.set_title("Click to select 8 points for calibration (use mouse wheel to zoom, right-click to remove last point)")
+
+        # Store selected points
+        self.points_2d = []
+        self.point_markers = []
+
+        # Add navigation toolbar for zoom functionality
+        from matplotlib.backends.backend_tkagg import NavigationToolbar2Tk
+
+        # Create canvas widget
+        self.canvas = FigureCanvasTkAgg(self.fig, master=self.points_frame)
+        self.canvas.draw()
+        self.canvas.get_tk_widget().pack(fill='both', expand=True)
+
+        # Add toolbar with zoom capabilities
+        self.toolbar_frame = tk.Frame(self.points_frame)
+        self.toolbar_frame.pack(fill='x')
+        self.toolbar = NavigationToolbar2Tk(self.canvas, self.toolbar_frame)
+        self.toolbar.update()
+
+        # Connect the toolbar events
to track zoom state + self.toolbar.pan() # Start in pan mode + self.toolbar.mode = "" # Reset mode + original_update = self.toolbar.update + + def custom_update(): + # Track if we're in zoom or pan mode + self.zooming_mode = self.toolbar.mode in ('zoom rect', 'pan/zoom') + original_update() + + self.toolbar.update = custom_update + + # Click handler for selecting points + def onclick(event): + # Only handle clicks if we're not in zoom or pan mode + if event.inaxes == self.ax and event.button == 1 and not self.zooming_mode: + if len(self.points_2d) < 8: + x, y = event.xdata, event.ydata + if x is not None and y is not None: + self.points_2d.append((x, y)) + # Plot point in red + point = self.ax.plot(x, y, 'ro')[0] + self.point_markers.append(point) + # Add point number + self.ax.text(x + 10, y + 10, str(len(self.points_2d)), color='white', + fontsize=14, fontweight='bold', bbox=dict(facecolor='black', alpha=0.7)) + self.fig.canvas.draw() + + if len(self.points_2d) == 8: + # Process the points + self.process_coordinate_input() + + # Right-click handler to remove the last point + def on_right_click(event): + if event.inaxes == self.ax and event.button == 3 and not self.zooming_mode: + if len(self.points_2d) > 0: + # Remove the last point + self.points_2d.pop() + # Remove the marker + last_marker = self.point_markers.pop() + last_marker.remove() + # Remove any text annotations for this point (approximate by finding last added) + for text in self.ax.texts: + if text.get_text() == str(len(self.points_2d) + 1): + text.remove() + break + self.fig.canvas.draw() + + # Connect click events + self.canvas.mpl_connect('button_press_event', onclick) + self.canvas.mpl_connect('button_press_event', on_right_click) + + # Instructions label + ctk.CTkLabel( + self.points_frame, + text="Click to select 8 points for calibration. Use mouse wheel or toolbar to zoom. 
Right-click to remove last point.", + wraplength=600, + font=("Helvetica", 12), + text_color="gray" + ).pack(pady=10) + + return True + + def process_coordinate_input(self): + """Process the 8 selected points and input their 3D coordinates""" + # Predefined coordinates layout with origin at first point + predefined_coords = [ + [0.0, 0.0, 0.0], + [0.5, 0.0, 0.0], + [1.0, 0.0, 0.0], + [0.0, 0.5, 0.0], + [0.5, 0.5, 0.0], + [1.0, 0.5, 0.0], + [0.0, 0.0, 0.5], + [0.0, 0.5, 0.5] + ] + + def create_coordinate_window(point_idx): + if point_idx >= len(self.points_2d): + # All points processed + self.save_coordinates_to_config() + return + + # Change current point to yellow + if point_idx < len(self.point_markers): + self.point_markers[point_idx].set_color('yellow') + self.fig.canvas.draw() + + # Create window for coordinate input + coord_win = ctk.CTkToplevel(self.app.root) + coord_win.title(f"Point {point_idx + 1} Coordinates") + coord_win.geometry("400x300") + coord_win.transient(self.app.root) + coord_win.grab_set() + + # Main frame + main_frame = ctk.CTkFrame(coord_win) + main_frame.pack(fill='both', expand=True, padx=20, pady=20) + + # Title + ctk.CTkLabel( + main_frame, + text=f"Enter 3D Coordinates for Point {point_idx + 1}", + font=("Helvetica", 16, "bold") + ).pack(pady=(0, 20)) + + # For first point, use [0,0,0] and disable editing + x_var = ctk.StringVar(value=str(predefined_coords[point_idx][0])) + y_var = ctk.StringVar(value=str(predefined_coords[point_idx][1])) + z_var = ctk.StringVar(value=str(predefined_coords[point_idx][2])) + + # X coordinate + x_frame = ctk.CTkFrame(main_frame) + x_frame.pack(fill='x', pady=5) + ctk.CTkLabel(x_frame, text="X (meters):", width=100).pack(side='left', padx=5) + x_entry = ctk.CTkEntry(x_frame, textvariable=x_var, width=150) + x_entry.pack(side='left', padx=5) + + # Y coordinate + y_frame = ctk.CTkFrame(main_frame) + y_frame.pack(fill='x', pady=5) + ctk.CTkLabel(y_frame, text="Y (meters):", width=100).pack(side='left', padx=5) + y_entry = ctk.CTkEntry(y_frame, textvariable=y_var, width=150) + y_entry.pack(side='left', padx=5) + + # Z coordinate + z_frame = ctk.CTkFrame(main_frame) + z_frame.pack(fill='x', pady=5) + ctk.CTkLabel(z_frame, text="Z (meters):", width=100).pack(side='left', padx=5) + z_entry = ctk.CTkEntry(z_frame, textvariable=z_var, width=150) + z_entry.pack(side='left', padx=5) + + # Disable entries for first point + if point_idx == 0: + x_entry.configure(state='disabled') + y_entry.configure(state='disabled') + z_entry.configure(state='disabled') + + # Submit function + def submit_coords(): + try: + x = float(x_var.get()) + y = float(y_var.get()) + z = float(z_var.get()) + + # Save coordinates + self.object_coords_3d.append([x, y, z]) + + # Change point color to green + if point_idx < len(self.point_markers): + self.point_markers[point_idx].set_color('green') + self.fig.canvas.draw() + + # Close window + coord_win.destroy() + + # Process next point + self.app.root.after(100, lambda: create_coordinate_window(point_idx + 1)) + + except ValueError: + messagebox.showerror("Error", "Please enter valid numbers for coordinates") + + # Submit button + ctk.CTkButton( + main_frame, + text="Next Point", + command=submit_coords, + height=40, + width=150 + ).pack(pady=20) + + # Start with first point + create_coordinate_window(0) + + def save_coordinates_to_config(self): + """Save the 3D coordinates to the config file""" + # Only update progress after all coordinates are entered + messagebox.showinfo( + "Calibration Complete", + "The 3D coordinates 
have been saved. You will need to click on these same points, in the same order, when the Pose2Sim calibration step runs."
+        )
+
+        # Update the progress bar and tab indicator now that all steps are complete
+        if hasattr(self.app, 'update_tab_indicator'):
+            self.app.update_tab_indicator('calibration', True)
+        if hasattr(self.app, 'update_progress_bar') and hasattr(self.app, 'progress_steps'):
+            progress_value = self.app.progress_steps.get('calibration', 15)
+            self.app.update_progress_bar(progress_value)
+
+    def input_calibration_file(self):
+        """Input a calibration file for conversion"""
+        file_path = filedialog.askopenfilename(
+            title="Select Calibration File to Convert",
+            filetypes=[
+                ("All files", "*.*"),
+                ("QTM files", "*.qtm"),
+                ("CSV files", "*.csv"),
+                ("XML files", "*.xml")
+            ]
+        )
+
+        if not file_path:
+            messagebox.showerror("Error", "No calibration file selected")
+            return False
+
+        # Create calibration folder
+        calibration_path = Path(self.app.participant_name) / 'calibration'
+        calibration_path.mkdir(parents=True, exist_ok=True)
+
+        # Link the file into the calibration folder (symlink rather than copy)
+        dest_path = calibration_path / Path(file_path).name
+        if dest_path.exists():
+            dest_path.unlink()
+        dest_path.symlink_to(file_path)
+
+        # Update progress now that conversion is complete
+        if hasattr(self.app, 'update_tab_indicator'):
+            self.app.update_tab_indicator('calibration', True)
+        if hasattr(self.app, 'update_progress_bar') and hasattr(self.app, 'progress_steps'):
+            progress_value = self.app.progress_steps.get('calibration', 15)
+            self.app.update_progress_bar(progress_value)
+
+        # Show success message
+        messagebox.showinfo(
+            "Calibration Complete",
+            "Calibration file has been imported successfully."
+        )
+
+        return True
\ No newline at end of file
diff --git a/GUI/tabs/pose_model_tab.py b/GUI/tabs/pose_model_tab.py
new file mode 100644
index 00000000..e338c0fc
--- /dev/null
+++ b/GUI/tabs/pose_model_tab.py
@@ -0,0 +1,974 @@
+import os
+import shutil
+import tkinter as tk
+import customtkinter as ctk
+from tkinter import filedialog, messagebox, simpledialog
+
+class PoseModelTab:
+    def __init__(self, parent, app, simplified=False):
+        self.parent = parent
+        self.app = app
+        self.simplified = simplified  # Flag for 2D mode
+
+        # Create main frame
+        self.frame = ctk.CTkFrame(parent)
+
+        # Initialize variables
+        self.multiple_persons_var = ctk.StringVar(value='single')
+        self.participant_height_var = ctk.StringVar(value='1.72')
+        self.participant_mass_var = ctk.StringVar(value='70.0')
+        self.pose_model_var = ctk.StringVar(value='Body_with_feet')
+        self.mode_var = ctk.StringVar(value='balanced')
+        self.tracking_mode_var = ctk.StringVar(value='sports2d')  # Added tracking mode variable
+        self.video_extension_var = ctk.StringVar(value='mp4')
+
+        # For 2D mode
+        if simplified:
+            self.video_input_var = ctk.StringVar(value='')
+            self.visible_side_var = ctk.StringVar(value='auto')
+            self.video_input_type_var = ctk.StringVar(value='file')  # 'file', 'webcam', or 'multiple'
+            self.multiple_videos_list = []  # Store multiple video paths
+
+        # For multiple people
+        self.num_people_var = ctk.StringVar(value='2')
+        self.people_details_vars = []
+        self.participant_heights = []
+        self.participant_masses = []
+
+        # Build the UI
+        self.build_ui()
+
+    def get_title(self):
+        """Return the tab title"""
+        return self.app.lang_manager.get_text('pose_model_tab')
+
+    def get_settings(self):
+        """Get the pose model settings"""
+        # Common settings for both 2D and 3D
+        settings = {
+            'pose': {
+                'pose_model': self.pose_model_var.get(),
+                'mode': self.mode_var.get(),
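+                # Hedged note: 'sports2d' is the tracker name this GUI defaults
+                # to; other values (e.g. 'deepsort') may be accepted, but that is
+                # an assumption; check the Pose2Sim docs rather than this comment.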
'tracking_mode': self.tracking_mode_var.get(), + 'vid_img_extension': self.video_extension_var.get() + }, + 'project': { + 'multi_person': self.multiple_persons_var.get() == 'multiple' + } + } + + # Add participant details for 3D mode + if not self.simplified: + if self.multiple_persons_var.get() == 'single': + settings['project']['participant_height'] = float(self.participant_height_var.get()) + settings['project']['participant_mass'] = float(self.participant_mass_var.get()) + else: + settings['project']['participant_height'] = self.participant_heights + settings['project']['participant_mass'] = self.participant_masses + + # Add 2D-specific settings + if self.simplified: + # Sports2D reads these settings from the 'base' section, not 'project' + if 'base' not in settings: + settings['base'] = {} + + # Handle different video input types for 2D mode + if self.video_input_type_var.get() == 'webcam': + settings['base']['video_input'] = 'webcam' + elif self.video_input_type_var.get() == 'multiple' and self.multiple_videos_list: + settings['base']['video_input'] = self.multiple_videos_list + else: + settings['base']['video_input'] = self.video_input_var.get() + + settings['base']['visible_side'] = self.visible_side_var.get() + settings['base']['first_person_height'] = float(self.participant_height_var.get()) + + return settings + + def build_ui(self): + # Create scrollable container + self.content_frame = ctk.CTkScrollableFrame(self.frame) + self.content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Tab title + ctk.CTkLabel( + self.content_frame, + text=self.get_title(), + font=('Helvetica', 24, 'bold') + ).pack(anchor='w', pady=(0, 20)) + + # Build appropriate UI based on mode + if self.simplified: + self.build_2d_ui() + else: + self.build_3d_ui() + + def build_2d_ui(self): + """Build the UI for 2D analysis""" + # Video input section + video_frame = ctk.CTkFrame(self.content_frame) + video_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + video_frame, + text="Video Input:", + font=("Helvetica", 16, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Video input type selection (File, Webcam, Multiple files) + input_type_frame = ctk.CTkFrame(video_frame) + input_type_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + input_type_frame, + text="Input Type:", + width=100 + ).pack(side='left', padx=5) + + input_type_radio_frame = ctk.CTkFrame(input_type_frame, fg_color="transparent") + input_type_radio_frame.pack(side='left', fill='x', expand=True) + + ctk.CTkRadioButton( + input_type_radio_frame, + text="Single Video File", + variable=self.video_input_type_var, + value='file', + command=self.on_video_input_type_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + input_type_radio_frame, + text="Webcam", + variable=self.video_input_type_var, + value='webcam', + command=self.on_video_input_type_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + input_type_radio_frame, + text="Multiple Videos", + variable=self.video_input_type_var, + value='multiple', + command=self.on_video_input_type_change + ).pack(side='left', padx=10) + + # Container for video input options (changes based on selection) + self.video_input_container = ctk.CTkFrame(video_frame) + self.video_input_container.pack(fill='x', padx=10, pady=5) + + # Initialize with single file option as default + 
self.build_single_video_input() + + # Person details section + self.build_person_section() + + # Pose model section + self.build_pose_model_section() + + # Visible side selection + side_frame = ctk.CTkFrame(self.content_frame) + side_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + side_frame, + text="Visible Side:", + font=("Helvetica", 16, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + side_options = ['auto', 'right', 'left', 'front', 'back', 'none'] + side_menu = ctk.CTkOptionMenu( + side_frame, + variable=self.visible_side_var, + values=side_options, + width=150 + ) + side_menu.pack(anchor='w', padx=30, pady=10) + + # Proceed button + ctk.CTkButton( + self.content_frame, + text=self.app.lang_manager.get_text('proceed_pose_estimation'), + command=self.proceed_pose_estimation, + height=40, + width=200, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ).pack(side='bottom', pady=20) + + def build_single_video_input(self): + """Build the UI for single video file input""" + # Clear existing content + for widget in self.video_input_container.winfo_children(): + widget.destroy() + + # Video path display and browse button + path_frame = ctk.CTkFrame(self.video_input_container, fg_color="transparent") + path_frame.pack(fill='x', pady=5) + + self.path_entry = ctk.CTkEntry( + path_frame, + textvariable=self.video_input_var, + width=400 + ) + self.path_entry.pack(side='left', padx=(0, 10)) + + ctk.CTkButton( + path_frame, + text="Browse", + command=self.browse_video, + width=100 + ).pack(side='left') + + def build_webcam_input(self): + """Build the UI for webcam input""" + # Clear existing content + for widget in self.video_input_container.winfo_children(): + widget.destroy() + + # Webcam info label + webcam_info_frame = ctk.CTkFrame(self.video_input_container, fg_color="transparent") + webcam_info_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + webcam_info_frame, + text="Webcam will be used when Sports2D is launched.\nNo additional configuration needed.", + wraplength=500, + font=("Helvetica", 14), + text_color=("gray20", "gray90") + ).pack(pady=10) + + # Set value for config + self.video_input_var.set("webcam") + + def build_multiple_videos_input(self): + """Build the UI for multiple video files input""" + # Clear existing content + for widget in self.video_input_container.winfo_children(): + widget.destroy() + + # Create a frame for the list and controls + list_frame = ctk.CTkFrame(self.video_input_container) + list_frame.pack(fill='x', pady=5) + + # Video list (scrollable) + self.videos_list_frame = ctk.CTkScrollableFrame(list_frame, height=150) + self.videos_list_frame.pack(fill='x', expand=True, pady=5) + + # Add controls + controls_frame = ctk.CTkFrame(list_frame, fg_color="transparent") + controls_frame.pack(fill='x', pady=5) + + ctk.CTkButton( + controls_frame, + text="Add Video", + command=self.add_video_to_list, + width=120 + ).pack(side='left', padx=5) + + ctk.CTkButton( + controls_frame, + text="Clear All", + command=self.clear_videos_list, + width=120 + ).pack(side='left', padx=5) + + # Update the videos list display + self.update_videos_list_display() + + def update_videos_list_display(self): + """Update the display of multiple videos list""" + # Clear current list display + for widget in self.videos_list_frame.winfo_children(): + widget.destroy() + + if not self.multiple_videos_list: + ctk.CTkLabel( + self.videos_list_frame, + text="No videos added yet. 
Click 'Add Video' to begin.", + text_color="gray" + ).pack(pady=10) + return + + # Add each video to the list + for i, video_path in enumerate(self.multiple_videos_list): + video_frame = ctk.CTkFrame(self.videos_list_frame) + video_frame.pack(fill='x', pady=2) + + # Show just the filename to save space + filename = os.path.basename(video_path) + ctk.CTkLabel( + video_frame, + text=f"{i+1}. {filename}", + width=400, + anchor="w" + ).pack(side='left', padx=5) + + # Remove button + ctk.CTkButton( + video_frame, + text="✕", + width=30, + command=lambda idx=i: self.remove_video_from_list(idx) + ).pack(side='right', padx=5) + + def add_video_to_list(self): + """Add a video to the multiple videos list""" + file_path = filedialog.askopenfilename( + title="Select Video File", + filetypes=[ + ("Video files", "*.mp4 *.avi *.mov *.mpeg"), + ("All files", "*.*") + ] + ) + + if file_path: + self.multiple_videos_list.append(file_path) + self.update_videos_list_display() + + def remove_video_from_list(self, index): + """Remove a video from the multiple videos list""" + if 0 <= index < len(self.multiple_videos_list): + del self.multiple_videos_list[index] + self.update_videos_list_display() + + def clear_videos_list(self): + """Clear all videos from the list""" + self.multiple_videos_list = [] + self.update_videos_list_display() + + def on_video_input_type_change(self): + """Handle change in video input type selection""" + input_type = self.video_input_type_var.get() + + if input_type == 'file': + self.build_single_video_input() + elif input_type == 'webcam': + self.build_webcam_input() + elif input_type == 'multiple': + self.build_multiple_videos_input() + + def build_3d_ui(self): + """Build the UI for 3D analysis""" + # Multiple persons section + multiple_frame = ctk.CTkFrame(self.content_frame) + multiple_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + multiple_frame, + text=self.app.lang_manager.get_text('multiple_persons'), + width=150 + ).pack(side='left', padx=10, pady=10) + + radio_frame = ctk.CTkFrame(multiple_frame, fg_color="transparent") + radio_frame.pack(side='left', fill='x', expand=True) + + ctk.CTkRadioButton( + radio_frame, + text=self.app.lang_manager.get_text('single_person'), + variable=self.multiple_persons_var, + value='single', + command=self.on_multiple_persons_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + radio_frame, + text=self.app.lang_manager.get_text('multiple_persons'), + variable=self.multiple_persons_var, + value='multiple', + command=self.on_multiple_persons_change + ).pack(side='left', padx=10) + + # Person details frame + self.person_frame = ctk.CTkFrame(self.content_frame) + self.person_frame.pack(fill='x', pady=10) + + # Initially show single person details + self.build_single_person_details() + + # Pose model section + self.build_pose_model_section() + + # Proceed button + ctk.CTkButton( + self.content_frame, + text=self.app.lang_manager.get_text('proceed_pose_estimation'), + command=self.proceed_pose_estimation, + height=40, + width=200, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ).pack(pady=20) + + def build_pose_model_section(self): + """Build the pose model selection section""" + model_frame = ctk.CTkFrame(self.content_frame) + model_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + model_frame, + text=self.app.lang_manager.get_text('pose_model_selection'), + font=("Helvetica", 16, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Pose model selection + model_menu_frame = ctk.CTkFrame(model_frame) + 
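# The menus below feed get_settings(): the pose model and mode map to the 'pose' + # section, and on_pose_model_change() hides the mode menu for models without one. +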
model_menu_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + model_menu_frame, + text="Model:", + width=100 + ).pack(side='left', padx=5) + + # Available pose models + pose_models = [ + 'Body_with_feet', 'Whole_body_wrist', 'Whole_body', 'Body', + 'Hand', 'Face', 'Animal' + ] + + self.pose_model_menu = ctk.CTkOptionMenu( + model_menu_frame, + variable=self.pose_model_var, + values=pose_models, + width=200, + command=self.on_pose_model_change + ) + self.pose_model_menu.pack(side='left', padx=5) + + # Mode selection + self.mode_frame = ctk.CTkFrame(model_frame) + self.mode_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + self.mode_frame, + text=self.app.lang_manager.get_text('mode'), + width=100 + ).pack(side='left', padx=5) + + mode_options = ['lightweight', 'balanced', 'performance'] + self.mode_menu = ctk.CTkOptionMenu( + self.mode_frame, + variable=self.mode_var, + values=mode_options, + width=200 + ) + self.mode_menu.pack(side='left', padx=5) + + # Tracking mode selection + self.tracking_frame = ctk.CTkFrame(model_frame) + self.tracking_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + self.tracking_frame, + text="Tracking Mode:", + width=100 + ).pack(side='left', padx=5) + + # Available tracking backends + tracking_options = ['sports2d', 'deepsort'] + self.tracking_mode_menu = ctk.CTkOptionMenu( + self.tracking_frame, + variable=self.tracking_mode_var, + values=tracking_options, + width=200 + ) + self.tracking_mode_menu.pack(side='left', padx=5) + + # Tracking mode info button + ctk.CTkButton( + self.tracking_frame, + text="?", + width=30, + command=self.show_tracking_info + ).pack(side='left', padx=5) + + # Video extension + extension_frame = ctk.CTkFrame(model_frame) + extension_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + extension_frame, + text=self.app.lang_manager.get_text('video_extension'), + width=150 + ).pack(side='left', padx=5) + + ctk.CTkEntry( + extension_frame, + textvariable=self.video_extension_var, + width=100 + ).pack(side='left', padx=5) + + # Apply the current pose model selection + self.on_pose_model_change(self.pose_model_var.get()) + + def show_tracking_info(self): + """Show info about tracking modes""" + messagebox.showinfo( + "Tracking Modes", + "sports2d: Default tracking optimized for sports applications\n\n" + "deepsort: Advanced tracking algorithm with better multi-person ID consistency" + ) + + def build_person_section(self): + """Build the section for personal information in 2D mode""" + person_frame = ctk.CTkFrame(self.content_frame) + person_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + person_frame, + text="Participant Information:", + font=("Helvetica", 16, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Multiple persons selection + multiple_frame = ctk.CTkFrame(person_frame) + multiple_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkLabel( + multiple_frame, + text=self.app.lang_manager.get_text('multiple_persons'), + width=150 + ).pack(side='left', padx=5) + + radio_frame = ctk.CTkFrame(multiple_frame, fg_color="transparent") + radio_frame.pack(side='left') + + ctk.CTkRadioButton( + radio_frame, + text=self.app.lang_manager.get_text('single_person'), + variable=self.multiple_persons_var, + value='single', + command=self.on_multiple_persons_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + radio_frame, + text=self.app.lang_manager.get_text('multiple_persons'), + variable=self.multiple_persons_var, + value='multiple', + 
command=self.on_multiple_persons_change + ).pack(side='left', padx=10) + + # Person details container + self.person_frame = ctk.CTkFrame(person_frame) + self.person_frame.pack(fill='x', padx=10, pady=5) + + # Initially show single person details + self.build_single_person_details() + + def build_single_person_details(self): + """Build form for single person details""" + # Clear existing widgets + for widget in self.person_frame.winfo_children(): + widget.destroy() + + # Height input + height_frame = ctk.CTkFrame(self.person_frame) + height_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + height_frame, + text=self.app.lang_manager.get_text('participant_height'), + width=150 + ).pack(side='left', padx=5) + + ctk.CTkEntry( + height_frame, + textvariable=self.participant_height_var, + width=100 + ).pack(side='left', padx=5) + + # Mass input + mass_frame = ctk.CTkFrame(self.person_frame) + mass_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + mass_frame, + text=self.app.lang_manager.get_text('participant_mass'), + width=150 + ).pack(side='left', padx=5) + + ctk.CTkEntry( + mass_frame, + textvariable=self.participant_mass_var, + width=100 + ).pack(side='left', padx=5) + + def build_multiple_persons_form(self): + """Build form for multiple persons details""" + # Clear existing widgets + for widget in self.person_frame.winfo_children(): + widget.destroy() + + # Input for number of people + num_people_frame = ctk.CTkFrame(self.person_frame) + num_people_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + num_people_frame, + text=self.app.lang_manager.get_text('number_of_people'), + width=150 + ).pack(side='left', padx=5) + + ctk.CTkEntry( + num_people_frame, + textvariable=self.num_people_var, + width=100 + ).pack(side='left', padx=5) + + ctk.CTkButton( + num_people_frame, + text=self.app.lang_manager.get_text('submit_number'), + command=self.create_people_details_inputs, + width=100 + ).pack(side='left', padx=10) + + def create_people_details_inputs(self): + """Create input fields for each person's details""" + try: + num_people = int(self.num_people_var.get()) + if num_people < 1: + raise ValueError("Number of people must be positive") + except ValueError as e: + messagebox.showerror( + "Error", + f"Invalid number of people: {str(e)}" + ) + return + + # Clear previous inputs except for the number of people frame + for widget in list(self.person_frame.winfo_children())[1:]: + widget.destroy() + + # Create scrollable frame for many people + details_frame = ctk.CTkScrollableFrame(self.person_frame, height=200) + details_frame.pack(fill='x', pady=10) + + # Clear previous vars + self.people_details_vars = [] + + # Create input fields for each person + for i in range(num_people): + person_frame = ctk.CTkFrame(details_frame) + person_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + person_frame, + text=f"Person {i+1}", + font=("Helvetica", 14, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Height + height_frame = ctk.CTkFrame(person_frame) + height_frame.pack(fill='x', pady=2) + + ctk.CTkLabel( + height_frame, + text=self.app.lang_manager.get_text('height'), + width=100 + ).pack(side='left', padx=5) + + height_var = ctk.StringVar(value="1.72") + ctk.CTkEntry( + height_frame, + textvariable=height_var, + width=100 + ).pack(side='left', padx=5) + + # Mass + mass_frame = ctk.CTkFrame(person_frame) + mass_frame.pack(fill='x', pady=2) + + ctk.CTkLabel( + mass_frame, + text=self.app.lang_manager.get_text('mass'), + width=100 + ).pack(side='left', padx=5) + + mass_var = ctk.StringVar(value="70.0") + 
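# Height and mass default to the single-person values (1.72 m, 70.0 kg); they are + # validated and stored per person when submit_people_details() runs. +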
ctk.CTkEntry( + mass_frame, + textvariable=mass_var, + width=100 + ).pack(side='left', padx=5) + + # Store vars + self.people_details_vars.append((height_var, mass_var)) + + # Add submit button + ctk.CTkButton( + self.person_frame, + text=self.app.lang_manager.get_text('submit'), + command=self.submit_people_details, + width=150, + height=40 + ).pack(pady=10) + + def submit_people_details(self): + """Process and validate people details""" + heights = [] + masses = [] + + try: + for i, (height_var, mass_var) in enumerate(self.people_details_vars): + height = float(height_var.get()) + mass = float(mass_var.get()) + + if height <= 0 or mass <= 0: + raise ValueError(f"Invalid values for person {i+1}") + + heights.append(height) + masses.append(mass) + + # Store values + self.participant_heights = heights + self.participant_masses = masses + + messagebox.showinfo( + "Success", + "Participant details saved successfully." + ) + + except ValueError as e: + messagebox.showerror( + "Error", + f"Invalid input: {str(e)}" + ) + + def on_multiple_persons_change(self): + """Handle change in multiple persons selection""" + if self.multiple_persons_var.get() == 'single': + self.build_single_person_details() + else: + self.build_multiple_persons_form() + + def on_pose_model_change(self, value): + """Show/hide mode selection based on pose model""" + if value in ['Body_with_feet', 'Whole_body_wrist', 'Whole_body', 'Body']: + self.mode_frame.pack(fill='x', padx=10, pady=5) + else: + self.mode_frame.pack_forget() + + def browse_video(self): + """Browse for video file in 2D mode""" + file_path = filedialog.askopenfilename( + title="Select Video File", + filetypes=[ + ("Video files", "*.mp4 *.avi *.mov *.mpeg"), + ("All files", "*.*") + ] + ) + + if file_path: + self.video_input_var.set(file_path) + + def proceed_pose_estimation(self): + """Handle pose estimation configuration and video input""" + try: + # Validate inputs + if not self.video_extension_var.get(): + messagebox.showerror( + "Error", + "Please specify a video extension." + ) + return + + # Validate person details + if self.multiple_persons_var.get() == 'multiple': + if not hasattr(self, 'participant_heights') or not self.participant_heights: + messagebox.showerror( + "Error", + "Please enter and submit participant details for multiple persons." + ) + return + else: + try: + height = float(self.participant_height_var.get()) + mass = float(self.participant_mass_var.get()) + + if height <= 0 or mass <= 0: + raise ValueError("Height and mass must be positive.") + except ValueError as e: + messagebox.showerror( + "Error", + f"Invalid height or mass: {str(e)}" + ) + return + + # Handle 2D or 3D mode specifically + if self.simplified: + # 2D mode: check input type and handle accordingly + input_type = self.video_input_type_var.get() + + if input_type == 'webcam': + # Just set the value to 'webcam' in config, no file copying needed + messagebox.showinfo( + "Webcam Setup", + "Webcam will be used when Sports2D is launched.\nNo additional setup needed." + ) + elif input_type == 'multiple': + # Check if multiple videos are selected + if not self.multiple_videos_list: + messagebox.showerror( + "Error", + "Please add at least one video for multiple videos mode." + ) + return + + # For multiple videos, store paths but don't copy + # (Paths will be saved to config as a list) + pass + else: # single file + # Check that video is selected for single file mode + if not self.video_input_var.get(): + messagebox.showerror( + "Error", + "Please select a video file." 
+ ) + return + + # Copy video to participant directory if not already there + dest_dir = self.app.participant_name + os.makedirs(dest_dir, exist_ok=True) + + # Check if the file needs to be copied + if os.path.dirname(self.video_input_var.get()) != dest_dir: + # Get just the filename (preserve the same name for config) + filename = os.path.basename(self.video_input_var.get()) + dest_path = os.path.join(dest_dir, filename) + + # Copy the file + shutil.copy(self.video_input_var.get(), dest_path) + + # Update the path to only the filename for config_demo.toml + self.video_input_var.set(filename) + else: + # 3D mode: input videos for each camera + self.input_videos() + + # Update progress + if hasattr(self.app, 'update_tab_indicator'): + self.app.update_tab_indicator('pose_model', True) + if hasattr(self.app, 'update_progress_bar') and hasattr(self.app, 'progress_steps'): + progress_value = self.app.progress_steps.get('pose_model', 50) + self.app.update_progress_bar(progress_value) + + # Show success message + messagebox.showinfo( + "Pose Model Settings", + f"Pose model settings have been saved. Tracking mode: {self.tracking_mode_var.get()}" + ) + + # Move to next tab + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('pose_model') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + except Exception as e: + messagebox.showerror( + "Error", + f"An unexpected error occurred: {str(e)}" + ) + + def input_videos(self): + """Input videos for 3D mode""" + try: + # Get number of cameras + num_cameras = int(self.app.tabs['calibration'].num_cameras_var.get()) + + # Define target directory + if self.app.process_mode == 'batch': + # For batch mode, ask which trial to import videos for + trial_num = simpledialog.askinteger( + "Trial Selection", + f"Enter trial number (1-{self.app.num_trials}):", + minvalue=1, + maxvalue=self.app.num_trials + ) + + if not trial_num: + return False + + target_path = os.path.join(self.app.participant_name, f'Trial_{trial_num}', 'videos') + else: + # For single mode + target_path = os.path.join(self.app.participant_name, 'videos') + + # Create the directory if it doesn't exist + os.makedirs(target_path, exist_ok=True) + + # Check for existing videos + existing_videos = [ + f for f in os.listdir(target_path) + if f.endswith(self.video_extension_var.get()) + ] + + if existing_videos: + response = messagebox.askyesno( + "Existing Videos", + "Existing videos found. Do you want to replace them?" + ) + + if response: + # Delete existing videos + for video in existing_videos: + try: + os.remove(os.path.join(target_path, video)) + except Exception as e: + messagebox.showerror( + "Error", + f"Could not remove {video}: {str(e)}" + ) + return False + else: + # User chose not to replace + return False + + # Input new videos + for cam in range(1, num_cameras + 1): + file_path = filedialog.askopenfilename( + title=f"Select video for Camera {cam}", + filetypes=[("Video files", f"*.{self.video_extension_var.get()}")] + ) + + if not file_path: + messagebox.showerror( + "Error", + f"No file selected for camera {cam}" + ) + return False + + # Copy and rename the file + dest_filename = f"cam{cam}.{self.video_extension_var.get()}" + dest_path = os.path.join(target_path, dest_filename) + + # Copy the file + shutil.copy(file_path, dest_path) + + # Show completion message + messagebox.showinfo( + "Videos Imported", + "All videos have been imported successfully." 
+ ) + + return True + + except ValueError: + messagebox.showerror( + "Error", + "Invalid number of cameras." + ) + return False + except Exception as e: + messagebox.showerror( + "Error", + f"Error importing videos: {str(e)}" + ) + return False \ No newline at end of file diff --git a/GUI/tabs/prepare_video_tab.py b/GUI/tabs/prepare_video_tab.py new file mode 100644 index 00000000..7a496ce9 --- /dev/null +++ b/GUI/tabs/prepare_video_tab.py @@ -0,0 +1,728 @@ +from pathlib import Path +import cv2 +import customtkinter as ctk +from tkinter import messagebox +import threading +import subprocess +from PIL import Image +from customtkinter import CTkImage + +class PrepareVideoTab: + def __init__(self, parent, app): + """Initialize the Prepare Video tab""" + self.parent = parent + self.app = app + + # Create the main frame + self.frame = ctk.CTkFrame(parent) + + # Initialize state variables + self.editing_mode_var = ctk.StringVar(value='simple') + self.only_checkerboard_var = ctk.StringVar(value='yes') + self.time_interval_var = ctk.StringVar(value='1') + self.extrinsic_format_var = ctk.StringVar(value='png') + self.change_intrinsics_extension = False + self.current_camera_index = 0 + self.camera_image_list = [] + self.image_vars = [] + + # Build the tab UI + self.build_ui() + + def get_settings(self): + """Get the prepare video settings""" + settings = {} + # No specific settings to return for the prepare video tab + # as these are handled directly in the calibration settings + return settings + + def build_ui(self): + """Build the tab user interface""" + # Create a scrollable frame for content + content_frame = ctk.CTkScrollableFrame(self.frame) + content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Tab title + self.title_label = ctk.CTkLabel( + content_frame, + text="Prepare Video", + font=("Helvetica", 24, "bold") + ) + self.title_label.pack(pady=(0, 20)) + + # Editing mode selection - using a card-style UI + mode_frame = ctk.CTkFrame(content_frame) + mode_frame.pack(fill='x', pady=15) + + ctk.CTkLabel( + mode_frame, + text="Select Editing Mode:", + font=("Helvetica", 18, "bold") + ).pack(anchor="w", padx=15, pady=(10, 15)) + + # Mode selection buttons in a horizontal layout + buttons_frame = ctk.CTkFrame(mode_frame, fg_color="transparent") + buttons_frame.pack(fill='x', padx=15, pady=(0, 15)) + + # Simple mode button (as a card) + simple_card = ctk.CTkFrame(buttons_frame) + simple_card.pack(side='left', padx=10, pady=5, fill='x', expand=True) + + self.simple_mode_btn = ctk.CTkButton( + simple_card, + text="Simple Mode", + command=lambda: self.set_editing_mode('simple'), + width=150, + height=40, + font=("Helvetica", 14), + fg_color=("#3a7ebf", "#1f538d") # Default selected color + ) + self.simple_mode_btn.pack(pady=10, padx=10, fill='x') + + ctk.CTkLabel( + simple_card, + text="Basic extraction and processing", + font=("Helvetica", 12), + text_color="gray" + ).pack(pady=(0, 10), padx=10) + + # Advanced mode button (as a card) + advanced_card = ctk.CTkFrame(buttons_frame) + advanced_card.pack(side='left', padx=10, pady=5, fill='x', expand=True) + + self.advanced_mode_btn = ctk.CTkButton( + advanced_card, + text="Advanced Editing", + command=lambda: self.set_editing_mode('advanced'), + width=150, + height=40, + font=("Helvetica", 14) + ) + self.advanced_mode_btn.pack(pady=10, padx=10, fill='x') + + ctk.CTkLabel( + advanced_card, + text="Full-featured video editing tools", + font=("Helvetica", 12), + text_color="gray" + ).pack(pady=(0, 10), padx=10) + + # Divider + divider = 
ctk.CTkFrame(content_frame, height=2, fg_color="gray75") + divider.pack(fill='x', pady=15) + + # Simple mode frame + self.simple_mode_frame = ctk.CTkFrame(content_frame) + self.simple_mode_frame.pack(fill='x', pady=10) + + # Checkerboard-only option frame + checkerboard_frame = ctk.CTkFrame(self.simple_mode_frame) + checkerboard_frame.pack(fill='x', pady=10, padx=10) + + ctk.CTkLabel( + checkerboard_frame, + text="Do your videos contain only checkerboard images?", + font=("Helvetica", 14, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + radio_frame = ctk.CTkFrame(checkerboard_frame, fg_color="transparent") + radio_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkRadioButton( + radio_frame, + text="Yes", + variable=self.only_checkerboard_var, + value='yes', + command=self.on_only_checkerboard_change + ).pack(side='left', padx=20) + + ctk.CTkRadioButton( + radio_frame, + text="No", + variable=self.only_checkerboard_var, + value='no', + command=self.on_only_checkerboard_change + ).pack(side='left', padx=20) + + # Frame for time interval input (initially hidden) + self.time_extraction_frame = ctk.CTkFrame(self.simple_mode_frame) + + ctk.CTkLabel( + self.time_extraction_frame, + text="Enter time interval in seconds for image extraction:", + font=("Helvetica", 14, "bold") + ).pack(anchor='w', padx=15, pady=(10, 5)) + + time_input_frame = ctk.CTkFrame(self.time_extraction_frame, fg_color="transparent") + time_input_frame.pack(fill='x', padx=15, pady=5) + + ctk.CTkEntry( + time_input_frame, + textvariable=self.time_interval_var, + width=100 + ).pack(side='left', padx=5) + + ctk.CTkLabel( + time_input_frame, + text="seconds", + font=("Helvetica", 12) + ).pack(side='left', padx=5) + + # Extrinsic Format Frame + format_frame = ctk.CTkFrame(self.simple_mode_frame) + format_frame.pack(fill='x', pady=10, padx=10) + + ctk.CTkLabel( + format_frame, + text="Enter the image format (e.g., png, jpg):", + font=("Helvetica", 14, "bold") + ).pack(anchor='w', padx=10, pady=(10, 5)) + + format_input_frame = ctk.CTkFrame(format_frame, fg_color="transparent") + format_input_frame.pack(fill='x', padx=10, pady=5) + + ctk.CTkEntry( + format_input_frame, + textvariable=self.extrinsic_format_var, + width=100 + ).pack(side='left', padx=5) + + # Confirm button for "Yes" option + self.confirm_button = ctk.CTkButton( + self.simple_mode_frame, + text="Confirm", + command=self.confirm_checkerboard_only, + width=200, + height=40, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ) + self.confirm_button.pack(pady=20, side='bottom') + + # Proceed button for "No" option (initially hidden) + self.proceed_button = ctk.CTkButton( + self.simple_mode_frame, + text="Proceed with Video Preparation", + command=self.proceed_prepare_video, + width=200, + height=40, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") + ) + + # Advanced mode frame (initially hidden) + self.advanced_mode_frame = ctk.CTkFrame(content_frame) + + # Advanced mode content + advanced_title = ctk.CTkLabel( + self.advanced_mode_frame, + text="Advanced Video Editing", + font=("Helvetica", 22, "bold"), + text_color="black" + ) + advanced_title.pack(pady=(20, 5)) + + # Divider below title + title_divider = ctk.CTkFrame(self.advanced_mode_frame, height=2, fg_color="gray75") + title_divider.pack(fill='x', pady=10, padx=40) + + # Description with improved visibility + description_frame = ctk.CTkFrame(self.advanced_mode_frame, fg_color=("gray95", "gray20")) + 
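# Advanced mode only hosts the launch/done controls; the editing itself happens in + # the external blur.py script started by launch_external_editor() below. +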
description_frame.pack(fill='x', padx=30, pady=15) + + ctk.CTkLabel( + description_frame, + text="Use this mode to run the external blur.py editor for advanced video processing.", + wraplength=600, + font=("Helvetica", 14), + text_color="black" + ).pack(pady=15, padx=20) + + # Button frame + button_frame = ctk.CTkFrame(self.advanced_mode_frame, fg_color="transparent") + button_frame.pack(pady=20) + + # Launch button for blur.py + self.launch_editor_btn = ctk.CTkButton( + button_frame, + text="Launch Video Editor", + command=self.launch_external_editor, + width=200, + height=45, + font=("Helvetica", 14), + fg_color=("#4CAF50", "#2E7D32") # Green color + ) + self.launch_editor_btn.pack(side='left', padx=10) + + # Done editing button + self.done_editing_btn = ctk.CTkButton( + button_frame, + text="Done Editing", + command=self.complete_advanced_editing, + width=200, + height=45, + font=("Helvetica", 14), + state="disabled", # Initially disabled + fg_color=("#FF9500", "#FF7000") # Orange color + ) + self.done_editing_btn.pack(side='left', padx=10) + + # Status label for feedback (common to both modes) + self.status_frame = ctk.CTkFrame(content_frame, fg_color=("gray90", "gray25"), corner_radius=8) + self.status_frame.pack(fill='x', pady=15, padx=10) + + self.status_label = ctk.CTkLabel( + self.status_frame, + text="Select an editing mode to begin", + font=("Helvetica", 13), + text_color=("gray30", "gray80") + ) + self.status_label.pack(pady=10, padx=10) + + # Show/hide elements based on initial editing mode + self.set_editing_mode('simple') + + def set_editing_mode(self, mode): + """Switch between simple and advanced editing modes""" + self.editing_mode_var.set(mode) + + # Update button colors + if mode == 'simple': + self.simple_mode_btn.configure(text_color='white', fg_color=("#3a7ebf", "#1f538d")) + self.advanced_mode_btn.configure(text_color="gray20") + self.simple_mode_frame.pack(fill='x', pady=10) + self.advanced_mode_frame.pack_forget() + self.update_status("Simple mode: Configure video extraction settings", "blue") + else: # advanced + self.simple_mode_btn.configure(text_color="gray20") + self.advanced_mode_btn.configure(text_color='white', fg_color=("#3a7ebf", "#1f538d")) + self.simple_mode_frame.pack_forget() + self.advanced_mode_frame.pack(fill='x', pady=10) + self.update_status("Advanced mode: Use external editor for video processing", "blue") + + # Apply current checkerboard setting in simple mode + if mode == 'simple': + self.on_only_checkerboard_change() + + def on_only_checkerboard_change(self): + """Handle changes to the checkerboard-only option""" + if self.only_checkerboard_var.get() == 'no': + self.time_extraction_frame.pack(fill='x', pady=10, after=self.confirm_button) + self.confirm_button.pack_forget() + self.proceed_button.pack(pady=20) + else: + self.time_extraction_frame.pack_forget() + self.proceed_button.pack_forget() + self.confirm_button.pack(pady=20) + + def confirm_checkerboard_only(self): + """Handle confirmation when 'Yes' is selected for checkerboard-only option""" + # Keep existing extension for both intrinsics and extrinsics + self.change_intrinsics_extension = False + + # Update status + self.update_status("Prepare video step completed. 
Checkerboard videos will be used directly.", "green") + + # Update progress bar (use existing method) + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 # Default value for prepare_video step + + self.app.update_progress_bar(progress_value) + + # Update tab indicator + self.app.update_tab_indicator('prepare_video', True) + + # Disable inputs + for widget in self.frame.winfo_descendants(): + if isinstance(widget, (ctk.CTkEntry, ctk.CTkRadioButton)): + widget.configure(state="disabled") + + self.confirm_button.configure(state="disabled") + + # Show success message + messagebox.showinfo("Complete", "Prepare video step completed. You can proceed to the next tab.") + + # Automatically switch to the next tab if available + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('prepare_video') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + def proceed_prepare_video(self): + """Handle video preparation when 'No' is selected""" + try: + time_interval = float(self.time_interval_var.get()) + if time_interval <= 0: + raise ValueError("Time interval must be a positive number") + + # Set flag to change intrinsics extension to png + self.change_intrinsics_extension = True + + # Update status + self.update_status("Processing videos... Please wait.", "blue") + + # Disable the Proceed button to prevent multiple clicks + self.proceed_button.configure(state='disabled') + + # Start extraction in a daemon thread so a long job cannot block application exit + extraction_thread = threading.Thread(target=self.extract_frames, args=(time_interval,), daemon=True) + extraction_thread.start() + + except ValueError as e: + messagebox.showerror("Error", f"Invalid time interval: {str(e)}") + self.update_status("Error: Please enter a valid time interval.", "red") + self.proceed_button.configure(state='normal') + + def launch_external_editor(self): + """Launch the external blur.py editor""" + try: + # Path to the blur.py script (one level above the tabs package) + script_path = Path(__file__).parent.parent / "blur.py" + + # Check if the file exists + if not script_path.exists(): + self.update_status("Error: blur.py not found in the application directory", "red") + return + + # Update status + self.update_status("Launching external video editor...", "blue") + + # Disable the launch button + self.launch_editor_btn.configure(state="disabled") + + # Launch the script in a separate process (convert Path to str for Windows) + subprocess.Popen(["python", str(script_path)]) + + # Enable the done button + self.done_editing_btn.configure(state="normal") + + # Update status + self.update_status("External editor launched. 
Click 'Done Editing' when finished.", "orange") + + except Exception as e: + self.update_status(f"Error launching editor: {str(e)}", "red") + self.launch_editor_btn.configure(state="normal") + + def complete_advanced_editing(self): + """Complete the advanced editing process""" + # Update status + self.update_status("Advanced editing completed successfully.", "green") + + # Update progress + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 # Default value for prepare_video step + + self.app.update_progress_bar(progress_value) + + # Update tab indicator + self.app.update_tab_indicator('prepare_video', True) + + # Disable buttons + self.done_editing_btn.configure(state="disabled") + self.launch_editor_btn.configure(state="disabled") + + # Show success message + messagebox.showinfo("Complete", "Advanced video editing completed. You can proceed to the next tab.") + + # Automatically switch to the next tab if available + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('prepare_video') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + def extract_frames(self, time_interval): + """Extract frames from videos at given time intervals""" + # Determine the base path based on app mode + base_path = Path(self.app.participant_name) / 'calibration' / 'intrinsics' + + if not base_path.exists(): + self.update_status(f"Error: Directory '{base_path}' does not exist.", "red") + self.proceed_button.configure(state='normal') + return + + video_extensions = ('.mp4', '.avi', '.mov', '.mpeg') + extracted_images = [] + + # Collect all video files + video_files = [file for file in base_path.rglob('*') if file.suffix.lower() in video_extensions] + + total_videos = len(video_files) + + if not video_files: + self.update_status("Warning: No video files found.", "orange") + + # Still mark as complete using the app's method + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 + + self.app.update_progress_bar(progress_value) + self.app.update_tab_indicator('prepare_video', True) + + self.proceed_button.configure(state='normal') + return + + try: + self.update_status(f"Processing {total_videos} videos...", "blue") + + for idx, video_path in enumerate(video_files): + video_dir = video_path.parent + cap = cv2.VideoCapture(str(video_path)) # OpenCV expects a string path + + if not cap.isOpened(): + self.update_status(f"Error: Failed to open video: {video_path}", "red") + continue + + fps = cap.get(cv2.CAP_PROP_FPS) + if fps <= 0: + fps = 30 # Default to 30 fps if detection fails + + # Guard against a zero interval for very short time steps or low frame rates + interval_frames = max(1, int(fps * time_interval)) + + frame_count = 0 + while True: + ret, frame = cap.read() + if not ret: + break + + if frame_count % interval_frames == 0: + image_name = f"{video_path.stem}_frame{frame_count}.png" + save_path = video_dir / image_name + cv2.imwrite(str(save_path), frame) + extracted_images.append(save_path) + + frame_count += 1 + + cap.release() + + # Update progress for this video (15-30% range for extraction) + progress = 15 + (15 * (idx + 1) / total_videos) + self.app.update_progress_bar(int(progress)) + + # Update status + self.update_status(f"Processed {idx+1}/{total_videos} videos...", "blue") + + # If images were extracted, show the review interface + if extracted_images: + 
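# Group the extracted frames by camera folder and open the review window so + # unusable frames can be discarded before calibration. +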
self.sort_images_by_camera(extracted_images) + else: + self.update_status("Process completed. No frames were extracted.", "green") + + # Complete the prepare_video step + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 + + self.app.update_progress_bar(progress_value) + self.app.update_tab_indicator('prepare_video', True) + + except Exception as e: + self.update_status(f"Error during extraction: {str(e)}", "red") + self.proceed_button.configure(state='normal') + + def update_status(self, message, color="black"): + """Update the status label with a message and color""" + # Schedule UI update on the main thread + self.frame.after(0, lambda: self.status_label.configure(text=message, text_color=color)) + + def sort_images_by_camera(self, image_paths): + """Sort extracted images by camera directory""" + images_by_camera = {} + + for img_path in image_paths: + camera_dir = Path(img_path).parent.name + if camera_dir not in images_by_camera: + images_by_camera[camera_dir] = [] + images_by_camera[camera_dir].append(img_path) + + self.camera_image_list = list(images_by_camera.items()) + self.current_camera_index = 0 + + if self.camera_image_list: + camera_dir, imgs = self.camera_image_list[self.current_camera_index] + self.review_camera_images(camera_dir, imgs) + else: + self.update_status("No images to review.", "orange") + + # Complete the prepare_video step + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 + + self.app.update_progress_bar(progress_value) + self.app.update_tab_indicator('prepare_video', True) + + self.proceed_button.configure(state='normal') + + def review_camera_images(self, camera_dir, image_paths): + """Create a review window for a specific camera's images""" + # Create a new toplevel window for reviewing images + review_window = ctk.CTkToplevel(self.frame) + review_window.title(f"Review Images - {camera_dir}") + review_window.geometry("900x700") + review_window.grab_set() # Make modal + + # Header frame + header_frame = ctk.CTkFrame(review_window) + header_frame.pack(fill="x", padx=20, pady=10) + + ctk.CTkLabel( + header_frame, + text=f"Review Images for {camera_dir}", + font=("Helvetica", 18, "bold") + ).pack(side="left", padx=10) + + ctk.CTkLabel( + header_frame, + text=f"Camera {self.current_camera_index + 1} of {len(self.camera_image_list)}", + font=("Helvetica", 14) + ).pack(side="right", padx=10) + + # Create scrollable frame for images + scroll_frame = ctk.CTkScrollableFrame(review_window) + scroll_frame.pack(fill="both", expand=True, padx=20, pady=10) + + # List to hold image vars for this camera + self.image_vars = [] + + # Organize images in a grid (4 columns) + num_columns = 4 + row, col = 0, 0 + + for idx, img_path in enumerate(image_paths): + # Create frame for this image + img_frame = ctk.CTkFrame(scroll_frame) + img_frame.grid(row=row, column=col, padx=10, pady=10, sticky="nsew") + + try: + # Load and display the image + img = Image.open(img_path) + img.thumbnail((200, 150)) # Resize for thumbnail display + + ctk_img = CTkImage(light_image=img, dark_image=img, size=(200, 150)) + + img_label = ctk.CTkLabel(img_frame, image=ctk_img, text="") + img_label.image = ctk_img # Keep a reference + img_label.pack(padx=5, pady=5) + + # Add filename below image + ctk.CTkLabel( + img_frame, + text=img_path.name, + 
font=("Helvetica", 10), + wraplength=200 + ).pack(pady=(0, 5)) + + # Checkbox to keep this image + var = ctk.BooleanVar(value=True) # Default to keeping images + check = ctk.CTkCheckBox(img_frame, text="Keep", variable=var) + check.pack(pady=5) + + # Store reference to this image + self.image_vars.append({'var': var, 'path': img_path}) + + # Update grid position + col += 1 + if col >= num_columns: + col = 0 + row += 1 + + except Exception as e: + print(f"Error loading image {img_path}: {e}") + + # Button frame + button_frame = ctk.CTkFrame(review_window) + button_frame.pack(fill="x", padx=20, pady=10) + + # Function to process this camera and move to next + def process_camera(): + # Handle image deletion + to_delete = [img['path'] for img in self.image_vars if not img['var'].get()] + + for img_path in to_delete: + try: + img_path.unlink() # Delete the image file + print(f"Deleted {img_path}") + except Exception as e: + print(f"Failed to delete {img_path}: {e}") + + # Close the review window + review_window.destroy() + + # Move to next camera + self.current_camera_index += 1 + if self.current_camera_index < len(self.camera_image_list): + next_camera, next_images = self.camera_image_list[self.current_camera_index] + self.review_camera_images(next_camera, next_images) + else: + # All cameras processed + self.update_status("Image review completed. All cameras processed.", "green") + + # Complete the prepare_video step + if hasattr(self.app, 'progress_steps') and 'prepare_video' in self.app.progress_steps: + progress_value = self.app.progress_steps['prepare_video'] + else: + progress_value = 30 + + self.app.update_progress_bar(progress_value) + self.app.update_tab_indicator('prepare_video', True) + + # Show final confirmation + messagebox.showinfo( + "Processing Complete", + "All camera images have been processed successfully." 
+ ) + + # Automatically move to next tab + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('prepare_video') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + # Add buttons + ctk.CTkButton( + button_frame, + text="Save and Continue", + command=process_camera, + width=150, + height=35, + font=("Helvetica", 14) + ).pack(side="right", padx=10) + + # Select/Deselect All buttons + def select_all(): + for item in self.image_vars: + item['var'].set(True) + + def deselect_all(): + for item in self.image_vars: + item['var'].set(False) + + ctk.CTkButton( + button_frame, + text="Select All", + command=select_all, + width=100, + height=35 + ).pack(side="left", padx=10) + + ctk.CTkButton( + button_frame, + text="Deselect All", + command=deselect_all, + width=100, + height=35 + ).pack(side="left", padx=10) \ No newline at end of file diff --git a/GUI/tabs/synchronization_tab.py b/GUI/tabs/synchronization_tab.py new file mode 100644 index 00000000..b8c2a573 --- /dev/null +++ b/GUI/tabs/synchronization_tab.py @@ -0,0 +1,733 @@ +import customtkinter as ctk +from tkinter import messagebox + +class SynchronizationTab: + def __init__(self, parent, app): + """Initialize the Synchronization tab""" + self.parent = parent + self.app = app + + # Create the main frame + self.frame = ctk.CTkFrame(parent) + + # Initialize variables + self.sync_videos_var = ctk.StringVar(value='no') # Default to 'no' (need synchronization) + self.use_gui_var = ctk.StringVar(value='yes') # Default to 'yes' (use GUI) + self.keypoints_var = ctk.StringVar(value='all') + self.approx_time_var = ctk.StringVar(value='auto') + self.time_range_var = ctk.StringVar(value='2.0') + self.likelihood_threshold_var = ctk.StringVar(value='0.4') + self.filter_cutoff_var = ctk.StringVar(value='6') + self.filter_order_var = ctk.StringVar(value='4') + self.approx_time_entries = [] + self.approx_times = [] + + # Build the UI + self.build_ui() + + def get_title(self): + """Return the tab title""" + return "Synchronization" + + def get_settings(self): + """Get the synchronization settings""" + settings = { + 'synchronization': {} + } + + # If skipping synchronization, disable GUI + if self.sync_videos_var.get() == 'yes': + settings['synchronization']['synchronization_gui'] = False + return settings + + # Set GUI flag based on selection + settings['synchronization']['synchronization_gui'] = self.use_gui_var.get() == 'yes' + + # If using GUI, we don't need the other settings as they can be set in the GUI + if self.use_gui_var.get() == 'yes': + return settings + + # Otherwise, add all manual synchronization settings + # Get keypoints setting (all or specific keypoint) + keypoints = self.keypoints_var.get() + if keypoints == 'all': + keypoints_setting = 'all' + else: + keypoints_setting = [keypoints] + + # Get approximate times + if self.approx_time_var.get() == 'yes' and self.approx_time_entries: + try: + approx_times = [float(entry.get()) for entry in self.approx_time_entries] + except (ValueError, TypeError): + # Default to auto if conversion fails + approx_times = 'auto' + else: + approx_times = 'auto' + + # Get other numeric settings with validation + try: + time_range = float(self.time_range_var.get()) + except ValueError: + time_range = 2.0 + + try: + likelihood_threshold = float(self.likelihood_threshold_var.get()) + except ValueError: + likelihood_threshold = 0.4 + + try: + filter_cutoff = int(self.filter_cutoff_var.get()) + except 
ValueError: + filter_cutoff = 6 + + try: + filter_order = int(self.filter_order_var.get()) + except ValueError: + filter_order = 4 + + # Add all manual settings + settings['synchronization'].update({ + 'keypoints_to_consider': keypoints_setting, + 'approx_time_maxspeed': approx_times, + 'time_range_around_maxspeed': time_range, + 'likelihood_threshold': likelihood_threshold, + 'filter_cutoff': filter_cutoff, + 'filter_order': filter_order + }) + + return settings + + def build_ui(self): + """Build the tab user interface""" + # Create scrollable frame for content + content_frame = ctk.CTkScrollableFrame(self.frame) + content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Title header + ctk.CTkLabel( + content_frame, + text="Synchronization", + font=("Helvetica", 24, "bold") + ).pack(pady=(0, 20)) + + # Information text + ctk.CTkLabel( + content_frame, + text="Configure video synchronization settings. Videos must be synchronized for accurate 3D reconstruction.", + font=("Helvetica", 14), + wraplength=800, + justify="left" + ).pack(anchor='w', pady=(0, 20)) + + # First decision: Skip synchronization or not + skip_frame = ctk.CTkFrame(content_frame) + skip_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + skip_frame, + text="Are your videos already synchronized?", + font=("Helvetica", 14, "bold") + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + skip_frame, + text="Yes (Skip synchronization)", + variable=self.sync_videos_var, + value='yes', + command=self.update_ui_based_on_selections + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + skip_frame, + text="No (Need synchronization)", + variable=self.sync_videos_var, + value='no', + command=self.update_ui_based_on_selections + ).pack(side='left', padx=10) + + # Second decision: Use GUI or not (initially hidden) + self.gui_frame = ctk.CTkFrame(content_frame) + self.gui_frame.pack(fill='x', pady=10) + + gui_title_frame = ctk.CTkFrame(self.gui_frame, fg_color="transparent") + gui_title_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + gui_title_frame, + text="Would you like to use the GUI for synchronization?", + font=("Helvetica", 14, "bold") + ).pack(side='left', padx=10) + + # Add "Recommended" tag with a distinct visual + recommended_label = ctk.CTkLabel( + gui_title_frame, + text="✓ Recommended", + font=("Helvetica", 12), + text_color="#4CAF50" + ) + recommended_label.pack(side='left', padx=10) + + # Radio buttons for GUI option + gui_radio_frame = ctk.CTkFrame(self.gui_frame, fg_color="transparent") + gui_radio_frame.pack(fill='x', pady=5, padx=10) + + ctk.CTkRadioButton( + gui_radio_frame, + text="Yes (Interactive synchronization interface)", + variable=self.use_gui_var, + value='yes', + command=self.update_ui_based_on_selections + ).pack(side='left', padx=20) + + ctk.CTkRadioButton( + gui_radio_frame, + text="No (Manual parameter configuration)", + variable=self.use_gui_var, + value='no', + command=self.update_ui_based_on_selections + ).pack(side='left', padx=20) + + # GUI info text + gui_info_frame = ctk.CTkFrame(self.gui_frame, fg_color=("gray95", "gray25")) + gui_info_frame.pack(fill='x', padx=30, pady=(0, 10)) + + ctk.CTkLabel( + gui_info_frame, + text="The GUI option provides an interactive interface to visualize and manually adjust synchronization. 
" + "It's the recommended approach for achieving the best synchronization results.", + wraplength=700, + justify="left", + font=("Helvetica", 12), + text_color=("gray30", "gray80") + ).pack(pady=10, padx=10) + + # Hide GUI frame initially - will be shown based on selections + self.gui_frame.pack_forget() + + # Manual synchronization settings frame (initially hidden) + self.manual_sync_frame = ctk.CTkFrame(content_frame) + self.manual_sync_frame.pack(fill='x', pady=10) + + # Keypoints to consider + keypoints_frame = ctk.CTkFrame(self.manual_sync_frame) + keypoints_frame.pack(fill='x', pady=10, padx=10) + + ctk.CTkLabel( + keypoints_frame, + text="Select keypoints to consider for synchronization:", + font=("Helvetica", 14) + ).pack(side='left', padx=10) + + keypoints_options = ['all', 'CHip', 'RHip', 'RKnee', 'RAnkle', 'RBigToe', 'RSmallToe', 'RHeel', + 'LHip', 'LKnee', 'LAnkle', 'LBigToe', 'LSmallToe', 'LHeel', 'Neck', 'Head', + 'Nose', 'RShoulder', 'RElbow', 'RWrist', 'LShoulder', 'LElbow', 'LWrist'] + + self.keypoints_menu = ctk.CTkOptionMenu( + keypoints_frame, + variable=self.keypoints_var, + values=keypoints_options, + width=150 + ) + self.keypoints_menu.pack(side='left', padx=10) + + # Approximate time of movement + approx_time_frame = ctk.CTkFrame(self.manual_sync_frame) + approx_time_frame.pack(fill='x', pady=10, padx=10) + + ctk.CTkLabel( + approx_time_frame, + text="Do you want to specify approximate times of movement?", + font=("Helvetica", 14) + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + approx_time_frame, + text="Yes (Recommended)", + variable=self.approx_time_var, + value='yes', + command=self.on_approx_time_change + ).pack(side='left', padx=10) + + ctk.CTkRadioButton( + approx_time_frame, + text="Auto (Uses whole video)", + variable=self.approx_time_var, + value='auto', + command=self.on_approx_time_change + ).pack(side='left', padx=10) + + # Frame for camera-specific times (initially hidden) + self.camera_times_frame = ctk.CTkFrame(self.manual_sync_frame) + self.camera_times_frame.pack(fill='x', pady=10, padx=10) + self.camera_times_frame.pack_forget() # Hide initially + + # Separator + ctk.CTkFrame(self.manual_sync_frame, height=1, fg_color="gray").pack( + fill='x', pady=10, padx=20) + + # Parameters frame + params_frame = ctk.CTkFrame(self.manual_sync_frame) + params_frame.pack(fill='x', pady=10, padx=10) + + # Time range around max speed + time_range_frame = ctk.CTkFrame(params_frame) + time_range_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + time_range_frame, + text="Time interval around max speed (seconds):", + font=("Helvetica", 14), + width=300 + ).pack(side='left', padx=10) + + ctk.CTkEntry( + time_range_frame, + textvariable=self.time_range_var, + width=100 + ).pack(side='left', padx=10) + + # Likelihood threshold + likelihood_frame = ctk.CTkFrame(params_frame) + likelihood_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + likelihood_frame, + text="Likelihood Threshold:", + font=("Helvetica", 14), + width=300 + ).pack(side='left', padx=10) + + ctk.CTkEntry( + likelihood_frame, + textvariable=self.likelihood_threshold_var, + width=100 + ).pack(side='left', padx=10) + + # Filter settings + filter_frame = ctk.CTkFrame(params_frame) + filter_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + filter_frame, + text="Filter Cutoff (Hz):", + font=("Helvetica", 14), + width=300 + ).pack(side='left', padx=10) + + ctk.CTkEntry( + filter_frame, + textvariable=self.filter_cutoff_var, + width=100 + ).pack(side='left', padx=10) + + # Filter order + filter_order_frame = 
ctk.CTkFrame(params_frame) + filter_order_frame.pack(fill='x', pady=5) + + ctk.CTkLabel( + filter_order_frame, + text="Filter Order:", + font=("Helvetica", 14), + width=300 + ).pack(side='left', padx=10) + + ctk.CTkEntry( + filter_order_frame, + textvariable=self.filter_order_var, + width=100 + ).pack(side='left', padx=10) + + # Hide manual sync frame initially + self.manual_sync_frame.pack_forget() + + # Add empty frame to push the buttons to the bottom + spacer_frame = ctk.CTkFrame(content_frame, fg_color="transparent") + spacer_frame.pack(fill='both', expand=True) + + # Buttons for saving settings + self.skip_button_frame = ctk.CTkFrame(content_frame, fg_color="transparent") + self.skip_button_frame.pack(side='bottom', pady=20) + + self.confirm_skip_button = ctk.CTkButton( + self.skip_button_frame, + text="Confirm Skip Synchronization", + command=self.confirm_skip_synchronization, + font=("Helvetica", 14), + height=40, + width=250, + fg_color="#4CAF50", + hover_color="#388E3C" + ) + self.confirm_skip_button.pack(side='bottom') + + self.confirm_gui_button = ctk.CTkButton( + self.skip_button_frame, + text="Confirm GUI Synchronization", + command=self.confirm_gui_synchronization, + font=("Helvetica", 14), + height=40, + width=250, + fg_color="#4CAF50", + hover_color="#388E3C" + ) + self.confirm_gui_button.pack(side='bottom') + + self.save_manual_button = ctk.CTkButton( + self.skip_button_frame, + text="Save Manual Synchronization Settings", + command=self.save_manual_settings, + font=("Helvetica", 14), + height=40, + width=250, + fg_color="#4CAF50", + hover_color="#388E3C" + ) + self.save_manual_button.pack(side='bottom') + + # Status label for feedback + self.status_label = ctk.CTkLabel( + content_frame, + text="", + font=("Helvetica", 12), + text_color="gray" + ) + self.status_label.pack(pady=10) + + # Initialize UI based on current settings + self.update_ui_based_on_selections() + + def update_ui_based_on_selections(self): + """Update which UI elements are shown based on current selections""" + # Clear all buttons first + for widget in self.skip_button_frame.winfo_children(): + widget.pack_forget() + + # If skipping synchronization (videos already synced) + if self.sync_videos_var.get() == 'yes': + # Hide GUI and manual frames + self.gui_frame.pack_forget() + self.manual_sync_frame.pack_forget() + + # Show only the skip confirmation button + self.confirm_skip_button.pack(pady=10) + + # Update status + self.status_label.configure( + text="Videos will be treated as already synchronized. 
No synchronization will be performed.", + text_color="blue" + ) + + # If need synchronization (videos not synced) + else: + # Show GUI choice frame + self.gui_frame.pack(fill='x', pady=10) + + # If using GUI + if self.use_gui_var.get() == 'yes': + # Hide manual sync frame + self.manual_sync_frame.pack_forget() + + # Show GUI confirmation button + self.confirm_gui_button.pack(pady=10) + + # Update status + self.status_label.configure( + text="You will use the interactive GUI for synchronization during processing.", + text_color="blue" + ) + + # If not using GUI + else: + # Show manual sync frame + self.manual_sync_frame.pack(fill='x', pady=10) + + # Update camera times frame if needed + if self.approx_time_var.get() == 'yes': + self.setup_camera_times_input() + self.camera_times_frame.pack(fill='x', pady=10, padx=10) + else: + self.camera_times_frame.pack_forget() + + # Show save manual settings button + self.save_manual_button.pack(pady=10) + + # Update status + self.status_label.configure( + text="Configure manual synchronization parameters above.", + text_color="blue" + ) + + def on_approx_time_change(self): + """Handle changes to the approximate time option""" + # Update UI + self.update_ui_based_on_selections() + + def setup_camera_times_input(self): + """Create input fields for camera-specific times""" + # Clear existing widgets + for widget in self.camera_times_frame.winfo_children(): + widget.destroy() + + # Instructions + ctk.CTkLabel( + self.camera_times_frame, + text="Enter approximate times (in seconds) of sync movement for each camera:", + font=("Helvetica", 14) + ).pack(anchor='w', padx=10, pady=(10, 5)) + + # Create scrollable frame for camera inputs (if many cameras) + times_scroll_frame = ctk.CTkScrollableFrame( + self.camera_times_frame, + width=700, + height=200 + ) + times_scroll_frame.pack(fill='x', pady=5) + + # Get number of cameras + try: + num_cameras = int(self.app.tabs['calibration'].num_cameras_var.get()) + except (AttributeError, ValueError): + # Default to 2 if can't get from calibration tab + num_cameras = 2 + + # Reset time entries list + self.approx_time_entries = [] + + # Create entry for each camera + for cam in range(1, num_cameras + 1): + # Frame for this camera + cam_frame = ctk.CTkFrame(times_scroll_frame) + cam_frame.pack(fill='x', pady=2) + + # Label + ctk.CTkLabel( + cam_frame, + text=f"Camera {cam}:", + width=100 + ).pack(side='left', padx=10) + + # Entry field + time_var = ctk.StringVar(value="0.0") + entry = ctk.CTkEntry( + cam_frame, + textvariable=time_var, + width=100 + ) + entry.pack(side='left', padx=10) + + # Add to entry list + self.approx_time_entries.append(entry) + + # Help text + ctk.CTkLabel( + self.camera_times_frame, + text="Tip: Enter the time (in seconds) when a clear movement is visible in each camera.", + font=("Helvetica", 12), + text_color="gray" + ).pack(anchor='w', padx=10, pady=5) + + def confirm_skip_synchronization(self): + """Handle confirmation when skipping synchronization""" + # Update status + self.status_label.configure( + text="Synchronization will be skipped. 
Videos will be treated as already synchronized.", + text_color="green" + ) + + # Update progress + if hasattr(self.app, 'progress_steps') and 'synchronization' in self.app.progress_steps: + progress_value = self.app.progress_steps['synchronization'] + else: + progress_value = 70 # Default value + + self.app.update_progress_bar(progress_value) + + # Update tab indicator + self.app.update_tab_indicator('synchronization', True) + + # Disable skip button + self.confirm_skip_button.configure(state="disabled") + + # Show success message + messagebox.showinfo( + "Synchronization Skipped", + "Synchronization will be skipped. Videos will be treated as already synchronized." + ) + + # Automatically move to next tab if available + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('synchronization') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + def confirm_gui_synchronization(self): + """Handle confirmation when using GUI for synchronization""" + # Update status + self.status_label.configure( + text="GUI synchronization mode enabled. You will use the interactive interface during processing.", + text_color="green" + ) + + # Update progress + if hasattr(self.app, 'progress_steps') and 'synchronization' in self.app.progress_steps: + progress_value = self.app.progress_steps['synchronization'] + else: + progress_value = 70 # Default value + + self.app.update_progress_bar(progress_value) + + # Update tab indicator + self.app.update_tab_indicator('synchronization', True) + + # Disable GUI button + self.confirm_gui_button.configure(state="disabled") + + # Show success message + messagebox.showinfo( + "GUI Synchronization Enabled", + "Interactive GUI synchronization will be used during processing. This is the recommended approach." + ) + + # Automatically move to next tab if available + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('synchronization') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + def save_manual_settings(self): + """Save manual synchronization settings""" + try: + # Validate inputs + if self.approx_time_var.get() == 'yes': + # Validate time entries + for i, entry in enumerate(self.approx_time_entries, 1): + try: + time_value = float(entry.get()) + if time_value < 0: + messagebox.showerror( + "Invalid Input", + f"Camera {i} time must be a positive number." + ) + return + except ValueError: + messagebox.showerror( + "Invalid Input", + f"Camera {i} time must be a number." + ) + return + + # Get other float values + try: + time_range = float(self.time_range_var.get()) + if time_range <= 0: + messagebox.showerror( + "Invalid Input", + "Time range must be a positive number." + ) + return + except ValueError: + messagebox.showerror( + "Invalid Input", + "Time range must be a number." + ) + return + + try: + likelihood = float(self.likelihood_threshold_var.get()) + if not 0 <= likelihood <= 1: + messagebox.showerror( + "Invalid Input", + "Likelihood threshold must be between 0 and 1." + ) + return + except ValueError: + messagebox.showerror( + "Invalid Input", + "Likelihood threshold must be a number." + ) + return + + # Get integer values + try: + filter_cutoff = int(self.filter_cutoff_var.get()) + if filter_cutoff <= 0: + messagebox.showerror( + "Invalid Input", + "Filter cutoff must be a positive integer." 
+ ) + return + except ValueError: + messagebox.showerror( + "Invalid Input", + "Filter cutoff must be an integer." + ) + return + + try: + filter_order = int(self.filter_order_var.get()) + if filter_order <= 0: + messagebox.showerror( + "Invalid Input", + "Filter order must be a positive integer." + ) + return + except ValueError: + messagebox.showerror( + "Invalid Input", + "Filter order must be an integer." + ) + return + + # Update status + self.status_label.configure( + text="Manual synchronization settings saved successfully. GUI is disabled.", + text_color="green" + ) + + # Update progress + if hasattr(self.app, 'progress_steps') and 'synchronization' in self.app.progress_steps: + progress_value = self.app.progress_steps['synchronization'] + else: + progress_value = 70 # Default value + + self.app.update_progress_bar(progress_value) + + # Update tab indicator + self.app.update_tab_indicator('synchronization', True) + + # Disable inputs after saving + self.disable_all_widgets(self.manual_sync_frame) + self.save_manual_button.configure(state="disabled") + + # Show success message + messagebox.showinfo( + "Settings Saved", + "Manual synchronization settings have been saved successfully. GUI mode is disabled." + ) + + # Automatically move to next tab if available + if hasattr(self.app, 'show_tab'): + tab_order = list(self.app.tabs.keys()) + current_idx = tab_order.index('synchronization') + if current_idx + 1 < len(tab_order): + next_tab = tab_order[current_idx + 1] + self.app.show_tab(next_tab) + + except Exception as e: + messagebox.showerror( + "Error", + f"An error occurred while saving settings: {str(e)}" + ) + + def disable_all_widgets(self, parent): + """Recursively disable all input widgets in a parent widget""" + for child in parent.winfo_children(): + if isinstance(child, (ctk.CTkEntry, ctk.CTkRadioButton, ctk.CTkOptionMenu)): + child.configure(state="disabled") + if hasattr(child, 'winfo_children') and callable(child.winfo_children): + self.disable_all_widgets(child) \ No newline at end of file diff --git a/GUI/tabs/tutorial_tab.py b/GUI/tabs/tutorial_tab.py new file mode 100644 index 00000000..1b74532b --- /dev/null +++ b/GUI/tabs/tutorial_tab.py @@ -0,0 +1,555 @@ +from pathlib import Path +import sys +import customtkinter as ctk +from tkinter import messagebox +import subprocess +import threading +import webbrowser + +class TutorialTab: + def __init__(self, parent, app): + self.parent = parent + self.app = app + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # Initialize variables + self.marker_file = Path(__file__).parent.parent / "tutorial_completed" + + # Video links + self.video_links = { + '2d': "https://drive.google.com/file/d/1Lglv-1tdO4FFKUl2LA7dKhYvPcsWbLmJ/view?usp=drive_link", + '3d': "https://drive.google.com/file/d/1fNQDtc0f1jYOrgqkQcVHQ3XPfbdTIqTr/view?usp=drive_link" + } + + # Dependency check results + self.dependencies = { + "anaconda": {"installed": False, "name": "Anaconda"}, + "path": {"installed": False, "name": "Anaconda Path"}, + "pose2sim": {"installed": False, "name": "Pose2Sim"}, + "opensim": {"installed": False, "name": "OpenSim"}, + "pytorch": {"installed": False, "name": "PyTorch (optional)"}, + "onnxruntime-gpu": {"installed": False, "name": "ONNX GPU (optional)"}, + } + + # Build the UI + self.build_ui() + + # # Check for tutorial marker file + # self.check_tutorial_status() + + # Start dependency check in background thread + threading.Thread(target=self.check_dependencies, daemon=True).start() + + def get_title(self): + 
"""Return the tab title""" + return "Tutorial" + + def get_settings(self): + """Get the tutorial settings""" + return {} # This tab doesn't add settings to the config file + + def build_ui(self): + """Build the tutorial UI""" + # Create a scrollable content frame + self.content_frame = ctk.CTkScrollableFrame(self.frame) + self.content_frame.pack(fill='both', expand=True, padx=0, pady=0) + + # Title + self.title_label = ctk.CTkLabel( + self.content_frame, + text="Welcome to Pose2Sim", + font=("Helvetica", 24, "bold") + ) + self.title_label.pack(pady=(0, 20)) + + # Video information section + video_info_frame = ctk.CTkFrame(self.content_frame, fg_color=("gray95", "gray20")) + video_info_frame.pack(fill='x', pady=10, padx=0) + + # ctk.CTkLabel( + # video_info_frame, + # text="Due to size, tutorial videos are hosted on Google Drive", + # font=("Helvetica", 16, "bold"), + # wraplength=600 + # ).pack(pady=(10, 5)) + + # Get the analysis mode + analysis_mode = getattr(self.app, 'analysis_mode', '3d') + + video_buttons_frame = ctk.CTkFrame(video_info_frame, fg_color="transparent") + video_buttons_frame.pack(pady=10) + + # Button for current mode video + current_mode_text = "Watch 2D Tutorial Video" if analysis_mode == '2d' else "Watch 3D Tutorial Video" + ctk.CTkButton( + video_buttons_frame, + text=current_mode_text, + command=lambda: self.open_video_link(analysis_mode), + font=("Helvetica", 14, "bold"), + width=250, + height=40 + ).pack(padx=10, pady=5) + + # Button for other mode video + other_mode = '3d' if analysis_mode == '2d' else '2d' + other_mode_text = "Watch 3D Tutorial Video" if analysis_mode == '2d' else "Watch 2D Tutorial Video" + ctk.CTkButton( + video_buttons_frame, + text=other_mode_text, + command=lambda: self.open_video_link(other_mode), + font=("Helvetica", 12), + width=150, + height=30, + text_color="grey20" + ).pack(padx=10, pady=5) + + # # Tutorial image placeholder + # self.tutorial_img_frame = ctk.CTkFrame(self.content_frame, height=300) + # self.tutorial_img_frame.pack(fill='x', pady=10) + + # # Load a placeholder image or tutorial screenshot if available + # tutorial_img_path = Path(__file__).parent.parent / "assets" / "tutorial_preview.png" + # if tutorial_img_path.exists(): + # try: + # # Load and display image + # img = Image.open(tutorial_img_path) + # img = img.resize((800, 300), Image.LANCZOS) + # self.tutorial_img = ctk.CTkImage(light_image=img, dark_image=img, size=(800, 300)) + + # img_label = ctk.CTkLabel(self.tutorial_img_frame, image=self.tutorial_img, text="") + # img_label.pack(pady=10) + # except Exception as e: + # ctk.CTkLabel( + # self.tutorial_img_frame, + # text="Tutorial Preview Image Not Available", + # font=("Helvetica", 16) + # ).pack(expand=True) + # else: + # ctk.CTkLabel( + # self.tutorial_img_frame, + # text="Tutorial Preview Image Not Available", + # font=("Helvetica", 16) + # ).pack(expand=True) + + # Add beta version message box + self.beta_message_frame = ctk.CTkFrame(self.content_frame, fg_color="white") + self.beta_message_frame.pack(fill='x', pady=10, padx=0) + + self.beta_message = ctk.CTkLabel( + self.beta_message_frame, + text="This GUI is a beta version. 
If you have recommendations, errors, or suggestions please send them to yacine.pose2sim@gmail.com or contact@david-pagnon.com", + font=("Helvetica", 12), + text_color="black", + wraplength=600 + ) + self.beta_message.pack(pady=10, padx=10) + + # # Description text + # self.description_frame = ctk.CTkFrame(self.content_frame) + # self.description_frame.pack(fill='x', pady=10) + + # self.description_text = ctk.CTkTextbox( + # self.description_frame, + # height=100, + # font=("Helvetica", 12) + # ) + # self.description_text.pack(fill='x', padx=10, pady=10) + + # description = ( + # "Welcome to the Pose2Sim tutorial. This guide will help you set up and use Pose2Sim effectively.\n\n" + # "The tutorial videos cover:\n" + # "• Configuration workflow\n" + # "• Data processing\n" + # "• Advanced features\n\n" + # "Click on the video link above to watch the complete tutorial on Google Drive." + # ) + + # self.description_text.insert("1.0", description) + # self.description_text.configure(state="disabled") + + # Dependency check frame + self.dependency_frame = ctk.CTkFrame(self.content_frame) + self.dependency_frame.pack(fill='x', pady=10) + + ctk.CTkLabel( + self.dependency_frame, + text="System Requirements Check", + font=("Helvetica", 16, "bold") + ).pack(pady=(10, 5)) + + # Create a frame for each dependency + self.dependency_items_frame = ctk.CTkFrame(self.dependency_frame) + self.dependency_items_frame.pack(fill='x', padx=10, pady=10) + + # Create indicators for each dependency + for dep_id, dep_info in self.dependencies.items(): + dep_frame = ctk.CTkFrame(self.dependency_items_frame, fg_color="transparent") + dep_frame.pack(fill='x', pady=5, padx=10) + + # Status indicator + status_label = ctk.CTkLabel( + dep_frame, + text="⏳", + font=("Helvetica", 14), + width=30 + ) + status_label.pack(side='left', padx=5) + + # Dependency name + name_label = ctk.CTkLabel( + dep_frame, + text=dep_info["name"], + font=("Helvetica", 14), + width=150, + anchor="w" + ) + name_label.pack(side='left', padx=5) + + # Install button (hidden initially) + install_button = ctk.CTkButton( + dep_frame, + text="Install", + width=80, + command=lambda d=dep_id: self.install_dependency(d) + ) + install_button.pack(side='left', padx=5) + install_button.pack_forget() + + # Store references to update later + dep_info["status_label"] = status_label + dep_info["install_button"] = install_button + + # # Add spacer frame to push buttons to bottom + # spacer = ctk.CTkFrame(self.content_frame, fg_color="transparent") + # spacer.pack(fill='x', pady=100) + + # # Bottom buttons frame + # self.bottom_frame = ctk.CTkFrame(self.content_frame, fg_color="transparent") + # self.bottom_frame.pack(side='bottom', expand=True, fill='x', pady=(10, 10)) + + # # Complete tutorial button + # self.complete_button = ctk.CTkButton( + # self.bottom_frame, + # text="Complete Tutorial", + # command=self.complete_tutorial, + # height=40, + # width=200, + # font=("Helvetica", 14), + # fg_color=("#4CAF50", "#2E7D32") + # ) + # self.complete_button.pack(side='right', padx=10) + + # # Skip tutorial button + # self.skip_button = ctk.CTkButton( + # self.bottom_frame, + # text="Skip Tutorial", + # command=self.skip_tutorial, + # height=40, + # width=200, + # font=("Helvetica", 14), + # fg_color="#FF9500", + # hover_color="#FF7000" + # ) + # self.skip_button.pack(side='right', padx=10) + + def open_video_link(self, mode): + """Open the video link in a web browser""" + if mode in self.video_links: + webbrowser.open(self.video_links[mode]) + + # def 
check_tutorial_status(self): + # """Check if the tutorial has been completed before""" + # if Path(self.marker_file).exists(): + # # Tutorial has been completed before, only show skip button + # self.complete_button.pack_forget() + # else: + # # First time user, show both buttons + # pass + + # def skip_tutorial(self): + # """Skip the tutorial and move to the main app""" + # # Confirm the user wants to skip + # response = messagebox.askyesno( + # "Skip Tutorial", + # "Are you sure you want to skip the tutorial? You can access it again from the Tutorial tab later." + # ) + + # if response: + # # Move to the next tab + # if hasattr(self.app, 'show_tab'): + # tab_order = list(self.app.tabs.keys()) + # current_idx = tab_order.index('tutorial') + # if current_idx + 1 < len(tab_order): + # next_tab = tab_order[current_idx + 1] + # self.app.show_tab(next_tab) + + # def complete_tutorial(self): + # """Mark the tutorial as completed and continue to the app""" + # # Create marker file to indicate tutorial completion + # try: + # with open(self.marker_file, 'w') as f: + # f.write("Tutorial completed") + + # messagebox.showinfo( + # "Tutorial Complete", + # "You have completed the Pose2Sim tutorial. You can access it again at any time from the Tutorial tab." + # ) + + # # Move to the next tab + # if hasattr(self.app, 'show_tab'): + # tab_order = list(self.app.tabs.keys()) + # current_idx = tab_order.index('tutorial') + # if current_idx + 1 < len(tab_order): + # next_tab = tab_order[current_idx + 1] + # self.app.show_tab(next_tab) + + # except Exception as e: + # messagebox.showerror( + # "Error", + # f"Failed to mark tutorial as completed: {str(e)}" + # ) + + def check_dependencies(self): + """Check if required dependencies are installed""" + # Check for Anaconda + self.check_anaconda() + + # Check for anaconda in PATH + self.check_anaconda_path() + + # Check for pose2sim + self.check_package("pose2sim") + + # Check for OpenSim + self.check_package("opensim") + + # Check for PyTorch + self.check_pytorch() + + # Check for ONNX Runtime GPU + self.check_package("onnxruntime-gpu") + + # Update UI with results + self.frame.after(0, self.update_dependency_ui) + + def check_anaconda(self): + """Check if Anaconda is installed""" + try: + # Check for conda executable + if sys.platform == 'win32': + result = subprocess.run(["where", "conda"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + else: + result = subprocess.run(["which", "conda"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + + if result.returncode == 0 and result.stdout.strip(): + self.dependencies["anaconda"]["installed"] = True + else: + self.dependencies["anaconda"]["installed"] = False + except Exception: + self.dependencies["anaconda"]["installed"] = False + + def check_anaconda_path(self): + """Check if Anaconda is in PATH""" + try: + # Try to run conda command + result = subprocess.run(["conda", "--version"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + + if result.returncode == 0: + self.dependencies["path"]["installed"] = True + else: + self.dependencies["path"]["installed"] = False + except Exception: + self.dependencies["path"]["installed"] = False + + def check_package(self, package_name): + """Check if a Python package is installed""" + try: + if package_name == "opensim": + # Special check for OpenSim + cmd = ["conda", "list", "opensim"] + else: + # Check with pip + cmd = ["pip", "show", package_name] + + result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + 
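+            # Map the queried package name onto its key in self.dependencies
+            # (the keys use the same lowercase, hyphenated names, e.g. "onnxruntime-gpu")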
+            dep_key = package_name.lower()
+            if package_name == "opensim":
+                # `conda list` prints a header even when the package is absent,
+                # so require an actual package line rather than any output
+                installed = result.returncode == 0 and any(
+                    line.strip().startswith("opensim") for line in result.stdout.splitlines())
+                self.dependencies[dep_key]["installed"] = installed
+            elif result.returncode == 0 and result.stdout.strip():
+                self.dependencies[dep_key]["installed"] = True
+            else:
+                self.dependencies[dep_key]["installed"] = False
+        except Exception:
+            self.dependencies[package_name.lower()]["installed"] = False
+
+    def check_pytorch(self):
+        """Check if PyTorch with CUDA is installed"""
+        try:
+            # Execute a Python script to check PyTorch and CUDA
+            check_cmd = [
+                sys.executable,
+                "-c",
+                "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.cuda.is_available()}')"
+            ]
+
+            result = subprocess.run(check_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+
+            if result.returncode == 0 and "CUDA: True" in result.stdout:
+                self.dependencies["pytorch"]["installed"] = True
+            else:
+                self.dependencies["pytorch"]["installed"] = False
+        except Exception:
+            self.dependencies["pytorch"]["installed"] = False
+
+    def update_dependency_ui(self):
+        """Update UI with dependency check results"""
+        for dep_id, dep_info in self.dependencies.items():
+            status_label = dep_info["status_label"]
+            install_button = dep_info["install_button"]
+
+            if dep_info["installed"]:
+                status_label.configure(text="✅", text_color="#4CAF50")
+                install_button.pack_forget()
+            else:
+                status_label.configure(text="❌", text_color="#F44336")
+                install_button.pack(side='left', padx=5)
+
+    def install_dependency(self, dependency_id):
+        """Install a missing dependency"""
+        commands = {
+            "anaconda": {
+                "message": "Please download and install Anaconda from:\nhttps://www.anaconda.com/products/distribution",
+                "command": None  # Manual installation required
+            },
+            "path": {
+                "message": "Anaconda is installed but not in PATH. Please add it to your system PATH.",
+                "command": None  # Manual configuration required
+            },
+            "pose2sim": {
+                "message": "Installing Pose2Sim...",
+                "command": ["pip", "install", "pose2sim"]
+            },
+            "opensim": {
+                "message": "Installing OpenSim...",
+                "command": ["conda", "install", "-c", "opensim-org", "opensim", "-y"]
+            },
+            "pytorch": {
+                "message": "Installing PyTorch with CUDA...",
+                "command": ["pip", "install", "torch", "torchvision", "torchaudio", "--index-url", "https://download.pytorch.org/whl/cu124"]
+            },
+            "onnxruntime-gpu": {
+                "message": "Installing ONNX Runtime GPU...",
+                "command": ["pip", "uninstall", "onnxruntime", "-y", "&&", "pip", "install", "onnxruntime-gpu"]
+            }
+        }
+
+        if dependency_id not in commands:
+            messagebox.showerror("Error", f"Unknown dependency: {dependency_id}")
+            return
+
+        dep_info = commands[dependency_id]
+
+        if dep_info["command"] is None:
+            # Manual installation required
+            messagebox.showinfo("Manual Installation", dep_info["message"])
+            return
+
+        # Show installation dialog
+        progress_window = ctk.CTkToplevel(self.frame)
+        progress_window.title(f"Installing {self.dependencies[dependency_id]['name']}")
+        progress_window.geometry("400x200")
+        progress_window.transient(self.frame)
+        progress_window.grab_set()
+
+        # Message
+        message_label = ctk.CTkLabel(
+            progress_window,
+            text=dep_info["message"],
+            font=("Helvetica", 14)
+        )
+        message_label.pack(pady=(20, 10))
+
+        # Progress indicator
+        progress = ctk.CTkProgressBar(progress_window)
+        progress.pack(fill='x', padx=20, pady=10)
+        progress.configure(mode="indeterminate")
+        progress.start()
+
+        # Status
+        status_label = ctk.CTkLabel(
+            progress_window,
+            text="Starting installation...",
+            font=("Helvetica", 12)
+        )
+        status_label.pack(pady=10)
+
+        # Run installation in a separate thread
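+        # NOTE: tkinter widgets are not thread-safe, so the worker below reports
+        # progress back to the UI exclusively through self.frame.after(...) callbacks.
+        def 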
install_thread(): + try: + # Update status + self.frame.after(0, lambda: status_label.configure(text="Installation in progress...")) + + # Execute command + if "&&" in dep_info["command"]: + # Handle compound commands (uninstall and then install) + cmd1 = dep_info["command"][:dep_info["command"].index("&&")] + cmd2 = dep_info["command"][dep_info["command"].index("&&")+1:] + + # Run first command + result1 = subprocess.run(cmd1, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + + # Run second command + result2 = subprocess.run(cmd2, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + + success = result2.returncode == 0 + else: + # Single command + result = subprocess.run(dep_info["command"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + success = result.returncode == 0 + + # Update UI based on result + if success: + self.frame.after(0, lambda: status_label.configure( + text="Installation completed successfully!", + text_color="#4CAF50" + )) + + # Update dependency status + self.dependencies[dependency_id]["installed"] = True + self.frame.after(500, self.update_dependency_ui) + else: + self.frame.after(0, lambda: status_label.configure( + text="Installation failed. Please try manual installation.", + text_color="#F44336" + )) + + # Add close button + self.frame.after(0, lambda: ctk.CTkButton( + progress_window, + text="Close", + command=progress_window.destroy + ).pack(pady=10)) + + # Stop progress animation + self.frame.after(0, progress.stop) + + except Exception as e: + # Show error + self.frame.after(0, lambda e=e: status_label.configure( + text=f"Error: {str(e)}", + text_color="#F44336" + )) + + # Add close button + self.frame.after(0, lambda: ctk.CTkButton( + progress_window, + text="Close", + command=progress_window.destroy + ).pack(pady=10)) + + # Stop progress animation + self.frame.after(0, progress.stop) + + # Start installation thread + threading.Thread(target=install_thread, daemon=True).start() \ No newline at end of file diff --git a/GUI/tabs/visualization_tab.py b/GUI/tabs/visualization_tab.py new file mode 100644 index 00000000..a30514c1 --- /dev/null +++ b/GUI/tabs/visualization_tab.py @@ -0,0 +1,1120 @@ +import os +import numpy as np +import tkinter as tk +import customtkinter as ctk +from tkinter import filedialog, messagebox +from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg +from matplotlib.figure import Figure +import cv2 +from PIL import Image, ImageTk + +class VisualizationTab: + def __init__(self, parent, app): + self.parent = parent + self.app = app + + # Create main frame + self.frame = ctk.CTkFrame(parent) + + # Initialize data variables + self.trc_data = None + self.mot_data = None + self.video_cap = None + self.video_path = None + self.current_frame = 0 + self.playing = False + self.play_after_id = None + + # Selected angles/segments for visualization + self.selected_angles = [] + + # Stores current time line object in angle plot + self.time_line = None + + # Build the UI + self.build_ui() + + def get_title(self): + """Return the tab title""" + return "Data Visualization" + + def get_settings(self): + """Get the visualization settings""" + return {} # This tab doesn't add settings to the config file + + def build_ui(self): + # Create main layout with left and right panels + self.main_paned_window = ctk.CTkFrame(self.frame) + self.main_paned_window.pack(fill='both', expand=True, padx=10, pady=10) + + # Top control panel + self.control_panel = ctk.CTkFrame(self.main_paned_window) + self.control_panel.pack(fill='x', 
pady=(0, 10)) + + # File control frame + file_frame = ctk.CTkFrame(self.control_panel) + file_frame.pack(side='left', fill='y', padx=10, pady=5) + + ctk.CTkLabel(file_frame, text="Data Files:", font=("Helvetica", 12, "bold")).pack(side='left', padx=5) + + self.auto_detect_btn = ctk.CTkButton( + file_frame, + text="Auto-Detect Files", + command=self.auto_detect_files, + width=120 + ) + self.auto_detect_btn.pack(side='left', padx=5) + + self.load_trc_btn = ctk.CTkButton( + file_frame, + text="Load TRC", + command=self.load_trc_file, + width=80 + ) + self.load_trc_btn.pack(side='left', padx=5) + + self.load_mot_btn = ctk.CTkButton( + file_frame, + text="Load MOT", + command=self.load_mot_file, + width=80 + ) + self.load_mot_btn.pack(side='left', padx=5) + + self.load_video_btn = ctk.CTkButton( + file_frame, + text="Load Video", + command=self.load_video_file, + width=80 + ) + self.load_video_btn.pack(side='left', padx=5) + + # Playback controls + playback_frame = ctk.CTkFrame(self.control_panel) + playback_frame.pack(side='right', fill='y', padx=10, pady=5) + + self.play_btn = ctk.CTkButton( + playback_frame, + text="▶️ Play", + command=self.toggle_play, + width=80 + ) + self.play_btn.pack(side='left', padx=5) + + self.speed_var = ctk.DoubleVar(value=1.0) + speed_frame = ctk.CTkFrame(playback_frame) + speed_frame.pack(side='left', padx=5) + + ctk.CTkLabel(speed_frame, text="Speed:").pack(side='left', padx=2) + ctk.CTkComboBox( + speed_frame, + values=["0.25x", "0.5x", "1.0x", "1.5x", "2.0x"], + command=self.set_playback_speed, + width=70 + ).pack(side='left', padx=2) + + # Split window contents + self.content_frame = ctk.CTkFrame(self.main_paned_window) + self.content_frame.pack(fill='both', expand=True) + + # Left panel (70% width) - visualization of markers and/or video + self.left_panel = ctk.CTkFrame(self.content_frame) + self.left_panel.pack(side='left', fill='both', expand=True, padx=(0, 5)) + + # Right panel (30% width) - angle plots and selection + self.right_panel = ctk.CTkFrame(self.content_frame) + self.right_panel.pack(side='right', fill='both', expand=False, padx=(5, 0), pady=5, ipadx=10) + self.right_panel.configure(width=350) # Fixed width + + # Add visualization elements + self.create_visualization_panel() + self.create_angles_panel() + + # Add timeline slider at bottom + self.timeline_frame = ctk.CTkFrame(self.main_paned_window) + self.timeline_frame.pack(fill='x', pady=(10, 0)) + + self.frame_slider = ctk.CTkSlider( + self.timeline_frame, + from_=0, + to=100, + command=self.on_slider_change + ) + self.frame_slider.pack(side='left', fill='x', expand=True, padx=5, pady=10) + + self.frame_label = ctk.CTkLabel(self.timeline_frame, text="Frame: 0/0") + self.frame_label.pack(side='right', padx=5) + + # Status bar + self.status_label = ctk.CTkLabel( + self.main_paned_window, + text="Load data files to begin visualization", + anchor="w", + font=("Helvetica", 11), + text_color="gray" + ) + self.status_label.pack(fill='x', pady=(5, 0)) + + def create_visualization_panel(self): + """Create the left panel for 3D visualization and/or video display""" + # Top part: 3D markers or video + self.viz_frame = ctk.CTkFrame(self.left_panel) + self.viz_frame.pack(fill='both', expand=True, pady=5) + + # Notebook for switching between 3D view and video + self.viz_notebook = ctk.CTkTabview(self.viz_frame) + self.viz_notebook.pack(fill='both', expand=True, padx=5, pady=5) + + # Add tabs + self.markers_tab = self.viz_notebook.add("3D Markers") + self.video_tab = self.viz_notebook.add("Video") + + # 
Create marker visualization in markers tab
+        self.create_marker_visualization()
+
+        # Create video display in video tab
+        self.create_video_display()
+
+    def create_marker_visualization(self):
+        """Create 3D marker visualization with Y-up orientation"""
+        self.marker_fig = Figure(figsize=(8, 6), dpi=100)
+        self.marker_ax = self.marker_fig.add_subplot(111, projection='3d')
+        self.marker_ax.set_title('3D Marker Positions')
+        # TRC data is Y-up, but matplotlib's vertical axis is z, so TRC Y is
+        # plotted on the z axis (see update_marker_visualization)
+        self.marker_ax.set_xlabel('X')
+        self.marker_ax.set_ylabel('Z (Depth)')
+        self.marker_ax.set_zlabel('Y (Up)')
+
+        # Set initial view angle
+        self.marker_ax.view_init(elev=20, azim=-35)
+
+        # Create canvas widget
+        self.marker_canvas = FigureCanvasTkAgg(self.marker_fig, master=self.markers_tab)
+        self.marker_canvas.draw()
+        self.marker_canvas.get_tk_widget().pack(fill='both', expand=True)
+
+        # Initialize empty marker data
+        self.scatter = self.marker_ax.scatter([], [], [], s=30)
+
+        # Add options for marker display
+        self.marker_options_frame = ctk.CTkFrame(self.markers_tab)
+        self.marker_options_frame.pack(fill='x', pady=5)
+
+        self.connect_joints_var = ctk.BooleanVar(value=True)
+        ctk.CTkCheckBox(
+            self.marker_options_frame,
+            text="Connect Joints",
+            variable=self.connect_joints_var,
+            command=self.update_marker_visualization
+        ).pack(side='left', padx=10)
+
+        self.show_labels_var = ctk.BooleanVar(value=False)
+        ctk.CTkCheckBox(
+            self.marker_options_frame,
+            text="Show Labels",
+            variable=self.show_labels_var,
+            command=self.update_marker_visualization
+        ).pack(side='left', padx=10)
+
+        # Add view angle controls
+        angle_frame = ctk.CTkFrame(self.marker_options_frame)
+        angle_frame.pack(side='right', padx=10)
+
+        ctk.CTkLabel(angle_frame, text="Elev:").pack(side='left', padx=2)
+        self.elev_var = ctk.StringVar(value="20")
+        elev_entry = ctk.CTkEntry(angle_frame, width=40, textvariable=self.elev_var)
+        elev_entry.pack(side='left', padx=2)
+
+        ctk.CTkLabel(angle_frame, text="Azim:").pack(side='left', padx=2)
+        self.azim_var = ctk.StringVar(value="-35")
+        azim_entry = ctk.CTkEntry(angle_frame, width=40, textvariable=self.azim_var)
+        azim_entry.pack(side='left', padx=2)
+
+        ctk.CTkButton(
+            angle_frame,
+            text="Apply",
+            command=self.apply_view_angle,
+            width=60
+        ).pack(side='left', padx=2)
+
+    def apply_view_angle(self):
+        """Apply the specified view angle"""
+        try:
+            elev = float(self.elev_var.get())
+            azim = float(self.azim_var.get())
+            self.marker_ax.view_init(elev=elev, azim=azim)
+            self.marker_canvas.draw()
+        except ValueError:
+            pass
+
+    def create_video_display(self):
+        """Create video display area"""
+        # Frame for video display
+        self.video_display_frame = ctk.CTkFrame(self.video_tab)
+        self.video_display_frame.pack(fill='both', expand=True)
+
+        # Canvas for video
+        self.video_canvas = tk.Canvas(self.video_display_frame, bg="black")
+        self.video_canvas.pack(fill='both', expand=True)
+
+        # Add a label with instructions
+        self.video_label = ctk.CTkLabel(
+            self.video_display_frame,
+            text="Load a video using the 'Load Video' button",
+            font=("Helvetica", 14)
+        )
+        self.video_label.place(relx=0.5, rely=0.5, anchor='center')
+
+    def create_angles_panel(self):
+        """Create the right panel for angle selection and plots"""
+        # Create tabs for different views
+        self.angles_notebook = ctk.CTkTabview(self.right_panel)
+        self.angles_notebook.pack(fill='both', expand=True)
+
+        # Add tabs
+        self.plots_tab = self.angles_notebook.add("Plots")
+        self.selection_tab = self.angles_notebook.add("Selection")
+
+        # Create plots tab
+        self.create_angle_plots()
+
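+        # The "Plots" sub-tab renders the selected joint angles over time; the
+        # "Selection" sub-tab lists every angle column found in the MOT file.
+        # 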
Create selection tab + self.create_angle_selection() + + def create_angle_plots(self): + """Create angle plot visualizations""" + self.angle_fig = Figure(figsize=(5, 7), dpi=100) + + # Create a single plot that will show selected angles + self.angle_ax = self.angle_fig.add_subplot(111) + self.angle_ax.set_title('Joint Angles') + self.angle_ax.set_xlabel('Time (s)') + self.angle_ax.set_ylabel('Angle (degrees)') + self.angle_ax.grid(True, linestyle='--', alpha=0.7) + + self.angle_fig.tight_layout() + + # Create canvas widget + self.angle_canvas = FigureCanvasTkAgg(self.angle_fig, master=self.plots_tab) + self.angle_canvas.draw() + self.angle_canvas.get_tk_widget().pack(fill='both', expand=True) + + # Current time indicator + self.time_line = None + + def create_angle_selection(self): + """Create UI for selecting angles to plot""" + # Create scrollable frame for angle selection + self.selection_frame = ctk.CTkScrollableFrame(self.selection_tab) + self.selection_frame.pack(fill='both', expand=True, padx=5, pady=5) + + # Add a message when no data is loaded + self.no_data_label = ctk.CTkLabel( + self.selection_frame, + text="Load a MOT file to view available angles", + font=("Helvetica", 12), + text_color="gray" + ) + self.no_data_label.pack(pady=20) + + # Add buttons at the bottom + self.selection_buttons_frame = ctk.CTkFrame(self.selection_tab) + self.selection_buttons_frame.pack(fill='x', pady=5) + + self.select_all_btn = ctk.CTkButton( + self.selection_buttons_frame, + text="Select All", + command=self.select_all_angles, + width=90, + state="disabled" + ) + self.select_all_btn.pack(side='left', padx=5) + + self.deselect_all_btn = ctk.CTkButton( + self.selection_buttons_frame, + text="Deselect All", + command=self.deselect_all_angles, + width=90, + state="disabled" + ) + self.deselect_all_btn.pack(side='left', padx=5) + + self.apply_selection_btn = ctk.CTkButton( + self.selection_buttons_frame, + text="Apply Selection", + command=self.apply_angle_selection, + width=110, + state="disabled" + ) + self.apply_selection_btn.pack(side='right', padx=5) + + def auto_detect_files(self): + """Auto-detect TRC and MOT files""" + try: + # Determine file paths based on application mode + self.update_status("Looking for data files...", "blue") + + trc_file = None + mot_file = None + video_file = None + + if self.app.analysis_mode == '2d': + # For 2D analysis, look for *Sports2D folder + search_path = self.app.participant_name + sports2d_folders = [] + + for root, dirs, _ in os.walk(search_path): + for dir_name in dirs: + if dir_name.endswith("Sports2D"): + sports2d_folders.append(os.path.join(root, dir_name)) + + if sports2d_folders: + folder_path = sports2d_folders[0] + + # Find TRC files + trc_files = [f for f in os.listdir(folder_path) if f.endswith('.trc')] + non_lstm_trc = [f for f in trc_files if "LSTM" not in f] + + if non_lstm_trc: + trc_file = os.path.join(folder_path, non_lstm_trc[0]) + elif trc_files: + trc_file = os.path.join(folder_path, trc_files[0]) + + # Find MOT files + mot_files = [f for f in os.listdir(folder_path) if f.endswith('.mot')] + non_lstm_mot = [f for f in mot_files if "LSTM" not in f] + + if non_lstm_mot: + mot_file = os.path.join(folder_path, non_lstm_mot[0]) + elif mot_files: + mot_file = os.path.join(folder_path, mot_files[0]) + + # Look for video files + video_files = [f for f in os.listdir(folder_path) if f.lower().endswith(('.mp4', '.avi', '.mov'))] + if video_files: + video_file = os.path.join(folder_path, video_files[0]) + else: + # For 3D analysis + search_path = 
self.app.participant_name
+
+                # Look for pose-3d.trc
+                potential_trc = os.path.join(search_path, 'pose-3d.trc')
+                if os.path.exists(potential_trc):
+                    trc_file = potential_trc
+
+                # Look in kinematics folder for MOT files
+                kinematics_path = os.path.join(search_path, 'kinematics')
+                if os.path.exists(kinematics_path):
+                    mot_files = [f for f in os.listdir(kinematics_path) if f.endswith('.mot')]
+                    if mot_files:
+                        mot_file = os.path.join(kinematics_path, mot_files[0])
+
+                # Look for videos
+                videos_path = os.path.join(search_path, 'videos')
+                if os.path.exists(videos_path):
+                    video_files = [f for f in os.listdir(videos_path) if f.lower().endswith(('.mp4', '.avi', '.mov'))]
+                    if video_files:
+                        video_file = os.path.join(videos_path, video_files[0])
+
+            # Load the files if found
+            files_found = False
+
+            if trc_file:
+                self.load_trc_data(trc_file)
+                files_found = True
+
+            if mot_file:
+                self.load_mot_data(mot_file)
+                files_found = True
+
+            if video_file:
+                self.load_video(video_file)
+                files_found = True
+
+            if not files_found:
+                self.update_status("No data files found. Try loading files manually.", "orange")
+            else:
+                self.update_status("Data files loaded successfully.", "green")
+
+        except Exception as e:
+            self.update_status(f"Error auto-detecting files: {str(e)}", "red")
+
+    def load_trc_file(self):
+        """Open file dialog to load TRC file"""
+        file_path = filedialog.askopenfilename(
+            title="Select TRC File",
+            filetypes=[("TRC Files", "*.trc"), ("All Files", "*.*")]
+        )
+
+        if file_path:
+            self.load_trc_data(file_path)
+
+    def load_mot_file(self):
+        """Open file dialog to load MOT file"""
+        file_path = filedialog.askopenfilename(
+            title="Select MOT File",
+            filetypes=[("MOT Files", "*.mot"), ("All Files", "*.*")]
+        )
+
+        if file_path:
+            self.load_mot_data(file_path)
+
+    def load_video_file(self):
+        """Open file dialog to load video file"""
+        file_path = filedialog.askopenfilename(
+            title="Select Video File",
+            filetypes=[
+                ("Video Files", "*.mp4 *.avi *.mov *.mkv"),
+                ("All Files", "*.*")
+            ]
+        )
+
+        if file_path:
+            self.load_video(file_path)
+
+    def load_trc_data(self, file_path):
+        """Parse and load TRC file"""
+        try:
+            self.update_status(f"Loading TRC file: {os.path.basename(file_path)}...", "blue")
+
+            with open(file_path, 'r') as f:
+                content = f.readlines()
+
+            # First find the header lines
+            data_rate_header_idx = -1
+            for i, line in enumerate(content):
+                if "DataRate" in line and "CameraRate" in line:
+                    data_rate_header_idx = i
+                    break
+
+            if data_rate_header_idx == -1:
+                raise ValueError("Invalid TRC file format: DataRate header line not found")
+
+            # Get values from the line after the header
+            values_line_idx = data_rate_header_idx + 1
+            if values_line_idx >= len(content):
+                raise ValueError("Invalid TRC file format: Values line missing")
+
+            values_line = content[values_line_idx].strip().split('\t')
+            if len(values_line) < 4:
+                raise ValueError(f"Invalid values line format: {content[values_line_idx]}")
+
+            # Extract values from values line
+            frame_rate = float(values_line[0])
+            num_frames = int(values_line[2])
+            num_markers = int(values_line[3])
+
+            # Find column headers (marker names) - in a standard TRC file this is
+            # the line directly after the values line
+            marker_line_idx = values_line_idx + 1
+            if marker_line_idx >= len(content):
+                raise ValueError("Invalid TRC file format: Marker names line not found")
+
+            marker_names_line = content[marker_line_idx].strip().split('\t')
+
+            # Process marker names (removing duplicates from X/Y/Z components)
+            marker_names = []
+            i = 2  # Start after Frame# and Time
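+            # Each marker owns three consecutive columns (X, Y, Z), so walk the
+            # header row in steps of three.
+            while i < 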
len(marker_names_line):
+                if marker_names_line[i]:
+                    # Remove X/Y/Z suffix if present
+                    name = marker_names_line[i].split(':')[0]  # Handle "MarkerName:X" format
+                    marker_names.append(name)
+                    i += 3  # Skip the X/Y/Z columns for this marker
+                else:
+                    i += 1  # Skip empty column
+
+            # Parse data
+            data_start_idx = marker_line_idx + 2  # Skip marker names line and coordinate headers
+
+            frames_data = []
+            for i in range(data_start_idx, len(content)):
+                line = content[i].strip()
+                if not line:
+                    continue
+
+                parts = line.split('\t')
+                if len(parts) < 4:  # Need frame, time, and at least one coordinate
+                    continue
+
+                try:
+                    frame_num = int(float(parts[0]))
+                    time_val = float(parts[1])
+
+                    # Process marker data
+                    markers = {}
+
+                    for j in range(len(marker_names)):
+                        # Each marker has 3 values (X, Y, Z)
+                        col_offset = 2 + j*3
+
+                        # Check if within bounds
+                        if col_offset + 2 < len(parts):
+                            x_str = parts[col_offset].strip()
+                            y_str = parts[col_offset + 1].strip()
+                            z_str = parts[col_offset + 2].strip()
+
+                            try:
+                                x = float(x_str) if x_str else float('nan')
+                                y = float(y_str) if y_str else float('nan')
+                                z = float(z_str) if z_str else float('nan')
+
+                                markers[marker_names[j]] = {'x': x, 'y': y, 'z': z}
+                            except ValueError:
+                                # Skip invalid values
+                                pass
+
+                    frames_data.append({
+                        'frame': frame_num,
+                        'time': time_val,
+                        'markers': markers
+                    })
+
+                except (ValueError, IndexError) as e:
+                    # Skip invalid lines
+                    print(f"Error parsing line {i}: {e}")
+                    continue
+
+            # Store data
+            self.trc_data = {
+                'file_path': file_path,
+                'marker_names': marker_names,
+                'num_frames': num_frames,
+                'frames': frames_data
+            }
+
+            # Update slider range
+            max_frame = len(frames_data) - 1
+            self.frame_slider.configure(to=max_frame)
+            self.frame_slider.set(0)
+            self.current_frame = 0
+            self.frame_label.configure(text=f"Frame: 1/{len(frames_data)}")
+
+            # Update visualization
+            self.update_marker_visualization()
+
+            # Switch to 3D Markers tab
+            self.viz_notebook.set("3D Markers")
+
+            self.update_status(f"TRC file loaded: {os.path.basename(file_path)} ({len(marker_names)} markers, {len(frames_data)} frames)", "green")
+
+        except Exception as e:
+            self.update_status(f"Error loading TRC file: {str(e)}", "red")
+            import traceback
+            traceback.print_exc()
+
+    def load_mot_data(self, file_path):
+        """Parse and load MOT file"""
+        try:
+            self.update_status(f"Loading MOT file: {os.path.basename(file_path)}...", "blue")
+
+            with open(file_path, 'r') as f:
+                content = f.readlines()
+
+            # Find endheader line
+            header_end_idx = -1
+            for i, line in enumerate(content):
+                if "endheader" in line.lower():
+                    header_end_idx = i
+                    break
+
+            if header_end_idx == -1:
+                # Try alternate format (look for line starting with a number)
+                for i, line in enumerate(content):
+                    if line.strip() and line[0].isdigit():
+                        # The column-name row sits just above the first numeric row,
+                        # and headers are read from content[header_end_idx + 1] below,
+                        # hence i - 2 rather than i - 1
+                        header_end_idx = i - 2
+                        break
+
+            if header_end_idx == -1:
+                raise ValueError("Could not find header end in MOT file")
+
+            # Get column headers
+            header_line = content[header_end_idx + 1].strip()
+            headers = header_line.split()
+
+            # Parse data
+            frames_data = []
+            for i in range(header_end_idx + 2, len(content)):
+                line = content[i].strip()
+                if not line:
+                    continue
+
+                parts = line.split()
+                if len(parts) < 2:  # Need at least time and one value
+                    continue
+
+                try:
+                    time_val = float(parts[0])
+
+                    # Process angle data
+                    angles = {}
+                    for j in range(1, min(len(headers), len(parts))):
+                        try:
+                            value = float(parts[j]) if parts[j].strip() else float('nan')
+                            angles[headers[j]] = value
+                        except ValueError:
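+                            # A non-numeric token drops only that one angle value,
+                            # not the whole file.
+                            # Skip 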
invalid values
+                            pass
+
+                    frames_data.append({
+                        'time': time_val,
+                        'angles': angles
+                    })
+
+                except (ValueError, IndexError):
+                    # Skip invalid lines
+                    continue
+
+            # Store data
+            self.mot_data = {
+                'file_path': file_path,
+                'headers': headers[1:],  # Skip 'time' column
+                'frames': frames_data
+            }
+
+            # Update angle selection UI
+            self.update_angle_selection()
+
+            # Switch to Selection tab in right panel
+            self.angles_notebook.set("Selection")
+
+            self.update_status(f"MOT file loaded: {os.path.basename(file_path)} ({len(headers)-1} angles, {len(frames_data)} frames)", "green")
+
+        except Exception as e:
+            self.update_status(f"Error loading MOT file: {str(e)}", "red")
+
+    def load_video(self, file_path):
+        """Load video file"""
+        try:
+            # Close any previously open video
+            if self.video_cap is not None:
+                self.video_cap.release()
+
+            # Open the video file
+            self.video_cap = cv2.VideoCapture(file_path)
+
+            if not self.video_cap.isOpened():
+                raise ValueError("Could not open video file")
+
+            # Get video properties
+            width = int(self.video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
+            height = int(self.video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
+            fps = self.video_cap.get(cv2.CAP_PROP_FPS)
+            total_frames = int(self.video_cap.get(cv2.CAP_PROP_FRAME_COUNT))
+
+            self.video_path = file_path
+
+            # Update UI
+            self.video_label.place_forget()  # Hide the instruction label
+
+            # Switch to Video tab
+            self.viz_notebook.set("Video")
+
+            # Show first frame
+            self.update_video_frame()
+
+            self.update_status(f"Video loaded: {os.path.basename(file_path)} ({width}x{height}, {fps:.1f} fps, {total_frames} frames)", "green")
+
+        except Exception as e:
+            self.update_status(f"Error loading video: {str(e)}", "red")
+
+    def update_marker_visualization(self):
+        """Update 3D marker visualization with Y-up coordinate system"""
+        if not self.trc_data or self.current_frame >= len(self.trc_data['frames']):
+            return
+
+        # Clear existing plot
+        self.marker_ax.clear()
+
+        # Get frame data
+        frame_data = self.trc_data['frames'][self.current_frame]
+        markers = frame_data['markers']
+
+        # Prepare coordinates - map the TRC axes onto matplotlib's
+        # (TRC data is Y-up, while matplotlib's vertical axis is z)
+        xs, ys, zs = [], [], []
+        names = []
+
+        for name, coords in markers.items():
+            if not np.isnan(coords['x']) and not np.isnan(coords['y']) and not np.isnan(coords['z']):
+                xs.append(coords['x'])  # X stays as X
+                ys.append(coords['z'])  # TRC Z (depth) becomes plot Y
+                zs.append(coords['y'])  # TRC Y (up) becomes plot Z, the vertical axis
+                names.append(name)
+
+        # Plot markers
+        self.marker_ax.scatter(xs, ys, zs, c='blue', s=40)
+
+        # Add marker labels if enabled
+        if self.show_labels_var.get():
+            for i, (x, y, z, name) in enumerate(zip(xs, ys, zs, names)):
+                self.marker_ax.text(x, y, z, name, size=8, zorder=1, color='black')
+
+        # Connect joints if enabled
+        if self.connect_joints_var.get():
+            # Define connections between markers
+            connections = {
+                'Hip': ['RHip', 'LHip', 'Neck'],
+                'RHip': ['RKnee'],
+                'RKnee': ['RAnkle'],
+                'RAnkle': ['RHeel', 'RBigToe'],
+                'RBigToe': ['RSmallToe'],
+                'LHip': ['LKnee'],
+                'LKnee': ['LAnkle'],
+                'LAnkle': ['LHeel', 'LBigToe'],
+                'LBigToe': ['LSmallToe'],
+                'Neck': ['Head', 'RShoulder', 'LShoulder'],
+                'Head': ['Nose'],
+                'RShoulder': ['RElbow'],
+                'RElbow': ['RWrist'],
+                'LShoulder': ['LElbow'],
+                'LElbow': ['LWrist']
+            }
+
+            marker_dict = {name: (x, y, z) for name, x, y, z in zip(names, xs, ys, zs)}
+
+            for start, ends in connections.items():
+                if start in marker_dict:
+                    start_coords = marker_dict[start]
+                    for end in ends:
+                        if end in marker_dict:
+                            end_coords = 
marker_dict[end]
+                            self.marker_ax.plot(
+                                [start_coords[0], end_coords[0]],
+                                [start_coords[1], end_coords[1]],
+                                [start_coords[2], end_coords[2]],
+                                'k-', linewidth=1
+                            )
+
+        # Set axis properties
+        x_range = max(xs) - min(xs) if xs else 1
+        y_range = max(ys) - min(ys) if ys else 1
+        z_range = max(zs) - min(zs) if zs else 1
+
+        # Find center point
+        x_center = (max(xs) + min(xs)) / 2 if xs else 0
+        y_center = (max(ys) + min(ys)) / 2 if ys else 0
+        z_center = (max(zs) + min(zs)) / 2 if zs else 0
+
+        # Set equal aspect ratio
+        max_range = max(x_range, y_range, z_range) * 0.6
+
+        self.marker_ax.set_xlim(x_center - max_range, x_center + max_range)
+        self.marker_ax.set_ylim(y_center - max_range, y_center + max_range)
+        self.marker_ax.set_zlim(z_center - max_range, z_center + max_range)
+
+        # Set labels (TRC Y is plotted on the vertical z axis, see above)
+        self.marker_ax.set_xlabel('X')
+        self.marker_ax.set_ylabel('Z (Depth)')
+        self.marker_ax.set_zlabel('Y (Up)')
+        self.marker_ax.set_title(f'3D Markers - Frame {self.current_frame+1}')
+
+        # Keep whatever view angle the user applied instead of resetting it each frame
+        try:
+            self.marker_ax.view_init(elev=float(self.elev_var.get()), azim=float(self.azim_var.get()))
+        except ValueError:
+            self.marker_ax.view_init(elev=20, azim=-35)
+
+        # Redraw
+        self.marker_canvas.draw()
+
+    def update_video_frame(self):
+        """Update video display with current frame"""
+        if self.video_cap is None:
+            return
+
+        # Seek to the current frame
+        self.video_cap.set(cv2.CAP_PROP_POS_FRAMES, self.current_frame)
+
+        # Read the frame
+        ret, frame = self.video_cap.read()
+
+        if not ret:
+            self.update_status("Failed to read video frame", "red")
+            return
+
+        # Convert frame to RGB
+        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+
+        # Get canvas dimensions
+        canvas_width = self.video_canvas.winfo_width()
+        canvas_height = self.video_canvas.winfo_height()
+
+        if canvas_width < 10 or canvas_height < 10:  # Canvas not yet realized
+            # Set default size
+            canvas_width = 640
+            canvas_height = 480
+
+        # Calculate scaling to fit the canvas while maintaining aspect ratio
+        frame_h, frame_w = frame_rgb.shape[:2]
+
+        scale = min(canvas_width / frame_w, canvas_height / frame_h)
+
+        new_width = int(frame_w * scale)
+        new_height = int(frame_h * scale)
+
+        # Resize the frame
+        frame_resized = cv2.resize(frame_rgb, (new_width, new_height))
+
+        # Convert to PIL Image
+        image = Image.fromarray(frame_resized)
+
+        # Convert to PhotoImage
+        self.photo = ImageTk.PhotoImage(image=image)
+
+        # Update canvas
+        self.video_canvas.delete("all")
+
+        # Center the image
+        x_offset = (canvas_width - new_width) // 2
+        y_offset = (canvas_height - new_height) // 2
+
+        self.video_canvas.create_image(x_offset, y_offset, anchor="nw", image=self.photo)
+
+    def update_angle_selection(self):
+        """Update angle selection UI based on loaded MOT data"""
+        if not self.mot_data:
+            return
+
+        # Clear existing widgets
+        for widget in self.selection_frame.winfo_children():
+            widget.destroy()
+
+        # Get angle headers
+        angle_headers = self.mot_data['headers']
+
+        if not angle_headers:
+            ctk.CTkLabel(
+                self.selection_frame,
+                text="No angles found in MOT file",
+                font=("Helvetica", 12),
+                text_color="gray"
+            ).pack(pady=20)
+            return
+
+        # Create variables for checkboxes
+        self.angle_vars = {}
+
+        # Group similar angles
+        angle_groups = {
+            "Lower Limbs": [a for a in angle_headers if any(s in a.lower() for s in
+                ['ankle', 'knee', 'hip', 'foot', 'toe', 'heel', 'thigh', 'shank'])],
+            "Upper Limbs": [a for a in angle_headers if any(s in a.lower() for s in
+                ['shoulder', 'arm', 'elbow', 'wrist', 'forearm', 'sup'])],
+            "Trunk & Spine": [a for a in angle_headers if any(s in a.lower() for s in
+                ['trunk', 'pelvis', 'lumbar', 'thorax', 'neck', 
'head', 'spine', 'l1', 'l2', 'l3', 'l4', 'l5'])], + "Other": [] # Will catch anything not categorized above + } + + # Add uncategorized angles to "Other" + for angle in angle_headers: + if not any(angle in group for group in angle_groups.values()): + angle_groups["Other"].append(angle) + + # Create section for each group + for group_name, angles in angle_groups.items(): + if not angles: + continue + + # Create group frame + group_frame = ctk.CTkFrame(self.selection_frame) + group_frame.pack(fill='x', pady=5, padx=2) + + # Group header + ctk.CTkLabel( + group_frame, + text=group_name, + font=("Helvetica", 12, "bold") + ).pack(anchor='w', padx=5, pady=5) + + # Create checkboxes for all angles in this group + for angle in angles: + var = ctk.BooleanVar(value=False) + self.angle_vars[angle] = var + + ctk.CTkCheckBox( + group_frame, + text=angle, + variable=var + ).pack(anchor='w', padx=20, pady=2) + + # Enable selection buttons + self.select_all_btn.configure(state="normal") + self.deselect_all_btn.configure(state="normal") + self.apply_selection_btn.configure(state="normal") + + def select_all_angles(self): + """Select all angles""" + if hasattr(self, 'angle_vars'): + for var in self.angle_vars.values(): + var.set(True) + + def deselect_all_angles(self): + """Deselect all angles""" + if hasattr(self, 'angle_vars'): + for var in self.angle_vars.values(): + var.set(False) + + def apply_angle_selection(self): + """Apply the current angle selection to the plot""" + if not hasattr(self, 'angle_vars') or not self.mot_data: + return + + # Get selected angles + self.selected_angles = [angle for angle, var in self.angle_vars.items() if var.get()] + + if not self.selected_angles: + messagebox.showinfo("Selection Empty", "Please select at least one angle to plot") + return + + # Update angle plot + self.update_angle_plot() + + # Update time indicator if TRC data is loaded + if self.trc_data and self.current_frame < len(self.trc_data['frames']): + current_time = self.trc_data['frames'][self.current_frame]['time'] + self.update_time_indicator(current_time) + + # Switch to Plots tab + self.angles_notebook.set("Plots") + + def update_angle_plot(self): + """Update angle plot with selected angles""" + if not self.mot_data or not self.selected_angles: + return + + # Clear existing plot + self.angle_ax.clear() + + # Get time values + time_values = [frame['time'] for frame in self.mot_data['frames']] + + # Plot selected angles + for angle in self.selected_angles: + angle_values = [frame['angles'].get(angle, float('nan')) for frame in self.mot_data['frames']] + self.angle_ax.plot(time_values, angle_values, label=angle) + + # Add vertical line for current time if data available + if self.trc_data and self.current_frame < len(self.trc_data['frames']): + current_time = self.trc_data['frames'][self.current_frame]['time'] + # Add or update vertical line to show current time + self.time_line = self.angle_ax.axvline(x=current_time, color='red', linestyle='--', linewidth=2) + + # Set labels and title + self.angle_ax.set_xlabel('Time (s)') + self.angle_ax.set_ylabel('Angle (degrees)') + self.angle_ax.set_title('Joint Angles') + self.angle_ax.grid(True, linestyle='--', alpha=0.7) + self.angle_ax.legend(loc='best', fontsize='small') + + # Adjust layout + self.angle_fig.tight_layout() + + # Redraw + self.angle_canvas.draw() + + def on_slider_change(self, value): + """Handle slider position change""" + if not self.trc_data and not self.video_cap: + return + + # Get the frame index + frame_index = int(float(value)) + + # 
Update current frame + self.current_frame = frame_index + + # Update frame label + max_frames = 0 + if self.trc_data: + max_frames = len(self.trc_data['frames']) + elif self.video_cap: + max_frames = int(self.video_cap.get(cv2.CAP_PROP_FRAME_COUNT)) + + self.frame_label.configure(text=f"Frame: {frame_index+1}/{max_frames}") + + # Update visualizations + if self.trc_data: + self.update_marker_visualization() + + if self.video_cap: + self.update_video_frame() + + # Update time indicator in angle plot + if self.trc_data and self.mot_data and self.selected_angles: + current_time = self.trc_data['frames'][self.current_frame]['time'] + self.update_time_indicator(current_time) + + def update_time_indicator(self, current_time): + """Update the time indicator line in the angle plot""" + if hasattr(self, 'angle_ax') and self.selected_angles: + # Remove existing time line if it exists + if self.time_line: + try: + self.time_line.remove() + except: + pass + + # Add new time line + self.time_line = self.angle_ax.axvline(x=current_time, color='red', linestyle='--', linewidth=2) + + # Redraw the canvas + self.angle_canvas.draw() + + def toggle_play(self): + """Toggle playback of animation""" + self.playing = not self.playing + + if self.playing: + self.play_btn.configure(text="⏸ Pause") + self.play_animation() + else: + self.play_btn.configure(text="▶️ Play") + # Cancel scheduled animation + if self.play_after_id: + self.frame.after_cancel(self.play_after_id) + self.play_after_id = None + + def play_animation(self): + """Play animation frame by frame""" + if not self.playing: + return + + # Determine max frames + max_frames = 0 + if self.trc_data: + max_frames = len(self.trc_data['frames']) + elif self.video_cap: + max_frames = int(self.video_cap.get(cv2.CAP_PROP_FRAME_COUNT)) + + if max_frames <= 0: + self.playing = False + self.play_btn.configure(text="▶️ Play") + return + + # Advance to next frame + next_frame = (self.current_frame + 1) % max_frames + + # Update slider position (will trigger visualization update) + self.frame_slider.set(next_frame) + self.on_slider_change(next_frame) + + # Calculate frame delay based on speed setting + speed = self.speed_var.get() + + # Determine frame rate + fps = 30 # Default + if self.video_cap: + fps = self.video_cap.get(cv2.CAP_PROP_FPS) + + # Calculate delay in milliseconds + delay = int(1000 / (fps * speed)) + + # Schedule next frame + self.play_after_id = self.frame.after(delay, self.play_animation) + + def set_playback_speed(self, speed_text): + """Set playback speed from combo box selection""" + speed = float(speed_text.replace('x', '')) + self.speed_var.set(speed) + + def update_status(self, message, color="black"): + """Update status message""" + self.status_label.configure(text=message, text_color=color) \ No newline at end of file diff --git a/GUI/tabs/welcome_tab.py b/GUI/tabs/welcome_tab.py new file mode 100644 index 00000000..7c744fd8 --- /dev/null +++ b/GUI/tabs/welcome_tab.py @@ -0,0 +1,333 @@ +import customtkinter as ctk +from PIL import Image, ImageTk +from pathlib import Path + +class WelcomeTab: + def __init__(self, parent, app): + self.parent = parent + self.app = app + + # Create main frame + self.frame = ctk.CTkFrame(parent) + self.frame.pack(expand=True, fill='both') + + # Show welcome screen + self.show_welcome() + + def show_welcome(self): + """Show the welcome screen with Pose2Sim logo and language selection""" + # Add logo above the title + favicon_path = Path(__file__).parents[1]/"assets/Pose2Sim_logo.png" + self.top_image = 
Image.open(favicon_path) + self.top_photo = ctk.CTkImage(light_image=self.top_image, dark_image=self.top_image, size=(246,246)) + image_label = ctk.CTkLabel(self.frame, image=self.top_photo, text="") + image_label.pack(pady=(50, 20)) + + # Title + title_label = ctk.CTkLabel( + self.frame, + text="Pose2Sim", + font=("Helvetica", 72, "bold") + ) + title_label.pack(pady=(50, 10)) + + + # Cards container + cards_frame = ctk.CTkFrame(self.frame, fg_color="transparent") + cards_frame.pack(pady=20) + + # 2D Analysis Card + analysis_2d_card = ctk.CTkFrame(cards_frame) + analysis_2d_card.pack(side="left", padx=20, fill="both") + + analysis_2d_label = ctk.CTkLabel( + analysis_2d_card, + text=self.app.lang_manager.get_text("2d_analysis"), + font=("Helvetica", 22, "bold") + ) + analysis_2d_label.pack(pady=(20, 10)) + analysis_2d_label.translation_key = "2d_analysis" + + analysis_2d_single = ctk.CTkLabel( + analysis_2d_card, + text=self.app.lang_manager.get_text("single_camera"), + font=("Helvetica", 14), + height=80, + wraplength=250 + ) + analysis_2d_single.pack(pady=(0, 10), padx=30) + analysis_2d_single.translation_key = "single_camera" + + analysis_2d_button = ctk.CTkButton( + analysis_2d_card, + text=self.app.lang_manager.get_text("select"), + width=200, + height=40, + font=("Helvetica", 14), + command=lambda: self.select_analysis_mode("2d") + ) + analysis_2d_button.pack(pady=(0, 10)) + analysis_2d_button.translation_key = "select" + + # 3D Analysis Card + analysis_3d_card = ctk.CTkFrame(cards_frame) + analysis_3d_card.pack(side="left", padx=10, fill="both") + + analysis_3d_label = ctk.CTkLabel( + analysis_3d_card, + text=self.app.lang_manager.get_text("3d_analysis"), + font=("Helvetica", 22, "bold") + ) + analysis_3d_label.pack(pady=(20, 10)) + analysis_3d_label.translation_key = "3d_analysis" + + analysis_3d_multi = ctk.CTkLabel( + analysis_3d_card, + text=self.app.lang_manager.get_text("multi_camera"), + font=("Helvetica", 14), + height=80, + wraplength=250 + ) + analysis_3d_multi.pack(pady=(0, 10), padx=30) + analysis_3d_multi.translation_key = "multi_camera" + + analysis_3d_button = ctk.CTkButton( + analysis_3d_card, + text=self.app.lang_manager.get_text("select"), + width=200, + height=40, + font=("Helvetica", 14), + command=lambda: self.select_analysis_mode("3d") + ) + analysis_3d_button.pack(pady=(0, 10)) + analysis_3d_button.translation_key = "select" + + # Version info + version_label = ctk.CTkLabel( + self.frame, + text="Version 2.0", + font=("Helvetica", 12) + ) + version_label.pack(side="bottom", pady=20) + + def set_language(self, lang): + """Set the language and move to analysis mode selection""" + self.app.language = lang + self.app.lang_manager.set_language(lang) + self.app.change_language(lang) + + # Clear the frame + for widget in self.frame.winfo_children(): + widget.destroy() + + # Show analysis mode selection + self.show_analysis_mode_selection() + + + def select_analysis_mode(self, mode): + """Set the analysis mode and move to process mode selection""" + self.analysis_mode = mode + + # Clear the frame + for widget in self.frame.winfo_children(): + widget.destroy() + + # Show process mode selection + self.show_process_mode_selection() + + def show_process_mode_selection(self): + """Show the process mode selection screen""" + # Title + title_label = ctk.CTkLabel( + self.frame, + text=self.app.lang_manager.get_text("Select the process mode"), + font=("Helvetica", 30, "bold") + ) + title_label.pack(pady=(80, 40)) + title_label.translation_key = "Select the process mode" + + # 
Disable batch mode for 2D analysis
+        if self.analysis_mode == "2d":
+            # Skip process mode selection for 2D - always use single mode
+            self.process_mode = "single"
+            self.show_participant_name_input()
+            return
+
+        # Cards container for 3D analysis
+        cards_frame = ctk.CTkFrame(self.frame, fg_color="transparent")
+        cards_frame.pack(pady=20)
+
+        # Single Mode Card
+        single_card = ctk.CTkFrame(cards_frame)
+        single_card.pack(side="left", padx=20, fill="both")
+
+        single_trial_label = ctk.CTkLabel(
+            single_card,
+            text=self.app.lang_manager.get_text("Single Trial"),
+            font=("Helvetica", 22, "bold")
+        )
+        single_trial_label.pack(pady=(30, 20))
+        single_trial_label.translation_key = "Single Trial"
+
+        single_trial_explanation_label = ctk.CTkLabel(
+            single_card,
+            text="Process one recording session\nSimpler setup for single experiments",
+            font=("Helvetica", 14),
+            height=80
+        )
+        single_trial_explanation_label.pack(pady=(0, 20), padx=30)
+        single_trial_explanation_label.translation_key = "Process one recording session\nSimpler setup for single experiments"
+
+        single_trial_button_label = ctk.CTkButton(
+            single_card,
+            text=self.app.lang_manager.get_text("Select"),
+            width=200,
+            height=40,
+            font=("Helvetica", 14),
+            command=lambda: self.select_process_mode("single")
+        )
+        single_trial_button_label.pack(pady=(0, 30))
+        single_trial_button_label.translation_key = "Select"
+
+        # Batch Mode Card
+        batch_card = ctk.CTkFrame(cards_frame)
+        batch_card.pack(side="left", padx=20, fill="both")
+
+        batch_label = ctk.CTkLabel(
+            batch_card,
+            text=self.app.lang_manager.get_text("batch_mode"),
+            font=("Helvetica", 22, "bold")
+        )
+        batch_label.pack(pady=(30, 20))
+        batch_label.translation_key = "batch_mode"
+
+        batch_explanation_label = ctk.CTkLabel(
+            batch_card,
+            text="Process multiple trials at once\nIdeal for larger research studies",
+            font=("Helvetica", 14),
+            height=80
+        )
+        batch_explanation_label.pack(pady=(0, 20), padx=30)
+        batch_explanation_label.translation_key = "Process multiple trials at once\nIdeal for larger research studies"
+
+        batch_select_button = ctk.CTkButton(
+            batch_card,
+            text=self.app.lang_manager.get_text("Select"),
+            width=200,
+            height=40,
+            font=("Helvetica", 14),
+            command=lambda: self.select_process_mode("batch")  # must be lowercase: show_participant_name_input() and finalize_setup() check for "batch"
+        )
+        batch_select_button.pack(pady=(0, 30))
+        batch_select_button.translation_key = "Select"
+
+    def select_process_mode(self, mode):
+        """Set the process mode and move to participant input"""
+        self.process_mode = mode
+
+        # Clear the frame
+        for widget in self.frame.winfo_children():
+            widget.destroy()
+
+        # Show participant name input
+        self.show_participant_name_input()
+
+    def show_participant_name_input(self):
+        """Show the participant name input screen"""
+        # Create input frame
+        input_frame = ctk.CTkFrame(self.frame)
+        input_frame.pack(expand=True, fill="none", pady=100)
+
+        # Header
+        project_label = ctk.CTkLabel(
+            input_frame,
+            text=self.app.lang_manager.get_text("Project Name"),
+            font=("Helvetica", 24, "bold")
+        )
+        project_label.pack(pady=(20, 30))
+        project_label.translation_key = "Project Name"
+
+        # Name input
+        name_frame = ctk.CTkFrame(input_frame, fg_color="transparent")
+        name_frame.pack(pady=20)
+
+        project_prompt_label = ctk.CTkLabel(
+            name_frame,
+            text=self.app.lang_manager.get_text("Enter a project name"),
+            font=("Helvetica", 16)
+        )
+        project_prompt_label.pack(side="left", padx=10)
+        project_prompt_label.translation_key = "Enter a project name"
+
+        self.participant_name_var = ctk.StringVar(value="my_project")
+        name_entry = 
ctk.CTkEntry(name_frame, textvariable=self.participant_name_var, width=200, height=40) + name_entry.pack(side="left", padx=10) + + # For batch mode, also ask for number of trials + if hasattr(self, 'process_mode') and self.process_mode == "batch": + trials_frame = ctk.CTkFrame(input_frame, fg_color="transparent") + trials_frame.pack(pady=20) + + trial_number_label = ctk.CTkLabel( + trials_frame, + text=self.app.lang_manager.get_text("enter the trials number"), + font=("Helvetica", 16) + ) + trial_number_label.pack(side="left", padx=10) + trial_number_label.translation_key = "enter the trials number" + + self.num_trials_var = ctk.StringVar(value="3") + trials_entry = ctk.CTkEntry(trials_frame, textvariable=self.num_trials_var, width=100, height=40) + trials_entry.pack(side="left", padx=10) + + # Continue button + button_frame = ctk.CTkFrame(input_frame, fg_color="transparent") + button_frame.pack(pady=40) + + next_label = ctk.CTkButton( + button_frame, + text=self.app.lang_manager.get_text("next"), + width=200, + height=40, + font=("Helvetica", 16), + command=self.finalize_setup + ) + next_label.pack(side="bottom") + next_label.translation_key = "next" + + # Version info + version_label = ctk.CTkLabel( + self.frame, + text="Version 2.0", + font=("Helvetica", 12) + ) + version_label.pack(side="bottom", pady=20) + + def finalize_setup(self): + """Finalize setup and start the configuration process""" + participant_name = self.participant_name_var.get().strip() + if not participant_name: + participant_name = "Participant" + + # For batch mode, get the number of trials + num_trials = 0 + if hasattr(self, 'process_mode') and self.process_mode == "batch": + try: + num_trials = int(self.num_trials_var.get()) + if num_trials < 1: + raise ValueError + except ValueError: + num_trials = 3 # Default to 3 trials + + # Start the main configuration process + self.app.start_configuration( + analysis_mode=self.analysis_mode, + process_mode=self.process_mode, + participant_name=participant_name, + num_trials=num_trials + ) + + def clear(self): + """Clear the welcome tab frame when done""" + self.frame.pack_forget() \ No newline at end of file diff --git a/GUI/templates/2d_config_template.toml b/GUI/templates/2d_config_template.toml new file mode 100644 index 00000000..d63912f5 --- /dev/null +++ b/GUI/templates/2d_config_template.toml @@ -0,0 +1,155 @@ +############################################################################### +## SPORTS2D PROJECT PARAMETERS ## +############################################################################### + +[project] +video_input = 'cam2.mp4' +px_to_m_from_person_id = 0 +px_to_m_person_height = 1.75 +visible_side = ['auto'] +load_trc_px = '' +compare = false +time_range = [] +video_dir = '' +webcam_id = 0 +input_size = [1280, 720] + +[process] +multiperson = true +show_realtime_results = true +save_vid = true +save_img = false +save_pose = true +calculate_angles = true +save_angles = true +result_dir = '' + +[pose] +slowmo_factor = 1 +pose_model = 'Body_with_feet' +mode = 'balanced' +det_frequency = 4 +device = 'auto' +backend = 'auto' +tracking_mode = 'sports2d' +keypoint_likelihood_threshold = 0.3 +average_likelihood_threshold = 0.5 +keypoint_number_threshold = 0.3 + +[px_to_meters_conversion] +to_meters = true +make_c3d = true +save_calib = true +floor_angle = 'auto' +xy_origin = ['auto'] +calib_file = '' + +[angles] +display_angle_values_on = ['body', 'list'] +fontSize = 0.3 +joint_angles = ['Right ankle', 'Left ankle', 'Right knee', 'Left knee', 'Right hip', 'Left 
hip', 'Right shoulder', 'Left shoulder', 'Right elbow', 'Left elbow', 'Right wrist', 'Left wrist'] +segment_angles = ['Right foot', 'Left foot', 'Right shank', 'Left shank', 'Right thigh', 'Left thigh', 'Pelvis', 'Trunk', 'Shoulders', 'Head', 'Right arm', 'Left arm', 'Right forearm', 'Left forearm'] +flip_left_right = true +correct_segment_angles_with_floor_angle = true + +[post-processing] +interpolate = true +interp_gap_smaller_than = 10 +fill_large_gaps_with = 'last_value' +filter = true +show_graphs = true +filter_type = 'butterworth' + [post-processing.butterworth] + order = 4 + cut_off_frequency = 6 + [post-processing.gaussian] + sigma_kernel = 1 + [post-processing.loess] + nb_values_used = 5 + [post-processing.median] + kernel_size = 3 + +[kinematics] +do_ik = true +use_augmentation = true +use_contacts_muscles = true +participant_mass = [67.0, 55.0] +right_left_symmetry = true +default_height = 1.7 +fastest_frames_to_remove_percent = 0.1 +close_to_zero_speed_px = 50 +close_to_zero_speed_m = 0.2 +large_hip_knee_angles = 45 +trimmed_extrema_percent = 0.5 +remove_individual_scaling_setup = true +remove_individual_ik_setup = true + +[logging] +use_custom_logging = false + +[pose.CUSTOM] +name = "Hip" +id = 19 + [[pose.CUSTOM.children]] + name = "RHip" + id = 12 + [[pose.CUSTOM.children.children]] + name = "RKnee" + id = 14 + [[pose.CUSTOM.children.children.children]] + name = "RAnkle" + id = 16 + [[pose.CUSTOM.children.children.children.children]] + name = "RBigToe" + id = 21 + [[pose.CUSTOM.children.children.children.children.children]] + name = "RSmallToe" + id = 23 + [[pose.CUSTOM.children.children.children.children]] + name = "RHeel" + id = 25 + [[pose.CUSTOM.children]] + name = "LHip" + id = 11 + [[pose.CUSTOM.children.children]] + name = "LKnee" + id = 13 + [[pose.CUSTOM.children.children.children]] + name = "LAnkle" + id = 15 + [[pose.CUSTOM.children.children.children.children]] + name = "LBigToe" + id = 20 + [[pose.CUSTOM.children.children.children.children.children]] + name = "LSmallToe" + id = 22 + [[pose.CUSTOM.children.children.children.children]] + name = "LHeel" + id = 24 + [[pose.CUSTOM.children]] + name = "Neck" + id = 18 + [[pose.CUSTOM.children.children]] + name = "Head" + id = 17 + [[pose.CUSTOM.children.children.children]] + name = "Nose" + id = 0 + [[pose.CUSTOM.children.children]] + name = "RShoulder" + id = 6 + [[pose.CUSTOM.children.children.children]] + name = "RElbow" + id = 8 + [[pose.CUSTOM.children.children.children.children]] + name = "RWrist" + id = 10 + [[pose.CUSTOM.children.children]] + name = "LShoulder" + id = 5 + [[pose.CUSTOM.children.children.children]] + name = "LElbow" + id = 7 + [[pose.CUSTOM.children.children.children.children]] + name = "LWrist" + id = 9 \ No newline at end of file diff --git a/GUI/templates/3d_config_template.toml b/GUI/templates/3d_config_template.toml new file mode 100644 index 00000000..811e2923 --- /dev/null +++ b/GUI/templates/3d_config_template.toml @@ -0,0 +1,207 @@ +############################################################################### +## PROJECT PARAMETERS ## +############################################################################### + +[project] +multi_person = true +participant_height = 'auto' +participant_mass = 70.0 + +frame_rate = 'auto' +frame_range = [] +exclude_from_batch = [] + +[pose] +vid_img_extension = 'avi' +pose_model = 'Body_with_feet' +mode = 'balanced' +det_frequency = 4 +device = 'auto' +backend = 'auto' +tracking_mode = 'sports2d' +deepsort_params = """{'max_age':30, 'n_init':3, 
'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8}""" +display_detection = true +overwrite_pose = true +save_video = 'none' +output_format = 'openpose' + +[synchronization] +display_sync_plots = true +keypoints_to_consider = 'all' +approx_time_maxspeed = 'auto' +time_range_around_maxspeed = 2.0 +likelihood_threshold = 0.4 +filter_cutoff = 6 +filter_order = 4 + +[calibration] +calibration_type = 'convert' + + [calibration.convert] + convert_from = 'qualisys' + [calibration.convert.caliscope] + [calibration.convert.qualisys] + binning_factor = 1 + [calibration.convert.optitrack] + [calibration.convert.vicon] + [calibration.convert.opencap] + [calibration.convert.easymocap] + [calibration.convert.biocv] + [calibration.convert.anipose] + [calibration.convert.freemocap] + + [calibration.calculate] + [calibration.calculate.intrinsics] + overwrite_intrinsics = false + show_detection_intrinsics = true + intrinsics_extension = 'png' + extract_every_N_sec = 1 + intrinsics_corners_nb = [3,5] + intrinsics_square_size = 34 + + [calibration.calculate.extrinsics] + calculate_extrinsics = true + extrinsics_method = 'scene' + moving_cameras = false + + [calibration.calculate.extrinsics.board] + show_reprojection_error = true + extrinsics_extension = 'mp4' + extrinsics_corners_nb = [4,7] + extrinsics_square_size = 60 + + [calibration.calculate.extrinsics.scene] + show_reprojection_error = true + extrinsics_extension = 'mp4' + object_coords_3d = [[0.0, 0.0, 0.0], [-0.50, 0.0, 0.0], [-1.0, 0.0, 0.0], [-1.5, 0.0, 0.0], [0.00, 0.50, 0.0], [-0.50, 0.50, 0.0], [-1.0, 0.50, 0.0], [-1.50, 0.50, 0.0]] + + [calibration.calculate.extrinsics.keypoints] + +[personAssociation] + likelihood_threshold_association = 0.3 + + [personAssociation.single_person] + reproj_error_threshold_association = 20 + tracked_keypoint = 'Neck' + + [personAssociation.multi_person] + reconstruction_error_threshold = 0.1 + min_affinity = 0.2 + +[triangulation] +reproj_error_threshold_triangulation = 15 +likelihood_threshold_triangulation= 0.3 +min_cameras_for_triangulation = 2 +interpolation = 'linear' +interp_if_gap_smaller_than = 10 +fill_large_gaps_with = 'last_value' +show_interp_indices = true +handle_LR_swap = false +undistort_points = false +make_c3d = true + +[filtering] +type = 'butterworth' +display_figures = true +make_c3d = true + + [filtering.butterworth] + order = 4 + cut_off_frequency = 6 + [filtering.kalman] + trust_ratio = 100 + smooth = true + [filtering.butterworth_on_speed] + order = 4 + cut_off_frequency = 10 + [filtering.gaussian] + sigma_kernel = 2 + [filtering.LOESS] + nb_values_used = 30 + [filtering.median] + kernel_size = 9 + +[markerAugmentation] +make_c3d = true + +[kinematics] +use_augmentation = true +use_contacts_muscles = true +right_left_symmetry = true +default_height = 1.7 +remove_individual_scaling_setup = true +remove_individual_IK_setup = true +fastest_frames_to_remove_percent = 0.1 +close_to_zero_speed_m = 0.2 +large_hip_knee_angles = 45 +trimmed_extrema_percent = 0.5 + +[logging] +use_custom_logging = false + +[pose.CUSTOM] +name = "Hip" +id = 19 + [[pose.CUSTOM.children]] + name = "RHip" + id = 12 + [[pose.CUSTOM.children.children]] + name = "RKnee" + id = 14 + [[pose.CUSTOM.children.children.children]] + name = "RAnkle" + id = 16 + [[pose.CUSTOM.children.children.children.children]] + name = "RBigToe" + id = 21 + [[pose.CUSTOM.children.children.children.children.children]] + name = "RSmallToe" + id = 23 + [[pose.CUSTOM.children.children.children.children]] + name 
= "RHeel" + id = 25 + [[pose.CUSTOM.children]] + name = "LHip" + id = 11 + [[pose.CUSTOM.children.children]] + name = "LKnee" + id = 13 + [[pose.CUSTOM.children.children.children]] + name = "LAnkle" + id = 15 + [[pose.CUSTOM.children.children.children.children]] + name = "LBigToe" + id = 20 + [[pose.CUSTOM.children.children.children.children.children]] + name = "LSmallToe" + id = 22 + [[pose.CUSTOM.children.children.children.children]] + name = "LHeel" + id = 24 + [[pose.CUSTOM.children]] + name = "Neck" + id = 18 + [[pose.CUSTOM.children.children]] + name = "Head" + id = 17 + [[pose.CUSTOM.children.children.children]] + name = "Nose" + id = 0 + [[pose.CUSTOM.children.children]] + name = "RShoulder" + id = 6 + [[pose.CUSTOM.children.children.children]] + name = "RElbow" + id = 8 + [[pose.CUSTOM.children.children.children.children]] + name = "RWrist" + id = 10 + [[pose.CUSTOM.children.children]] + name = "LShoulder" + id = 5 + [[pose.CUSTOM.children.children.children]] + name = "LElbow" + id = 7 + [[pose.CUSTOM.children.children.children.children]] + name = "LWrist" + id = 9 \ No newline at end of file diff --git a/GUI/utils.py b/GUI/utils.py new file mode 100644 index 00000000..8954e323 --- /dev/null +++ b/GUI/utils.py @@ -0,0 +1,292 @@ +import os +import cv2 +import numpy as np +from PIL import Image +import matplotlib.pyplot as plt +from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg + +def generate_checkerboard_image(width, height, square_size): + """ + Generate a checkerboard image for display. + + Args: + width: Number of internal corners in the checkerboard width + height: Number of internal corners in the checkerboard height + square_size: Size of each square in pixels + + Returns: + PIL Image of the checkerboard + """ + # Add 1 to include outer corners + num_rows = height + 1 + num_cols = width + 1 + square_size = int(square_size) + + # Create checkerboard pattern + pattern = np.zeros((num_rows * square_size, num_cols * square_size), dtype=np.uint8) + for row in range(num_rows): + for col in range(num_cols): + if (row + col) % 2 == 0: + pattern[row*square_size:(row+1)*square_size, + col*square_size:(col+1)*square_size] = 255 + + # Convert to PIL Image + return Image.fromarray(pattern) + +def extract_frames_from_video(video_path, output_dir, time_interval): + """ + Extract frames from a video at regular intervals. 
+ + Args: + video_path: Path to the video file + output_dir: Directory to save the extracted frames + time_interval: Time interval between frames in seconds + + Returns: + List of paths to the extracted frames + """ + # Ensure output directory exists + os.makedirs(output_dir, exist_ok=True) + + # Open the video + cap = cv2.VideoCapture(video_path) + if not cap.isOpened(): + raise ValueError(f"Could not open video file: {video_path}") + + fps = cap.get(cv2.CAP_PROP_FPS) + if fps <= 0: + fps = 30 # Default to 30 fps if detection fails + + # Calculate frame interval + frame_interval = int(fps * time_interval) + + # Extract frames + extracted_frames = [] + frame_count = 0 + + while True: + ret, frame = cap.read() + if not ret: + break + + if frame_count % frame_interval == 0: + # Save frame as image + frame_name = f"{os.path.splitext(os.path.basename(video_path))[0]}_frame{frame_count}.png" + frame_path = os.path.join(output_dir, frame_name) + cv2.imwrite(frame_path, frame) + extracted_frames.append(frame_path) + + frame_count += 1 + + cap.release() + return extracted_frames + +def create_point_selection_canvas(parent, image_path, points_callback, max_points=8): + """ + Create a canvas for selecting points on an image. + + Args: + parent: Parent widget to attach the canvas to + image_path: Path to the image + points_callback: Function to call with selected points + max_points: Maximum number of points to select + + Returns: + Canvas widget + """ + # Load the image + if image_path.lower().endswith(('.mp4', '.avi', '.mov')): + cap = cv2.VideoCapture(image_path) + ret, frame = cap.read() + cap.release() + if not ret: + raise ValueError(f"Could not read video frame from: {image_path}") + image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) + else: + image = plt.imread(image_path) + + # Create figure and axes + fig, ax = plt.subplots(figsize=(10, 8)) + ax.imshow(image) + ax.set_title(f"Click to select {max_points} points") + + # Store selected points + points = [] + point_markers = [] + + def onclick(event): + if len(points) < max_points: + x, y = event.xdata, event.ydata + if x is not None and y is not None: + points.append((x, y)) + # Plot point in red + point = ax.plot(x, y, 'ro')[0] + point_markers.append(point) + # Add point number + ax.text(x + 5, y + 5, str(len(points)), color='white') + fig.canvas.draw() + + if len(points) == max_points: + # Call the callback with the selected points + points_callback(points) + + # Create canvas and connect click event + canvas = FigureCanvasTkAgg(fig, master=parent) + canvas.get_tk_widget().pack(fill='both', expand=True) + canvas.mpl_connect('button_press_event', onclick) + + return canvas + +def activate_pose2sim(participant_path, method='cmd', skip_pose_estimation=False, skip_synchronization=False, analysis_mode='3d'): + """ + Create scripts to activate Pose2Sim or Sports2D with the specified method. 
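+    Two files are written to participant_path: a Python runner script
+    (run_pose2sim.py for 3D, run_sports2d.py for 2D) and a launch script
+    ('.cmd', '.bat', or '.ps1' depending on method) that activates the matching
+    conda environment (Pose2Sim or Sports2D) and then runs the analysis.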
+ + Args: + participant_path: Path to the participant directory + method: Method to use ('cmd', 'conda', or 'powershell') + skip_pose_estimation: Whether to skip pose estimation + skip_synchronization: Whether to skip synchronization + analysis_mode: '2d' or '3d' + + Returns: + Path to the created script + """ + if analysis_mode == '3d': + # Generate Python script content for Pose2Sim (3D) + python_script = f""" +from Pose2Sim import Pose2Sim +Pose2Sim.runAll(do_calibration=True, + do_poseEstimation={not skip_pose_estimation}, + do_synchronization={not skip_synchronization}, + do_personAssociation=True, + do_triangulation=True, + do_filtering=True, + do_markerAugmentation=True, + do_kinematics=True) +""" + script_path = os.path.join(participant_path, 'run_pose2sim.py') + else: + # Generate Python script content for Sports2D (2D) + python_script = """ +from Sports2D import Sports2D +Sports2D.process('Config_demo.toml') +""" + script_path = os.path.join(participant_path, 'run_sports2d.py') + + # Save the Python script + with open(script_path, 'w', encoding='utf-8') as f: + f.write(python_script) + + # Create the appropriate conda environment name + conda_env = "Sports2D" if analysis_mode == '2d' else "Pose2Sim" + + # Generate launch script based on method + if method == 'cmd': + if analysis_mode == '3d': + launch_script = f""" +@echo off +setlocal EnableDelayedExpansion + +REM Activate Conda environment +call conda activate {conda_env} + +REM Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +REM Launch the Python script and keep the command prompt open +python {os.path.basename(script_path)} + +REM Pause the command prompt to prevent it from closing +pause + +endlocal +""" + else: # 2D mode + launch_script = f""" +@echo off +setlocal EnableDelayedExpansion + +REM Activate Conda environment +call conda activate {conda_env} + +REM Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +REM Launch IPython and execute Sports2D command +ipython -c "from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')" + +REM Pause the command prompt to prevent it from closing +pause + +endlocal +""" + script_ext = 'cmd' + + elif method == 'conda': + if analysis_mode == '3d': + launch_script = f""" +@echo off +setlocal EnableDelayedExpansion + +REM Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +REM Launch the Python script +call conda activate {conda_env} && python {os.path.basename(script_path)} + +REM Pause to keep the window open +pause + +endlocal +""" + else: # 2D mode + launch_script = f""" +@echo off +setlocal EnableDelayedExpansion + +REM Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +REM Launch IPython and execute Sports2D command +call conda activate {conda_env} && ipython -c "from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')" + +REM Pause to keep the window open +pause + +endlocal +""" + script_ext = 'bat' + + else: # powershell + if analysis_mode == '3d': + launch_script = f""" +# Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +# Activate Conda environment and run script +conda activate {conda_env}; python {os.path.basename(script_path)} + +# Pause to keep the window open +Write-Host "Press any key to continue..." 
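+# ReadKey("NoEcho,IncludeKeyDown") waits for a single keypress without echoing it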
+$null = $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") +""" + else: # 2D mode + launch_script = f""" +# Change to the specified directory +cd "{os.path.abspath(participant_path)}" + +# Activate Conda environment and run IPython command +conda activate {conda_env}; ipython -c "from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')" + +# Pause to keep the window open +Write-Host "Press any key to continue..." +$null = $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") +""" + script_ext = 'ps1' + + # Save the launch script + launch_script_path = os.path.join(participant_path, f'activate_{conda_env.lower()}_{method}.{script_ext}') + with open(launch_script_path, 'w', encoding='utf-8') as f: + f.write(launch_script) + + return launch_script_path \ No newline at end of file diff --git a/Pose2Sim/Demo_Batch/Config.toml b/Pose2Sim/Demo_Batch/Config.toml index 723f6b5d..41689d0c 100644 --- a/Pose2Sim/Demo_Batch/Config.toml +++ b/Pose2Sim/Demo_Batch/Config.toml @@ -284,7 +284,7 @@ use_custom_logging = false # if integrated in an API that already has logging # from anytree import Node, RenderTree # for pre, _, node in RenderTree(model): # print(f'{pre}{node.name} id={node.id}') -[pose.CUSTOM] +[[pose.CUSTOM]] name = "Hip" id = 19 [[pose.CUSTOM.children]] diff --git a/Pose2Sim/Demo_Batch/Trial_1/Config.toml b/Pose2Sim/Demo_Batch/Trial_1/Config.toml index cbd3a59b..b0e2a83c 100644 --- a/Pose2Sim/Demo_Batch/Trial_1/Config.toml +++ b/Pose2Sim/Demo_Batch/Trial_1/Config.toml @@ -285,7 +285,7 @@ # # from anytree import Node, RenderTree # # for pre, _, node in RenderTree(model): # # print(f'{pre}{node.name} id={node.id}') -# [pose.CUSTOM] +# [[pose.CUSTOM]] # name = "Hip" # id = 19 # [[pose.CUSTOM.children]] diff --git a/Pose2Sim/Demo_Batch/Trial_2/Config.toml b/Pose2Sim/Demo_Batch/Trial_2/Config.toml index f1f95e82..e91a7ef7 100644 --- a/Pose2Sim/Demo_Batch/Trial_2/Config.toml +++ b/Pose2Sim/Demo_Batch/Trial_2/Config.toml @@ -283,7 +283,7 @@ keypoints_to_consider = ['RWrist'] # 'all' if all points should be considered, f # # from anytree import Node, RenderTree # # for pre, _, node in RenderTree(model): # # print(f'{pre}{node.name} id={node.id}') -# [pose.CUSTOM] +# [[pose.CUSTOM]] # name = "Hip" # id = 19 # [[pose.CUSTOM.children]] diff --git a/Pose2Sim/Demo_MultiPerson/Config.toml b/Pose2Sim/Demo_MultiPerson/Config.toml index 1665d798..07743c3e 100644 --- a/Pose2Sim/Demo_MultiPerson/Config.toml +++ b/Pose2Sim/Demo_MultiPerson/Config.toml @@ -284,7 +284,7 @@ use_custom_logging = false # if integrated in an API that already has logging # from anytree import Node, RenderTree # for pre, _, node in RenderTree(model): # print(f'{pre}{node.name} id={node.id}') -[pose.CUSTOM] +[[pose.CUSTOM]] name = "Hip" id = 19 [[pose.CUSTOM.children]] diff --git a/Pose2Sim/Demo_SinglePerson/Config.toml b/Pose2Sim/Demo_SinglePerson/Config.toml index 4f9686c5..99c3e773 100644 --- a/Pose2Sim/Demo_SinglePerson/Config.toml +++ b/Pose2Sim/Demo_SinglePerson/Config.toml @@ -24,14 +24,13 @@ participant_mass = 70.0 # float (eg 70.0), or list of floats (eg [70 frame_rate = 'auto' # fps # int or 'auto'. If 'auto', finds from video (or defaults to 60 fps if you work with images) frame_range = 'auto' # 'auto', 'all', or range like [10,300]. 
If 'auto', will trim around the frames with low reprojection error (useful if a person enters/exits the scene) - -## If cameras are not synchronized, designates the frame range of the camera with the shortest recording time -## N.B.: If you want a time range instead, use frame_range = time_range * frame_rate -## For example if you want to analyze from 0.1 to 2 seconds with a 60 fps frame rate, -## frame_range = [0.1, 2.0]*frame_rate = [6, 120] + # If cameras are not synchronized, designates the frame range of the camera with the shortest recording time + # N.B.: If you want a time range instead, use frame_range = time_range * frame_rate + # For example if you want to analyze from 0.1 to 2 seconds with a 60 fps frame rate, + # frame_range = [0.1, 2.0]*frame_rate = [6, 120] exclude_from_batch = [] # List of trials to be excluded from batch analysis, ['', 'etc']. -# e.g. ['S00_P00_Participant/S00_P00_T00_StaticTrial', 'S00_P00_Participant/S00_P00_T01_BalancingTrial'] + # e.g. ['S00_P00_Participant/S00_P00_T00_StaticTrial', 'S00_P00_Participant/S00_P00_T01_BalancingTrial'] [pose] @@ -47,7 +46,6 @@ pose_model = 'Body_with_feet' #With RTMLib: # - Animal (ANIMAL2D_17) # /!\ Only RTMPose is natively embeded in Pose2Sim. For all other pose estimation methods, you will have to run them yourself, and then refer to the documentation to convert the output files if needed # /!\ For Face and Animal, use mode="""{dictionary}""", and find the corresponding .onnx model there https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose - #With MMPose: HALPE_26, COCO_133, COCO_17, CUSTOM. See CUSTOM example at the end of the file #With openpose: BODY_25B, BODY_25, BODY_135, COCO, MPII #With mediapipe: BLAZEPOSE @@ -91,8 +89,8 @@ deepsort_params = """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosi handle_LR_swap = false # Might be useful if cameras film from the sagittal plane and the right and left sides are hard to differenciate to your eye. 
undistort_points = false # Better if distorted image (parallel lines curvy on the edge or at least one param > 10^-2), but unnecessary (and slightly slower) if distortions are low -display_detection = true -overwrite_pose = false # set to false if you don't want to recalculate pose estimation when it has already been done +display_detection = true # Real-time display of the videos with pose estimation +overwrite_pose = false # Set to false if you don't want to recalculate pose estimation when it has already been done save_video = 'to_video' # 'to_video' or 'to_images', 'none', or ['to_video', 'to_images'] output_format = 'openpose' # 'openpose', 'mmpose', 'deeplabcut', 'none' or a list of them # /!\ only 'openpose' is supported for now @@ -178,6 +176,7 @@ calibration_type = 'convert' # 'convert' or 'calculate' likelihood_threshold_association = 0.3 # should be in single_person section [personAssociation.single_person] + likelihood_threshold_association = 0.3 reproj_error_threshold_association = 20 # px tracked_keypoint = 'Neck' # If the neck is not detected by the pose_model, check skeleton.py # and choose a stable point for tracking the person of interest (e.g., 'right_shoulder' or 'RShoulder') @@ -287,7 +286,7 @@ use_custom_logging = false # if integrated in an API that already has logging # from anytree import Node, RenderTree # for pre, _, node in RenderTree(model): # print(f'{pre}{node.name} id={node.id}') -[pose.CUSTOM] +[[pose.CUSTOM]] name = "Hip" id = 19 [[pose.CUSTOM.children]] diff --git a/Pose2Sim/Demo_SinglePerson/calibration/Calib_scene.toml b/Pose2Sim/Demo_SinglePerson/calibration/Calib_scene.toml deleted file mode 100644 index 4f2bec2b..00000000 --- a/Pose2Sim/Demo_SinglePerson/calibration/Calib_scene.toml +++ /dev/null @@ -1,39 +0,0 @@ -[int_cam01_img] -name = "int_cam01_img" -size = [ 1088.0, 1920.0] -matrix = [ [ 1671.5012042021037, 0.0, 564.6403499852015], [ 0.0, 1671.3866928599075, 933.0074897877159], [ 0.0, 0.0, 1.0]] -distortions = [ -0.05310342256182765, 0.1510044676505964, -0.00011989435671705446, 0.0006503403122568441] -rotation = [ 1.678784562972848, 1.031972654261743, -0.3910235872501538] -translation = [ 0.2609476905995074, 0.9641586618892432, 2.932963441842366] -fisheye = false - -[int_cam02_img] -name = "int_cam02_img" -size = [ 1088.0, 1920.0] -matrix = [ [ 1675.7080607483433, 0.0, 555.7798754387691], [ 0.0, 1675.4079150009259, 937.7259328417255], [ 0.0, 0.0, 1.0]] -distortions = [ -0.05205993968544501, 0.14917865330192143, 1.6726989091268645e-05, -0.0006545682909903758] -rotation = [ 1.3578601662680128, 1.5796573692170839, -1.1822007045082419] -translation = [ -0.16526980322189794, 0.8084087327474073, 3.2044946004620805] -fisheye = false - -[int_cam03_img] -name = "int_cam03_img" -size = [ 1088.0, 1920.0] -matrix = [ [ 1678.7524092974595, 0.0, 556.0712835099544], [ 0.0, 1678.1875775424717, 928.197809795709], [ 0.0, 0.0, 1.0]] -distortions = [ -0.031226100230109667, 0.009201607173964147, -0.0008794344390757962, -2.031259635506541e-05] -rotation = [ 0.7860494371239335, -2.186387571217796, 1.4336044493611793] -translation = [ -0.8504307571967306, 0.41363741965310824, 4.297890768705867] -fisheye = false - -[int_cam04_img] -name = "int_cam04_img" -size = [ 1088.0, 1920.0] -matrix = [ [ 1690.8621773209618, 0.0, 535.7642414457736], [ 0.0, 1689.2410845215763, 933.0004533663628], [ 0.0, 0.0, 1.0]] -distortions = [ -0.04718570631606589, 0.10068388229636667, 0.002421622328944551, -0.0010513343762849723] -rotation = [ 1.4051886526650297, -1.3909627084708258, 
0.4436653908561921]
-translation = [ 0.5111616284766533, 0.11254959801153527, 4.410359951259428]
-fisheye = false
-
-[metadata]
-adjusted = false
-error = 0.0
diff --git a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam01_ext.png b/Pose2Sim/Demo_SinglePerson/calibration/calib_cam01_ext.png
deleted file mode 100644
index 4bc0da59..00000000
Binary files a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam01_ext.png and /dev/null differ
diff --git a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam02_ext.png b/Pose2Sim/Demo_SinglePerson/calibration/calib_cam02_ext.png
deleted file mode 100644
index 653e941e..00000000
Binary files a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam02_ext.png and /dev/null differ
diff --git a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam03_ext.png b/Pose2Sim/Demo_SinglePerson/calibration/calib_cam03_ext.png
deleted file mode 100644
index d2343082..00000000
Binary files a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam03_ext.png and /dev/null differ
diff --git a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam04_ext.png b/Pose2Sim/Demo_SinglePerson/calibration/calib_cam04_ext.png
deleted file mode 100644
index d9f60432..00000000
Binary files a/Pose2Sim/Demo_SinglePerson/calibration/calib_cam04_ext.png and /dev/null differ
diff --git a/Pose2Sim/Utilities/trc_Zup_to_Yup.py b/Pose2Sim/Utilities/trc_Zup_to_Yup.py
index 92946311..8718526b 100644
--- a/Pose2Sim/Utilities/trc_Zup_to_Yup.py
+++ b/Pose2Sim/Utilities/trc_Zup_to_Yup.py
@@ -62,7 +62,10 @@ def trc_Zup_to_Yup_func(*args):
         trc_yup_path = args[0]['output']
     except:
         trc_path = args[0] # invoked as a function
-        trc_yup_path = trc_path.replace('.trc', '_Yup.trc')
+        try:
+            trc_yup_path = args[1]
+        except:
+            trc_yup_path = trc_path.replace('.trc', '_Yup.trc')
 
     # header
     with open(trc_path, 'r') as trc_file:
diff --git a/Pose2Sim/Utilities/trc_rotate.py b/Pose2Sim/Utilities/trc_rotate.py
new file mode 100644
index 00000000..5f6c4647
--- /dev/null
+++ b/Pose2Sim/Utilities/trc_rotate.py
@@ -0,0 +1,162 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+
+'''
+    ##################################################
+    ## Rotate trc coordinates by 90°                ##
+    ##################################################
+
+    Rotate trc coordinates by 90° around an axis.
+    You can either choose an axis to rotate around,
+    or use one of the predefined conversions from one axis-up convention to another.
+
+    90° rotation around:
+    - "X" corresponds to: yup_to_zup or zup_to_ydown
+    - "-X" corresponds to: yup_to_zdown or zup_to_yup
+    - "Y" corresponds to: zup_to_xup or xup_to_zdown
+    - "-Y" corresponds to: zup_to_xdown or xup_to_zup
+    - "Z" corresponds to: yup_to_xdown
+    - "-Z" corresponds to: yup_to_xup
+
+    The output file argument is optional. If not specified,
+    '_X', '_-X', '_Y', '_-Y', '_Z' or '_-Z' is appended to the input filename. 
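+    Rotation is applied to every (X, Y, Z) marker triplet; the 5-line TRC
+    header and the Frame#/Time columns are passed through unchanged.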
+
+    Usage:
+    from Pose2Sim.Utilities import trc_rotate; trc_rotate.trc_rotate_func(input=r'', output=r'', rotate90='-X')
+
+    trc_rotate -i input_trc_file # Will rotate around -X by default (Y-up -> Z-up)
+    trc_rotate -i input_trc_file -o output_trc_file
+
+    trc_rotate -i input_trc_file --zup_to_yup
+    trc_rotate -i input_trc_file --rotate90=-X # Equivalently
+'''
+
+
+## INIT
+import pandas as pd
+import numpy as np
+import argparse
+
+
+## AUTHORSHIP INFORMATION
+__author__ = "David Pagnon"
+__copyright__ = "Copyright 2021, Pose2Sim"
+__credits__ = ["David Pagnon"]
+__license__ = "BSD 3-Clause License"
+from importlib.metadata import version
+__version__ = version('pose2sim')
+__maintainer__ = "David Pagnon"
+__email__ = "contact@david-pagnon.com"
+__status__ = "Development"
+
+
+## FUNCTIONS
+def trc_rotate_func(**args):
+    '''
+    Rotate trc coordinates by 90° around an axis.
+    You can either choose an axis to rotate around,
+    or use one of the predefined conversions from one axis-up convention to another.
+
+    90° rotation around:
+    - "X" corresponds to: yup_to_zup or zup_to_ydown
+    - "-X" corresponds to: yup_to_zdown or zup_to_yup
+    - "Y" corresponds to: zup_to_xup or xup_to_zdown
+    - "-Y" corresponds to: zup_to_xdown or xup_to_zup
+    - "Z" corresponds to: yup_to_xdown
+    - "-Z" corresponds to: yup_to_xup
+
+    The output file argument is optional. If not specified,
+    '_X', '_-X', '_Y', '_-Y', '_Z' or '_-Z' is appended to the input filename.
+
+    Usage:
+    from Pose2Sim.Utilities import trc_rotate; trc_rotate.trc_rotate_func(input=r'', output=r'', rotate90='-X')
+
+    trc_rotate -i input_trc_file # Will rotate around -X by default (Y-up -> Z-up)
+    trc_rotate -i input_trc_file -o output_trc_file
+
+    trc_rotate -i input_trc_file --zup_to_yup
+    trc_rotate -i input_trc_file --rotate90=-X # Equivalently
+    '''
+
+    trc_path = args.get('input')
+    output_trc_path = args.get('output')
+    rotate90 = args.get('rotate90')
+    if rotate90 is None:
+        rotate90 = "-X"
+    if output_trc_path is None:
+        output_trc_path = trc_path.replace('.trc', f'_{rotate90}.trc')
+
+    # header
+    with open(trc_path, 'r') as trc_file:
+        header = [next(trc_file) for line in range(5)]
+
+    # data
+    trc_df = pd.read_csv(trc_path, sep="\t", skiprows=4, encoding='utf-8')
+    frames_col, time_col = trc_df.iloc[:,0], trc_df.iloc[:,1]
+    Q_coord = trc_df.drop(trc_df.columns[[0, 1]], axis=1)
+
+    # rotate coordinates
+    cols = Q_coord.values.reshape(-1,3)
+    if rotate90 == "X": # X->X, Y->-Z, Z->Y
+        cols = np.stack([cols[:,0],-cols[:,2],cols[:,1]], axis=-1)
+    elif rotate90 == "-X": # X->X, Y->Z, Z->-Y
+        cols = np.stack([cols[:,0],cols[:,2],-cols[:,1]], axis=-1)
+    elif rotate90 == "Y": # X->Z, Y->Y, Z->-X
+        cols = np.stack([cols[:,2],cols[:,1],-cols[:,0]], axis=-1)
+    elif rotate90 == "-Y": # X->-Z, Y->Y, Z->X
+        cols = np.stack([-cols[:,2],cols[:,1],cols[:,0]], axis=-1)
+    elif rotate90 == "Z": # X->-Y, Y->X, Z->Z
+        cols = np.stack([-cols[:,1],cols[:,0],cols[:,2]], axis=-1)
+    elif rotate90 == "-Z": # X->Y, Y->-X, Z->Z
+        cols = np.stack([cols[:,1],-cols[:,0],cols[:,2]], axis=-1)
+    Q_coord = pd.DataFrame(cols.reshape(Q_coord.values.shape[0],-1), columns=Q_coord.columns, index=Q_coord.index)
+
+    # write file
+    with open(output_trc_path, 'w') as trc_o:
+        [trc_o.write(line) for line in header]
+        Q_coord.insert(0, 'Frame#', frames_col)
+        Q_coord.insert(1, 'Time', time_col)
+        Q_coord.to_csv(trc_o, sep='\t', index=False, header=None, lineterminator='\n')
+
+    print(f"trc file rotated by 90° around {rotate90}. 
Saved to {output_trc_path}")
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Rotate trc coordinates by 90°")
+
+    parser.add_argument('-i', '--input', required=True, help='trc input file')
+    parser.add_argument('-o', '--output', required=False, help='trc output file')
+
+    group = parser.add_mutually_exclusive_group(required=False)
+    group.add_argument("--rotate90",
+            choices=["X","-X","Y","-Y","Z","-Z"], default="-X",
+            help="Axis and direction for a 90-degree rotation")
+    group.add_argument("--yup_to_zup", action="store_const", const="X", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around +X")
+    group.add_argument("--zup_to_ydown", action="store_const", const="X", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around +X")
+    group.add_argument("--yup_to_zdown", action="store_const", const="-X", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around -X")
+    group.add_argument("--zup_to_yup", action="store_const", const="-X", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around -X")
+    group.add_argument("--zup_to_xup", action="store_const", const="Y", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around +Y")
+    group.add_argument("--xup_to_zdown", action="store_const", const="Y", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around +Y")
+    group.add_argument("--zup_to_xdown", action="store_const", const="-Y", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around -Y")
+    group.add_argument("--xup_to_zup", action="store_const", const="-Y", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around -Y")
+    group.add_argument("--yup_to_xdown", action="store_const", const="Z", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around +Z")
+    group.add_argument("--yup_to_xup", action="store_const", const="-Z", dest="rotate90",
+            help="Corresponds to a 90-degree rotation around -Z")
+
+    args = vars(parser.parse_args())
+
+    trc_rotate_func(**args)
+
+
+if __name__ == '__main__':
+    main()
diff --git a/Pose2Sim/Utilities/trc_scale.py b/Pose2Sim/Utilities/trc_scale.py
new file mode 100644
index 00000000..f9a606a5
--- /dev/null
+++ b/Pose2Sim/Utilities/trc_scale.py
@@ -0,0 +1,95 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+
+'''
+    ##################################################
+    ## Scale trc coordinates                        ##
+    ##################################################
+
+    Scale trc coordinates by a desired factor.
+
+    Usage:
+    from Pose2Sim.Utilities import trc_scale; trc_scale.trc_scale_func(r'', 0.001, r'')
+    trc_scale -i input_trc_file -s 0.001
+    trc_scale -i input_trc_file -s 0.001 -o output_trc_file
+'''
+
+
+## INIT
+import pandas as pd
+import numpy as np
+import argparse
+
+
+## AUTHORSHIP INFORMATION
+__author__ = "David Pagnon"
+__copyright__ = "Copyright 2021, Pose2Sim"
+__credits__ = ["David Pagnon"]
+__license__ = "BSD 3-Clause License"
+from importlib.metadata import version
+__version__ = version('pose2sim')
+__maintainer__ = "David Pagnon"
+__email__ = "contact@david-pagnon.com"
+__status__ = "Development"
+
+
+## FUNCTIONS
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('-i', '--input', required=True, help='trc input file')
+    parser.add_argument('-o', '--output', required=False, help='trc output file')
+    parser.add_argument('-s', '--scale_factor', required=True, type=float, help='scaling factor to apply to the trc coordinates. 
mm to m would be 0.001')
+    args = vars(parser.parse_args())
+
+    trc_scale_func(args)
+
+
+def trc_scale_func(*args):
+    '''
+    Scale trc coordinates by a desired factor.
+
+    Usage:
+    from Pose2Sim.Utilities import trc_scale; trc_scale.trc_scale_func(r'', 0.001, r'')
+    trc_scale -i input_trc_file -s 0.001
+    trc_scale -i input_trc_file -s 0.001 -o output_trc_file
+    '''
+
+    try:
+        trc_path = args[0]['input'] # invoked with argparse
+        scale_factor = args[0]['scale_factor']
+        if args[0]['output'] is None:
+            trc_scaled_path = trc_path.replace('.trc', '_scaled.trc')
+        else:
+            trc_scaled_path = args[0]['output']
+    except:
+        trc_path = args[0] # invoked as a function
+        scale_factor = args[1]
+        try:
+            trc_scaled_path = args[2]
+        except:
+            trc_scaled_path = trc_path.replace('.trc', '_scaled.trc')
+
+    # header
+    with open(trc_path, 'r') as trc_file:
+        header = [next(trc_file) for line in range(5)]
+
+    # data
+    trc_df = pd.read_csv(trc_path, sep="\t", skiprows=4)
+    frames_col, time_col = trc_df.iloc[:,0], trc_df.iloc[:,1]
+    Q_coord = trc_df.drop(trc_df.columns[[0, 1]], axis=1)
+
+    # scaling
+    Q_scaled = Q_coord * scale_factor
+
+    # write file
+    with open(trc_scaled_path, 'w') as trc_o:
+        [trc_o.write(line) for line in header]
+        Q_scaled.insert(0, 'Frame#', frames_col)
+        Q_scaled.insert(1, 'Time', time_col)
+        Q_scaled.to_csv(trc_o, sep='\t', index=False, header=None, lineterminator='\n')
+
+    print(f"trc file scaled with a {scale_factor} factor. Saved to {trc_scaled_path}")
+
+if __name__ == '__main__':
+    main()
diff --git a/Pose2Sim/calibration.py b/Pose2Sim/calibration.py
index 0a1b6fdc..2265f6e8 100644
--- a/Pose2Sim/calibration.py
+++ b/Pose2Sim/calibration.py
@@ -132,7 +132,7 @@ def read_qca(qca_path, binning_factor):
         ret += [float(tag.attrib.get('avg-residual'))]
         C += [tag.attrib.get('serial')]
         res += [int(tag.attrib.get('video_resolution')[:-1]) if tag.attrib.get('video_resolution') not in (None, "N/A") else 1080]
-        if tag.attrib.get('model') in ('Miqus Video', 'Miqus Video UnderWater', 'none'):
+        if any(model in tag.attrib.get('model', '').lower() for model in ["video", "none"]):
             vid_id += [i]
 
     # Image size
diff --git a/Pose2Sim/common.py b/Pose2Sim/common.py
index ff13e850..5388bf37 100644
--- a/Pose2Sim/common.py
+++ b/Pose2Sim/common.py
@@ -974,11 +974,12 @@ def compute_height(Q_coords, keypoints_names, fastest_frames_to_remove_percent=0
 
     try:
         head_pair = [['MidShoulder', 'Head']]
-        head = [euclidean_distance(Q_coords_low_speeds_low_angles[pair[0]],Q_coords_low_speeds_low_angles[pair[1]]) for pair in head_pair][0]
+        head = [euclidean_distance(Q_coords_low_speeds_low_angles[pair[0]],Q_coords_low_speeds_low_angles[pair[1]]) for pair in head_pair][0]\
+               *1.008
     except:
         head_pair = [['MidShoulder', 'Nose']]
         head = [euclidean_distance(Q_coords_low_speeds_low_angles[pair[0]],Q_coords_low_speeds_low_angles[pair[1]]) for pair in head_pair][0]\
-                *1.33
+                *1.5
-        logging.warning('The Head marker is missing from your model. Considering Neck to Head size as 1.33 times Neck to MidShoulder size.')
+        logging.warning('The Head marker is missing from your model. Considering Neck to Head size as 1.5 times Neck to MidShoulder size.')
 
     heights = (rfoot + lfoot)/2 + (rshank + lshank)/2 + (rfemur + lfemur)/2 + (rback + lback)/2 + head
diff --git a/Pose2Sim/filtering.py b/Pose2Sim/filtering.py
index 156ddf11..72ec98e9 100644
--- a/Pose2Sim/filtering.py
+++ b/Pose2Sim/filtering.py
@@ -688,7 +688,8 @@ def filter_all(config_dict):
             raise
         frame_rate = round(cap.get(cv2.CAP_PROP_FPS))
     except:
-        frame_rate = 60
+        logging.warning('Cannot read video. 
Frame rate will be set to 30 fps.')
+            frame_rate = 30

     # Trc paths
     trc_path_in = [file for file in glob.glob(os.path.join(pose3d_dir, '*.trc')) if 'filt' not in file]
@@ -699,14 +700,14 @@ def filter_all(config_dict):
         Q_coords, frames_col, time_col, markers, header = read_trc(t_path_in)

         # frame range selection
-        f_range = [[frames_col.iloc[0], frames_col.iloc[-1]+1] if frame_range in ('all', 'auto', []) else frame_range][0]
+        f_range = [[frames_col.iloc[0], frames_col.iloc[-1]+1] if (frame_range in ('all', 'auto', []) or frames_col.iloc[0]>frame_range[0] or frames_col.iloc[-1]<frame_range[1]) else frame_range][0]

diff --git a/Pose2Sim/poseEstimation.py b/Pose2Sim/poseEstimation.py
--- a/Pose2Sim/poseEstimation.py
+++ b/Pose2Sim/poseEstimation.py
+            try:
+                # Detect poses
+                keypoints, scores = pose_tracker(frame)
+
+                # Non maximum suppression (at pose level, not detection, and only using likely keypoints)
+                frame_shape = frame.shape
+                mask_scores = np.mean(scores, axis=1) > 0.2
+
+                likely_keypoints = np.where(mask_scores[:, np.newaxis, np.newaxis], keypoints, np.nan)
+                likely_scores = np.where(mask_scores[:, np.newaxis], scores, np.nan)
+                likely_bboxes = bbox_xyxy_compute(frame_shape, likely_keypoints, padding=0)
+                score_likely_bboxes = np.nanmean(likely_scores, axis=1)
+
+                valid_indices = np.where(~np.isnan(score_likely_bboxes))[0]
+                if len(valid_indices) > 0:
+                    valid_bboxes = likely_bboxes[valid_indices]
+                    valid_scores = score_likely_bboxes[valid_indices]
+                    keep_valid = nms(valid_bboxes, valid_scores, nms_thr=0.45)
+                    keep = valid_indices[keep_valid]
+                else:
+                    keep = []
+                keypoints, scores = likely_keypoints[keep], likely_scores[keep]
+
+                # Track poses across frames
+                if tracking_mode == 'deepsort':
+                    keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_idx)
+                if tracking_mode == 'sports2d':
+                    if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
+                    prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance_px)
+                else:
+                    pass
-            # Non maximum suppression (at pose level, not detection, and only using likely keypoints)
-            frame_shape = frame.shape
-            mask_scores = np.mean(scores, axis=1) > 0.2
-
-            likely_keypoints = np.where(mask_scores[:, np.newaxis, np.newaxis], keypoints, np.nan)
-            likely_scores = np.where(mask_scores[:, np.newaxis], scores, np.nan)
-            likely_bboxes = bbox_xyxy_compute(frame_shape, likely_keypoints, padding=0)
-            score_likely_bboxes = np.nanmean(likely_scores, axis=1)
-
-            valid_indices = np.where(~np.isnan(score_likely_bboxes))[0]
-            if len(valid_indices) > 0:
-                valid_bboxes = likely_bboxes[valid_indices]
-                valid_scores = score_likely_bboxes[valid_indices]
-                keep_valid = nms(valid_bboxes, valid_scores, nms_thr=0.45)
-                keep = valid_indices[keep_valid]
-            else:
-                keep = []
-            keypoints, scores = likely_keypoints[keep], likely_scores[keep]
-
-            # Track poses across frames
-            if tracking_mode == 'deepsort':
-                keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_idx)
-            if tracking_mode == 'sports2d':
-                if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
-                prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance_px)
-            else:
-                pass
+            except:
+                keypoints = np.full((1,kpt_id_max,2), fill_value=np.nan)
+                scores = np.full((1,kpt_id_max), fill_value=np.nan)

             # Save to json
             if 'openpose' in output_format:
@@ -387,6 +475,10 @@ def process_images(image_folder_path, vid_img_extension, pose_tracker, pose_mode
         cv2.namedWindow(f"Pose Estimation {os.path.basename(image_folder_path)}", cv2.WINDOW_NORMAL)
         cv2.resizeWindow(f"Pose Estimation {os.path.basename(image_folder_path)}", display_width, display_height)

+    # Retrieve keypoint ids from the model tree
+    keypoints_ids = [node.id for _, _, node in RenderTree(pose_model) if node.id!=None]
+    kpt_id_max = max(keypoints_ids)+1
+
     f_range = [[0,len(image_files)] if frame_range in ('all', 'auto', []) else frame_range][0]
     for frame_idx, image_file in enumerate(tqdm(image_files, desc=f'\nProcessing {os.path.basename(img_output_dir)}')):
         if frame_idx in range(*f_range):
@@ -396,16 +488,20 @@ def process_images(image_folder_path, vid_img_extension, pose_tracker, pose_mode
             except:
                 raise NameError(f"{image_file} is not an image. Videos must be put in the video directory, not in subdirectories.")

-            # Detect poses
-            keypoints, scores = pose_tracker(frame)
-
-            # Track poses across frames
-            if tracking_mode == 'deepsort':
-                keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_idx)
-            if tracking_mode == 'sports2d':
-                if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
-                prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance_px)
-
+            try:
+                # Detect poses
+                keypoints, scores = pose_tracker(frame)
+
+                # Track poses across frames
+                if tracking_mode == 'deepsort':
+                    keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_idx)
+                if tracking_mode == 'sports2d':
+                    if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
+                    prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance_px)
+            except:
+                keypoints = np.full((1,kpt_id_max,2), fill_value=np.nan)
+                scores = np.full((1,kpt_id_max), fill_value=np.nan)
+
             # Extract frame number from the filename
             if 'openpose' in output_format:
                 json_file_path = os.path.join(json_output_dir, f"{os.path.splitext(os.path.basename(image_file))[0]}_{frame_idx:06d}.json")
@@ -538,76 +634,12 @@ def estimate_pose_all(config_dict):

     # Select the appropriate model based on the model_type
     logging.info('\nEstimating pose...')
-    if pose_model.upper() in ('HALPE_26', 'BODY_WITH_FEET'):
-        model_name = 'HALPE_26'
-        ModelClass = BodyWithFeet # 26 keypoints(halpe26)
-        logging.info(f"Using HALPE_26 model (body and feet) for pose estimation.")
-    elif pose_model.upper() in ('COCO_133', 'WHOLE_BODY', 'WHOLE_BODY_WRIST'):
-        model_name = 'COCO_133'
-        ModelClass = Wholebody
-        logging.info(f"Using COCO_133 model (body, feet, hands, and face) for pose estimation.")
-    elif pose_model.upper() in ('COCO_17', 'BODY'):
-        model_name = 'COCO_17'
-        ModelClass = Body
-        logging.info(f"Using COCO_17 model (body) for pose estimation.")
-    elif pose_model.upper() =='HAND':
-        model_name = 'HAND_21'
-        ModelClass = Hand
-        logging.info(f"Using HAND_21 model for pose estimation.")
-    elif pose_model.upper() =='FACE':
-        model_name = 'FACE_106'
-        logging.info(f"Using FACE_106 model for pose estimation.")
-    elif pose_model.upper() =='ANIMAL':
-        model_name = 'ANIMAL2D_17'
-        logging.info(f"Using ANIMAL2D_17 model for pose estimation.")
-    else:
-        model_name = pose_model.upper()
-        logging.info(f"Using model {model_name} for pose estimation.")
     pose_model_name = pose_model
-    try:
-        pose_model = eval(model_name)
-    except:
-        try: # from Config.toml
-            pose_model = DictImporter().import_(config_dict.get('pose').get(pose_model))
-            if pose_model.id == 'None':
-                pose_model.id = None
-        except:
-            raise NameError(f'{pose_model} not found in skeletons.py nor in Config.toml')
+    pose_model, ModelClass, mode = setup_model_class_mode(pose_model, mode, config_dict)

     # Select device and backend
     backend, device = setup_backend_device(backend=backend, device=device)

-    # Manually select the models if mode is a dictionary rather than 'lightweight', 'balanced', or 'performance'
-    if not mode in ['lightweight', 'balanced', 'performance'] or 'ModelClass' not in locals():
-        try:
-            try:
-                mode = ast.literal_eval(mode)
-            except: # if within single quotes instead of double quotes when run with sports2d --mode """{dictionary}"""
-                mode = mode.strip("'").replace('\n', '').replace(" ", "").replace(",", '", "').replace(":", '":"').replace("{", '{"').replace("}", '"}').replace('":"/',':/').replace('":"\\',':\\')
-                mode = re.sub(r'"\[([^"]+)",\s?"([^"]+)\]"', r'[\1,\2]', mode) # changes "[640", "640]" to [640,640]
-                mode = json.loads(mode)
-            det_class = mode.get('det_class')
-            det = mode.get('det_model')
-            det_input_size = mode.get('det_input_size')
-            pose_class = mode.get('pose_class')
-            pose = mode.get('pose_model')
-            pose_input_size = mode.get('pose_input_size')
-
-            ModelClass = partial(Custom,
-                                 det_class=det_class, det=det, det_input_size=det_input_size,
-                                 pose_class=pose_class, pose=pose, pose_input_size=pose_input_size,
-                                 backend=backend, device=device)
-
-            if pose_class == 'RTMO' and model_name != 'COCO_17':
-                logging.warning("RTMO currently only supports 'Body' pose_model. Switching to 'Body'.")
-                pose_model = eval('COCO_17')
-
-        except (json.JSONDecodeError, TypeError):
-            logging.warning("\nInvalid mode. Must be 'lightweight', 'balanced', 'performance', or '''{dictionary}''' of parameters within triple quotes. Make sure input_sizes are within square brackets.")
-            logging.warning('Using the default "balanced" mode.')
-            mode = 'balanced'
-
     # Estimate pose
     try:
         pose_listdirs_names = next(os.walk(pose_dir))[1]
diff --git a/README.md b/README.md
index aaeb0890..0f7f8bc5 100644
--- a/README.md
+++ b/README.md
@@ -909,6 +909,12 @@ Converts 3D point data from a .trc file to a .c3d file compatible with Visual3D.
 [trc_desample.py](https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Utilities/trc_desample.py)
 Undersamples a trc file.

+[trc_scale.py](https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Utilities/trc_scale.py)
+Scales trc coordinates by a desired factor.
+
+[trc_rotate.py](https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Utilities/trc_rotate.py)
+Rotates trc coordinates by 90° around an axis. You can either choose an axis to rotate around, or use one of the predefined conversions from one axis-up convention to another.
+
 [trc_Zup_to_Yup.py](https://github.com/perfanalytics/pose2sim/blob/main/Pose2Sim/Utilities/trc_Zup_to_Yup.py)
 Changes Z-up system coordinates to Y-up system coordinates.
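For readers unfamiliar with axis-up conventions: a 90° rotation of TRC coordinates amounts to applying one fixed rotation matrix (a permutation with sign flips) to every marker's (X, Y, Z) triplet. The sketch below illustrates the idea behind such a conversion; it is not the actual trc_rotate.py implementation, and the flat (n_frames, n_markers * 3) array layout is an assumption for the example.

```python
# Illustrative sketch of a 90 deg rotation about X (one common Z-up -> Y-up
# conversion), NOT the actual trc_rotate.py code. Assumes coordinates are
# stored as an (n_frames, n_markers * 3) array of X, Y, Z triplets.
import numpy as np

def rotate_zup_to_yup(coords):
    # Rotation by -90 deg about X maps (x, y, z) -> (x, z, -y)
    R = np.array([[1, 0, 0],
                  [0, 0, 1],
                  [0, -1, 0]])
    pts = coords.reshape(coords.shape[0], -1, 3)      # (frames, markers, 3)
    return (pts @ R.T).reshape(coords.shape[0], -1)   # back to flat layout
```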
diff --git a/pose2sim.yaml b/pose2sim.yaml
new file mode 100644
index 00000000..1523553c
--- /dev/null
+++ b/pose2sim.yaml
@@ -0,0 +1,13 @@
+# pose2sim.yaml
+
+# install: conda env create -f pose2sim.yaml
+name: Pose2Sim
+channels:
+  - conda-forge
+  - opensim-org
+dependencies:
+  - python>=3.10
+  - pip
+  - opensim
+  - pip:
+    - pose2sim
diff --git a/pose2sim_installer.py b/pose2sim_installer.py
new file mode 100644
index 00000000..75ae7a4d
--- /dev/null
+++ b/pose2sim_installer.py
@@ -0,0 +1,326 @@
+import os
+import sys
+import platform
+import subprocess
+import tempfile
+import urllib.request
+from pathlib import Path
+import tkinter as tk
+from tkinter import ttk
+import threading
+import io
+
+class Pose2SimInstaller:
+    def __init__(self):
+        self.os_type = platform.system()
+        self.miniconda_installed = self.check_miniconda()
+        self.home_dir = str(Path.home())
+
+    def check_miniconda(self):
+        """Check if Miniconda/Anaconda is installed"""
+        try:
+            subprocess.run(["conda", "--version"],
+                           stdout=subprocess.PIPE,
+                           stderr=subprocess.PIPE,
+                           check=True)
+            return True
+        except (subprocess.SubprocessError, FileNotFoundError):
+            return False
+
+    def download_miniconda(self):
+        """Download appropriate Miniconda installer"""
+        print("Downloading Miniconda installer...")
+
+        # Create temp directory
+        temp_dir = tempfile.mkdtemp()
+        installer_path = os.path.join(temp_dir, "miniconda_installer")
+
+        # Select correct installer based on OS
+        if self.os_type == "Windows":
+            url = "https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe"
+            installer_path += ".exe"
+        elif self.os_type == "Darwin":  # macOS
+            if platform.machine() == "arm64":  # Apple Silicon
+                url = "https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh"
+            else:  # Intel
+                url = "https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh"
+            installer_path += ".sh"
+        else:  # Linux
+            url = "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh"
+            installer_path += ".sh"
+
+        # Download the installer
+        urllib.request.urlretrieve(url, installer_path)
+
+        return installer_path
+
+    def install_miniconda(self, installer_path):
+        """Install Miniconda"""
+        print("Installing Miniconda...")
+
+        if self.os_type == "Windows":
+            subprocess.run([installer_path, "/InstallationType=JustMe",
+                            "/RegisterPython=0", "/S", f"/D={self.home_dir}\\Miniconda3"],
+                           check=True)
+        else:  # macOS or Linux
+            subprocess.run(["bash", installer_path, "-b", "-p",
+                            f"{self.home_dir}/miniconda3"], check=True)
+
+        # Update PATH for current process
+        if self.os_type == "Windows":
+            os.environ["PATH"] = f"{self.home_dir}\\Miniconda3;{self.home_dir}\\Miniconda3\\Scripts;" + os.environ["PATH"]
+        else:
+            os.environ["PATH"] = f"{self.home_dir}/miniconda3/bin:" + os.environ["PATH"]
+
+    def setup_pose2sim(self):
+        """Set up Pose2Sim environment and dependencies"""
+        print("\nSetting up Pose2Sim environment...")
+
+        # Determine conda executable path
+        conda_exec = "conda"
+        if self.os_type == "Windows":
+            conda_exec = f"{self.home_dir}\\Miniconda3\\Scripts\\conda.exe"
+
+        # Create and configure environment
+        commands = [
+            [conda_exec, "create", "-n", "Pose2Sim", "python=3.10", "-y"],
+            [conda_exec, "install", "-c", "opensim-org", "opensim", "-y", "--name", "Pose2Sim"],
+        ]
+
+        # For Windows, activate environment first
+        if self.os_type == "Windows":
+            activate_cmd = f"{self.home_dir}\\Miniconda3\\Scripts\\activate.bat"
+            pip_commands = [
+                f"call {activate_cmd} Pose2Sim && pip install pose2sim",
+                f"call {activate_cmd} Pose2Sim && pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124",
+                f"call {activate_cmd} Pose2Sim && pip uninstall -y onnxruntime",
+                f"call {activate_cmd} Pose2Sim && pip install onnxruntime-gpu"
+            ]
+        else:  # macOS/Linux
+            activate_cmd = f"source {self.home_dir}/miniconda3/bin/activate Pose2Sim"
+            pip_commands = [
+                f"{activate_cmd} && pip install pose2sim",
+                f"{activate_cmd} && pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124",
+                f"{activate_cmd} && pip uninstall -y onnxruntime",
+                f"{activate_cmd} && pip install onnxruntime-gpu"
+            ]
+
+        # Execute conda commands
+        for cmd in commands:
+            print(f"Running: {' '.join(cmd)}")
+            subprocess.run(cmd, check=True)
+
+        # Execute pip commands through shell
+        for cmd in pip_commands:
+            print(f"Running: {cmd}")
+            if self.os_type == "Windows":
+                subprocess.run(cmd, shell=True, check=True)
+            else:
+                subprocess.run(cmd, shell=True, executable="/bin/bash", check=True)
+
+    def create_launcher(self):
+        """Create a launcher for Pose2Sim"""
+        print("\nCreating Pose2Sim launcher...")
+
+        if self.os_type == "Windows":
+            launcher_path = os.path.join(os.environ["USERPROFILE"], "Desktop", "Pose2Sim.bat")
+            with open(launcher_path, "w") as f:
+                f.write(f"@echo off\n")
+                f.write(f"call {self.home_dir}\\Miniconda3\\Scripts\\activate.bat Pose2Sim\n")
+                f.write(f"echo Pose2Sim environment activated!\n")
+                f.write(f"pose2sim\n")
+                f.write(f"cmd /k\n")  # run pose2sim first; cmd /k then keeps the window open
+        else:  # macOS/Linux
+            launcher_path = os.path.join(self.home_dir, "Desktop", "Pose2Sim.sh")
+            with open(launcher_path, "w") as f:
+                f.write("#!/bin/bash\n")
+                f.write(f"source {self.home_dir}/miniconda3/bin/activate Pose2Sim\n")
+                f.write("echo 'Pose2Sim environment activated!'\n")
+                f.write(f"pose2sim\n")
+                f.write("exec $SHELL\n")  # pose2sim must run before exec replaces the shell
+
+        # Make executable
+        os.chmod(launcher_path, 0o755)
+
+        print(f"Launcher created at: {launcher_path}")
+
+    def install(self):
+        """Run the complete installation process"""
+        print("=== Pose2Sim Installer ===")
+
+        # Check for GPU support
+        try:
+            if self.os_type == "Windows":
+                nvidia_smi = subprocess.run(["nvidia-smi"], stdout=subprocess.PIPE,
+                                            stderr=subprocess.PIPE)
+                if nvidia_smi.returncode != 0:
+                    print("WARNING: NVIDIA GPU not detected. GPU acceleration won't be available.")
+        except FileNotFoundError:
+            print("WARNING: NVIDIA GPU not detected. GPU acceleration won't be available.")
+
+        # Install Miniconda if needed
+        if not self.miniconda_installed:
+            installer_path = self.download_miniconda()
+            self.install_miniconda(installer_path)
+        else:
+            print("Miniconda already installed, skipping installation.")
+
+        # Setup Pose2Sim
+        self.setup_pose2sim()
+
+        # Create launcher
+        self.create_launcher()
+
+        print("\n=== Installation Complete! ===")
+        print("You can now use Pose2Sim by running the created launcher.")
+
+
+class InstallerGUI:
+    def __init__(self, root):
+        self.root = root
+        self.root.title("Pose2Sim Installer")
+        self.root.geometry("600x400")
+        self.root.resizable(True, True)
+
+        # Store original stdout before any redirection
+        self.original_stdout = sys.stdout
+
+        # Create main frame
+        main_frame = ttk.Frame(root, padding="20")
+        main_frame.pack(fill=tk.BOTH, expand=True)
+
+        # Add title
+        title_label = ttk.Label(main_frame, text="Pose2Sim Installer", font=("Helvetica", 16, "bold"))
+        title_label.pack(pady=(0, 20))
+
+        # Add description
+        desc_text = "This installer will set up Pose2Sim with all required dependencies."
+        desc_label = ttk.Label(main_frame, text=desc_text, wraplength=500)
+        desc_label.pack(pady=(0, 20))
+
+        # Create text box for logs
+        self.log_box = tk.Text(main_frame, height=15, width=70, state="disabled")
+        self.log_box.pack(fill=tk.BOTH, expand=True, pady=(0, 20))
+
+        # Add scrollbar to text box
+        scrollbar = ttk.Scrollbar(self.log_box, command=self.log_box.yview)
+        scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
+        self.log_box.config(yscrollcommand=scrollbar.set)
+
+        # Add progress bar
+        self.progress = ttk.Progressbar(main_frame, orient="horizontal", length=500, mode="indeterminate")
+        self.progress.pack(fill=tk.X, pady=(0, 20))
+
+        # Add buttons
+        button_frame = ttk.Frame(main_frame)
+        button_frame.pack(fill=tk.X)
+
+        self.install_button = ttk.Button(button_frame, text="Install", command=self.start_installation)
+        self.install_button.pack(side=tk.LEFT, padx=5)
+
+        self.exit_button = ttk.Button(button_frame, text="Exit", command=self.cleanup_and_exit)
+        self.exit_button.pack(side=tk.RIGHT, padx=5)
+
+        # Set up output redirection
+        sys.stdout = self.RedirectText(self)
+
+    def cleanup_and_exit(self):
+        """Restore stdout and exit"""
+        sys.stdout = self.original_stdout
+        self.root.destroy()
+
+    class RedirectText:
+        def __init__(self, gui):
+            self.gui = gui
+
+        def write(self, string):
+            if self.gui.original_stdout:
+                self.gui.original_stdout.write(string)
+            self.gui.update_log(string)
+
+        def flush(self):
+            if self.gui.original_stdout:
+                self.gui.original_stdout.flush()
+
+    def update_log(self, text):
+        """Update the log box with new text"""
+        self.root.after(0, self._update_log, text)
+
+    def _update_log(self, text):
+        """Actually update the log box (must be called from main thread)"""
+        self.log_box.config(state="normal")
+        self.log_box.insert(tk.END, text)
+        self.log_box.see(tk.END)
+        self.log_box.config(state="disabled")
+
+    def start_installation(self):
+        """Start the installation process in a separate thread"""
+        self.install_button.config(state="disabled")
+        self.progress.start()
+        print("Starting installation...\n")
+
+        # Create and start installation thread
+        install_thread = threading.Thread(target=self.run_installation)
+        install_thread.daemon = True
+        install_thread.start()
+
+    def run_installation(self):
+        """Run the actual installation process"""
+        try:
+            installer = Pose2SimInstaller()
+            installer.install()
+
+            # Update UI on completion
+            self.root.after(0, self.installation_complete, True)
+        except Exception as e:
+            error_msg = f"ERROR: Installation failed: {str(e)}\n"
+            print(error_msg)
+            # Update UI on failure
+            self.root.after(0, self.installation_complete, False)
+
+    def installation_complete(self, success):
+        """Handle installation completion"""
+        self.progress.stop()
+
+        if success:
+            print("\nInstallation completed successfully!")
+        else:
+            print("\nInstallation failed. Please check the log for details.")
+
+        self.install_button.config(state="normal")
+        self.install_button.config(text="Close")
+        self.install_button.config(command=self.cleanup_and_exit)
+
+
+if __name__ == "__main__":
+    # Check if we should use GUI or console mode
+    use_gui = True
+
+    # Use console mode if --console argument is provided
+    if len(sys.argv) > 1 and "--console" in sys.argv:
+        use_gui = False
+
+    if use_gui:
+        # Run with GUI
+        root = tk.Tk()
+        app = InstallerGUI(root)
+        root.mainloop()
+    else:
+        # Run in console mode
+        installer = Pose2SimInstaller()
+        try:
+            installer.install()
+            print("\nInstallation completed successfully!")
+        except Exception as e:
+            print(f"ERROR: Installation failed: {e}")
+            print("Please check the error message and try again.")
+            sys.exit(1)
+        finally:
+            # Keep the window open when double-clicked in Windows
+            if platform.system() == "Windows":
+                # Only ask for input if it's not being run from a terminal
+                if not os.environ.get("PROMPT"):
+                    print("\nPress Enter to exit...")
+                    input()
+            else:
+                input("\nPress Enter to exit...")
\ No newline at end of file
diff --git a/pyproject.toml b/pyproject.toml
index 6f629c3d..af0148dd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -51,8 +51,10 @@ dependencies = [
     "rtmlib",
     "openvino",
     "onnxruntime",
-    "opencv-python != 4.11.*", # avoid 4.11 due to displaymatrix ignored (rotation metadata)
-    # "deep-sort-realtime", # likely not required anymore
+    "opencv-python != 4.11.*", # avoid 4.11 due to displaymatrix ignored (rotation metadata)
+    # "deep-sort-realtime", # likely not required anymore
+    "customtkinter",
+    "requests"
 ]

 [tool.setuptools_scm]
@@ -89,5 +91,8 @@ trc_gaitevents = "Pose2Sim.Utilities.trc_gaitevents:main"
 trc_plot = "Pose2Sim.Utilities.trc_plot:main"
 trc_to_c3d = "Pose2Sim.Utilities.trc_to_c3d:main"
 trc_Zup_to_Yup = "Pose2Sim.Utilities.trc_Zup_to_Yup:main"
+trc_rotate = "Pose2Sim.Utilities.trc_rotate:main"
+trc_scale = "Pose2Sim.Utilities.trc_scale:main"
 tests_pose2sim = "Pose2Sim.Utilities.tests:main"
+Pose2Sim = "GUI.main:main"
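As context for the new `[project.scripts]` entries above: each one makes pip generate a small console wrapper that imports the target module and calls the named function. Conceptually, `trc_rotate = "Pose2Sim.Utilities.trc_rotate:main"` behaves like the sketch below (the generated wrapper's idea, not code from this PR):

```python
# Conceptual equivalent of the console script pip generates for
# trc_rotate = "Pose2Sim.Utilities.trc_rotate:main". Not part of this PR.
import sys
from Pose2Sim.Utilities.trc_rotate import main

if __name__ == '__main__':
    sys.exit(main())
```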