> **N.B.:** The Z coordinate (depth) should not be overly trusted.
To convert from pixels to meters, you need at minimum the height of a participant. Better results can be obtained by also providing information on depth. The camera horizon angle and the floor height are generally estimated automatically. **N.B.: A calibration file will be generated.**

- The pixel-to-meter scale is computed from the ratio between the height of the participant in meters and in pixels. The height in pixels is calculated automatically; use the `--first_person_height` parameter to specify the height in meters.
- Depth perspective effects can be compensated with the camera-to-person distance (m), the focal length (px), the field of view (degrees or radians), or a calibration file. Use the `--perspective_unit` ('distance_m', 'f_px', 'fov_deg', 'fov_rad', or 'from_calib') and `--perspective_value` parameters (resp. in m, px, deg, rad, or '').
- The camera horizon angle can be estimated from kinematics (`auto`), from a calibration file (`from_calib`), or manually (float). Use the `--floor_angle` parameter.
- Likewise for the floor level. Use the `--xy_origin` parameter.
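The two conversions described above (height ratio and field-of-view-to-focal-length) can be sketched in a few lines. These helper names are illustrative only, not part of Sports2D's API:

```python
import math

def pixel_to_meter_scale(height_m, height_px):
    """Scale factor from the ratio of a person's height in meters vs. pixels."""
    return height_m / height_px

def focal_length_px(fov_deg, image_width_px):
    """Convert a horizontal field of view (degrees) to a focal length in pixels."""
    return (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)

# e.g., a 1.65 m person standing 600 px tall, filmed with a 60-degree FOV at 1920 px wide
scale = pixel_to_meter_scale(1.65, 600)
f_px = focal_length_px(60, 1920)
```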

If one of these parameters is set to `from_calib`, then use `--calib_file`.
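A hypothetical invocation combining these options (the parameter names come from the list below; the values, such as the 60-degree field of view, are placeholders):

```cmd
sports2d --first_person_height 1.65 `
         --perspective_unit fov_deg --perspective_value 60 `
         --floor_angle auto --xy_origin auto
```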
The `.toml` calibration file can either be one previously generated by Sports2D, or a more accurate one coming from another system: [Pose2Sim](https://github.com/perfanalytics/pose2sim), for example, can be used to accurately calculate calibration, or to convert calibration files from Qualisys, Vicon, OpenCap, FreeMoCap, etc.
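For example, reusing the sample calibration file name from the docs (illustrative invocation):

```cmd
sports2d --calib_file Calib_demo.toml --visible_side auto front none
```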
'config': ["C", "path to a toml configuration file"],
'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non-ASCII characters"],
'time_range': ["t", "start_time end_time. In seconds. Whole video if not specified. start_time1 end_time1 start_time2 end_time2 ... if multiple videos with different time ranges"],
'nb_persons_to_detect': ["n", "number of persons to detect. int or 'all'. 'all' if not specified"],
'person_ordering_method': ["", "'on_click', 'highest_likelihood', 'largest_size', 'smallest_size', 'greatest_displacement', 'least_displacement', 'first_detected', or 'last_detected'. 'on_click' if not specified"],
'first_person_height': ["H", "height of the reference person in meters. 1.65 if not specified. Not used if a calibration file is provided"],
'visible_side': ["", "front, back, left, right, auto, or none. 'auto front none' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
'participant_mass': ["", "mass of the participant in kg, or none. 70 if not specified. No influence on kinematics (motion), only on kinetics (forces)"],
'perspective_value': ["", "either camera-to-person distance (m), focal length (px), field of view (degrees or radians), or '' if perspective_unit == 'from_calib'"],
'perspective_unit': ["", "'distance_m', 'f_px', 'fov_deg', 'fov_rad', or 'from_calib'"],
'do_ik': ["", "do inverse kinematics. false if not specified"],
'use_augmentation': ["", "use LSTM marker augmentation. false if not specified"],
'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
'compare': ["", "visually compare motion with trc file. false if not specified"],
'video_dir': ["d", "current directory if not specified"],
'result_dir': ["r", "current directory if not specified"],
'webcam_id': ["w", "webcam ID. 0 if not specified"],
'show_realtime_results': ["R", "show results in real-time. true if not specified"],
'display_angle_values_on': ["a", '"body", "list", "body" "list", or "none". body list if not specified'],
'show_graphs': ["G", "show plots of raw and processed results. true if not specified"],
'save_graphs': ["", "save position and angle plots of raw and processed results. true if not specified"],
'save_vid': ["V", "save processed video. true if not specified"],
'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
'calib_file': ["", "path to calibration file. '' if not specified, i.e., no calibration file"],
'save_calib': ["", "save calibration file. true if not specified"],
'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
'use_simple_model': ["", "IK 10+ times faster, but no muscles, no flexible spine, no patella. false if not specified"],
'close_to_zero_speed_m': ["", "Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower and harder to parametrize, but can be more robust if correctly tuned"],
'keypoint_likelihood_threshold': ["", "detected keypoints are not retained if likelihood is below this threshold. 0.3 if not specified"],
'average_likelihood_threshold': ["", "detected persons are not retained if average keypoint likelihood is below this threshold. 0.5 if not specified"],
'keypoint_number_threshold': ["", "detected persons are not retained if the number of detected keypoints is below this threshold. 0.3 if not specified, i.e., 30 percent"],
'max_distance': ["", "if a person is detected further than max_distance from its position in the previous frame, it is considered a new person. In px or None. 100 if not specified"],
'fastest_frames_to_remove_percent': ["", "frames with high speed are considered outliers. 0.1 if not specified"],
'close_to_zero_speed_px': ["", "Sum for all keypoints: about 50 px/frame or 0.2 m/frame. 50 if not specified"],
'large_hip_knee_angles': ["", "hip and knee angles below this value are considered imprecise. 45 if not specified"],
'interp_gap_smaller_than': ["", "interpolate sequences of missing data if they are less than N frames long. 10 if not specified"],
'fill_large_gaps_with': ["", "last_value, nan, or zeros. last_value if not specified"],
'sections_to_keep': ["", "all, largest, first, or last. Keep 'all' valid sections even when they are interspersed with undetected chunks, or the 'largest' valid section, or the 'first' one, or the 'last' one"],
'min_chunk_size': ["", "minimum number of valid frames in a row to keep a chunk of data for a person. 10 if not specified"],
'reject_outliers': ["", "reject outliers with a Hampel filter before other filtering methods. true if not specified"],
'filter': ["", "filter results. true if not specified"],
'filter_type': ["", "butterworth, kalman, gcv_spline, gaussian, median, or loess. butterworth if not specified"],
'cut_off_frequency': ["", "cut-off frequency of the Butterworth filter. 6 if not specified"],
'order': ["", "order of the Butterworth filter. 4 if not specified"],
'gcv_cut_off_frequency': ["", "cut-off frequency of the GCV spline filter. 'auto' is usually better, unless the signal is too short (noise can then be considered as signal -> trajectories not filtered). 'auto' if not specified"],
'gcv_smoothing_factor': ["", "smoothing factor of the GCV spline filter (>=0). Ignored if cut_off_frequency != 'auto'. Biases results towards more smoothing (>1) or more fidelity to data (<1). 1.0 if not specified"],
'trust_ratio': ["", "trust ratio of the Kalman filter: how much more do you trust triangulation results (measurements) than the assumption of constant acceleration (process)? 500 if not specified"],
'smooth': ["", "dual Kalman smoothing. true if not specified"],
'sigma_kernel': ["", "sigma of the gaussian filter. 1 if not specified"],
'nb_values_used': ["", "number of values used for the loess filter. 5 if not specified"],
'kernel_size': ["", "kernel size of the median filter. 3 if not specified"],
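As a sketch of what the Butterworth defaults above mean (order 4, 6 Hz cut-off, applied with zero phase so trajectories are not delayed), here is a SciPy stand-in, not Sports2D's actual implementation; the 60 fps frame rate is an assumption:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0                 # assumed video frame rate (Hz)
cutoff, order = 6.0, 4    # defaults from the parameter list above
b, a = butter(order, cutoff / (fs / 2), btype="low")

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / fs)
noisy = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)  # 1 Hz motion + noise
smoothed = filtfilt(b, a, noisy)  # forward-backward filtering: zero time lag
```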
2. **Sets up pose estimation with RTMLib.** It can be run in lightweight, balanced, or performance mode, and for faster inference, the person bounding boxes can be tracked instead of detected every frame. Any RTMPose model can be used.

3. **Tracks people** so that their IDs are consistent across frames. A person in one frame is associated with a person in the next frame when the distance between them is small. IDs remain consistent even if the person disappears for a few frames, thanks to the 'sports2D' tracker. [See the release notes of v0.8.22 for more information](https://github.com/davidpagnon/Sports2D/releases/tag/v0.8.22).

4. **Chooses which persons to analyze.** In single-person mode, only the person with the highest average scores over the sequence is kept. In multi-person mode, you can choose the number of persons to analyze (`nb_persons_to_detect`) and how to order them (`person_ordering_method`). The ordering method can be 'on_click', 'highest_likelihood', 'largest_size', 'smallest_size', 'greatest_displacement', 'least_displacement', 'first_detected', or 'last_detected'. `on_click` is the default and lets the user click on the persons they are interested in, in the desired order.

5. **Converts the pixel coordinates to meters.** The user can provide the height of a specified person to scale results accordingly. The camera horizon angle and the floor level can be detected automatically from the gait sequence, specified manually, or obtained from a calibration file. Depth perspective effects are compensated using the camera-to-subject distance, the focal length, the field of view, or a calibration file. [See the release notes of v0.8.25 for more information](https://github.com/davidpagnon/Sports2D/releases/tag/v0.8.25).

6. **Computes the selected joint and segment angles**, and flips them on the left/right side if the respective foot is pointing to the left/right.
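The distance-based ID association used in the tracking step can be sketched as a greedy nearest-neighbour match. This is a simplified illustration, not Sports2D's actual tracker; `max_distance` mirrors the parameter of the same name (100 px by default):

```python
import numpy as np

def associate_persons(prev_positions, new_positions, max_distance=100):
    """Greedily match each tracked person to the nearest new detection.

    prev_positions: {person_id: (x, y)} from the previous frame.
    new_positions: list of (x, y) detections in the current frame.
    Returns {person_id: detection_index}; unmatched detections become new persons.
    """
    assignments, used = {}, set()
    for pid, prev in prev_positions.items():
        dists = {j: float(np.linalg.norm(np.subtract(p, prev)))
                 for j, p in enumerate(new_positions) if j not in used}
        if not dists:
            continue
        j = min(dists, key=dists.get)          # nearest unused detection
        if dists[j] <= max_distance:           # too far -> treated as a new person
            assignments[pid] = j
            used.add(j)
    return assignments
```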