Supported marker augmentation and inverse kinematics + others #22
- Depth values in trc files adjusted based on neutral position for the sagittal plane (left and right side) and frontal plane (front and back side)
- Supported marker augmentation and inverse kinematics via Pose2Sim
- C3D export
- Renamed several parameters
- Fixed the Whole_body_wrist model
- Fixed the `load_trc` option
- Updated documentation
> **N.B.:** Depth is estimated from a neutral pose.
You may need to convert pixel coordinates to meters.\
Just provide the height of the reference person (and their ID in case of multiple person detection).

You can also specify whether the visible side of the person is left, right, front, or back. Set it to 'auto' if you want it to be determined automatically (this only works for motion in the sagittal plane), or to 'none' if you want to keep 2D instead of 3D coordinates (for example, if the person first goes right and then left).

The floor angle and the origin of the xy axis are computed automatically from gait. If you analyze another type of motion, you can manually specify them. Note that `y` points down.\
Also note that distortions are not taken into account, and that results will be less accurate for motions in the frontal plane.
> **N.B.:** The person needs to be moving on a single plane for the whole selected time range.
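
For example, a minimal sketch of such a run, using only flags that appear in the parameter list further down (the long-form flag names are assumed to match the parameter names; the reference person's ID can also be set via `px_to_m_from_person_id` in the demo configuration):

```cmd
sports2d --px_to_m_person_height 1.65 --visible_side left
```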
OpenSim inverse kinematics lets you set joint constraints and joint angle limits, constrain the bones to keep the same length throughout the motion, and optionally enforce equal segment sizes on the left and right sides. Overall, it gives more biomechanically accurate results. It also gives you the opportunity to compute joint torques, muscle forces, ground reaction forces, and more, for example [with MoCo](https://opensim-org.github.io/opensim-moco-site/).

This is done via [Pose2Sim](https://github.com/perfanalytics/pose2sim).\
Model scaling is based on the mean of the segment lengths across a subset of frames. We remove the 10% fastest frames (potential outliers), the frames where the speed is zero (the person is probably out of frame), the frames where the average knee and hip flexion angles are above 45° (pose estimation is not precise when the person is crouching), and, after these operations, the 20% most extreme segment values (potential outliers). All these parameters can be edited in your Config.toml file.
```cmd
sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left
```
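
The frame-selection thresholds mentioned above correspond to entries in the demo configuration; a sketch copied from the Config_demo.toml changes shown at the end of this page:

```toml
# Choosing best frames to scale the model (values from Config_demo.toml in this PR)
fastest_frames_to_remove_percent = 0.1   # Frames with high speed are considered as outliers
close_to_zero_speed_m = 0.2              # Sum for all keypoints: 0.2 m/frame
large_hip_knee_angles = 45               # Hip and knee angles below this value are considered as imprecise
trimmed_extrema_percent = 0.5            # Proportion of the most extreme segment values to remove before calculating their mean
```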
You can optionally use the LSTM marker augmentation to improve the quality of the output motion.\
You can also optionally provide the participants' masses. Mass has no influence on the motion itself, only on forces (if you decide to pursue kinetics analysis further).
```cmd
sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left --use_augmentation True
```
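
If you also want to pass the participants' masses on the command line, the new `participant_mass` parameter listed below is the natural candidate; a sketch, assuming the long-form flag matches the parameter name and accepts one value per analyzed person, as in the demo configuration:

```cmd
sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left --use_augmentation True --participant_mass 67 55
```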
'config': ["C", "path to a toml configuration file"],
'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non-ASCII characters"],
'px_to_m_person_height': ["H", "height of the person in meters. 1.70 if not specified"],
'visible_side': ["", "front, back, left, right, auto, or none. 'front none auto' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
'compare': ["", "visually compare motion with trc file. false if not specified"],
'webcam_id': ["w", "webcam ID. 0 if not specified"],
'do_ik': ["", "do inverse kinematics. false if not specified"],
'use_augmentation': ["", "Use LSTM marker augmentation. false if not specified"],
'use_contacts_muscles': ["", "Use model with contact spheres and muscles. false if not specified"],
'participant_mass': ["", "mass of the participant in kg or none. Defaults to 70 if not provided. No influence on kinematics (motion), only on kinetics (forces)"],
'close_to_zero_speed_m': ["", "Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
'multiperson': ["", "multiperson involves tracking: will be faster if set to false. true if not specified"],
'tracking_mode': ["", "sports2d or rtmlib. sports2d is generally much more accurate and comparable in speed. sports2d if not specified"],
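
Putting a few of these parameters together, a hypothetical invocation could look like the following (a sketch built only from the options listed above, assuming the long-form flags match the parameter names; not a command from the original documentation):

```cmd
sports2d -i video_path.mp4 -H 1.65 --multiperson true --tracking_mode sports2d --do_ik true --use_augmentation true
```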
4. **Chooses the right persons to keep.** In single-person mode, only keeps the person with the highest average scores over the sequence. In multi-person mode, only retrieves the keypoints with high enough confidence, and only keeps the persons with high enough average confidence over each frame.

5. **Converts the pixel coordinates to meters.** The user can provide a calibration file, or simply the size of a specified person. The floor angle and the coordinate origin can either be detected automatically from the gait sequence, or be manually specified. The depth coordinates are set to normative values, depending on whether the person is going left or right, facing the camera, or looking away.

6. **Computes the selected joint and segment angles**, and flips them on the left/right side if the respective foot is pointing to the left/right.

Changes to `Sports2D/Demo/Config_demo.toml` (+23, −17):

px_to_m_from_person_id = 2       # Person to use for pixels to meters conversion (not used if a calibration file is provided)
px_to_m_person_height = 1.65     # Height of the reference person in meters (for pixels -> meters conversion).
visible_side = ['front', 'none', 'auto'] # Choose visible side among ['right', 'left', 'front', 'back', 'auto', 'none']. String or list of strings.
                                 # if 'auto', will be either 'left', 'right', or 'front' depending on the direction of the motion
                                 # if 'none', coordinates will be left in 2D rather than 3D
load_trc_px = ''                 # If you do not want to recalculate pose, load it from a trc file (in px, not in m)
compare = false                  # Not implemented yet

# Video parameters
time_range = []                  # [] for the whole video, or [start_time, end_time] (in seconds), or [[start_time1, end_time1], [start_time2, end_time2], ...]
result_dir = ''                  # If empty, project dir is current dir
slowmo_factor = 1                # 1 for normal speed. For a video recorded at 240 fps and exported to 30 fps, it would be 240/30 = 8

# Pose detection parameters
pose_model = 'Body_with_feet'    # With RTMLib:
                                 # - Body_with_feet (default HALPE_26 model),
                                 # - Whole_body_wrist (COCO_133_WRIST: body + feet + 2 hand_points),
                                 # - Whole_body (COCO_133: body + feet + hands),
                                 # - Body (COCO_17). Marker augmentation won't work, kinematic analysis will work,
                                 # - Hand (HAND_21, only lightweight mode. Potentially better results with Whole_body),
                                 # - Face (FACE_106),
                                 # - Animal (ANIMAL2D_17)
                                 # /!\ Only RTMPose is natively embedded in Pose2Sim. For all other pose estimation methods, you will have to run them yourself, and then refer to the documentation to convert the output files if needed
                                 # /!\ For Face and Animal, use mode="""{dictionary}""", and find the corresponding .onnx model there: https://github.com/open-mmlab/mmpose/tree/main/projects/rtmpose
mode = 'balanced'                # 'lightweight', 'balanced', 'performance', or """{dictionary}""" (see below)

# A dictionary (WITHIN THREE DOUBLE QUOTES) allows you to manually select the person detection (if top_down approach) and/or pose estimation models (see https://github.com/Tau-J/rtmlib).
# More robust in crowded scenes but tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
calib_file = ''                  # Calibration in the Pose2Sim format. 'calib_demo.toml', or '' if not available

[angles]
display_angle_values_on = ['body', 'list'] # 'body', 'list', ['body', 'list'], 'none'. Display angle values on the body, as a list in the upper left of the image, both, or do not display them.

use_augmentation = true          # true or false (lowercase) # Set to true if you want to use the model with augmented markers
use_contacts_muscles = true      # true or false (lowercase) # If true, contact spheres and muscles are added to the model
participant_mass = [67.0, 55.0]  # kg # Defaults to 70 if not provided. No influence on kinematics (motion), only on kinetics (forces)
right_left_symmetry = true       # true or false (lowercase) # Set to false only if you have good reasons to think the participant is not symmetrical (e.g. prosthetic limb)

# Choosing best frames to scale the model
default_height = 1.7             # meters # If automatic height calculation did not work, this value is used to scale the model
fastest_frames_to_remove_percent = 0.1   # Frames with high speed are considered as outliers
close_to_zero_speed_px = 50      # Sum for all keypoints: about 50 px/frame
close_to_zero_speed_m = 0.2      # Sum for all keypoints: 0.2 m/frame
large_hip_knee_angles = 45       # Hip and knee angles below this value are considered as imprecise
trimmed_extrema_percent = 0.5    # Proportion of the most extreme segment values to remove before calculating their mean
remove_individual_scaling_setup = true   # true or false (lowercase) # If true, the individual scaling setup files are removed to avoid cluttering
remove_individual_ik_setup = true        # true or false (lowercase) # If true, the individual IK setup files are removed to avoid cluttering