SDK WebRTC Stats Monitoring #1293
base: main
Conversation
…le to send the `verto.answer` (#1263)
Fabric references renamed to Call
# Conflicts: # internal/e2e-client/playwright.config.ts # package-lock.json # packages/js/CHANGELOG.md # packages/js/package.json # packages/realtime-api/CHANGELOG.md # packages/realtime-api/package.json # packages/web-api/CHANGELOG.md # packages/web-api/package.json
- Remove browser-js workflows (production and staging)
- Remove realtime-api workflows (production and staging)
- Update unit-tests.yml to only run browser-client workflows
- Remove stack tests that reference deleted realtime-api package
- Add --passWithNoTests flag to test scripts to handle empty test suites

This focuses the CI/CD pipeline on the @signalwire/client package only.

- Fix test file path: buildVideoWithCallSDK.spec.ts -> buildVideoWithFabricSDK.spec.ts
- Add correct path prefixes for callfabric tests (callfabric/*.spec.ts)
- Fix renegotiation test paths to include callfabric/ prefix
- Disable relayApp.spec.ts test that depends on removed @signalwire/realtime-api

These configuration mismatches were causing e2e workflow failures.
This reverts commit e6537bf.
Merging the type renames for @signalwire/client back to the original repository
# Conflicts: # internal/e2e-js/fixtures.ts # internal/e2e-js/package.json # internal/e2e-js/playwright.config.ts # internal/e2e-js/tests/buildVideoWithVideoSDK.spec.ts # internal/e2e-js/tests/callfabric/agentCustomer.spec.ts # internal/e2e-js/tests/roomSession.spec.ts # internal/e2e-js/tests/roomSessionAudienceCount.spec.ts # internal/e2e-js/tests/roomSessionAutomaticStream.spec.ts # internal/e2e-js/tests/roomSessionBadNetwork.spec.ts # internal/e2e-js/tests/roomSessionCleanup.spec.ts # internal/e2e-js/tests/roomSessionDemote.spec.ts # internal/e2e-js/tests/roomSessionDemoteAudience.spec.ts # internal/e2e-js/tests/roomSessionDemotePromote.spec.ts # internal/e2e-js/tests/roomSessionDemoteReattachPromote.spec.ts # internal/e2e-js/tests/roomSessionDevices.spec.ts # internal/e2e-js/tests/roomSessionFollowLeader.spec.ts # internal/e2e-js/tests/roomSessionJoinFrom.spec.ts # internal/e2e-js/tests/roomSessionJoinUntil.spec.ts # internal/e2e-js/tests/roomSessionLocalStream.spec.ts # internal/e2e-js/tests/roomSessionLockUnlock.spec.ts # internal/e2e-js/tests/roomSessionMaxMembers.spec.ts # internal/e2e-js/tests/roomSessionMethodsOnNonExistingMembers.spec.ts # internal/e2e-js/tests/roomSessionMultipleStreams.spec.ts # internal/e2e-js/tests/roomSessionPromoteDemote.spec.ts # internal/e2e-js/tests/roomSessionPromoteMeta.spec.ts # internal/e2e-js/tests/roomSessionPromoteParticipant.spec.ts # internal/e2e-js/tests/roomSessionPromoteReattachDemote.spec.ts # internal/e2e-js/tests/roomSessionRaiseHand.spec.ts # internal/e2e-js/tests/roomSessionReattach.spec.ts # internal/e2e-js/tests/roomSessionReattachBadAuth.spec.ts # internal/e2e-js/tests/roomSessionReattachMultiple.spec.ts # internal/e2e-js/tests/roomSessionReattachScreenshare.spec.ts # internal/e2e-js/tests/roomSessionReattachWrongCallId.spec.ts # internal/e2e-js/tests/roomSessionReattachWrongProtocol.spec.ts # internal/e2e-js/tests/roomSessionRemoveAllMembers.spec.ts # internal/e2e-js/tests/roomSessionStreaming.spec.ts # internal/e2e-js/tests/roomSessionStreamingAPI.spec.ts # internal/e2e-js/tests/roomSessionTalkingEventsParticipant.spec.ts # internal/e2e-js/tests/roomSessionTalkingEventsToAudience.spec.ts # internal/e2e-js/tests/roomSessionUnauthorized.spec.ts # internal/e2e-js/tests/roomSessionUpdateMedia.spec.ts # internal/e2e-js/tests/roomSettings.spec.ts # internal/e2e-js/utils.ts # packages/js/src/fabric/FabricRoomSession.ts # packages/js/src/fabric/utils/validationProxy.ts # packages/js/src/fabric/utils/validators.ts
- Copy playground-js to playground-client
- Remove non-fabric samples (chat, pubSub, video, videoManager)
- Update imports to use @signalwire/client instead of @signalwire/js
- Add Vite configuration for proper module resolution
- Update e2e-client to use playground-client

🤖 Generated with Claude Code
Co-Authored-By: Claude <[email protected]>

- Apply Vite configuration fixes to properly resolve @signalwire/client modules
- Update blank template to explicitly import and expose SignalWire exports
- Add module aliases, optimization, and process definitions for browser compatibility
- This ensures window._SWJS is properly populated during e2e tests

The combination of playground-client and these fixes resolves the module loading issue.

🤖 Generated with Claude Code
Co-Authored-By: Claude <[email protected]>

- Remove direct dependency on @signalwire/realtime-api in test file
- Create relayAppWorker.js to execute realtime-api operations in Node.js context
- Add relayAppClient fixture to manage Node.js child processes
- Install @signalwire/realtime-api as dev dependency from npm registry
- Update all three test cases to use the new fixture pattern
- Maintain same test assertions and behavior

This change isolates the realtime-api package execution to a separate Node.js process, allowing the tests to work without importing the Node.js-only package directly.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Co-authored-by: Ammar Ansari <[email protected]>
Co-authored-by: Ammar Ansari <[email protected]>
Co-authored-by: Ammar Ansari <[email protected]>
Co-authored-by: Ammar Ansari <[email protected]>
…' into joao/sdk-stability
…ager.test.ts
- Removed unused 'BaselineComparison' import (TS6133)
- Removed unused 'MATH_CONSTANTS' import (TS6133)
- Build now passes without TypeScript errors
- All BaselineManager tests continue to pass (31/31)

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>

- Add checks before defining navigator.permissions and navigator.mediaDevices
- Handle cases where properties already exist in test environment
- Use Object.assign for mediaDevices to preserve existing functionality
- Fixes TypeError: Cannot redefine property errors in Jest tests

This ensures compatibility across different Jest/jsdom configurations.
export const WEBRTC_CONSTANTS = {
  /** Stats report types we monitor */
  MONITORED_STATS_TYPES: [
    'inbound-rtp',
Does 'inbound-rtp' include 'remote-inbound-rtp'?
@claude Does 'inbound-rtp' include 'remote-inbound-rtp'?
Claude finished @jpsantosbh's task
Analyzing RTCStats Types
- Read the constants.ts file to understand current implementation
- Research RTCStats specification for inbound-rtp vs remote-inbound-rtp
- Check how these stats are used in monitoring code
- Provide technical answer and recommendations
Analysis
No, 'inbound-rtp' does NOT include 'remote-inbound-rtp' stats. These are completely separate RTCStats types with different purposes:
'inbound-rtp': Local receiver perspective
- Provides stats about RTP streams being received by this peer
- Contains local receiver metrics (packets received, bytes received, jitter, etc.)
- Measured from the receiving endpoint's perspective
'remote-inbound-rtp': Remote sender's perspective of what the receiver gets
- Contains the remote sender's view of what the receiver is getting
- Includes round-trip time (RTT) measurements and remote packet loss
- Based on RTCP receiver reports sent back to the sender
Current Implementation Gap
Looking at packages/webrtc/src/monitoring/constants.ts:335 and MetricsCollector.ts:233, the monitoring system only processes 'inbound-rtp' stats but misses 'remote-inbound-rtp' stats. This means:
❌ Missing RTT measurements from remote perspective
❌ Missing remote packet loss calculations
❌ Incomplete network quality assessment
Recommendation
Add 'remote-inbound-rtp' to the MONITORED_STATS_TYPES array and update the stats parser to handle both types for complete network monitoring.
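As a rough illustration of that recommendation, a minimal sketch of handling both report types (the constant and parser names are assumptions based on the snippets above, not the SDK's actual code; field names follow the W3C webrtc-stats spec):

// Hypothetical sketch: extend the monitored types and read RTT/loss
// from 'remote-inbound-rtp' reports alongside local 'inbound-rtp' data.
const MONITORED_STATS_TYPES = ['inbound-rtp', 'remote-inbound-rtp'] as const

function parseAudioVideoReports(report: RTCStatsReport) {
  let jitter: number | undefined
  let remoteRtt: number | undefined
  let remotePacketsLost: number | undefined

  report.forEach((stat: any) => {
    if (!MONITORED_STATS_TYPES.includes(stat.type)) return

    if (stat.type === 'inbound-rtp') {
      // Local receiver view: what this peer actually received
      jitter = stat.jitter
    } else if (stat.type === 'remote-inbound-rtp') {
      // Remote sender view, derived from RTCP receiver reports
      remoteRtt = stat.roundTripTime
      remotePacketsLost = stat.packetsLost
    }
  })

  return { jitter, remoteRtt, remotePacketsLost }
}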
/** Minimum bandwidth in kbps */
minBandwidth?: number
/** Minimum audio level for detection */
minAudioLevel?: number
What if there is silence? This doesn't appear to be as meaningful as, say, the minimum bandwidth.
@claude What if there is silence? This doesn't appear to be as meaningful as, say, the minimum bandwidth
Claude finished @giavac's task
Analyzing Silence Handling in WebRTC Stats
- Read the interfaces.ts file to understand the current metric implementation
- Examine related monitoring code to understand metric usage
- Analyze silence period implications
- Provide technical analysis and recommendations
Analysis
You're absolutely correct about the minAudioLevel threshold (line 141). During silence periods, this metric becomes problematic compared to minBandwidth.
The Issue with minAudioLevel During Silence
Current Implementation Problems:
- minAudioLevel (line 141): Set to thresholds like 0.1, 0.05, or 0.01 across different monitoring presets
- Silence Detection: In MetricsCollector.ts:716, audio quality is penalized when audioLevel < 0.01
- Natural Silence: During legitimate silence (participants not speaking), audioLevel can be 0.0 or near zero
Why This Is Problematic:
❌ False Positives: Legitimate silence triggers "audio quality degraded" alerts
❌ Meaningless Metric: Audio level during silence doesn't indicate network issues
❌ Recovery Actions: System may attempt unnecessary recovery during normal silence periods
Why minBandwidth Is More Meaningful
minBandwidth Advantages During Silence:
✅ Network Capacity: Measures actual network throughput capability regardless of content
✅ Infrastructure Health: Low bandwidth indicates network congestion or connectivity issues
✅ Predictive: Can detect impending issues before they affect actual media transmission
✅ Silence Agnostic: Bandwidth measurement via candidatePair stats is independent of audio content
Implementation Context:
// packages/webrtc/src/monitoring/MetricsCollector.ts:524-532
const instantaneousBandwidth = (totalBytesDelta * 8) / (timeDelta * 1000)
// Bandwidth is calculated from actual byte throughput
Recommendations
1. Disable Audio Level Monitoring During Silence:
   - Add silence detection logic before applying minAudioLevel thresholds
   - Use bandwidth and packet flow as primary indicators during quiet periods
2. Enhanced Silence Detection:
   - Track audio energy patterns over time windows
   - Distinguish between network-caused audio issues vs. natural silence
3. Contextual Thresholds:
   - Apply different thresholds based on detected silence vs. active speech periods
   - Prioritize minBandwidth, packet loss, and RTT during silence
Bottom Line: minBandwidth provides consistent network health assessment regardless of audio content, while minAudioLevel creates noise in monitoring systems during normal conversation pauses.
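To make recommendation 3 concrete, a minimal sketch under stated assumptions (the QualityThresholds shape, the preset names, and the 0.01 silence cutoff are illustrative, not SDK code):

// Hypothetical threshold selection based on whether the recent window
// looks like natural silence; bandwidth checks stay active either way.
interface QualityThresholds {
  minBandwidth: number   // kbps
  minAudioLevel?: number // omitted entirely during silence
}

const ACTIVE_SPEECH_THRESHOLDS: QualityThresholds = {
  minBandwidth: 100,
  minAudioLevel: 0.01,
}

const SILENCE_THRESHOLDS: QualityThresholds = {
  minBandwidth: 100, // network health still matters while nobody speaks
  // no minAudioLevel: a quiet room is not a quality problem
}

function selectThresholds(recentAudioLevels: number[]): QualityThresholds {
  // Treat a sustained run of near-zero levels as natural silence.
  const silent = recentAudioLevels.every((level) => level < 0.01)
  return silent ? SILENCE_THRESHOLDS : ACTIVE_SPEECH_THRESHOLDS
}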
if (stat.kind === 'audio' && stat.audioLevel !== undefined) {
  audioLevel = stat.audioLevel
}
@claude Doesn't it make more sense to focus on totalAudioEnergy, which gives a summary of the accumulated audio levels received?
Claude finished @giavac's task
Analyzing totalAudioEnergy vs Current Audio Level Approach
- Read MetricsCollector.ts to understand current audio level implementation
- Research totalAudioEnergy in WebRTC RTCStats specification
- Analyze how this addresses silence detection issues from previous review
- Compare benefits and drawbacks of both approaches
- Provide technical analysis and recommendation
Analysis
You're absolutely right. totalAudioEnergy would be a significantly better metric than the current instantaneous audioLevel approach for quality monitoring.
Current Implementation Issues
Problems with audioLevel (line 397 in MetricsCollector.ts):
- Instantaneous measurement: Current approach uses stat.audioLevel, which represents audio level at a single point in time
- False positives during silence: Lines 716-718 penalize quality when audioLevel < 0.01, incorrectly flagging natural conversation pauses
- Noise sensitivity: Brief dips in audio level trigger quality degradation alerts
Why totalAudioEnergy Is Superior
Accumulated Energy Benefits:
✅ Cumulative measurement: totalAudioEnergy represents accumulated audio energy over time, providing trend analysis
✅ Silence differentiation: Can distinguish between network-caused audio dropouts vs. natural silence periods
✅ Trend detection: Allows monitoring of energy accumulation rate rather than instantaneous snapshots
✅ Already collected: The monitoring system already captures this metric in MetricsCollector.ts:524
Evidence from E2E Tests:
The codebase already recognizes totalAudioEnergy's value - your E2E tests (in internal/e2e-client/utils.ts) use it for silence detection:
- Genuine silence detection: Comments note "If there is genuine silence, then totalAudioEnergy must be present, albeit being a small number"
- Network issue distinction: Tests differentiate between missing totalAudioEnergy (network problems) vs. low values (natural silence)
Recommended Implementation
Replace instantaneous audio level checking with energy rate analysis:
// Instead of: if (metrics.audioLevel < 0.01) score -= 20
// Use energy accumulation rate over time windows:
const energyRate = (currentTotalAudioEnergy - previousTotalAudioEnergy) / timeDelta
if (energyRate < expectedSilenceThreshold && packetsReceived > minPacketThreshold) {
// Only penalize if we expect audio but energy isn't accumulating
score -= 20
}
Key advantages:
- Context-aware: Only flags low energy when packets are being received but energy isn't accumulating
- Trend-based: Monitors energy accumulation rate rather than instantaneous levels
- Silence-resistant: Natural conversation pauses don't trigger false alarms
Bottom line: totalAudioEnergy provides the accumulated context needed to distinguish between network quality issues and natural silence periods - exactly what quality monitoring systems need.
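For reference, a self-contained sketch of sampling totalAudioEnergy from the browser's getStats() API and deriving an energy rate between samples (the polling structure and helper names are illustrative, not the SDK's MetricsCollector):

// Read accumulated audio energy from the 'inbound-rtp' audio report
// and compute the accumulation rate between two samples (per second).
async function sampleAudioEnergy(pc: RTCPeerConnection) {
  const report = await pc.getStats()
  let energy = 0
  let timestamp = 0
  report.forEach((stat: any) => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'audio') {
      energy = stat.totalAudioEnergy ?? 0
      timestamp = stat.timestamp
    }
  })
  return { energy, timestamp }
}

function energyRate(
  prev: { energy: number; timestamp: number },
  curr: { energy: number; timestamp: number }
): number {
  const dtSec = (curr.timestamp - prev.timestamp) / 1000
  return dtSec > 0 ? (curr.energy - prev.energy) / dtSec : 0
}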
// Penalize low audio level
if (metrics.audioLevel < 0.01) {
  score -= 20
}
Again, I'm unsure this is meaningful.
private calculateOverallQualityScore(metrics: StatsMetrics): number {
  let score = 100

  // Base penalties for key metrics
  score -= metrics.packetLoss * 350 // 10% loss = 35 point penalty
  score -= Math.min(metrics.jitter / 2, 25) // Cap jitter penalty at 25 points
  score -= Math.min((metrics.rtt - 100) * 0.1, 15) // Cap RTT penalty at 15 points

  // Bandwidth penalty if very low
  if (metrics.bandwidth < 100) {
    // Less than 100 kbps
    score -= (100 - metrics.bandwidth) * 0.2
  }

  return Math.max(0, Math.min(100, Math.round(score)))
What metrics are used here? The ones from audio or from video?
Why not compute an overall score from the score for audio and the one for video (if present)?
@claude could you analyze the questions in this comment #1293 (comment)
Claude finished @jpsantosbh's task
Analyzing Metrics Scoring Questions
- Read MetricsCollector.ts to understand current scoring implementation
- Analyze what metrics are used in the scoring system (audio vs video)
- Evaluate the reviewer's suggestion about separate audio/video scores
- Provide technical analysis and recommendations
Analysis of MetricsCollector.ts:784 Scoring System
Looking at the reviewer's questions about calculateOverallQualityScore (lines 770-784) and the broader scoring architecture:
🔍 What Metrics Are Used?
The calculateOverallQualityScore method uses MIXED metrics:
// MetricsCollector.ts:770-784
private calculateOverallQualityScore(metrics: StatsMetrics): number {
  let score = 100

  // Network-level metrics (apply to both audio/video):
  score -= metrics.packetLoss * 350 // Packet loss affects all media
  score -= Math.min(metrics.jitter / 2, 25) // Jitter affects all media
  score -= Math.min((metrics.rtt - 100) * 0.1, 15) // RTT affects all media

  // Infrastructure metric (not media-specific):
  if (metrics.bandwidth < 100) {
    score -= (100 - metrics.bandwidth) * 0.2 // Low bandwidth affects all
  }

  return Math.max(0, Math.min(100, Math.round(score)))
}
Issue Identified:
❌ Mixed metric types without media differentiation
❌ No consideration of whether audio/video is present
❌ Same penalty weights regardless of media type
🎯 Why Separate Audio/Video Scores Matter
The system ALREADY has separate scoring methods:
- calculateAudioQualityScore() (lines 693-721) - Uses audio-specific metrics
- calculateVideoQualityScore() (lines 729-761) - Uses video-specific metrics
- calculateOverallQualityScore() (lines 770-784) - Problem area
Current Architecture Issues:
1. Audio vs Video Have Different Sensitivities:
   // Audio scoring (line 703): 10% packet loss = 30 point penalty
   score -= metrics.packetLoss * 300
   // Video scoring (line 739): 10% packet loss = 40 point penalty
   score -= metrics.packetLoss * 400
   // Overall scoring (line 774): 10% packet loss = 35 point penalty
   score -= metrics.packetLoss * 350 // ❌ Arbitrary middle value
2. No Context Awareness:
   - Overall score penalizes video metrics even in audio-only calls
   - No weighting based on active media types
   - Ignores the separate audio/video scores that were already calculated
✅ Recommended Solution
Replace the current calculateOverallQualityScore with a composite approach:
private calculateOverallQualityScore(metrics: StatsMetrics): number {
  const audioScore = this.calculateAudioQualityScore(metrics)
  const videoScore = this.calculateVideoQualityScore(metrics)

  // Determine active media types
  const hasAudio = metrics.audioLevel !== undefined
  const hasVideo = metrics.frameRate !== undefined

  if (hasAudio && hasVideo) {
    // Weighted average based on media importance
    return Math.round((audioScore || 0) * 0.4 + (videoScore || 0) * 0.6)
  } else if (hasAudio) {
    return audioScore || this.calculateNetworkOnlyScore(metrics)
  } else if (hasVideo) {
    return videoScore || this.calculateNetworkOnlyScore(metrics)
  } else {
    // Fallback to network-only metrics
    return this.calculateNetworkOnlyScore(metrics)
  }
}

private calculateNetworkOnlyScore(metrics: StatsMetrics): number {
  // Current logic but as fallback only
  let score = 100
  score -= metrics.packetLoss * 350
  score -= Math.min(metrics.jitter / 2, 25)
  score -= Math.min((metrics.rtt - 100) * 0.1, 15)
  if (metrics.bandwidth < 100) {
    score -= (100 - metrics.bandwidth) * 0.2
  }
  return Math.max(0, Math.min(100, Math.round(score)))
}
📊 Key Benefits
✅ Context-Aware Scoring: Different scoring for audio-only, video-only, and mixed calls
✅ Leverages Existing Logic: Reuses the well-designed audio/video specific scoring
✅ Proper Weighting: Video typically gets higher weight due to bandwidth sensitivity
✅ Maintains Backward Compatibility: Network-only fallback preserves current behavior when media info unavailable
The reviewer is absolutely correct - computing an overall score from separate audio/video scores is much more meaningful than the current approach that arbitrarily mixes network and media-specific penalties without considering what media types are actually active.
I am not sure about the weights, but I think we can do this and put those weights in consts.
yes, it looks better this way.
The weights may require some tuning/experimentation.
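A possible shape for that (the constant names and default values below are placeholders pending the tuning mentioned above, not agreed-upon numbers):

// Hypothetical quality-score weights kept in a constants module so they
// can be tuned without touching the scoring logic.
export const QUALITY_SCORE_WEIGHTS = {
  /** Weight of the audio score when both audio and video are active */
  AUDIO: 0.4,
  /** Weight of the video score when both audio and video are active */
  VIDEO: 0.6,
} as const

// Usage sketch inside the composite scorer:
// return Math.round(
//   audioScore * QUALITY_SCORE_WEIGHTS.AUDIO +
//   videoScore * QUALITY_SCORE_WEIGHTS.VIDEO
// )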
const audioIssues = issues.filter((issue) => issue.mediaType === 'audio')
if (audioIssues.length > 0) {
  this.logger.debug(
    `Handling ${audioIssues.length} audio issues with track restoration`
  )
  await this.restoreAudioTrack()
}
If handleKeyFrameRequest() is called, but there aren't any video issues, why do we want to intervene on the audio tracks?
What do you mean? It's only going to call requestKeyframe if any video issue exists, and restoreAudioTrack if audio issues exist.
Sure, but it's a function called handleKeyFrameRequest(), so I would expect it to only impact video, regardless of audio issues. In particular, it would seem surprising that it could trigger changes at the audio level when there aren't video issues detected.
It's the "key frame" term, which I associate with video only, that makes it feel misleading.
I see
I think the audio thing is all messy... It probably got confused with this other issue #1294 that the original PR to Cantina was trying to solve.
Maybe we should remove the audio check from the metrics.
When referring to a Key Frame, I also think we should only care about video.
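For illustration, a minimal sketch of splitting the handler so key-frame requests touch only video (the QualityIssue type and the surrounding class shape are assumptions; requestKeyframe and restoreAudioTrack mirror the methods discussed above):

// Hypothetical video-only recovery path, with audio routed to a
// separately named handler instead of handleKeyFrameRequest().
private async handleKeyFrameRequest(issues: QualityIssue[]): Promise<void> {
  const videoIssues = issues.filter((issue) => issue.mediaType === 'video')
  if (videoIssues.length === 0) return

  this.logger.debug(`Requesting keyframe for ${videoIssues.length} video issues`)
  await this.requestKeyframe()
}

private async handleAudioRecovery(issues: QualityIssue[]): Promise<void> {
  const audioIssues = issues.filter((issue) => issue.mediaType === 'audio')
  if (audioIssues.length === 0) return

  this.logger.debug(`Restoring audio track for ${audioIssues.length} audio issues`)
  await this.restoreAudioTrack()
}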
private async reduceVideoQuality(): Promise<void> {
  try {
    // Apply lower quality constraints
    await this.rtcPeer.applyMediaConstraints('video', {
      width: { ideal: 320 },
      height: { ideal: 240 },
      frameRate: { ideal: 15 },
    })
  } catch (error) {
    this.logger.warn('Failed to reduce video quality:', error)
  }
}
Does it make sense to apply these hardcoded values? At what point will the original video quality settings be restored if the issues disappear? (Maybe it's clear from other parts I haven't seen yet)
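One way to address both points (named constants instead of literals, plus an explicit restore path) might look like the sketch below; the constant names, the default values, the restoreVideoQuality method, and the recovery trigger are all assumptions, not part of this PR:

// Hypothetical reduced/default video constraints and a restore path to
// be invoked once the quality score stays healthy for some period.
const REDUCED_VIDEO_CONSTRAINTS = {
  width: { ideal: 320 },
  height: { ideal: 240 },
  frameRate: { ideal: 15 },
}

const DEFAULT_VIDEO_CONSTRAINTS = {
  width: { ideal: 1280 },
  height: { ideal: 720 },
  frameRate: { ideal: 30 },
}

private videoQualityReduced = false

private async reduceVideoQuality(): Promise<void> {
  await this.rtcPeer.applyMediaConstraints('video', REDUCED_VIDEO_CONSTRAINTS)
  this.videoQualityReduced = true
}

// Counterpart: call when the monitor reports sustained recovery.
private async restoreVideoQuality(): Promise<void> {
  if (!this.videoQualityReduced) return
  await this.rtcPeer.applyMediaConstraints('video', DEFAULT_VIDEO_CONSTRAINTS)
  this.videoQualityReduced = false
}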
Co-authored-by: Giacomo Vacca <[email protected]>
Co-authored-by: Giacomo Vacca <[email protected]>
Claude finished @jpsantosbh's task
PR Code Review: SDK WebRTC Stats Monitoring
I've conducted a comprehensive technical review of this WebRTC monitoring implementation. The scope is substantial - adding real-time network quality monitoring with a three-tier recovery system across 31 files. Here are my key findings:
Review Tasks:
🏗️ Architecture & Design Quality: Excellent
Strengths:
📊 Critical Issues Requiring Attention
1. Audio Level Metrics - False Silence Detection
Claude finished @jpsantosbh's task
PR Review: WebRTC Stats Monitoring
I've completed a comprehensive review of this WebRTC monitoring implementation. Here's my detailed analysis:
Review Tasks
Architecture & Design Quality ✅
Strengths:
Issues Identified & Recommendations
🚨 Critical: Metrics Scoring Logic Issues
1. AudioLevel vs TotalAudioEnergy Problem (
// Instead of: if (metrics.audioLevel < 0.01) score -= 20
const energyRate = (currentTotalAudioEnergy - previousTotalAudioEnergy) / timeDelta
if (energyRate < expectedSilenceThreshold && packetsReceived > minPacketThreshold) {
score -= 20 // Only penalize if packets received but no energy accumulating
}
2. Mixed Metric Scoring Without Context (
private calculateOverallQualityScore(metrics: StatsMetrics): number {
const audioScore = this.calculateAudioQualityScore(metrics)
const videoScore = this.calculateVideoQualityScore(metrics)
const hasAudio = metrics.audioLevel !== undefined
const hasVideo = metrics.frameRate !== undefined
if (hasAudio && hasVideo) {
return Math.round((audioScore || 0) * 0.4 + (videoScore || 0) * 0.6)
}
return hasAudio ? audioScore : hasVideo ? videoScore : this.calculateNetworkOnlyScore(metrics)
}
Co-authored-by: Giacomo Vacca <[email protected]>
Summary
This was based on this original PR: https://github.com/signalwire/cantina/pull/1509
This PR contains new WebRTC monitoring capabilities.
Major Changes
🚀 WebRTC Stats Monitoring Feature
enableStatsMonitoring(), getNetworkQuality(), getConnectionStats()
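As a rough illustration (not part of this PR), here is how an application might exercise these three methods; only the method names come from this summary, while the session object shape, polling interval, and return types are assumptions:

// Hypothetical usage of the monitoring API on a call/room session object.
async function monitorCall(session: {
  enableStatsMonitoring: () => Promise<void>
  getNetworkQuality: () => Promise<{ score: number }>
  getConnectionStats: () => Promise<Record<string, unknown>>
}) {
  await session.enableStatsMonitoring()

  // Poll the aggregated quality score periodically.
  setInterval(async () => {
    const quality = await session.getNetworkQuality()
    if (quality.score < 50) {
      console.warn('Degraded network quality', await session.getConnectionStats())
    }
  }, 5000)
}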
API Impact
✅ Backward Compatibility Maintained
Testing Status
Next Steps
This will have to be tested in real life.
🤖 Generated with Claude Code
Co-Authored-By: Claude [email protected]