WebKit Bugzilla
Attachment 356839 Details for Bug 192517: Update libwebrtc up to 2fb890f08c
Patch: bug-192517-20181207145431.patch (text/plain), 167.03 KB, created by youenn fablet on 2018-12-07 14:54:32 PST
Description: Patch
Filename: bug-192517-20181207145431.patch
MIME Type: text/plain
Creator: youenn fablet
Created: 2018-12-07 14:54:32 PST
Size: 167.03 KB
Flags: patch, obsolete
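Among the upstream changes merged below, the encoded_image.h hunk adds explicit size()/set_size()/capacity() accessors over EncodedImage's legacy _length/_size fields, with set_size() checked against the allocated capacity. A minimal standalone sketch of that accessor pattern — a simplified stand-in, not the actual webrtc::EncodedImage class:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Simplified stand-in for the EncodedImage change: the allocated capacity
// (_size) and the used payload length (_length) are tracked separately,
// and set_size() may only resize within the existing capacity.
class EncodedImageSketch {
 public:
  size_t size() const { return _length; }
  void set_size(size_t new_size) {
    assert(new_size <= _size);  // mirrors RTC_DCHECK_LE(new_size, _size)
    _length = new_size;
  }
  size_t capacity() const { return _size; }

  // The buffer is externally owned, as in the patched class.
  void set_buffer(uint8_t* buffer, size_t capacity) {
    _buffer = buffer;
    _size = capacity;
  }

 private:
  uint8_t* _buffer = nullptr;
  size_t _size = 0;    // allocated capacity in bytes
  size_t _length = 0;  // bytes of encoded payload actually used
};
```

Keeping capacity and payload length as separate accessors lets callers reuse one allocation across frames of varying encoded size.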
>Subversion Revision: 238900 >diff --git a/Source/ThirdParty/libwebrtc/ChangeLog b/Source/ThirdParty/libwebrtc/ChangeLog >index 57055e3fbe23d4b0d04699d292230d9a07f71bd3..40043d7f301c7b6da550c318be29d456e9d39903 100644 >--- a/Source/ThirdParty/libwebrtc/ChangeLog >+++ b/Source/ThirdParty/libwebrtc/ChangeLog >@@ -1,3 +1,83 @@ >+2018-12-07 Youenn Fablet <youenn@apple.com> >+ >+ Update libwebrtc up to 2fb890f08c >+ https://bugs.webkit.org/show_bug.cgi?id=192517 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ Merge changes to track libwebrtc M72. >+ >+ * Source/webrtc/DEPS: >+ * Source/webrtc/api/audio/echo_canceller3_config.h: >+ * Source/webrtc/api/rtp_headers.h: >+ * Source/webrtc/api/video/encoded_frame.h: >+ * Source/webrtc/api/video/encoded_image.h: >+ * Source/webrtc/call/rtp_transport_controller_send_interface.h: >+ * Source/webrtc/call/video_receive_stream.h: >+ * Source/webrtc/call/video_send_stream.h: >+ * Source/webrtc/common_types.h: >+ (webrtc::RtcpStatistics::RtcpStatistics): >+ (webrtc::RtcpStatisticsCallback::~RtcpStatisticsCallback): >+ * Source/webrtc/logging/rtc_event_log/rtc_event_log_impl.cc: >+ * Source/webrtc/media/engine/webrtcvideoengine.cc: >+ * Source/webrtc/modules/audio_coding/BUILD.gn: >+ * Source/webrtc/modules/audio_coding/neteq/neteq_unittest.cc: >+ * Source/webrtc/modules/audio_processing/aec3/BUILD.gn: >+ * Source/webrtc/modules/audio_processing/aec3/aec_state.cc: >+ * Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.cc: Removed. >+ * Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.h: Removed. >+ * Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics_unittest.cc: Removed. 
>+ * Source/webrtc/modules/audio_processing/aec3/echo_canceller3.cc: >+ * Source/webrtc/modules/audio_processing/aec3/echo_canceller3.h: >+ * Source/webrtc/modules/audio_processing/aec3/filter_analyzer.cc: >+ * Source/webrtc/modules/audio_processing/aec3/filter_analyzer.h: >+ * Source/webrtc/modules/audio_processing/aec3/suppression_gain.cc: >+ * Source/webrtc/modules/rtp_rtcp/BUILD.gn: >+ * Source/webrtc/modules/rtp_rtcp/include/receive_statistics.h: >+ * Source/webrtc/modules/rtp_rtcp/include/rtcp_statistics.h: Removed. >+ * Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp.h: >+ * Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp_defines.h: >+ * Source/webrtc/modules/rtp_rtcp/source/rtcp_receiver.h: >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_header_extension_map.cc: >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.cc: >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.h: >+ (webrtc::HdrMetadataExtension::ValueSize): >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_packet_received.cc: >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_packet_unittest.cc: >+ * Source/webrtc/modules/rtp_rtcp/source/rtp_utility.cc: >+ * Source/webrtc/modules/video_coding/codecs/test/videocodec_test_libvpx.cc: >+ * Source/webrtc/modules/video_coding/codecs/vp9/test/vp9_impl_unittest.cc: >+ * Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.cc: >+ * Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.h: >+ * Source/webrtc/modules/video_coding/encoded_frame.h: >+ (webrtc::VCMEncodedFrame::video_timing_mutable): >+ (webrtc::VCMEncodedFrame::SetCodecSpecific): >+ * Source/webrtc/modules/video_coding/frame_buffer2.cc: >+ * Source/webrtc/modules/video_coding/frame_buffer2.h: >+ * Source/webrtc/modules/video_coding/frame_buffer2_unittest.cc: >+ * Source/webrtc/modules/video_coding/frame_object.cc: >+ * Source/webrtc/modules/video_coding/rtp_frame_reference_finder.cc: >+ * Source/webrtc/p2p/base/p2ptransportchannel.cc: >+ * 
Source/webrtc/p2p/base/p2ptransportchannel_unittest.cc: >+ * Source/webrtc/p2p/base/port.cc: >+ * Source/webrtc/p2p/base/port.h: >+ * Source/webrtc/p2p/client/basicportallocator.cc: >+ * Source/webrtc/p2p/client/basicportallocator.h: >+ * Source/webrtc/p2p/client/basicportallocator_unittest.cc: >+ * Source/webrtc/pc/peerconnection.cc: >+ * Source/webrtc/pc/rtcstats_integrationtest.cc: >+ * Source/webrtc/pc/test/peerconnectiontestwrapper.cc: >+ * Source/webrtc/pc/test/peerconnectiontestwrapper.h: >+ * Source/webrtc/rtc_base/stringize_macros.h: >+ * Source/webrtc/sdk/objc/components/video_codec/nalu_rewriter_unittest.cc: Removed. >+ * Source/webrtc/test/fuzzers/rtp_packet_fuzzer.cc: >+ * Source/webrtc/tools_webrtc/ios/internal.client.webrtc/iOS64_Perf.json: >+ * Source/webrtc/tools_webrtc/ios/internal.tryserver.webrtc/ios_arm64_perf.json: >+ * Source/webrtc/tools_webrtc/whitespace.txt: >+ * Source/webrtc/video/report_block_stats.h: >+ * Source/webrtc/video/rtp_video_stream_receiver.cc: >+ * Source/webrtc/video/video_receive_stream.cc: >+ > 2018-12-06 Youenn Fablet <youenn@apple.com> > > Update libwebrtc up to 0d007d7c4f >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/DEPS b/Source/ThirdParty/libwebrtc/Source/webrtc/DEPS >index 555c27f10e7e199730006d4db97d360d2c158376..2987251541007a6b236222b9ca2e5d949394aad9 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/DEPS >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/DEPS >@@ -7,16 +7,16 @@ vars = { > 'checkout_configuration': 'default', > 'checkout_instrumented_libraries': 'checkout_linux and checkout_configuration == "default"', > 'webrtc_git': 'https://webrtc.googlesource.com', >- 'chromium_revision': 'b04e513f825d6ce22e147601cbfc75ea23eb9073', >+ 'chromium_revision': '81c26a093bfdfe868dff4e3e611edcfd0486b641', > 'boringssl_git': 'https://boringssl.googlesource.com', > # Three lines of non-changing comments so that > # the commit queue can handle CLs rolling swarming_client > # and whatever else without 
interference from each other. >- 'swarming_revision': '157bec8a25cc4ebd6a16052510d08b05b6102aad', >+ 'swarming_revision': 'b6e9e23e4e79249bd4f95735205ffb7c3f9f0912', > # Three lines of non-changing comments so that > # the commit queue can handle CLs rolling BoringSSL > # and whatever else without interference from each other. >- 'boringssl_revision': '6965d25602754bc419c5f757d008ba1f4da49ae4', >+ 'boringssl_revision': 'f241a59dcca617c5b9d9880a8a9fd92996a654be', > # Three lines of non-changing comments so that > # the commit queue can handle CLs rolling lss > # and whatever else without interference from each other. >@@ -42,9 +42,9 @@ deps = { > # TODO(kjellander): Move this to be Android-only once the libevent dependency > # in base/third_party/libevent is solved. > 'src/base': >- Var('chromium_git') + '/chromium/src/base' + '@' + 'b9901f9bc5b65a1169d5db696ef4428a65cec622', >+ Var('chromium_git') + '/chromium/src/base' + '@' + '3c9ac70552b5023eba32209626c9f058b4412696', > 'src/build': >- Var('chromium_git') + '/chromium/src/build' + '@' + 'bbd67a350d745200e5798cace11a28edfd9fc3b2', >+ Var('chromium_git') + '/chromium/src/build' + '@' + '076d347a5645411e75265532ba94e37f80970bb1', > 'src/buildtools': > Var('chromium_git') + '/chromium/buildtools.git' + '@' + '04161ec8d7c781e4498c699254c69ba0dd959fde', > # Gradle 4.3-rc4. Used for testing Android Studio project generation for WebRTC. 
>@@ -54,13 +54,13 @@ deps = { > 'condition': 'checkout_android', > }, > 'src/ios': { >- 'url': Var('chromium_git') + '/chromium/src/ios' + '@' + 'eb9e8e09dc4b28e691cfd70242ba50ceecb6ab20', >+ 'url': Var('chromium_git') + '/chromium/src/ios' + '@' + 'aa06ff9dc9b4711741e3a0f70f149ffd05418556', > 'condition': 'checkout_ios', > }, > 'src/testing': >- Var('chromium_git') + '/chromium/src/testing' + '@' + '5a776bca050aedfb0269cdd8408ff0559acde2aa', >+ Var('chromium_git') + '/chromium/src/testing' + '@' + 'eeb483a8343ede0ea409815af78b32061d8b3fde', > 'src/third_party': >- Var('chromium_git') + '/chromium/src/third_party' + '@' + '77f6c16720eff7721b0c0b2438ce9126f59e1f0b', >+ Var('chromium_git') + '/chromium/src/third_party' + '@' + '3a20b86d38a3e17aa3cf24f6e36192013b535571', > 'src/third_party/android_ndk': { > 'url': Var('chromium_git') + '/android_ndk.git' + '@' + '4e2cea441bfd43f0863d14f57b1e1844260b9884', > 'condition': 'checkout_android', >@@ -107,7 +107,7 @@ deps = { > 'src/third_party/colorama/src': > Var('chromium_git') + '/external/colorama.git' + '@' + '799604a1041e9b3bc5d2789ecbd7e8db2e18e6b8', > 'src/third_party/depot_tools': >- Var('chromium_git') + '/chromium/tools/depot_tools.git' + '@' + '44d4b29082f0d8bacacd623f91c4d29637b4b901', >+ Var('chromium_git') + '/chromium/tools/depot_tools.git' + '@' + '6c18a1afa1a7a30421775c8d6c1cfb7edb3f272a', > 'src/third_party/errorprone/lib': { > 'url': Var('chromium_git') + '/chromium/third_party/errorprone.git' + '@' + '980d49e839aa4984015efed34b0134d4b2c9b6d7', > 'condition': 'checkout_android', >@@ -225,7 +225,7 @@ deps = { > 'src/third_party/yasm/source/patched-yasm': > Var('chromium_git') + '/chromium/deps/yasm/patched-yasm.git' + '@' + '720b70524a4424b15fc57e82263568c8ba0496ad', > 'src/tools': >- Var('chromium_git') + '/chromium/src/tools' + '@' + 'b47c6315878eabd73bf71c1aa0cbd9b816716789', >+ Var('chromium_git') + '/chromium/src/tools' + '@' + '627b8cb5aff2be7ed4c202f685ba6bd2957543bf', > 
'src/tools/swarming_client': > Var('chromium_git') + '/infra/luci/client-py.git' + '@' + Var('swarming_revision'), > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/api/audio/echo_canceller3_config.h b/Source/ThirdParty/libwebrtc/Source/webrtc/api/audio/echo_canceller3_config.h >index ea6e51baf9c53d1236f0759556b26e2d921ac769..ffe17f2b8b5d74f200899a9c17d24905b9bcfa9c 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/api/audio/echo_canceller3_config.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/api/audio/echo_canceller3_config.h >@@ -182,8 +182,8 @@ struct RTC_EXPORT EchoCanceller3Config { > 0.25f); > > struct DominantNearendDetection { >- float enr_threshold = 4.f; >- float enr_exit_threshold = .1f; >+ float enr_threshold = .25f; >+ float enr_exit_threshold = 10.f; > float snr_threshold = 30.f; > int hold_duration = 50; > int trigger_threshold = 12; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/api/rtp_headers.h b/Source/ThirdParty/libwebrtc/Source/webrtc/api/rtp_headers.h >index c766899b9fd88f4a2c2ee9090f7f8cb69e9462bd..3e51f43591aeffe8a9337c245ad1500543074e63 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/api/rtp_headers.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/api/rtp_headers.h >@@ -17,7 +17,7 @@ > > #include "absl/types/optional.h" > #include "api/array_view.h" >-#include "api/video/color_space.h" >+#include "api/video/hdr_metadata.h" > #include "api/video/video_content_type.h" > #include "api/video/video_frame_marking.h" > #include "api/video/video_rotation.h" >@@ -129,7 +129,7 @@ struct RTPHeaderExtension { > // https://tools.ietf.org/html/draft-ietf-mmusic-sdp-bundle-negotiation-38 > Mid mid; > >- absl::optional<ColorSpace> color_space; >+ absl::optional<HdrMetadata> hdr_metadata; > }; > > struct RTPHeader { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_frame.h b/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_frame.h >index 
b8462c6c2c90447749fc0b4ceec4c62fe5917b32..a0ef35cdbee5efa39ef494ad1c3ffb9fd4b9c98f 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_frame.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_frame.h >@@ -80,6 +80,9 @@ class EncodedFrame : public webrtc::VCMEncodedFrame { > size_t num_references = 0; > int64_t references[kMaxFrameReferences]; > bool inter_layer_predicted = false; >+ // Is this subframe the last one in the superframe (In RTP stream that would >+ // mean that the last packet has a marker bit set). >+ bool is_last_spatial_layer = true; > }; > > } // namespace video_coding >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_image.h b/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_image.h >index a7c719ce3a5678d51ac24fc732039eebd1b90df0..de1a25a3c37e73538ae0bb1832818e9f8614fc4e 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_image.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/api/video/encoded_image.h >@@ -67,6 +67,18 @@ class RTC_EXPORT EncodedImage { > color_space ? absl::make_optional(*color_space) : absl::nullopt; > } > >+ size_t size() const { return _length; } >+ void set_size(size_t new_size) { >+ RTC_DCHECK_LE(new_size, _size); >+ _length = new_size; >+ } >+ size_t capacity() const { return _size; } >+ >+ void set_buffer(uint8_t* buffer, size_t capacity) { >+ _buffer = buffer; >+ _size = capacity; >+ } >+ > uint32_t _encodedWidth = 0; > uint32_t _encodedHeight = 0; > // NTP time of the capture time in local timebase in milliseconds. 
>diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/call/rtp_transport_controller_send_interface.h b/Source/ThirdParty/libwebrtc/Source/webrtc/call/rtp_transport_controller_send_interface.h >index 9868585cdf8113928a1ad63dd65eecc971cc2f7c..219c36de35f97042c9429b1b739f2c6c09fc32d5 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/call/rtp_transport_controller_send_interface.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/call/rtp_transport_controller_send_interface.h >@@ -25,7 +25,6 @@ > #include "api/transport/bitrate_settings.h" > #include "call/rtp_config.h" > #include "logging/rtc_event_log/rtc_event_log.h" >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > > namespace rtc { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_receive_stream.h b/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_receive_stream.h >index 2cd5f1631a09b15a0195315b7eb88c3f16fc7179..94cbfcc2f195f13de768b517154b435505d4efda 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_receive_stream.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_receive_stream.h >@@ -26,7 +26,6 @@ > #include "api/video/video_timing.h" > #include "api/video_codecs/sdp_video_format.h" > #include "call/rtp_config.h" >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > > namespace webrtc { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_send_stream.h b/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_send_stream.h >index 4d3d9c0c574cb7f76c6e9f9b95140284ef6ebe42..a76e82e389e94ec42b16efd965c740b40c05166e 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_send_stream.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/call/video_send_stream.h >@@ -27,7 +27,6 @@ > #include "api/video/video_stream_encoder_settings.h" > #include "api/video_codecs/video_encoder_config.h" > #include "call/rtp_config.h" 
>-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > > namespace webrtc { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/common_types.h b/Source/ThirdParty/libwebrtc/Source/webrtc/common_types.h >index 848b899a0dd1f489c526d0b25451a5504c2b2b38..41b901cec261e4331beafe30d33e60e354bc065d 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/common_types.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/common_types.h >@@ -39,6 +39,29 @@ enum FrameType { > kVideoFrameDelta = 4, > }; > >+// Statistics for an RTCP channel >+struct RtcpStatistics { >+ RtcpStatistics() >+ : fraction_lost(0), >+ packets_lost(0), >+ extended_highest_sequence_number(0), >+ jitter(0) {} >+ >+ uint8_t fraction_lost; >+ int32_t packets_lost; // Defined as a 24 bit signed integer in RTCP >+ uint32_t extended_highest_sequence_number; >+ uint32_t jitter; >+}; >+ >+class RtcpStatisticsCallback { >+ public: >+ virtual ~RtcpStatisticsCallback() {} >+ >+ virtual void StatisticsUpdated(const RtcpStatistics& statistics, >+ uint32_t ssrc) = 0; >+ virtual void CNameChanged(const char* cname, uint32_t ssrc) = 0; >+}; >+ > // Statistics for RTCP packet types. > struct RtcpPacketTypeCounter { > RtcpPacketTypeCounter() >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/logging/rtc_event_log/rtc_event_log_impl.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/logging/rtc_event_log/rtc_event_log_impl.cc >index c022a3d4185cf143b94e0e281bc6fe725cee3672..c5843d93751f0049e33ead98cd277e13b6a4d1ec 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/logging/rtc_event_log/rtc_event_log_impl.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/logging/rtc_event_log/rtc_event_log_impl.cc >@@ -159,6 +159,12 @@ RtcEventLogImpl::~RtcEventLogImpl() { > > // If we're logging to the output, this will stop that. Blocking function. 
> StopLogging(); >+ >+ // We want to block on any executing task by invoking ~TaskQueue() before >+ // we set unique_ptr's internal pointer to null. >+ rtc::TaskQueue* tq = task_queue_.get(); >+ delete tq; >+ task_queue_.release(); > } > > bool RtcEventLogImpl::StartLogging(std::unique_ptr<RtcEventLogOutput> output, >@@ -179,10 +185,11 @@ bool RtcEventLogImpl::StartLogging(std::unique_ptr<RtcEventLogOutput> output, > << "(" << timestamp_us << ", " << utc_time_us << ")."; > > // Binding to |this| is safe because |this| outlives the |task_queue_|. >- auto start = [this, timestamp_us, >+ auto start = [this, output_period_ms, timestamp_us, > utc_time_us](std::unique_ptr<RtcEventLogOutput> output) { > RTC_DCHECK_RUN_ON(task_queue_.get()); > RTC_DCHECK(output->IsActive()); >+ output_period_ms_ = output_period_ms; > event_output_ = std::move(output); > num_config_events_written_ = 0; > WriteToOutput(event_encoder_->EncodeLogStart(timestamp_us, utc_time_us)); >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/media/engine/webrtcvideoengine.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/media/engine/webrtcvideoengine.cc >index a75db73697069b937fa88b617c3e3d9a4f4b48f7..5656e8e7f125b00cfb1cad39d658e0fdaa40ae44 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/media/engine/webrtcvideoengine.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/media/engine/webrtcvideoengine.cc >@@ -383,6 +383,9 @@ WebRtcVideoChannel::WebRtcVideoSendStream::ConfigureVideoEncoderSettings( > if (!is_screencast) { > // Limit inter-layer prediction to key pictures. > vp9_settings.interLayerPred = webrtc::InterLayerPredMode::kOnKeyPic; >+ } else { >+ // 3 spatial layers vp9 screenshare needs flexible mode. 
>+ vp9_settings.flexibleMode = vp9_settings.numberOfSpatialLayers > 2; > } > return new rtc::RefCountedObject< > webrtc::VideoEncoderConfig::Vp9EncoderSpecificSettings>(vp9_settings); >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/BUILD.gn b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/BUILD.gn >index df4ba23fff5d37740138c77b802239b4c628315c..db71079a7e5310d31bb7b722f517ea3aadbd23fd 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/BUILD.gn >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/BUILD.gn >@@ -2103,7 +2103,6 @@ if (rtc_include_tests) { > "../../logging:mocks", > "../../logging:rtc_event_audio", > "../../logging:rtc_event_log_api", >- "../../modules/rtp_rtcp:rtp_rtcp_format", > "../../rtc_base:checks", > "../../rtc_base:protobuf_utils", > "../../rtc_base:rtc_base", >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/neteq/neteq_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/neteq/neteq_unittest.cc >index e8b50237703148a33aefa834c13a03fc85f6efdf..2aced6aa7e846392fd2d12b0eff1d5a8040b388f 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/neteq/neteq_unittest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_coding/neteq/neteq_unittest.cc >@@ -22,12 +22,12 @@ > > #include "api/audio/audio_frame.h" > #include "api/audio_codecs/builtin_audio_decoder_factory.h" >+#include "common_types.h" // NOLINT(build/include) > #include "modules/audio_coding/codecs/pcm16b/pcm16b.h" > #include "modules/audio_coding/neteq/tools/audio_loop.h" > #include "modules/audio_coding/neteq/tools/neteq_packet_source_input.h" > #include "modules/audio_coding/neteq/tools/neteq_test.h" > #include "modules/audio_coding/neteq/tools/rtp_file_source.h" >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "rtc_base/ignore_wundef.h" > #include "rtc_base/messagedigest.h" > #include 
"rtc_base/numerics/safe_conversions.h" >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/BUILD.gn b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/BUILD.gn >index 189bcfd71255339bc7dbf77f0f370cebb31f4e43..684f192cc59717f1297d3726a4f98b65a83d7721 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/BUILD.gn >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/BUILD.gn >@@ -20,8 +20,6 @@ rtc_static_library("aec3") { > "aec3_fft.h", > "aec_state.cc", > "aec_state.h", >- "api_call_jitter_metrics.cc", >- "api_call_jitter_metrics.h", > "block_delay_buffer.cc", > "block_delay_buffer.h", > "block_framer.cc", >@@ -194,7 +192,6 @@ if (rtc_include_tests) { > "adaptive_fir_filter_unittest.cc", > "aec3_fft_unittest.cc", > "aec_state_unittest.cc", >- "api_call_jitter_metrics_unittest.cc", > "block_delay_buffer_unittest.cc", > "block_framer_unittest.cc", > "block_processor_metrics_unittest.cc", >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/aec_state.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/aec_state.cc >index d5f256b1c1256e9fef27490d5080f292ebbd8aab..45b361fc594219374401e05596d9479198196102 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/aec_state.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/aec_state.cc >@@ -151,8 +151,7 @@ void AecState::Update( > subtractor_output_analyzer_.Update(subtractor_output); > > // Analyze the properties of the filter. >- filter_analyzer_.Update(adaptive_filter_impulse_response, >- adaptive_filter_frequency_response, render_buffer); >+ filter_analyzer_.Update(adaptive_filter_impulse_response, render_buffer); > > // Estimate the direct path delay of the filter. 
> delay_state_.Update(filter_analyzer_, external_delay, >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.cc >deleted file mode 100644 >index 45f56a5dce6fab1371f244e89e2c025f7a440c98..0000000000000000000000000000000000000000 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.cc >+++ /dev/null >@@ -1,121 +0,0 @@ >-/* >- * Copyright (c) 2018 The WebRTC project authors. All Rights Reserved. >- * >- * Use of this source code is governed by a BSD-style license >- * that can be found in the LICENSE file in the root of the source >- * tree. An additional intellectual property rights grant can be found >- * in the file PATENTS. All contributing project authors may >- * be found in the AUTHORS file in the root of the source tree. >- */ >- >-#include "modules/audio_processing/aec3/api_call_jitter_metrics.h" >- >-#include <algorithm> >-#include <limits> >- >-#include "modules/audio_processing/aec3/aec3_common.h" >-#include "system_wrappers/include/metrics.h" >- >-namespace webrtc { >-namespace { >- >-bool TimeToReportMetrics(int frames_since_last_report) { >- constexpr int kNumFramesPerSecond = 100; >- constexpr int kReportingIntervalFrames = 10 * kNumFramesPerSecond; >- return frames_since_last_report == kReportingIntervalFrames; >-} >- >-} // namespace >- >-ApiCallJitterMetrics::Jitter::Jitter() >- : max_(0), min_(std::numeric_limits<int>::max()) {} >- >-void ApiCallJitterMetrics::Jitter::Update(int num_api_calls_in_a_row) { >- min_ = std::min(min_, num_api_calls_in_a_row); >- max_ = std::max(max_, num_api_calls_in_a_row); >-} >- >-void ApiCallJitterMetrics::Jitter::Reset() { >- min_ = std::numeric_limits<int>::max(); >- max_ = 0; >-} >- >-void ApiCallJitterMetrics::Reset() { >- render_jitter_.Reset(); >- capture_jitter_.Reset(); >- num_api_calls_in_a_row_ = 0; >- 
frames_since_last_report_ = 0; >- last_call_was_render_ = false; >- proper_call_observed_ = false; >-} >- >-void ApiCallJitterMetrics::ReportRenderCall() { >- if (!last_call_was_render_) { >- // If the previous call was a capture and a proper call has been observed >- // (containing both render and capture data), storing the last number of >- // capture calls into the metrics. >- if (proper_call_observed_) { >- capture_jitter_.Update(num_api_calls_in_a_row_); >- } >- >- // Reset the call counter to start counting render calls. >- num_api_calls_in_a_row_ = 0; >- } >- ++num_api_calls_in_a_row_; >- last_call_was_render_ = true; >-} >- >-void ApiCallJitterMetrics::ReportCaptureCall() { >- if (last_call_was_render_) { >- // If the previous call was a render and a proper call has been observed >- // (containing both render and capture data), storing the last number of >- // render calls into the metrics. >- if (proper_call_observed_) { >- render_jitter_.Update(num_api_calls_in_a_row_); >- } >- // Reset the call counter to start counting capture calls. >- num_api_calls_in_a_row_ = 0; >- >- // If this statement is reached, at least one render and one capture call >- // have been observed. >- proper_call_observed_ = true; >- } >- ++num_api_calls_in_a_row_; >- last_call_was_render_ = false; >- >- // Only report and update jitter metrics for when a proper call, containing >- // both render and capture data, has been observed. >- if (proper_call_observed_ && >- TimeToReportMetrics(++frames_since_last_report_)) { >- // Report jitter, where the base basic unit is frames. >- constexpr int kMaxJitterToReport = 50; >- >- // Report max and min jitter for render and capture, in units of 20 ms. 
>- RTC_HISTOGRAM_COUNTS_LINEAR( >- "WebRTC.Audio.EchoCanceller.MaxRenderJitter", >- std::min(kMaxJitterToReport, render_jitter().max()), 1, >- kMaxJitterToReport, kMaxJitterToReport); >- RTC_HISTOGRAM_COUNTS_LINEAR( >- "WebRTC.Audio.EchoCanceller.MinRenderJitter", >- std::min(kMaxJitterToReport, render_jitter().min()), 1, >- kMaxJitterToReport, kMaxJitterToReport); >- >- RTC_HISTOGRAM_COUNTS_LINEAR( >- "WebRTC.Audio.EchoCanceller.MaxCaptureJitter", >- std::min(kMaxJitterToReport, capture_jitter().max()), 1, >- kMaxJitterToReport, kMaxJitterToReport); >- RTC_HISTOGRAM_COUNTS_LINEAR( >- "WebRTC.Audio.EchoCanceller.MinCaptureJitter", >- std::min(kMaxJitterToReport, capture_jitter().min()), 1, >- kMaxJitterToReport, kMaxJitterToReport); >- >- frames_since_last_report_ = 0; >- Reset(); >- } >-} >- >-bool ApiCallJitterMetrics::WillReportMetricsAtNextCapture() const { >- return TimeToReportMetrics(frames_since_last_report_ + 1); >-} >- >-} // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.h >deleted file mode 100644 >index dd1fa82e9324707292e87ee2f375cf85e717f3db..0000000000000000000000000000000000000000 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics.h >+++ /dev/null >@@ -1,60 +0,0 @@ >-/* >- * Copyright (c) 2018 The WebRTC project authors. All Rights Reserved. >- * >- * Use of this source code is governed by a BSD-style license >- * that can be found in the LICENSE file in the root of the source >- * tree. An additional intellectual property rights grant can be found >- * in the file PATENTS. All contributing project authors may >- * be found in the AUTHORS file in the root of the source tree. 
>- */ >- >-#ifndef MODULES_AUDIO_PROCESSING_AEC3_API_CALL_JITTER_METRICS_H_ >-#define MODULES_AUDIO_PROCESSING_AEC3_API_CALL_JITTER_METRICS_H_ >- >-namespace webrtc { >- >-// Stores data for reporting metrics on the API call jitter. >-class ApiCallJitterMetrics { >- public: >- class Jitter { >- public: >- Jitter(); >- void Update(int num_api_calls_in_a_row); >- void Reset(); >- >- int min() const { return min_; } >- int max() const { return max_; } >- >- private: >- int max_; >- int min_; >- }; >- >- ApiCallJitterMetrics() { Reset(); } >- >- // Update metrics for render API call. >- void ReportRenderCall(); >- >- // Update and periodically report metrics for capture API call. >- void ReportCaptureCall(); >- >- // Methods used only for testing. >- const Jitter& render_jitter() const { return render_jitter_; } >- const Jitter& capture_jitter() const { return capture_jitter_; } >- bool WillReportMetricsAtNextCapture() const; >- >- private: >- void Reset(); >- >- Jitter render_jitter_; >- Jitter capture_jitter_; >- >- int num_api_calls_in_a_row_ = 0; >- int frames_since_last_report_ = 0; >- bool last_call_was_render_ = false; >- bool proper_call_observed_ = false; >-}; >- >-} // namespace webrtc >- >-#endif // MODULES_AUDIO_PROCESSING_AEC3_API_CALL_JITTER_METRICS_H_ >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics_unittest.cc >deleted file mode 100644 >index 86608aa3e1093f97bc370dc9aa6b91bbff4933d4..0000000000000000000000000000000000000000 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/api_call_jitter_metrics_unittest.cc >+++ /dev/null >@@ -1,109 +0,0 @@ >-/* >- * Copyright (c) 2018 The WebRTC project authors. All Rights Reserved. 
>- * >- * Use of this source code is governed by a BSD-style license >- * that can be found in the LICENSE file in the root of the source >- * tree. An additional intellectual property rights grant can be found >- * in the file PATENTS. All contributing project authors may >- * be found in the AUTHORS file in the root of the source tree. >- */ >- >-#include "modules/audio_processing/aec3/api_call_jitter_metrics.h" >-#include "modules/audio_processing/aec3/aec3_common.h" >- >-#include "test/gtest.h" >- >-namespace webrtc { >- >-// Verify constant jitter. >-TEST(ApiCallJitterMetrics, ConstantJitter) { >- for (int jitter = 1; jitter < 20; ++jitter) { >- ApiCallJitterMetrics metrics; >- for (size_t k = 0; k < 30 * kNumBlocksPerSecond; ++k) { >- for (int j = 0; j < jitter; ++j) { >- metrics.ReportRenderCall(); >- } >- >- for (int j = 0; j < jitter; ++j) { >- metrics.ReportCaptureCall(); >- >- if (metrics.WillReportMetricsAtNextCapture()) { >- EXPECT_EQ(jitter, metrics.render_jitter().min()); >- EXPECT_EQ(jitter, metrics.render_jitter().max()); >- EXPECT_EQ(jitter, metrics.capture_jitter().min()); >- EXPECT_EQ(jitter, metrics.capture_jitter().max()); >- } >- } >- } >- } >-} >- >-// Verify peaky jitter for the render. >-TEST(ApiCallJitterMetrics, JitterPeakRender) { >- constexpr int kMinJitter = 2; >- constexpr int kJitterPeak = 10; >- constexpr int kPeakInterval = 100; >- >- ApiCallJitterMetrics metrics; >- int render_surplus = 0; >- >- for (size_t k = 0; k < 30 * kNumBlocksPerSecond; ++k) { >- const int num_render_calls = >- k % kPeakInterval == 0 ? kJitterPeak : kMinJitter; >- for (int j = 0; j < num_render_calls; ++j) { >- metrics.ReportRenderCall(); >- ++render_surplus; >- } >- >- ASSERT_LE(kMinJitter, render_surplus); >- const int num_capture_calls = >- render_surplus == kMinJitter ? 
kMinJitter : kMinJitter + 1; >- for (int j = 0; j < num_capture_calls; ++j) { >- metrics.ReportCaptureCall(); >- >- if (metrics.WillReportMetricsAtNextCapture()) { >- EXPECT_EQ(kMinJitter, metrics.render_jitter().min()); >- EXPECT_EQ(kJitterPeak, metrics.render_jitter().max()); >- EXPECT_EQ(kMinJitter, metrics.capture_jitter().min()); >- EXPECT_EQ(kMinJitter + 1, metrics.capture_jitter().max()); >- } >- --render_surplus; >- } >- } >-} >- >-// Verify peaky jitter for the capture. >-TEST(ApiCallJitterMetrics, JitterPeakCapture) { >- constexpr int kMinJitter = 2; >- constexpr int kJitterPeak = 10; >- constexpr int kPeakInterval = 100; >- >- ApiCallJitterMetrics metrics; >- int capture_surplus = kMinJitter; >- >- for (size_t k = 0; k < 30 * kNumBlocksPerSecond; ++k) { >- ASSERT_LE(kMinJitter, capture_surplus); >- const int num_render_calls = >- capture_surplus == kMinJitter ? kMinJitter : kMinJitter + 1; >- for (int j = 0; j < num_render_calls; ++j) { >- metrics.ReportRenderCall(); >- --capture_surplus; >- } >- >- const int num_capture_calls = >- k % kPeakInterval == 0 ? 
kJitterPeak : kMinJitter; >- for (int j = 0; j < num_capture_calls; ++j) { >- metrics.ReportCaptureCall(); >- >- if (metrics.WillReportMetricsAtNextCapture()) { >- EXPECT_EQ(kMinJitter, metrics.render_jitter().min()); >- EXPECT_EQ(kMinJitter + 1, metrics.render_jitter().max()); >- EXPECT_EQ(kMinJitter, metrics.capture_jitter().min()); >- EXPECT_EQ(kJitterPeak, metrics.capture_jitter().max()); >- } >- ++capture_surplus; >- } >- } >-} >- >-} // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.cc >index f05edb15c3e56af98ca6bfca90262316f86cb39b..33e90e882ebf6daf1c4dbc65acbea97d7a4ae0ab 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.cc >@@ -147,7 +147,7 @@ EchoCanceller3Config AdjustConfig(const EchoCanceller3Config& config) { > EchoCanceller3Config::Suppressor::MaskingThresholds(.07f, .1f, .3f), > 2.0f, 0.25f); > >- adjusted_cfg.suppressor.dominant_nearend_detection.enr_threshold = 10.f; >+ adjusted_cfg.suppressor.dominant_nearend_detection.enr_threshold = 0.1f; > adjusted_cfg.suppressor.dominant_nearend_detection.snr_threshold = 10.f; > adjusted_cfg.suppressor.dominant_nearend_detection.hold_duration = 25; > } >@@ -446,10 +446,6 @@ void EchoCanceller3::ProcessCapture(AudioBuffer* capture, bool level_change) { > data_dumper_->DumpRaw("aec3_call_order", > static_cast<int>(EchoCanceller3ApiCall::kCapture)); > >- // Report capture call in the metrics and periodically update API call >- // metrics. >- api_call_metrics_.ReportCaptureCall(); >- > // Optionally delay the capture signal. 
> if (config_.delay.fixed_capture_delay_samples > 0) { > block_delay_buffer_.DelaySignal(capture); >@@ -504,9 +500,6 @@ void EchoCanceller3::EmptyRenderQueue() { > bool frame_to_buffer = > render_transfer_queue_.Remove(&render_queue_output_frame_); > while (frame_to_buffer) { >- // Report render call in the metrics. >- api_call_metrics_.ReportRenderCall(); >- > BufferRenderFrameContent(&render_queue_output_frame_, 0, &render_blocker_, > block_processor_.get(), &block_, &sub_frame_view_); > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.h >index 671d27167675c66c99c3e21e2572c6baf734e4f2..0d07702c84f47b4ea3a82549991951a5729c0fc5 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/echo_canceller3.h >@@ -18,7 +18,6 @@ > #include "api/array_view.h" > #include "api/audio/echo_canceller3_config.h" > #include "api/audio/echo_control.h" >-#include "modules/audio_processing/aec3/api_call_jitter_metrics.h" > #include "modules/audio_processing/aec3/block_delay_buffer.h" > #include "modules/audio_processing/aec3/block_framer.h" > #include "modules/audio_processing/aec3/block_processor.h" >@@ -141,7 +140,6 @@ class EchoCanceller3 : public EchoControl { > std::vector<rtc::ArrayView<float>> sub_frame_view_ > RTC_GUARDED_BY(capture_race_checker_); > BlockDelayBuffer block_delay_buffer_ RTC_GUARDED_BY(capture_race_checker_); >- ApiCallJitterMetrics api_call_metrics_ RTC_GUARDED_BY(capture_race_checker_); > > RTC_DISALLOW_IMPLICIT_CONSTRUCTORS(EchoCanceller3); > }; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.cc >index 
5b890d74746c9de1bccfc5f2ae3134803bc31bd5..3e69be6585066f0ef51d320b50ceaded28008cda 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.cc >@@ -25,18 +25,22 @@ > namespace webrtc { > namespace { > >-size_t FindPeakIndex(rtc::ArrayView<const float> filter_time_domain) { >- size_t peak_index = 0; >- float max_h2 = filter_time_domain[0] * filter_time_domain[0]; >- for (size_t k = 1; k < filter_time_domain.size(); ++k) { >+size_t FindPeakIndex(rtc::ArrayView<const float> filter_time_domain, >+ size_t peak_index_in, >+ size_t start_sample, >+ size_t end_sample) { >+ size_t peak_index_out = peak_index_in; >+ float max_h2 = >+ filter_time_domain[peak_index_out] * filter_time_domain[peak_index_out]; >+ for (size_t k = start_sample; k <= end_sample; ++k) { > float tmp = filter_time_domain[k] * filter_time_domain[k]; > if (tmp > max_h2) { >- peak_index = k; >+ peak_index_out = k; > max_h2 = tmp; > } > } > >- return peak_index; >+ return peak_index_out; > } > > bool EnableFilterPreprocessing() { >@@ -44,6 +48,11 @@ bool EnableFilterPreprocessing() { > "WebRTC-Aec3FilterAnalyzerPreprocessorKillSwitch"); > } > >+bool EnableIncrementalAnalysis() { >+ return !field_trial::IsEnabled( >+ "WebRTC-Aec3FilterAnalyzerIncrementalAnalysisKillSwitch"); >+} >+ > } // namespace > > int FilterAnalyzer::instance_count_ = 0; >@@ -54,46 +63,37 @@ FilterAnalyzer::FilterAnalyzer(const EchoCanceller3Config& config) > use_preprocessed_filter_(EnableFilterPreprocessing()), > bounded_erl_(config.ep_strength.bounded_erl), > default_gain_(config.ep_strength.lf), >- active_render_threshold_(config.render_levels.active_render_limit * >- config.render_levels.active_render_limit * >- kFftLengthBy2), >+ use_incremental_analysis_(EnableIncrementalAnalysis()), > h_highpass_(GetTimeDomainLength(config.filter.main.length_blocks), 0.f), >- 
filter_length_blocks_(config.filter.main_initial.length_blocks) { >+ filter_length_blocks_(config.filter.main_initial.length_blocks), >+ consistent_filter_detector_(config) { > Reset(); > } > >-void FilterAnalyzer::PreProcessFilter( >- rtc::ArrayView<const float> filter_time_domain) { >- RTC_DCHECK_GE(h_highpass_.capacity(), filter_time_domain.size()); >- h_highpass_.resize(filter_time_domain.size()); >- // Minimum phase high-pass filter with cutoff frequency at about 600 Hz. >- constexpr std::array<float, 3> h = {{0.7929742f, -0.36072128f, -0.47047766f}}; >- >- std::fill(h_highpass_.begin(), h_highpass_.end(), 0.f); >- for (size_t k = h.size() - 1; k < filter_time_domain.size(); ++k) { >- for (size_t j = 0; j < h.size(); ++j) { >- h_highpass_[k] += filter_time_domain[k - j] * h[j]; >- } >- } >-} >- > FilterAnalyzer::~FilterAnalyzer() = default; > > void FilterAnalyzer::Reset() { > delay_blocks_ = 0; >- consistent_estimate_ = false; > blocks_since_reset_ = 0; >- consistent_estimate_ = false; >- consistent_estimate_counter_ = 0; >- consistent_delay_reference_ = -10; > gain_ = default_gain_; >+ peak_index_ = 0; >+ ResetRegion(); >+ consistent_filter_detector_.Reset(); >+} >+ >+void FilterAnalyzer::Update(rtc::ArrayView<const float> filter_time_domain, >+ const RenderBuffer& render_buffer) { >+ SetRegionToAnalyze(filter_time_domain); >+ AnalyzeRegion(filter_time_domain, render_buffer); > } > >-void FilterAnalyzer::Update( >+void FilterAnalyzer::AnalyzeRegion( > rtc::ArrayView<const float> filter_time_domain, >- const std::vector<std::array<float, kFftLengthBy2Plus1>>& >- filter_freq_response, > const RenderBuffer& render_buffer) { >+ RTC_DCHECK_LT(region_.start_sample_, filter_time_domain.size()); >+ RTC_DCHECK_LT(peak_index_, filter_time_domain.size()); >+ RTC_DCHECK_LT(region_.end_sample_, filter_time_domain.size()); >+ > // Preprocess the filter to avoid issues with low-frequency components in the > // filter. 
> PreProcessFilter(filter_time_domain); >@@ -103,51 +103,15 @@ void FilterAnalyzer::Update( > use_preprocessed_filter_ ? h_highpass_ : filter_time_domain; > RTC_DCHECK_EQ(filter_to_analyze.size(), filter_time_domain.size()); > >- size_t peak_index = FindPeakIndex(filter_to_analyze); >- delay_blocks_ = peak_index >> kBlockSizeLog2; >- UpdateFilterGain(filter_to_analyze, peak_index); >- >- float filter_floor = 0; >- float filter_secondary_peak = 0; >- size_t limit1 = peak_index < 64 ? 0 : peak_index - 64; >- size_t limit2 = >- peak_index > filter_to_analyze.size() - 129 ? 0 : peak_index + 128; >- >- for (size_t k = 0; k < limit1; ++k) { >- float abs_h = fabsf(filter_to_analyze[k]); >- filter_floor += abs_h; >- filter_secondary_peak = std::max(filter_secondary_peak, abs_h); >- } >- for (size_t k = limit2; k < filter_to_analyze.size(); ++k) { >- float abs_h = fabsf(filter_to_analyze[k]); >- filter_floor += abs_h; >- filter_secondary_peak = std::max(filter_secondary_peak, abs_h); >- } >- >- filter_floor /= (limit1 + filter_to_analyze.size() - limit2); >- >- float abs_peak = fabsf(filter_to_analyze[peak_index]); >- bool significant_peak_index = >- abs_peak > 10.f * filter_floor && abs_peak > 2.f * filter_secondary_peak; >- >- if (consistent_delay_reference_ != delay_blocks_ || !significant_peak_index) { >- consistent_estimate_counter_ = 0; >- consistent_delay_reference_ = delay_blocks_; >- } else { >- const auto& x = render_buffer.Block(-delay_blocks_)[0]; >- const float x_energy = >- std::inner_product(x.begin(), x.end(), x.begin(), 0.f); >- const bool active_render_block = x_energy > active_render_threshold_; >- >- if (active_render_block) { >- ++consistent_estimate_counter_; >- } >- } >- >- consistent_estimate_ = >- consistent_estimate_counter_ > 1.5f * kNumBlocksPerSecond; >- >+ peak_index_ = FindPeakIndex(filter_to_analyze, peak_index_, >+ region_.start_sample_, region_.end_sample_); >+ delay_blocks_ = peak_index_ >> kBlockSizeLog2; >+ 
UpdateFilterGain(filter_to_analyze, peak_index_); > filter_length_blocks_ = filter_time_domain.size() * (1.f / kBlockSize); >+ >+ consistent_estimate_ = consistent_filter_detector_.Detect( >+ filter_to_analyze, region_, render_buffer.Block(-delay_blocks_)[0], >+ peak_index_, delay_blocks_); > } > > void FilterAnalyzer::UpdateFilterGain( >@@ -169,4 +133,114 @@ void FilterAnalyzer::UpdateFilterGain( > } > } > >+void FilterAnalyzer::PreProcessFilter( >+ rtc::ArrayView<const float> filter_time_domain) { >+ RTC_DCHECK_GE(h_highpass_.capacity(), filter_time_domain.size()); >+ h_highpass_.resize(filter_time_domain.size()); >+ // Minimum phase high-pass filter with cutoff frequency at about 600 Hz. >+ constexpr std::array<float, 3> h = {{0.7929742f, -0.36072128f, -0.47047766f}}; >+ >+ std::fill(h_highpass_.begin() + region_.start_sample_, >+ h_highpass_.begin() + region_.end_sample_ + 1, 0.f); >+ for (size_t k = std::max(h.size() - 1, region_.start_sample_); >+ k <= region_.end_sample_; ++k) { >+ for (size_t j = 0; j < h.size(); ++j) { >+ h_highpass_[k] += filter_time_domain[k - j] * h[j]; >+ } >+ } >+} >+ >+void FilterAnalyzer::ResetRegion() { >+ region_.start_sample_ = 0; >+ region_.end_sample_ = 0; >+} >+ >+void FilterAnalyzer::SetRegionToAnalyze( >+ rtc::ArrayView<const float> filter_time_domain) { >+ constexpr size_t kNumberBlocksToUpdate = 1; >+ auto& r = region_; >+ if (use_incremental_analysis_) { >+ r.start_sample_ = >+ r.end_sample_ == filter_time_domain.size() - 1 ? 
0 : r.end_sample_ + 1; >+ r.end_sample_ = >+ std::min(r.start_sample_ + kNumberBlocksToUpdate * kBlockSize - 1, >+ filter_time_domain.size() - 1); >+ >+ } else { >+ r.start_sample_ = 0; >+ r.end_sample_ = filter_time_domain.size() - 1; >+ } >+} >+ >+FilterAnalyzer::ConsistentFilterDetector::ConsistentFilterDetector( >+ const EchoCanceller3Config& config) >+ : active_render_threshold_(config.render_levels.active_render_limit * >+ config.render_levels.active_render_limit * >+ kFftLengthBy2) {} >+ >+void FilterAnalyzer::ConsistentFilterDetector::Reset() { >+ significant_peak_ = false; >+ filter_floor_accum_ = 0.f; >+ filter_secondary_peak_ = 0.f; >+ filter_floor_low_limit_ = 0; >+ filter_floor_high_limit_ = 0; >+ consistent_estimate_counter_ = 0; >+ consistent_delay_reference_ = -10; >+} >+ >+bool FilterAnalyzer::ConsistentFilterDetector::Detect( >+ rtc::ArrayView<const float> filter_to_analyze, >+ const FilterRegion& region, >+ rtc::ArrayView<const float> x_block, >+ size_t peak_index, >+ int delay_blocks) { >+ if (region.start_sample_ == 0) { >+ filter_floor_accum_ = 0.f; >+ filter_secondary_peak_ = 0.f; >+ filter_floor_low_limit_ = peak_index < 64 ? 0 : peak_index - 64; >+ filter_floor_high_limit_ = >+ peak_index > filter_to_analyze.size() - 129 ? 
0 : peak_index + 128; >+ } >+ >+ for (size_t k = region.start_sample_; >+ k < std::min(region.end_sample_ + 1, filter_floor_low_limit_); ++k) { >+ float abs_h = fabsf(filter_to_analyze[k]); >+ filter_floor_accum_ += abs_h; >+ filter_secondary_peak_ = std::max(filter_secondary_peak_, abs_h); >+ } >+ >+ for (size_t k = std::max(filter_floor_high_limit_, region.start_sample_); >+ k <= region.end_sample_; ++k) { >+ float abs_h = fabsf(filter_to_analyze[k]); >+ filter_floor_accum_ += abs_h; >+ filter_secondary_peak_ = std::max(filter_secondary_peak_, abs_h); >+ } >+ >+ if (region.end_sample_ == filter_to_analyze.size() - 1) { >+ float filter_floor = filter_floor_accum_ / >+ (filter_floor_low_limit_ + filter_to_analyze.size() - >+ filter_floor_high_limit_); >+ >+ float abs_peak = fabsf(filter_to_analyze[peak_index]); >+ significant_peak_ = abs_peak > 10.f * filter_floor && >+ abs_peak > 2.f * filter_secondary_peak_; >+ } >+ >+ if (significant_peak_) { >+ const float x_energy = std::inner_product(x_block.begin(), x_block.end(), >+ x_block.begin(), 0.f); >+ const bool active_render_block = x_energy > active_render_threshold_; >+ >+ if (consistent_delay_reference_ == delay_blocks) { >+ if (active_render_block) { >+ ++consistent_estimate_counter_; >+ } >+ } else { >+ consistent_estimate_counter_ = 0; >+ consistent_delay_reference_ = delay_blocks; >+ } >+ } >+ return consistent_estimate_counter_ > 1.5f * kNumBlocksPerSecond; >+} >+ > } // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.h >index 99a0e2597377234ae219081aa8d33e10f978552a..e0fd0695cb9e965998a3fed8e7cb7da8f456aa62 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/filter_analyzer.h >@@ -37,8 +37,6 @@ class FilterAnalyzer { > 
> // Updates the estimates with new input data. > void Update(rtc::ArrayView<const float> filter_time_domain, >- const std::vector<std::array<float, kFftLengthBy2Plus1>>& >- filter_freq_response, > const RenderBuffer& render_buffer); > > // Returns the delay of the filter in terms of blocks. >@@ -58,24 +56,61 @@ class FilterAnalyzer { > rtc::ArrayView<const float> GetAdjustedFilter() const { return h_highpass_; } > > private: >+ void AnalyzeRegion(rtc::ArrayView<const float> filter_time_domain, >+ const RenderBuffer& render_buffer); >+ > void UpdateFilterGain(rtc::ArrayView<const float> filter_time_domain, > size_t max_index); > void PreProcessFilter(rtc::ArrayView<const float> filter_time_domain); > >+ void ResetRegion(); >+ >+ void SetRegionToAnalyze(rtc::ArrayView<const float> filter_time_domain); >+ >+ struct FilterRegion { >+ size_t start_sample_; >+ size_t end_sample_; >+ }; >+ >+ // This class checks whether the shape of the impulse response has been >+ // consistent over time. >+ class ConsistentFilterDetector { >+ public: >+ explicit ConsistentFilterDetector(const EchoCanceller3Config& config); >+ void Reset(); >+ bool Detect(rtc::ArrayView<const float> filter_to_analyze, >+ const FilterRegion& region, >+ rtc::ArrayView<const float> x_block, >+ size_t peak_index, >+ int delay_blocks); >+ >+ private: >+ bool significant_peak_; >+ float filter_floor_accum_; >+ float filter_secondary_peak_; >+ size_t filter_floor_low_limit_; >+ size_t filter_floor_high_limit_; >+ const float active_render_threshold_; >+ size_t consistent_estimate_counter_ = 0; >+ int consistent_delay_reference_ = -10; >+ }; >+ > static int instance_count_; > std::unique_ptr<ApmDataDumper> data_dumper_; > const bool use_preprocessed_filter_; > const bool bounded_erl_; > const float default_gain_; >- const float active_render_threshold_; >+ const bool use_incremental_analysis_; > std::vector<float> h_highpass_; > int delay_blocks_ = 0; > size_t blocks_since_reset_ = 0; > bool 
consistent_estimate_ = false; >- size_t consistent_estimate_counter_ = 0; >- int consistent_delay_reference_ = -10; > float gain_; >+ size_t peak_index_; > int filter_length_blocks_; >+ FilterRegion region_; >+ ConsistentFilterDetector consistent_filter_detector_; >+ > RTC_DISALLOW_COPY_AND_ASSIGN(FilterAnalyzer); > }; > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/suppression_gain.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/suppression_gain.cc >index 88cfc0a01e4e0d18385c59bfc6180c7ce9773681..c6d2bf6673ef43c7497325b84c72dd4c6a348339 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/suppression_gain.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/audio_processing/aec3/suppression_gain.cc >@@ -419,7 +419,7 @@ void SuppressionGain::DominantNearendDetector::Update( > // Detect strong active nearend if the nearend is sufficiently stronger than > // the echo and the nearend noise. > if ((!initial_state || use_during_initial_phase_) && >- ne_sum > enr_threshold_ * echo_sum && >+ echo_sum < enr_threshold_ * ne_sum && > ne_sum > snr_threshold_ * noise_sum) { > if (++trigger_counter_ >= trigger_threshold_) { > // After a period of strong active nearend activity, flag nearend mode. >@@ -432,7 +432,7 @@ void SuppressionGain::DominantNearendDetector::Update( > } > > // Exit nearend-state early at strong echo. 
>- if (ne_sum < enr_exit_threshold_ * echo_sum && >+ if (echo_sum > enr_exit_threshold_ * ne_sum && > echo_sum > snr_threshold_ * noise_sum) { > hold_counter_ = 0; > } >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/BUILD.gn b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/BUILD.gn >index edb981bc67edc172d1ad858b59435a40a0b65835..4f621847f5dec42aa658929a4cad21754c7d0b31 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/BUILD.gn >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/BUILD.gn >@@ -11,7 +11,6 @@ import("../../webrtc.gni") > rtc_source_set("rtp_rtcp_format") { > visibility = [ "*" ] > public = [ >- "include/rtcp_statistics.h", > "include/rtp_cvo.h", > "include/rtp_header_extension_map.h", > "include/rtp_rtcp_defines.h", >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/receive_statistics.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/receive_statistics.h >index c299ea69d66ec03bd481dc79e86c539734541db0..f905eb15f19986590d32a9631abf2d65413f0a0d 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/receive_statistics.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/receive_statistics.h >@@ -18,7 +18,6 @@ > #include "call/rtp_packet_sink_interface.h" > #include "modules/include/module.h" > #include "modules/include/module_common_types.h" >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > #include "modules/rtp_rtcp/source/rtcp_packet/report_block.h" > #include "rtc_base/deprecation.h" >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtcp_statistics.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtcp_statistics.h >deleted file mode 100644 >index e1d576de2d86fa8e85226bbfb59f5a6b3874bf62..0000000000000000000000000000000000000000 >--- 
a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtcp_statistics.h >+++ /dev/null >@@ -1,36 +0,0 @@ >-/* >- * Copyright (c) 2018 The WebRTC project authors. All Rights Reserved. >- * >- * Use of this source code is governed by a BSD-style license >- * that can be found in the LICENSE file in the root of the source >- * tree. An additional intellectual property rights grant can be found >- * in the file PATENTS. All contributing project authors may >- * be found in the AUTHORS file in the root of the source tree. >- */ >- >-#ifndef MODULES_RTP_RTCP_INCLUDE_RTCP_STATISTICS_H_ >-#define MODULES_RTP_RTCP_INCLUDE_RTCP_STATISTICS_H_ >- >-#include <stdint.h> >- >-namespace webrtc { >- >-// Statistics for an RTCP channel >-struct RtcpStatistics { >- uint8_t fraction_lost = 0; >- int32_t packets_lost = 0; // Defined as a 24 bit signed integer in RTCP >- uint32_t extended_highest_sequence_number = 0; >- uint32_t jitter = 0; >-}; >- >-class RtcpStatisticsCallback { >- public: >- virtual ~RtcpStatisticsCallback() {} >- >- virtual void StatisticsUpdated(const RtcpStatistics& statistics, >- uint32_t ssrc) = 0; >- virtual void CNameChanged(const char* cname, uint32_t ssrc) = 0; >-}; >- >-} // namespace webrtc >-#endif // MODULES_RTP_RTCP_INCLUDE_RTCP_STATISTICS_H_ >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp.h >index d136a5e6e9a3ba7623ccb3c4c732131212714d69..d81ea9abec854fd30e8d620a40bc35c761a7636d 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp.h >@@ -18,9 +18,9 @@ > > #include "absl/types/optional.h" > #include "api/video/video_bitrate_allocation.h" >+#include "common_types.h" // NOLINT(build/include) > #include "modules/include/module.h" > #include "modules/rtp_rtcp/include/flexfec_sender.h" >-#include 
"modules/rtp_rtcp/include/receive_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > #include "rtc_base/constructormagic.h" > #include "rtc_base/deprecation.h" >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp_defines.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp_defines.h >index ab4fcaecc79a481071bc166dc79a52feb0a62556..a8bf5a705fc903c9b41a457b8c2d6be72eee908f 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp_defines.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/include/rtp_rtcp_defines.h >@@ -104,7 +104,7 @@ enum RTPExtensionType : int { > kRtpExtensionRepairedRtpStreamId, > kRtpExtensionMid, > kRtpExtensionGenericFrameDescriptor, >- kRtpExtensionColorSpace, >+ kRtpExtensionHdrMetadata, > kRtpExtensionNumberOfExtensions // Must be the last entity in the enum. > }; > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtcp_receiver.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtcp_receiver.h >index be4c70e3cea49186e009bd17f6d0cd4e1b448645..933faf9cec2cb0b065a7e289fadee3e1df0f0077 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtcp_receiver.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtcp_receiver.h >@@ -17,7 +17,6 @@ > #include <string> > #include <vector> > >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > #include "modules/rtp_rtcp/source/rtcp_nack_stats.h" > #include "modules/rtp_rtcp/source/rtcp_packet/dlrr.h" >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extension_map.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extension_map.cc >index 8e0a484d9774e606ead845cda46e4087266cc72f..49857a033862da801427b7402bfe5550661a03f1 100644 >--- 
a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extension_map.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extension_map.cc >@@ -43,7 +43,7 @@ constexpr ExtensionInfo kExtensions[] = { > CreateExtensionInfo<RepairedRtpStreamId>(), > CreateExtensionInfo<RtpMid>(), > CreateExtensionInfo<RtpGenericFrameDescriptorExtension>(), >- CreateExtensionInfo<ColorSpaceExtension>(), >+ CreateExtensionInfo<HdrMetadataExtension>(), > }; > > // Because of kRtpExtensionNone, NumberOfExtension is 1 bigger than the actual >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.cc >index 92694cd5a3a466a328d81c4377f04561b28b2080..fe327b3011240bf9276aa9d829d95d31558b900b 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.cc >@@ -434,18 +434,15 @@ bool FrameMarkingExtension::Write(rtc::ArrayView<uint8_t> data, > return true; > } > >-// Color space including HDR metadata as an optional field. >+// HDR Metadata. > // > // RTP header extension to carry HDR metadata. > // Float values are upscaled by a static factor and transmitted as integers. 
> // >-// Data layout with HDR metadata > // 0 1 2 3 > // 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// | ID | length=30 | Primaries | Transfer | >-// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// | Matrix | Range | luminance_max | >+// | ID | length | luminance_max | > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > // | | luminance_min | > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >@@ -457,111 +454,77 @@ bool FrameMarkingExtension::Write(rtc::ArrayView<uint8_t> data, > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ > // | mastering_metadata.white.x and .y | > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// | max_content_light_level | max_frame_average_light_level | >+// | max_content_light_level | > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// >-// Data layout without HDR metadata >-// 0 1 2 3 >-// 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 >+// | max_frame_average_light_level | > // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// | ID | length=4 | Primaries | Transfer | >-// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ >-// | Matrix | Range | >-// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+- >- >-constexpr RTPExtensionType ColorSpaceExtension::kId; >-constexpr uint8_t ColorSpaceExtension::kValueSizeBytes; >-constexpr const char ColorSpaceExtension::kUri[]; >- >-bool ColorSpaceExtension::Parse(rtc::ArrayView<const uint8_t> data, >- ColorSpace* color_space) { >- RTC_DCHECK(color_space); >- if (data.size() != kValueSizeBytes && >- data.size() != kValueSizeBytesWithoutHdrMetadata) >+constexpr RTPExtensionType HdrMetadataExtension::kId; >+constexpr uint8_t HdrMetadataExtension::kValueSizeBytes; >+constexpr const char HdrMetadataExtension::kUri[]; >+ 
>+bool HdrMetadataExtension::Parse(rtc::ArrayView<const uint8_t> data, >+ HdrMetadata* hdr_metadata) { >+ RTC_DCHECK(hdr_metadata); >+ if (data.size() != kValueSizeBytes) > return false; > > size_t offset = 0; >- // Read color space information. >- if (!color_space->set_primaries_from_uint8(data.data()[offset++])) >- return false; >- if (!color_space->set_transfer_from_uint8(data.data()[offset++])) >- return false; >- if (!color_space->set_matrix_from_uint8(data.data()[offset++])) >- return false; >- if (!color_space->set_range_from_uint8(data.data()[offset++])) >- return false; >- >- // Read HDR metadata if it exists, otherwise clear it. >- if (data.size() == kValueSizeBytesWithoutHdrMetadata) { >- color_space->set_hdr_metadata(nullptr); >- } else { >- HdrMetadata hdr_metadata; >- offset += ParseLuminance(data.data() + offset, >- &hdr_metadata.mastering_metadata.luminance_max, >- kLuminanceMaxDenominator); >- offset += ParseLuminance(data.data() + offset, >- &hdr_metadata.mastering_metadata.luminance_min, >- kLuminanceMinDenominator); >- offset += ParseChromaticity(data.data() + offset, >- &hdr_metadata.mastering_metadata.primary_r); >- offset += ParseChromaticity(data.data() + offset, >- &hdr_metadata.mastering_metadata.primary_g); >- offset += ParseChromaticity(data.data() + offset, >- &hdr_metadata.mastering_metadata.primary_b); >- offset += ParseChromaticity(data.data() + offset, >- &hdr_metadata.mastering_metadata.white_point); >- hdr_metadata.max_content_light_level = >- ByteReader<uint16_t>::ReadBigEndian(data.data() + offset); >- offset += 2; >- hdr_metadata.max_frame_average_light_level = >- ByteReader<uint16_t>::ReadBigEndian(data.data() + offset); >- offset += 2; >- color_space->set_hdr_metadata(&hdr_metadata); >- } >- RTC_DCHECK_EQ(ValueSize(*color_space), offset); >+ offset += ParseLuminance(data.data() + offset, >+ &hdr_metadata->mastering_metadata.luminance_max, >+ kLuminanceMaxDenominator); >+ offset += ParseLuminance(data.data() + offset, >+ 
&hdr_metadata->mastering_metadata.luminance_min, >+ kLuminanceMinDenominator); >+ offset += ParseChromaticity(data.data() + offset, >+ &hdr_metadata->mastering_metadata.primary_r); >+ offset += ParseChromaticity(data.data() + offset, >+ &hdr_metadata->mastering_metadata.primary_g); >+ offset += ParseChromaticity(data.data() + offset, >+ &hdr_metadata->mastering_metadata.primary_b); >+ offset += ParseChromaticity(data.data() + offset, >+ &hdr_metadata->mastering_metadata.white_point); >+ // TODO(kron): Do we need 32 bit here or is it enough with 16 bits? >+ // Also, what resolution is needed? >+ hdr_metadata->max_content_light_level = >+ ByteReader<uint32_t>::ReadBigEndian(data.data() + offset); >+ offset += 4; >+ hdr_metadata->max_frame_average_light_level = >+ ByteReader<uint32_t>::ReadBigEndian(data.data() + offset); >+ RTC_DCHECK_EQ(kValueSizeBytes, offset + 4); > return true; > } > >-bool ColorSpaceExtension::Write(rtc::ArrayView<uint8_t> data, >- const ColorSpace& color_space) { >- RTC_DCHECK(data.size() >= ValueSize(color_space)); >+bool HdrMetadataExtension::Write(rtc::ArrayView<uint8_t> data, >+ const HdrMetadata& hdr_metadata) { >+ RTC_DCHECK_EQ(data.size(), kValueSizeBytes); > size_t offset = 0; >- // Write color space information. >- data.data()[offset++] = static_cast<uint8_t>(color_space.primaries()); >- data.data()[offset++] = static_cast<uint8_t>(color_space.transfer()); >- data.data()[offset++] = static_cast<uint8_t>(color_space.matrix()); >- data.data()[offset++] = static_cast<uint8_t>(color_space.range()); >- >- // Write HDR metadata if it exists. 
>- if (color_space.hdr_metadata()) { >- const HdrMetadata& hdr_metadata = *color_space.hdr_metadata(); >- offset += WriteLuminance(data.data() + offset, >- hdr_metadata.mastering_metadata.luminance_max, >- kLuminanceMaxDenominator); >- offset += WriteLuminance(data.data() + offset, >- hdr_metadata.mastering_metadata.luminance_min, >- kLuminanceMinDenominator); >- offset += WriteChromaticity(data.data() + offset, >- hdr_metadata.mastering_metadata.primary_r); >- offset += WriteChromaticity(data.data() + offset, >- hdr_metadata.mastering_metadata.primary_g); >- offset += WriteChromaticity(data.data() + offset, >- hdr_metadata.mastering_metadata.primary_b); >- offset += WriteChromaticity(data.data() + offset, >- hdr_metadata.mastering_metadata.white_point); >- >- ByteWriter<uint16_t>::WriteBigEndian(data.data() + offset, >- hdr_metadata.max_content_light_level); >- offset += 2; >- ByteWriter<uint16_t>::WriteBigEndian( >- data.data() + offset, hdr_metadata.max_frame_average_light_level); >- offset += 2; >- } >- RTC_DCHECK_EQ(ValueSize(color_space), offset); >+ offset += WriteLuminance(data.data() + offset, >+ hdr_metadata.mastering_metadata.luminance_max, >+ kLuminanceMaxDenominator); >+ offset += WriteLuminance(data.data() + offset, >+ hdr_metadata.mastering_metadata.luminance_min, >+ kLuminanceMinDenominator); >+ offset += WriteChromaticity(data.data() + offset, >+ hdr_metadata.mastering_metadata.primary_r); >+ offset += WriteChromaticity(data.data() + offset, >+ hdr_metadata.mastering_metadata.primary_g); >+ offset += WriteChromaticity(data.data() + offset, >+ hdr_metadata.mastering_metadata.primary_b); >+ offset += WriteChromaticity(data.data() + offset, >+ hdr_metadata.mastering_metadata.white_point); >+ >+ // TODO(kron): Do we need 32 bit here or is it enough with 16 bits? >+ // Also, what resolution is needed? 
>+ ByteWriter<uint32_t>::WriteBigEndian(data.data() + offset, >+ hdr_metadata.max_content_light_level); >+ offset += 4; >+ ByteWriter<uint32_t>::WriteBigEndian( >+ data.data() + offset, hdr_metadata.max_frame_average_light_level); >+ RTC_DCHECK_EQ(kValueSizeBytes, offset + 4); > return true; > } > >-size_t ColorSpaceExtension::ParseChromaticity( >+size_t HdrMetadataExtension::ParseChromaticity( > const uint8_t* data, > HdrMasteringMetadata::Chromaticity* p) { > uint16_t chromaticity_x_scaled = ByteReader<uint16_t>::ReadBigEndian(data); >@@ -572,15 +535,15 @@ size_t ColorSpaceExtension::ParseChromaticity( > return 4; // Return number of bytes read. > } > >-size_t ColorSpaceExtension::ParseLuminance(const uint8_t* data, >- float* f, >- int denominator) { >+size_t HdrMetadataExtension::ParseLuminance(const uint8_t* data, >+ float* f, >+ int denominator) { > uint32_t luminance_scaled = ByteReader<uint32_t, 3>::ReadBigEndian(data); > *f = static_cast<float>(luminance_scaled) / denominator; > return 3; // Return number of bytes read. > } > >-size_t ColorSpaceExtension::WriteChromaticity( >+size_t HdrMetadataExtension::WriteChromaticity( > uint8_t* data, > const HdrMasteringMetadata::Chromaticity& p) { > RTC_DCHECK_GE(p.x, 0.0f); >@@ -592,9 +555,9 @@ size_t ColorSpaceExtension::WriteChromaticity( > return 4; // Return number of bytes written. > } > >-size_t ColorSpaceExtension::WriteLuminance(uint8_t* data, >- float f, >- int denominator) { >+size_t HdrMetadataExtension::WriteLuminance(uint8_t* data, >+ float f, >+ int denominator) { > RTC_DCHECK_GE(f, 0.0f); > ByteWriter<uint32_t, 3>::WriteBigEndian(data, std::round(f * denominator)); > return 3; // Return number of bytes written. 
>diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.h >index 42a6216c7b529892f5bd58d27cd940c895012dda..ba43415fba24904abf0e34ea50dcbc31c0d0c555 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_header_extensions.h >@@ -16,7 +16,7 @@ > > #include "api/array_view.h" > #include "api/rtp_headers.h" >-#include "api/video/color_space.h" >+#include "api/video/hdr_metadata.h" > #include "api/video/video_content_type.h" > #include "api/video/video_frame_marking.h" > #include "api/video/video_rotation.h" >@@ -182,23 +182,19 @@ class FrameMarkingExtension { > static bool IsScalable(uint8_t temporal_id, uint8_t layer_id); > }; > >-class ColorSpaceExtension { >+class HdrMetadataExtension { > public: >- using value_type = ColorSpace; >- static constexpr RTPExtensionType kId = kRtpExtensionColorSpace; >+ using value_type = HdrMetadata; >+ static constexpr RTPExtensionType kId = kRtpExtensionHdrMetadata; > static constexpr uint8_t kValueSizeBytes = 30; >- static constexpr uint8_t kValueSizeBytesWithoutHdrMetadata = 4; > // TODO(webrtc:8651): Change to a valid uri. >- static constexpr const char kUri[] = "rtp-colorspace-uri-placeholder"; >+ static constexpr const char kUri[] = "rtp-hdr-metadata-uri-placeholder"; > > static bool Parse(rtc::ArrayView<const uint8_t> data, >- ColorSpace* color_space); >- static size_t ValueSize(const ColorSpace& color_space) { >- return color_space.hdr_metadata() ? 
kValueSizeBytes >- : kValueSizeBytesWithoutHdrMetadata; >- } >+ HdrMetadata* hdr_metadata); >+ static size_t ValueSize(const HdrMetadata&) { return kValueSizeBytes; } > static bool Write(rtc::ArrayView<uint8_t> data, >- const ColorSpace& color_space); >+ const HdrMetadata& hdr_metadata); > > private: > static constexpr int kChromaticityDenominator = 10000; // 0.0001 resolution. >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_received.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_received.cc >index f80fad68e0d326d6effeb5595a240f24173a980c..ff7b4e871975719100c19d8faabf8768bebcbeb2 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_received.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_received.cc >@@ -69,7 +69,7 @@ void RtpPacketReceived::GetHeader(RTPHeader* header) const { > GetExtension<RepairedRtpStreamId>(&header->extension.repaired_stream_id); > GetExtension<RtpMid>(&header->extension.mid); > GetExtension<PlayoutDelayLimits>(&header->extension.playout_delay); >- header->extension.color_space = GetExtension<ColorSpaceExtension>(); >+ header->extension.hdr_metadata = GetExtension<HdrMetadataExtension>(); > } > > } // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_unittest.cc >index b1c0e42525eae52677c0f80a0ccf64171459e4e7..37a9a531de6c89e5cb0d4e5d80e8de21d81d835a 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_unittest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_packet_unittest.cc >@@ -203,34 +203,6 @@ HdrMetadata CreateTestHdrMetadata() { > hdr_metadata.max_frame_average_light_level = 1789; > return hdr_metadata; > } >- >-ColorSpace CreateTestColorSpace(bool with_hdr_metadata) { 
>- ColorSpace color_space( >- ColorSpace::PrimaryID::kBT709, ColorSpace::TransferID::kGAMMA22, >- ColorSpace::MatrixID::kSMPTE2085, ColorSpace::RangeID::kFull); >- if (with_hdr_metadata) { >- HdrMetadata hdr_metadata = CreateTestHdrMetadata(); >- color_space.set_hdr_metadata(&hdr_metadata); >- } >- return color_space; >-} >- >-void TestCreateAndParseColorSpaceExtension(bool with_hdr_metadata) { >- // Create packet with extension. >- RtpPacket::ExtensionManager extensions(/*extmap-allow-mixed=*/true); >- extensions.Register<ColorSpaceExtension>(1); >- RtpPacket packet(&extensions); >- const ColorSpace kColorSpace = CreateTestColorSpace(with_hdr_metadata); >- EXPECT_TRUE(packet.SetExtension<ColorSpaceExtension>(kColorSpace)); >- packet.SetPayloadSize(42); >- >- // Read packet with the extension. >- RtpPacketReceived parsed(&extensions); >- EXPECT_TRUE(parsed.Parse(packet.Buffer())); >- ColorSpace parsed_color_space; >- EXPECT_TRUE(parsed.GetExtension<ColorSpaceExtension>(&parsed_color_space)); >- EXPECT_EQ(kColorSpace, parsed_color_space); >-} > } // namespace > > TEST(RtpPacketTest, CreateMinimum) { >@@ -847,12 +819,21 @@ TEST(RtpPacketTest, ParseLegacyTimingFrameExtension) { > EXPECT_EQ(receivied_timing.flags, 0); > } > >-TEST(RtpPacketTest, CreateAndParseColorSpaceExtension) { >- TestCreateAndParseColorSpaceExtension(/*with_hdr_metadata=*/true); >-} >+TEST(RtpPacketTest, CreateAndParseHdrMetadataExtension) { >+ // Create packet with extension. >+ RtpPacket::ExtensionManager extensions(/*extmap-allow-mixed=*/true); >+ extensions.Register<HdrMetadataExtension>(1); >+ RtpPacket packet(&extensions); >+ const HdrMetadata kHdrMetadata = CreateTestHdrMetadata(); >+ EXPECT_TRUE(packet.SetExtension<HdrMetadataExtension>(kHdrMetadata)); >+ packet.SetPayloadSize(42); > >-TEST(RtpPacketTest, CreateAndParseColorSpaceExtensionWithoutHdrMetadata) { >- TestCreateAndParseColorSpaceExtension(/*with_hdr_metadata=*/false); >+ // Read packet with the extension. 
>+ RtpPacketReceived parsed(&extensions); >+ EXPECT_TRUE(parsed.Parse(packet.Buffer())); >+ HdrMetadata parsed_hdr_metadata; >+ EXPECT_TRUE(parsed.GetExtension<HdrMetadataExtension>(&parsed_hdr_metadata)); >+ EXPECT_EQ(kHdrMetadata, parsed_hdr_metadata); > } > > } // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_utility.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_utility.cc >index 44c671f5067eaf37bfadf8f960afb8fc8c53cf81..80f0eab682ec111a5332b2ed27ad3f9163926dd9 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_utility.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/rtp_rtcp/source/rtp_utility.cc >@@ -507,9 +507,9 @@ void RtpHeaderParser::ParseOneByteExtensionHeader( > RTC_LOG(WARNING) > << "RtpGenericFrameDescriptor unsupported by rtp header parser."; > break; >- case kRtpExtensionColorSpace: >+ case kRtpExtensionHdrMetadata: > RTC_LOG(WARNING) >- << "RtpExtensionColorSpace unsupported by rtp header parser."; >+ << "RtpExtensionHdrMetadata unsupported by rtp header parser."; > break; > case kRtpExtensionNone: > case kRtpExtensionNumberOfExtensions: { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/test/videocodec_test_libvpx.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/test/videocodec_test_libvpx.cc >index f69fde6884010fc35fc396702e243bb276ab4302..1e365de1d1cfc394a73cce2107134b306b37c277 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/test/videocodec_test_libvpx.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/test/videocodec_test_libvpx.cc >@@ -124,9 +124,9 @@ TEST(VideoCodecTestLibvpx, ChangeBitrateVP9) { > {500, 30, kNumFramesLong}}; > > std::vector<RateControlThresholds> rc_thresholds = { >- {5, 1, 0, 1, 0.5, 0.1, 0, 1}, >- {15, 2, 0, 1, 0.5, 0.1, 0, 0}, >- {10, 1, 0, 1, 0.5, 0.1, 0, 0}}; >+ {5, 2, 
0, 1, 0.5, 0.1, 0, 1}, >+ {15, 3, 0, 1, 0.5, 0.1, 0, 0}, >+ {10, 2, 0, 1, 0.5, 0.1, 0, 0}}; > > std::vector<QualityThresholds> quality_thresholds = { > {34, 33, 0.90, 0.88}, {38, 35, 0.95, 0.91}, {35, 34, 0.93, 0.90}}; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/test/vp9_impl_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/test/vp9_impl_unittest.cc >index 85fa278a0d376eaf5512ec0758b8cb7dd721eaf2..4644a31cbb7f64f3db70eaad701d44b2c8005f10 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/test/vp9_impl_unittest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/test/vp9_impl_unittest.cc >@@ -91,15 +91,16 @@ class TestVp9Impl : public VideoCodecUnitTest { > } > } > >- void ConfigureSvc(size_t num_spatial_layers) { >+ void ConfigureSvc(size_t num_spatial_layers, size_t num_temporal_layers = 1) { > codec_settings_.VP9()->numberOfSpatialLayers = > static_cast<unsigned char>(num_spatial_layers); >- codec_settings_.VP9()->numberOfTemporalLayers = 1; >+ codec_settings_.VP9()->numberOfTemporalLayers = num_temporal_layers; > codec_settings_.VP9()->frameDroppingOn = false; > >- std::vector<SpatialLayer> layers = GetSvcConfig( >- codec_settings_.width, codec_settings_.height, >- codec_settings_.maxFramerate, num_spatial_layers, 1, false); >+ std::vector<SpatialLayer> layers = >+ GetSvcConfig(codec_settings_.width, codec_settings_.height, >+ codec_settings_.maxFramerate, num_spatial_layers, >+ num_temporal_layers, false); > for (size_t i = 0; i < layers.size(); ++i) { > codec_settings_.spatialLayers[i] = layers[i]; > } >@@ -401,6 +402,8 @@ TEST_F(TestVp9Impl, EnableDisableSpatialLayers) { > std::vector<EncodedImage> encoded_frame; > std::vector<CodecSpecificInfo> codec_specific_info; > ASSERT_TRUE(WaitForEncodedFrames(&encoded_frame, &codec_specific_info)); >+ 
EXPECT_EQ(codec_specific_info[0].codecSpecific.VP9.ss_data_available,
>+ frame_num == 0);
> }
> }
>
>@@ -418,6 +421,8 @@ TEST_F(TestVp9Impl, EnableDisableSpatialLayers) {
> std::vector<EncodedImage> encoded_frame;
> std::vector<CodecSpecificInfo> codec_specific_info;
> ASSERT_TRUE(WaitForEncodedFrames(&encoded_frame, &codec_specific_info));
>+ EXPECT_EQ(codec_specific_info[0].codecSpecific.VP9.ss_data_available,
>+ frame_num == 0);
> }
> }
> }
>@@ -581,6 +586,248 @@ TEST_F(TestVp9Impl,
> }
> }
>
>+TEST_F(TestVp9Impl, EnablingNewLayerIsDelayedInScreenshareAndAddsSsInfo) {
>+ const size_t num_spatial_layers = 3;
>+ // Chosen by hand, the 2nd frame is dropped with configured per-layer max
>+ // framerate.
>+ const size_t num_frames_to_encode_before_drop = 1;
>+ // Chosen by hand, exactly 5 frames are dropped for input fps=30 and max
>+ // framerate = 5.
>+ const size_t num_dropped_frames = 5;
>+
>+ codec_settings_.maxFramerate = 30;
>+ ConfigureSvc(num_spatial_layers);
>+ codec_settings_.spatialLayers[0].maxFramerate = 5.0;
>+ // Use 30 for SL 1 instead of 5, so even if the SL 0 frame is dropped due to
>+ // framerate capping we would still get back at least a middle layer. It
>+ // simplifies the test.
>+ codec_settings_.spatialLayers[1].maxFramerate = 30.0;
>+ codec_settings_.spatialLayers[2].maxFramerate = 30.0;
>+ codec_settings_.VP9()->frameDroppingOn = false;
>+ codec_settings_.mode = VideoCodecMode::kScreensharing;
>+ codec_settings_.VP9()->interLayerPred = InterLayerPredMode::kOn;
>+ codec_settings_.VP9()->flexibleMode = true;
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->InitEncode(&codec_settings_, 1 /* number of cores */,
>+ 0 /* max payload size (unused) */));
>+
>+ // Enable all but the last layer. 
>+ VideoBitrateAllocation bitrate_allocation; >+ for (size_t sl_idx = 0; sl_idx < num_spatial_layers - 1; ++sl_idx) { >+ bitrate_allocation.SetBitrate( >+ sl_idx, 0, codec_settings_.spatialLayers[sl_idx].targetBitrate * 1000); >+ } >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->SetRateAllocation(bitrate_allocation, >+ codec_settings_.maxFramerate)); >+ >+ // Encode enough frames to force drop due to framerate capping. >+ for (size_t frame_num = 0; frame_num < num_frames_to_encode_before_drop; >+ ++frame_num) { >+ SetWaitForEncodedFramesThreshold(num_spatial_layers - 1); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ std::vector<EncodedImage> encoded_frames; >+ std::vector<CodecSpecificInfo> codec_specific_info; >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ } >+ >+ // Enable the last layer. >+ bitrate_allocation.SetBitrate( >+ num_spatial_layers - 1, 0, >+ codec_settings_.spatialLayers[num_spatial_layers - 1].targetBitrate * >+ 1000); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->SetRateAllocation(bitrate_allocation, >+ codec_settings_.maxFramerate)); >+ >+ for (size_t frame_num = 0; frame_num < num_dropped_frames; ++frame_num) { >+ SetWaitForEncodedFramesThreshold(1); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ // First layer is dropped due to frame rate cap. The last layer should not >+ // be enabled yet. >+ std::vector<EncodedImage> encoded_frames; >+ std::vector<CodecSpecificInfo> codec_specific_info; >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ } >+ >+ SetWaitForEncodedFramesThreshold(2); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ // Now all 3 layers should be encoded. 
>+ std::vector<EncodedImage> encoded_frames;
>+ std::vector<CodecSpecificInfo> codec_specific_info;
>+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info));
>+ EXPECT_EQ(encoded_frames.size(), 3u);
>+ // Scalability structure has to be triggered.
>+ EXPECT_TRUE(codec_specific_info[0].codecSpecific.VP9.ss_data_available);
>+}
>+
>+TEST_F(TestVp9Impl, RemovingLayerIsNotDelayedInScreenshareAndAddsSsInfo) {
>+ const size_t num_spatial_layers = 3;
>+ // Chosen by hand, the 2nd frame is dropped with configured per-layer max
>+ // framerate.
>+ const size_t num_frames_to_encode_before_drop = 1;
>+ // Chosen by hand, exactly 5 frames are dropped for input fps=30 and max
>+ // framerate = 5.
>+ const size_t num_dropped_frames = 5;
>+
>+ codec_settings_.maxFramerate = 30;
>+ ConfigureSvc(num_spatial_layers);
>+ codec_settings_.spatialLayers[0].maxFramerate = 5.0;
>+ // Use 30 for SL 1 instead of 5, so even if the SL 0 frame is dropped due to
>+ // framerate capping we would still get back at least a middle layer. It
>+ // simplifies the test.
>+ codec_settings_.spatialLayers[1].maxFramerate = 30.0;
>+ codec_settings_.spatialLayers[2].maxFramerate = 30.0;
>+ codec_settings_.VP9()->frameDroppingOn = false;
>+ codec_settings_.mode = VideoCodecMode::kScreensharing;
>+ codec_settings_.VP9()->interLayerPred = InterLayerPredMode::kOn;
>+ codec_settings_.VP9()->flexibleMode = true;
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->InitEncode(&codec_settings_, 1 /* number of cores */,
>+ 0 /* max payload size (unused) */));
>+
>+ // All layers are enabled from the start. 
>+ VideoBitrateAllocation bitrate_allocation;
>+ for (size_t sl_idx = 0; sl_idx < num_spatial_layers; ++sl_idx) {
>+ bitrate_allocation.SetBitrate(
>+ sl_idx, 0, codec_settings_.spatialLayers[sl_idx].targetBitrate * 1000);
>+ }
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->SetRateAllocation(bitrate_allocation,
>+ codec_settings_.maxFramerate));
>+
>+ // Encode enough frames to force drop due to framerate capping.
>+ for (size_t frame_num = 0; frame_num < num_frames_to_encode_before_drop;
>+ ++frame_num) {
>+ SetWaitForEncodedFramesThreshold(num_spatial_layers);
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr));
>+ std::vector<EncodedImage> encoded_frames;
>+ std::vector<CodecSpecificInfo> codec_specific_info;
>+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info));
>+ }
>+
>+ // Now the first layer should not have frames in it.
>+ for (size_t frame_num = 0; frame_num < num_dropped_frames - 2; ++frame_num) {
>+ SetWaitForEncodedFramesThreshold(2);
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr));
>+ // First layer is dropped due to frame rate cap. The last layer should not
>+ // be enabled yet.
>+ std::vector<EncodedImage> encoded_frames;
>+ std::vector<CodecSpecificInfo> codec_specific_info;
>+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info));
>+ // First layer is skipped.
>+ EXPECT_EQ(encoded_frames[0].SpatialIndex().value_or(-1), 1);
>+ }
>+
>+ // Disable the last layer.
>+ bitrate_allocation.SetBitrate(num_spatial_layers - 1, 0, 0);
>+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK,
>+ encoder_->SetRateAllocation(bitrate_allocation,
>+ codec_settings_.maxFramerate));
>+
>+ // Still expected to drop the first layer. The last layer has to be disabled
>+ // as well.
>+ for (size_t frame_num = num_dropped_frames - 2;
>+ frame_num < num_dropped_frames; ++frame_num) {
>+ // Expect back one frame. 
>+ SetWaitForEncodedFramesThreshold(1); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ // First layer is dropped due to frame rate cap. The last layer should not >+ // be enabled yet. >+ std::vector<EncodedImage> encoded_frames; >+ std::vector<CodecSpecificInfo> codec_specific_info; >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ // First layer is skipped. >+ EXPECT_EQ(encoded_frames[0].SpatialIndex().value_or(-1), 1); >+ // No SS data on non-base spatial layer. >+ EXPECT_FALSE(codec_specific_info[0].codecSpecific.VP9.ss_data_available); >+ } >+ >+ SetWaitForEncodedFramesThreshold(2); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ std::vector<EncodedImage> encoded_frames; >+ std::vector<CodecSpecificInfo> codec_specific_info; >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ // First layer is not skipped now. >+ EXPECT_EQ(encoded_frames[0].SpatialIndex().value_or(-1), 0); >+ // SS data should be present. >+ EXPECT_TRUE(codec_specific_info[0].codecSpecific.VP9.ss_data_available); >+} >+ >+TEST_F(TestVp9Impl, DisableNewLayerInVideoDelaysSsInfoTillTL0) { >+ const size_t num_spatial_layers = 3; >+ const size_t num_temporal_layers = 2; >+ // Chosen by hand, the 2nd frame is dropped with configured per-layer max >+ // framerate. >+ ConfigureSvc(num_spatial_layers, num_temporal_layers); >+ codec_settings_.VP9()->frameDroppingOn = false; >+ codec_settings_.mode = VideoCodecMode::kRealtimeVideo; >+ codec_settings_.VP9()->interLayerPred = InterLayerPredMode::kOnKeyPic; >+ codec_settings_.VP9()->flexibleMode = false; >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->InitEncode(&codec_settings_, 1 /* number of cores */, >+ 0 /* max payload size (unused) */)); >+ >+ // Enable all the layers. 
>+ VideoBitrateAllocation bitrate_allocation; >+ for (size_t sl_idx = 0; sl_idx < num_spatial_layers; ++sl_idx) { >+ for (size_t tl_idx = 0; tl_idx < num_temporal_layers; ++tl_idx) { >+ bitrate_allocation.SetBitrate( >+ sl_idx, tl_idx, >+ codec_settings_.spatialLayers[sl_idx].targetBitrate * 1000 / >+ num_temporal_layers); >+ } >+ } >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->SetRateAllocation(bitrate_allocation, >+ codec_settings_.maxFramerate)); >+ >+ std::vector<EncodedImage> encoded_frames; >+ std::vector<CodecSpecificInfo> codec_specific_info; >+ >+ // Encode one TL0 frame >+ SetWaitForEncodedFramesThreshold(num_spatial_layers); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ EXPECT_EQ(codec_specific_info[0].codecSpecific.VP9.temporal_idx, 0u); >+ >+ // Disable the last layer. >+ for (size_t tl_idx = 0; tl_idx < num_temporal_layers; ++tl_idx) { >+ bitrate_allocation.SetBitrate(num_spatial_layers - 1, tl_idx, 0); >+ } >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->SetRateAllocation(bitrate_allocation, >+ codec_settings_.maxFramerate)); >+ >+ // Next is TL1 frame. The last layer is disabled immediately, but SS structure >+ // is not provided here. >+ SetWaitForEncodedFramesThreshold(num_spatial_layers - 1); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ EXPECT_EQ(codec_specific_info[0].codecSpecific.VP9.temporal_idx, 1u); >+ >+ // Next is TL0 frame, which should have delayed SS structure. 
>+ SetWaitForEncodedFramesThreshold(num_spatial_layers - 1); >+ EXPECT_EQ(WEBRTC_VIDEO_CODEC_OK, >+ encoder_->Encode(*NextInputFrame(), nullptr, nullptr)); >+ ASSERT_TRUE(WaitForEncodedFrames(&encoded_frames, &codec_specific_info)); >+ EXPECT_EQ(codec_specific_info[0].codecSpecific.VP9.temporal_idx, 0u); >+ EXPECT_TRUE(codec_specific_info[0].codecSpecific.VP9.ss_data_available); >+ EXPECT_TRUE(codec_specific_info[0] >+ .codecSpecific.VP9.spatial_layer_resolution_present); >+ EXPECT_EQ( >+ codec_specific_info[0].codecSpecific.VP9.width[num_spatial_layers - 1], >+ 0u); >+} >+ > TEST_F(TestVp9Impl, > LowLayerMarkedAsRefIfHighLayerNotEncodedAndInterLayerPredIsEnabled) { > ConfigureSvc(3); >@@ -766,6 +1013,7 @@ TEST_F(TestVp9ImplFrameDropping, DifferentFrameratePerSpatialLayer) { > > codec_settings_.VP9()->numberOfSpatialLayers = num_spatial_layers; > codec_settings_.VP9()->frameDroppingOn = false; >+ codec_settings_.VP9()->flexibleMode = true; > > VideoBitrateAllocation bitrate_allocation; > for (uint8_t sl_idx = 0; sl_idx < num_spatial_layers; ++sl_idx) { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.cc >index 61542c508211496297b36cecde4794b46284e605..b899e1fabe4533009b75e17d0299d8a0bec4dcaa 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.cc >@@ -49,6 +49,9 @@ uint8_t kUpdBufIdx[4] = {0, 0, 1, 0}; > > int kMaxNumTiles4kVideo = 8; > >+// Maximum allowed PID difference for variable frame-rate mode. >+const int kMaxAllowedPidDIff = 8; >+ > // Only positive speeds, range for real-time coding currently is: 5 - 8. > // Lower means slower/better quality, higher means fastest/lower quality. 
> int GetCpuSpeed(int width, int height) { >@@ -124,6 +127,18 @@ ColorSpace ExtractVP9ColorSpace(vpx_color_space_t space_t, > } > return ColorSpace(primaries, transfer, matrix, range); > } >+ >+bool MoreLayersEnabled(const VideoBitrateAllocation& first, >+ const VideoBitrateAllocation& second) { >+ for (size_t sl_idx = 0; sl_idx < kMaxSpatialLayers; ++sl_idx) { >+ if (first.GetSpatialLayerSum(sl_idx) > 0 && >+ second.GetSpatialLayerSum(sl_idx) == 0) { >+ return true; >+ } >+ } >+ return false; >+} >+ > } // namespace > > void VP9EncoderImpl::EncoderOutputCodedPacketCallback(vpx_codec_cx_pkt* pkt, >@@ -154,12 +169,12 @@ VP9EncoderImpl::VP9EncoderImpl(const cricket::VideoCodec& codec) > field_trial::IsEnabled("WebRTC-Vp9IssueKeyFrameOnLayerDeactivation")), > is_svc_(false), > inter_layer_pred_(InterLayerPredMode::kOn), >- external_ref_control_( >- field_trial::IsEnabled("WebRTC-Vp9ExternalRefCtrl")), >+ external_ref_control_(false), // Set in InitEncode because of tests. > trusted_rate_controller_( > field_trial::IsEnabled(kVp9TrustedRateControllerFieldTrial)), > full_superframe_drop_(true), > first_frame_in_picture_(true), >+ ss_info_needed_(false), > is_flexible_mode_(false) { > memset(&codec_, 0, sizeof(codec_)); > memset(&svc_params_, 0, sizeof(vpx_svc_extra_cfg_t)); >@@ -314,14 +329,8 @@ int VP9EncoderImpl::SetRateAllocation( > > codec_.maxFramerate = frame_rate; > >- if (!SetSvcRates(bitrate_allocation)) { >- return WEBRTC_VIDEO_CODEC_ERR_PARAMETER; >- } >+ requested_bitrate_allocation_ = bitrate_allocation; > >- // Update encoder context >- if (vpx_codec_enc_config_set(encoder_, config_)) { >- return WEBRTC_VIDEO_CODEC_ERROR; >- } > return WEBRTC_VIDEO_CODEC_OK; > } > >@@ -461,6 +470,27 @@ int VP9EncoderImpl::InitEncode(const VideoCodec* inst, > > is_flexible_mode_ = inst->VP9().flexibleMode; > >+ inter_layer_pred_ = inst->VP9().interLayerPred; >+ >+ different_framerates_used_ = false; >+ for (size_t sl_idx = 1; sl_idx < num_spatial_layers_; ++sl_idx) { >+ if 
(std::abs(codec_.spatialLayers[sl_idx].maxFramerate -
>+ codec_.spatialLayers[0].maxFramerate) > 1e-9) {
>+ different_framerates_used_ = true;
>+ }
>+ }
>+
>+ if (different_framerates_used_ && !is_flexible_mode_) {
>+ RTC_LOG(LS_ERROR) << "Flexible mode required for different framerates on "
>+ "different spatial layers";
>+ return WEBRTC_VIDEO_CODEC_ERR_PARAMETER;
>+ }
>+
>+ // External reference control is required for different frame rates on
>+ // spatial layers because libvpx generates RTP-incompatible references in
>+ // this case.
>+ external_ref_control_ = field_trial::IsEnabled("WebRTC-Vp9ExternalRefCtrl") ||
>+ different_framerates_used_;
>+
> if (num_temporal_layers_ == 1) {
> gof_.SetGofInfoVP9(kTemporalStructureMode1);
> config_->temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_NOLAYERING;
>@@ -493,8 +523,14 @@ int VP9EncoderImpl::InitEncode(const VideoCodec* inst,
> return WEBRTC_VIDEO_CODEC_ERR_PARAMETER;
> }
>
>- inter_layer_pred_ = inst->VP9().interLayerPred;
>-
>+ if (external_ref_control_) {
>+ config_->temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
>+ if (num_temporal_layers_ > 1 && different_framerates_used_) {
>+ // External reference control for several temporal layers with different
>+ // frame rates on spatial layers is not implemented yet.
>+ return WEBRTC_VIDEO_CODEC_ERR_PARAMETER;
>+ }
>+ }
> ref_buf_.clear();
>
> return InitAndSetControlSettings(inst);
>@@ -575,9 +611,9 @@ int VP9EncoderImpl::InitAndSetControlSettings(const VideoCodec* inst) {
> }
>
> SvcRateAllocator init_allocator(codec_);
>- VideoBitrateAllocation allocation = init_allocator.GetAllocation(
>+ current_bitrate_allocation_ = init_allocator.GetAllocation(
> inst->startBitrate * 1000, inst->maxFramerate);
>- if (!SetSvcRates(allocation)) {
>+ if (!SetSvcRates(current_bitrate_allocation_)) {
> return WEBRTC_VIDEO_CODEC_ERR_PARAMETER;
> }
>
>@@ -595,6 +631,7 @@ int VP9EncoderImpl::InitAndSetControlSettings(const VideoCodec* inst) {
> inst->VP9().adaptiveQpMode ? 
3 : 0);
>
> vpx_codec_control(encoder_, VP9E_SET_FRAME_PARALLEL_DECODING, 0);
>+ vpx_codec_control(encoder_, VP9E_SET_SVC_GF_TEMPORAL_REF, 0);
>
> if (is_svc_) {
> vpx_codec_control(encoder_, VP9E_SET_SVC, 1);
>@@ -696,15 +733,12 @@ int VP9EncoderImpl::Encode(const VideoFrame& input_image,
> }
> }
>
>- size_t first_active_spatial_layer_id = 0;
>- if (VideoCodecMode::kScreensharing == codec_.mode) {
>- vpx_svc_layer_id_t layer_id = {0};
>- if (!force_key_frame_) {
>- // Skip encoding spatial layer frames if their target frame rate is lower
>- // than actual input frame rate.
>- const size_t gof_idx = (pics_since_key_ + 1) % gof_.num_frames_in_gof;
>- layer_id.temporal_layer_id = gof_.temporal_idx[gof_idx];
>+ vpx_svc_layer_id_t layer_id = {0};
>+ if (!force_key_frame_) {
>+ const size_t gof_idx = (pics_since_key_ + 1) % gof_.num_frames_in_gof;
>+ layer_id.temporal_layer_id = gof_.temporal_idx[gof_idx];
>
>+ if (VideoCodecMode::kScreensharing == codec_.mode) {
> const uint32_t frame_timestamp_ms =
> 1000 * input_image.timestamp() / kVideoPayloadTypeFrequency;
>
>@@ -722,8 +756,42 @@ int VP9EncoderImpl::Encode(const VideoFrame& input_image,
> return WEBRTC_VIDEO_CODEC_OK;
> }
> }
>- first_active_spatial_layer_id = layer_id.spatial_layer_id;
>- vpx_codec_control(encoder_, VP9E_SET_SVC_LAYER_ID, &layer_id);
>+ }
>+
>+ for (int sl_idx = 0; sl_idx < num_active_spatial_layers_; ++sl_idx) {
>+ layer_id.temporal_layer_id_per_spatial[sl_idx] = layer_id.temporal_layer_id;
>+ }
>+
>+ vpx_codec_control(encoder_, VP9E_SET_SVC_LAYER_ID, &layer_id);
>+
>+ if (requested_bitrate_allocation_) {
>+ bool more_layers_requested = MoreLayersEnabled(
>+ *requested_bitrate_allocation_, current_bitrate_allocation_);
>+ bool less_layers_requested = MoreLayersEnabled(
>+ current_bitrate_allocation_, *requested_bitrate_allocation_);
>+ // In SVC, new layers can be enabled only if all lower layers are encoded
>+ // and we are at the base temporal layer. 
>+ // This will delay rate allocation change until the next frame on the base
>+ // spatial layer.
>+ // In KSVC or simulcast modes a KF will be generated for a new layer, so
>+ // the allocation can be updated at any time.
>+ bool can_upswitch =
>+ inter_layer_pred_ != InterLayerPredMode::kOn ||
>+ (layer_id.spatial_layer_id == 0 && layer_id.temporal_layer_id == 0);
>+ if (!more_layers_requested || can_upswitch) {
>+ current_bitrate_allocation_ = *requested_bitrate_allocation_;
>+ requested_bitrate_allocation_ = absl::nullopt;
>+ if (!SetSvcRates(current_bitrate_allocation_)) {
>+ return WEBRTC_VIDEO_CODEC_ERR_PARAMETER;
>+ }
>+ if (less_layers_requested || more_layers_requested) {
>+ ss_info_needed_ = true;
>+ }
>+ }
>+ }
>+
>+ if (vpx_codec_enc_config_set(encoder_, config_)) {
>+ return WEBRTC_VIDEO_CODEC_ERROR;
> }
>
> RTC_DCHECK_EQ(input_image.width(), raw_->d_w);
>@@ -784,7 +852,7 @@ int VP9EncoderImpl::Encode(const VideoFrame& input_image,
>
> if (external_ref_control_) {
> vpx_svc_ref_frame_config_t ref_config =
>- SetReferences(force_key_frame_, first_active_spatial_layer_id);
>+ SetReferences(force_key_frame_, layer_id.spatial_layer_id);
>
> if (VideoCodecMode::kScreensharing == codec_.mode) {
> for (uint8_t sl_idx = 0; sl_idx < num_active_spatial_layers_; ++sl_idx) {
>@@ -844,9 +912,22 @@ void VP9EncoderImpl::PopulateCodecSpecific(CodecSpecificInfo* codec_specific,
> vp9_info->ss_data_available =
> (pkt.data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
>
>+ if (pkt.data.frame.flags & VPX_FRAME_IS_KEY) {
>+ pics_since_key_ = 0;
>+ } else if (first_frame_in_picture_) {
>+ ++pics_since_key_;
>+ }
>+
> vpx_svc_layer_id_t layer_id = {0};
> vpx_codec_control(encoder_, VP9E_GET_SVC_LAYER_ID, &layer_id);
>
>+ if (ss_info_needed_ && layer_id.temporal_layer_id == 0 &&
>+ layer_id.spatial_layer_id == 0) {
>+ // Force SS info after the layers configuration has changed. 
>+ vp9_info->ss_data_available = true; >+ ss_info_needed_ = false; >+ } >+ > RTC_CHECK_GT(num_temporal_layers_, 0); > RTC_CHECK_GT(num_active_spatial_layers_, 0); > if (num_temporal_layers_ == 1) { >@@ -868,12 +949,6 @@ void VP9EncoderImpl::PopulateCodecSpecific(CodecSpecificInfo* codec_specific, > // TODO(asapersson): this info has to be obtained from the encoder. > vp9_info->temporal_up_switch = false; > >- if (pkt.data.frame.flags & VPX_FRAME_IS_KEY) { >- pics_since_key_ = 0; >- } else if (first_frame_in_picture_) { >- ++pics_since_key_; >- } >- > const bool is_key_pic = (pics_since_key_ == 0); > const bool is_inter_layer_pred_allowed = > (inter_layer_pred_ == InterLayerPredMode::kOn || >@@ -905,8 +980,6 @@ void VP9EncoderImpl::PopulateCodecSpecific(CodecSpecificInfo* codec_specific, > vp9_info->gof_idx = kNoGofIdx; > FillReferenceIndices(pkt, pics_since_key_, vp9_info->inter_layer_predicted, > vp9_info); >- // TODO(webrtc:9794): Add fake reference to empty reference list to >- // workaround the frame buffer issue on receiver. 
> } else { > vp9_info->gof_idx = > static_cast<uint8_t>(pics_since_key_ % gof_.num_frames_in_gof); >@@ -1054,20 +1127,13 @@ void VP9EncoderImpl::UpdateReferenceBuffers(const vpx_codec_cx_pkt& pkt, > vpx_svc_ref_frame_config_t enc_layer_conf = {{0}}; > vpx_codec_control(encoder_, VP9E_GET_SVC_REF_FRAME_CONFIG, &enc_layer_conf); > >- if (enc_layer_conf.update_last[layer_id.spatial_layer_id]) { >- ref_buf_[enc_layer_conf.lst_fb_idx[layer_id.spatial_layer_id]] = >- frame_buf; >- } >- >- if (enc_layer_conf.update_alt_ref[layer_id.spatial_layer_id]) { >- ref_buf_[enc_layer_conf.alt_fb_idx[layer_id.spatial_layer_id]] = >- frame_buf; >+ for (size_t i = 0; i < kNumVp9Buffers; ++i) { >+ if (enc_layer_conf.update_buffer_slot[layer_id.spatial_layer_id] & >+ (1 << i)) { >+ ref_buf_[i] = frame_buf; >+ } > } > >- if (enc_layer_conf.update_golden[layer_id.spatial_layer_id]) { >- ref_buf_[enc_layer_conf.gld_fb_idx[layer_id.spatial_layer_id]] = >- frame_buf; >- } > } else { > RTC_DCHECK_EQ(num_spatial_layers_, 1); > RTC_DCHECK_EQ(num_temporal_layers_, 1); >@@ -1101,8 +1167,10 @@ vpx_svc_ref_frame_config_t VP9EncoderImpl::SetReferences( > // for temporal references plus 1 buffer for spatial reference. 7 buffers > // in total. > >- for (size_t sl_idx = 0; sl_idx < num_active_spatial_layers_; ++sl_idx) { >- const size_t gof_idx = pics_since_key_ % gof_.num_frames_in_gof; >+ for (size_t sl_idx = first_active_spatial_layer_id; >+ sl_idx < num_active_spatial_layers_; ++sl_idx) { >+ const size_t curr_pic_num = is_key_pic ? 0 : pics_since_key_ + 1; >+ const size_t gof_idx = curr_pic_num % gof_.num_frames_in_gof; > > if (!is_key_pic) { > // Set up temporal reference. >@@ -1114,20 +1182,29 @@ vpx_svc_ref_frame_config_t VP9EncoderImpl::SetReferences( > > // Sanity check that reference picture number is smaller than current > // picture number. 
>- const size_t curr_pic_num = pics_since_key_ + 1;
> RTC_DCHECK_LT(ref_buf_[buf_idx].pic_num, curr_pic_num);
> const size_t pid_diff = curr_pic_num - ref_buf_[buf_idx].pic_num;
>+ // Incorrect spatial layer may be in the buffer due to a key-frame.
>+ const bool same_spatial_layer =
>+ ref_buf_[buf_idx].spatial_layer_id == sl_idx;
>+ bool correct_pid = false;
>+ if (different_framerates_used_) {
>+ correct_pid = pid_diff < kMaxAllowedPidDIff;
>+ } else {
>+ // Below code assumes single temporal reference.
>+ RTC_DCHECK_EQ(gof_.num_ref_pics[gof_idx], 1);
>+ correct_pid = pid_diff == gof_.pid_diff[gof_idx][0];
>+ }
>
>- // Below code assumes single temporal referecence.
>- RTC_DCHECK_EQ(gof_.num_ref_pics[gof_idx], 1);
>- if (pid_diff == gof_.pid_diff[gof_idx][0]) {
>+ if (same_spatial_layer && correct_pid) {
> ref_config.lst_fb_idx[sl_idx] = buf_idx;
> ref_config.reference_last[sl_idx] = 1;
> } else {
> // This reference doesn't match with one specified by GOF. This can
> // only happen if spatial layer is enabled dynamically without key
> // frame. Spatial prediction is supposed to be enabled in this case.
>- RTC_DCHECK(is_inter_layer_pred_allowed); >+ RTC_DCHECK(is_inter_layer_pred_allowed && >+ sl_idx > first_active_spatial_layer_id); > } > } > >@@ -1144,7 +1221,8 @@ vpx_svc_ref_frame_config_t VP9EncoderImpl::SetReferences( > > last_updated_buf_idx.reset(); > >- if (gof_.temporal_idx[gof_idx] <= num_temporal_layers_ - 1) { >+ if (gof_.temporal_idx[gof_idx] < num_temporal_layers_ - 1 || >+ num_temporal_layers_ == 1) { > last_updated_buf_idx = sl_idx * num_temporal_refs + kUpdBufIdx[gof_idx]; > > // Ensure last frame buffer is not used for temporal prediction (it is >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.h >index 3bfab9ad5fd589badd523b419f818d1f2e8207b2..a2dab260102e37ac41d455b91cd8fc0109a00ddf 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/codecs/vp9/vp9_impl.h >@@ -112,6 +112,7 @@ class VP9EncoderImpl : public VP9Encoder { > GofInfoVP9 gof_; // Contains each frame's temporal information for > // non-flexible mode. 
> bool force_key_frame_; >+ bool different_framerates_used_; > size_t pics_since_key_; > uint8_t num_temporal_layers_; > uint8_t num_spatial_layers_; // Number of configured SLs >@@ -123,6 +124,9 @@ class VP9EncoderImpl : public VP9Encoder { > const bool trusted_rate_controller_; > const bool full_superframe_drop_; > bool first_frame_in_picture_; >+ VideoBitrateAllocation current_bitrate_allocation_; >+ absl::optional<VideoBitrateAllocation> requested_bitrate_allocation_; >+ bool ss_info_needed_; > > std::vector<FramerateController> framerate_controller_; > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/encoded_frame.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/encoded_frame.h >index c7efd400727b864afdee490d629b1ea1747e99c6..cde0ff4a7f5b6e87ab2fb836b4b7bc8230b12d85 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/encoded_frame.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/encoded_frame.h >@@ -78,8 +78,11 @@ class VCMEncodedFrame : protected EncodedImage { > /** > * Frame RTP timestamp (90kHz) > */ >- using EncodedImage::Timestamp; >+ using EncodedImage::set_size; > using EncodedImage::SetTimestamp; >+ using EncodedImage::size; >+ using EncodedImage::Timestamp; >+ > /** > * Get render time in milliseconds > */ >@@ -100,6 +103,7 @@ class VCMEncodedFrame : protected EncodedImage { > * Get video timing > */ > EncodedImage::Timing video_timing() const { return timing_; } >+ EncodedImage::Timing* video_timing_mutable() { return &timing_; } > /** > * True if this frame is complete, false otherwise > */ >@@ -119,8 +123,10 @@ class VCMEncodedFrame : protected EncodedImage { > * the object. 
> */
> const CodecSpecificInfo* CodecSpecific() const { return &_codecSpecificInfo; }
>+ void SetCodecSpecific(const CodecSpecificInfo* codec_specific) {
>+ _codecSpecificInfo = *codec_specific;
>+ }
>
>- protected:
> /**
> * Verifies that current allocated buffer size is larger than or equal to the
> * input size.
>@@ -131,6 +137,7 @@ class VCMEncodedFrame : protected EncodedImage {
> */
> void VerifyAndAllocate(size_t minimumSize);
>
>+ protected:
> void Reset();
>
> void CopyCodecSpecific(const RTPVideoHeader* header);
>diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.cc
>index 04fba84caf55b447c057da93d06e654016ec1ec3..e15b1390e24afe76c6793cb710229106ba0d6a25 100644
>--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.cc
>+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.cc
>@@ -80,10 +80,10 @@ FrameBuffer::ReturnReason FrameBuffer::NextFrame(
>
> wait_ms = max_wait_time_ms;
>
>- // Need to hold |crit_| in order to use |frames_|, therefore we
>+ // Need to hold |crit_| in order to access frames_to_decode_, therefore we
> // set it here in the loop instead of outside the loop in order to not
>- // acquire the lock unnecesserily.
>- next_frame_it_ = frames_.end();
>+ // acquire the lock unnecessarily.
>+ frames_to_decode_.clear();
>
> // |frame_it| points to the first frame after the
> // |last_decoded_frame_it_|.
>@@ -121,7 +121,53 @@ FrameBuffer::ReturnReason FrameBuffer::NextFrame(
> continue;
> }
>
>- next_frame_it_ = frame_it;
>+ // Only ever return all parts of a superframe. Therefore skip this
>+ // frame if it's not the beginning of a superframe.
>+ if (frame->inter_layer_predicted) {
>+ continue;
>+ }
>+
>+ // Gather all remaining frames for the same superframe.
>+ std::vector<FrameMap::iterator> current_superframe; >+ current_superframe.push_back(frame_it); >+ bool last_layer_completed = >+ frame_it->second.frame->is_last_spatial_layer; >+ FrameMap::iterator next_frame_it = frame_it; >+ while (true) { >+ ++next_frame_it; >+ if (next_frame_it == frames_.end() || >+ next_frame_it->first.picture_id != frame->id.picture_id || >+ !next_frame_it->second.continuous) { >+ break; >+ } >+ // Check if the next frame has some undecoded references other than >+ // the previous frame in the same superframe. >+ size_t num_allowed_undecoded_refs = >+ (next_frame_it->second.frame->inter_layer_predicted) ? 1 : 0; >+ if (next_frame_it->second.num_missing_decodable > >+ num_allowed_undecoded_refs) { >+ break; >+ } >+ // All frames in the superframe should have the same timestamp. >+ if (frame->Timestamp() != next_frame_it->second.frame->Timestamp()) { >+ RTC_LOG(LS_WARNING) >+ << "Frames in a single superframe have different" >+ " timestamps. Skipping undecodable superframe."; >+ break; >+ } >+ current_superframe.push_back(next_frame_it); >+ last_layer_completed = >+ next_frame_it->second.frame->is_last_spatial_layer; >+ } >+ // Check if the current superframe is complete. >+ // TODO(bugs.webrtc.org/10064): consider returning all available to >+ // decode frames even if the superframe is not complete yet. 
>+ if (!last_layer_completed) {
>+ continue;
>+ }
>+
>+ frames_to_decode_ = std::move(current_superframe);
>+
> if (frame->RenderTime() == -1) {
> frame->SetRenderTime(
> timing_->RenderTimeMs(frame->Timestamp(), now_ms));
>@@ -147,9 +193,10 @@ FrameBuffer::ReturnReason FrameBuffer::NextFrame(
> {
> rtc::CritScope lock(&crit_);
> now_ms = clock_->TimeInMilliseconds();
>- if (next_frame_it_ != frames_.end()) {
>- std::unique_ptr<EncodedFrame> frame =
>- std::move(next_frame_it_->second.frame);
>+ std::vector<EncodedFrame*> frames_out;
>+ for (const FrameMap::iterator& frame_it : frames_to_decode_) {
>+ RTC_DCHECK(frame_it != frames_.end());
>+ EncodedFrame* frame = frame_it->second.frame.release();
>
> if (!frame->delayed_by_retransmission()) {
> int64_t frame_delay;
>@@ -180,14 +227,22 @@ FrameBuffer::ReturnReason FrameBuffer::NextFrame(
>
> UpdateJitterDelay();
> UpdateTimingFrameInfo();
>- PropagateDecodability(next_frame_it_->second);
>+ PropagateDecodability(frame_it->second);
>
>- AdvanceLastDecodedFrame(next_frame_it_);
>+ AdvanceLastDecodedFrame(frame_it);
> last_decoded_frame_timestamp_ = frame->Timestamp();
>- *frame_out = std::move(frame);
>+ frames_out.push_back(frame);
>+ }
>+
>+ if (!frames_out.empty()) {
>+ if (frames_out.size() == 1) {
>+ frame_out->reset(frames_out[0]);
>+ } else {
>+ frame_out->reset(CombineAndDeleteFrames(frames_out));
>+ }
> return kFrameFound;
> }
>- }
>+ } // rtc::CritScope lock(&crit_)
>
> if (latest_return_time_ms - now_ms > 0) {
> // If |next_frame_it_ == frames_.end()| and there is still time left, it
>@@ -196,7 +251,6 @@ FrameBuffer::ReturnReason FrameBuffer::NextFrame(
> // remaining time and then return.
> return NextFrame(latest_return_time_ms - now_ms, frame_out); > } >- > return kTimeout; > } > >@@ -599,11 +653,38 @@ void FrameBuffer::ClearFramesAndHistory() { > frames_.clear(); > last_decoded_frame_it_ = frames_.end(); > last_continuous_frame_it_ = frames_.end(); >- next_frame_it_ = frames_.end(); >+ frames_to_decode_.clear(); > num_frames_history_ = 0; > num_frames_buffered_ = 0; > } > >+EncodedFrame* FrameBuffer::CombineAndDeleteFrames( >+ const std::vector<EncodedFrame*>& frames) const { >+ RTC_DCHECK(!frames.empty()); >+ EncodedFrame* frame = frames[0]; >+ size_t total_length = 0; >+ for (size_t i = 0; i < frames.size(); ++i) { >+ total_length += frames[i]->size(); >+ } >+ frame->VerifyAndAllocate(total_length); >+ uint8_t* buffer = frame->MutableBuffer(); >+ // Append all remaining frames to the first one. >+ size_t used_buffer_bytes = frame->size(); >+ for (size_t i = 1; i < frames.size(); ++i) { >+ EncodedFrame* frame_to_append = frames[i]; >+ memcpy(buffer + used_buffer_bytes, frame_to_append->Buffer(), >+ frame_to_append->size()); >+ used_buffer_bytes += frame_to_append->size(); >+ frame->video_timing_mutable()->network2_timestamp_ms = >+ frame_to_append->video_timing().network2_timestamp_ms; >+ frame->video_timing_mutable()->receive_finish_ms = >+ frame_to_append->video_timing().receive_finish_ms; >+ delete frame_to_append; >+ } >+ frame->set_size(total_length); >+ return frame; >+} >+ > FrameBuffer::FrameInfo::FrameInfo() = default; > FrameBuffer::FrameInfo::FrameInfo(FrameInfo&&) = default; > FrameBuffer::FrameInfo::~FrameInfo() = default; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.h b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.h >index dc5e5a2e372fdc10ac80ac6ee71f769c8d86c983..c311bc8f2f1c7e897757237372b4f883ca012a96 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.h >+++ 
b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2.h
>@@ -15,6 +15,7 @@
> #include <map>
> #include <memory>
> #include <utility>
>+#include <vector>
>
> #include "api/video/encoded_frame.h"
> #include "modules/video_coding/include/video_coding_defines.h"
>@@ -156,6 +157,13 @@ class FrameBuffer {
> bool HasBadRenderTiming(const EncodedFrame& frame, int64_t now_ms)
> RTC_EXCLUSIVE_LOCKS_REQUIRED(crit_);
>
>+ // The cleaner solution would be to have the NextFrame function return a
>+ // vector of frames, but until the decoding pipeline can support decoding
>+ // multiple frames at the same time we combine all frames into one frame and
>+ // return it. See bugs.webrtc.org/10064
>+ EncodedFrame* CombineAndDeleteFrames(
>+ const std::vector<EncodedFrame*>& frames) const;
>+
> FrameMap frames_ RTC_GUARDED_BY(crit_);
>
> rtc::CriticalSection crit_;
>@@ -167,7 +175,7 @@ class FrameBuffer {
> absl::optional<uint32_t> last_decoded_frame_timestamp_ RTC_GUARDED_BY(crit_);
> FrameMap::iterator last_decoded_frame_it_ RTC_GUARDED_BY(crit_);
> FrameMap::iterator last_continuous_frame_it_ RTC_GUARDED_BY(crit_);
>- FrameMap::iterator next_frame_it_ RTC_GUARDED_BY(crit_);
>+ std::vector<FrameMap::iterator> frames_to_decode_ RTC_GUARDED_BY(crit_);
> int num_frames_history_ RTC_GUARDED_BY(crit_);
> int num_frames_buffered_ RTC_GUARDED_BY(crit_);
> bool stopped_ RTC_GUARDED_BY(crit_);
>diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2_unittest.cc
>index e10f78527957e77c075a0f0827b37d6d7bf4caf5..c8258e480e8681d32d83d5f03a774476ef45ec0d 100644
>--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2_unittest.cc
>+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_buffer2_unittest.cc
>@@ -126,6 +126,7 @@ class TestFrameBuffer2 : public ::testing::Test {
> static constexpr int kFps1 =
1000; > static constexpr int kFps10 = kFps1 / 10; > static constexpr int kFps20 = kFps1 / 20; >+ static constexpr size_t kFrameSize = 10; > > TestFrameBuffer2() > : clock_(0), >@@ -152,6 +153,7 @@ class TestFrameBuffer2 : public ::testing::Test { > uint8_t spatial_layer, > int64_t ts_ms, > bool inter_layer_predicted, >+ bool last_spatial_layer, > T... refs) { > static_assert(sizeof...(refs) <= kMaxReferences, > "To many references specified for EncodedFrame."); >@@ -164,6 +166,10 @@ class TestFrameBuffer2 : public ::testing::Test { > frame->SetTimestamp(ts_ms * 90); > frame->num_references = references.size(); > frame->inter_layer_predicted = inter_layer_predicted; >+ frame->is_last_spatial_layer = last_spatial_layer; >+ // Add some data to buffer. >+ frame->VerifyAndAllocate(kFrameSize); >+ frame->SetSize(kFrameSize); > for (size_t r = 0; r < references.size(); ++r) > frame->references[r] = references[r]; > >@@ -196,6 +202,13 @@ class TestFrameBuffer2 : public ::testing::Test { > ASSERT_EQ(spatial_layer, frames_[index]->id.spatial_layer); > } > >+ void CheckFrameSize(size_t index, size_t size) { >+ rtc::CritScope lock(&crit_); >+ ASSERT_LT(index, frames_.size()); >+ ASSERT_TRUE(frames_[index]); >+ ASSERT_EQ(frames_[index]->size(), size); >+ } >+ > void CheckNoFrame(size_t index) { > rtc::CritScope lock(&crit_); > ASSERT_LT(index, frames_.size()); >@@ -248,7 +261,7 @@ TEST_F(TestFrameBuffer2, WaitForFrame) { > uint32_t ts = Rand(); > > ExtractFrame(50); >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 0, ts, false, true); > CheckFrame(0, pid, 0); > } > >@@ -256,13 +269,11 @@ TEST_F(TestFrameBuffer2, OneSuperFrame) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >- ExtractFrame(); >- InsertFrame(pid, 1, ts, true); >+ InsertFrame(pid, 0, ts, false, false); >+ InsertFrame(pid, 1, ts, true, true); > ExtractFrame(); > > CheckFrame(0, pid, 0); >- CheckFrame(1, pid, 1); > } > > TEST_F(TestFrameBuffer2, SetPlayoutDelay) { >@@ 
-295,8 +306,8 @@ TEST_F(TestFrameBuffer2, DISABLED_OneUnorderedSuperFrame) { > uint32_t ts = Rand(); > > ExtractFrame(50); >- InsertFrame(pid, 1, ts, true); >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 1, ts, true, true); >+ InsertFrame(pid, 0, ts, false, false); > ExtractFrame(); > > CheckFrame(0, pid, 0); >@@ -307,14 +318,14 @@ TEST_F(TestFrameBuffer2, DISABLED_OneLayerStreamReordered) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 0, ts, false, true); > ExtractFrame(); > CheckFrame(0, pid, 0); > for (int i = 1; i < 10; i += 2) { > ExtractFrame(50); >- InsertFrame(pid + i + 1, 0, ts + (i + 1) * kFps10, false, pid + i); >+ InsertFrame(pid + i + 1, 0, ts + (i + 1) * kFps10, false, true, pid + i); > clock_.AdvanceTimeMilliseconds(kFps10); >- InsertFrame(pid + i, 0, ts + i * kFps10, false, pid + i - 1); >+ InsertFrame(pid + i, 0, ts + i * kFps10, false, true, pid + i - 1); > clock_.AdvanceTimeMilliseconds(kFps10); > ExtractFrame(); > CheckFrame(i, pid + i, 0); >@@ -332,9 +343,9 @@ TEST_F(TestFrameBuffer2, MissingFrame) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >- InsertFrame(pid + 2, 0, ts, false, pid); >- InsertFrame(pid + 3, 0, ts, false, pid + 1, pid + 2); >+ InsertFrame(pid, 0, ts, false, true); >+ InsertFrame(pid + 2, 0, ts, false, true, pid); >+ InsertFrame(pid + 3, 0, ts, false, true, pid + 1, pid + 2); > ExtractFrame(); > ExtractFrame(); > ExtractFrame(); >@@ -348,11 +359,11 @@ TEST_F(TestFrameBuffer2, OneLayerStream) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 0, ts, false, true); > ExtractFrame(); > CheckFrame(0, pid, 0); > for (int i = 1; i < 10; ++i) { >- InsertFrame(pid + i, 0, ts + i * kFps10, false, pid + i - 1); >+ InsertFrame(pid + i, 0, ts + i * kFps10, false, true, pid + i - 1); > ExtractFrame(); > clock_.AdvanceTimeMilliseconds(kFps10); > CheckFrame(i, pid + i, 0); >@@ 
-363,12 +374,13 @@ TEST_F(TestFrameBuffer2, DropTemporalLayerSlowDecoder) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >- InsertFrame(pid + 1, 0, ts + kFps20, false, pid); >+ InsertFrame(pid, 0, ts, false, true); >+ InsertFrame(pid + 1, 0, ts + kFps20, false, true, pid); > for (int i = 2; i < 10; i += 2) { > uint32_t ts_tl0 = ts + i / 2 * kFps10; >- InsertFrame(pid + i, 0, ts_tl0, false, pid + i - 2); >- InsertFrame(pid + i + 1, 0, ts_tl0 + kFps20, false, pid + i, pid + i - 1); >+ InsertFrame(pid + i, 0, ts_tl0, false, true, pid + i - 2); >+ InsertFrame(pid + i + 1, 0, ts_tl0 + kFps20, false, true, pid + i, >+ pid + i - 1); > } > > for (int i = 0; i < 10; ++i) { >@@ -388,49 +400,15 @@ TEST_F(TestFrameBuffer2, DropTemporalLayerSlowDecoder) { > CheckNoFrame(9); > } > >-TEST_F(TestFrameBuffer2, DropSpatialLayerSlowDecoder) { >- uint16_t pid = Rand(); >- uint32_t ts = Rand(); >- >- InsertFrame(pid, 0, ts, false); >- InsertFrame(pid, 1, ts, false); >- for (int i = 1; i < 6; ++i) { >- uint32_t ts_tl0 = ts + i * kFps10; >- InsertFrame(pid + i, 0, ts_tl0, false, pid + i - 1); >- InsertFrame(pid + i, 1, ts_tl0, false, pid + i - 1); >- } >- >- ExtractFrame(); >- ExtractFrame(); >- clock_.AdvanceTimeMilliseconds(57); >- for (int i = 2; i < 12; ++i) { >- ExtractFrame(); >- clock_.AdvanceTimeMilliseconds(57); >- } >- >- CheckFrame(0, pid, 0); >- CheckFrame(1, pid, 1); >- CheckFrame(2, pid + 1, 0); >- CheckFrame(3, pid + 1, 1); >- CheckFrame(4, pid + 2, 0); >- CheckFrame(5, pid + 2, 1); >- CheckFrame(6, pid + 3, 0); >- CheckFrame(7, pid + 4, 0); >- CheckFrame(8, pid + 5, 0); >- CheckNoFrame(9); >- CheckNoFrame(10); >- CheckNoFrame(11); >-} >- > TEST_F(TestFrameBuffer2, InsertLateFrame) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 0, ts, false, true); > ExtractFrame(); >- InsertFrame(pid + 2, 0, ts, false); >+ InsertFrame(pid + 2, 0, ts, false, true); > ExtractFrame(); >- 
InsertFrame(pid + 1, 0, ts, false, pid); >+ InsertFrame(pid + 1, 0, ts, false, true, pid); > ExtractFrame(); > > CheckFrame(0, pid, 0); >@@ -443,12 +421,12 @@ TEST_F(TestFrameBuffer2, ProtectionMode) { > uint32_t ts = Rand(); > > EXPECT_CALL(jitter_estimator_, GetJitterEstimate(1.0)); >- InsertFrame(pid, 0, ts, false); >+ InsertFrame(pid, 0, ts, false, true); > ExtractFrame(); > > buffer_->SetProtectionMode(kProtectionNackFEC); > EXPECT_CALL(jitter_estimator_, GetJitterEstimate(0.0)); >- InsertFrame(pid + 1, 0, ts, false); >+ InsertFrame(pid + 1, 0, ts, false, true); > ExtractFrame(); > } > >@@ -456,45 +434,45 @@ TEST_F(TestFrameBuffer2, NoContinuousFrame) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- EXPECT_EQ(-1, InsertFrame(pid + 1, 0, ts, false, pid)); >+ EXPECT_EQ(-1, InsertFrame(pid + 1, 0, ts, false, true, pid)); > } > > TEST_F(TestFrameBuffer2, LastContinuousFrameSingleLayer) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false)); >- EXPECT_EQ(pid, InsertFrame(pid + 2, 0, ts, false, pid + 1)); >- EXPECT_EQ(pid + 2, InsertFrame(pid + 1, 0, ts, false, pid)); >- EXPECT_EQ(pid + 2, InsertFrame(pid + 4, 0, ts, false, pid + 3)); >- EXPECT_EQ(pid + 5, InsertFrame(pid + 5, 0, ts, false)); >+ EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false, true)); >+ EXPECT_EQ(pid, InsertFrame(pid + 2, 0, ts, false, true, pid + 1)); >+ EXPECT_EQ(pid + 2, InsertFrame(pid + 1, 0, ts, false, true, pid)); >+ EXPECT_EQ(pid + 2, InsertFrame(pid + 4, 0, ts, false, true, pid + 3)); >+ EXPECT_EQ(pid + 5, InsertFrame(pid + 5, 0, ts, false, true)); > } > > TEST_F(TestFrameBuffer2, LastContinuousFrameTwoLayers) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false)); >- EXPECT_EQ(pid, InsertFrame(pid, 1, ts, true)); >- EXPECT_EQ(pid, InsertFrame(pid + 1, 1, ts, true, pid)); >- EXPECT_EQ(pid, InsertFrame(pid + 2, 0, ts, false, pid + 1)); >- EXPECT_EQ(pid, InsertFrame(pid + 2, 1, ts, true, pid + 
1)); >- EXPECT_EQ(pid, InsertFrame(pid + 3, 0, ts, false, pid + 2)); >- EXPECT_EQ(pid + 3, InsertFrame(pid + 1, 0, ts, false, pid)); >- EXPECT_EQ(pid + 3, InsertFrame(pid + 3, 1, ts, true, pid + 2)); >+ EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false, false)); >+ EXPECT_EQ(pid, InsertFrame(pid, 1, ts, true, true)); >+ EXPECT_EQ(pid, InsertFrame(pid + 1, 1, ts, true, true, pid)); >+ EXPECT_EQ(pid, InsertFrame(pid + 2, 0, ts, false, false, pid + 1)); >+ EXPECT_EQ(pid, InsertFrame(pid + 2, 1, ts, true, true, pid + 1)); >+ EXPECT_EQ(pid, InsertFrame(pid + 3, 0, ts, false, false, pid + 2)); >+ EXPECT_EQ(pid + 3, InsertFrame(pid + 1, 0, ts, false, false, pid)); >+ EXPECT_EQ(pid + 3, InsertFrame(pid + 3, 1, ts, true, true, pid + 2)); > } > > TEST_F(TestFrameBuffer2, PictureIdJumpBack) { > uint16_t pid = Rand(); > uint32_t ts = Rand(); > >- EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false)); >- EXPECT_EQ(pid + 1, InsertFrame(pid + 1, 0, ts + 1, false, pid)); >+ EXPECT_EQ(pid, InsertFrame(pid, 0, ts, false, true)); >+ EXPECT_EQ(pid + 1, InsertFrame(pid + 1, 0, ts + 1, false, true, pid)); > ExtractFrame(); > CheckFrame(0, pid, 0); > > // Jump back in pid but increase ts. 
>- EXPECT_EQ(pid - 1, InsertFrame(pid - 1, 0, ts + 2, false)); >+ EXPECT_EQ(pid - 1, InsertFrame(pid - 1, 0, ts + 2, false, true)); > ExtractFrame(); > ExtractFrame(); > CheckFrame(1, pid - 1, 0); >@@ -513,6 +491,7 @@ TEST_F(TestFrameBuffer2, StatsCallback) { > > { > std::unique_ptr<FrameObjectFake> frame(new FrameObjectFake()); >+ frame->VerifyAndAllocate(kFrameSize); > frame->SetSize(kFrameSize); > frame->id.picture_id = pid; > frame->id.spatial_layer = 0; >@@ -528,42 +507,42 @@ TEST_F(TestFrameBuffer2, StatsCallback) { > } > > TEST_F(TestFrameBuffer2, ForwardJumps) { >- EXPECT_EQ(5453, InsertFrame(5453, 0, 1, false)); >+ EXPECT_EQ(5453, InsertFrame(5453, 0, 1, false, true)); > ExtractFrame(); >- EXPECT_EQ(5454, InsertFrame(5454, 0, 1, false, 5453)); >+ EXPECT_EQ(5454, InsertFrame(5454, 0, 1, false, true, 5453)); > ExtractFrame(); >- EXPECT_EQ(15670, InsertFrame(15670, 0, 1, false)); >+ EXPECT_EQ(15670, InsertFrame(15670, 0, 1, false, true)); > ExtractFrame(); >- EXPECT_EQ(29804, InsertFrame(29804, 0, 1, false)); >+ EXPECT_EQ(29804, InsertFrame(29804, 0, 1, false, true)); > ExtractFrame(); >- EXPECT_EQ(29805, InsertFrame(29805, 0, 1, false, 29804)); >+ EXPECT_EQ(29805, InsertFrame(29805, 0, 1, false, true, 29804)); > ExtractFrame(); >- EXPECT_EQ(29806, InsertFrame(29806, 0, 1, false, 29805)); >+ EXPECT_EQ(29806, InsertFrame(29806, 0, 1, false, true, 29805)); > ExtractFrame(); >- EXPECT_EQ(33819, InsertFrame(33819, 0, 1, false)); >+ EXPECT_EQ(33819, InsertFrame(33819, 0, 1, false, true)); > ExtractFrame(); >- EXPECT_EQ(41248, InsertFrame(41248, 0, 1, false)); >+ EXPECT_EQ(41248, InsertFrame(41248, 0, 1, false, true)); > ExtractFrame(); > } > > TEST_F(TestFrameBuffer2, DuplicateFrames) { >- EXPECT_EQ(22256, InsertFrame(22256, 0, 1, false)); >+ EXPECT_EQ(22256, InsertFrame(22256, 0, 1, false, true)); > ExtractFrame(); >- EXPECT_EQ(22256, InsertFrame(22256, 0, 1, false)); >+ EXPECT_EQ(22256, InsertFrame(22256, 0, 1, false, true)); > } > > // TODO(philipel): implement 
more unittests related to invalid references. > TEST_F(TestFrameBuffer2, InvalidReferences) { >- EXPECT_EQ(-1, InsertFrame(0, 0, 1000, false, 2)); >- EXPECT_EQ(1, InsertFrame(1, 0, 2000, false)); >+ EXPECT_EQ(-1, InsertFrame(0, 0, 1000, false, true, 2)); >+ EXPECT_EQ(1, InsertFrame(1, 0, 2000, false, true)); > ExtractFrame(); >- EXPECT_EQ(2, InsertFrame(2, 0, 3000, false, 1)); >+ EXPECT_EQ(2, InsertFrame(2, 0, 3000, false, true, 1)); > } > > TEST_F(TestFrameBuffer2, KeyframeRequired) { >- EXPECT_EQ(1, InsertFrame(1, 0, 1000, false)); >- EXPECT_EQ(2, InsertFrame(2, 0, 2000, false, 1)); >- EXPECT_EQ(3, InsertFrame(3, 0, 3000, false)); >+ EXPECT_EQ(1, InsertFrame(1, 0, 1000, false, true)); >+ EXPECT_EQ(2, InsertFrame(2, 0, 2000, false, true, 1)); >+ EXPECT_EQ(3, InsertFrame(3, 0, 3000, false, true)); > ExtractFrame(); > ExtractFrame(0, true); > ExtractFrame(); >@@ -577,42 +556,81 @@ TEST_F(TestFrameBuffer2, KeyframeClearsFullBuffer) { > const int kMaxBufferSize = 600; > > for (int i = 1; i <= kMaxBufferSize; ++i) >- EXPECT_EQ(-1, InsertFrame(i, 0, i * 1000, false, i - 1)); >+ EXPECT_EQ(-1, InsertFrame(i, 0, i * 1000, false, true, i - 1)); > ExtractFrame(); > CheckNoFrame(0); > >- EXPECT_EQ( >- kMaxBufferSize + 1, >- InsertFrame(kMaxBufferSize + 1, 0, (kMaxBufferSize + 1) * 1000, false)); >+ EXPECT_EQ(kMaxBufferSize + 1, >+ InsertFrame(kMaxBufferSize + 1, 0, (kMaxBufferSize + 1) * 1000, >+ false, true)); > ExtractFrame(); > CheckFrame(1, kMaxBufferSize + 1, 0); > } > > TEST_F(TestFrameBuffer2, DontUpdateOnUndecodableFrame) { >- InsertFrame(1, 0, 0, false); >+ InsertFrame(1, 0, 0, false, true); > ExtractFrame(0, true); >- InsertFrame(3, 0, 0, false, 2, 0); >- InsertFrame(3, 0, 0, false, 0); >- InsertFrame(2, 0, 0, false); >+ InsertFrame(3, 0, 0, false, true, 2, 0); >+ InsertFrame(3, 0, 0, false, true, 0); >+ InsertFrame(2, 0, 0, false, true); > ExtractFrame(0, true); > ExtractFrame(0, true); > } > > TEST_F(TestFrameBuffer2, DontDecodeOlderTimestamp) { >- InsertFrame(2, 
0, 1, false); >- InsertFrame(1, 0, 2, false); // Older picture id but newer timestamp. >+ InsertFrame(2, 0, 1, false, true); >+ InsertFrame(1, 0, 2, false, true); // Older picture id but newer timestamp. > ExtractFrame(0); > ExtractFrame(0); > CheckFrame(0, 1, 0); > CheckNoFrame(1); > >- InsertFrame(3, 0, 4, false); >- InsertFrame(4, 0, 3, false); // Newer picture id but older timestamp. >+ InsertFrame(3, 0, 4, false, true); >+ InsertFrame(4, 0, 3, false, true); // Newer picture id but older timestamp. > ExtractFrame(0); > ExtractFrame(0); > CheckFrame(2, 3, 0); > CheckNoFrame(3); > } > >+TEST_F(TestFrameBuffer2, CombineFramesToSuperframe) { >+ uint16_t pid = Rand(); >+ uint32_t ts = Rand(); >+ >+ InsertFrame(pid, 0, ts, false, false); >+ InsertFrame(pid, 1, ts, true, true); >+ ExtractFrame(0); >+ ExtractFrame(0); >+ CheckFrame(0, pid, 0); >+ CheckNoFrame(1); >+ // Two frames should be combined and returned together. >+ CheckFrameSize(0, kFrameSize * 2); >+} >+ >+TEST_F(TestFrameBuffer2, HigherSpatialLayerNonDecodable) { >+ uint16_t pid = Rand(); >+ uint32_t ts = Rand(); >+ >+ InsertFrame(pid, 0, ts, false, false); >+ InsertFrame(pid, 1, ts, true, true); >+ >+ ExtractFrame(0); >+ CheckFrame(0, pid, 0); >+ >+ InsertFrame(pid + 1, 1, ts + kFps20, false, true, pid); >+ InsertFrame(pid + 2, 0, ts + kFps10, false, false, pid); >+ InsertFrame(pid + 2, 1, ts + kFps10, true, true, pid + 1); >+ >+ clock_.AdvanceTimeMilliseconds(1000); >+ // Frame pid+1 is decodable but too late. >+ // In superframe pid+2 frame sid=0 is decodable, but frame sid=1 is not. >+ // Incorrect implementation might skip pid+1 frame and output undecodable >+ // pid+2 instead. 
>+ ExtractFrame(); >+ ExtractFrame(); >+ CheckFrame(1, pid + 1, 1); >+ CheckFrame(2, pid + 2, 0); >+} >+ > } // namespace video_coding > } // namespace webrtc >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_object.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_object.cc >index 0172c55493bba9058f2997c894b50d7990af7e8a..4c179101c91f8ad984a98806fdb3d696fa5eec36 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_object.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/frame_object.cc >@@ -96,6 +96,7 @@ RtpFrameObject::RtpFrameObject(PacketBuffer* packet_buffer, > timing_.receive_finish_ms = last_packet->receive_time_ms; > } > timing_.flags = last_packet->video_header.video_timing.flags; >+ is_last_spatial_layer = last_packet->markerBit; > } > > RtpFrameObject::~RtpFrameObject() { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/rtp_frame_reference_finder.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/rtp_frame_reference_finder.cc >index 40b16f4156e872bc5fc772964d66d4e047f3fc4a..f6fce17215480640e328c43d5c8519d34913c3d9 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/rtp_frame_reference_finder.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/modules/video_coding/rtp_frame_reference_finder.cc >@@ -489,12 +489,24 @@ RtpFrameReferenceFinder::FrameDecision RtpFrameReferenceFinder::ManageFrameVp9( > UnwrapPictureIds(frame); > return kHandOff; > } >- } else { >- if (frame->frame_type() == kVideoFrameKey) { >+ } else if (frame->frame_type() == kVideoFrameKey) { >+ if (frame->id.spatial_layer == 0) { > RTC_LOG(LS_WARNING) << "Received keyframe without scalability structure"; > return kDrop; > } >+ const auto gof_info_it = gof_info_.find(unwrapped_tl0); >+ if (gof_info_it == gof_info_.end()) >+ return kStash; >+ >+ info = &gof_info_it->second; > >+ if (frame->frame_type() == 
kVideoFrameKey) { >+ frame->num_references = 0; >+ FrameReceivedVp9(frame->id.picture_id, info); >+ UnwrapPictureIds(frame); >+ return kHandOff; >+ } >+ } else { > auto gof_info_it = gof_info_.find( > (codec_header.temporal_idx == 0) ? unwrapped_tl0 - 1 : unwrapped_tl0); > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel.cc >index f61291c5ddc00826eb4247364ac4f5a1ba251203..0d21613ba4bf8c95b836d450b8b0944d893f72a0 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel.cc >@@ -1098,7 +1098,9 @@ void P2PTransportChannel::OnCandidateResolved( > Candidate candidate = p->candidate_; > resolvers_.erase(p); > AddRemoteCandidateWithResolver(candidate, resolver); >- resolver->Destroy(false); >+ invoker_.AsyncInvoke<void>( >+ RTC_FROM_HERE, thread(), >+ rtc::Bind(&rtc::AsyncResolverInterface::Destroy, resolver, false)); > } > > void P2PTransportChannel::AddRemoteCandidateWithResolver( >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel_unittest.cc >index 2ab3d88afbc597ad7e520c7e8f1bf9ffc0c48cf6..8549794b47a874d02a3ad39487da37abb87a99c1 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel_unittest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/p2ptransportchannel_unittest.cc >@@ -43,6 +43,7 @@ namespace { > > using rtc::SocketAddress; > using ::testing::_; >+using ::testing::Assign; > using ::testing::DoAll; > using ::testing::InSequence; > using ::testing::InvokeWithoutArgs; >@@ -4565,13 +4566,18 @@ TEST_F(P2PTransportChannelMostLikelyToWorkFirstTest, TestTcpTurn) { > } > > // Test that a resolver is created, asked for a result, and destroyed >-// when the address is a hostname. 
>+// when the address is a hostname. The destruction should happen even >+// if the channel is not destroyed. > TEST(P2PTransportChannelResolverTest, HostnameCandidateIsResolved) { > rtc::MockAsyncResolver mock_async_resolver; > EXPECT_CALL(mock_async_resolver, GetError()).WillOnce(Return(0)); > EXPECT_CALL(mock_async_resolver, GetResolvedAddress(_, _)) > .WillOnce(Return(true)); >- EXPECT_CALL(mock_async_resolver, Destroy(_)); >+ // Destroy is called asynchronously after the address is resolved, >+ // so we need a variable to wait on. >+ bool destroy_called = false; >+ EXPECT_CALL(mock_async_resolver, Destroy(_)) >+ .WillOnce(Assign(&destroy_called, true)); > webrtc::MockAsyncResolverFactory mock_async_resolver_factory; > EXPECT_CALL(mock_async_resolver_factory, Create()) > .WillOnce(Return(&mock_async_resolver)); >@@ -4586,6 +4592,7 @@ TEST(P2PTransportChannelResolverTest, HostnameCandidateIsResolved) { > ASSERT_EQ_WAIT(1u, channel.remote_candidates().size(), kDefaultTimeout); > const RemoteCandidate& candidate = channel.remote_candidates()[0]; > EXPECT_FALSE(candidate.address().IsUnresolvedIP()); >+ WAIT(destroy_called, kShortTimeout); > } > > // Test that if we signal a hostname candidate after the remote endpoint >@@ -4643,7 +4650,11 @@ TEST_F(P2PTransportChannelTest, > EXPECT_CALL(mock_async_resolver, GetResolvedAddress(_, _)) > .WillOnce(DoAll(SetArgPointee<1>(local_address), Return(true))); > } >- EXPECT_CALL(mock_async_resolver, Destroy(_)); >+ // Destroy is called asynchronously after the address is resolved, >+ // so we need a variable to wait on. >+ bool destroy_called = false; >+ EXPECT_CALL(mock_async_resolver, Destroy(_)) >+ .WillOnce(Assign(&destroy_called, true)); > ResumeCandidates(0); > // Verify ep2's selected connection is updated to use the 'local' candidate. 
> EXPECT_EQ_WAIT(LOCAL_PORT_TYPE, >@@ -4651,6 +4662,7 @@ TEST_F(P2PTransportChannelTest, > kMediumTimeout); > EXPECT_EQ(selected_connection, ep2_ch1()->selected_connection()); > >+ WAIT(destroy_called, kShortTimeout); > DestroyChannels(); > } > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.cc >index 5b8e02d28ff894d3d376efab3b07f0f3bee40cbf..4954744cc8e976521c840f14e8a5d09803554359 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.cc >@@ -430,50 +430,42 @@ void Port::AddAddress(const rtc::SocketAddress& address, > c.set_network_name(network_->name()); > c.set_network_type(network_->type()); > c.set_url(url); >- c.set_related_address(related_address); >- >- bool pending = MaybeObfuscateAddress(&c, type, is_final); >- >- if (!pending) { >- FinishAddingAddress(c, is_final); >- } >-} >- >-bool Port::MaybeObfuscateAddress(Candidate* c, >- const std::string& type, >- bool is_final) { > // TODO(bugs.webrtc.org/9723): Use a config to control the feature of IP > // handling with mDNS. >- if (network_->GetMdnsResponder() == nullptr) { >- return false; >- } >- if (type != LOCAL_PORT_TYPE) { >- return false; >- } >- >- auto copy = *c; >- auto weak_ptr = weak_factory_.GetWeakPtr(); >- auto callback = [weak_ptr, copy, is_final](const rtc::IPAddress& addr, >- const std::string& name) mutable { >- RTC_DCHECK(copy.address().ipaddr() == addr); >- rtc::SocketAddress hostname_address(name, copy.address().port()); >- // In Port and Connection, we need the IP address information to >- // correctly handle the update of candidate type to prflx. The removal >- // of IP address when signaling this candidate will take place in >- // BasicPortAllocatorSession::OnCandidateReady, via SanitizeCandidate. 
>- hostname_address.SetResolvedIP(addr); >- copy.set_address(hostname_address); >- copy.set_related_address(rtc::SocketAddress()); >- if (weak_ptr != nullptr) { >- weak_ptr->set_mdns_name_registration_status( >- MdnsNameRegistrationStatus::kCompleted); >- weak_ptr->FinishAddingAddress(copy, is_final); >+ if (network_->GetMdnsResponder() != nullptr) { >+ // Obfuscate the IP address of a host candidates by an mDNS hostname. >+ if (type == LOCAL_PORT_TYPE) { >+ auto weak_ptr = weak_factory_.GetWeakPtr(); >+ auto callback = [weak_ptr, c, is_final](const rtc::IPAddress& addr, >+ const std::string& name) mutable { >+ RTC_DCHECK(c.address().ipaddr() == addr); >+ rtc::SocketAddress hostname_address(name, c.address().port()); >+ // In Port and Connection, we need the IP address information to >+ // correctly handle the update of candidate type to prflx. The removal >+ // of IP address when signaling this candidate will take place in >+ // BasicPortAllocatorSession::OnCandidateReady, via SanitizeCandidate. >+ hostname_address.SetResolvedIP(addr); >+ c.set_address(hostname_address); >+ RTC_DCHECK(c.related_address() == rtc::SocketAddress()); >+ if (weak_ptr != nullptr) { >+ weak_ptr->set_mdns_name_registration_status( >+ MdnsNameRegistrationStatus::kCompleted); >+ weak_ptr->FinishAddingAddress(c, is_final); >+ } >+ }; >+ set_mdns_name_registration_status( >+ MdnsNameRegistrationStatus::kInProgress); >+ network_->GetMdnsResponder()->CreateNameForAddress(c.address().ipaddr(), >+ callback); >+ return; > } >- }; >- set_mdns_name_registration_status(MdnsNameRegistrationStatus::kInProgress); >- network_->GetMdnsResponder()->CreateNameForAddress(copy.address().ipaddr(), >- callback); >- return true; >+ // For other types of candidates, the related address should be set to >+ // 0.0.0.0 or ::0. 
>+ c.set_related_address(rtc::SocketAddress()); >+ } else { >+ c.set_related_address(related_address); >+ } >+ FinishAddingAddress(c, is_final); > } > > void Port::FinishAddingAddress(const Candidate& c, bool is_final) { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.h b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.h >index 9a8f92a96e45fd9a39e8cf8db4b53c5356e640bf..e0b3f078603777f55bf5fb4d9e9d86038aec307f 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/base/port.h >@@ -514,10 +514,6 @@ class Port : public PortInterface, > > rtc::WeakPtrFactory<Port> weak_factory_; > >- bool MaybeObfuscateAddress(Candidate* c, >- const std::string& type, >- bool is_final); >- > friend class Connection; > }; > >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.cc >index 0c2fef3112605796b683c5edd5801dfe7478c677..39895dfa5a16d07f2813815aee7b07317857aedf 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.cc >@@ -518,10 +518,6 @@ void BasicPortAllocatorSession::GetCandidatesFromPort( > } > } > >-bool BasicPortAllocatorSession::MdnsObfuscationEnabled() const { >- return allocator_->network_manager()->GetMdnsResponder() != nullptr; >-} >- > Candidate BasicPortAllocatorSession::SanitizeCandidate( > const Candidate& c) const { > RTC_DCHECK_RUN_ON(network_thread_); >@@ -538,7 +534,7 @@ Candidate BasicPortAllocatorSession::SanitizeCandidate( > bool filter_stun_related_address = > ((flags() & PORTALLOCATOR_DISABLE_ADAPTER_ENUMERATION) && > (flags() & PORTALLOCATOR_DISABLE_DEFAULT_LOCAL_CANDIDATE)) || >- !(candidate_filter_ & CF_HOST) || MdnsObfuscationEnabled(); >+ !(candidate_filter_ & CF_HOST); > // If the candidate filter doesn't allow reflexive addresses, 
empty TURN raddr > // to avoid reflexive address leakage. > bool filter_turn_related_address = !(candidate_filter_ & CF_REFLEXIVE); >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.h b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.h >index 672f3ddb7c263b1a55fa31422b9f103a5be7b601..1012d92f96b1a67380ba1682e19c512eece300dd 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator.h >@@ -236,10 +236,6 @@ class RTC_EXPORT BasicPortAllocatorSession : public PortAllocatorSession, > > bool CheckCandidateFilter(const Candidate& c) const; > bool CandidatePairable(const Candidate& c, const Port* port) const; >- >- // Returns true if there is an mDNS responder attached to the network manager >- bool MdnsObfuscationEnabled() const; >- > // Clears 1) the address if the candidate is supposedly a hostname candidate; > // 2) the related address according to the flags and candidate filter in order > // to avoid leaking any information. >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator_unittest.cc >index 8943ebc73dba2db68a2fbcdbe40735786f4eaf0f..a07569e94b8dfa5aa56c4256159888a8663ad6f2 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator_unittest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/p2p/client/basicportallocator_unittest.cc >@@ -2246,8 +2246,8 @@ TEST_F(BasicPortAllocatorTest, IceRegatheringMetricsLoggedWhenNetworkChanges) { > } > > // Test that when an mDNS responder is present, the local address of a host >-// candidate is concealed by an mDNS hostname and the related address of a srflx >-// candidate is set to 0.0.0.0 or ::0. 
>+// candidate is masked by an mDNS hostname and the related address of any other >+// type of candidates is set to 0.0.0.0 or ::0. > TEST_F(BasicPortAllocatorTest, HostCandidateAddressIsReplacedByHostname) { > // Default config uses GTURN and no NAT, so replace that with the > // desired setup (NAT, STUN server, TURN server, UDP/TCP). >@@ -2269,29 +2269,23 @@ TEST_F(BasicPortAllocatorTest, HostCandidateAddressIsReplacedByHostname) { > int num_srflx_candidates = 0; > int num_relay_candidates = 0; > for (const auto& candidate : candidates_) { >- const auto& raddr = candidate.related_address(); >- > if (candidate.type() == LOCAL_PORT_TYPE) { >- EXPECT_FALSE(candidate.address().hostname().empty()); >- EXPECT_TRUE(raddr.IsNil()); >+ EXPECT_TRUE(candidate.address().IsUnresolvedIP()); > if (candidate.protocol() == UDP_PROTOCOL_NAME) { > ++num_host_udp_candidates; > } else { > ++num_host_tcp_candidates; > } >- } else if (candidate.type() == STUN_PORT_TYPE) { >- // For a srflx candidate, the related address should be set to 0.0.0.0 or >- // ::0 >- EXPECT_TRUE(IPIsAny(raddr.ipaddr())); >- EXPECT_EQ(raddr.port(), 0); >- ++num_srflx_candidates; >- } else if (candidate.type() == RELAY_PORT_TYPE) { >- EXPECT_EQ(kNatUdpAddr.ipaddr(), raddr.ipaddr()); >- EXPECT_EQ(kNatUdpAddr.family(), raddr.family()); >- ++num_relay_candidates; > } else { >- // prflx candidates are not expected >- FAIL(); >+ EXPECT_NE(PRFLX_PORT_TYPE, candidate.type()); >+ // The related address should be set to 0.0.0.0 or ::0 for srflx and >+ // relay candidates. 
>+ EXPECT_EQ(rtc::SocketAddress(), candidate.related_address()); >+ if (candidate.type() == STUN_PORT_TYPE) { >+ ++num_srflx_candidates; >+ } else { >+ ++num_relay_candidates; >+ } > } > } > EXPECT_EQ(1, num_host_udp_candidates); >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/peerconnection.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/peerconnection.cc >index a6b47c16deba381e57abfbf1230118165fb158a1..89435583b738117ad9e6a33b4f65e2898c53efc6 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/peerconnection.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/peerconnection.cc >@@ -3317,13 +3317,9 @@ bool PeerConnection::StartRtcEventLog(rtc::PlatformFile file, > const size_t max_size = (max_size_bytes < 0) > ? RtcEventLog::kUnlimitedOutput > : rtc::saturated_cast<size_t>(max_size_bytes); >- int64_t output_period_ms = webrtc::RtcEventLog::kImmediateOutput; >- if (field_trial::IsEnabled("WebRTC-RtcEventLogNewFormat")) { >- output_period_ms = 5000; >- } > return StartRtcEventLog( > absl::make_unique<RtcEventLogOutputFile>(file, max_size), >- output_period_ms); >+ webrtc::RtcEventLog::kImmediateOutput); > } > > bool PeerConnection::StartRtcEventLog(std::unique_ptr<RtcEventLogOutput> output, >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/rtcstats_integrationtest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/rtcstats_integrationtest.cc >index 49084de4d20717c22ddc891b7ea463204cb63b4c..19262d42121d7b2e84451b284d2ae4d619030bc9 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/rtcstats_integrationtest.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/rtcstats_integrationtest.cc >@@ -813,11 +813,11 @@ TEST_F(RTCStatsIntegrationTest, GetStatsWithInvalidReceiverSelector) { > EXPECT_FALSE(report->size()); > } > >-// TODO(bugs.webrtc.org/10041) For now this is equivalent to the following >-// test GetsStatsWhileClosingPeerConnection, because pc() is closed by >-// PeerConnectionTestWrapper. 
See: bugs.webrtc.org/9847 >+// TODO(bugs.webrtc.org/9847) Remove this test altogether if a proper fix cannot >+// be found. For now it is lying, as we cannot safely gets stats while >+// destroying PeerConnection. > TEST_F(RTCStatsIntegrationTest, >- DISABLED_GetStatsWhileDestroyingPeerConnection) { >+ DISABLED_GetsStatsWhileDestroyingPeerConnection) { > StartCall(); > > rtc::scoped_refptr<RTCStatsObtainer> stats_obtainer = >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.cc >index a1db9ed9e6f30c3e072695494e08c5a68f9eb094..0010edb0f8cd3587f4724c245c9c81caa6ae0051 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.cc >@@ -23,7 +23,6 @@ > #include "pc/test/mockpeerconnectionobservers.h" > #include "pc/test/peerconnectiontestwrapper.h" > #include "rtc_base/gunit.h" >-#include "rtc_base/thread_checker.h" > #include "rtc_base/timeutils.h" > > using webrtc::FakeConstraints; >@@ -66,20 +65,9 @@ PeerConnectionTestWrapper::PeerConnectionTestWrapper( > rtc::Thread* worker_thread) > : name_(name), > network_thread_(network_thread), >- worker_thread_(worker_thread) { >- pc_thread_checker_.DetachFromThread(); >-} >+ worker_thread_(worker_thread) {} > >-PeerConnectionTestWrapper::~PeerConnectionTestWrapper() { >- RTC_DCHECK_RUN_ON(&pc_thread_checker_); >- // Either network_thread or worker_thread might be active at this point. >- // Relying on ~PeerConnection to properly wait for them doesn't work, >- // as a vptr race might occur (before we enter the destruction body). 
>- // See: bugs.webrtc.org/9847 >- if (pc()) { >- pc()->Close(); >- } >-} >+PeerConnectionTestWrapper::~PeerConnectionTestWrapper() {} > > bool PeerConnectionTestWrapper::CreatePc( > const webrtc::PeerConnectionInterface::RTCConfiguration& config, >@@ -88,8 +76,6 @@ bool PeerConnectionTestWrapper::CreatePc( > std::unique_ptr<cricket::PortAllocator> port_allocator( > new cricket::FakePortAllocator(network_thread_, nullptr)); > >- RTC_DCHECK_RUN_ON(&pc_thread_checker_); >- > fake_audio_capture_module_ = FakeAudioCaptureModule::Create(); > if (fake_audio_capture_module_ == NULL) { > return false; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.h b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.h >index b6a57f35d3823311ba5f3d3d7f6646ff07d20888..21ba89acbf07305839a51d5198fad4dd89111f52 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/pc/test/peerconnectiontestwrapper.h >@@ -20,7 +20,6 @@ > #include "pc/test/fakeaudiocapturemodule.h" > #include "pc/test/fakevideotrackrenderer.h" > #include "rtc_base/third_party/sigslot/sigslot.h" >-#include "rtc_base/thread_checker.h" > > class PeerConnectionTestWrapper > : public webrtc::PeerConnectionObserver, >@@ -107,7 +106,6 @@ class PeerConnectionTestWrapper > std::string name_; > rtc::Thread* const network_thread_; > rtc::Thread* const worker_thread_; >- rtc::ThreadChecker pc_thread_checker_; > rtc::scoped_refptr<webrtc::PeerConnectionInterface> peer_connection_; > rtc::scoped_refptr<webrtc::PeerConnectionFactoryInterface> > peer_connection_factory_; >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/rtc_base/stringize_macros.h b/Source/ThirdParty/libwebrtc/Source/webrtc/rtc_base/stringize_macros.h >index 38c6f1836bdf32b968ab5e59bd83d348c1fa4101..acf0ba404ca0e5654ef7d9373d824e4ce36fcacc 100644 >--- 
a/Source/ThirdParty/libwebrtc/Source/webrtc/rtc_base/stringize_macros.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/rtc_base/stringize_macros.h >@@ -8,7 +8,7 @@ > * be found in the AUTHORS file in the root of the source tree. > */ > >-// Modified from the Chxsromium original: >+// Modified from the Chromium original: > // src/base/strings/stringize_macros.h > > // This file defines preprocessor macros for stringizing preprocessor >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/sdk/objc/components/video_codec/nalu_rewriter_unittest.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/sdk/objc/components/video_codec/nalu_rewriter_unittest.cc >deleted file mode 100644 >index 4dc00c9231d00d2412b63c89f1a04888693b2cb8..0000000000000000000000000000000000000000 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/sdk/objc/components/video_codec/nalu_rewriter_unittest.cc >+++ /dev/null >@@ -1,233 +0,0 @@ >-/* >- * Copyright (c) 2015 The WebRTC project authors. All Rights Reserved. >- * >- * Use of this source code is governed by a BSD-style license >- * that can be found in the LICENSE file in the root of the source >- * tree. An additional intellectual property rights grant can be found >- * in the file PATENTS. All contributing project authors may >- * be found in the AUTHORS file in the root of the source tree. >- * >- */ >- >-#include <memory> >- >-#include "common_video/h264/h264_common.h" >-#include "rtc_base/arraysize.h" >-#include "sdk/objc/components/video_codec/nalu_rewriter.h" >-#include "test/gtest.h" >- >-namespace webrtc { >- >-using H264::kSps; >- >-static const uint8_t NALU_TEST_DATA_0[] = {0xAA, 0xBB, 0xCC}; >-static const uint8_t NALU_TEST_DATA_1[] = {0xDE, 0xAD, 0xBE, 0xEF}; >- >-TEST(H264VideoToolboxNaluTest, TestCreateVideoFormatDescription) { >- const uint8_t sps_pps_buffer[] = { >- // SPS nalu. >- 0x00, 0x00, 0x00, 0x01, 0x27, 0x42, 0x00, 0x1E, 0xAB, 0x40, 0xF0, 0x28, >- 0xD3, 0x70, 0x20, 0x20, 0x20, 0x20, >- // PPS nalu. 
>- 0x00, 0x00, 0x00, 0x01, 0x28, 0xCE, 0x3C, 0x30}; >- CMVideoFormatDescriptionRef description = >- CreateVideoFormatDescription(sps_pps_buffer, arraysize(sps_pps_buffer)); >- EXPECT_TRUE(description); >- if (description) { >- CFRelease(description); >- description = nullptr; >- } >- >- const uint8_t sps_pps_not_at_start_buffer[] = { >- // Add some non-SPS/PPS NALUs at the beginning >- 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x01, 0xFF, 0x00, 0x00, 0x00, 0x01, >- 0xAB, 0x33, 0x21, >- // SPS nalu. >- 0x00, 0x00, 0x01, 0x27, 0x42, 0x00, 0x1E, 0xAB, 0x40, 0xF0, 0x28, 0xD3, >- 0x70, 0x20, 0x20, 0x20, 0x20, >- // PPS nalu. >- 0x00, 0x00, 0x01, 0x28, 0xCE, 0x3C, 0x30}; >- description = CreateVideoFormatDescription( >- sps_pps_not_at_start_buffer, arraysize(sps_pps_not_at_start_buffer)); >- EXPECT_TRUE(description); >- if (description) { >- CFRelease(description); >- description = nullptr; >- } >- >- const uint8_t other_buffer[] = {0x00, 0x00, 0x00, 0x01, 0x28}; >- EXPECT_FALSE( >- CreateVideoFormatDescription(other_buffer, arraysize(other_buffer))); >-} >- >-TEST(AnnexBBufferReaderTest, TestReadEmptyInput) { >- const uint8_t annex_b_test_data[] = {0x00}; >- AnnexBBufferReader reader(annex_b_test_data, 0); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(0u, reader.BytesRemaining()); >- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(nullptr, nalu); >- EXPECT_EQ(0u, nalu_length); >-} >- >-TEST(AnnexBBufferReaderTest, TestReadSingleNalu) { >- const uint8_t annex_b_test_data[] = {0x00, 0x00, 0x00, 0x01, 0xAA}; >- AnnexBBufferReader reader(annex_b_test_data, arraysize(annex_b_test_data)); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(arraysize(annex_b_test_data), reader.BytesRemaining()); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 4, nalu); >- EXPECT_EQ(1u, nalu_length); >- EXPECT_EQ(0u, reader.BytesRemaining()); >- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); 
>- EXPECT_EQ(nullptr, nalu); >- EXPECT_EQ(0u, nalu_length); >-} >- >-TEST(AnnexBBufferReaderTest, TestReadSingleNalu3ByteHeader) { >- const uint8_t annex_b_test_data[] = {0x00, 0x00, 0x01, 0xAA}; >- AnnexBBufferReader reader(annex_b_test_data, arraysize(annex_b_test_data)); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(arraysize(annex_b_test_data), reader.BytesRemaining()); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 3, nalu); >- EXPECT_EQ(1u, nalu_length); >- EXPECT_EQ(0u, reader.BytesRemaining()); >- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(nullptr, nalu); >- EXPECT_EQ(0u, nalu_length); >-} >- >-TEST(AnnexBBufferReaderTest, TestReadMissingNalu) { >- // clang-format off >- const uint8_t annex_b_test_data[] = {0x01, >- 0x00, 0x01, >- 0x00, 0x00, 0x00, 0xFF}; >- // clang-format on >- AnnexBBufferReader reader(annex_b_test_data, arraysize(annex_b_test_data)); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(0u, reader.BytesRemaining()); >- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(nullptr, nalu); >- EXPECT_EQ(0u, nalu_length); >-} >- >-TEST(AnnexBBufferReaderTest, TestReadMultipleNalus) { >- // clang-format off >- const uint8_t annex_b_test_data[] = {0x00, 0x00, 0x00, 0x01, 0xFF, >- 0x01, >- 0x00, 0x01, >- 0x00, 0x00, 0x00, 0xFF, >- 0x00, 0x00, 0x01, 0xAA, 0xBB}; >- // clang-format on >- AnnexBBufferReader reader(annex_b_test_data, arraysize(annex_b_test_data)); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(arraysize(annex_b_test_data), reader.BytesRemaining()); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 4, nalu); >- EXPECT_EQ(8u, nalu_length); >- EXPECT_EQ(6u, reader.BytesRemaining()); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 16, nalu); >- EXPECT_EQ(2u, nalu_length); >- EXPECT_EQ(0u, reader.BytesRemaining()); 
>- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(nullptr, nalu); >- EXPECT_EQ(0u, nalu_length); >-} >- >-TEST(AnnexBBufferReaderTest, TestFindNextNaluOfType) { >- const uint8_t notSps = 0x1F; >- const uint8_t annex_b_test_data[] = { >- 0x00, 0x00, 0x00, 0x01, kSps, 0x00, 0x00, 0x01, notSps, >- 0x00, 0x00, 0x01, notSps, 0xDD, 0x00, 0x00, 0x01, notSps, >- 0xEE, 0xFF, 0x00, 0x00, 0x00, 0xFF, 0x00, 0x00, 0x00, >- 0x01, 0x00, 0x00, 0x00, 0x01, kSps, 0xBB, 0x00, 0x00, >- 0x01, notSps, 0x00, 0x00, 0x01, notSps, 0xDD, 0x00, 0x00, >- 0x01, notSps, 0xEE, 0xFF, 0x00, 0x00, 0x00, 0x01}; >- >- AnnexBBufferReader reader(annex_b_test_data, arraysize(annex_b_test_data)); >- const uint8_t* nalu = nullptr; >- size_t nalu_length = 0; >- EXPECT_EQ(arraysize(annex_b_test_data), reader.BytesRemaining()); >- EXPECT_TRUE(reader.FindNextNaluOfType(kSps)); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 4, nalu); >- EXPECT_EQ(1u, nalu_length); >- >- EXPECT_TRUE(reader.FindNextNaluOfType(kSps)); >- EXPECT_TRUE(reader.ReadNalu(&nalu, &nalu_length)); >- EXPECT_EQ(annex_b_test_data + 32, nalu); >- EXPECT_EQ(2u, nalu_length); >- >- EXPECT_FALSE(reader.FindNextNaluOfType(kSps)); >- EXPECT_FALSE(reader.ReadNalu(&nalu, &nalu_length)); >-} >- >-TEST(AvccBufferWriterTest, TestEmptyOutputBuffer) { >- const uint8_t expected_buffer[] = {0x00}; >- const size_t buffer_size = 1; >- std::unique_ptr<uint8_t[]> buffer(new uint8_t[buffer_size]); >- memset(buffer.get(), 0, buffer_size); >- AvccBufferWriter writer(buffer.get(), 0); >- EXPECT_EQ(0u, writer.BytesRemaining()); >- EXPECT_FALSE(writer.WriteNalu(NALU_TEST_DATA_0, arraysize(NALU_TEST_DATA_0))); >- EXPECT_EQ(0, >- memcmp(expected_buffer, buffer.get(), arraysize(expected_buffer))); >-} >- >-TEST(AvccBufferWriterTest, TestWriteSingleNalu) { >- const uint8_t expected_buffer[] = { >- 0x00, 0x00, 0x00, 0x03, 0xAA, 0xBB, 0xCC, >- }; >- const size_t buffer_size = arraysize(NALU_TEST_DATA_0) + 4; >- 
std::unique_ptr<uint8_t[]> buffer(new uint8_t[buffer_size]); >- AvccBufferWriter writer(buffer.get(), buffer_size); >- EXPECT_EQ(buffer_size, writer.BytesRemaining()); >- EXPECT_TRUE(writer.WriteNalu(NALU_TEST_DATA_0, arraysize(NALU_TEST_DATA_0))); >- EXPECT_EQ(0u, writer.BytesRemaining()); >- EXPECT_FALSE(writer.WriteNalu(NALU_TEST_DATA_1, arraysize(NALU_TEST_DATA_1))); >- EXPECT_EQ(0, >- memcmp(expected_buffer, buffer.get(), arraysize(expected_buffer))); >-} >- >-TEST(AvccBufferWriterTest, TestWriteMultipleNalus) { >- // clang-format off >- const uint8_t expected_buffer[] = { >- 0x00, 0x00, 0x00, 0x03, 0xAA, 0xBB, 0xCC, >- 0x00, 0x00, 0x00, 0x04, 0xDE, 0xAD, 0xBE, 0xEF >- }; >- // clang-format on >- const size_t buffer_size = >- arraysize(NALU_TEST_DATA_0) + arraysize(NALU_TEST_DATA_1) + 8; >- std::unique_ptr<uint8_t[]> buffer(new uint8_t[buffer_size]); >- AvccBufferWriter writer(buffer.get(), buffer_size); >- EXPECT_EQ(buffer_size, writer.BytesRemaining()); >- EXPECT_TRUE(writer.WriteNalu(NALU_TEST_DATA_0, arraysize(NALU_TEST_DATA_0))); >- EXPECT_EQ(buffer_size - (arraysize(NALU_TEST_DATA_0) + 4), >- writer.BytesRemaining()); >- EXPECT_TRUE(writer.WriteNalu(NALU_TEST_DATA_1, arraysize(NALU_TEST_DATA_1))); >- EXPECT_EQ(0u, writer.BytesRemaining()); >- EXPECT_EQ(0, >- memcmp(expected_buffer, buffer.get(), arraysize(expected_buffer))); >-} >- >-TEST(AvccBufferWriterTest, TestOverflow) { >- const uint8_t expected_buffer[] = {0x00, 0x00, 0x00}; >- const size_t buffer_size = arraysize(NALU_TEST_DATA_0); >- std::unique_ptr<uint8_t[]> buffer(new uint8_t[buffer_size]); >- memset(buffer.get(), 0, buffer_size); >- AvccBufferWriter writer(buffer.get(), buffer_size); >- EXPECT_EQ(buffer_size, writer.BytesRemaining()); >- EXPECT_FALSE(writer.WriteNalu(NALU_TEST_DATA_0, arraysize(NALU_TEST_DATA_0))); >- EXPECT_EQ(buffer_size, writer.BytesRemaining()); >- EXPECT_EQ(0, >- memcmp(expected_buffer, buffer.get(), arraysize(expected_buffer))); >-} >- >-} // namespace webrtc >diff 
--git a/Source/ThirdParty/libwebrtc/Source/webrtc/test/fuzzers/rtp_packet_fuzzer.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/test/fuzzers/rtp_packet_fuzzer.cc >index f774c0cb8b0184343601a8f2ffbc6230902794d0..469fb364d2a858fff1c50600d74accb29ba56fb4 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/test/fuzzers/rtp_packet_fuzzer.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/test/fuzzers/rtp_packet_fuzzer.cc >@@ -121,9 +121,9 @@ void FuzzOneInput(const uint8_t* data, size_t size) { > packet.GetExtension<RtpGenericFrameDescriptorExtension>(&descriptor); > break; > } >- case kRtpExtensionColorSpace: { >- ColorSpace color_space; >- packet.GetExtension<ColorSpaceExtension>(&color_space); >+ case kRtpExtensionHdrMetadata: { >+ HdrMetadata hdr_metadata; >+ packet.GetExtension<HdrMetadataExtension>(&hdr_metadata); > break; > } > } >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.client.webrtc/iOS64_Perf.json b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.client.webrtc/iOS64_Perf.json >index 88bfe75264ad56b2552afd01b34bfab9a9a4b9ed..4286c3076ec64c9640f0c131cc29934967af155a 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.client.webrtc/iOS64_Perf.json >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.client.webrtc/iOS64_Perf.json >@@ -23,6 +23,12 @@ > "include": "perf_tests.json", > "device type": "iPhone 7", > "os": "11.4.1", >+ "dimensions": [ >+ { >+ "os": "Mac", >+ "pool": "Chrome" >+ } >+ ] > } > ] > } >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.tryserver.webrtc/ios_arm64_perf.json b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.tryserver.webrtc/ios_arm64_perf.json >index 0c30581da1df01076d88bf838b507c45db3e2c7f..0230c74a92ad07d158ef048423a7695378e6d237 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.tryserver.webrtc/ios_arm64_perf.json >+++ 
b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/ios/internal.tryserver.webrtc/ios_arm64_perf.json >@@ -23,6 +23,12 @@ > "include": "perf_trybot_tests.json", > "device type": "iPhone 7", > "os": "11.0.3", >+ "dimensions": [ >+ { >+ "os": "Mac", >+ "pool": "Chrome" >+ } >+ ] > } > ] > } >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/whitespace.txt b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/whitespace.txt >index 685a8768f31eb94bc730220d0c17b9b77f02f753..377ef270e8e777665c5b063931c5bb307d24e92c 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/whitespace.txt >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/tools_webrtc/whitespace.txt >@@ -13,3 +13,4 @@ Foo Bar Baz Bur > > Alios ego vidi ventos; alias prospexi animo procellas > - Cicero >+. >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/video/report_block_stats.h b/Source/ThirdParty/libwebrtc/Source/webrtc/video/report_block_stats.h >index 90badf70862fc1ab70d6a4b1296754988c451577..e0a69c9052aae0b8bc4d76496c0186150b1bf43d 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/video/report_block_stats.h >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/video/report_block_stats.h >@@ -14,7 +14,6 @@ > #include <map> > #include <vector> > >-#include "modules/rtp_rtcp/include/rtcp_statistics.h" > #include "modules/rtp_rtcp/include/rtp_rtcp_defines.h" > > namespace webrtc { >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/video/rtp_video_stream_receiver.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/video/rtp_video_stream_receiver.cc >index fdf02ff329d741faf289084e38ad8aa7e738b44b..057a1126569ced6239b2709ad9e334f4e74cb87e 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/video/rtp_video_stream_receiver.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/video/rtp_video_stream_receiver.cc >@@ -529,6 +529,14 @@ void RtpVideoStreamReceiver::ReceivePacket(const RtpPacketReceived& packet) { > VideoSendTiming::kInvalid; > 
webrtc_rtp_header.video_header().is_last_packet_in_frame = > webrtc_rtp_header.header.markerBit; >+ if (parsed_payload.video_header().codec == kVideoCodecVP9) { >+ const RTPVideoHeaderVP9& codec_header = absl::get<RTPVideoHeaderVP9>( >+ parsed_payload.video_header().video_type_header); >+ webrtc_rtp_header.video_header().is_last_packet_in_frame |= >+ codec_header.end_of_frame; >+ webrtc_rtp_header.video_header().is_first_packet_in_frame |= >+ codec_header.beginning_of_frame; >+ } > > packet.GetExtension<VideoOrientation>( > &webrtc_rtp_header.video_header().rotation); >diff --git a/Source/ThirdParty/libwebrtc/Source/webrtc/video/video_receive_stream.cc b/Source/ThirdParty/libwebrtc/Source/webrtc/video/video_receive_stream.cc >index bba33aa363e8a695dac6cf4480cbdc4777615300..454e41457a752dde0d4345b0bd160e6a26665336 100644 >--- a/Source/ThirdParty/libwebrtc/Source/webrtc/video/video_receive_stream.cc >+++ b/Source/ThirdParty/libwebrtc/Source/webrtc/video/video_receive_stream.cc >@@ -374,10 +374,6 @@ void VideoReceiveStream::RequestKeyFrame() { > > void VideoReceiveStream::OnCompleteFrame( > std::unique_ptr<video_coding::EncodedFrame> frame) { >- // TODO(webrtc:9249): Workaround to allow decoding of VP9 SVC stream with >- // partially enabled inter-layer prediction. >- frame->id.spatial_layer = 0; >- > // TODO(https://bugs.webrtc.org/9974): Consider removing this workaround. > int64_t time_now_ms = rtc::TimeMillis(); > if (last_complete_frame_time_ms_ > 0 &&