<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
<!DOCTYPE bugzilla SYSTEM "https://bugs.webkit.org/page.cgi?id=bugzilla.dtd">

<bugzilla version="5.0.4.1"
          urlbase="https://bugs.webkit.org/"
          
          maintainer="admin@webkit.org"
>

    <bug>
          <bug_id>129212</bug_id>
          
          <creation_ts>2014-02-22 11:08:59 -0800</creation_ts>
          <short_desc>Refine DFG+FTL inlining and compilation limits</short_desc>
          <delta_ts>2014-02-23 13:48:57 -0800</delta_ts>
          <reporter_accessible>1</reporter_accessible>
          <cclist_accessible>1</cclist_accessible>
          <classification_id>1</classification_id>
          <classification>Unclassified</classification>
          <product>WebKit</product>
          <component>JavaScriptCore</component>
          <version>528+ (Nightly build)</version>
          <rep_platform>All</rep_platform>
          <op_sys>All</op_sys>
          <bug_status>RESOLVED</bug_status>
          <resolution>FIXED</resolution>
          
          
          <bug_file_loc></bug_file_loc>
          <status_whiteboard></status_whiteboard>
          <keywords></keywords>
          <priority>P2</priority>
          <bug_severity>Normal</bug_severity>
          <target_milestone>---</target_milestone>
          
          <blocked>112840</blocked>
          <everconfirmed>1</everconfirmed>
          <reporter name="Filip Pizlo">fpizlo</reporter>
          <assigned_to name="Filip Pizlo">fpizlo</assigned_to>
          <cc>atrick</cc>
    
    <cc>barraclough</cc>
    
    <cc>ggaren</cc>
    
    <cc>mark.lam</cc>
    
    <cc>mhahnenberg</cc>
    
    <cc>mmirman</cc>
    
    <cc>msaboff</cc>
    
    <cc>nrotem</cc>
    
    <cc>oliver</cc>
    
    <cc>sam</cc>
          

      

      

      

          <comment_sort_order>oldest_to_newest</comment_sort_order>  
          <long_desc isprivate="0" >
    <commentid>983617</commentid>
    <comment_count>0</comment_count>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-22 11:08:59 -0800</bug_when>
    <thetext>Patch forthcoming.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983620</commentid>
    <comment_count>1</comment_count>
      <attachid>224978</attachid>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-22 11:13:31 -0800</bug_when>
    <thetext>Created attachment 224978
the patch</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983701</commentid>
    <comment_count>2</comment_count>
      <attachid>224978</attachid>
    <who name="Mark Hahnenberg">mhahnenberg</who>
    <bug_when>2014-02-23 08:40:08 -0800</bug_when>
    <thetext>Comment on attachment 224978
the patch

r=me</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983705</commentid>
    <comment_count>3</comment_count>
      <attachid>224978</attachid>
    <who name="Geoffrey Garen">ggaren</who>
    <bug_when>2014-02-23 09:05:26 -0800</bug_when>
    <thetext>Comment on attachment 224978
the patch

View in context: https://bugs.webkit.org/attachment.cgi?id=224978&amp;action=review

&gt; Source/JavaScriptCore/bytecode/CodeBlock.cpp:2793
&gt; +            dataLog(&quot;    Marking SABI because caller is too large.\n&quot;);

Don&apos;t you mean &quot;Marking !SABI&quot;?

&gt; Source/JavaScriptCore/runtime/Options.h:188
&gt; +    /* Maximum size of a caller for enabling inlining. This is purely to protect us */\
&gt; +    /* super long compiles. */\

&quot;from super long compiles&quot;?</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983709</commentid>
    <comment_count>4</comment_count>
      <attachid>224978</attachid>
    <who name="Geoffrey Garen">ggaren</who>
    <bug_when>2014-02-23 09:17:36 -0800</bug_when>
    <thetext>Comment on attachment 224978
the patch

GCC has a &quot;caller too big&quot; heuristic for inlining, and back when we compiled with GCC it caused very bad performance pathologies. A lot of critical functions in WebCore/JavaScriptCore are indeed big -- HTML parser, CSS parser, JavaScript parser, Interpreter::execute when it was written in C -- and the &quot;caller too big&quot; heuristic forced trivial functions like JSValue::isCell() and StringImpl::isAtomic() not to inline, even though inlining them is a pure win, reducing both code size and compile time.

When those pathologies present, they&apos;re also horrible to debug, since you can add an innocuous line of code somewhere in a function&apos;s slow path, and mysteriously make it 2X slower.

If &quot;SABI&quot; doesn&apos;t actually mean &quot;should always be inlined&quot;, and instead means &quot;should maybe inline depending on caller&apos;s size&quot;, then I think it needs a better name, and we need to introduce a new &quot;SABI&quot; that truly means &quot;I&apos;m so small that I should always inline&quot;.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983715</commentid>
    <comment_count>5</comment_count>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-23 10:00:10 -0800</bug_when>
    <thetext>(In reply to comment #4)
&gt; (From update of attachment 224978 [details])
&gt; GCC has a &quot;caller too big&quot; heuristic for inlining, and back when we compiled with GCC it caused very bad performance pathologies. A lot of critical functions in WebCore/JavaScriptCore are indeed big -- HTML parser, CSS parser, JavaScript parser, Interpreter::execute when it was written in C -- and the &quot;caller too big&quot; heuristic forced trivial functions like JSValue::isCell() and StringImpl::isAtomic() not to inline, even though inlining them is a pure win, reducing both code size and compile time.
&gt; 
&gt; When those pathologies present, they&apos;re also horrible to debug, since you can add an innocuous line of code somewhere in a function&apos;s slow path, and mysteriously make it 2X slower.

I&apos;m well aware of these performance pathologies and GCC is not the only compiler that has had this issue in its history.  But, horrible-to-debug performance pathologies are better than chain-crashing because LLVM ran out of memory.  If we ever suspect that the performance of a program is bad because of these heuristics, we should loosen the heuristics and see if we get a speed-up.

&gt; 
&gt; If &quot;SABI&quot; doesn&apos;t actually mean &quot;should always be inlined&quot;, and instead means &quot;should maybe inline depending on caller&apos;s size&quot;, then I think it needs a better name, and we need to introduce a new &quot;SABI&quot; that truly means &quot;I&apos;m so small that I should always inline&quot;.

I think you&apos;re confusing two different heuristics.

There is the too-big-to-inline-into heuristic, which has nothing to do with SABI; and then there&apos;s SABI, which doesn&apos;t mean &quot;I&apos;m so small that I should always inline&quot;.  It means that we believe, based on current heuristics, that all of the callers of this function either have already been compiled with the DFG/FTL and have inlined it, or will be compiled with the DFG/FTL and will inline it.  Notice that we mark !SABI if we see you&apos;re being called from a too-big-to-inline-into function, because we can be pretty sure that such a caller will never inline anything.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983725</commentid>
    <comment_count>6</comment_count>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-23 10:39:15 -0800</bug_when>
    <thetext>(In reply to comment #3)
&gt; (From update of attachment 224978 [details])
&gt; View in context: https://bugs.webkit.org/attachment.cgi?id=224978&amp;action=review
&gt; 
&gt; &gt; Source/JavaScriptCore/bytecode/CodeBlock.cpp:2793
&gt; &gt; +            dataLog(&quot;    Marking SABI because caller is too large.\n&quot;);
&gt; 
&gt; Don&apos;t you mean &quot;Marking !SABI&quot;?

Yes, fixed!  Changed to &quot;Clearing SABI&quot;.

&gt; 
&gt; &gt; Source/JavaScriptCore/runtime/Options.h:188
&gt; &gt; +    /* Maximum size of a caller for enabling inlining. This is purely to protect us */\
&gt; &gt; +    /* super long compiles. */\
&gt; 
&gt; &quot;from super long compiles&quot;?

Yes, fixed!

Thanks!</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983726</commentid>
    <comment_count>7</comment_count>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-23 10:44:06 -0800</bug_when>
    <thetext>Landed in http://trac.webkit.org/changeset/164558</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983734</commentid>
    <comment_count>8</comment_count>
    <who name="Geoffrey Garen">ggaren</who>
    <bug_when>2014-02-23 12:52:11 -0800</bug_when>
    <thetext>&gt; I&apos;m well aware of these performance pathologies and GCC is not the only compiler that has had this issue in its history.  But, horrible-to-debug performance pathologies are better than chain-crashing because LLVM ran out of memory.

Won&apos;t maximumOptimizationCandidateInstructionCount protect us from chain-crashing? Assuming that a function is a net win to inline, inlining it will never make the difference between crashing and not.

&gt; If we ever suspect that the performance of a program is bad because of these heuristics, we should loosen the heuristics and see if we get a speed-up.

Given that we both agree, based on lots of experience with existing programs and compilers, that real programs do suffer bad performance because of these heuristics, I don&apos;t think it makes sense to treat this problem as a theoretical one that might never happen, or that needs more investigation.

Instead, I think we should implement the solution that LLVM implemented: If a function is small/trivial enough, we should allow it to inline always, even if its caller is large.

&gt; I think you&apos;re confusing two different heuristics.

Yes, I guess I am.

(1) I propose that we have a real concept of &quot;should always be inlined&quot; that means &quot;I should always be inlined because it is always net profitable to do so regardless of caller size&quot;.

(2) We have an existing concept named &quot;should always be inlined&quot; which means something more like &quot;I predict that I will usually be inlined before optimizing me is necessary&quot;.</thetext>
  </long_desc><long_desc isprivate="0" >
    <commentid>983745</commentid>
    <comment_count>9</comment_count>
    <who name="Filip Pizlo">fpizlo</who>
    <bug_when>2014-02-23 13:48:57 -0800</bug_when>
    <thetext>(In reply to comment #8)
&gt; &gt; I&apos;m well aware of these performance pathologies and GCC is not the only compiler that has had this issue in its history.  But, horrible-to-debug performance pathologies are better than chain-crashing because LLVM ran out of memory.
&gt; 
&gt; Won&apos;t maximumOptimizationCandidateInstructionCount protect us from chain-crashing? Assuming that a function is a net win to inline, inlining it will never make the difference between crashing and not.

OK - before we proceed further with this thread, let&apos;s make some things clear:

- Inlining heuristics are not the longest pole in the tent right now.  Every time that I play with the thresholds, I see negligible performance differences.  The purpose of maximumInliningCallerSize is to protect us from a code size and compile time pathology that I saw on real JS code, and I set it to the smallest value that didn&apos;t adversely affect performance on the benchmarks.

- Your proposals require substantially more work than this 6KB patch.  It&apos;s possible we may do it at some later time, but it won&apos;t help us meet our current goals because after this patch, inlining is no longer the longest pole in the tent.

- This 6KB patch is a net progression and doesn&apos;t hurt any benchmark that we have in our repertoire.

Now that I&apos;ve gotten that out of the way, to answer your question about maximumOptimizationCandidateInstructionCount:

Currently, if the caller is small enough, we do inlining even if the callee is larger than the size of a callsite.  That size threshold was derived from a grid search that optimized performance on a bunch of benchmarks, so it&apos;s clear that we definitely want to continue to allow inlining even when it increases code size.  But that of course also means that sometimes this code-size-increasing inliner must be disabled even if the caller was small enough to compile and the callee otherwise met the inliner&apos;s heuristics.  That&apos;s what maximumInliningCallerSize is for.  If we give the inliner the ability to handle must-inline functions (i.e. functions that are estimated to be smaller than a callsite, and therefore inlining them will not increase code size), we will still have to keep around a heuristic like maximumInliningCallerSize, but we will probably rename it to maximumCodeSizeIncreasingInliningCallerSize, and it will apply only to functions that are larger than a callsite but are otherwise inlineable based on our current rules.

&gt; 
&gt; &gt; If we ever suspect that the performance of a program is bad because of these heuristics, we should loosen the heuristics and see if we get a speed-up.
&gt; 
&gt; Given that we both agree, based on lots of experience with existing programs and compilers, that real programs do suffer bad performance because of these heuristics, I don&apos;t think it makes sense to treat this problem as a theoretical one that might never happen, or that needs more investigation.

It&apos;s true that if our inliner was a lot more sophisticated, then the maximumInliningCallerSize limit would either be completely unnecessary or it would go by a different name.  But for now, we need that limit, and we will probably continue to need that limit for the foreseeable future.

&gt; 
&gt; Instead, I think we should implement the solution that LLVM implemented: If a function is small/trivial enough, we should allow it to inline always, even if its caller is large.

Comparing LLVM&apos;s inliner and our inliner is like comparing apples and oranges.

- LLVM&apos;s inliner runs after LLVM has run a lot of its optimization pipeline on both the caller and the callee.  Hence LLVM has a much better estimate of the callee&apos;s size.  By contrast, for a typical callsite, at the time that our inliner makes its decision, we only see the callee&apos;s bytecode and we have to estimate size from that.  Bytecode isn&apos;t a good starting point for making precise code size estimates.  This is probably one of the reasons why our current thresholds often allow inlining that increases code size - we just don&apos;t have a good way of predicting whether a code size increase will actually happen.

- LLVM has a detailed cost model for predicting the size and performance of each LLVM instruction.  It&apos;s true that after a lot of work we could probably derive such a cost model for DFG IR, but we haven&apos;t done it yet.

I can understand that it&apos;s tempting to argue that we should just pick up some heuristics from LLVM and drop them into the DFG, but we need to understand that LLVM&apos;s heuristics are architected around a radically different inliner - one that sees a hell of a lot more static information.  To just reuse LLVM&apos;s inlining heuristics we would need to:

        - Change our inliner to see the callee&apos;s optimized DFG IR as an input instead of using bytecode.  This would be a huge rewrite of the inliner and right now I can&apos;t think of a way of doing this without regressing memory use.
        - Add a cost model to DFG IR.  We currently have no such model.  For example, a GetById(Untyped:) is ~10-20x bigger than an ArithAdd(Int32:, Int32:, Unchecked) but right now there is no way to *know* this and adding this knowledge is probably &gt;3000 lines of code.

Of course, it&apos;s theoretically possible that you could build heuristics that are as good as LLVM&apos;s on average but that don&apos;t rely on seeing the caller&apos;s optimized IR or having a super precise cost model.

But whatever you do it&apos;ll be more work than this patch and even if you did all of that work, you&apos;d still have most of the current heuristics as hard fall-backs.  For example, it&apos;s very profitable to have a quick &quot;definitely don&apos;t inline this function&quot; rule based on that function&apos;s bytecode size, since you can get that in O(1) time and it filters out a lot of obvious no-nos.  Also, while it&apos;s true that eventually we might set maximumInliningCallerSize to be larger than maximumFTLCandidateInstructionCount or maximumOptimizationCandidateInstructionCount - effectively obviating the need for that heuristic - we&apos;re definitely not there yet and it won&apos;t happen for a while.

&gt; 
&gt; &gt; I think you&apos;re confusing two different heuristics.
&gt; 
&gt; Yes, I guess I am.
&gt; 
&gt; (1) I propose that we have a real concept of &quot;should always be inlined&quot; that means &quot;I should always be inlined because it is always net profitable to do so regardless of caller size&quot;.

I usually call this &quot;must inline&quot;.

&gt; 
&gt; (2) We have an existing concept named &quot;should always be inlined&quot; which means something more like &quot;I predict that I will usually be inlined before optimizing me is necessary&quot;.

I would summarize my point as: on all of the benchmarks we track, from what I can tell based on profiling, inlining is good enough that the path of least resistance to getting further speed-ups probably involves leaving the inliner alone.  Inlining heuristics are a black art and you can always do better, and it&apos;s great that we have ideas for how to improve it.  But it shouldn&apos;t be a top priority.</thetext>
  </long_desc>
      
          <attachment
              isobsolete="0"
              ispatch="1"
              isprivate="0"
          >
            <attachid>224978</attachid>
            <date>2014-02-22 11:13:31 -0800</date>
            <delta_ts>2014-02-23 09:17:36 -0800</delta_ts>
            <desc>the patch</desc>
            <filename>blah.patch</filename>
            <type>text/plain</type>
            <size>6704</size>
            <attacher name="Filip Pizlo">fpizlo</attacher>
            
              <data encoding="base64">SW5kZXg6IFNvdXJjZS9KYXZhU2NyaXB0Q29yZS9DaGFuZ2VMb2cKPT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gU291
cmNlL0phdmFTY3JpcHRDb3JlL0NoYW5nZUxvZwkocmV2aXNpb24gMTY0NTQ3KQorKysgU291cmNl
L0phdmFTY3JpcHRDb3JlL0NoYW5nZUxvZwkod29ya2luZyBjb3B5KQpAQCAtMSwzICsxLDMyIEBA
CisyMDE0LTAyLTIyICBGaWxpcCBQaXpsbyAgPGZwaXpsb0BhcHBsZS5jb20+CisKKyAgICAgICAg
UmVmaW5lIERGRytGVEwgaW5saW5pbmcgYW5kIGNvbXBpbGF0aW9uIGxpbWl0cworICAgICAgICBo
dHRwczovL2J1Z3Mud2Via2l0Lm9yZy9zaG93X2J1Zy5jZ2k/aWQ9MTI5MjEyCisKKyAgICAgICAg
UmV2aWV3ZWQgYnkgTk9CT0RZIChPT1BTISkuCisgICAgICAgIAorICAgICAgICBBbGxvdyBsYXJn
ZXIgZnVuY3Rpb25zIHRvIGJlIERGRy1jb21waWxlZC4gSW5zdGl0dXRlIGEgbGltaXQgb24gRlRM
IGNvbXBpbGF0aW9uLAorICAgICAgICBhbmQgc2V0IHRoYXQgbGltaXQgcXVpdGUgaGlnaC4gSW5z
dGl0dXRlIGEgbGltaXQgb24gaW5saW5pbmctaW50by4gVGhlIGlkZWEgaGVyZSBpcworICAgICAg
ICB0aGF0IGxhcmdlIGZ1bmN0aW9ucyB0ZW5kIHRvIGJlIGF1dG9nZW5lcmF0ZWQsIGFuZCBjb2Rl
IGdlbmVyYXRvcnMgbGlrZSBlbXNjcmlwdGVuCisgICAgICAgIGFwcGVhciB0byBsZWF2ZSBmZXcg
aW5saW5pbmcgb3Bwb3J0dW5pdGllcyBhbnl3YXkuIEFsc28sIHdlIGRvbid0IHdhbnQgdGhlIGNv
ZGUKKyAgICAgICAgc2l6ZSBleHBsb3Npb24gdGhhdCB3ZSB3b3VsZCByaXNrIGlmIHdlIGFsbG93
ZWQgY29tcGlsYXRpb24gb2YgYSBsYXJnZSBmdW5jdGlvbiBhbmQKKyAgICAgICAgdGhlbiBpbmxp
bmVkIGEgdG9uIG9mIHN0dWZmIGludG8gaXQuCisgICAgICAgIAorICAgICAgICBUaGlzIGlzIGEg
MC41JSBzcGVlZC11cCBvbiBPY3RhbmUgdjIgYW5kIGFsbW9zdCBlbGltaW5hdGVzIHRoZSB0eXBl
c2NyaXB0CisgICAgICAgIHJlZ3Jlc3Npb24uIFRoaXMgaXMgYSA5JSBzcGVlZC11cCBvbiBBc21C
ZW5jaC4KKworICAgICAgICAqIGJ5dGVjb2RlL0NvZGVCbG9jay5jcHA6CisgICAgICAgIChKU0M6
OkNvZGVCbG9jazo6bm90aWNlSW5jb21pbmdDYWxsKToKKyAgICAgICAgKiBkZmcvREZHQnl0ZUNv
ZGVQYXJzZXIuY3BwOgorICAgICAgICAoSlNDOjpERkc6OkJ5dGVDb2RlUGFyc2VyOjpoYW5kbGVJ
bmxpbmluZyk6CisgICAgICAgICogZGZnL0RGR0NhcGFiaWxpdGllcy5oOgorICAgICAgICAoSlND
OjpERkc6OmlzU21hbGxFbm91Z2hUb0lubGluZUNvZGVJbnRvKToKKyAgICAgICAgKiBmdGwvRlRM
Q2FwYWJpbGl0aWVzLmNwcDoKKyAgICAgICAgKEpTQzo6RlRMOjpjYW5Db21waWxlKToKKyAgICAg
ICAgKiBmdGwvRlRMU3RhdGUuaDoKKyAgICAgICAgKEpTQzo6RlRMOjpzaG91bGRTaG93RGlzYXNz
ZW1ibHkpOgorICAgICAgICAqIHJ1bnRpbWUvT3B0aW9ucy5oOgorCiAyMDE0LTAyLTIxICBCcmVu
dCBGdWxnaGFtICA8YmZ1bGdoYW1AYXBwbGUuY29tPgogCiAgICAgICAgIEV4dGVuZCBtZWRpYSBz
dXBwb3J0IGZvciBXZWJWVFQgc291cmNlcwpJbmRleDogU291cmNlL0phdmFTY3JpcHRDb3JlL2J5
dGVjb2RlL0NvZGVCbG9jay5jcHAKPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQotLS0gU291cmNlL0phdmFTY3JpcHRDb3Jl
L2J5dGVjb2RlL0NvZGVCbG9jay5jcHAJKHJldmlzaW9uIDE2NDQ5MykKKysrIFNvdXJjZS9KYXZh
U2NyaXB0Q29yZS9ieXRlY29kZS9Db2RlQmxvY2suY3BwCSh3b3JraW5nIGNvcHkpCkBAIC0yNzg2
LDYgKzI3ODYsMTMgQEAgdm9pZCBDb2RlQmxvY2s6Om5vdGljZUluY29taW5nQ2FsbChFeGVjUwog
CiAgICAgaWYgKCFjYW5JbmxpbmUobV9jYXBhYmlsaXR5TGV2ZWxTdGF0ZSkpCiAgICAgICAgIHJl
dHVybjsKKyAgICAKKyAgICBpZiAoIURGRzo6aXNTbWFsbEVub3VnaFRvSW5saW5lQ29kZUludG8o
Y2FsbGVyQ29kZUJsb2NrKSkgeworICAgICAgICBtX3Nob3VsZEFsd2F5c0JlSW5saW5lZCA9IGZh
bHNlOworICAgICAgICBpZiAoT3B0aW9uczo6dmVyYm9zZUNhbGxMaW5rKCkpCisgICAgICAgICAg
ICBkYXRhTG9nKCIgICAgTWFya2luZyBTQUJJIGJlY2F1c2UgY2FsbGVyIGlzIHRvbyBsYXJnZS5c
biIpOworICAgICAgICByZXR1cm47CisgICAgfQogCiAgICAgaWYgKGNhbGxlckNvZGVCbG9jay0+
aml0VHlwZSgpID09IEpJVENvZGU6OkludGVycHJldGVyVGh1bmspIHsKICAgICAgICAgLy8gSWYg
dGhlIGNhbGxlciBpcyBzdGlsbCBpbiB0aGUgaW50ZXJwcmV0ZXIsIHRoZW4gd2UgY2FuJ3QgZXhw
ZWN0IGlubGluaW5nIHRvCkluZGV4OiBTb3VyY2UvSmF2YVNjcmlwdENvcmUvZGZnL0RGR0J5dGVD
b2RlUGFyc2VyLmNwcAo9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09Ci0tLSBTb3VyY2UvSmF2YVNjcmlwdENvcmUvZGZnL0RG
R0J5dGVDb2RlUGFyc2VyLmNwcAkocmV2aXNpb24gMTY0NDkzKQorKysgU291cmNlL0phdmFTY3Jp
cHRDb3JlL2RmZy9ERkdCeXRlQ29kZVBhcnNlci5jcHAJKHdvcmtpbmcgY29weSkKQEAgLTEzMTYs
NiArMTMxNiwxNiBAQCBib29sIEJ5dGVDb2RlUGFyc2VyOjpoYW5kbGVJbmxpbmluZyhOb2RlCiAg
ICAgICAgIHJldHVybiBmYWxzZTsKICAgICB9CiAgICAgCisgICAgLy8gQ2hlY2sgaWYgdGhlIGNh
bGxlciBpcyBhbHJlYWR5IHRvbyBsYXJnZS4gV2UgZG8gdGhpcyBjaGVjayBoZXJlIGJlY2F1c2Ug
dGhhdCdzIGp1c3QKKyAgICAvLyB3aGVyZSB3ZSBoYXBwZW4gdG8gYWxzbyBoYXZlIHRoZSBjYWxs
ZWUncyBjb2RlIGJsb2NrLCBhbmQgd2Ugd2FudCB0aGF0IGZvciB0aGUKKyAgICAvLyBwdXJwb3Nl
IG9mIHVuc2V0dGluZyBTQUJJLgorICAgIGlmICghaXNTbWFsbEVub3VnaFRvSW5saW5lQ29kZUlu
dG8obV9jb2RlQmxvY2spKSB7CisgICAgICAgIGNvZGVCbG9jay0+bV9zaG91bGRBbHdheXNCZUlu
bGluZWQgPSBmYWxzZTsKKyAgICAgICAgaWYgKHZlcmJvc2UpCisgICAgICAgICAgICBkYXRhTG9n
KCIgICAgRmFpbGluZyBiZWNhdXNlIHRoZSBjYWxsZXIgaXMgdG9vIGxhcmdlLlxuIik7CisgICAg
ICAgIHJldHVybiBmYWxzZTsKKyAgICB9CisgICAgCiAgICAgLy8gRklYTUU6IHRoaXMgc2hvdWxk
IGJlIGJldHRlciBhdCBwcmVkaWN0aW5nIGhvdyBtdWNoIGJsb2F0IHdlIHdpbGwgaW50cm9kdWNl
IGJ5IGlubGluaW5nCiAgICAgLy8gdGhpcyBmdW5jdGlvbi4KICAgICAvLyBodHRwczovL2J1Z3Mu
d2Via2l0Lm9yZy9zaG93X2J1Zy5jZ2k/aWQ9MTI3NjI3CkluZGV4OiBTb3VyY2UvSmF2YVNjcmlw
dENvcmUvZGZnL0RGR0NhcGFiaWxpdGllcy5oCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIFNvdXJjZS9KYXZhU2Ny
aXB0Q29yZS9kZmcvREZHQ2FwYWJpbGl0aWVzLmgJKHJldmlzaW9uIDE2NDQ5MykKKysrIFNvdXJj
ZS9KYXZhU2NyaXB0Q29yZS9kZmcvREZHQ2FwYWJpbGl0aWVzLmgJKHdvcmtpbmcgY29weSkKQEAg
LTE0NCw2ICsxNDQsMTEgQEAgaW5saW5lIENhcGFiaWxpdHlMZXZlbCBpbmxpbmVGdW5jdGlvbkZv
cgogICAgIHJldHVybiBpbmxpbmVGdW5jdGlvbkZvckNvbnN0cnVjdENhcGFiaWxpdHlMZXZlbChj
b2RlQmxvY2spOwogfQogCitpbmxpbmUgYm9vbCBpc1NtYWxsRW5vdWdoVG9JbmxpbmVDb2RlSW50
byhDb2RlQmxvY2sqIGNvZGVCbG9jaykKK3sKKyAgICByZXR1cm4gY29kZUJsb2NrLT5pbnN0cnVj
dGlvbkNvdW50KCkgPD0gT3B0aW9uczo6bWF4aW11bUlubGluaW5nQ2FsbGVyU2l6ZSgpOworfQor
CiB9IH0gLy8gbmFtZXNwYWNlIEpTQzo6REZHCiAKICNlbmRpZiAvLyBERkdDYXBhYmlsaXRpZXNf
aApJbmRleDogU291cmNlL0phdmFTY3JpcHRDb3JlL2Z0bC9GVExDYXBhYmlsaXRpZXMuY3BwCj09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT0KLS0tIFNvdXJjZS9KYXZhU2NyaXB0Q29yZS9mdGwvRlRMQ2FwYWJpbGl0aWVzLmNw
cAkocmV2aXNpb24gMTY0NDkzKQorKysgU291cmNlL0phdmFTY3JpcHRDb3JlL2Z0bC9GVExDYXBh
YmlsaXRpZXMuY3BwCSh3b3JraW5nIGNvcHkpCkBAIC0yNzgsNiArMjc4LDEyIEBAIGlubGluZSBD
YXBhYmlsaXR5TGV2ZWwgY2FuQ29tcGlsZShOb2RlKiAKIAogQ2FwYWJpbGl0eUxldmVsIGNhbkNv
bXBpbGUoR3JhcGgmIGdyYXBoKQogeworICAgIGlmIChncmFwaC5tX2NvZGVCbG9jay0+aW5zdHJ1
Y3Rpb25Db3VudCgpID4gT3B0aW9uczo6bWF4aW11bUZUTENhbmRpZGF0ZUluc3RydWN0aW9uQ291
bnQoKSkgeworICAgICAgICBpZiAodmVyYm9zZUNhcGFiaWxpdGllcygpKQorICAgICAgICAgICAg
ZGF0YUxvZygiRlRMIHJlamVjdGluZyAiLCAqZ3JhcGgubV9jb2RlQmxvY2ssICIgYmVjYXVzZSBp
dCdzIHRvbyBiaWcuXG4iKTsKKyAgICAgICAgcmV0dXJuIENhbm5vdENvbXBpbGU7CisgICAgfQor
ICAgIAogICAgIGlmIChncmFwaC5tX2NvZGVCbG9jay0+Y29kZVR5cGUoKSAhPSBGdW5jdGlvbkNv
ZGUpIHsKICAgICAgICAgaWYgKHZlcmJvc2VDYXBhYmlsaXRpZXMoKSkKICAgICAgICAgICAgIGRh
dGFMb2coIkZUTCByZWplY3RpbmcgIiwgKmdyYXBoLm1fY29kZUJsb2NrLCAiIGJlY2F1c2UgaXQg
ZG9lc24ndCBiZWxvbmcgdG8gYSBmdW5jdGlvbi5cbiIpOwpJbmRleDogU291cmNlL0phdmFTY3Jp
cHRDb3JlL2Z0bC9GVExTdGF0ZS5oCj09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KLS0tIFNvdXJjZS9KYXZhU2NyaXB0Q29y
ZS9mdGwvRlRMU3RhdGUuaAkocmV2aXNpb24gMTY0NDkzKQorKysgU291cmNlL0phdmFTY3JpcHRD
b3JlL2Z0bC9GVExTdGF0ZS5oCSh3b3JraW5nIGNvcHkpCkBAIC00Niw3ICs0Niw3IEBAIGlubGlu
ZSBib29sIHZlcmJvc2VDb21waWxhdGlvbkVuYWJsZWQoKQogICAgIHJldHVybiBERkc6OnZlcmJv
c2VDb21waWxhdGlvbkVuYWJsZWQoREZHOjpGVExNb2RlKTsKIH0KIAotaW5saW5lIGJvb2wgc2hv
d0Rpc2Fzc2VtYmx5KCkKK2lubGluZSBib29sIHNob3VsZFNob3dEaXNhc3NlbWJseSgpCiB7CiAg
ICAgcmV0dXJuIERGRzo6c2hvdWxkU2hvd0Rpc2Fzc2VtYmx5KERGRzo6RlRMTW9kZSk7CiB9Cklu
ZGV4OiBTb3VyY2UvSmF2YVNjcmlwdENvcmUvcnVudGltZS9PcHRpb25zLmgKPT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQot
LS0gU291cmNlL0phdmFTY3JpcHRDb3JlL3J1bnRpbWUvT3B0aW9ucy5oCShyZXZpc2lvbiAxNjQ0
OTMpCisrKyBTb3VyY2UvSmF2YVNjcmlwdENvcmUvcnVudGltZS9PcHRpb25zLmgJKHdvcmtpbmcg
Y29weSkKQEAgLTE3MCwxOCArMTcwLDI0IEBAIHR5cGVkZWYgT3B0aW9uUmFuZ2Ugb3B0aW9uUmFu
Z2U7CiAgICAgXAogICAgIHYoYm9vbCwgYnJlYWtPblRocm93LCBmYWxzZSkgXAogICAgIFwKLSAg
ICB2KHVuc2lnbmVkLCBtYXhpbXVtT3B0aW1pemF0aW9uQ2FuZGlkYXRlSW5zdHJ1Y3Rpb25Db3Vu
dCwgMTAwMDApIFwKKyAgICB2KHVuc2lnbmVkLCBtYXhpbXVtT3B0aW1pemF0aW9uQ2FuZGlkYXRl
SW5zdHJ1Y3Rpb25Db3VudCwgMTAwMDAwKSBcCiAgICAgXAogICAgIHYodW5zaWduZWQsIG1heGlt
dW1GdW5jdGlvbkZvckNhbGxJbmxpbmVDYW5kaWRhdGVJbnN0cnVjdGlvbkNvdW50LCAxODApIFwK
ICAgICB2KHVuc2lnbmVkLCBtYXhpbXVtRnVuY3Rpb25Gb3JDbG9zdXJlQ2FsbElubGluZUNhbmRp
ZGF0ZUluc3RydWN0aW9uQ291bnQsIDEwMCkgXAogICAgIHYodW5zaWduZWQsIG1heGltdW1GdW5j
dGlvbkZvckNvbnN0cnVjdElubGluZUNhbmRpZGF0ZUluc3RydWN0aW9uQ291bnQsIDEwMCkgXAog
ICAgIFwKKyAgICB2KHVuc2lnbmVkLCBtYXhpbXVtRlRMQ2FuZGlkYXRlSW5zdHJ1Y3Rpb25Db3Vu
dCwgMjAwMDApIFwKKyAgICBcCiAgICAgLyogRGVwdGggb2YgaW5saW5lIHN0YWNrLCBzbyAxID0g
bm8gaW5saW5pbmcsIDIgPSBvbmUgbGV2ZWwsIGV0Yy4gKi8gXAogICAgIHYodW5zaWduZWQsIG1h
eGltdW1JbmxpbmluZ0RlcHRoLCA1KSBcCiAgICAgdih1bnNpZ25lZCwgbWF4aW11bUlubGluaW5n
UmVjdXJzaW9uLCAyKSBcCiAgICAgdih1bnNpZ25lZCwgbWF4aW11bUlubGluaW5nRGVwdGhGb3JN
dXN0SW5saW5lLCA3KSBcCiAgICAgdih1bnNpZ25lZCwgbWF4aW11bUlubGluaW5nUmVjdXJzaW9u
Rm9yTXVzdElubGluZSwgMykgXAogICAgIFwKKyAgICAvKiBNYXhpbXVtIHNpemUgb2YgYSBjYWxs
ZXIgZm9yIGVuYWJsaW5nIGlubGluaW5nLiBUaGlzIGlzIHB1cmVseSB0byBwcm90ZWN0IHVzICov
XAorICAgIC8qIHN1cGVyIGxvbmcgY29tcGlsZXMuICovXAorICAgIHYodW5zaWduZWQsIG1heGlt
dW1JbmxpbmluZ0NhbGxlclNpemUsIDEwMDAwKSBcCisgICAgXAogICAgIHYoYm9vbCwgZW5hYmxl
UG9seXZhcmlhbnRDYWxsSW5saW5pbmcsIHRydWUpIFwKICAgICB2KGJvb2wsIGVuYWJsZVBvbHl2
YXJpYW50QnlJZElubGluaW5nLCB0cnVlKSBcCiAgICAgXAo=
</data>
<flag name="review"
          id="249080"
          type_id="1"
          status="+"
          setter="mhahnenberg"
    />
          </attachment>
      

    </bug>

</bugzilla>