WebKit Bugzilla
Attachment 361716 Details for Bug 194036: [WebAssembly] Write a new register allocator for Air O0 and make BBQ use it
Description: patch
Filename: b-backup.diff
MIME Type: text/plain
Creator: Saam Barati
Created: 2019-02-11 15:34:03 PST
Size: 64.48 KB
Flags: patch, obsolete
>Index: JSTests/ChangeLog >=================================================================== >--- JSTests/ChangeLog (revision 241282) >+++ JSTests/ChangeLog (working copy) >@@ -1,3 +1,14 @@ >+2019-02-11 Saam Barati <sbarati@apple.com> >+ >+ [WebAssembly] Write a new register allocator for Air O0 and make BBQ use it >+ https://bugs.webkit.org/show_bug.cgi?id=194036 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * stress/tail-call-many-arguments.js: Added. >+ (foo): >+ (bar): >+ > 2019-02-08 Yusuke Suzuki <ysuzuki@apple.com> > > [JSC] String.fromCharCode's slow path always generates 16bit string >Index: JSTests/stress/tail-call-many-arguments.js >=================================================================== >--- JSTests/stress/tail-call-many-arguments.js (nonexistent) >+++ JSTests/stress/tail-call-many-arguments.js (working copy) >@@ -0,0 +1,21 @@ >+"use strict"; >+ >+function foo(...args) { >+ return args; >+} >+noInline(foo); >+ >+function bar(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22, a23, a24, a25, a26, a27, a28, a29, a30, a31, a32, a33, a34, a35) >+{ >+ return foo(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22, a23, a24, a25, a26, a27, a28, a29, a30, a31, a32, a33, a34, a35); >+} >+noInline(bar); >+ >+let args = []; >+for (let i = 0; i < 35; ++i) { >+ args.push(i); >+} >+ >+for (let i = 0; i < 100000; ++i) { >+ bar(...args); >+} >Index: Source/JavaScriptCore/ChangeLog >=================================================================== >--- Source/JavaScriptCore/ChangeLog (revision 241282) >+++ Source/JavaScriptCore/ChangeLog (working copy) >@@ -1,3 +1,68 @@ >+2019-02-11 Saam Barati <sbarati@apple.com> >+ >+ [WebAssembly] Write a new register allocator for Air O0 and make BBQ use it >+ https://bugs.webkit.org/show_bug.cgi?id=194036 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ This patch adds a new Air-O0 backend. 
Air-O0 runs fewer passes and doesn't >+ use linear scan for register allocation. Instead of linear scan, Air-O0 does >+ mostly block-local register allocation, and it does this as it's emitting >+ code directly. The register allocator uses liveness analysis to reduce >+ the number of spills. Doing register allocation as we're emitting code >+ allows us to skip editing the IR to insert spills, which saves a non trivial >+ amount of compile time. For stack allocation, we give each Tmp its own slot. >+ This is less than ideal. We probably want to do some trivial live range analysis >+ in the future. The reason this isn't a deal breaker for Wasm is that this patch >+ makes it so that we reuse Tmps as we're generating Air IR in the AirIRGenerator. >+ Because Wasm is a stack machine, we trivially know when we kill a stack value (its last use). >+ >+ This patch is another 25% Wasm startup time speedup. It seems to be worth >+ another 1% on JetStream2. >+ >+ * JavaScriptCore.xcodeproj/project.pbxproj: >+ * Sources.txt: >+ * b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp: Added. >+ (JSC::B3::Air::GenerateAndAllocateRegisters::GenerateAndAllocateRegisters): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::buildLiveRanges): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::gatherTerminalPatchSpills): >+ (JSC::B3::Air::callFrameAddr): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::flush): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::spill): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::alloc): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::freeDeadTmpsIfNeeded): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::assignTmp): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::isDisallowedRegister): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::prepareForGeneration): >+ (JSC::B3::Air::GenerateAndAllocateRegisters::generate): >+ * b3/air/AirAllocateRegistersAndStackAndGenerateCode.h: Added. 
>+ * b3/air/AirCode.cpp: >+ * b3/air/AirCode.h: >+ * b3/air/AirGenerate.cpp: >+ (JSC::B3::Air::prepareForGeneration): >+ (JSC::B3::Air::generateWithAlreadyAllocatedRegisters): >+ (JSC::B3::Air::generate): >+ * b3/air/AirHandleCalleeSaves.cpp: >+ (JSC::B3::Air::handleCalleeSaves): >+ * b3/air/AirHandleCalleeSaves.h: >+ * b3/air/AirTmpMap.h: >+ * runtime/Options.h: >+ * wasm/WasmAirIRGenerator.cpp: >+ (JSC::Wasm::AirIRGenerator::didKill): >+ (JSC::Wasm::AirIRGenerator::newTmp): >+ (JSC::Wasm::AirIRGenerator::AirIRGenerator): >+ (JSC::Wasm::parseAndCompileAir): >+ (JSC::Wasm::AirIRGenerator::addOp<OpType::I64TruncUF64>): >+ (JSC::Wasm::AirIRGenerator::addOp<OpType::I64TruncUF32>): >+ * wasm/WasmB3IRGenerator.cpp: >+ (JSC::Wasm::B3IRGenerator::didKill): >+ * wasm/WasmFunctionParser.h: >+ (JSC::Wasm::FunctionParser<Context>::binaryCase): >+ (JSC::Wasm::FunctionParser<Context>::unaryCase): >+ (JSC::Wasm::FunctionParser<Context>::parseExpression): >+ * wasm/WasmValidate.cpp: >+ (JSC::Wasm::Validate::didKill): >+ > 2019-02-11 Mark Lam <mark.lam@apple.com> > > Randomize insertion of deallocated StructureIDs into the StructureIDTable's free list. 
>Index: Source/JavaScriptCore/Sources.txt >=================================================================== >--- Source/JavaScriptCore/Sources.txt (revision 241282) >+++ Source/JavaScriptCore/Sources.txt (working copy) >@@ -56,6 +56,7 @@ assembler/Printer.cpp > assembler/ProbeContext.cpp > assembler/ProbeStack.cpp > >+b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp > b3/air/AirAllocateRegistersAndStackByLinearScan.cpp > b3/air/AirAllocateRegistersByGraphColoring.cpp > b3/air/AirAllocateStackByGraphColoring.cpp >Index: Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj >=================================================================== >--- Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (revision 241282) >+++ Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj (working copy) >@@ -872,6 +872,7 @@ > 4BAA07CEB81F49A296E02203 /* WasmSignatureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 30A5F403F11C4F599CD596D5 /* WasmSignatureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 521131F71F82BF14007CCEEE /* PolyProtoAccessChain.h in Headers */ = {isa = PBXBuildFile; fileRef = 521131F61F82BF11007CCEEE /* PolyProtoAccessChain.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 521322461ECBCE8200F65615 /* WebAssemblyFunctionBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 521322441ECBCE8200F65615 /* WebAssemblyFunctionBase.h */; }; >+ 524E9D7322092B5200A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.h in Headers */ = {isa = PBXBuildFile; fileRef = 524E9D7222092B4600A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.h */; }; > 5250D2D21E8DA05A0029A932 /* WasmThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 5250D2D01E8DA05A0029A932 /* WasmThunks.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 525C0DDA1E935847002184CD /* WasmCallee.h in Headers */ = {isa = PBXBuildFile; fileRef = 525C0DD81E935847002184CD /* WasmCallee.h */; settings = {ATTRIBUTES = (Private, ); }; }; > 
525C9CDF220285830082DBFD /* WasmAirIRGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 52847AD921FFB8630061A9DB /* WasmAirIRGenerator.cpp */; }; >@@ -3353,6 +3354,8 @@ > 521131F61F82BF11007CCEEE /* PolyProtoAccessChain.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = PolyProtoAccessChain.h; sourceTree = "<group>"; }; > 521322431ECBCE8200F65615 /* WebAssemblyFunctionBase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = WebAssemblyFunctionBase.cpp; path = js/WebAssemblyFunctionBase.cpp; sourceTree = "<group>"; }; > 521322441ECBCE8200F65615 /* WebAssemblyFunctionBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = WebAssemblyFunctionBase.h; path = js/WebAssemblyFunctionBase.h; sourceTree = "<group>"; }; >+ 524E9D7122092B4500A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = AirAllocateRegistersAndStackAndGenerateCode.cpp; path = b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp; sourceTree = "<group>"; }; >+ 524E9D7222092B4600A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = AirAllocateRegistersAndStackAndGenerateCode.h; path = b3/air/AirAllocateRegistersAndStackAndGenerateCode.h; sourceTree = "<group>"; }; > 5250D2CF1E8DA05A0029A932 /* WasmThunks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WasmThunks.cpp; sourceTree = "<group>"; }; > 5250D2D01E8DA05A0029A932 /* WasmThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmThunks.h; sourceTree = "<group>"; }; > 525C0DD71E935847002184CD /* WasmCallee.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WasmCallee.cpp; sourceTree = "<group>"; }; >@@ -5435,6 +5438,8 @@ > 
0FEC84B31BDACD880080FF74 /* air */ = { > isa = PBXGroup; > children = ( >+ 524E9D7122092B4500A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.cpp */, >+ 524E9D7222092B4600A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.h */, > 0F2AC5681E8A0BD10001EE3F /* AirAllocateRegistersAndStackByLinearScan.cpp */, > 0F2AC5691E8A0BD10001EE3F /* AirAllocateRegistersAndStackByLinearScan.h */, > 7965C2141E5D799600B7591D /* AirAllocateRegistersByGraphColoring.cpp */, >@@ -8852,6 +8857,7 @@ > A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */, > 0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */, > 0FD8A32617D51F5700CA2C40 /* DFGOSREntrypointCreationPhase.h in Headers */, >+ 524E9D7322092B5200A6BEEE /* AirAllocateRegistersAndStackAndGenerateCode.h in Headers */, > 0FC0976A1468A6F700CF2442 /* DFGOSRExit.h in Headers */, > 0F235BEC17178E7300690C7F /* DFGOSRExitBase.h in Headers */, > 0FFB921C16D02F110055A5DB /* DFGOSRExitCompilationInfo.h in Headers */, >Index: Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp >=================================================================== >--- Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp (nonexistent) >+++ Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.cpp (working copy) >@@ -0,0 +1,712 @@ >+/* >+ * Copyright (C) 2019 Apple Inc. All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. 
``AS IS'' AND ANY >+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR >+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >+ */ >+ >+#include "config.h" >+#include "AirAllocateRegistersAndStackAndGenerateCode.h" >+ >+#if ENABLE(B3_JIT) >+ >+#include "AirBlockInsertionSet.h" >+#include "AirCode.h" >+#include "AirHandleCalleeSaves.h" >+#include "AirLowerStackArgs.h" >+#include "AirStackAllocation.h" >+#include "AirTmpMap.h" >+#include "CCallHelpers.h" >+#include "DisallowMacroScratchRegisterUsage.h" >+ >+namespace JSC { namespace B3 { namespace Air { >+ >+GenerateAndAllocateRegisters::GenerateAndAllocateRegisters(Code& code) >+ : m_code(code) >+ , m_map(code) >+{ } >+ >+void GenerateAndAllocateRegisters::buildLiveRanges(UnifiedTmpLiveness& liveness) >+{ >+ m_liveRangeEnd = TmpMap<size_t>(m_code, 0); >+ >+ m_globalInstIndex = 0; >+ for (BasicBlock* block : m_code) { >+ for (Tmp tmp : liveness.liveAtHead(block)) { >+ if (!tmp.isReg()) >+ m_liveRangeEnd[tmp] = m_globalInstIndex; >+ } >+ for (size_t instIndex = 0; instIndex < block->size(); ++instIndex) { >+ Inst& inst = block->at(instIndex); >+ inst.forEachTmpFast([&] (Tmp tmp) { >+ if (!tmp.isReg()) >+ m_liveRangeEnd[tmp] = m_globalInstIndex; >+ }); >+ ++m_globalInstIndex; >+ } >+ for (Tmp tmp : liveness.liveAtTail(block)) { >+ if (!tmp.isReg()) >+ m_liveRangeEnd[tmp] = m_globalInstIndex; >+ } >+ } >+} >+ >+void 
GenerateAndAllocateRegisters::gatherTerminalPatchSpills() >+{ >+ BlockInsertionSet blockInsertionSet(m_code); >+ for (BasicBlock* block : m_code) { >+ Inst& inst = block->last(); >+ if (inst.kind.opcode != Patch) >+ continue; >+ >+ HashMap<Tmp, Arg*> needToDef; >+ >+ inst.forEachArg([&] (Arg& arg, Arg::Role role, Bank, Width) { >+ if (!arg.isTmp()) >+ return; >+ Tmp tmp = arg.tmp(); >+ if (Arg::isAnyDef(role) && !tmp.isReg()) >+ needToDef.add(tmp, &arg); >+ }); >+ >+ if (needToDef.isEmpty()) >+ continue; >+ >+ for (FrequentedBlock& frequentedSuccessor : block->successors()) { >+ BasicBlock* successor = frequentedSuccessor.block(); >+ BasicBlock* newBlock = blockInsertionSet.insertBefore(successor, successor->frequency()); >+ newBlock->appendInst(Inst(Jump, inst.origin)); >+ newBlock->setSuccessors(successor); >+ newBlock->addPredecessor(block); >+ frequentedSuccessor.block() = newBlock; >+ successor->replacePredecessor(block, newBlock); >+ >+ m_blocksForPatchSpilling.add(newBlock, PatchSpillData { CCallHelpers::Jump(), CCallHelpers::Label(), needToDef }); >+ } >+ } >+ >+ blockInsertionSet.execute(); >+} >+ >+static ALWAYS_INLINE CCallHelpers::Address callFrameAddr(CCallHelpers& jit, intptr_t offsetFromFP) >+{ >+ if (isX86()) { >+ ASSERT(Arg::addr(Air::Tmp(GPRInfo::callFrameRegister), offsetFromFP).isValidForm(Width64)); >+ return CCallHelpers::Address(GPRInfo::callFrameRegister, offsetFromFP); >+ } >+ >+ ASSERT(pinnedExtendedOffsetAddrRegister()); >+ auto addr = Arg::addr(Air::Tmp(GPRInfo::callFrameRegister), offsetFromFP); >+ if (addr.isValidForm(Width64)) >+ return CCallHelpers::Address(GPRInfo::callFrameRegister, offsetFromFP); >+ GPRReg reg = *pinnedExtendedOffsetAddrRegister(); >+ jit.move(CCallHelpers::TrustedImmPtr(offsetFromFP), reg); >+ jit.add64(GPRInfo::callFrameRegister, reg); >+ return CCallHelpers::Address(reg); >+} >+ >+ALWAYS_INLINE void GenerateAndAllocateRegisters::flush(Tmp tmp, Reg reg) >+{ >+ ASSERT(tmp); >+ intptr_t offset = 
m_map[tmp].spillSlot->offsetFromFP(); >+ if (tmp.isGP()) >+ m_jit->store64(reg.gpr(), callFrameAddr(*m_jit, offset)); >+ else >+ m_jit->storeDouble(reg.fpr(), callFrameAddr(*m_jit, offset)); >+} >+ >+ALWAYS_INLINE void GenerateAndAllocateRegisters::spill(Tmp tmp, Reg reg) >+{ >+ ASSERT(reg); >+ ASSERT(m_map[tmp].reg == reg); >+ m_availableRegs[tmp.bank()].set(reg); >+ m_currentAllocation->at(reg) = Tmp(); >+ flush(tmp, reg); >+ m_map[tmp].reg = Reg(); >+} >+ >+ALWAYS_INLINE void GenerateAndAllocateRegisters::alloc(Tmp tmp, Reg reg, bool isDef) >+{ >+ if (Tmp occupyingTmp = m_currentAllocation->at(reg)) >+ spill(occupyingTmp, reg); >+ else { >+ ASSERT(!m_currentAllocation->at(reg)); >+ ASSERT(m_availableRegs[tmp.bank()].get(reg)); >+ } >+ >+ m_map[tmp].reg = reg; >+ m_availableRegs[tmp.bank()].clear(reg); >+ m_currentAllocation->at(reg) = tmp; >+ >+ if (!isDef) { >+ intptr_t offset = m_map[tmp].spillSlot->offsetFromFP(); >+ if (tmp.bank() == GP) >+ m_jit->load64(callFrameAddr(*m_jit, offset), reg.gpr()); >+ else >+ m_jit->loadDouble(callFrameAddr(*m_jit, offset), reg.fpr()); >+ } >+} >+ >+ALWAYS_INLINE void GenerateAndAllocateRegisters::freeDeadTmpsIfNeeded() >+{ >+ if (m_didAlreadyFreeDeadSlots) >+ return; >+ >+ m_didAlreadyFreeDeadSlots = true; >+ for (size_t i = 0; i < m_currentAllocation->size(); ++i) { >+ Tmp tmp = m_currentAllocation->at(i); >+ if (!tmp) >+ continue; >+ if (tmp.isReg()) >+ continue; >+ if (m_liveRangeEnd[tmp] >= m_globalInstIndex) >+ continue; >+ >+ Reg reg = Reg::fromIndex(i); >+ m_map[tmp].reg = Reg(); >+ m_availableRegs[tmp.bank()].set(reg); >+ m_currentAllocation->at(i) = Tmp(); >+ } >+} >+ >+ALWAYS_INLINE bool GenerateAndAllocateRegisters::assignTmp(Tmp& tmp, Bank bank, bool isDef) >+{ >+ ASSERT(!tmp.isReg()); >+ if (Reg reg = m_map[tmp].reg) { >+ ASSERT(!m_namedDefdRegs.contains(reg)); >+ tmp = Tmp(reg); >+ m_namedUsedRegs.set(reg); >+ ASSERT(!m_availableRegs[bank].get(reg)); >+ return true; >+ } >+ >+ if 
(!m_availableRegs[bank].numberOfSetRegisters()) >+ freeDeadTmpsIfNeeded(); >+ >+ if (m_availableRegs[bank].numberOfSetRegisters()) { >+ // We first take an available register. >+ for (Reg reg : m_registers[bank]) { >+ if (m_namedUsedRegs.contains(reg) || m_namedDefdRegs.contains(reg)) >+ continue; >+ if (!m_availableRegs[bank].contains(reg)) >+ continue; >+ m_namedUsedRegs.set(reg); // At this point, it doesn't matter if we add it to the m_namedUsedRegs or m_namedDefdRegs. We just need to mark that we can't use it again. >+ alloc(tmp, reg, isDef); >+ tmp = Tmp(reg); >+ return true; >+ } >+ >+ RELEASE_ASSERT_NOT_REACHED(); >+ } >+ >+ // Nothing was available, let's make some room. >+ for (Reg reg : m_registers[bank]) { >+ if (m_namedUsedRegs.contains(reg) || m_namedDefdRegs.contains(reg)) >+ continue; >+ >+ m_namedUsedRegs.set(reg); >+ >+ alloc(tmp, reg, isDef); >+ tmp = Tmp(reg); >+ return true; >+ } >+ >+ // This can happen if we have a #WarmAnys > #Available registers >+ return false; >+} >+ >+ALWAYS_INLINE bool GenerateAndAllocateRegisters::isDisallowedRegister(Reg reg) >+{ >+ return !m_allowedRegisters.get(reg); >+} >+ >+void GenerateAndAllocateRegisters::prepareForGeneration() >+{ >+ // We pessimistically assume we use all callee saves. >+ handleCalleeSaves(m_code, RegisterSet::calleeSaveRegisters()); >+ allocateEscapedStackSlots(m_code); >+ >+ // Each Tmp gets its own stack slot. 
>+ unsigned nextStackIndex = 0; >+ auto assignNextStackSlot = [&] (const Tmp& tmp) { >+ intptr_t offset = -static_cast<intptr_t>(m_code.frameSize()) - static_cast<intptr_t>(nextStackIndex) * 8 - 8; >+ ++nextStackIndex; >+ >+ TmpData data; >+ data.spillSlot = m_code.addStackSlot(8, StackSlotKind::Spill); >+ data.spillSlot->setOffsetFromFP(offset); >+ data.reg = Reg(); >+ m_map[tmp] = data; >+#if !ASSERT_DISABLED >+ m_allTmps[tmp.bank()].append(tmp); >+#endif >+ }; >+ >+ m_code.forEachTmp([&] (Tmp tmp) { >+ ASSERT(!tmp.isReg()); >+ assignNextStackSlot(tmp); >+ }); >+ >+ m_allowedRegisters = RegisterSet(); >+ >+ forEachBank([&] (Bank bank) { >+ m_registers[bank] = m_code.regsInPriorityOrder(bank); >+ >+ for (Reg reg : m_registers[bank]) { >+ m_allowedRegisters.set(reg); >+ >+ Tmp tmp(reg); >+ assignNextStackSlot(tmp); >+ } >+ }); >+ >+ updateFrameSizeBasedOnStackSlots(m_code); >+ m_code.setStackIsAllocated(true); >+ >+ lowerStackArgs(m_code); >+ >+ // Verify none of these passes add any tmps. >+#if !ASSERT_DISABLED >+ forEachBank([&] (Bank bank) { >+ ASSERT(m_allTmps[bank].size() - m_registers[bank].size() == m_code.numTmps(bank)); >+ }); >+#endif >+} >+ >+void GenerateAndAllocateRegisters::generate(CCallHelpers& jit) >+{ >+ m_jit = &jit; >+ >+ // FIXME: We neither use the disassembler nor the CodeOriginMap. >+ // We could change this code to use those APIs. They're currently >+ // only used with JS code, and we don't want to any cycles calling >+ // into them. >+ >+ TimingScope timingScope("Air::generateAndAllocateRegisters"); >+ >+ gatherTerminalPatchSpills(); >+ >+ DisallowMacroScratchRegisterUsage disallowScratch(*m_jit); >+ >+ UnifiedTmpLiveness liveness(m_code); >+ buildLiveRanges(liveness); >+ >+ IndexMap<BasicBlock*, IndexMap<Reg, Tmp>> currentAllocationMap(m_code.size()); >+ { >+ IndexMap<Reg, Tmp> defaultCurrentAllocation(Reg::maxIndex() + 1); >+ for (BasicBlock* block : m_code) { >+ if (block == m_code[0]) // Handled below. 
>+ continue; >+ currentAllocationMap[block] = defaultCurrentAllocation; >+ } >+ >+ for (Tmp tmp : liveness.liveAtHead(m_code[0])) { >+ if (!tmp.isReg()) >+ continue; >+ defaultCurrentAllocation[tmp.reg()] = tmp; >+ } >+ currentAllocationMap[m_code[0]] = defaultCurrentAllocation; >+ } >+ >+ // And now, we generate code. >+ GenerationContext context; >+ context.code = &m_code; >+ context.blockLabels.resize(m_code.size()); >+ for (BasicBlock* block : m_code) >+ context.blockLabels[block] = Box<CCallHelpers::Label>::create(); >+ IndexMap<BasicBlock*, CCallHelpers::JumpList> blockJumps(m_code.size()); >+ >+ auto link = [&] (CCallHelpers::Jump jump, BasicBlock* target) { >+ if (context.blockLabels[target]->isSet()) { >+ jump.linkTo(*context.blockLabels[target], m_jit); >+ return; >+ } >+ >+ blockJumps[target].append(jump); >+ }; >+ >+ Disassembler* disassembler = m_code.disassembler(); >+ >+ m_globalInstIndex = 0; >+ >+ for (BasicBlock* block : m_code) { >+ context.currentBlock = block; >+ context.indexInBlock = UINT_MAX; >+ blockJumps[block].link(m_jit); >+ CCallHelpers::Label label = m_jit->label(); >+ *context.blockLabels[block] = label; >+ >+ if (disassembler) >+ disassembler->startBlock(block, *m_jit); >+ >+ if (Optional<unsigned> entrypointIndex = m_code.entrypointIndex(block)) { >+ ASSERT(m_code.isEntrypoint(block)); >+ if (disassembler) >+ disassembler->startEntrypoint(*m_jit); >+ >+ m_code.prologueGeneratorForEntrypoint(*entrypointIndex)->run(*m_jit, m_code); >+ >+ if (disassembler) >+ disassembler->endEntrypoint(*m_jit); >+ } else >+ ASSERT(!m_code.isEntrypoint(block)); >+ >+ auto startLabel = m_jit->labelIgnoringWatchpoints(); >+ >+ { >+ auto iter = m_blocksForPatchSpilling.find(block); >+ if (iter != m_blocksForPatchSpilling.end()) { >+ auto& data = iter->value; >+ data.jump = m_jit->jump(); >+ data.continueLabel = m_jit->label(); >+ } >+ } >+ >+ forEachBank([&] (Bank bank) { >+#if !ASSERT_DISABLED >+ // By default, everything is spilled at block boundaries. 
We do this after we process each block >+ // so we don't have to walk all Tmps, since #Tmps >> #Available regs. Instead, we walk the register file at >+ // each block boundary and clear entries in this map. >+ for (Tmp tmp : m_allTmps[bank]) >+ ASSERT(m_map[tmp].reg == Reg()); >+#endif >+ >+ RegisterSet availableRegisters; >+ for (Reg reg : m_registers[bank]) >+ availableRegisters.set(reg); >+ m_availableRegs[bank] = WTFMove(availableRegisters); >+ }); >+ >+ IndexMap<Reg, Tmp>& currentAllocation = currentAllocationMap[block]; >+ m_currentAllocation = ¤tAllocation; >+ >+ for (unsigned i = 0; i < currentAllocation.size(); ++i) { >+ Tmp tmp = currentAllocation[i]; >+ if (!tmp) >+ continue; >+ Reg reg = Reg::fromIndex(i); >+ m_map[tmp].reg = reg; >+ m_availableRegs[tmp.bank()].clear(reg); >+ } >+ >+ bool isReplayingSameInst = false; >+ for (size_t instIndex = 0; instIndex < block->size(); ++instIndex) { >+ if (instIndex && !isReplayingSameInst) >+ startLabel = m_jit->labelIgnoringWatchpoints(); >+ >+ context.indexInBlock = instIndex; >+ >+ Inst& inst = block->at(instIndex); >+ >+ m_didAlreadyFreeDeadSlots = false; >+ >+ m_namedUsedRegs = RegisterSet(); >+ m_namedDefdRegs = RegisterSet(); >+ >+ inst.forEachArg([&] (Arg& arg, Arg::Role role, Bank, Width) { >+ if (!arg.isTmp()) >+ return; >+ >+ Tmp tmp = arg.tmp(); >+ if (tmp.isReg() && isDisallowedRegister(tmp.reg())) >+ return; >+ >+ if (tmp.isReg()) { >+ if (Arg::isAnyUse(role)) >+ m_namedUsedRegs.set(tmp.reg()); >+ if (Arg::isAnyDef(role)) >+ m_namedDefdRegs.set(tmp.reg()); >+ } >+ >+ // We convert any cold uses that are already in the stack to just point to >+ // the canonical stack location. >+ if (!Arg::isColdUse(role)) >+ return; >+ >+ if (!inst.admitsStack(arg)) >+ return; >+ >+ auto& entry = m_map[tmp]; >+ if (!entry.reg) { >+ // We're a cold use, and our current location is already on the stack. Just use that. 
>+ arg = Arg::addr(Tmp(GPRInfo::callFrameRegister), entry.spillSlot->offsetFromFP()); >+ } >+ }); >+ >+ RegisterSet clobberedRegisters; >+ { >+ Inst* nextInst = block->get(instIndex + 1); >+ if (inst.kind.opcode == Patch || (nextInst && nextInst->kind.opcode == Patch)) { >+ if (inst.kind.opcode == Patch) >+ clobberedRegisters.merge(inst.extraClobberedRegs()); >+ if (nextInst && nextInst->kind.opcode == Patch) >+ clobberedRegisters.merge(nextInst->extraEarlyClobberedRegs()); >+ >+ clobberedRegisters.filter(m_allowedRegisters); >+ clobberedRegisters.exclude(m_namedDefdRegs); >+ m_namedDefdRegs.merge(clobberedRegisters); >+ } >+ } >+ >+ auto allocNamed = [&] (const RegisterSet& named, bool isDef) { >+ for (Reg reg : named) { >+ if (Tmp occupyingTmp = currentAllocation[reg]) { >+ if (occupyingTmp == Tmp(reg)) >+ continue; >+ } >+ >+ freeDeadTmpsIfNeeded(); // We don't want to spill a dead tmp. >+ alloc(Tmp(reg), reg, isDef); >+ } >+ }; >+ >+ allocNamed(m_namedUsedRegs, false); // Must come before the defd registers since we may use and def the same register. >+ allocNamed(m_namedDefdRegs, true); >+ >+ { >+ auto tryAllocate = [&] { >+ Vector<Tmp*, 8> usesToAlloc; >+ Vector<Tmp*, 8> defsToAlloc; >+ >+ inst.forEachTmp([&] (Tmp& tmp, Arg::Role role, Bank, Width) { >+ if (tmp.isReg()) >+ return; >+ // We bucket any arg that is Use+Def as a Use. >+ if (Arg::isAnyUse(role)) >+ usesToAlloc.append(&tmp); >+ else if (Arg::isAnyDef(role)) >+ defsToAlloc.append(&tmp); >+ }); >+ >+ auto tryAllocateTmps = [&] (auto& vector, bool isDef) { >+ bool success = true; >+ for (Tmp* tmp : vector) >+ success &= assignTmp(*tmp, tmp->bank(), isDef); >+ return success; >+ }; >+ >+ // We first handle uses, then defs. We want to be able to tell the register allocator >+ // which tmps need to be loaded from memory into their assigned register. Those such >+ // tmps are uses. Defs don't need to be reloaded since we're defining them. However, >+ // some tmps may both be used and defd. 
So we handle uses first since forEachTmp could >+ // walk uses/defs in any order. >+ bool success = true; >+ success &= tryAllocateTmps(usesToAlloc, false); >+ success &= tryAllocateTmps(defsToAlloc, true); >+ >+ return success; >+ }; >+ >+ // We first allocate trying to give any Tmp a register. If that makes us exhaust the >+ // available registers, we convert anything that accepts stack to be a stack addr >+ // instead. This can happen for programs Insts that take in many args, but most >+ // args can just be stack values. >+ bool success = tryAllocate(); >+ if (!success) { >+ RELEASE_ASSERT(!isReplayingSameInst); // We should only need to do the below at most once per inst. >+ >+ // We need to capture the register state before we start spilling things >+ // since we may have multiple arguments that are the same register. >+ IndexMap<Reg, Tmp> allocationSnapshot = currentAllocation; >+ >+ // We rewind this Inst to be in its previous state, however, if any arg admits stack, >+ // we move to providing that arg in stack form. This will allow us to fully allocate >+ // this inst when we rewind. >+ inst.forEachArg([&] (Arg& arg, Arg::Role, Bank, Width) { >+ if (!arg.isTmp()) >+ return; >+ >+ Tmp tmp = arg.tmp(); >+ if (tmp.isReg() && isDisallowedRegister(tmp.reg())) >+ return; >+ >+ if (tmp.isReg()) { >+ Tmp originalTmp = allocationSnapshot[tmp.reg()]; >+ if (originalTmp.isReg()) { >+ ASSERT(tmp.reg() == originalTmp.reg()); >+ // This means this Inst referred to this reg directly. We leave these as is. 
>+ return; >+ } >+ tmp = originalTmp; >+ } >+ >+ if (!inst.admitsStack(arg)) { >+ arg = tmp; >+ return; >+ } >+ >+ auto& entry = m_map[tmp]; >+ if (Reg reg = entry.reg) >+ spill(tmp, reg); >+ >+ arg = Arg::addr(Tmp(GPRInfo::callFrameRegister), entry.spillSlot->offsetFromFP()); >+ }); >+ >+ --instIndex; >+ isReplayingSameInst = true; >+ continue; >+ } >+ >+ isReplayingSameInst = false; >+ } >+ >+ if (m_code.needsUsedRegisters() && inst.kind.opcode == Patch) { >+ freeDeadTmpsIfNeeded(); >+ RegisterSet registerSet; >+ for (size_t i = 0; i < currentAllocation.size(); ++i) { >+ if (currentAllocation[i]) >+ registerSet.set(Reg::fromIndex(i)); >+ } >+ inst.reportUsedRegisters(registerSet); >+ } >+ >+ if (inst.isTerminal() && block->numSuccessors()) { >+ // By default, we spill everything between block boundaries. However, we have a small >+ // heuristic to pass along register state. We should eventually make this better. >+ // What we do now is if we have a successor with a single predecessor (us), and we >+ // haven't yet generated code for it, we give it our register state. If all our successors >+ // can take on our register state, we don't flush at the end of this block. 
>+ >+ bool everySuccessorGetsOurRegisterState = true; >+ for (unsigned i = 0; i < block->numSuccessors(); ++i) { >+ BasicBlock* successor = block->successorBlock(i); >+ if (successor->numPredecessors() == 1 && !context.blockLabels[successor]->isSet()) >+ currentAllocationMap[successor] = currentAllocation; >+ else >+ everySuccessorGetsOurRegisterState = false; >+ } >+ if (!everySuccessorGetsOurRegisterState) { >+ for (Tmp tmp : liveness.liveAtTail(block)) { >+ if (tmp.isReg() && isDisallowedRegister(tmp.reg())) >+ continue; >+ if (Reg reg = m_map[tmp].reg) >+ flush(tmp, reg); >+ } >+ } >+ } >+ >+ if (!inst.isTerminal()) { >+ CCallHelpers::Jump jump = inst.generate(*m_jit, context); >+ ASSERT_UNUSED(jump, !jump.isSet()); >+ >+ for (Reg reg : clobberedRegisters) { >+ Tmp tmp(reg); >+ ASSERT(currentAllocation[reg] == tmp); >+ m_availableRegs[tmp.bank()].set(reg); >+ m_currentAllocation->at(reg) = Tmp(); >+ m_map[tmp].reg = Reg(); >+ } >+ } else { >+ bool needsToGenerate = true; >+ if (inst.kind.opcode == Jump && block->successorBlock(0) == m_code.findNextBlock(block)) >+ needsToGenerate = false; >+ >+ if (isReturn(inst.kind.opcode)) { >+ needsToGenerate = false; >+ >+ // We currently don't represent the full epilogue in Air, so we need to >+ // have this override. >+ if (m_code.frameSize()) { >+ m_jit->emitRestore(m_code.calleeSaveRegisterAtOffsetList()); >+ m_jit->emitFunctionEpilogue(); >+ } else >+ m_jit->emitFunctionEpilogueWithEmptyFrame(); >+ m_jit->ret(); >+ } >+ >+ if (needsToGenerate) { >+ CCallHelpers::Jump jump = block->last().generate(*m_jit, context); >+ >+ // The jump won't be set for patchpoints. It won't be set for Oops because then it won't have >+ // any successors. 
>+ if (jump.isSet()) { >+ switch (block->numSuccessors()) { >+ case 1: >+ link(jump, block->successorBlock(0)); >+ break; >+ case 2: >+ link(jump, block->successorBlock(0)); >+ if (block->successorBlock(1) != m_code.findNextBlock(block)) >+ link(m_jit->jump(), block->successorBlock(1)); >+ break; >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ } >+ } >+ } >+ >+ auto endLabel = m_jit->labelIgnoringWatchpoints(); >+ if (disassembler) >+ disassembler->addInst(&inst, startLabel, endLabel); >+ >+ ++m_globalInstIndex; >+ } >+ >+ // Registers usually get spilled at block boundaries. We do it this way since we don't >+ // want to iterate the entire TmpMap, since usually #Tmps >> #Regs. We may not actually spill >+ // all registers, but at the top of this loop we handle that case by pre-populating register >+ // state. Here, we just clear this map. After this loop, this map should contain only >+ // null entries. >+ for (size_t i = 0; i < currentAllocation.size(); ++i) { >+ if (Tmp tmp = currentAllocation[i]) >+ m_map[tmp].reg = Reg(); >+ } >+ } >+ >+ for (auto& entry : m_blocksForPatchSpilling) { >+ entry.value.jump.linkTo(m_jit->label(), m_jit); >+ const HashMap<Tmp, Arg*>& spills = entry.value.defdTmps; >+ for (auto& entry : spills) { >+ Arg* arg = entry.value; >+ if (!arg->isTmp()) >+ continue; >+ Tmp originalTmp = entry.key; >+ Tmp currentTmp = arg->tmp(); >+ ASSERT_WITH_MESSAGE(currentTmp.isReg(), "We already did register allocation so we should have assigned this Tmp to a register."); >+ flush(originalTmp, currentTmp.reg()); >+ } >+ m_jit->jump().linkTo(entry.value.continueLabel, m_jit); >+ } >+ >+ context.currentBlock = nullptr; >+ context.indexInBlock = UINT_MAX; >+ >+ Vector<CCallHelpers::Label> entrypointLabels(m_code.numEntrypoints()); >+ for (unsigned i = m_code.numEntrypoints(); i--;) >+ entrypointLabels[i] = *context.blockLabels[m_code.entrypoint(i).block()]; >+ m_code.setEntrypointLabels(WTFMove(entrypointLabels)); >+ >+ if (disassembler) >+ 
disassembler->startLatePath(*m_jit); >+ >+ // FIXME: Make late paths have Origins: https://bugs.webkit.org/show_bug.cgi?id=153689 >+ for (auto& latePath : context.latePaths) >+ latePath->run(*m_jit, context); >+ >+ if (disassembler) >+ disassembler->endLatePath(*m_jit); >+} >+ >+} } } // namespace JSC::B3::Air >+ >+#endif // ENABLE(B3_JIT) >Index: Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.h >=================================================================== >--- Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.h (nonexistent) >+++ Source/JavaScriptCore/b3/air/AirAllocateRegistersAndStackAndGenerateCode.h (working copy) >@@ -0,0 +1,93 @@ >+/* >+ * Copyright (C) 2019 Apple Inc. All rights reserved. >+ * >+ * Redistribution and use in source and binary forms, with or without >+ * modification, are permitted provided that the following conditions >+ * are met: >+ * 1. Redistributions of source code must retain the above copyright >+ * notice, this list of conditions and the following disclaimer. >+ * 2. Redistributions in binary form must reproduce the above copyright >+ * notice, this list of conditions and the following disclaimer in the >+ * documentation and/or other materials provided with the distribution. >+ * >+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY >+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE >+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR >+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. 
OR >+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, >+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, >+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR >+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY >+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT >+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE >+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. >+ */ >+ >+#pragma once >+ >+#if ENABLE(B3_JIT) >+ >+#include "AirLiveness.h" >+#include "AirTmpMap.h" >+ >+namespace JSC { >+ >+class CCallHelpers; >+ >+namespace B3 { namespace Air { >+ >+class Code; >+ >+class GenerateAndAllocateRegisters { >+ struct TmpData { >+ StackSlot* spillSlot; >+ Reg reg; >+ }; >+ >+public: >+ GenerateAndAllocateRegisters(Code&); >+ >+ void prepareForGeneration(); >+ void generate(CCallHelpers&); >+ >+private: >+ void gatherTerminalPatchSpills(); >+ void flush(Tmp, Reg); >+ void spill(Tmp, Reg); >+ void alloc(Tmp, Reg, bool isDef); >+ void freeDeadTmpsIfNeeded(); >+ bool assignTmp(Tmp&, Bank, bool isDef); >+ void buildLiveRanges(UnifiedTmpLiveness&); >+ bool isDisallowedRegister(Reg); >+ >+ Code& m_code; >+ CCallHelpers* m_jit { nullptr }; >+ >+ TmpMap<TmpData> m_map; >+ >+#if !ASSERT_DISABLED >+ Vector<Tmp> m_allTmps[numBanks]; >+#endif >+ >+ Vector<Reg> m_registers[numBanks]; >+ RegisterSet m_availableRegs[numBanks]; >+ size_t m_globalInstIndex; >+ IndexMap<Reg, Tmp>* m_currentAllocation { nullptr }; >+ bool m_didAlreadyFreeDeadSlots; >+ TmpMap<size_t> m_liveRangeEnd; >+ RegisterSet m_namedUsedRegs; >+ RegisterSet m_namedDefdRegs; >+ RegisterSet m_allowedRegisters; >+ >+ struct PatchSpillData { >+ CCallHelpers::Jump jump; >+ CCallHelpers::Label continueLabel; >+ HashMap<Tmp, Arg*> defdTmps; >+ }; >+ >+ HashMap<BasicBlock*, PatchSpillData> m_blocksForPatchSpilling; >+}; >+ >+} } } // namespace JSC::B3::Air >+ >+#endif // ENABLE(B3_JIT) 
>Index: Source/JavaScriptCore/b3/air/AirCode.cpp >=================================================================== >--- Source/JavaScriptCore/b3/air/AirCode.cpp (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirCode.cpp (working copy) >@@ -28,6 +28,7 @@ > > #if ENABLE(B3_JIT) > >+#include "AirAllocateRegistersAndStackAndGenerateCode.h" > #include "AirCCallSpecial.h" > #include "AirCFG.h" > #include "AllowMacroScratchRegisterUsageIf.h" >Index: Source/JavaScriptCore/b3/air/AirCode.h >=================================================================== >--- Source/JavaScriptCore/b3/air/AirCode.h (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirCode.h (working copy) >@@ -50,6 +50,7 @@ IGNORE_RETURN_TYPE_WARNINGS_BEGIN > > namespace Air { > >+class GenerateAndAllocateRegisters; > class BlockInsertionSet; > class CCallSpecial; > class CFG; >@@ -337,6 +338,8 @@ public: > WeakRandom& weakRandom() { return m_weakRandom; } > > void emitDefaultPrologue(CCallHelpers&); >+ >+ std::unique_ptr<GenerateAndAllocateRegisters> m_generateAndAllocateRegisters; > > private: > friend class ::JSC::B3::Procedure; >Index: Source/JavaScriptCore/b3/air/AirGenerate.cpp >=================================================================== >--- Source/JavaScriptCore/b3/air/AirGenerate.cpp (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirGenerate.cpp (working copy) >@@ -28,6 +28,7 @@ > > #if ENABLE(B3_JIT) > >+#include "AirAllocateRegistersAndStackAndGenerateCode.h" > #include "AirAllocateRegistersAndStackByLinearScan.h" > #include "AirAllocateRegistersByGraphColoring.h" > #include "AirAllocateStackByGraphColoring.h" >@@ -36,6 +37,8 @@ > #include "AirFixObviousSpills.h" > #include "AirFixPartialRegisterStalls.h" > #include "AirGenerationContext.h" >+#include "AirHandleCalleeSaves.h" >+#include "AirLiveness.h" > #include "AirLogRegisterPressure.h" > #include "AirLowerAfterRegAlloc.h" > #include "AirLowerEntrySwitch.h" >@@ -45,6 +48,8 @@ > #include "AirOptimizeBlockOrder.h" > 
#include "AirReportUsedRegisters.h" > #include "AirSimplifyCFG.h" >+#include "AirStackAllocation.h" >+#include "AirTmpMap.h" > #include "AirValidate.h" > #include "B3Common.h" > #include "B3Procedure.h" >@@ -73,6 +78,34 @@ void prepareForGeneration(Code& code) > if (shouldValidateIR()) > validate(code); > >+ if (!code.optLevel()) { >+ lowerMacros(code); >+ >+ // We may still need to do post-allocation lowering. Doing it after both register and >+ // stack allocation is less optimal, but it works fine. >+ lowerAfterRegAlloc(code); >+ >+ // Actually create entrypoints. >+ lowerEntrySwitch(code); >+ >+ // This sorts the basic blocks in Code to achieve an ordering that maximizes the likelihood that a high >+ // frequency successor is also the fall-through target. >+ optimizeBlockOrder(code); >+ >+ if (shouldValidateIR()) >+ validate(code); >+ >+ if (shouldDumpIR(AirMode)) { >+ dataLog("Air after ", code.lastPhaseName(), ", before generation:\n"); >+ dataLog(code); >+ } >+ >+ code.m_generateAndAllocateRegisters = std::make_unique<GenerateAndAllocateRegisters>(code); >+ code.m_generateAndAllocateRegisters->prepareForGeneration(); >+ >+ return; >+ } >+ > simplifyCFG(code); > > lowerMacros(code); >@@ -161,7 +194,7 @@ void prepareForGeneration(Code& code) > } > } > >-void generate(Code& code, CCallHelpers& jit) >+static void generateWithAlreadyAllocatedRegisters(Code& code, CCallHelpers& jit) > { > TimingScope timingScope("Air::generate"); > >@@ -171,10 +204,8 @@ void generate(Code& code, CCallHelpers& > GenerationContext context; > context.code = &code; > context.blockLabels.resize(code.size()); >- for (BasicBlock* block : code) { >- if (block) >- context.blockLabels[block] = Box<CCallHelpers::Label>::create(); >- } >+ for (BasicBlock* block : code) >+ context.blockLabels[block] = Box<CCallHelpers::Label>::create(); > IndexMap<BasicBlock*, CCallHelpers::JumpList> blockJumps(code.size()); > > auto link = [&] (CCallHelpers::Jump jump, BasicBlock* target) { >@@ -305,6 +336,14 
@@ void generate(Code& code, CCallHelpers& > pcToOriginMap.appendItem(jit.labelIgnoringWatchpoints(), Origin()); > } > >+void generate(Code& code, CCallHelpers& jit) >+{ >+ if (code.optLevel()) >+ generateWithAlreadyAllocatedRegisters(code, jit); >+ else >+ code.m_generateAndAllocateRegisters->generate(jit); >+} >+ > } } } // namespace JSC::B3::Air > > #endif // ENABLE(B3_JIT) >Index: Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.cpp >=================================================================== >--- Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.cpp (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.cpp (working copy) >@@ -50,7 +50,12 @@ void handleCalleeSaves(Code& code) > } > } > >- // Now we filter to really get the callee saves. >+ handleCalleeSaves(code, WTFMove(usedCalleeSaves)); >+} >+ >+void handleCalleeSaves(Code& code, RegisterSet usedCalleeSaves) >+{ >+ // We filter to really get the callee saves. > usedCalleeSaves.filter(RegisterSet::calleeSaveRegisters()); > usedCalleeSaves.filter(code.mutableRegs()); > usedCalleeSaves.exclude(RegisterSet::stackRegisters()); // We don't need to save FP here. 
>Index: Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.h >=================================================================== >--- Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.h (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirHandleCalleeSaves.h (working copy) >@@ -41,6 +41,7 @@ class Code; > // We should make this interact with the client: https://bugs.webkit.org/show_bug.cgi?id=150459 > > void handleCalleeSaves(Code&); >+void handleCalleeSaves(Code&, RegisterSet); > > } } } // namespace JSC::B3::Air > >Index: Source/JavaScriptCore/b3/air/AirTmpMap.h >=================================================================== >--- Source/JavaScriptCore/b3/air/AirTmpMap.h (revision 241282) >+++ Source/JavaScriptCore/b3/air/AirTmpMap.h (working copy) >@@ -39,9 +39,9 @@ namespace JSC { namespace B3 { namespace > template<typename Value> > class TmpMap { > public: >- TmpMap() >- { >- } >+ TmpMap() = default; >+ TmpMap(TmpMap&&) = default; >+ TmpMap& operator=(TmpMap&&) = default; > > template<typename... Args> > TmpMap(Code& code, const Args&... 
args) >Index: Source/JavaScriptCore/runtime/Options.h >=================================================================== >--- Source/JavaScriptCore/runtime/Options.h (revision 241282) >+++ Source/JavaScriptCore/runtime/Options.h (working copy) >@@ -480,7 +480,7 @@ constexpr bool enableWebAssemblyStreamin > \ > v(bool, failToCompileWebAssemblyCode, false, Normal, "If true, no Wasm::Plan will sucessfully compile a function.") \ > v(size, webAssemblyPartialCompileLimit, 5000, Normal, "Limit on the number of bytes a Wasm::Plan::compile should attempt before checking for other work.") \ >- v(unsigned, webAssemblyBBQOptimizationLevel, 1, Normal, "B3 Optimization level for BBQ Web Assembly module compilations.") \ >+ v(unsigned, webAssemblyBBQOptimizationLevel, 0, Normal, "B3 Optimization level for BBQ Web Assembly module compilations.") \ > v(unsigned, webAssemblyOMGOptimizationLevel, Options::defaultB3OptLevel(), Normal, "B3 Optimization level for OMG Web Assembly module compilations.") \ > \ > v(bool, useBBQTierUpChecks, true, Normal, "Enables tier up checks for our BBQ code.") \ >Index: Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp >=================================================================== >--- Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp (revision 241282) >+++ Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp (working copy) >@@ -222,7 +222,7 @@ public: > return fail(__VA_ARGS__); \ > } while (0) > >- AirIRGenerator(const ModuleInformation&, B3::Procedure&, InternalFunction*, Vector<UnlinkedWasmToWasmCall>&, MemoryMode, CompilationMode, unsigned functionIndex, TierUpCount*, ThrowWasmException, const Signature&); >+ AirIRGenerator(const ModuleInformation&, B3::Procedure&, InternalFunction*, Vector<UnlinkedWasmToWasmCall>&, MemoryMode, unsigned functionIndex, TierUpCount*, ThrowWasmException, const Signature&); > > PartialResult WARN_UNUSED_RETURN addArguments(const Signature&); > PartialResult WARN_UNUSED_RETURN addLocal(Type, uint32_t); >@@ -285,6 
+285,17 @@ public: > return result; > } > >+ ALWAYS_INLINE void didKill(const ExpressionType& typedTmp) >+ { >+ Tmp tmp = typedTmp.tmp(); >+ if (!tmp) >+ return; >+ if (tmp.isGP()) >+ m_freeGPs.append(tmp); >+ else >+ m_freeFPs.append(tmp); >+ } >+ > private: > ALWAYS_INLINE void validateInst(Inst& inst) > { >@@ -324,6 +335,16 @@ private: > > Tmp newTmp(B3::Bank bank) > { >+ switch (bank) { >+ case B3::GP: >+ if (m_freeGPs.size()) >+ return m_freeGPs.takeLast(); >+ break; >+ case B3::FP: >+ if (m_freeFPs.size()) >+ return m_freeFPs.takeLast(); >+ break; >+ } > return m_code.newTmp(bank); > } > >@@ -559,7 +580,6 @@ private: > FunctionParser<AirIRGenerator>* m_parser { nullptr }; > const ModuleInformation& m_info; > const MemoryMode m_mode { MemoryMode::BoundsChecking }; >- const CompilationMode m_compilationMode { CompilationMode::BBQMode }; > const unsigned m_functionIndex { UINT_MAX }; > const TierUpCount* m_tierUp { nullptr }; > >@@ -574,6 +594,9 @@ private: > GPRReg m_wasmContextInstanceGPR { InvalidGPRReg }; > bool m_makesCalls { false }; > >+ Vector<Tmp, 8> m_freeGPs; >+ Vector<Tmp, 8> m_freeFPs; >+ > TypedTmp m_instanceValue; // Always use the accessor below to ensure the instance value is materialized when used. 
> bool m_usesInstanceValue { false }; > TypedTmp instanceValue() >@@ -630,10 +653,9 @@ void AirIRGenerator::restoreWasmContextI > emitPatchpoint(block, patchpoint, Tmp(), instance); > } > >-AirIRGenerator::AirIRGenerator(const ModuleInformation& info, B3::Procedure& procedure, InternalFunction* compilation, Vector<UnlinkedWasmToWasmCall>& unlinkedWasmToWasmCalls, MemoryMode mode, CompilationMode compilationMode, unsigned functionIndex, TierUpCount* tierUp, ThrowWasmException throwWasmException, const Signature& signature) >+AirIRGenerator::AirIRGenerator(const ModuleInformation& info, B3::Procedure& procedure, InternalFunction* compilation, Vector<UnlinkedWasmToWasmCall>& unlinkedWasmToWasmCalls, MemoryMode mode, unsigned functionIndex, TierUpCount* tierUp, ThrowWasmException throwWasmException, const Signature& signature) > : m_info(info) > , m_mode(mode) >- , m_compilationMode(compilationMode) > , m_functionIndex(functionIndex) > , m_tierUp(tierUp) > , m_proc(procedure) >@@ -719,7 +741,6 @@ AirIRGenerator::AirIRGenerator(const Mod > // This allows leaf functions to not do stack checks if their frame size is within > // certain limits since their caller would have already done the check. > if (needsOverflowCheck) { >- AllowMacroScratchRegisterUsage allowScratch(jit); > GPRReg scratch = wasmCallingConventionAir().prologueScratch(0); > > if (Context::useFastTLS()) >@@ -735,7 +756,6 @@ AirIRGenerator::AirIRGenerator(const Mod > }); > } else if (m_usesInstanceValue && Context::useFastTLS()) { > // No overflow check is needed, but the instance values still needs to be correct. >- AllowMacroScratchRegisterUsageIf allowScratch(jit, CCallHelpers::loadWasmContextInstanceNeedsMacroScratchRegister()); > jit.loadWasmContextInstance(contextInstance); > } > } >@@ -1930,11 +1950,10 @@ Expected<std::unique_ptr<InternalFunctio > // optLevel=1. > procedure.setNeedsUsedRegisters(false); > >- procedure.setOptLevel(compilationMode == CompilationMode::BBQMode >- ? 
Options::webAssemblyBBQOptimizationLevel() >- : Options::webAssemblyOMGOptimizationLevel()); >+ ASSERT_UNUSED(compilationMode, compilationMode == CompilationMode::BBQMode); >+ procedure.setOptLevel(Options::webAssemblyBBQOptimizationLevel()); > >- AirIRGenerator irGenerator(info, procedure, result.get(), unlinkedWasmToWasmCalls, mode, compilationMode, functionIndex, tierUp, throwWasmException, signature); >+ AirIRGenerator irGenerator(info, procedure, result.get(), unlinkedWasmToWasmCalls, mode, functionIndex, tierUp, throwWasmException, signature); > FunctionParser<AirIRGenerator> parser(irGenerator, functionStart, functionLength, signature, info); > WASM_FAIL_IF_HELPER_FAILS(parser.parse()); > >@@ -2501,6 +2520,7 @@ auto AirIRGenerator::addOp<OpType::I64Tr > Vector<ConstrainedTmp> args; > auto* patchpoint = addPatchpoint(B3::Int64); > patchpoint->effects = B3::Effects::none(); >+ patchpoint->clobber(RegisterSet::macroScratchRegisters()); > args.append(arg); > if (isX86()) { > args.append(signBitConstant); >@@ -2574,6 +2594,7 @@ auto AirIRGenerator::addOp<OpType::I64Tr > > auto* patchpoint = addPatchpoint(B3::Int64); > patchpoint->effects = B3::Effects::none(); >+ patchpoint->clobber(RegisterSet::macroScratchRegisters()); > Vector<ConstrainedTmp> args; > args.append(arg); > if (isX86()) { >Index: Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp >=================================================================== >--- Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp (revision 241282) >+++ Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp (working copy) >@@ -230,6 +230,8 @@ public: > Value* constant(B3::Type, uint64_t bits, Optional<Origin> = WTF::nullopt); > void insertConstants(); > >+ ALWAYS_INLINE void didKill(ExpressionType) { } >+ > private: > void emitExceptionCheck(CCallHelpers&, ExceptionType); > >Index: Source/JavaScriptCore/wasm/WasmFunctionParser.h >=================================================================== >--- 
Source/JavaScriptCore/wasm/WasmFunctionParser.h (revision 241282) >+++ Source/JavaScriptCore/wasm/WasmFunctionParser.h (working copy) >@@ -168,6 +168,8 @@ auto FunctionParser<Context>::binaryCase > WASM_TRY_POP_EXPRESSION_STACK_INTO(right, "binary right"); > WASM_TRY_POP_EXPRESSION_STACK_INTO(left, "binary left"); > WASM_TRY_ADD_TO_CONTEXT(template addOp<op>(left, right, result)); >+ m_context.didKill(left); >+ m_context.didKill(right); > > m_expressionStack.append(result); > return { }; >@@ -182,6 +184,7 @@ auto FunctionParser<Context>::unaryCase( > > WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "unary"); > WASM_TRY_ADD_TO_CONTEXT(template addOp<op>(value, result)); >+ m_context.didKill(value); > > m_expressionStack.append(result); > return { }; >@@ -211,6 +214,10 @@ auto FunctionParser<Context>::parseExpre > ExpressionType result; > WASM_TRY_ADD_TO_CONTEXT(addSelect(condition, nonZero, zero, result)); > >+ m_context.didKill(condition); >+ m_context.didKill(zero); >+ m_context.didKill(nonZero); >+ > m_expressionStack.append(result); > return { }; > } >@@ -226,6 +233,7 @@ auto FunctionParser<Context>::parseExpre > WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset"); > WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "load pointer"); > WASM_TRY_ADD_TO_CONTEXT(load(static_cast<LoadOpType>(m_currentOpcode), pointer, result, offset)); >+ m_context.didKill(pointer); > m_expressionStack.append(result); > return { }; > } >@@ -241,6 +249,8 @@ auto FunctionParser<Context>::parseExpre > WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "store value"); > WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "store pointer"); > WASM_TRY_ADD_TO_CONTEXT(store(static_cast<StoreOpType>(m_currentOpcode), pointer, value, offset)); >+ m_context.didKill(value); >+ m_context.didKill(pointer); > return { }; > } > #undef CREATE_CASE >@@ -288,6 +298,7 @@ auto FunctionParser<Context>::parseExpre > WASM_PARSER_FAIL_IF(!parseVarUInt32(index), "can't get index for set_local"); > 
WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "set_local"); > WASM_TRY_ADD_TO_CONTEXT(setLocal(index, value)); >+ m_context.didKill(value); > return { }; > } > >@@ -314,6 +325,7 @@ auto FunctionParser<Context>::parseExpre > WASM_PARSER_FAIL_IF(!parseVarUInt32(index), "can't get set_global's index"); > WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "set_global value"); > WASM_TRY_ADD_TO_CONTEXT(setGlobal(index, value)); >+ m_context.didKill(value); > return { }; > } > >@@ -396,6 +408,7 @@ auto FunctionParser<Context>::parseExpre > WASM_TRY_ADD_TO_CONTEXT(addIf(condition, inlineSignature, control)); > m_controlStack.append({ WTFMove(m_expressionStack), control }); > m_expressionStack = ExpressionList(); >+ m_context.didKill(condition); > return { }; > } > >@@ -420,6 +433,9 @@ auto FunctionParser<Context>::parseExpre > ControlType& data = m_controlStack[m_controlStack.size() - 1 - target].controlData; > > WASM_TRY_ADD_TO_CONTEXT(addBranch(data, condition, m_expressionStack)); >+ >+ m_context.didKill(condition); >+ > return { }; > } > >@@ -446,6 +462,8 @@ auto FunctionParser<Context>::parseExpre > WASM_TRY_POP_EXPRESSION_STACK_INTO(condition, "br_table condition"); > WASM_TRY_ADD_TO_CONTEXT(addSwitch(condition, targets, m_controlStack[m_controlStack.size() - 1 - defaultTarget].controlData, m_expressionStack)); > >+ m_context.didKill(condition); >+ > m_unreachableBlocks = 1; > return { }; > } >@@ -503,6 +521,8 @@ auto FunctionParser<Context>::parseExpre > WASM_TRY_ADD_TO_CONTEXT(addGrowMemory(delta, result)); > m_expressionStack.append(result); > >+ m_context.didKill(delta); >+ > return { }; > } > >Index: Source/JavaScriptCore/wasm/WasmValidate.cpp >=================================================================== >--- Source/JavaScriptCore/wasm/WasmValidate.cpp (revision 241282) >+++ Source/JavaScriptCore/wasm/WasmValidate.cpp (working copy) >@@ -141,6 +141,8 @@ public: > Result WARN_UNUSED_RETURN addCall(unsigned calleeIndex, const Signature&, const Vector<ExpressionType>& 
args, ExpressionType& result); > Result WARN_UNUSED_RETURN addCallIndirect(const Signature&, const Vector<ExpressionType>& args, ExpressionType& result); > >+ ALWAYS_INLINE void didKill(ExpressionType) { } >+ > bool hasMemory() const { return !!m_module.memory; } > > Validate(const ModuleInformation& module) >Index: Source/WTF/ChangeLog >=================================================================== >--- Source/WTF/ChangeLog (revision 241282) >+++ Source/WTF/ChangeLog (working copy) >@@ -1,3 +1,16 @@ >+2019-02-11 Saam Barati <sbarati@apple.com> >+ >+ [WebAssembly] Write a new register allocator for Air O0 and make BBQ use it >+ https://bugs.webkit.org/show_bug.cgi?id=194036 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * wtf/IndexMap.h: >+ (WTF::IndexMap::at): >+ (WTF::IndexMap::at const): >+ (WTF::IndexMap::operator[]): >+ (WTF::IndexMap::operator[] const): >+ > 2019-02-11 Truitt Savell <tsavell@apple.com> > > Unreviewed, rolling out r241229. >Index: Source/WTF/wtf/IndexMap.h >=================================================================== >--- Source/WTF/wtf/IndexMap.h (revision 241282) >+++ Source/WTF/wtf/IndexMap.h (working copy) >@@ -37,9 +37,11 @@ namespace WTF { > template<typename Key, typename Value> > class IndexMap { > public: >- IndexMap() >- { >- } >+ IndexMap() = default; >+ IndexMap(IndexMap&&) = default; >+ IndexMap& operator=(IndexMap&&) = default; >+ IndexMap(const IndexMap&) = default; >+ IndexMap& operator=(const IndexMap&) = default; > > template<typename... Args> > explicit IndexMap(size_t size, Args&&... 
args) >@@ -61,25 +63,30 @@ public: > > size_t size() const { return m_vector.size(); } > >- Value& operator[](size_t index) >+ Value& at(const Key& key) > { >- return m_vector[index]; >+ return m_vector[IndexKeyType<Key>::index(key)]; >+ } >+ >+ const Value& at(const Key& key) const >+ { >+ return m_vector[IndexKeyType<Key>::index(key)]; > } > >- const Value& operator[](size_t index) const >+ Value& at(size_t index) > { > return m_vector[index]; > } >- >- Value& operator[](const Key& key) >+ >+ const Value& at(size_t index) const > { >- return m_vector[IndexKeyType<Key>::index(key)]; >+ return m_vector[index]; > } > >- const Value& operator[](const Key& key) const >- { >- return m_vector[IndexKeyType<Key>::index(key)]; >- } >+ Value& operator[](size_t index) { return at(index); } >+ const Value& operator[](size_t index) const { return at(index); } >+ Value& operator[](const Key& key) { return at(key); } >+ const Value& operator[](const Key& key) const { return at(key); } > > template<typename PassedValue> > void append(const Key& key, PassedValue&& value) >Index: Tools/ChangeLog >=================================================================== >--- Tools/ChangeLog (revision 241282) >+++ Tools/ChangeLog (working copy) >@@ -1,3 +1,12 @@ >+2019-02-11 Saam Barati <sbarati@apple.com> >+ >+ [WebAssembly] Write a new register allocator for Air O0 and make BBQ use it >+ https://bugs.webkit.org/show_bug.cgi?id=194036 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * Scripts/run-jsc-stress-tests: >+ > 2019-02-11 Daniel Bates <dabates@apple.com> > > [iOS] Mouse/Touch/Pointer events are missing modifier keys >Index: Tools/Scripts/run-jsc-stress-tests >=================================================================== >--- Tools/Scripts/run-jsc-stress-tests (revision 241282) >+++ Tools/Scripts/run-jsc-stress-tests (working copy) >@@ -486,6 +486,7 @@ EAGER_OPTIONS = ["--thresholdForJITAfter > # NOTE: Tests rely on this using scribbleFreeCells. 
> NO_CJIT_OPTIONS = ["--useConcurrentJIT=false", "--thresholdForJITAfterWarmUp=100", "--scribbleFreeCells=true"] > B3O1_OPTIONS = ["--defaultB3OptLevel=1"] >+B3O0_OPTIONS = ["--defaultB3OptLevel=0"] > FTL_OPTIONS = ["--useFTLJIT=true"] > PROBE_OSR_EXIT_OPTION = ["--useProbeOSRExit=true"] > >@@ -678,8 +679,8 @@ def runFTLNoCJIT(*optionalTestSpecificOp > run("misc-ftl-no-cjit", *(FTL_OPTIONS + NO_CJIT_OPTIONS + optionalTestSpecificOptions)) > end > >-def runFTLNoCJITB3O1(*optionalTestSpecificOptions) >- run("ftl-no-cjit-b3o1", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O1_OPTIONS + optionalTestSpecificOptions)) >+def runFTLNoCJITB3O0(*optionalTestSpecificOptions) >+ run("ftl-no-cjit-b3o0", "--useArrayAllocationProfiling=false", "--forcePolyProto=true", *(FTL_OPTIONS + NO_CJIT_OPTIONS + B3O0_OPTIONS + optionalTestSpecificOptions)) > end > > def runFTLNoCJITValidate(*optionalTestSpecificOptions) >@@ -781,7 +782,7 @@ def defaultRun > return if $mode == "basic" > > runFTLNoCJITValidate >- runFTLNoCJITB3O1 >+ runFTLNoCJITB3O0 > runFTLNoCJITNoPutStackValidate > runFTLNoCJITNoInlineValidate > runFTLEagerNoCJITB3O1 >@@ -811,7 +812,7 @@ def defaultNoNoLLIntRun > > return if $mode == "basic" > >- runFTLNoCJITB3O1 >+ runFTLNoCJITB3O0 > runFTLNoCJITNoPutStackValidate > runFTLNoCJITNoInlineValidate > runFTLEager >@@ -840,7 +841,7 @@ def defaultSpotCheckNoMaximalFlush > > runFTLNoCJITOSRValidation > runFTLNoCJITNoAccessInlining >- runFTLNoCJITB3O1 >+ runFTLNoCJITB3O0 > end > > def defaultSpotCheck >@@ -866,7 +867,7 @@ def defaultNoEagerRun > return if $mode == "basic" > > runFTLNoCJITNoInlineValidate >- runFTLNoCJITB3O1 >+ runFTLNoCJITB3O0 > end > end >