WebKit Bugzilla
Attachment 362676 Details for Bug 194375: [JSC] sizeof(JSString) should be 16
Patch: bug-194375-20190221174530.patch (text/plain), 156.19 KB, created by Yusuke Suzuki on 2019-02-21 17:45:32 PST
>Subversion Revision: 241874
>diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
>index fcab6e3d3eb73d070e9a55c0a1b2e25be80c420d..353f922dd3471584385ad728d635952b3007a68e 100644
>--- a/Source/JavaScriptCore/ChangeLog
>+++ b/Source/JavaScriptCore/ChangeLog
>@@ -1,3 +1,153 @@
>+2019-02-21  Yusuke Suzuki  <ysuzuki@apple.com>
>+
>+        [JSC] sizeof(JSString) should be 16
>+        https://bugs.webkit.org/show_bug.cgi?id=194375
>+
>+        Reviewed by NOBODY (OOPS!).
>+
>+        This patch reduces sizeof(JSString) from 24 to 16 so that it fits into a GC heap cell atom, and it also reduces sizeof(JSRopeString) from 48 to 32.
>+        Both classes save 16 bytes per instance in GC allocation. The new layout is used on 64-bit architectures that allow unaligned accesses and are
>+        little-endian.
>+
>+        JSString no longer holds length and flags directly. JSString has a String, and we query that String for this information instead of holding a
>+        duplicate copy in JSString. We embed the isRope bit in this String's pointer so that we can convert a JSRopeString to a JSString atomically.
>+        We emit a store-store fence when we store the String pointer. This fence should have existed even before this patch, so this patch also fixes a concurrency issue.
>+
>+        The old JSRopeString held its JSString* fibers separately from the String. In this patch, we merge the first JSString* fiber and the String pointer
>+        into one storage slot to reduce the size of JSRopeString. JSRopeString has three pointer-width storage slots. We use the 48-bit effective addresses of
>+        the JSString* fibers to compress three fibers + length + flags into these three pointer-width slots.
>+
>+        This patch is performance-neutral on Speedometer2 and JetStream2, and it improves RAMification by 2.7%.
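The fiber-compression scheme described in the ChangeLog can be sketched in plain C++. This is an illustrative model only, not JSC's actual layout (the name `CompressedFibers` and the exact bit assignments are invented); it assumes a 64-bit platform where all meaningful pointer bits fit in the low 48 bits, so the three upper 16-bit slots of three 64-bit words are free to hold 16 bits of flags plus a 32-bit length:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch: pack three 48-bit pointers, 16 bits of flags, and a
// 32-bit length into three 64-bit words (24 bytes total).
struct CompressedFibers {
    static constexpr uint64_t addressMask = (1ULL << 48) - 1;

    uint64_t words[3] = { 0, 0, 0 };

    // Store a pointer in the low 48 bits of word i, preserving the upper 16.
    void setFiber(unsigned i, void* p)
    {
        uint64_t bits = reinterpret_cast<uint64_t>(p);
        assert(!(bits & ~addressMask)); // pointer must fit in 48 bits
        words[i] = (words[i] & ~addressMask) | bits;
    }
    void* fiber(unsigned i) const
    {
        return reinterpret_cast<void*>(words[i] & addressMask);
    }

    // Flags live in the upper 16 bits of word 0.
    void setFlags(uint16_t flags)
    {
        words[0] = (words[0] & addressMask) | (uint64_t(flags) << 48);
    }
    uint16_t flags() const { return uint16_t(words[0] >> 48); }

    // The 32-bit length is split across the upper 16 bits of words 1 and 2.
    void setLength(uint32_t length)
    {
        words[1] = (words[1] & addressMask) | (uint64_t(uint16_t(length)) << 48);
        words[2] = (words[2] & addressMask) | (uint64_t(uint16_t(length >> 16)) << 48);
    }
    uint32_t length() const
    {
        return uint32_t(uint16_t(words[1] >> 48))
            | (uint32_t(uint16_t(words[2] >> 48)) << 16);
    }
};

static_assert(sizeof(CompressedFibers) == 24, "three pointer-width slots");
```

The 48-bit assumption matches current x86-64 and ARM64 user-space address spaces, which is why the ChangeLog restricts the new layout to 64-bit little-endian targets with unaligned access support.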
>+
>+        * assembler/MacroAssemblerARM64.h:
>+        (JSC::MacroAssemblerARM64::storeZero16):
>+        * assembler/MacroAssemblerX86Common.h:
>+        (JSC::MacroAssemblerX86Common::storeZero16):
>+        (JSC::MacroAssemblerX86Common::store16):
>+        * bytecode/AccessCase.cpp:
>+        (JSC::AccessCase::generateImpl):
>+        * bytecode/InlineAccess.cpp:
>+        (JSC::InlineAccess::dumpCacheSizesAndCrash):
>+        (JSC::InlineAccess::generateStringLength):
>+        * dfg/DFGOperations.cpp:
>+        * dfg/DFGOperations.h:
>+        * dfg/DFGSpeculativeJIT.cpp:
>+        (JSC::DFG::SpeculativeJIT::compileStringSlice):
>+        (JSC::DFG::SpeculativeJIT::compileToLowerCase):
>+        (JSC::DFG::SpeculativeJIT::compileGetCharCodeAt):
>+        (JSC::DFG::SpeculativeJIT::compileGetByValOnString):
>+        (JSC::DFG::SpeculativeJIT::compileStringEquality):
>+        (JSC::DFG::SpeculativeJIT::compileStringZeroLength):
>+        (JSC::DFG::SpeculativeJIT::compileLogicalNotStringOrOther):
>+        (JSC::DFG::SpeculativeJIT::emitStringBranch):
>+        (JSC::DFG::SpeculativeJIT::emitStringOrOtherBranch):
>+        (JSC::DFG::SpeculativeJIT::compileGetIndexedPropertyStorage):
>+        (JSC::DFG::SpeculativeJIT::compileGetArrayLength):
>+        (JSC::DFG::SpeculativeJIT::speculateStringIdentAndLoadStorage):
>+        (JSC::DFG::SpeculativeJIT::emitSwitchCharStringJump):
>+        (JSC::DFG::SpeculativeJIT::emitSwitchStringOnString):
>+        (JSC::DFG::SpeculativeJIT::compileMakeRope): Deleted.
>+        * dfg/DFGSpeculativeJIT32_64.cpp:
>+        (JSC::DFG::SpeculativeJIT::compile):
>+        (JSC::DFG::SpeculativeJIT::compileMakeRope):
>+        * dfg/DFGSpeculativeJIT64.cpp:
>+        (JSC::DFG::SpeculativeJIT::compile):
>+        (JSC::DFG::SpeculativeJIT::compileMakeRope):
>+        * ftl/FTLAbstractHeapRepository.cpp:
>+        (JSC::FTL::AbstractHeapRepository::AbstractHeapRepository):
>+        * ftl/FTLAbstractHeapRepository.h:
>+        * ftl/FTLLowerDFGToB3.cpp:
>+        (JSC::FTL::DFG::LowerDFGToB3::compileGetIndexedPropertyStorage):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileGetArrayLength):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileStringCharAt):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileStringCharCodeAt):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileCompareStrictEq):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileStringToUntypedStrictEquality):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileSwitch):
>+        (JSC::FTL::DFG::LowerDFGToB3::mapHashString):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileMapHash):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileHasOwnProperty):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileStringSlice):
>+        (JSC::FTL::DFG::LowerDFGToB3::compileToLowerCase):
>+        (JSC::FTL::DFG::LowerDFGToB3::stringsEqual):
>+        (JSC::FTL::DFG::LowerDFGToB3::boolify):
>+        (JSC::FTL::DFG::LowerDFGToB3::switchString):
>+        (JSC::FTL::DFG::LowerDFGToB3::isRopeString):
>+        (JSC::FTL::DFG::LowerDFGToB3::isNotRopeString):
>+        (JSC::FTL::DFG::LowerDFGToB3::speculateStringIdent):
>+        * jit/AssemblyHelpers.cpp:
>+        (JSC::AssemblyHelpers::emitConvertValueToBoolean):
>+        (JSC::AssemblyHelpers::branchIfValue):
>+        * jit/AssemblyHelpers.h:
>+        (JSC::AssemblyHelpers::branchIfRopeStringImpl):
>+        (JSC::AssemblyHelpers::branchIfNotRopeStringImpl):
>+        * jit/JITInlines.h:
>+        (JSC::JIT::emitLoadCharacterString):
>+        * jit/ThunkGenerators.cpp:
>+        (JSC::stringGetByValGenerator):
>+        (JSC::stringCharLoad):
>+        * llint/LowLevelInterpreter.asm:
>+        * llint/LowLevelInterpreter32_64.asm:
>+        * llint/LowLevelInterpreter64.asm:
>+        * runtime/JSString.cpp:
>+        (JSC::JSString::createEmptyString):
>+        (JSC::JSRopeString::RopeBuilder<RecordOverflow>::expand):
>+        (JSC::JSString::dumpToStream):
>+        (JSC::JSString::estimatedSize):
>+        (JSC::JSString::visitChildren):
>+        (JSC::JSRopeString::resolveRopeInternal8 const):
>+        (JSC::JSRopeString::resolveRopeInternal8NoSubstring const):
>+        (JSC::JSRopeString::resolveRopeInternal16 const):
>+        (JSC::JSRopeString::resolveRopeInternal16NoSubstring const):
>+        (JSC::JSRopeString::resolveRopeToAtomicString const):
>+        (JSC::JSRopeString::convertToNonRope const):
>+        (JSC::JSRopeString::resolveRopeToExistingAtomicString const):
>+        (JSC::JSRopeString::resolveRopeWithFunction const):
>+        (JSC::JSRopeString::resolveRope const):
>+        (JSC::JSRopeString::resolveRopeSlowCase8 const):
>+        (JSC::JSRopeString::resolveRopeSlowCase const):
>+        (JSC::JSRopeString::outOfMemory const):
>+        (JSC::JSRopeString::visitFibers): Deleted.
>+        (JSC::JSRopeString::clearFibers const): Deleted.
>+        * runtime/JSString.h:
>+        (JSC::JSString::uninitializedValueInternal const):
>+        (JSC::JSString::valueInternal const):
>+        (JSC::JSString::JSString):
>+        (JSC::JSString::finishCreation):
>+        (JSC::JSString::offsetOfValue):
>+        (JSC::JSString::isRope const):
>+        (JSC::JSString::is8Bit const):
>+        (JSC::JSString::length const):
>+        (JSC::JSString::tryGetValueImpl const):
>+        (JSC::JSString::toAtomicString const):
>+        (JSC::JSString::toExistingAtomicString const):
>+        (JSC::JSString::value const):
>+        (JSC::JSString::tryGetValue const):
>+        (JSC::JSRopeString::unsafeView const):
>+        (JSC::JSRopeString::viewWithUnderlyingString const):
>+        (JSC::JSString::unsafeView const):
>+        (JSC::JSString::viewWithUnderlyingString const):
>+        (JSC::JSString::offsetOfLength): Deleted.
>+        (JSC::JSString::offsetOfFlags): Deleted.
>+        (JSC::JSString::setIs8Bit const): Deleted.
>+        (JSC::JSString::setLength): Deleted.
>+        (JSC::JSString::string): Deleted.
>+        (JSC::jsStringBuilder): Deleted.
>+        * runtime/JSStringInlines.h:
>+        (JSC::JSString::~JSString):
>+        (JSC::JSString::equal const):
>+        * runtime/RegExpMatchesArray.h:
>+        (JSC::createRegExpMatchesArray):
>+        * runtime/RegExpObjectInlines.h:
>+        (JSC::collectMatches):
>+        * runtime/RegExpPrototype.cpp:
>+        (JSC::regExpProtoFuncSplitFast):
>+        * runtime/SmallStrings.cpp:
>+        (JSC::SmallStrings::initializeCommonStrings):
>+        (JSC::SmallStrings::createEmptyString): Deleted.
>+        * runtime/SmallStrings.h:
>+
> 2019-02-20  Yusuke Suzuki  <ysuzuki@apple.com>
> 
>         [JSC] Remove WatchpointSet creation for SymbolTable entries if VM::canUseJIT() returns false
>diff --git a/Source/WTF/ChangeLog b/Source/WTF/ChangeLog
>index 4ecf7736ec6fd4a2ff0799e188231d5c38d5185c..8491097cefea9fad7532a55190ac0fb496cf3c6c 100644
>--- a/Source/WTF/ChangeLog
>+++ b/Source/WTF/ChangeLog
>@@ -1,3 +1,16 @@
>+2019-02-21  Yusuke Suzuki  <ysuzuki@apple.com>
>+
>+        [JSC] sizeof(JSString) should be 16
>+        https://bugs.webkit.org/show_bug.cgi?id=194375
>+
>+        Reviewed by NOBODY (OOPS!).
>+
>+        * wtf/text/StringImpl.h:
>+        (WTF::StringImpl::flagIs8Bit):
>+        (WTF::StringImpl::flagIsAtomic):
>+        (WTF::StringImpl::flagIsSymbol):
>+        (WTF::StringImpl::maskStringKind):
>+
> 2019-02-21  Dean Jackson  <dino@apple.com>
> 
>         Rotation animations sometimes use the wrong origin (affects apple.com)
>diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
>index 9b20665be2e1d9878467115208a933c35487f36a..eff8b71f39232277fbe3a6344869659374bf8621 100644
>--- a/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
>+++ b/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
>@@ -1540,6 +1540,16 @@ class MacroAssemblerARM64 : public AbstractMacroAssembler<Assembler> {
>         m_assembler.strh(src, address.base, memoryTempRegister);
>     }
> 
>+    void storeZero16(ImplicitAddress address)
>+    {
>+        store16(ARM64Registers::zr, address);
>+    }
>+
>+    void storeZero16(BaseIndex address)
>+    {
>+        store16(ARM64Registers::zr, address);
>+    }
>+
>     void store8(RegisterID src, BaseIndex address)
>     {
>         if (!address.offset && !address.scale) {
>diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h b/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
>index 12434225acd9e4cb7ac93a4bafdd5613a884fcac..8fe9ea7dd13acfa6cd4eba45d829558ed190b21a 100644
>--- a/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
>+++ b/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
>@@ -1348,6 +1348,16 @@ class MacroAssemblerX86Common : public AbstractMacroAssembler<Assembler> {
>         store32(TrustedImm32(0), address);
>     }
> 
>+    void storeZero16(ImplicitAddress address)
>+    {
>+        store16(TrustedImm32(0), address);
>+    }
>+
>+    void storeZero16(BaseIndex address)
>+    {
>+        store16(TrustedImm32(0), address);
>+    }
>+
>     void store8(TrustedImm32 imm, Address address)
>     {
>         TrustedImm32 imm8(static_cast<int8_t>(imm.m_value));
>@@ -1434,7 +1444,7 @@ class MacroAssemblerX86Common : public AbstractMacroAssembler<Assembler> {
> 
>         m_assembler.movw_im(static_cast<int16_t>(imm.m_value), address.offset, address.base, address.index, address.scale);
>     }
> 
>-    void store16(TrustedImm32 imm, Address address)
>+    void store16(TrustedImm32 imm, ImplicitAddress address)
>     {
>         m_assembler.movw_im(static_cast<int16_t>(imm.m_value), address.offset, address.base);
>     }
>diff --git a/Source/JavaScriptCore/bytecode/AccessCase.cpp b/Source/JavaScriptCore/bytecode/AccessCase.cpp
>index 9e6ec7b063ceaa26b2b278dc17a8f21bb8cd3c5f..63d2109a3309d98c96c6e5998e56ce66ddeffad2 100644
>--- a/Source/JavaScriptCore/bytecode/AccessCase.cpp
>+++ b/Source/JavaScriptCore/bytecode/AccessCase.cpp
>@@ -1214,7 +1214,15 @@ void AccessCase::generateImpl(AccessGenerationState& state)
>     }
> 
>     case StringLength: {
>-        jit.load32(CCallHelpers::Address(baseGPR, JSString::offsetOfLength()), valueRegs.payloadGPR());
>+        jit.loadPtr(CCallHelpers::Address(baseGPR, JSString::offsetOfValue()), scratchGPR);
>+        auto isRope = jit.branchIfRopeStringImpl(scratchGPR);
>+        jit.load32(CCallHelpers::Address(scratchGPR, StringImpl::lengthMemoryOffset()), valueRegs.payloadGPR());
>+        auto done = jit.jump();
>+
>+        isRope.link(&jit);
>+        jit.load32(CCallHelpers::Address(baseGPR, JSRopeString::offsetOfLength()), valueRegs.payloadGPR());
>+
>+        done.link(&jit);
>         jit.boxInt32(valueRegs.payloadGPR(), valueRegs);
>         state.succeed();
>         return;
>diff --git a/Source/JavaScriptCore/bytecode/InlineAccess.cpp b/Source/JavaScriptCore/bytecode/InlineAccess.cpp
>index b7efc375c52458f77357d9e1667d1fbc56eb65b8..0809650f88993019db462a026595994bbbeefd58 100644
>--- a/Source/JavaScriptCore/bytecode/InlineAccess.cpp
>+++ b/Source/JavaScriptCore/bytecode/InlineAccess.cpp
>@@ -50,11 +50,21 @@ void InlineAccess::dumpCacheSizesAndCrash()
> {
>     CCallHelpers jit;
> 
>+    GPRReg scratchGPR = value;
>     jit.patchableBranch8(
>         CCallHelpers::NotEqual,
>         CCallHelpers::Address(base, JSCell::typeInfoTypeOffset()),
>         CCallHelpers::TrustedImm32(StringType));
>-    jit.load32(CCallHelpers::Address(base, JSString::offsetOfLength()), regs.payloadGPR());
>+
>+    jit.loadPtr(CCallHelpers::Address(base, JSString::offsetOfValue()), scratchGPR);
>+    auto isRope = jit.branchIfRopeStringImpl(scratchGPR);
>+    jit.load32(CCallHelpers::Address(scratchGPR, StringImpl::lengthMemoryOffset()), regs.payloadGPR());
>+    auto done = jit.jump();
>+
>+    isRope.link(&jit);
>+    jit.load32(CCallHelpers::Address(base, JSRopeString::offsetOfLength()), regs.payloadGPR());
>+
>+    done.link(&jit);
>     jit.boxInt32(regs.payloadGPR(), regs);
> 
>     dataLog("string length size: ", jit.m_assembler.buffer().codeSize(), "\n");
>@@ -294,12 +304,22 @@ bool InlineAccess::generateStringLength(StructureStubInfo& stubInfo)
> 
>     GPRReg base = stubInfo.baseGPR();
>     JSValueRegs value = stubInfo.valueRegs();
>+    GPRReg scratch = getScratchRegister(stubInfo);
> 
>     auto branchToSlowPath = jit.patchableBranch8(
>         CCallHelpers::NotEqual,
>         CCallHelpers::Address(base, JSCell::typeInfoTypeOffset()),
>         CCallHelpers::TrustedImm32(StringType));
>-    jit.load32(CCallHelpers::Address(base, JSString::offsetOfLength()), value.payloadGPR());
>+
>+    jit.loadPtr(CCallHelpers::Address(base, JSString::offsetOfValue()), scratch);
>+    auto isRope = jit.branchIfRopeStringImpl(scratch);
>+    jit.load32(CCallHelpers::Address(scratch, StringImpl::lengthMemoryOffset()), value.payloadGPR());
>+    auto done = jit.jump();
>+
>+    isRope.link(&jit);
>+    jit.load32(CCallHelpers::Address(base, JSRopeString::offsetOfLength()), value.payloadGPR());
>+
>+    done.link(&jit);
>     jit.boxInt32(value.payloadGPR(), value);
> 
>     bool linkedCodeInline = linkCodeInline("string length", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
>diff --git a/Source/JavaScriptCore/dfg/DFGOperations.cpp b/Source/JavaScriptCore/dfg/DFGOperations.cpp
>index 6a578266eb4a128327e53cdaacb7d861577c49ef..963412983313f0da9c5927e838c4436979ff3815 100644
>--- a/Source/JavaScriptCore/dfg/DFGOperations.cpp
>+++ b/Source/JavaScriptCore/dfg/DFGOperations.cpp
>@@ -2157,6 +2157,32 @@ JSCell* JIT_OPERATION operationStringSubstr(ExecState* exec, JSCell* cell, int32
>     return jsSubstring(exec, string, from, span);
> }
> 
>+JSCell* JIT_OPERATION operationStringSlice(ExecState* exec, JSCell* cell, int32_t start, int32_t end)
>+{
>+    VM& vm = exec->vm();
>+    NativeCallFrameTracer tracer(&vm, exec);
>+    auto scope = DECLARE_THROW_SCOPE(vm);
>+
>+    auto string = jsCast<JSString*>(cell)->value(exec);
>+    RETURN_IF_EXCEPTION(scope, nullptr);
>+
>+    int32_t length = string.length();
>+    static_assert(static_cast<uint64_t>(JSString::MaxLength) <= static_cast<uint64_t>(std::numeric_limits<int32_t>::max()), "");
>+    int32_t from = start < 0 ? length + start : start;
>+    int32_t to = end < 0 ? length + end : end;
>+    if (to > from && to > 0 && from < length) {
>+        if (from < 0)
>+            from = 0;
>+        if (to > length)
>+            to = length;
>+        scope.release();
>+        return jsSubstring(exec, string, static_cast<unsigned>(from), static_cast<unsigned>(to) - static_cast<unsigned>(from));
>+    }
>+
>+    scope.release();
>+    return jsEmptyString(exec);
>+}
>+
> JSString* JIT_OPERATION operationToLowerCase(ExecState* exec, JSString* string, uint32_t failingIndex)
> {
>     VM& vm = exec->vm();
>diff --git a/Source/JavaScriptCore/dfg/DFGOperations.h b/Source/JavaScriptCore/dfg/DFGOperations.h
>index 38c116a54f91a615b81c15112148f25f94adb1ef..45b9c9342e5eca6f8a74b4b2a04d50c417b1ca85 100644
>--- a/Source/JavaScriptCore/dfg/DFGOperations.h
>+++ b/Source/JavaScriptCore/dfg/DFGOperations.h
>@@ -203,6 +203,7 @@ StringImpl* JIT_OPERATION operationResolveRope(ExecState*, JSString*);
> JSString* JIT_OPERATION operationSingleCharacterString(ExecState*, int32_t);
> 
> JSCell* JIT_OPERATION operationStringSubstr(ExecState*, JSCell*, int32_t, int32_t);
>+JSCell* JIT_OPERATION operationStringSlice(ExecState*, JSCell*, int32_t, int32_t);
> JSString* JIT_OPERATION operationStringValueOf(ExecState*, EncodedJSValue);
> JSString* JIT_OPERATION operationToLowerCase(ExecState*, JSString*, uint32_t);
> 
>diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
>index 2911288ede8d5699066d0d11ff2b7fd4b29de121..5e2fe61409c51d4fe1c7259524eed6cc4ef774ab 100644
>--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
>+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
>@@ -1530,23 +1530,79 @@ void SpeculativeJIT::compilePeepHoleBooleanBranch(Node* node, Node* branchNode,
> void SpeculativeJIT::compileStringSlice(Node* node)
> {
>     SpeculateCellOperand string(this, node->child1());
>-    GPRTemporary startIndex(this);
>-    GPRTemporary temp(this);
>-    GPRTemporary temp2(this);
> 
>     GPRReg stringGPR = string.gpr();
>-    GPRReg startIndexGPR = startIndex.gpr();
>-    GPRReg tempGPR = temp.gpr();
>-    GPRReg temp2GPR = temp2.gpr();
> 
>     speculateString(node->child1(), stringGPR);
> 
>+    SpeculateInt32Operand start(this, node->child2());
>+    GPRReg startGPR = start.gpr();
>+
>+    Optional<SpeculateInt32Operand> end;
>+    Optional<GPRReg> endGPR;
>+    if (node->child3()) {
>+        end.emplace(this, node->child3());
>+        endGPR.emplace(end->gpr());
>+    }
>+
>+    GPRTemporary temp(this);
>+    GPRReg tempGPR = temp.gpr();
>+
>+    m_jit.loadPtr(CCallHelpers::Address(stringGPR, JSString::offsetOfValue()), tempGPR);
>+    auto isRope = m_jit.branchIfRopeStringImpl(tempGPR);
>+
>+    auto emitPopulateSliceIndex = [&] (Edge& target, GPRReg indexGPR, GPRReg lengthGPR, GPRReg resultGPR) {
>+        if (target->isInt32Constant()) {
>+            int32_t value = target->asInt32();
>+            if (!value) {
>+                m_jit.move(TrustedImm32(0), resultGPR);
>+                return;
>+            }
>+
>+            MacroAssembler::JumpList done;
>+            if (value > 0) {
>+                m_jit.move(TrustedImm32(value), resultGPR);
>+                done.append(m_jit.branch32(MacroAssembler::BelowOrEqual, resultGPR, lengthGPR));
>+                m_jit.move(lengthGPR, resultGPR);
>+            } else {
>+                ASSERT(value);
>+                m_jit.move(lengthGPR, resultGPR);
>+                done.append(m_jit.branchAdd32(MacroAssembler::PositiveOrZero, TrustedImm32(value), resultGPR));
>+                m_jit.move(TrustedImm32(0), resultGPR);
>+            }
>+            done.link(&m_jit);
>+            return;
>+        }
>+
>+        MacroAssembler::JumpList done;
>+
>+        auto isPositive = m_jit.branch32(MacroAssembler::GreaterThanOrEqual, indexGPR, TrustedImm32(0));
>+        m_jit.move(lengthGPR, resultGPR);
>+        done.append(m_jit.branchAdd32(MacroAssembler::PositiveOrZero, indexGPR, resultGPR));
>+        m_jit.move(TrustedImm32(0), resultGPR);
>+        done.append(m_jit.jump());
>+
>+        isPositive.link(&m_jit);
>+        m_jit.move(indexGPR, resultGPR);
>+        done.append(m_jit.branch32(MacroAssembler::BelowOrEqual, resultGPR, lengthGPR));
>+        m_jit.move(lengthGPR, resultGPR);
>+
>+        done.link(&m_jit);
>+    };
>+
>+    GPRTemporary temp2(this);
>+    GPRTemporary startIndex(this);
>+
>+    GPRReg temp2GPR = temp2.gpr();
>+    GPRReg startIndexGPR = startIndex.gpr();
>+
>     {
>-        m_jit.load32(JITCompiler::Address(stringGPR, JSString::offsetOfLength()), temp2GPR);
>+        m_jit.load32(MacroAssembler::Address(tempGPR, StringImpl::lengthMemoryOffset()), temp2GPR);
>+
>+        emitPopulateSliceIndex(node->child2(), startGPR, temp2GPR, startIndexGPR);
> 
>-        emitPopulateSliceIndex(node->child2(), temp2GPR, startIndexGPR);
>         if (node->child3())
>-            emitPopulateSliceIndex(node->child3(), temp2GPR, tempGPR);
>+            emitPopulateSliceIndex(node->child3(), endGPR.value(), temp2GPR, tempGPR);
>         else
>             m_jit.move(temp2GPR, tempGPR);
>     }
>@@ -1562,9 +1618,8 @@ void SpeculativeJIT::compileStringSlice(Node* node)
>     m_jit.sub32(startIndexGPR, tempGPR); // the size of the sliced string.
>     slowCases.append(m_jit.branch32(MacroAssembler::NotEqual, tempGPR, TrustedImm32(1)));
> 
>+    // Refill StringImpl* here.
>     m_jit.loadPtr(MacroAssembler::Address(stringGPR, JSString::offsetOfValue()), temp2GPR);
>-    slowCases.append(m_jit.branchTestPtr(MacroAssembler::Zero, temp2GPR));
>-
>     m_jit.loadPtr(MacroAssembler::Address(temp2GPR, StringImpl::dataOffset()), tempGPR);
> 
>     // Load the character into scratchReg
>@@ -1586,13 +1641,14 @@ void SpeculativeJIT::compileStringSlice(Node* node)
>     m_jit.addPtr(TrustedImmPtr(m_jit.vm()->smallStrings.singleCharacterStrings()), tempGPR);
>     m_jit.loadPtr(tempGPR, tempGPR);
> 
>-    addSlowPathGenerator(
>-        slowPathCall(
>-            bigCharacter, this, operationSingleCharacterString, tempGPR, tempGPR));
>+    addSlowPathGenerator(slowPathCall(bigCharacter, this, operationSingleCharacterString, tempGPR, tempGPR));
> 
>-    addSlowPathGenerator(
>-        slowPathCall(
>-            slowCases, this, operationStringSubstr, tempGPR, stringGPR, startIndexGPR, tempGPR));
>+    addSlowPathGenerator(slowPathCall(slowCases, this, operationStringSubstr, tempGPR, stringGPR, startIndexGPR, tempGPR));
>+
>+    if (endGPR)
>+        addSlowPathGenerator(slowPathCall(isRope, this, operationStringSlice, tempGPR, stringGPR, startGPR, *endGPR));
>+    else
>+        addSlowPathGenerator(slowPathCall(isRope, this, operationStringSlice, tempGPR, stringGPR, startGPR, TrustedImm32(std::numeric_limits<int32_t>::max())));
> 
>     doneCases.link(&m_jit);
>     cellResult(tempGPR, node);
>@@ -1620,8 +1676,7 @@ void SpeculativeJIT::compileToLowerCase(Node* node)
>     m_jit.move(TrustedImmPtr(nullptr), indexGPR);
> 
>     m_jit.loadPtr(MacroAssembler::Address(stringGPR, JSString::offsetOfValue()), tempGPR);
>-    slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, tempGPR));
>-
>+    slowPath.append(m_jit.branchIfRopeStringImpl(tempGPR));
>     slowPath.append(m_jit.branchTest32(
>         MacroAssembler::Zero, MacroAssembler::Address(tempGPR, StringImpl::flagsOffset()),
>         MacroAssembler::TrustedImm32(StringImpl::flagIs8Bit())));
>@@ -2128,13 +2183,13 @@ void SpeculativeJIT::compileGetCharCodeAt(Node* node)
> 
>     ASSERT(speculationChecked(m_state.forNode(node->child1()).m_type, SpecString));
> 
>-    // unsigned comparison so we can filter out negative indices and indices that are too large
>-    speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::AboveOrEqual, indexReg, MacroAssembler::Address(stringReg, JSString::offsetOfLength())));
>-
>     GPRTemporary scratch(this);
>     GPRReg scratchReg = scratch.gpr();
> 
>+    // unsigned comparison so we can filter out negative indices and indices that are too large
>     m_jit.loadPtr(MacroAssembler::Address(stringReg, JSString::offsetOfValue()), scratchReg);
>+
>+    speculationCheck(Uncountable, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::AboveOrEqual, indexReg, CCallHelpers::Address(scratchReg, StringImpl::lengthMemoryOffset())));
> 
>     // Load the character into scratchReg
>     JITCompiler::Jump is16Bit = m_jit.branchTest32(MacroAssembler::Zero, MacroAssembler::Address(scratchReg, StringImpl::flagsOffset()), TrustedImm32(StringImpl::flagIs8Bit()));
>@@ -2175,14 +2230,13 @@ void SpeculativeJIT::compileGetByValOnString(Node* node)
>     ASSERT(ArrayMode(Array::String, Array::Read).alreadyChecked(m_jit.graph(), node, m_state.forNode(m_graph.child(node, 0))));
> 
>     // unsigned comparison so we can filter out negative indices and indices that are too large
>+    m_jit.loadPtr(MacroAssembler::Address(baseReg, JSString::offsetOfValue()), scratchReg);
>     JITCompiler::Jump outOfBounds = m_jit.branch32(
>         MacroAssembler::AboveOrEqual, propertyReg,
>-        MacroAssembler::Address(baseReg, JSString::offsetOfLength()));
>+        MacroAssembler::Address(scratchReg, StringImpl::lengthMemoryOffset()));
>     if (node->arrayMode().isInBounds())
>         speculationCheck(OutOfBounds, JSValueRegs(), 0, outOfBounds);
> 
>-    m_jit.loadPtr(MacroAssembler::Address(baseReg, JSString::offsetOfValue()), scratchReg);
>-
>     // Load the character into scratchReg
>     JITCompiler::Jump is16Bit = m_jit.branchTest32(MacroAssembler::Zero, MacroAssembler::Address(scratchReg, StringImpl::flagsOffset()), TrustedImm32(StringImpl::flagIs8Bit()));
> 
>@@ -4330,88 +4384,6 @@ void SpeculativeJIT::compileArithAdd(Node* node)
>     }
> }
> 
>-void SpeculativeJIT::compileMakeRope(Node* node)
>-{
>-    ASSERT(node->child1().useKind() == KnownStringUse);
>-    ASSERT(node->child2().useKind() == KnownStringUse);
>-    ASSERT(!node->child3() || node->child3().useKind() == KnownStringUse);
>-
>-    SpeculateCellOperand op1(this, node->child1());
>-    SpeculateCellOperand op2(this, node->child2());
>-    SpeculateCellOperand op3(this, node->child3());
>-    GPRTemporary result(this);
>-    GPRTemporary allocator(this);
>-    GPRTemporary scratch(this);
>-
>-    GPRReg opGPRs[3];
>-    unsigned numOpGPRs;
>-    opGPRs[0] = op1.gpr();
>-    opGPRs[1] = op2.gpr();
>-    if (node->child3()) {
>-        opGPRs[2] = op3.gpr();
>-        numOpGPRs = 3;
>-    } else {
>-        opGPRs[2] = InvalidGPRReg;
>-        numOpGPRs = 2;
>-    }
>-    GPRReg resultGPR = result.gpr();
>-    GPRReg allocatorGPR = allocator.gpr();
>-    GPRReg scratchGPR = scratch.gpr();
>-
>-    JITCompiler::JumpList slowPath;
>-    Allocator allocatorValue = allocatorForNonVirtualConcurrently<JSRopeString>(*m_jit.vm(), sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists);
>-    emitAllocateJSCell(resultGPR, JITAllocator::constant(allocatorValue), allocatorGPR, TrustedImmPtr(m_jit.graph().registerStructure(m_jit.vm()->stringStructure.get())), scratchGPR, slowPath);
>-
>-    m_jit.storePtr(TrustedImmPtr(nullptr), JITCompiler::Address(resultGPR, JSString::offsetOfValue()));
>-    for (unsigned i = 0; i < numOpGPRs; ++i)
>-        m_jit.storePtr(opGPRs[i], JITCompiler::Address(resultGPR, JSRopeString::offsetOfFibers() + sizeof(WriteBarrier<JSString>) * i));
>-    for (unsigned i = numOpGPRs; i < JSRopeString::s_maxInternalRopeLength; ++i)
>-        m_jit.storePtr(TrustedImmPtr(nullptr), JITCompiler::Address(resultGPR, JSRopeString::offsetOfFibers() + sizeof(WriteBarrier<JSString>) * i));
>-    m_jit.load16(JITCompiler::Address(opGPRs[0], JSString::offsetOfFlags()), scratchGPR);
>-    m_jit.load32(JITCompiler::Address(opGPRs[0], JSString::offsetOfLength()), allocatorGPR);
>-    if (!ASSERT_DISABLED) {
>-        JITCompiler::Jump ok = m_jit.branch32(
>-            JITCompiler::GreaterThanOrEqual, allocatorGPR, TrustedImm32(0));
>-        m_jit.abortWithReason(DFGNegativeStringLength);
>-        ok.link(&m_jit);
>-    }
>-    for (unsigned i = 1; i < numOpGPRs; ++i) {
>-        m_jit.and16(JITCompiler::Address(opGPRs[i], JSString::offsetOfFlags()), scratchGPR);
>-        speculationCheck(
>-            Uncountable, JSValueSource(), nullptr,
>-            m_jit.branchAdd32(
>-                JITCompiler::Overflow,
>-                JITCompiler::Address(opGPRs[i], JSString::offsetOfLength()), allocatorGPR));
>-    }
>-    m_jit.and32(JITCompiler::TrustedImm32(JSString::Is8Bit), scratchGPR);
>-    m_jit.store16(scratchGPR, JITCompiler::Address(resultGPR, JSString::offsetOfFlags()));
>-    if (!ASSERT_DISABLED) {
>-        JITCompiler::Jump ok = m_jit.branch32(
>-            JITCompiler::GreaterThanOrEqual, allocatorGPR, TrustedImm32(0));
>-        m_jit.abortWithReason(DFGNegativeStringLength);
>-        ok.link(&m_jit);
>-    }
>-    m_jit.store32(allocatorGPR, JITCompiler::Address(resultGPR, JSString::offsetOfLength()));
>-
>-    m_jit.mutatorFence(*m_jit.vm());
>-
>-    switch (numOpGPRs) {
>-    case 2:
>-        addSlowPathGenerator(slowPathCall(
>-            slowPath, this, operationMakeRope2, resultGPR, opGPRs[0], opGPRs[1]));
>-        break;
>-    case 3:
>-        addSlowPathGenerator(slowPathCall(
>-            slowPath, this, operationMakeRope3, resultGPR, opGPRs[0], opGPRs[1], opGPRs[2]));
>-        break;
>-    default:
>-        RELEASE_ASSERT_NOT_REACHED();
>-        break;
>-    }
>-
>-    cellResult(resultGPR, node);
>-}
>-
> void SpeculativeJIT::compileArithAbs(Node* node)
> {
>     switch (node->child1().useKind()) {
>@@ -6320,21 +6292,21 @@ void SpeculativeJIT::compileStringEquality(
>     trueCase.append(fastTrue);
>     falseCase.append(fastFalse);
> 
>-    m_jit.load32(MacroAssembler::Address(leftGPR, JSString::offsetOfLength()), lengthGPR);
>+    m_jit.loadPtr(MacroAssembler::Address(leftGPR, JSString::offsetOfValue()), leftTempGPR);
>+    m_jit.loadPtr(MacroAssembler::Address(rightGPR, JSString::offsetOfValue()), rightTempGPR);
>+
>+    slowCase.append(m_jit.branchIfRopeStringImpl(leftTempGPR));
>+    slowCase.append(m_jit.branchIfRopeStringImpl(rightTempGPR));
>+
>+    m_jit.load32(MacroAssembler::Address(leftTempGPR, StringImpl::lengthMemoryOffset()), lengthGPR);
> 
>     falseCase.append(m_jit.branch32(
>         MacroAssembler::NotEqual,
>-        MacroAssembler::Address(rightGPR, JSString::offsetOfLength()),
>+        MacroAssembler::Address(rightTempGPR, StringImpl::lengthMemoryOffset()),
>         lengthGPR));
> 
>     trueCase.append(m_jit.branchTest32(MacroAssembler::Zero, lengthGPR));
> 
>-    m_jit.loadPtr(MacroAssembler::Address(leftGPR, JSString::offsetOfValue()), leftTempGPR);
>-    m_jit.loadPtr(MacroAssembler::Address(rightGPR, JSString::offsetOfValue()), rightTempGPR);
>-
>-    slowCase.append(m_jit.branchTestPtr(MacroAssembler::Zero, leftTempGPR));
>-    slowCase.append(m_jit.branchTestPtr(MacroAssembler::Zero, rightTempGPR));
>-
>     slowCase.append(m_jit.branchTest32(
>         MacroAssembler::Zero,
>         MacroAssembler::Address(leftTempGPR, StringImpl::flagsOffset()),
>@@ -6638,9 +6610,8 @@ void SpeculativeJIT::compileStringZeroLength(Node* node)
>     GPRTemporary eq(this);
>     GPRReg eqGPR = eq.gpr();
> 
>-    // Fetch the length field from the string object.
>-    m_jit.test32(MacroAssembler::Zero, MacroAssembler::Address(strGPR, JSString::offsetOfLength()), MacroAssembler::TrustedImm32(-1), eqGPR);
>-
>+    m_jit.move(TrustedImmPtr::weakPointer(m_jit.graph(), jsEmptyString(m_jit.vm())), eqGPR);
>+    m_jit.comparePtr(CCallHelpers::Equal, strGPR, eqGPR, eqGPR);
>     unblessedBooleanResult(eqGPR, node);
> }
> 
>@@ -6655,16 +6626,17 @@ void SpeculativeJIT::compileLogicalNotStringOrOther(Node* node)
>     GPRReg cellGPR = valueRegs.payloadGPR();
>     DFG_TYPE_CHECK(
>         valueRegs, node->child1(), (~SpecCellCheck) | SpecString, m_jit.branchIfNotString(cellGPR));
>-    m_jit.test32(
>-        JITCompiler::Zero, JITCompiler::Address(cellGPR, JSString::offsetOfLength()),
>-        JITCompiler::TrustedImm32(-1), tempGPR);
>-    JITCompiler::Jump done = m_jit.jump();
>+
>+    m_jit.move(TrustedImmPtr::weakPointer(m_jit.graph(), jsEmptyString(m_jit.vm())), tempGPR);
>+    m_jit.comparePtr(CCallHelpers::Equal, cellGPR, tempGPR, tempGPR);
>+    auto done = m_jit.jump();
>+
>     notCell.link(&m_jit);
>     DFG_TYPE_CHECK(
>         valueRegs, node->child1(), SpecCellCheck | SpecOther, m_jit.branchIfNotOther(valueRegs, tempGPR));
>     m_jit.move(TrustedImm32(1), tempGPR);
>-    done.link(&m_jit);
> 
>+    done.link(&m_jit);
>     unblessedBooleanResult(tempGPR, node);
> 
> }
>@@ -6672,9 +6644,14 @@ void SpeculativeJIT::compileLogicalNotStringOrOther(Node* node)
> void SpeculativeJIT::emitStringBranch(Edge nodeUse, BasicBlock* taken, BasicBlock* notTaken)
> {
>     SpeculateCellOperand str(this, nodeUse);
>-    speculateString(nodeUse, str.gpr());
>-    branchTest32(JITCompiler::NonZero, MacroAssembler::Address(str.gpr(), JSString::offsetOfLength()), taken);
>-    jump(notTaken);
>+
>+    GPRReg strGPR = str.gpr();
>+
>+    speculateString(nodeUse, strGPR);
>+
>+    branchPtr(CCallHelpers::Equal, strGPR, TrustedImmPtr::weakPointer(m_jit.graph(), jsEmptyString(m_jit.vm())), notTaken);
>+    jump(taken);
>+
>     noResult(m_currentNode);
> }
> 
>@@ -6688,10 +6665,10 @@ void SpeculativeJIT::emitStringOrOtherBranch(Edge nodeUse, BasicBlock* taken, Ba
>     JITCompiler::Jump notCell = m_jit.branchIfNotCell(valueRegs);
>     GPRReg cellGPR = valueRegs.payloadGPR();
>     DFG_TYPE_CHECK(valueRegs, nodeUse, (~SpecCellCheck) | SpecString, m_jit.branchIfNotString(cellGPR));
>-    branchTest32(
>-        JITCompiler::Zero, JITCompiler::Address(cellGPR, JSString::offsetOfLength()),
>-        JITCompiler::TrustedImm32(-1), notTaken);
>+
>+    branchPtr(CCallHelpers::Equal, cellGPR, TrustedImmPtr::weakPointer(m_jit.graph(), jsEmptyString(m_jit.vm())), notTaken);
>     jump(taken, ForceJump);
>+
>     notCell.link(&m_jit);
>     DFG_TYPE_CHECK(
>         valueRegs, nodeUse, SpecCellCheck | SpecOther, m_jit.branchIfNotOther(valueRegs, tempGPR));
>@@ -6740,7 +6717,7 @@ void SpeculativeJIT::compileGetIndexedPropertyStorage(Node* node)
> 
>     addSlowPathGenerator(
>         slowPathCall(
>-            m_jit.branchTest32(MacroAssembler::Zero, storageReg),
>+            m_jit.branchIfRopeStringImpl(storageReg),
>             this, operationResolveRope, storageReg, baseReg));
> 
>     m_jit.loadPtr(MacroAssembler::Address(storageReg, StringImpl::dataOffset()), storageReg);
>@@ -6996,9 +6973,20 @@ void SpeculativeJIT::compileGetArrayLength(Node* node)
>     case Array::String: {
>         SpeculateCellOperand base(this, node->child1());
>         GPRTemporary result(this, Reuse, base);
>+        GPRTemporary temp(this);
>         GPRReg baseGPR = base.gpr();
>         GPRReg resultGPR = result.gpr();
>-        m_jit.load32(MacroAssembler::Address(baseGPR, JSString::offsetOfLength()), resultGPR);
>+        GPRReg tempGPR = temp.gpr();
>+
>+        m_jit.loadPtr(MacroAssembler::Address(baseGPR, JSString::offsetOfValue()), tempGPR);
>+        auto isRope = m_jit.branchIfRopeStringImpl(tempGPR);
>+        m_jit.load32(MacroAssembler::Address(tempGPR, StringImpl::lengthMemoryOffset()), resultGPR);
>+        auto done = m_jit.jump();
>+
>+        isRope.link(&m_jit);
>+        m_jit.load32(CCallHelpers::Address(baseGPR, JSRopeString::offsetOfLength()), resultGPR);
>+
>+        done.link(&m_jit);
>         int32Result(resultGPR, node);
>         break;
>     }
>@@ -10146,7 +10134,7 @@ void SpeculativeJIT::speculateStringIdentAndLoadStorage(Edge edge, GPRReg string
> 
>     speculationCheck(
>         BadType, JSValueSource::unboxedCell(string), edge,
>-        m_jit.branchTestPtr(MacroAssembler::Zero, storage));
>+        m_jit.branchIfRopeStringImpl(storage));
>     speculationCheck(
>         BadType, JSValueSource::unboxedCell(string), edge, m_jit.branchTest32(
>             MacroAssembler::Zero,
>@@ -10547,19 +10535,17 @@ void SpeculativeJIT::emitSwitchImm(Node* node, SwitchData* data)
> void SpeculativeJIT::emitSwitchCharStringJump(
>     SwitchData* data, GPRReg value, GPRReg scratch, GPRReg scratch2)
> {
>+    m_jit.loadPtr(MacroAssembler::Address(value, JSString::offsetOfValue()), scratch);
>+    auto isRope = m_jit.branchIfRopeStringImpl(scratch);
>+
>     addBranch(
>         m_jit.branch32(
>             MacroAssembler::NotEqual,
>-            MacroAssembler::Address(value, JSString::offsetOfLength()),
>+            MacroAssembler::Address(scratch, StringImpl::lengthMemoryOffset()),
>             TrustedImm32(1)),
>         data->fallThrough.block);
> 
>-    m_jit.loadPtr(MacroAssembler::Address(value, JSString::offsetOfValue()), scratch);
>-
>-    addSlowPathGenerator(
>-        slowPathCall(
>-            m_jit.branchTestPtr(MacroAssembler::Zero, scratch),
>-            this, operationResolveRope, scratch, value));
>+    addSlowPathGenerator(slowPathCall(isRope, this, operationResolveRope, scratch, value));
> 
>     m_jit.loadPtr(MacroAssembler::Address(scratch, StringImpl::dataOffset()), value);
> 
>@@ -10806,11 +10792,11 @@ void SpeculativeJIT::emitSwitchStringOnString(SwitchData* data, GPRReg string)
>     GPRReg lengthGPR = length.gpr();
>     GPRReg tempGPR = temp.gpr();
> 
>-    m_jit.load32(MacroAssembler::Address(string, JSString::offsetOfLength()), lengthGPR);
>+    MacroAssembler::JumpList slowCases;
>     m_jit.loadPtr(MacroAssembler::Address(string, JSString::offsetOfValue()), tempGPR);
>+    slowCases.append(m_jit.branchIfRopeStringImpl(tempGPR));
>+    m_jit.load32(MacroAssembler::Address(tempGPR, StringImpl::lengthMemoryOffset()), lengthGPR);
> 
>-    MacroAssembler::JumpList slowCases;
>-    slowCases.append(m_jit.branchTestPtr(MacroAssembler::Zero, tempGPR));
>     slowCases.append(m_jit.branchTest32(
>         MacroAssembler::Zero,
>         MacroAssembler::Address(tempGPR, StringImpl::flagsOffset()),
>diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
>index 4dfe9c92321d753d178148f4ed35f8973f3e6f52..98cf2c7e4a3bc808854cc125a7e144320ceeac2c 100644
>--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
>+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
>@@ -3880,7 +3880,7 @@ void SpeculativeJIT::compile(Node* node)
>     case StringUse: {
>         speculateString(node->child2(), keyRegs.payloadGPR());
>         m_jit.loadPtr(MacroAssembler::Address(keyRegs.payloadGPR(), JSString::offsetOfValue()), implGPR);
>-        slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, implGPR));
>+        slowPath.append(m_jit.branchIfRopeStringImpl(implGPR));
>         slowPath.append(m_jit.branchTest32(
>             MacroAssembler::Zero, MacroAssembler::Address(implGPR, StringImpl::flagsOffset()),
>             MacroAssembler::TrustedImm32(StringImpl::flagIsAtomic())));
>@@ -3890,7 +3890,7 @@ void SpeculativeJIT::compile(Node* node)
>         slowPath.append(m_jit.branchIfNotCell(keyRegs));
>         auto isNotString = m_jit.branchIfNotString(keyRegs.payloadGPR());
>         m_jit.loadPtr(MacroAssembler::Address(keyRegs.payloadGPR(), JSString::offsetOfValue()), implGPR);
>-        slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, implGPR));
>+        slowPath.append(m_jit.branchIfRopeStringImpl(implGPR));
>         slowPath.append(m_jit.branchTest32(
>             MacroAssembler::Zero, MacroAssembler::Address(implGPR, StringImpl::flagsOffset()),
>             MacroAssembler::TrustedImm32(StringImpl::flagIsAtomic())));
>@@ -4195,6 +4195,48 @@ void SpeculativeJIT::compileArithRandom(Node* node)
>     doubleResult(result.fpr(), node);
> }
> 
>+void SpeculativeJIT::compileMakeRope(Node* node)
>+{
>+    ASSERT(node->child1().useKind() == KnownStringUse);
>+    ASSERT(node->child2().useKind() == KnownStringUse);
>+    ASSERT(!node->child3() || node->child3().useKind() == KnownStringUse);
>+
>+
SpeculateCellOperand op1(this, node->child1()); >+ SpeculateCellOperand op2(this, node->child2()); >+ SpeculateCellOperand op3(this, node->child3()); >+ >+ GPRReg opGPRs[3]; >+ unsigned numOpGPRs; >+ opGPRs[0] = op1.gpr(); >+ opGPRs[1] = op2.gpr(); >+ if (node->child3()) { >+ opGPRs[2] = op3.gpr(); >+ numOpGPRs = 3; >+ } else { >+ opGPRs[2] = InvalidGPRReg; >+ numOpGPRs = 2; >+ } >+ >+ flushRegisters(); >+ GPRFlushedCallResult result(this); >+ GPRReg resultGPR = result.gpr(); >+ switch (numOpGPRs) { >+ case 2: >+ callOperation(operationMakeRope2, resultGPR, opGPRs[0], opGPRs[1]); >+ m_jit.exceptionCheck(); >+ break; >+ case 3: >+ callOperation(operationMakeRope3, resultGPR, opGPRs[0], opGPRs[1], opGPRs[2]); >+ m_jit.exceptionCheck(); >+ break; >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ >+ cellResult(resultGPR, node); >+} >+ > #endif > > } } // namespace JSC::DFG >diff --git a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >index db8df894e336261cef5b6d33ead857c78f894b82..7906a34319730dcd62ce976819824f340af98858 100644 >--- a/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >+++ b/Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp >@@ -4038,7 +4038,7 @@ void SpeculativeJIT::compile(Node* node) > } > > m_jit.loadPtr(MacroAssembler::Address(inputGPR, JSString::offsetOfValue()), resultGPR); >- slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, resultGPR)); >+ slowPath.append(m_jit.branchIfRopeStringImpl(resultGPR)); > m_jit.load32(MacroAssembler::Address(resultGPR, StringImpl::flagsOffset()), resultGPR); > m_jit.urshift32(MacroAssembler::TrustedImm32(StringImpl::s_flagCount), resultGPR); > slowPath.append(m_jit.branchTest32(MacroAssembler::Zero, resultGPR)); >@@ -4076,7 +4076,7 @@ void SpeculativeJIT::compile(Node* node) > MacroAssembler::JumpList slowPath; > straightHash.append(m_jit.branchIfNotString(inputGPR)); > m_jit.loadPtr(MacroAssembler::Address(inputGPR, 
JSString::offsetOfValue()), resultGPR); >- slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, resultGPR)); >+ slowPath.append(m_jit.branchIfRopeStringImpl(resultGPR)); > m_jit.load32(MacroAssembler::Address(resultGPR, StringImpl::flagsOffset()), resultGPR); > m_jit.urshift32(MacroAssembler::TrustedImm32(StringImpl::s_flagCount), resultGPR); > slowPath.append(m_jit.branchTest32(MacroAssembler::Zero, resultGPR)); >@@ -4450,7 +4450,7 @@ void SpeculativeJIT::compile(Node* node) > case StringUse: { > speculateString(node->child2(), keyGPR); > m_jit.loadPtr(MacroAssembler::Address(keyGPR, JSString::offsetOfValue()), implGPR); >- slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, implGPR)); >+ slowPath.append(m_jit.branchIfRopeStringImpl(implGPR)); > slowPath.append(m_jit.branchTest32( > MacroAssembler::Zero, MacroAssembler::Address(implGPR, StringImpl::flagsOffset()), > MacroAssembler::TrustedImm32(StringImpl::flagIsAtomic()))); >@@ -4460,7 +4460,7 @@ void SpeculativeJIT::compile(Node* node) > slowPath.append(m_jit.branchIfNotCell(JSValueRegs(keyGPR))); > auto isNotString = m_jit.branchIfNotString(keyGPR); > m_jit.loadPtr(MacroAssembler::Address(keyGPR, JSString::offsetOfValue()), implGPR); >- slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, implGPR)); >+ slowPath.append(m_jit.branchIfRopeStringImpl(implGPR)); > slowPath.append(m_jit.branchTest32( > MacroAssembler::Zero, MacroAssembler::Address(implGPR, StringImpl::flagsOffset()), > MacroAssembler::TrustedImm32(StringImpl::flagIsAtomic()))); >@@ -5230,6 +5230,162 @@ void SpeculativeJIT::compileArithRandom(Node* node) > doubleResult(result.fpr(), node); > } > >+void SpeculativeJIT::compileMakeRope(Node* node) >+{ >+ ASSERT(node->child1().useKind() == KnownStringUse); >+ ASSERT(node->child2().useKind() == KnownStringUse); >+ ASSERT(!node->child3() || node->child3().useKind() == KnownStringUse); >+ >+ SpeculateCellOperand op1(this, node->child1()); >+ SpeculateCellOperand op2(this, node->child2()); >+ 
SpeculateCellOperand op3(this, node->child3()); >+ GPRTemporary result(this); >+ GPRTemporary allocator(this); >+ GPRTemporary scratch(this); >+ GPRTemporary scratch2(this); >+ >+ Edge edges[3] = { >+ node->child1(), >+ node->child2(), >+ node->child3() >+ }; >+ GPRReg opGPRs[3]; >+ unsigned numOpGPRs; >+ opGPRs[0] = op1.gpr(); >+ opGPRs[1] = op2.gpr(); >+ if (node->child3()) { >+ opGPRs[2] = op3.gpr(); >+ numOpGPRs = 3; >+ } else { >+ opGPRs[2] = InvalidGPRReg; >+ numOpGPRs = 2; >+ } >+ GPRReg resultGPR = result.gpr(); >+ GPRReg allocatorGPR = allocator.gpr(); >+ GPRReg scratchGPR = scratch.gpr(); >+ GPRReg scratch2GPR = scratch2.gpr(); >+ >+ CCallHelpers::JumpList slowPath; >+ Allocator allocatorValue = allocatorForNonVirtualConcurrently<JSRopeString>(*m_jit.vm(), sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists); >+ emitAllocateJSCell(resultGPR, JITAllocator::constant(allocatorValue), allocatorGPR, TrustedImmPtr(m_jit.graph().registerStructure(m_jit.vm()->stringStructure.get())), scratchGPR, slowPath); >+ >+ m_jit.orPtr(TrustedImm32(JSString::IsRopeInPointer), opGPRs[0], allocatorGPR); >+ m_jit.storePtr(allocatorGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber0())); >+ >+ m_jit.move(opGPRs[1], scratchGPR); >+ m_jit.store32(scratchGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber1Lower())); >+ m_jit.rshiftPtr(TrustedImm32(32), scratchGPR); >+ m_jit.store16(scratchGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber1Upper())); >+ >+ if (numOpGPRs == 3) { >+ m_jit.move(opGPRs[2], scratchGPR); >+ m_jit.store32(scratchGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber2Lower())); >+ m_jit.rshiftPtr(TrustedImm32(32), scratchGPR); >+ m_jit.store16(scratchGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber2Upper())); >+ } else { >+ m_jit.storeZero32(CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFiber2Lower())); >+ m_jit.storeZero16(CCallHelpers::Address(resultGPR, 
JSRopeString::offsetOfFiber2Upper())); >+ } >+ >+ { >+ if (JSString* string = edges[0]->dynamicCastConstant<JSString*>(*m_jit.vm())) { >+ m_jit.move(TrustedImm32(string->is8Bit() ? StringImpl::flagIs8Bit() : 0), scratchGPR); >+ m_jit.move(TrustedImm32(string->length()), allocatorGPR); >+ } else { >+ bool canBeRope = !m_state.forNode(edges[0]).isType(SpecStringIdent); >+ m_jit.loadPtr(CCallHelpers::Address(opGPRs[0], JSString::offsetOfValue()), scratch2GPR); >+ CCallHelpers::Jump isRope; >+ if (canBeRope) >+ isRope = m_jit.branchIfRopeStringImpl(scratch2GPR); >+ >+ m_jit.load32(CCallHelpers::Address(scratch2GPR, StringImpl::flagsOffset()), scratchGPR); >+ m_jit.load32(CCallHelpers::Address(scratch2GPR, StringImpl::lengthMemoryOffset()), allocatorGPR); >+ >+ if (canBeRope) { >+ auto done = m_jit.jump(); >+ >+ isRope.link(&m_jit); >+ m_jit.load16(CCallHelpers::Address(opGPRs[0], JSRopeString::offsetOfFlags()), scratchGPR); >+ m_jit.load32(CCallHelpers::Address(opGPRs[0], JSRopeString::offsetOfLength()), allocatorGPR); >+ done.link(&m_jit); >+ } >+ } >+ >+ if (!ASSERT_DISABLED) { >+ CCallHelpers::Jump ok = m_jit.branch32( >+ CCallHelpers::GreaterThanOrEqual, allocatorGPR, TrustedImm32(0)); >+ m_jit.abortWithReason(DFGNegativeStringLength); >+ ok.link(&m_jit); >+ } >+ } >+ >+ for (unsigned i = 1; i < numOpGPRs; ++i) { >+ if (JSString* string = edges[i]->dynamicCastConstant<JSString*>(*m_jit.vm())) { >+ m_jit.and32(TrustedImm32(string->is8Bit() ? 
StringImpl::flagIs8Bit() : 0), scratchGPR); >+ speculationCheck( >+ Uncountable, JSValueSource(), nullptr, >+ m_jit.branchAdd32( >+ CCallHelpers::Overflow, >+ TrustedImm32(string->length()), allocatorGPR)); >+ } else { >+ bool canBeRope = !m_state.forNode(edges[i]).isType(SpecStringIdent); >+ m_jit.loadPtr(CCallHelpers::Address(opGPRs[i], JSString::offsetOfValue()), scratch2GPR); >+ CCallHelpers::Jump isRope; >+ if (canBeRope) >+ isRope = m_jit.branchIfRopeStringImpl(scratch2GPR); >+ >+ m_jit.and32(CCallHelpers::Address(scratch2GPR, StringImpl::flagsOffset()), scratchGPR); >+ speculationCheck( >+ Uncountable, JSValueSource(), nullptr, >+ m_jit.branchAdd32( >+ CCallHelpers::Overflow, >+ CCallHelpers::Address(scratch2GPR, StringImpl::lengthMemoryOffset()), allocatorGPR)); >+ if (canBeRope) { >+ auto done = m_jit.jump(); >+ >+ isRope.link(&m_jit); >+ m_jit.and16(CCallHelpers::Address(opGPRs[i], JSRopeString::offsetOfFlags()), scratchGPR); >+ m_jit.load32(CCallHelpers::Address(opGPRs[i], JSRopeString::offsetOfLength()), scratch2GPR); >+ speculationCheck( >+ Uncountable, JSValueSource(), nullptr, >+ m_jit.branchAdd32( >+ CCallHelpers::Overflow, scratch2GPR, allocatorGPR)); >+ done.link(&m_jit); >+ } >+ } >+ } >+ m_jit.store16(scratchGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfFlags())); >+ if (!ASSERT_DISABLED) { >+ CCallHelpers::Jump ok = m_jit.branch32( >+ CCallHelpers::GreaterThanOrEqual, allocatorGPR, TrustedImm32(0)); >+ m_jit.abortWithReason(DFGNegativeStringLength); >+ ok.link(&m_jit); >+ } >+ m_jit.store32(allocatorGPR, CCallHelpers::Address(resultGPR, JSRopeString::offsetOfLength())); >+ auto isNonEmptyString = m_jit.branchTest32(CCallHelpers::NonZero, allocatorGPR); >+ >+ m_jit.move(TrustedImmPtr::weakPointer(m_jit.graph(), jsEmptyString(&m_jit.graph().m_vm)), resultGPR); >+ >+ isNonEmptyString.link(&m_jit); >+ m_jit.mutatorFence(*m_jit.vm()); >+ >+ switch (numOpGPRs) { >+ case 2: >+ addSlowPathGenerator(slowPathCall( >+ slowPath, this, 
operationMakeRope2, resultGPR, opGPRs[0], opGPRs[1])); >+ break; >+ case 3: >+ addSlowPathGenerator(slowPathCall( >+ slowPath, this, operationMakeRope3, resultGPR, opGPRs[0], opGPRs[1], opGPRs[2])); >+ break; >+ default: >+ RELEASE_ASSERT_NOT_REACHED(); >+ break; >+ } >+ >+ cellResult(resultGPR, node); >+} >+ > #endif > > } } // namespace JSC::DFG >diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp >index c035112580c6b8a2958965654bc1c81ed7ae97d5..a06ac4e535975ac84931bd2cda2e19f200bbfa4c 100644 >--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp >+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp >@@ -60,8 +60,6 @@ AbstractHeapRepository::AbstractHeapRepository() > , JSCell_freeListNext(JSCell_header) > , ArrayStorage_publicLength(Butterfly_publicLength) > , ArrayStorage_vectorLength(Butterfly_vectorLength) >- , JSBigInt_length(JSBigIntOrString_length) >- , JSString_length(JSBigIntOrString_length) > > #define INDEXED_ABSTRACT_HEAP_INITIALIZATION(name, offset, size) , name(&root, #name, offset, size) > FOR_EACH_INDEXED_ABSTRACT_HEAP(INDEXED_ABSTRACT_HEAP_INITIALIZATION) >@@ -71,6 +69,8 @@ AbstractHeapRepository::AbstractHeapRepository() > FOR_EACH_NUMBERED_ABSTRACT_HEAP(NUMBERED_ABSTRACT_HEAP_INITIALIZATION) > #undef NUMBERED_ABSTRACT_HEAP_INITIALIZATION > >+ , JSString_value(JSRopeString_fiber0) >+ > , absolute(&root, "absolute") > { > // Make sure that our explicit assumptions about the StructureIDBlob match reality. 
>@@ -79,14 +79,13 @@ AbstractHeapRepository::AbstractHeapRepository() > RELEASE_ASSERT(JSCell_indexingTypeAndMisc.offset() + 2 == JSCell_typeInfoFlags.offset()); > RELEASE_ASSERT(JSCell_indexingTypeAndMisc.offset() + 3 == JSCell_cellState.offset()); > >- RELEASE_ASSERT(JSBigInt::offsetOfLength() == JSString::offsetOfLength()); >- > JSCell_structureID.changeParent(&JSCell_header); > JSCell_usefulBytes.changeParent(&JSCell_header); > JSCell_indexingTypeAndMisc.changeParent(&JSCell_usefulBytes); > JSCell_typeInfoType.changeParent(&JSCell_usefulBytes); > JSCell_typeInfoFlags.changeParent(&JSCell_usefulBytes); > JSCell_cellState.changeParent(&JSCell_usefulBytes); >+ JSRopeString_flags.changeParent(&JSRopeString_fiber0); > > RELEASE_ASSERT(!JSCell_freeListNext.offset()); > } >diff --git a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >index e50541ba526b06682d0053f546bea2de2abd532a..1bdcf7ef17e9650b37167bed096ee90121e0a606 100644 >--- a/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >+++ b/Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h >@@ -64,7 +64,7 @@ namespace JSC { namespace FTL { > macro(JSArrayBufferView_length, JSArrayBufferView::offsetOfLength()) \ > macro(JSArrayBufferView_mode, JSArrayBufferView::offsetOfMode()) \ > macro(JSArrayBufferView_vector, JSArrayBufferView::offsetOfVector()) \ >- macro(JSBigIntOrString_length, JSBigInt::offsetOfLength()) \ >+ macro(JSBigInt_length, JSBigInt::offsetOfLength()) \ > macro(JSCell_cellState, JSCell::cellStateOffset()) \ > macro(JSCell_header, 0) \ > macro(JSCell_indexingTypeAndMisc, JSCell::indexingTypeAndMiscOffset()) \ >@@ -88,9 +88,14 @@ namespace JSC { namespace FTL { > macro(JSPropertyNameEnumerator_endGenericPropertyIndex, JSPropertyNameEnumerator::endGenericPropertyIndexOffset()) \ > macro(JSPropertyNameEnumerator_endStructurePropertyIndex, JSPropertyNameEnumerator::endStructurePropertyIndexOffset()) \ > 
macro(JSPropertyNameEnumerator_indexLength, JSPropertyNameEnumerator::indexedLengthOffset()) \ >+ macro(JSRopeString_flags, JSRopeString::offsetOfFlags()) \ >+ macro(JSRopeString_fiber0, JSRopeString::offsetOfFiber0()) \ >+ macro(JSRopeString_length, JSRopeString::offsetOfLength()) \ >+ macro(JSRopeString_fiber1Lower, JSRopeString::offsetOfFiber1Lower()) \ >+ macro(JSRopeString_fiber1Upper, JSRopeString::offsetOfFiber1Upper()) \ >+ macro(JSRopeString_fiber2Lower, JSRopeString::offsetOfFiber2Lower()) \ >+ macro(JSRopeString_fiber2Upper, JSRopeString::offsetOfFiber2Upper()) \ > macro(JSScope_next, JSScope::offsetOfNext()) \ >- macro(JSString_flags, JSString::offsetOfFlags()) \ >- macro(JSString_value, JSString::offsetOfValue()) \ > macro(JSSymbolTableObject_symbolTable, JSSymbolTableObject::offsetOfSymbolTable()) \ > macro(JSWrapperObject_internalValue, JSWrapperObject::internalValueOffset()) \ > macro(RegExpObject_regExp, RegExpObject::offsetOfRegExp()) \ >@@ -140,7 +145,6 @@ namespace JSC { namespace FTL { > macro(DirectArguments_storage, DirectArguments::storageOffset(), sizeof(EncodedJSValue)) \ > macro(JSLexicalEnvironment_variables, JSLexicalEnvironment::offsetOfVariables(), sizeof(EncodedJSValue)) \ > macro(JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, 0, sizeof(WriteBarrier<JSString>)) \ >- macro(JSRopeString_fibers, JSRopeString::offsetOfFibers(), sizeof(WriteBarrier<JSString>)) \ > macro(ScopedArguments_Storage_storage, 0, sizeof(EncodedJSValue)) \ > macro(WriteBarrierBuffer_bufferContents, 0, sizeof(JSCell*)) \ > macro(characters8, 0, sizeof(LChar)) \ >@@ -180,8 +184,6 @@ class AbstractHeapRepository { > AbstractHeap& JSCell_freeListNext; > AbstractHeap& ArrayStorage_publicLength; > AbstractHeap& ArrayStorage_vectorLength; >- AbstractHeap& JSBigInt_length; >- AbstractHeap& JSString_length; > > #define INDEXED_ABSTRACT_HEAP_DECLARATION(name, offset, size) IndexedAbstractHeap name; > 
FOR_EACH_INDEXED_ABSTRACT_HEAP(INDEXED_ABSTRACT_HEAP_DECLARATION) >@@ -191,6 +193,8 @@ class AbstractHeapRepository { > FOR_EACH_NUMBERED_ABSTRACT_HEAP(NUMBERED_ABSTRACT_HEAP_DECLARATION) > #undef NUMBERED_ABSTRACT_HEAP_DECLARATION > >+ AbstractHeap& JSString_value; >+ > AbsoluteAbstractHeap absolute; > > IndexedAbstractHeap* forIndexingType(IndexingType indexingType) >diff --git a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >index 87de9baba05c1cedc843fd751240a805a902734e..ed0d22fabcd23459fec869fa94c6f33cd73a1474 100644 >--- a/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >+++ b/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp >@@ -3655,8 +3655,7 @@ class LowerDFGToB3 { > LValue fastResultValue = m_out.loadPtr(cell, m_heaps.JSString_value); > ValueFromBlock fastResult = m_out.anchor(fastResultValue); > >- m_out.branch( >- m_out.notNull(fastResultValue), usually(continuation), rarely(slowPath)); >+ m_out.branch(isRopeString(cell, m_node->child1()), rarely(slowPath), usually(continuation)); > > LBasicBlock lastNext = m_out.appendTo(slowPath, continuation); > >@@ -3828,7 +3827,23 @@ class LowerDFGToB3 { > > case Array::String: { > LValue string = lowCell(m_node->child1()); >- setInt32(m_out.load32NonNegative(string, m_heaps.JSString_length)); >+ >+ LBasicBlock ropePath = m_out.newBlock(); >+ LBasicBlock nonRopePath = m_out.newBlock(); >+ LBasicBlock continuation = m_out.newBlock(); >+ >+ m_out.branch(isRopeString(string, m_node->child1()), rarely(ropePath), usually(nonRopePath)); >+ >+ LBasicBlock lastNext = m_out.appendTo(ropePath, nonRopePath); >+ ValueFromBlock ropeLength = m_out.anchor(m_out.load32NonNegative(string, m_heaps.JSRopeString_length)); >+ m_out.jump(continuation); >+ >+ m_out.appendTo(nonRopePath, continuation); >+ ValueFromBlock nonRopeLength = m_out.anchor(m_out.load32NonNegative(m_out.loadPtr(string, m_heaps.JSString_value), m_heaps.StringImpl_length)); >+ m_out.jump(continuation); >+ >+ 
m_out.appendTo(continuation, lastNext); >+ setInt32(m_out.phi(Int32, ropeLength, nonRopeLength)); > return; > } > >@@ -6496,14 +6511,24 @@ class LowerDFGToB3 { > setJSValue(m_out.phi(Int64, results)); > } > >+ struct FlagsAndLength { >+ LValue flags; >+ LValue length; >+ }; >+ > void compileMakeRope() > { >+ Edge edges[3] = { >+ m_node->child1(), >+ m_node->child2(), >+ m_node->child3(), >+ }; > LValue kids[3]; > unsigned numKids; >- kids[0] = lowCell(m_node->child1()); >- kids[1] = lowCell(m_node->child2()); >- if (m_node->child3()) { >- kids[2] = lowCell(m_node->child3()); >+ kids[0] = lowCell(edges[0]); >+ kids[1] = lowCell(edges[1]); >+ if (edges[2]) { >+ kids[2] = lowCell(edges[2]); > numKids = 3; > } else { > kids[2] = 0; >@@ -6513,37 +6538,108 @@ class LowerDFGToB3 { > LBasicBlock slowPath = m_out.newBlock(); > LBasicBlock continuation = m_out.newBlock(); > >- LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath); >- > Allocator allocator = allocatorForNonVirtualConcurrently<JSRopeString>(vm(), sizeof(JSRopeString), AllocatorForMode::AllocatorIfExists); > > LValue result = allocateCell( > m_out.constIntPtr(allocator.localAllocator()), vm().stringStructure.get(), slowPath); > >- m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value); >- for (unsigned i = 0; i < numKids; ++i) >- m_out.storePtr(kids[i], result, m_heaps.JSRopeString_fibers[i]); >- for (unsigned i = numKids; i < JSRopeString::s_maxInternalRopeLength; ++i) >- m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSRopeString_fibers[i]); >- LValue flags = m_out.load16ZeroExt32(kids[0], m_heaps.JSString_flags); >- LValue length = m_out.load32(kids[0], m_heaps.JSString_length); >+ m_out.storePtr(m_out.bitOr(kids[0], m_out.constIntPtr(JSString::IsRopeInPointer)), result, m_heaps.JSRopeString_fiber0); >+ >+ m_out.store32(m_out.castToInt32(kids[1]), result, m_heaps.JSRopeString_fiber1Lower); >+ m_out.store32As16(m_out.castToInt32(m_out.lShr(kids[1], m_out.constInt32(32))), result, 
m_heaps.JSRopeString_fiber1Upper); >+ >+ if (numKids == 3) { >+ m_out.store32(m_out.castToInt32(kids[2]), result, m_heaps.JSRopeString_fiber2Lower); >+ m_out.store32As16(m_out.castToInt32(m_out.lShr(kids[2], m_out.constInt32(32))), result, m_heaps.JSRopeString_fiber2Upper); >+ } else { >+ m_out.store32(m_out.int32Zero, result, m_heaps.JSRopeString_fiber2Lower); >+ m_out.store32As16(m_out.int32Zero, result, m_heaps.JSRopeString_fiber2Upper); >+ } >+ >+ auto initializeFlagsAndLength = [&] (Edge& edge, LValue child) { >+ if (JSString* string = edge->dynamicCastConstant<JSString*>(vm())) { >+ return FlagsAndLength { >+ m_out.constInt32(string->is8Bit() ? StringImpl::flagIs8Bit() : 0), >+ m_out.constInt32(string->length()) >+ }; >+ } >+ >+ LBasicBlock continuation = m_out.newBlock(); >+ LBasicBlock ropeCase = m_out.newBlock(); >+ LBasicBlock notRopeCase = m_out.newBlock(); >+ >+ m_out.branch(isRopeString(child, edge), unsure(ropeCase), unsure(notRopeCase)); >+ >+ LBasicBlock lastNext = m_out.appendTo(ropeCase, notRopeCase); >+ ValueFromBlock flagsForRope = m_out.anchor(m_out.load16ZeroExt32(child, m_heaps.JSRopeString_flags)); >+ ValueFromBlock lengthForRope = m_out.anchor(m_out.load32NonNegative(child, m_heaps.JSRopeString_length)); >+ m_out.jump(continuation); >+ >+ m_out.appendTo(notRopeCase, continuation); >+ LValue stringImpl = m_out.loadPtr(child, m_heaps.JSString_value); >+ ValueFromBlock flagsForNonRope = m_out.anchor(m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_hashAndFlags)); >+ ValueFromBlock lengthForNonRope = m_out.anchor(m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_length)); >+ m_out.jump(continuation); >+ >+ m_out.appendTo(continuation, lastNext); >+ return FlagsAndLength { >+ m_out.phi(Int32, flagsForRope, flagsForNonRope), >+ m_out.phi(Int32, lengthForRope, lengthForNonRope) >+ }; >+ }; >+ >+ FlagsAndLength flagsAndLength = initializeFlagsAndLength(edges[0], kids[0]); > for (unsigned i = 1; i < numKids; ++i) { >- flags = 
m_out.bitAnd(flags, m_out.load16ZeroExt32(kids[i], m_heaps.JSString_flags)); >- CheckValue* lengthCheck = m_out.speculateAdd( >- length, m_out.load32(kids[i], m_heaps.JSString_length)); >- blessSpeculation(lengthCheck, Uncountable, noValue(), nullptr, m_origin); >- length = lengthCheck; >+ auto mergeFlagsAndLength = [&] (Edge& edge, LValue child, FlagsAndLength flagsAndLength) { >+ if (JSString* string = edge->dynamicCastConstant<JSString*>(vm())) { >+ LValue flags = m_out.bitAnd(flagsAndLength.flags, m_out.constInt32(string->is8Bit() ? StringImpl::flagIs8Bit() : 0)); >+ CheckValue* lengthCheck = m_out.speculateAdd(flagsAndLength.length, m_out.constInt32(string->length())); >+ blessSpeculation(lengthCheck, Uncountable, noValue(), nullptr, m_origin); >+ LValue length = lengthCheck; >+ return FlagsAndLength { >+ flags, >+ length >+ }; >+ } >+ >+ LBasicBlock continuation = m_out.newBlock(); >+ LBasicBlock ropeCase = m_out.newBlock(); >+ LBasicBlock notRopeCase = m_out.newBlock(); >+ >+ m_out.branch(isRopeString(child, edge), unsure(ropeCase), unsure(notRopeCase)); >+ >+ LBasicBlock lastNext = m_out.appendTo(ropeCase, notRopeCase); >+ ValueFromBlock flagsForRope = m_out.anchor(m_out.load16ZeroExt32(child, m_heaps.JSRopeString_flags)); >+ ValueFromBlock lengthForRope = m_out.anchor(m_out.load32NonNegative(child, m_heaps.JSRopeString_length)); >+ m_out.jump(continuation); >+ >+ m_out.appendTo(notRopeCase, continuation); >+ LValue stringImpl = m_out.loadPtr(child, m_heaps.JSString_value); >+ ValueFromBlock flagsForNonRope = m_out.anchor(m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_hashAndFlags)); >+ ValueFromBlock lengthForNonRope = m_out.anchor(m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_length)); >+ m_out.jump(continuation); >+ >+ m_out.appendTo(continuation, lastNext); >+ LValue flags = m_out.bitAnd(flagsAndLength.flags, m_out.phi(Int32, flagsForRope, flagsForNonRope)); >+ CheckValue* lengthCheck = m_out.speculateAdd(flagsAndLength.length, 
m_out.phi(Int32, lengthForRope, lengthForNonRope)); >+ blessSpeculation(lengthCheck, Uncountable, noValue(), nullptr, m_origin); >+ LValue length = lengthCheck; >+ return FlagsAndLength { >+ flags, >+ length >+ }; >+ }; >+ >+ flagsAndLength = mergeFlagsAndLength(edges[i], kids[i], flagsAndLength); > } >- m_out.store32As16( >- m_out.bitAnd(m_out.constInt32(JSString::Is8Bit), flags), >- result, m_heaps.JSString_flags); >- m_out.store32(length, result, m_heaps.JSString_length); >+ m_out.store32As16(flagsAndLength.flags, result, m_heaps.JSRopeString_flags); >+ m_out.store32(flagsAndLength.length, result, m_heaps.JSRopeString_length); > > mutatorFence(); >- ValueFromBlock fastResult = m_out.anchor(result); >+ ValueFromBlock fastResult = m_out.anchor(m_out.select(m_out.isZero32(flagsAndLength.length), weakPointer(jsEmptyString(&m_graph.m_vm)), result)); > m_out.jump(continuation); > >- m_out.appendTo(slowPath, continuation); >+ LBasicBlock lastNext = m_out.appendTo(slowPath, continuation); > LValue slowResultValue; > VM& vm = this->vm(); > switch (numKids) { >@@ -6584,15 +6680,14 @@ class LowerDFGToB3 { > LBasicBlock slowPath = m_out.newBlock(); > LBasicBlock continuation = m_out.newBlock(); > >+ LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value); > m_out.branch( > m_out.aboveOrEqual( >- index, m_out.load32NonNegative(base, m_heaps.JSString_length)), >+ index, m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_length)), > rarely(slowPath), usually(fastPath)); > > LBasicBlock lastNext = m_out.appendTo(fastPath, slowPath); > >- LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value); >- > LBasicBlock is8Bit = m_out.newBlock(); > LBasicBlock is16Bit = m_out.newBlock(); > LBasicBlock bitsContinuation = m_out.newBlock(); >@@ -6694,12 +6789,12 @@ class LowerDFGToB3 { > LValue index = lowInt32(m_node->child2()); > LValue storage = lowStorage(m_node->child3()); > >+ LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value); >+ > speculate( > 
Uncountable, noValue(), 0, > m_out.aboveOrEqual( >- index, m_out.load32NonNegative(base, m_heaps.JSString_length))); >- >- LValue stringImpl = m_out.loadPtr(base, m_heaps.JSString_value); >+ index, m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_length))); > > m_out.branch( > m_out.testIsZero32( >@@ -7213,7 +7308,7 @@ class LowerDFGToB3 { > > speculateString(m_node->child2(), right); > >- ValueFromBlock slowResult = m_out.anchor(stringsEqual(left, right)); >+ ValueFromBlock slowResult = m_out.anchor(stringsEqual(left, right, m_node->child1(), m_node->child2())); > m_out.jump(continuation); > > m_out.appendTo(continuation, lastNext); >@@ -7387,7 +7482,7 @@ class LowerDFGToB3 { > > // Full String compare. > m_out.appendTo(testStringEquality, continuation); >- ValueFromBlock slowResult = m_out.anchor(stringsEqual(leftString, rightValue)); >+ ValueFromBlock slowResult = m_out.anchor(stringsEqual(leftString, rightValue, stringEdge, untypedEdge)); > m_out.jump(continuation); > > // Continuation. 
>@@ -9037,25 +9132,25 @@ class LowerDFGToB3 {
>         LBasicBlock is16Bit = m_out.newBlock();
>         LBasicBlock continuation = m_out.newBlock();
> 
>+        ValueFromBlock fastValue = m_out.anchor(m_out.loadPtr(stringValue, m_heaps.JSString_value));
>+        m_out.branch(
>+            isRopeString(stringValue, m_node->child1()),
>+            rarely(needResolution), usually(resolved));
>+
>+        LBasicBlock lastNext = m_out.appendTo(needResolution, resolved);
>+        ValueFromBlock slowValue = m_out.anchor(
>+            vmCall(pointerType(), m_out.operation(operationResolveRope), m_callFrame, stringValue));
>+        m_out.jump(resolved);
>+
>+        m_out.appendTo(resolved, lengthIs1);
>+        LValue value = m_out.phi(pointerType(), fastValue, slowValue);
>         m_out.branch(
>             m_out.notEqual(
>-                m_out.load32NonNegative(stringValue, m_heaps.JSString_length),
>+                m_out.load32NonNegative(value, m_heaps.StringImpl_length),
>                 m_out.int32One),
>             unsure(lowBlock(data->fallThrough.block)), unsure(lengthIs1));
>-
>-        LBasicBlock lastNext = m_out.appendTo(lengthIs1, needResolution);
>-        Vector<ValueFromBlock, 2> values;
>-        LValue fastValue = m_out.loadPtr(stringValue, m_heaps.JSString_value);
>-        values.append(m_out.anchor(fastValue));
>-        m_out.branch(m_out.isNull(fastValue), rarely(needResolution), usually(resolved));
>-
>-        m_out.appendTo(needResolution, resolved);
>-        values.append(m_out.anchor(
>-            vmCall(pointerType(), m_out.operation(operationResolveRope), m_callFrame, stringValue)));
>-        m_out.jump(resolved);
>-
>-        m_out.appendTo(resolved, is8Bit);
>-        LValue value = m_out.phi(pointerType(), values);
>+
>+        m_out.appendTo(lengthIs1, is8Bit);
>         LValue characterData = m_out.loadPtr(value, m_heaps.StringImpl_data);
>         m_out.branch(
>             m_out.testNonZero32(
>@@ -9097,7 +9192,7 @@ class LowerDFGToB3 {
>         }
> 
>         case StringUse: {
>-            switchString(data, lowString(m_node->child1()));
>+            switchString(data, lowString(m_node->child1()), m_node->child1());
>             return;
>         }
> 
>@@ -9119,7 +9214,7 @@ class LowerDFGToB3 {
> 
>         m_out.appendTo(isStringBlock, lastNext);
> 
>-        switchString(data, value);
>+        switchString(data, value, m_node->child1());
>         return;
>     }
> 
>@@ -9457,17 +9552,16 @@ class LowerDFGToB3 {
>         return key;
>     }
> 
>-    LValue mapHashString(LValue string)
>+    LValue mapHashString(LValue string, Edge& edge)
>     {
>         LBasicBlock nonEmptyStringCase = m_out.newBlock();
>         LBasicBlock slowCase = m_out.newBlock();
>         LBasicBlock continuation = m_out.newBlock();
> 
>-        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>-        m_out.branch(
>-            m_out.equal(stringImpl, m_out.constIntPtr(0)), unsure(slowCase), unsure(nonEmptyStringCase));
>+        m_out.branch(isRopeString(string, edge), rarely(slowCase), usually(nonEmptyStringCase));
> 
>         LBasicBlock lastNext = m_out.appendTo(nonEmptyStringCase, slowCase);
>+        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>         LValue hash = m_out.lShr(m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags), m_out.constInt32(StringImpl::s_flagCount));
>         ValueFromBlock nonEmptyStringHashResult = m_out.anchor(hash);
>         m_out.branch(m_out.equal(hash, m_out.constInt32(0)),
>@@ -9506,7 +9600,7 @@ class LowerDFGToB3 {
>             isStringValue, unsure(isString), unsure(notString));
> 
>         LBasicBlock lastNext = m_out.appendTo(isString, notString);
>-        ValueFromBlock stringResult = m_out.anchor(mapHashString(value));
>+        ValueFromBlock stringResult = m_out.anchor(mapHashString(value, m_node->child1()));
>         m_out.jump(continuation);
> 
>         m_out.appendTo(notString, continuation);
>@@ -9520,7 +9614,7 @@ class LowerDFGToB3 {
> 
>         case StringUse: {
>             LValue string = lowString(m_node->child1());
>-            setInt32(mapHashString(string));
>+            setInt32(mapHashString(string, m_node->child1()));
>             return;
>         }
> 
>@@ -9547,11 +9641,10 @@ class LowerDFGToB3 {
>             isString, unsure(isStringCase), unsure(straightHash));
> 
>         m_out.appendTo(isStringCase, nonEmptyStringCase);
>-        LValue stringImpl = m_out.loadPtr(value, m_heaps.JSString_value);
>-        m_out.branch(
>-            m_out.equal(stringImpl, m_out.constIntPtr(0)), rarely(slowCase), usually(nonEmptyStringCase));
>+        m_out.branch(isRopeString(value, m_node->child1()), rarely(slowCase), usually(nonEmptyStringCase));
> 
>         m_out.appendTo(nonEmptyStringCase, straightHash);
>+        LValue stringImpl = m_out.loadPtr(value, m_heaps.JSString_value);
>         LValue hash = m_out.lShr(m_out.load32(stringImpl, m_heaps.StringImpl_hashAndFlags), m_out.constInt32(StringImpl::s_flagCount));
>         ValueFromBlock nonEmptyStringHashResult = m_out.anchor(hash);
>         m_out.branch(m_out.equal(hash, m_out.constInt32(0)),
>@@ -10149,10 +10242,10 @@ class LowerDFGToB3 {
>             LBasicBlock isAtomicString = m_out.newBlock();
> 
>             keyAsValue = lowString(m_node->child2());
>-            uniquedStringImpl = m_out.loadPtr(keyAsValue, m_heaps.JSString_value);
>-            m_out.branch(m_out.notNull(uniquedStringImpl), usually(isNonEmptyString), rarely(slowCase));
>+            m_out.branch(isNotRopeString(keyAsValue, m_node->child2()), usually(isNonEmptyString), rarely(slowCase));
> 
>             lastNext = m_out.appendTo(isNonEmptyString, isAtomicString);
>+            uniquedStringImpl = m_out.loadPtr(keyAsValue, m_heaps.JSString_value);
>             LValue isNotAtomic = m_out.testIsZero32(m_out.load32(uniquedStringImpl, m_heaps.StringImpl_hashAndFlags), m_out.constInt32(StringImpl::flagIsAtomic()));
>             m_out.branch(isNotAtomic, rarely(slowCase), usually(isAtomicString));
> 
>@@ -10180,11 +10273,11 @@ class LowerDFGToB3 {
>             m_out.branch(isString(keyAsValue), unsure(isStringCase), unsure(notStringCase));
> 
>             m_out.appendTo(isStringCase, isNonEmptyString);
>-            LValue implFromString = m_out.loadPtr(keyAsValue, m_heaps.JSString_value);
>-            ValueFromBlock stringResult = m_out.anchor(implFromString);
>-            m_out.branch(m_out.notNull(implFromString), usually(isNonEmptyString), rarely(slowCase));
>+            m_out.branch(isNotRopeString(keyAsValue, m_node->child2()), usually(isNonEmptyString), rarely(slowCase));
> 
>             m_out.appendTo(isNonEmptyString, notStringCase);
>+            LValue implFromString = m_out.loadPtr(keyAsValue, m_heaps.JSString_value);
>+            ValueFromBlock stringResult = m_out.anchor(implFromString);
>             LValue isNotAtomic = m_out.testIsZero32(m_out.load32(implFromString, m_heaps.StringImpl_hashAndFlags), m_out.constInt32(StringImpl::flagIsAtomic()));
>             m_out.branch(isNotAtomic, rarely(slowCase), usually(hasUniquedStringImpl));
> 
>@@ -11922,45 +12015,46 @@ class LowerDFGToB3 {
> 
>     void compileStringSlice()
>     {
>+        LBasicBlock lengthCheckCase = m_out.newBlock();
>         LBasicBlock emptyCase = m_out.newBlock();
>         LBasicBlock notEmptyCase = m_out.newBlock();
>         LBasicBlock oneCharCase = m_out.newBlock();
>-        LBasicBlock bitCheckCase = m_out.newBlock();
>         LBasicBlock is8Bit = m_out.newBlock();
>         LBasicBlock is16Bit = m_out.newBlock();
>         LBasicBlock bitsContinuation = m_out.newBlock();
>         LBasicBlock bigCharacter = m_out.newBlock();
>         LBasicBlock slowCase = m_out.newBlock();
>+        LBasicBlock ropeSlowCase = m_out.newBlock();
>         LBasicBlock continuation = m_out.newBlock();
> 
>         LValue string = lowString(m_node->child1());
>-        LValue length = m_out.load32NonNegative(string, m_heaps.JSString_length);
>         LValue start = lowInt32(m_node->child2());
>         LValue end = nullptr;
>         if (m_node->child3())
>             end = lowInt32(m_node->child3());
>+        else
>+            end = m_out.constInt32(std::numeric_limits<int32_t>::max());
>+        m_out.branch(isRopeString(string, m_node->child1()), rarely(ropeSlowCase), usually(lengthCheckCase));
> 
>+        LBasicBlock lastNext = m_out.appendTo(lengthCheckCase, emptyCase);
>+        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>+        LValue length = m_out.load32NonNegative(stringImpl, m_heaps.StringImpl_length);
>         auto range = populateSliceRange(start, end, length);
>         LValue from = range.first;
>         LValue to = range.second;
>-
>         LValue span = m_out.sub(to, from);
>         m_out.branch(m_out.lessThanOrEqual(span, m_out.int32Zero), unsure(emptyCase), unsure(notEmptyCase));
> 
>-        Vector<ValueFromBlock, 4> results;
>+        Vector<ValueFromBlock, 5> results;
> 
>-        LBasicBlock lastNext = m_out.appendTo(emptyCase, notEmptyCase);
>+        m_out.appendTo(emptyCase, notEmptyCase);
>         results.append(m_out.anchor(weakPointer(jsEmptyString(&vm()))));
>         m_out.jump(continuation);
> 
>         m_out.appendTo(notEmptyCase, oneCharCase);
>         m_out.branch(m_out.equal(span, m_out.int32One), unsure(oneCharCase), unsure(slowCase));
> 
>-        m_out.appendTo(oneCharCase, bitCheckCase);
>-        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>-        m_out.branch(m_out.isNull(stringImpl), unsure(slowCase), unsure(bitCheckCase));
>-
>-        m_out.appendTo(bitCheckCase, is8Bit);
>+        m_out.appendTo(oneCharCase, is8Bit);
>         LValue storage = m_out.loadPtr(stringImpl, m_heaps.StringImpl_data);
>         m_out.branch(
>             m_out.testIsZero32(
>@@ -11969,8 +12063,6 @@ class LowerDFGToB3 {
>             unsure(is16Bit), unsure(is8Bit));
> 
>         m_out.appendTo(is8Bit, is16Bit);
>-        // FIXME: Need to cage strings!
>-        // https://bugs.webkit.org/show_bug.cgi?id=174924
>         ValueFromBlock char8Bit = m_out.anchor(m_out.load8ZeroExt32(m_out.baseIndex(m_heaps.characters8, storage, m_out.zeroExtPtr(from))));
>         m_out.jump(bitsContinuation);
> 
>@@ -11994,10 +12086,14 @@ class LowerDFGToB3 {
>             m_heaps.singleCharacterStrings, smallStrings, m_out.zeroExtPtr(character)))));
>         m_out.jump(continuation);
> 
>-        m_out.appendTo(slowCase, continuation);
>+        m_out.appendTo(slowCase, ropeSlowCase);
>         results.append(m_out.anchor(vmCall(pointerType(), m_out.operation(operationStringSubstr), m_callFrame, string, from, span)));
>         m_out.jump(continuation);
> 
>+        m_out.appendTo(ropeSlowCase, continuation);
>+        results.append(m_out.anchor(vmCall(pointerType(), m_out.operation(operationStringSlice), m_callFrame, string, start, end)));
>+        m_out.jump(continuation);
>+
>         m_out.appendTo(continuation, lastNext);
>         setJSValue(m_out.phi(pointerType(), results));
>     }
>@@ -12014,12 +12110,11 @@ class LowerDFGToB3 {
>         LValue string = lowString(m_node->child1());
>         ValueFromBlock startIndex = m_out.anchor(m_out.constInt32(0));
>         ValueFromBlock startIndexForCall = m_out.anchor(m_out.constInt32(0));
>-        LValue impl = m_out.loadPtr(string, m_heaps.JSString_value);
>-        m_out.branch(m_out.isZero64(impl),
>+        m_out.branch(isRopeString(string, m_node->child1()),
>             unsure(slowPath), unsure(notRope));
> 
>         LBasicBlock lastNext = m_out.appendTo(notRope, is8Bit);
>-
>+        LValue impl = m_out.loadPtr(string, m_heaps.JSString_value);
>         m_out.branch(
>             m_out.testIsZero32(
>                 m_out.load32(impl, m_heaps.StringImpl_hashAndFlags),
>@@ -12791,7 +12886,7 @@ class LowerDFGToB3 {
>         setBoolean(m_out.phi(Int32, fastResult, slowResult));
>     }
> 
>-    LValue stringsEqual(LValue leftJSString, LValue rightJSString)
>+    LValue stringsEqual(LValue leftJSString, LValue rightJSString, Edge leftJSStringEdge = Edge(), Edge rightJSStringEdge = Edge())
>     {
>         LBasicBlock notTriviallyUnequalCase = m_out.newBlock();
>         LBasicBlock notEmptyCase = m_out.newBlock();
>@@ -12806,29 +12901,23 @@ class LowerDFGToB3 {
>         LBasicBlock slowCase = m_out.newBlock();
>         LBasicBlock continuation = m_out.newBlock();
> 
>-        LValue length = m_out.load32(leftJSString, m_heaps.JSString_length);
>+        m_out.branch(isRopeString(leftJSString, leftJSStringEdge), rarely(slowCase), usually(leftReadyCase));
> 
>-        m_out.branch(
>-            m_out.notEqual(length, m_out.load32(rightJSString, m_heaps.JSString_length)),
>-            unsure(falseCase), unsure(notTriviallyUnequalCase));
>-
>-        LBasicBlock lastNext = m_out.appendTo(notTriviallyUnequalCase, notEmptyCase);
>-
>-        m_out.branch(m_out.isZero32(length), unsure(trueCase), unsure(notEmptyCase));
>-
>-        m_out.appendTo(notEmptyCase, leftReadyCase);
>+        LBasicBlock lastNext = m_out.appendTo(leftReadyCase, rightReadyCase);
>+        m_out.branch(isRopeString(rightJSString, rightJSStringEdge), rarely(slowCase), usually(rightReadyCase));
> 
>+        m_out.appendTo(rightReadyCase, notTriviallyUnequalCase);
>         LValue left = m_out.loadPtr(leftJSString, m_heaps.JSString_value);
>         LValue right = m_out.loadPtr(rightJSString, m_heaps.JSString_value);
>+        LValue length = m_out.load32(left, m_heaps.StringImpl_length);
>+        m_out.branch(
>+            m_out.notEqual(length, m_out.load32(right, m_heaps.StringImpl_length)),
>+            unsure(falseCase), unsure(notTriviallyUnequalCase));
> 
>-        m_out.branch(m_out.notNull(left), usually(leftReadyCase), rarely(slowCase));
>-
>-        m_out.appendTo(leftReadyCase, rightReadyCase);
>-
>-        m_out.branch(m_out.notNull(right), usually(rightReadyCase), rarely(slowCase));
>-
>-        m_out.appendTo(rightReadyCase, left8BitCase);
>+        m_out.appendTo(notTriviallyUnequalCase, notEmptyCase);
>+        m_out.branch(m_out.isZero32(length), unsure(trueCase), unsure(notEmptyCase));
> 
>+        m_out.appendTo(notEmptyCase, left8BitCase);
>         m_out.branch(
>             m_out.testIsZero32(
>                 m_out.load32(left, m_heaps.StringImpl_hashAndFlags),
>@@ -12836,7 +12925,6 @@ class LowerDFGToB3 {
>             unsure(slowCase), unsure(left8BitCase));
> 
>         m_out.appendTo(left8BitCase, right8BitCase);
>-
>         m_out.branch(
>             m_out.testIsZero32(
>                 m_out.load32(right, m_heaps.StringImpl_hashAndFlags),
>@@ -13558,11 +13646,8 @@ class LowerDFGToB3 {
>             equalNullOrUndefined(
>                 edge, CellCaseSpeculatesObject, SpeculateNullOrUndefined,
>                 ManualOperandSpeculation));
>-        case StringUse: {
>-            LValue stringValue = lowString(edge);
>-            LValue length = m_out.load32NonNegative(stringValue, m_heaps.JSString_length);
>-            return m_out.notEqual(length, m_out.int32Zero);
>-        }
>+        case StringUse:
>+            return m_out.notEqual(lowString(edge), weakPointer(jsEmptyString(&m_graph.m_vm)));
>         case StringOrOtherUse: {
>             LValue value = lowJSValue(edge, ManualOperandSpeculation);
> 
>@@ -13573,20 +13658,17 @@ class LowerDFGToB3 {
>             m_out.branch(isCell(value, provenType(edge)), unsure(cellCase), unsure(notCellCase));
> 
>             LBasicBlock lastNext = m_out.appendTo(cellCase, notCellCase);
>-
>             FTL_TYPE_CHECK(jsValueValue(value), edge, (~SpecCellCheck) | SpecString, isNotString(value));
>-            LValue length = m_out.load32NonNegative(value, m_heaps.JSString_length);
>-            ValueFromBlock cellResult = m_out.anchor(m_out.notEqual(length, m_out.int32Zero));
>+            ValueFromBlock stringResult = m_out.anchor(m_out.notEqual(value, weakPointer(jsEmptyString(&m_graph.m_vm))));
>             m_out.jump(continuation);
>-
>+
>             m_out.appendTo(notCellCase, continuation);
>-
>             FTL_TYPE_CHECK(jsValueValue(value), edge, SpecCellCheck | SpecOther, isNotOther(value));
>             ValueFromBlock notCellResult = m_out.anchor(m_out.booleanFalse);
>             m_out.jump(continuation);
>-            m_out.appendTo(continuation, lastNext);
> 
>-            return m_out.phi(Int32, cellResult, notCellResult);
>+            m_out.appendTo(continuation, lastNext);
>+            return m_out.phi(Int32, stringResult, notCellResult);
>         }
>         case UntypedUse: {
>             LValue value = lowJSValue(edge);
>@@ -13609,7 +13691,8 @@ class LowerDFGToB3 {
> 
>         LBasicBlock cellCase = m_out.newBlock();
>         LBasicBlock notStringCase = m_out.newBlock();
>-        LBasicBlock stringOrBigIntCase = m_out.newBlock();
>+        LBasicBlock stringCase = m_out.newBlock();
>+        LBasicBlock bigIntCase = m_out.newBlock();
>         LBasicBlock notStringOrBigIntCase = m_out.newBlock();
>         LBasicBlock notCellCase = m_out.newBlock();
>         LBasicBlock int32Case = m_out.newBlock();
>@@ -13625,17 +13708,21 @@ class LowerDFGToB3 {
>         LBasicBlock lastNext = m_out.appendTo(cellCase, notStringCase);
>         m_out.branch(
>             isString(value, provenType(edge) & SpecCell),
>-            unsure(stringOrBigIntCase), unsure(notStringCase));
>+            unsure(stringCase), unsure(notStringCase));
> 
>-        m_out.appendTo(notStringCase, stringOrBigIntCase);
>+        m_out.appendTo(notStringCase, stringCase);
>         m_out.branch(
>             isBigInt(value, provenType(edge) & (SpecCell - SpecString)),
>-            unsure(stringOrBigIntCase), unsure(notStringOrBigIntCase));
>+            unsure(bigIntCase), unsure(notStringOrBigIntCase));
>+
>+        m_out.appendTo(stringCase, bigIntCase);
>+        results.append(m_out.anchor(m_out.notEqual(value, weakPointer(jsEmptyString(&m_graph.m_vm)))));
>+        m_out.jump(continuation);
> 
>-        m_out.appendTo(stringOrBigIntCase, notStringOrBigIntCase);
>-        LValue nonZeroCell = m_out.notZero32(
>-            m_out.load32NonNegative(value, m_heaps.JSBigIntOrString_length));
>-        results.append(m_out.anchor(nonZeroCell));
>+        m_out.appendTo(bigIntCase, notStringOrBigIntCase);
>+        LValue nonZeroBigInt = m_out.notZero32(
>+            m_out.load32NonNegative(value, m_heaps.JSBigInt_length));
>+        results.append(m_out.anchor(nonZeroBigInt));
>         m_out.jump(continuation);
> 
>         m_out.appendTo(notStringOrBigIntCase, notCellCase);
>@@ -13899,7 +13986,7 @@ class LowerDFGToB3 {
>             lowBlock(data->fallThrough.block), Weight(data->fallThrough.count));
>     }
> 
>-    void switchString(SwitchData* data, LValue string)
>+    void switchString(SwitchData* data, LValue string, Edge& edge)
>     {
>         bool canDoBinarySwitch = true;
>         unsigned totalLength = 0;
>@@ -13922,16 +14009,16 @@ class LowerDFGToB3 {
>             return;
>         }
> 
>-        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>-        LValue length = m_out.load32(string, m_heaps.JSString_length);
>-
>         LBasicBlock hasImplBlock = m_out.newBlock();
>         LBasicBlock is8BitBlock = m_out.newBlock();
>         LBasicBlock slowBlock = m_out.newBlock();
> 
>-        m_out.branch(m_out.isNull(stringImpl), unsure(slowBlock), unsure(hasImplBlock));
>+        m_out.branch(isRopeString(string, edge), unsure(slowBlock), unsure(hasImplBlock));
> 
>         LBasicBlock lastNext = m_out.appendTo(hasImplBlock, is8BitBlock);
>+
>+        LValue stringImpl = m_out.loadPtr(string, m_heaps.JSString_value);
>+        LValue length = m_out.load32(stringImpl, m_heaps.StringImpl_length);
> 
>         m_out.branch(
>             m_out.testIsZero32(
>@@ -15626,6 +15713,34 @@ class LowerDFGToB3 {
>             m_out.constInt32(vm().stringStructure->id()));
>     }
> 
>+    LValue isRopeString(LValue string, Edge edge = Edge())
>+    {
>+        if (edge) {
>+            if (!((provenType(edge) & SpecString) & ~SpecStringIdent))
>+                return m_out.booleanFalse;
>+            if (JSValue value = provenValue(edge)) {
>+                if (value.isCell() && value.asCell()->type() == StringType && !asString(value)->isRope())
>+                    return m_out.booleanFalse;
>+            }
>+        }
>+
>+        return m_out.testNonZeroPtr(m_out.loadPtr(string, m_heaps.JSString_value), m_out.constIntPtr(JSString::IsRopeInPointer));
>+    }
>+
>+    LValue isNotRopeString(LValue string, Edge edge = Edge())
>+    {
>+        if (edge) {
>+            if (!((provenType(edge) & SpecString) & ~SpecStringIdent))
>+                return m_out.booleanTrue;
>+            if (JSValue value = provenValue(edge)) {
>+                if (value.isCell() && value.asCell()->type() == StringType && !asString(value)->isRope())
>+                    return m_out.booleanTrue;
>+            }
>+        }
>+
>+        return m_out.testIsZeroPtr(m_out.loadPtr(string, m_heaps.JSString_value), m_out.constIntPtr(JSString::IsRopeInPointer));
>+    }
>+
>     LValue isNotSymbol(LValue cell, SpeculatedType type = SpecFullTop)
>     {
>         if (LValue proven = isProvenValue(type & SpecCell, ~SpecSymbol))
>@@ -16023,7 +16138,7 @@ class LowerDFGToB3 {
>         if (!m_interpreter.needsTypeCheck(edge, SpecStringIdent | ~SpecString))
>             return;
> 
>-        speculate(BadType, jsValueValue(string), edge.node(), m_out.isNull(stringImpl));
>+        speculate(BadType, jsValueValue(string), edge.node(), isRopeString(string));
>         speculate(
>             BadType, jsValueValue(string), edge.node(),
>             m_out.testIsZero32(
>diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
>index 973fd3ce381b5ecafe37c8fe3727ffc5ad90507f..d8403a93fa1f7c5b203d8914e2b162ced0818720 100644
>--- a/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
>+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
>@@ -713,8 +713,11 @@ void AssemblyHelpers::emitConvertValueToBoolean(VM& vm, JSValueRegs value, GPRRe
>         done.append(jump());
> 
>     isString.link(this);
>+    move(TrustedImmPtr(jsEmptyString(&vm)), result);
>+    comparePtr(invert ? Equal : NotEqual, value.payloadGPR(), result, result);
>+    done.append(jump());
>+
>     isBigInt.link(this);
>-    RELEASE_ASSERT(JSString::offsetOfLength() == JSBigInt::offsetOfLength());
>     load32(Address(value.payloadGPR(), JSBigInt::offsetOfLength()), result);
>     compare32(invert ? Equal : NotEqual, result, TrustedImm32(0), result);
>     done.append(jump());
>@@ -800,8 +803,10 @@ AssemblyHelpers::JumpList AssemblyHelpers::branchIfValue(VM& vm, JSValueRegs val
>     }
> 
>     isString.link(this);
>+    truthy.append(branchPtr(invert ? Equal : NotEqual, value.payloadGPR(), TrustedImmPtr(jsEmptyString(&vm))));
>+    done.append(jump());
>+
>     isBigInt.link(this);
>-    RELEASE_ASSERT(JSString::offsetOfLength() == JSBigInt::offsetOfLength());
>     truthy.append(branchTest32(invert ? Zero : NonZero, Address(value.payloadGPR(), JSBigInt::offsetOfLength())));
>     done.append(jump());
> 
>diff --git a/Source/JavaScriptCore/jit/AssemblyHelpers.h b/Source/JavaScriptCore/jit/AssemblyHelpers.h
>index 16e36204b9ce22780780d859b80dde55a23f5be3..5d02449f6de99196e1861360041cbf5be3471ac5 100644
>--- a/Source/JavaScriptCore/jit/AssemblyHelpers.h
>+++ b/Source/JavaScriptCore/jit/AssemblyHelpers.h
>@@ -1075,6 +1075,16 @@ class AssemblyHelpers : public MacroAssembler {
>         return branchDouble(DoubleEqual, fpr, fpr);
>     }
> 
>+    Jump branchIfRopeStringImpl(GPRReg stringImplGPR)
>+    {
>+        return branchTestPtr(NonZero, stringImplGPR, TrustedImm32(JSString::IsRopeInPointer));
>+    }
>+
>+    Jump branchIfNotRopeStringImpl(GPRReg stringImplGPR)
>+    {
>+        return branchTestPtr(Zero, stringImplGPR, TrustedImm32(JSString::IsRopeInPointer));
>+    }
>+
>     static Address addressForByteOffset(ptrdiff_t byteOffset)
>     {
>         return Address(GPRInfo::callFrameRegister, byteOffset);
>diff --git a/Source/JavaScriptCore/jit/JITInlines.h b/Source/JavaScriptCore/jit/JITInlines.h
>index e260c11fecf3eb4715d0dcc541b6ab0bd4774dcd..0ae4d50cd264475a60dc707924ef909db424c7bd 100644
>--- a/Source/JavaScriptCore/jit/JITInlines.h
>+++ b/Source/JavaScriptCore/jit/JITInlines.h
>@@ -94,9 +94,9 @@ ALWAYS_INLINE void JIT::emitPutIntToCallFrameHeader(RegisterID from, int entry)
> ALWAYS_INLINE void JIT::emitLoadCharacterString(RegisterID src, RegisterID dst, JumpList& failures)
> {
>     failures.append(branchIfNotString(src));
>-    failures.append(branch32(NotEqual, MacroAssembler::Address(src, JSString::offsetOfLength()), TrustedImm32(1)));
>     loadPtr(MacroAssembler::Address(src, JSString::offsetOfValue()), dst);
>-    failures.append(branchTest32(Zero, dst));
>+    failures.append(branchIfRopeStringImpl(dst));
>+    failures.append(branch32(NotEqual, MacroAssembler::Address(dst, StringImpl::lengthMemoryOffset()), TrustedImm32(1)));
>     loadPtr(MacroAssembler::Address(dst, StringImpl::flagsOffset()), regT1);
>     loadPtr(MacroAssembler::Address(dst, StringImpl::dataOffset()), dst);
> 
>diff --git a/Source/JavaScriptCore/jit/ThunkGenerators.cpp b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
>index 38ec6a0dd95758930cd58f21cc43d0bde48e4bf4..68d6b5ed502fbe35a861a2707a31a2a4b4aeafa9 100644
>--- a/Source/JavaScriptCore/jit/ThunkGenerators.cpp
>+++ b/Source/JavaScriptCore/jit/ThunkGenerators.cpp
>@@ -636,9 +636,9 @@ MacroAssemblerCodeRef<JITThunkPtrTag> stringGetByValGenerator(VM* vm)
>     jit.tagReturnAddress();
> 
>     // Load string length to regT2, and start the process of loading the data pointer into regT0
>-    jit.load32(JSInterfaceJIT::Address(stringGPR, JSString::offsetOfLength()), scratchGPR);
>     jit.loadPtr(JSInterfaceJIT::Address(stringGPR, JSString::offsetOfValue()), stringGPR);
>-    failures.append(jit.branchTestPtr(JSInterfaceJIT::Zero, stringGPR));
>+    failures.append(jit.branchIfRopeStringImpl(stringGPR));
>+    jit.load32(JSInterfaceJIT::Address(stringGPR, StringImpl::lengthMemoryOffset()), scratchGPR);
> 
>     // Do an unsigned compare to simultaneously filter negative indices as well as indices that are too large
>     failures.append(jit.branch32(JSInterfaceJIT::AboveOrEqual, indexGPR, scratchGPR));
>@@ -675,9 +675,9 @@ static void stringCharLoad(SpecializedThunkJIT& jit)
>     jit.loadJSStringArgument(SpecializedThunkJIT::ThisArgument, SpecializedThunkJIT::regT0);
> 
>     // Load string length to regT2, and start the process of loading the data pointer into regT0
>-    jit.load32(MacroAssembler::Address(SpecializedThunkJIT::regT0, JSString::offsetOfLength()), SpecializedThunkJIT::regT2);
>     jit.loadPtr(MacroAssembler::Address(SpecializedThunkJIT::regT0, JSString::offsetOfValue()), SpecializedThunkJIT::regT0);
>-    jit.appendFailure(jit.branchTest32(MacroAssembler::Zero, SpecializedThunkJIT::regT0));
>+    jit.appendFailure(jit.branchIfRopeStringImpl(SpecializedThunkJIT::regT0));
>+    jit.load32(MacroAssembler::Address(SpecializedThunkJIT::regT0, StringImpl::lengthMemoryOffset()), SpecializedThunkJIT::regT2);
> 
>     // load index
>     jit.loadInt32Argument(0, SpecializedThunkJIT::regT1); // regT1 contains the index
>diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
>index d86857dcf65455f06ac420196730600f0d9926ad..339aca9ef84634fb2ae5cb44117a1ed0b4ee409a 100644
>--- a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
>+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
>@@ -477,6 +477,7 @@ const ModuleCode = constexpr ModuleCode
> const LLIntReturnPC = ArgumentCount + TagOffset
> 
> # String flags.
>+const IsRopeInPointer = constexpr JSString::IsRopeInPointer
> const HashFlags8BitBuffer = constexpr StringImpl::s_hashFlag8BitBuffer
> 
> # Copied from PropertyOffset.h
>diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
>index 9bb10d1ddb03f049c31168338b3a438f8692885c..a3705803d4055d157ffdd4d2097974aff88b0df9 100644
>--- a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
>+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
>@@ -1788,9 +1788,9 @@ llintOpWithJump(op_switch_char, OpSwitchChar, macro (size, get, jump, dispatch)
>     addp t3, t2
>     bineq t1, CellTag, .opSwitchCharFallThrough
>     bbneq JSCell::m_type[t0], StringType, .opSwitchCharFallThrough
>-    bineq JSString::m_length[t0], 1, .opSwitchCharFallThrough
>-    loadp JSString::m_value[t0], t0
>-    btpz t0, .opSwitchOnRope
>+    loadp JSString::m_fiber[t0], t0
>+    btpnz t0, IsRopeInPointer, .opSwitchOnRope
>+    bineq StringImpl::m_length[t0], 1, .opSwitchCharFallThrough
>     loadp StringImpl::m_data8[t0], t1
>     btinz StringImpl::m_hashAndFlags[t0], HashFlags8BitBuffer, .opSwitchChar8Bit
>     loadh [t1], t0
>diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
>index 7ba94b68c6c2dc38fab1fc8b1d4110991417361e..c063383303b2ecc55ab956f989b2abbf04dfcd4c 100644
>--- a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
>+++ b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
>@@ -1873,9 +1873,9 @@ llintOpWithJump(op_switch_char, OpSwitchChar, macro (size, get, jump, dispatch)
>     addp t3, t2
>     btqnz t1, tagMask, .opSwitchCharFallThrough
>     bbneq JSCell::m_type[t1], StringType, .opSwitchCharFallThrough
>-    bineq JSString::m_length[t1], 1, .opSwitchCharFallThrough
>-    loadp JSString::m_value[t1], t0
>-    btpz t0, .opSwitchOnRope
>+    loadp JSString::m_fiber[t1], t0
>+    btpnz t0, IsRopeInPointer, .opSwitchOnRope
>+    bineq StringImpl::m_length[t0], 1, .opSwitchCharFallThrough
>     loadp StringImpl::m_data8[t0], t1
>     btinz StringImpl::m_hashAndFlags[t0], HashFlags8BitBuffer, .opSwitchChar8Bit
>     loadh [t1], t0
>diff --git a/Source/JavaScriptCore/runtime/JSString.cpp b/Source/JavaScriptCore/runtime/JSString.cpp
>index d95e25cd02c7d5d32fa312410e97e6c512941298..7a82a152b3ad4d30993f9fb5a37affd65bb66cf7 100644
>--- a/Source/JavaScriptCore/runtime/JSString.cpp
>+++ b/Source/JavaScriptCore/runtime/JSString.cpp
>@@ -40,13 +40,20 @@ Structure* JSString::createStructure(VM& vm, JSGlobalObject* globalObject, JSVal
>     return Structure::create(vm, globalObject, proto, TypeInfo(StringType, StructureFlags), info());
> }
> 
>+JSString* JSString::createEmptyString(VM& vm)
>+{
>+    JSString* newString = new (NotNull, allocateCell<JSString>(vm.heap)) JSString(vm, *StringImpl::empty());
>+    newString->finishCreation(vm);
>+    return newString;
>+}
>+
> template<>
> void JSRopeString::RopeBuilder<RecordOverflow>::expand()
> {
>     RELEASE_ASSERT(!this->hasOverflowed());
>     ASSERT(m_index == JSRopeString::s_maxInternalRopeLength);
>     JSString* jsString = m_jsString;
>-    m_jsString = jsStringBuilder(&m_vm);
>+    m_jsString = JSRopeString::createNullForRopeBuilder(m_vm);
>     m_index = 0;
>     append(jsString);
> }
>@@ -61,14 +68,16 @@ void JSString::dumpToStream(const JSCell* cell, PrintStream& out)
>     VM& vm = *cell->vm();
>     const JSString* thisObject = jsCast<const JSString*>(cell);
>     out.printf("<%p, %s, [%u], ", thisObject, thisObject->className(vm), thisObject->length());
>-    if (thisObject->isRope())
>+    uintptr_t pointer = thisObject->m_fiber;
>+    if (pointer & IsRopeInPointer)
>         out.printf("[rope]");
>     else {
>-        WTF::StringImpl* ourImpl = thisObject->m_value.impl();
>-        if (ourImpl->is8Bit())
>-            out.printf("[8 %p]", ourImpl->characters8());
>-        else
>-            out.printf("[16 %p]", ourImpl->characters16());
>+        if (WTF::StringImpl* ourImpl = bitwise_cast<StringImpl*>(pointer)) {
>+            if (ourImpl->is8Bit())
>+                out.printf("[8 %p]", ourImpl->characters8());
>+            else
>+                out.printf("[16 %p]", ourImpl->characters16());
>+        }
>     }
>     out.printf(">");
> }
>@@ -86,9 +95,10 @@ bool JSString::equalSlowCase(ExecState* exec, JSString* other) const
> size_t JSString::estimatedSize(JSCell* cell, VM& vm)
> {
>     JSString* thisObject = asString(cell);
>-    if (thisObject->isRope())
>+    uintptr_t pointer = thisObject->m_fiber;
>+    if (pointer & IsRopeInPointer)
>         return Base::estimatedSize(cell, vm);
>-    return Base::estimatedSize(cell, vm) + thisObject->m_value.impl()->costDuringGC();
>+    return Base::estimatedSize(cell, vm) + bitwise_cast<StringImpl*>(pointer)->costDuringGC();
> }
> 
> void JSString::visitChildren(JSCell* cell, SlotVisitor& visitor)
>@@ -96,20 +106,36 @@
>     JSString* thisObject = asString(cell);
>     Base::visitChildren(thisObject, visitor);
> 
>-    if (thisObject->isRope())
>-        static_cast<JSRopeString*>(thisObject)->visitFibers(visitor);
>-    if (StringImpl* impl = thisObject->m_value.impl())
>-        visitor.reportExtraMemoryVisited(impl->costDuringGC());
>-}
>-
>-void JSRopeString::visitFibers(SlotVisitor& visitor)
>-{
>-    if (isSubstring()) {
>-        visitor.append(substringBase());
>+    uintptr_t pointer = thisObject->m_fiber;
>+    if (pointer & IsRopeInPointer) {
>+        if ((pointer & JSRopeString::stringMask) == JSRopeString::substringSentinel()) {
>+            visitor.appendUnbarriered(static_cast<JSRopeString*>(thisObject)->fiber1());
>+            return;
>+        }
>+        for (unsigned index = 0; index < JSRopeString::s_maxInternalRopeLength; ++index) {
>+            JSString* fiber = nullptr;
>+            switch (index) {
>+            case 0:
>+                fiber = bitwise_cast<JSString*>(pointer & JSRopeString::stringMask);
>+                break;
>+            case 1:
>+                fiber = static_cast<JSRopeString*>(thisObject)->fiber1();
>+                break;
>+            case 2:
>+                fiber = static_cast<JSRopeString*>(thisObject)->fiber2();
>+                break;
>+            default:
>+                ASSERT_NOT_REACHED();
>+                return;
>+            }
>+            if (!fiber)
>+                break;
>+            visitor.appendUnbarriered(fiber);
>+        }
>         return;
>     }
>-    for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i)
>-        visitor.append(fiber(i));
>+    if (StringImpl* impl = bitwise_cast<StringImpl*>(pointer))
>+        visitor.reportExtraMemoryVisited(impl->costDuringGC());
> }
> 
> static const unsigned maxLengthForOnStackResolve = 2048;
>@@ -117,7 +143,7 @@ static const unsigned maxLengthForOnStackResolve = 2048;
> void JSRopeString::resolveRopeInternal8(LChar* buffer) const
> {
>     if (isSubstring()) {
>-        StringImpl::copyCharacters(buffer, substringBase()->m_value.characters8() + substringOffset(), length());
>+        StringImpl::copyCharacters(buffer, substringBase()->valueInternal().characters8() + substringOffset(), length());
>         return;
>     }
> 
>@@ -135,7 +161,7 @@ void JSRopeString::resolveRopeInternal8NoSubstring(LChar* buffer) const
> 
>     LChar* position = buffer;
>     for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
>-        const StringImpl& fiberString = *fiber(i)->m_value.impl();
>+        const StringImpl& fiberString = *fiber(i)->valueInternal().impl();
>         unsigned length = fiberString.length();
>         StringImpl::copyCharacters(position, fiberString.characters8(), length);
>         position += length;
>@@ -147,7 +173,7 @@ void JSRopeString::resolveRopeInternal16(UChar* buffer) const
> {
>     if (isSubstring()) {
>         StringImpl::copyCharacters(
>-            buffer, substringBase()->m_value.characters16() + substringOffset(), length());
>+            buffer, substringBase()->valueInternal().characters16() + substringOffset(), length());
>         return;
>     }
> 
>@@ -165,7 +191,7 @@ void JSRopeString::resolveRopeInternal16NoSubstring(UChar* buffer) const
> 
>     UChar* position = buffer;
>     for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) {
>-        const StringImpl& fiberString = *fiber(i)->m_value.impl();
>+        const StringImpl& fiberString = *fiber(i)->valueInternal().impl();
>         unsigned length = fiberString.length();
>         if (fiberString.is8Bit())
>             StringImpl::copyCharacters(position, fiberString.characters8(), length);
>@@ -182,67 +208,67 @@ void JSRopeString::resolveRopeToAtomicString(ExecState* exec) const
>     auto scope = DECLARE_THROW_SCOPE(vm);
> 
>     if (length() > maxLengthForOnStackResolve) {
>-        resolveRope(exec);
>-        RETURN_IF_EXCEPTION(scope, void());
>-        m_value = AtomicString(m_value);
>-        setIs8Bit(m_value.impl()->is8Bit());
>+        scope.release();
>+        resolveRopeWithFunction(exec, [&] (Ref<StringImpl>&& newImpl) {
>+            return AtomicStringImpl::add(newImpl.ptr());
>+        });
>         return;
>     }
> 
>     if (is8Bit()) {
>         LChar buffer[maxLengthForOnStackResolve];
>         resolveRopeInternal8(buffer);
>-        m_value = AtomicString(buffer, length());
>-        setIs8Bit(m_value.impl()->is8Bit());
>+        convertToNonRope(AtomicStringImpl::add(buffer, length()));
>     } else {
>         UChar buffer[maxLengthForOnStackResolve];
>         resolveRopeInternal16(buffer);
>-        m_value = AtomicString(buffer, length());
>-        setIs8Bit(m_value.impl()->is8Bit());
>+        convertToNonRope(AtomicStringImpl::add(buffer, length()));
>     }
> 
>-    clearFibers();
>-
>     // If we resolved a string that didn't previously exist, notify the heap that we've grown.
>-    if (m_value.impl()->hasOneRef())
>-        vm.heap.reportExtraMemoryAllocated(m_value.impl()->cost());
>+    if (valueInternal().impl()->hasOneRef())
>+        vm.heap.reportExtraMemoryAllocated(valueInternal().impl()->cost());
> }
> 
>-void JSRopeString::clearFibers() const
>+inline void JSRopeString::convertToNonRope(String&& string) const
> {
>-    for (size_t i = 0; i < s_maxInternalRopeLength; ++i)
>-        u[i].number = 0;
>+    // Concurrent compiler threads can access the String held by a JSString. So we always emit a
>+    // store-store barrier here to ensure concurrent compiler threads see the initialized String.
>+    ASSERT(JSString::isRope());
>+    WTF::storeStoreFence();
>+    new (&uninitializedValueInternal()) String(WTFMove(string));
>+    ASSERT(!JSString::isRope());
> }
> 
> RefPtr<AtomicStringImpl> JSRopeString::resolveRopeToExistingAtomicString(ExecState* exec) const
> {
>+    VM& vm = exec->vm();
>+    auto scope = DECLARE_THROW_SCOPE(vm);
>+
>     if (length() > maxLengthForOnStackResolve) {
>-        resolveRope(exec);
>-        if (RefPtr<AtomicStringImpl> existingAtomicString = AtomicStringImpl::lookUp(m_value.impl())) {
>-            m_value = *existingAtomicString;
>-            setIs8Bit(m_value.impl()->is8Bit());
>-            clearFibers();
>-            return existingAtomicString;
>-        }
>-        return nullptr;
>+        RefPtr<AtomicStringImpl> existingAtomicString;
>+        resolveRopeWithFunction(exec, [&] (Ref<StringImpl>&& newImpl) -> Ref<StringImpl> {
>+            existingAtomicString = AtomicStringImpl::lookUp(newImpl.ptr());
>+            if (existingAtomicString)
>+                return makeRef(*existingAtomicString);
>+            return WTFMove(newImpl);
>+        });
>+        RETURN_IF_EXCEPTION(scope, nullptr);
>+        return existingAtomicString;
>     }
> 
>     if (is8Bit()) {
>         LChar buffer[maxLengthForOnStackResolve];
>         resolveRopeInternal8(buffer);
>         if (RefPtr<AtomicStringImpl> existingAtomicString = AtomicStringImpl::lookUp(buffer, length())) {
>-            m_value = *existingAtomicString;
>-            setIs8Bit(m_value.impl()->is8Bit());
>-            clearFibers();
>+            convertToNonRope(*existingAtomicString);
>             return existingAtomicString;
>         }
>     } else {
>         UChar buffer[maxLengthForOnStackResolve];
>         resolveRopeInternal16(buffer);
>         if (RefPtr<AtomicStringImpl> existingAtomicString = AtomicStringImpl::lookUp(buffer, length())) {
>-            m_value = *existingAtomicString;
>-            setIs8Bit(m_value.impl()->is8Bit());
>-            clearFibers();
>+            convertToNonRope(*existingAtomicString);
>             return existingAtomicString;
>         }
>     }
>@@ -250,44 +276,50 @@ RefPtr<AtomicStringImpl> JSRopeString::resolveRopeToExistingAtomicString(ExecSta
>     return nullptr;
> }
> 
>-void JSRopeString::resolveRope(ExecState* nullOrExecForOOM) const
>+template<typename Function>
>+void JSRopeString::resolveRopeWithFunction(ExecState* nullOrExecForOOM, Function&& function) const
> {
>     ASSERT(isRope());
> 
>+    VM& vm = *this->vm();
>     if (isSubstring()) {
>         ASSERT(!substringBase()->isRope());
>-        m_value = substringBase()->m_value.substringSharingImpl(substringOffset(), length());
>-        substringBase().clear();
>+        auto newImpl = substringBase()->valueInternal().substringSharingImpl(substringOffset(), length());
>+        convertToNonRope(function(newImpl.releaseImpl().releaseNonNull()));
>         return;
>     }
> 
>     if (is8Bit()) {
>         LChar* buffer;
>-        if (auto newImpl = StringImpl::tryCreateUninitialized(length(), buffer)) {
>-            Heap::heap(this)->reportExtraMemoryAllocated(newImpl->cost());
>-            m_value = WTFMove(newImpl);
>-        } else {
>+        auto newImpl = StringImpl::tryCreateUninitialized(length(), buffer);
>+        if (!newImpl) {
>             outOfMemory(nullOrExecForOOM);
>             return;
>         }
>+        vm.heap.reportExtraMemoryAllocated(newImpl->cost());
>+
>         resolveRopeInternal8NoSubstring(buffer);
>-        clearFibers();
>-        ASSERT(!isRope());
>+        convertToNonRope(function(newImpl.releaseNonNull()));
>         return;
>     }
> 
>     UChar* buffer;
>-    if (auto newImpl = StringImpl::tryCreateUninitialized(length(), buffer)) {
>-        Heap::heap(this)->reportExtraMemoryAllocated(newImpl->cost());
>-        m_value = WTFMove(newImpl);
>-    } else {
>+    auto newImpl = StringImpl::tryCreateUninitialized(length(), buffer);
>+    if (!newImpl) {
>         outOfMemory(nullOrExecForOOM);
>         return;
>     }
>+    vm.heap.reportExtraMemoryAllocated(newImpl->cost());
> 
>     resolveRopeInternal16NoSubstring(buffer);
>-    clearFibers();
>-    ASSERT(!isRope());
>+    convertToNonRope(function(newImpl.releaseNonNull()));
>+}
>+
>+void JSRopeString::resolveRope(ExecState* nullOrExecForOOM) const
>+{
>+    resolveRopeWithFunction(nullOrExecForOOM, [] (Ref<StringImpl>&& newImpl) {
>+        return WTFMove(newImpl);
>+    });
> }
> 
> // Overview: These functions convert a JSString from holding a string in rope form
>@@ -306,7 +338,7 @@ void JSRopeString::resolveRopeSlowCase8(LChar* buffer) const
>     Vector<JSString*, 32, UnsafeVectorOverflow> workQueue; // Putting strings into a Vector is only OK because there are no GC points in this method.
> 
>     for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i)
>-        workQueue.append(fiber(i).get());
>+        workQueue.append(fiber(i));
> 
>     while (!workQueue.isEmpty()) {
>         JSString* currentFiber = workQueue.last();
>@@ -318,15 +350,15 @@ void JSRopeString::resolveRopeSlowCase8(LChar* buffer) const
>         JSRopeString* currentFiberAsRope = static_cast<JSRopeString*>(currentFiber);
>         if (!currentFiberAsRope->isSubstring()) {
>             for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->fiber(i); ++i)
>-                workQueue.append(currentFiberAsRope->fiber(i).get());
>+                workQueue.append(currentFiberAsRope->fiber(i));
>             continue;
>         }
>         ASSERT(!currentFiberAsRope->substringBase()->isRope());
>         characters =
>-            currentFiberAsRope->substringBase()->m_value.characters8() +
>+            currentFiberAsRope->substringBase()->valueInternal().characters8() +
>             currentFiberAsRope->substringOffset();
>     } else
>         characters = currentFiber->m_value.characters8();
>@@ -342,7 +374,7 @@ void JSRopeString::resolveRopeSlowCase(UChar* buffer) const
>     Vector<JSString*, 32, UnsafeVectorOverflow> workQueue; // These strings are kept alive by the parent rope, so using a Vector is OK.
> > for (size_t i = 0; i < s_maxInternalRopeLength && fiber(i); ++i) >- workQueue.append(fiber(i).get()); >+ workQueue.append(fiber(i)); > > while (!workQueue.isEmpty()) { > JSString* currentFiber = workQueue.last(); >@@ -353,7 +385,7 @@ void JSRopeString::resolveRopeSlowCase(UChar* buffer) const > if (currentFiberAsRope->isSubstring()) { > ASSERT(!currentFiberAsRope->substringBase()->isRope()); > StringImpl* string = static_cast<StringImpl*>( >- currentFiberAsRope->substringBase()->m_value.impl()); >+ currentFiberAsRope->substringBase()->valueInternal().impl()); > unsigned offset = currentFiberAsRope->substringOffset(); > unsigned length = currentFiberAsRope->length(); > position -= length; >@@ -364,11 +396,11 @@ void JSRopeString::resolveRopeSlowCase(UChar* buffer) const > continue; > } > for (size_t i = 0; i < s_maxInternalRopeLength && currentFiberAsRope->fiber(i); ++i) >- workQueue.append(currentFiberAsRope->fiber(i).get()); >+ workQueue.append(currentFiberAsRope->fiber(i)); > continue; > } > >- StringImpl* string = static_cast<StringImpl*>(currentFiber->m_value.impl()); >+ StringImpl* string = static_cast<StringImpl*>(currentFiber->valueInternal().impl()); > unsigned length = string->length(); > position -= length; > if (string->is8Bit()) >@@ -382,9 +414,13 @@ void JSRopeString::resolveRopeSlowCase(UChar* buffer) const > > void JSRopeString::outOfMemory(ExecState* nullOrExecForOOM) const > { >- clearFibers(); > ASSERT(isRope()); >- ASSERT(m_value.isNull()); >+ auto* string = const_cast<JSRopeString*>(this); >+ string->clearFiber0(); >+ string->clearFiber1(); >+ string->clearFiber2(); >+ string->setLength(0); // Invalid status. This JSRopeString is never shown to users. 
>+ string->convertToNonRope(*StringImpl::empty()); > if (nullOrExecForOOM) { > VM& vm = nullOrExecForOOM->vm(); > auto scope = DECLARE_THROW_SCOPE(vm); >diff --git a/Source/JavaScriptCore/runtime/JSString.h b/Source/JavaScriptCore/runtime/JSString.h >index a5a63f5a4607c0dbe40cf5f727a23767e285bff1..b2e9a9dbdd174befe7b08dc1a0727880d5cb11e3 100644 >--- a/Source/JavaScriptCore/runtime/JSString.h >+++ b/Source/JavaScriptCore/runtime/JSString.h >@@ -35,6 +35,10 @@ > > namespace JSC { > >+#if ASSERT_DISABLED >+IGNORE_RETURN_TYPE_WARNINGS_BEGIN >+#endif >+ > class JSString; > class JSRopeString; > class LLIntOffsetsExtractor; >@@ -61,8 +65,6 @@ JSString* jsNontrivialString(ExecState*, String&&); > JSString* jsOwnedString(VM*, const String&); > JSString* jsOwnedString(ExecState*, const String&); > >-JSRopeString* jsStringBuilder(VM*); >- > bool isJSString(JSCell*); > bool isJSString(JSValue); > JSString* asString(JSValue); >@@ -72,6 +74,19 @@ struct StringViewWithUnderlyingString { > String underlyingString; > }; > >+ >+// On 64bit architectures (we also require unaligned access and little endianness), JSString and JSRopeString have an interesting memory layout that makes sizeof(JSString) == 16 >+// and sizeof(JSRopeString) == 32. JSString has only one pointer, which holds the String; length() and is8Bit() queries go to the StringImpl. In JSRopeString, we reuse >+// that pointer slot for the 1st fiber. JSRopeString has three fibers, which would naively make it 48 bytes. To keep the length and is8Bit flag in JSRopeString, we >+// encode this information into the fiber pointers: the is8Bit flag is encoded in the 1st fiber pointer, the length is embedded directly, and two fibers are compressed >+// into 12 bytes (48bit effective addresses). The isRope bit is encoded in the first fiber's LSB. We must load and store a fiber pointer with a single instruction so that >+// the entire pointer is visible to the concurrent collector.
>+// >+// 0 8 10 16 32 >+// JSString [ ID ][ header ][ String pointer 0] >+// JSRopeString [ ID ][ header ][ flags ][ 1st fiber 1][ length ][ 2nd low32 ][2nd upper16][ 3rd fiber ] >+// ^ >+// isRope bit > class JSString : public JSCell { > public: > friend class JIT; >@@ -80,6 +95,7 @@ class JSString : public JSCell { > friend class JSRopeString; > friend class MarkStack; > friend class SlotVisitor; >+ friend class SmallStrings; > > typedef JSCell Base; > static const unsigned StructureFlags = Base::StructureFlags | OverridesGetOwnPropertySlot | InterceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero | StructureIsImmortal | OverridesToThis; >@@ -101,44 +117,58 @@ class JSString : public JSCell { > static constexpr unsigned MaxLength = std::numeric_limits<int32_t>::max(); > static_assert(MaxLength == String::MaxLength, ""); > >+ static constexpr uintptr_t IsRopeInPointer = 0x1; >+ > private: >+ String& uninitializedValueInternal() const >+ { >+ return *bitwise_cast<String*>(&m_fiber); >+ } >+ >+ String& valueInternal() const >+ { >+ ASSERT(!isRope()); >+ return uninitializedValueInternal(); >+ } >+ > JSString(VM& vm, Ref<StringImpl>&& value) > : JSCell(vm, vm.stringStructure.get()) >- , m_value(WTFMove(value)) > { >+ new (&uninitializedValueInternal()) String(WTFMove(value)); > } > > JSString(VM& vm) > : JSCell(vm, vm.stringStructure.get()) >+ , m_fiber(IsRopeInPointer) > { > } > > void finishCreation(VM& vm, unsigned length) > { >- ASSERT(!m_value.isNull()); >+ ASSERT_UNUSED(length, length > 0); >+ ASSERT(!valueInternal().isNull()); > Base::finishCreation(vm); >- setLength(length); >- setIs8Bit(m_value.impl()->is8Bit()); > } > > void finishCreation(VM& vm, unsigned length, size_t cost) > { >- ASSERT(!m_value.isNull()); >+ ASSERT_UNUSED(length, length > 0); >+ ASSERT(!valueInternal().isNull()); > Base::finishCreation(vm); >- setLength(length); >- setIs8Bit(m_value.impl()->is8Bit()); > vm.heap.reportExtraMemoryAllocated(cost); > } > >+ static JSString* 
createEmptyString(VM&); >+ > protected: > void finishCreation(VM& vm) > { > Base::finishCreation(vm); >- setLength(0); >- setIs8Bit(true); > } > > public: >+ ~JSString(); >+ > static JSString* create(VM& vm, Ref<StringImpl>&& value) > { > unsigned length = value->length(); >@@ -165,7 +195,7 @@ class JSString : public JSCell { > const String& value(ExecState*) const; > inline const String& tryGetValue(bool allocationAllowed = true) const; > const StringImpl* tryGetValueImpl() const; >- ALWAYS_INLINE unsigned length() const { return m_length; } >+ ALWAYS_INLINE unsigned length() const; > > JSValue toPrimitive(ExecState*, PreferredPrimitiveType) const; > bool toBoolean() const { return !!length(); } >@@ -182,9 +212,7 @@ class JSString : public JSCell { > > static Structure* createStructure(VM&, JSGlobalObject*, JSValue); > >- static size_t offsetOfLength() { return OBJECT_OFFSETOF(JSString, m_length); } >- static size_t offsetOfFlags() { return OBJECT_OFFSETOF(JSString, m_flags); } >- static size_t offsetOfValue() { return OBJECT_OFFSETOF(JSString, m_value); } >+ static ptrdiff_t offsetOfValue() { return OBJECT_OFFSETOF(JSString, m_fiber); } > > DECLARE_EXPORT_INFO; > >@@ -192,45 +220,26 @@ class JSString : public JSCell { > static size_t estimatedSize(JSCell*, VM&); > static void visitChildren(JSCell*, SlotVisitor&); > >- enum { >- Is8Bit = 1u >- }; >+ ALWAYS_INLINE bool isRope() const >+ { >+ return m_fiber & IsRopeInPointer; >+ } > >- bool isRope() const { return m_value.isNull(); } >+ bool is8Bit() const; > > protected: > friend class JSValue; > > JS_EXPORT_PRIVATE bool equalSlowCase(ExecState*, JSString* other) const; > bool isSubstring() const; >- bool is8Bit() const { return m_flags & Is8Bit; } >- void setIs8Bit(bool flag) const >- { >- if (flag) >- m_flags |= Is8Bit; >- else >- m_flags &= ~Is8Bit; >- } > >- ALWAYS_INLINE void setLength(unsigned length) >- { >- ASSERT(length <= MaxLength); >- m_length = length; >- } >+ mutable uintptr_t m_fiber; > > private: >- 
// A string is represented either by a String or a rope of fibers. >- unsigned m_length { 0 }; >- mutable uint16_t m_flags { 0 }; >- // The poison is strategically placed and holds a value such that the first >- // 64 bits of JSString look like a double JSValue. >- mutable String m_value; >- > friend class LLIntOffsetsExtractor; > > static JSValue toThis(JSCell*, ExecState*, ECMAMode); > >- String& string() { ASSERT(!isRope()); return m_value; } > StringView unsafeView(ExecState*) const; > > friend JSString* jsString(ExecState*, JSString*, JSString*); >@@ -241,16 +250,144 @@ class JSString : public JSCell { > // from JSStringSubspace:: > class JSRopeString final : public JSString { > friend class JSString; >+public: >+ static constexpr unsigned Is8Bit = StringImpl::flagIs8Bit(); >+ static_assert(Is8Bit == 0b100, ""); > >- friend JSRopeString* jsStringBuilder(VM*); >+#if !CPU(NEEDS_ALIGNED_ACCESS) && CPU(ADDRESS64) >+ static_assert(sizeof(uintptr_t) == sizeof(uint64_t), ""); >+ static constexpr uintptr_t flagMask = 0xffff000000000000ULL; >+ static constexpr uintptr_t stringMask = ~(flagMask | IsRopeInPointer); >+ static constexpr uintptr_t Is8BitInPointer = static_cast<uintptr_t>(Is8Bit) << 48; >+ >+ class CompactFibers { >+ public: >+ JSString* fiber1() const >+ { >+ uintptr_t* target = bitwise_cast<uintptr_t*>(&m_fiber1Lower); >+ Fiber1 data = bitwise_cast<Fiber1>(*target); >+ return bitwise_cast<JSString*>(static_cast<uintptr_t>(data.m_fiber1Lower) | (static_cast<uintptr_t>(data.m_fiber1Upper) << 32)); >+ } >+ >+ void setFiber1(JSString* fiber) >+ { >+ uintptr_t pointer = bitwise_cast<uintptr_t>(fiber); >+ uintptr_t* target = bitwise_cast<uintptr_t*>(&m_fiber1Lower); >+ Fiber1 data = bitwise_cast<Fiber1>(*target); >+ data.m_fiber1Lower = static_cast<uint32_t>(pointer); >+ data.m_fiber1Upper = static_cast<uint16_t>(pointer >> 32); >+ *target = bitwise_cast<uintptr_t>(data); >+ } >+ >+ JSString* fiber2() const >+ { >+ uintptr_t* target = 
bitwise_cast<uintptr_t*>(&m_fiber1Upper); >+ Fiber2 data = bitwise_cast<Fiber2>(*target); >+ return bitwise_cast<JSString*>(static_cast<uintptr_t>(data.m_fiber2Lower) | (static_cast<uintptr_t>(data.m_fiber2Upper) << 32)); >+ } >+ void setFiber2(JSString* fiber) >+ { >+ uintptr_t pointer = bitwise_cast<uintptr_t>(fiber); >+ uintptr_t* target = bitwise_cast<uintptr_t*>(&m_fiber1Upper); >+ Fiber2 data = bitwise_cast<Fiber2>(*target); >+ data.m_fiber2Lower = static_cast<uint32_t>(pointer); >+ data.m_fiber2Upper = static_cast<uint16_t>(pointer >> 32); >+ *target = bitwise_cast<uintptr_t>(data); >+ } >+ >+ unsigned length() const { return m_length; } >+ void setLength(unsigned length) >+ { >+ m_length = length; >+ } >+ >+ static ptrdiff_t offsetOfLength() { return OBJECT_OFFSETOF(CompactFibers, m_length); } >+ static ptrdiff_t offsetOfFiber1Lower() { return OBJECT_OFFSETOF(CompactFibers, m_fiber1Lower); } >+ static ptrdiff_t offsetOfFiber1Upper() { return OBJECT_OFFSETOF(CompactFibers, m_fiber1Upper); } >+ static ptrdiff_t offsetOfFiber2Lower() { return OBJECT_OFFSETOF(CompactFibers, m_fiber2Lower); } >+ static ptrdiff_t offsetOfFiber2Upper() { return OBJECT_OFFSETOF(CompactFibers, m_fiber2Upper); } >+ >+ private: >+ // We must keep these structs' field order in sync with CompactFibers' field order. 
>+ struct Fiber1 { >+ uint32_t m_fiber1Lower; >+ uint16_t m_fiber1Upper; >+ uint16_t m_fiber2Upper; >+ }; >+ static_assert(sizeof(Fiber1) == sizeof(uintptr_t), ""); >+ >+ struct Fiber2 { >+ uint16_t m_fiber1Upper; >+ uint16_t m_fiber2Upper; >+ uint32_t m_fiber2Lower; >+ }; >+ static_assert(sizeof(Fiber2) == sizeof(uintptr_t), ""); >+ >+ uint32_t m_length { 0 }; >+ uint32_t m_fiber1Lower { 0 }; >+ uint16_t m_fiber1Upper { 0 }; >+ uint16_t m_fiber2Upper { 0 }; >+ uint32_t m_fiber2Lower { 0 }; >+ }; >+ static_assert(sizeof(CompactFibers) == sizeof(void*) * 2, ""); >+#else >+ static constexpr uintptr_t stringMask = ~(IsRopeInPointer); >+ >+ class CompactFibers { >+ public: >+ JSString* fiber1() const >+ { >+ return m_fiber1; >+ } >+ void setFiber1(JSString* fiber) >+ { >+ m_fiber1 = fiber; >+ } >+ >+ JSString* fiber2() const >+ { >+ return m_fiber2; >+ } >+ void setFiber2(JSString* fiber) >+ { >+ m_fiber2 = fiber; >+ } >+ >+ unsigned length() const { return m_length; } >+ void setLength(unsigned length) >+ { >+ m_length = length; >+ } >+ >+ void setIs8Bit(bool flag) >+ { >+ if (flag) >+ m_flags |= Is8Bit; >+ else >+ m_flags &= ~Is8Bit; >+ } >+ >+ bool is8Bit() >+ { >+ return m_flags & Is8Bit; >+ } >+ >+ static ptrdiff_t offsetOfLength() { return OBJECT_OFFSETOF(CompactFibers, m_length); } >+ >+ private: >+ uint32_t m_length { 0 }; >+ uint32_t m_flags { 0 }; >+ JSString* m_fiber1 { nullptr }; >+ JSString* m_fiber2 { nullptr }; >+ }; >+#endif > >-public: > template <class OverflowHandler = CrashOnOverflow> > class RopeBuilder : public OverflowHandler { > public: > RopeBuilder(VM& vm) > : m_vm(vm) >- , m_jsString(jsStringBuilder(&vm)) >+ , m_jsString(JSRopeString::createNullForRopeBuilder(vm)) > , m_index(0) > { > } >@@ -273,11 +410,13 @@ class JSRopeString final : public JSString { > return true; > } > >- JSRopeString* release() >+ JSString* release() > { > RELEASE_ASSERT(!this->hasOverflowed()); > JSRopeString* tmp = m_jsString; > m_jsString = nullptr; >+ if 
(!tmp->length()) >+ return jsEmptyString(&m_vm); > return tmp; > } > >@@ -295,7 +434,39 @@ class JSRopeString final : public JSString { > size_t m_index; > }; > >+ inline unsigned length() const >+ { >+ return m_compactFibers.length(); >+ } >+ >+ static JSRopeString* createNullForRopeBuilder(VM& vm) >+ { >+ JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm); >+ newString->finishCreation(vm); >+ return newString; >+ } >+ > private: >+ void convertToNonRope(String&&) const; >+ >+ void setIs8Bit(bool flag) const >+ { >+#if !CPU(NEEDS_ALIGNED_ACCESS) && CPU(ADDRESS64) >+ if (flag) >+ m_fiber |= Is8BitInPointer; >+ else >+ m_fiber &= ~Is8BitInPointer; >+#else >+ m_compactFibers.setIs8Bit(flag); >+#endif >+ } >+ >+ ALWAYS_INLINE void setLength(unsigned length) >+ { >+ ASSERT(length <= MaxLength); >+ m_compactFibers.setLength(length); >+ } >+ > ALWAYS_INLINE JSRopeString(VM& vm) > : JSString(vm) > { >@@ -305,24 +476,26 @@ class JSRopeString final : public JSString { > { > Base::finishCreation(vm); > ASSERT(!sumOverflows<int32_t>(s1->length(), s2->length())); >+ setIsSubstring(false); >+ setFiber0(vm, s1); >+ setFiber1(vm, s2); >+ setFiber2(vm, nullptr); > setLength(s1->length() + s2->length()); > setIs8Bit(s1->is8Bit() && s2->is8Bit()); >- setIsSubstring(false); >- fiber(0).set(vm, this, s1); >- fiber(1).set(vm, this, s2); >- fiber(2).clear(); >+ ASSERT((s1->length() + s2->length()) == length()); > } > > void finishCreation(VM& vm, JSString* s1, JSString* s2, JSString* s3) > { > Base::finishCreation(vm); > ASSERT(!sumOverflows<int32_t>(s1->length(), s2->length(), s3->length())); >+ setIsSubstring(false); >+ setFiber0(vm, s1); >+ setFiber1(vm, s2); >+ setFiber2(vm, s3); > setLength(s1->length() + s2->length() + s3->length()); > setIs8Bit(s1->is8Bit() && s2->is8Bit() && s3->is8Bit()); >- setIsSubstring(false); >- fiber(0).set(vm, this, s1); >- fiber(1).set(vm, this, s2); >- fiber(2).set(vm, this, s3); >+ ASSERT((s1->length() + 
s2->length() + s3->length()) == length()); > } > > void finishCreation(VM& vm, ExecState* exec, JSString* base, unsigned offset, unsigned length) >@@ -330,16 +503,14 @@ class JSRopeString final : public JSString { > Base::finishCreation(vm); > RELEASE_ASSERT(!sumOverflows<int32_t>(offset, length)); > RELEASE_ASSERT(offset + length <= base->length()); >- setLength(length); >- setIs8Bit(base->is8Bit()); > setIsSubstring(true); > if (base->isSubstring()) { > JSRopeString* baseRope = jsCast<JSRopeString*>(base); >- substringBase().set(vm, this, baseRope->substringBase().get()); >- substringOffset() = baseRope->substringOffset() + offset; >+ setSubstringBase(vm, baseRope->substringBase()); >+ setSubstringOffset(baseRope->substringOffset() + offset); > } else { >- substringBase().set(vm, this, base); >- substringOffset() = offset; >+ setSubstringBase(vm, base); >+ setSubstringOffset(offset); > > // For now, let's not allow substrings with a rope base. > // Resolve non-substring rope bases so we don't have to deal with it. 
>@@ -347,6 +518,9 @@ class JSRopeString final : public JSString { > if (base->isRope()) > jsCast<JSRopeString*>(base)->resolveRope(exec); > } >+ setLength(length); >+ setIs8Bit(base->is8Bit()); >+ ASSERT(length == this->length()); > } > > ALWAYS_INLINE void finishCreationSubstringOfResolved(VM& vm, JSString* base, unsigned offset, unsigned length) >@@ -354,73 +528,78 @@ class JSRopeString final : public JSString { > Base::finishCreation(vm); > RELEASE_ASSERT(!sumOverflows<int32_t>(offset, length)); > RELEASE_ASSERT(offset + length <= base->length()); >+ setIsSubstring(true); >+ setSubstringBase(vm, base); >+ setSubstringOffset(offset); > setLength(length); > setIs8Bit(base->is8Bit()); >- setIsSubstring(true); >- substringBase().set(vm, this, base); >- substringOffset() = offset; >+ ASSERT(length == this->length()); > } > > void finishCreation(VM& vm) > { > JSString::finishCreation(vm); > setIsSubstring(false); >- fiber(0).clear(); >- fiber(1).clear(); >- fiber(2).clear(); >+ setFiber0(vm, nullptr); >+ setFiber1(vm, nullptr); >+ setFiber2(vm, nullptr); >+ setLength(0); >+ setIs8Bit(true); >+ ASSERT(!length()); > } > > void append(VM& vm, size_t index, JSString* jsString) > { >- fiber(index).set(vm, this, jsString); >- setLength(length() + jsString->length()); >- setIs8Bit(is8Bit() && jsString->is8Bit()); >- } >- >- static JSRopeString* createNull(VM& vm) >- { >- JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm); >- newString->finishCreation(vm); >- return newString; >+ bool is8Bit = this->is8Bit() && jsString->is8Bit(); >+ unsigned length = this->length() + jsString->length(); >+ setFiber(vm, index, jsString); >+ setLength(length); >+ setIs8Bit(is8Bit); >+ ASSERT(length == this->length()); > } > > public: >- static JSString* create(VM& vm, ExecState* exec, JSString* base, unsigned offset, unsigned length) >+ static JSRopeString* create(VM& vm, ExecState* exec, JSString* base, unsigned offset, unsigned length) > { > 
JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm); > newString->finishCreation(vm, exec, base, offset, length); >+ ASSERT(newString->length()); > return newString; > } > >- ALWAYS_INLINE static JSString* createSubstringOfResolved(VM& vm, GCDeferralContext* deferralContext, JSString* base, unsigned offset, unsigned length) >+ ALWAYS_INLINE static JSRopeString* createSubstringOfResolved(VM& vm, GCDeferralContext* deferralContext, JSString* base, unsigned offset, unsigned length) > { > JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap, deferralContext)) JSRopeString(vm); > newString->finishCreationSubstringOfResolved(vm, base, offset, length); >+ ASSERT(newString->length()); > return newString; > } > >- ALWAYS_INLINE static JSString* createSubstringOfResolved(VM& vm, JSString* base, unsigned offset, unsigned length) >- { >- return createSubstringOfResolved(vm, nullptr, base, offset, length); >- } >+ static ptrdiff_t offsetOfLength() { return OBJECT_OFFSETOF(JSRopeString, m_compactFibers) + CompactFibers::offsetOfLength(); } // 32byte width. >+#if !CPU(NEEDS_ALIGNED_ACCESS) && CPU(ADDRESS64) >+ static ptrdiff_t offsetOfFlags() { return offsetOfValue() + sizeof(uint16_t) * 3; } // 16byte width. >+ static ptrdiff_t offsetOfFiber0() { return offsetOfValue(); } >+ static ptrdiff_t offsetOfFiber1Lower() { return OBJECT_OFFSETOF(JSRopeString, m_compactFibers) + CompactFibers::offsetOfFiber1Lower(); } // 32byte width. >+ static ptrdiff_t offsetOfFiber1Upper() { return OBJECT_OFFSETOF(JSRopeString, m_compactFibers) + CompactFibers::offsetOfFiber1Upper(); } // 16byte width. >+ static ptrdiff_t offsetOfFiber2Lower() { return OBJECT_OFFSETOF(JSRopeString, m_compactFibers) + CompactFibers::offsetOfFiber2Lower(); } // 32byte width. >+ static ptrdiff_t offsetOfFiber2Upper() { return OBJECT_OFFSETOF(JSRopeString, m_compactFibers) + CompactFibers::offsetOfFiber2Upper(); } // 16byte width. 
>+#endif > >- void visitFibers(SlotVisitor&); >- >- static ptrdiff_t offsetOfFibers() { return OBJECT_OFFSETOF(JSRopeString, u); } >- >- static const unsigned s_maxInternalRopeLength = 3; >+ static constexpr unsigned s_maxInternalRopeLength = 3; > > private: >- static JSString* create(VM& vm, JSString* s1, JSString* s2) >+ static JSRopeString* create(VM& vm, JSString* s1, JSString* s2) > { > JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm); > newString->finishCreation(vm, s1, s2); >+ ASSERT(newString->length()); > return newString; > } >- static JSString* create(VM& vm, JSString* s1, JSString* s2, JSString* s3) >+ static JSRopeString* create(VM& vm, JSString* s1, JSString* s2, JSString* s3) > { > JSRopeString* newString = new (NotNull, allocateCell<JSRopeString>(vm.heap)) JSRopeString(vm); > newString->finishCreation(vm, s1, s2, s3); >+ ASSERT(newString->length()); > return newString; > } > >@@ -430,6 +609,7 @@ class JSRopeString final : public JSString { > // If nullOrExecForOOM is null, resolveRope() will be do nothing in the event of an OOM error. > // The rope value will remain a null string in that case. 
> JS_EXPORT_PRIVATE void resolveRope(ExecState* nullOrExecForOOM) const; >+ template<typename Function> void resolveRopeWithFunction(ExecState* nullOrExecForOOM, Function&&) const; > JS_EXPORT_PRIVATE void resolveRopeToAtomicString(ExecState*) const; > JS_EXPORT_PRIVATE RefPtr<AtomicStringImpl> resolveRopeToExistingAtomicString(ExecState*) const; > void resolveRopeSlowCase8(LChar*) const; >@@ -439,52 +619,131 @@ class JSRopeString final : public JSString { > void resolveRopeInternal8NoSubstring(LChar*) const; > void resolveRopeInternal16(UChar*) const; > void resolveRopeInternal16NoSubstring(UChar*) const; >- void clearFibers() const; > StringView unsafeView(ExecState*) const; > StringViewWithUnderlyingString viewWithUnderlyingString(ExecState*) const; > >- WriteBarrierBase<JSString>& fiber(unsigned i) const >+ JSString* fiber0() const >+ { >+ return bitwise_cast<JSString*>(m_fiber & stringMask); >+ } >+ >+ JSString* fiber1() const >+ { >+ return m_compactFibers.fiber1(); >+ } >+ >+ JSString* fiber2() const >+ { >+ return m_compactFibers.fiber2(); >+ } >+ >+ JSString* fiber(unsigned i) const > { > ASSERT(!isSubstring()); > ASSERT(i < s_maxInternalRopeLength); >- return u[i].string; >+ switch (i) { >+ case 0: >+ return fiber0(); >+ case 1: >+ return fiber1(); >+ case 2: >+ return fiber2(); >+ } >+ ASSERT_NOT_REACHED(); >+ return nullptr; >+ } >+ >+ void setFiber0(VM& vm, JSString* fiber) >+ { >+ uintptr_t pointer = bitwise_cast<uintptr_t>(fiber); >+ m_fiber = (pointer | (m_fiber & ~stringMask)); >+ vm.heap.writeBarrier(this, fiber); >+ } >+ >+ void setFiber1(VM& vm, JSString* fiber) >+ { >+ m_compactFibers.setFiber1(fiber); >+ vm.heap.writeBarrier(this, fiber); >+ } >+ >+ void setFiber2(VM& vm, JSString* fiber) >+ { >+ m_compactFibers.setFiber2(fiber); >+ vm.heap.writeBarrier(this, fiber); >+ } >+ >+ void setFiber(VM& vm, unsigned i, JSString* fiber) >+ { >+ ASSERT(!isSubstring()); >+ ASSERT(i < s_maxInternalRopeLength); >+ switch (i) { >+ case 0: >+ setFiber0(vm, 
fiber); >+ return; >+ case 1: >+ setFiber1(vm, fiber); >+ return; >+ case 2: >+ setFiber2(vm, fiber); >+ return; >+ } >+ } >+ >+ void clearFiber0() >+ { >+ m_fiber = (m_fiber & ~stringMask); >+ } >+ >+ void clearFiber1() >+ { >+ m_compactFibers.setFiber1(nullptr); >+ } >+ >+ void clearFiber2() >+ { >+ m_compactFibers.setFiber2(nullptr); > } > >- WriteBarrierBase<JSString>& substringBase() const >+ void setSubstringBase(VM& vm, JSString* fiber) > { >- return u[1].string; >+ setFiber1(vm, fiber); > } > >- uintptr_t& substringOffset() const >+ JSString* substringBase() const { return fiber1(); } >+ >+ void setSubstringOffset(unsigned offset) > { >- return u[2].number; >+ // Do not emit a write barrier. This is not actually a JSString*. >+ m_compactFibers.setFiber2(bitwise_cast<JSString*>(static_cast<uintptr_t>(offset))); > } > >- static uintptr_t notSubstringSentinel() >+ unsigned substringOffset() const >+ { >+ return static_cast<unsigned>(bitwise_cast<uintptr_t>(fiber2())); >+ } >+ >+ static constexpr uintptr_t notSubstringSentinel() > { > return 0; > } > >- static uintptr_t substringSentinel() >+ static constexpr uintptr_t substringSentinel() > { >- return 1; >+ return 2; > } > > bool isSubstring() const > { >- return u[0].number == substringSentinel(); >+ return (m_fiber & stringMask) == substringSentinel(); > } > > void setIsSubstring(bool isSubstring) > { >- u[0].number = isSubstring ? 
substringSentinel() : notSubstringSentinel()); > } > >- mutable union { >- uintptr_t number; >- WriteBarrierBase<JSString> string; >- } u[s_maxInternalRopeLength]; >- >+ static_assert(s_maxInternalRopeLength >= 2, ""); >+ mutable CompactFibers m_compactFibers; > > friend JSString* jsString(ExecState*, JSString*, JSString*); > friend JSString* jsString(ExecState*, const String&, JSString*); >@@ -496,9 +755,36 @@ class JSRopeString final : public JSString { > > JS_EXPORT_PRIVATE JSString* jsStringWithCacheSlowCase(VM&, StringImpl&); > >+ALWAYS_INLINE bool JSString::is8Bit() const >+{ >+ uintptr_t pointer = m_fiber; >+ if (pointer & IsRopeInPointer) { >+#if !CPU(NEEDS_ALIGNED_ACCESS) && CPU(ADDRESS64) >+ // Do not load m_fiber twice. We should use the information in the pointer we already loaded. >+ // Otherwise, JSRopeString may be converted to JSString between the first and second accesses. >+ return pointer & JSRopeString::Is8BitInPointer; >+#else >+ // It is OK to load the flag since, even if this JSRopeString is converted to JSString, the flag still exists. 
>+ return static_cast<const JSRopeString*>(this)->m_compactFibers.is8Bit(); >+#endif >+ } >+ return bitwise_cast<StringImpl*>(pointer)->is8Bit(); >+} >+ >+ALWAYS_INLINE unsigned JSString::length() const >+{ >+ uintptr_t pointer = m_fiber; >+ if (pointer & IsRopeInPointer) >+ return static_cast<const JSRopeString*>(this)->length(); >+ return bitwise_cast<StringImpl*>(pointer)->length(); >+} >+ > inline const StringImpl* JSString::tryGetValueImpl() const > { >- return m_value.impl(); >+ uintptr_t pointer = m_fiber; >+ if (pointer & IsRopeInPointer) >+ return nullptr; >+ return bitwise_cast<StringImpl*>(pointer); > } > > inline JSString* asString(JSValue value) >@@ -545,7 +831,7 @@ ALWAYS_INLINE AtomicString JSString::toAtomicString(ExecState* exec) const > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isRope()) > static_cast<const JSRopeString*>(this)->resolveRopeToAtomicString(exec); >- return AtomicString(m_value); >+ return AtomicString(valueInternal()); > } > > ALWAYS_INLINE RefPtr<AtomicStringImpl> JSString::toExistingAtomicString(ExecState* exec) const >@@ -554,9 +840,9 @@ ALWAYS_INLINE RefPtr<AtomicStringImpl> JSString::toExistingAtomicString(ExecStat > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isRope()) > return static_cast<const JSRopeString*>(this)->resolveRopeToExistingAtomicString(exec); >- if (m_value.impl()->isAtomic()) >- return static_cast<AtomicStringImpl*>(m_value.impl()); >- return AtomicStringImpl::lookUp(m_value.impl()); >+ if (valueInternal().impl()->isAtomic()) >+ return static_cast<AtomicStringImpl*>(valueInternal().impl()); >+ return AtomicStringImpl::lookUp(valueInternal().impl()); > } > > inline const String& JSString::value(ExecState* exec) const >@@ -565,7 +851,7 @@ inline const String& JSString::value(ExecState* exec) const > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isRope()) > static_cast<const JSRopeString*>(this)->resolveRope(exec); >- return m_value; >+ return valueInternal(); > } > > inline const String& 
JSString::tryGetValue(bool allocationAllowed) const >@@ -579,7 +865,7 @@ inline const String& JSString::tryGetValue(bool allocationAllowed) const > } > } else > RELEASE_ASSERT(!isRope()); >- return m_value; >+ return valueInternal(); > } > > inline JSString* JSString::getIndex(ExecState* exec, unsigned i) >@@ -670,11 +956,6 @@ inline JSString* jsOwnedString(VM* vm, const String& s) > return JSString::createHasOtherOwner(*vm, *s.impl()); > } > >-inline JSRopeString* jsStringBuilder(VM* vm) >-{ >- return JSRopeString::createNull(*vm); >-} >- > inline JSString* jsEmptyString(ExecState* exec) { return jsEmptyString(&exec->vm()); } > inline JSString* jsString(ExecState* exec, const String& s) { return jsString(&exec->vm(), s); } > inline JSString* jsSingleCharacterString(ExecState* exec, UChar c) { return jsSingleCharacterString(&exec->vm(), c); } >@@ -755,12 +1036,13 @@ ALWAYS_INLINE StringView JSRopeString::unsafeView(ExecState* exec) const > if (validateDFGDoesGC) > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isSubstring()) { >- if (is8Bit()) >- return StringView(substringBase()->m_value.characters8() + substringOffset(), length()); >- return StringView(substringBase()->m_value.characters16() + substringOffset(), length()); >+ auto& base = substringBase()->valueInternal(); >+ if (base.is8Bit()) >+ return StringView(base.characters8() + substringOffset(), length()); >+ return StringView(base.characters16() + substringOffset(), length()); > } > resolveRope(exec); >- return m_value; >+ return valueInternal(); > } > > ALWAYS_INLINE StringViewWithUnderlyingString JSRopeString::viewWithUnderlyingString(ExecState* exec) const >@@ -768,13 +1050,13 @@ ALWAYS_INLINE StringViewWithUnderlyingString JSRopeString::viewWithUnderlyingStr > if (validateDFGDoesGC) > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isSubstring()) { >- auto& base = substringBase()->m_value; >- if (is8Bit()) >+ auto& base = substringBase()->valueInternal(); >+ if (base.is8Bit()) > return { { 
base.characters8() + substringOffset(), length() }, base }; > return { { base.characters16() + substringOffset(), length() }, base }; > } > resolveRope(exec); >- return { m_value, m_value }; >+ return { valueInternal(), valueInternal() }; > } > > ALWAYS_INLINE StringView JSString::unsafeView(ExecState* exec) const >@@ -783,14 +1065,14 @@ ALWAYS_INLINE StringView JSString::unsafeView(ExecState* exec) const > RELEASE_ASSERT(vm()->heap.expectDoesGC()); > if (isRope()) > return static_cast<const JSRopeString*>(this)->unsafeView(exec); >- return m_value; >+ return valueInternal(); > } > > ALWAYS_INLINE StringViewWithUnderlyingString JSString::viewWithUnderlyingString(ExecState* exec) const > { > if (isRope()) > return static_cast<const JSRopeString&>(*this).viewWithUnderlyingString(exec); >- return { m_value, m_value }; >+ return { valueInternal(), valueInternal() }; > } > > inline bool JSString::isSubstring() const >@@ -834,4 +1116,8 @@ inline String JSValue::toWTFString(ExecState* exec) const > return toWTFStringSlowCase(exec); > } > >+#if ASSERT_DISABLED >+IGNORE_RETURN_TYPE_WARNINGS_END >+#endif >+ > } // namespace JSC >diff --git a/Source/JavaScriptCore/runtime/JSStringInlines.h b/Source/JavaScriptCore/runtime/JSStringInlines.h >index 19d4756e0c1a9bcee73bb8a5b0cae6e03373074c..e12c659ee7590a7fe20d948b3f5d8b7bcf599f88 100644 >--- a/Source/JavaScriptCore/runtime/JSStringInlines.h >+++ b/Source/JavaScriptCore/runtime/JSStringInlines.h >@@ -29,11 +29,18 @@ > > namespace JSC { > >+inline JSString::~JSString() >+{ >+ if (isRope()) >+ return; >+ valueInternal().~String(); >+} >+ > bool JSString::equal(ExecState* exec, JSString* other) const > { > if (isRope() || other->isRope()) > return equalSlowCase(exec, other); >- return WTF::equal(*m_value.impl(), *other->m_value.impl()); >+ return WTF::equal(*valueInternal().impl(), *other->valueInternal().impl()); > } > > template<typename StringType> >diff --git a/Source/JavaScriptCore/runtime/RegExpMatchesArray.h 
b/Source/JavaScriptCore/runtime/RegExpMatchesArray.h >index e957bf9483d8d42b39e439134031476a0dab83ae..268c74ba413ca47b7b9ab089459618fc11ca979d 100644 >--- a/Source/JavaScriptCore/runtime/RegExpMatchesArray.h >+++ b/Source/JavaScriptCore/runtime/RegExpMatchesArray.h >@@ -117,7 +117,7 @@ ALWAYS_INLINE JSArray* createRegExpMatchesArray( > int start = subpatternResults[2 * i]; > JSValue value; > if (start >= 0) >- value = JSRopeString::createSubstringOfResolved(vm, &deferralContext, input, start, subpatternResults[2 * i + 1] - start); >+ value = jsSubstringOfResolved(vm, &deferralContext, input, start, subpatternResults[2 * i + 1] - start); > else > value = jsUndefined(); > array->initializeIndexWithoutBarrier(scope, i, value); >@@ -141,7 +141,7 @@ ALWAYS_INLINE JSArray* createRegExpMatchesArray( > int start = subpatternResults[2 * i]; > JSValue value; > if (start >= 0) >- value = JSRopeString::createSubstringOfResolved(vm, &deferralContext, input, start, subpatternResults[2 * i + 1] - start); >+ value = jsSubstringOfResolved(vm, &deferralContext, input, start, subpatternResults[2 * i + 1] - start); > else > value = jsUndefined(); > array->initializeIndexWithoutBarrier(scope, i, value, ArrayWithContiguous); >diff --git a/Source/JavaScriptCore/runtime/RegExpObjectInlines.h b/Source/JavaScriptCore/runtime/RegExpObjectInlines.h >index d02dfcdcfa77561d6bdc2abf0281a58e98f531ec..e6f4d294dbeb335ced6a6c69ef5566fc9b3978d4 100644 >--- a/Source/JavaScriptCore/runtime/RegExpObjectInlines.h >+++ b/Source/JavaScriptCore/runtime/RegExpObjectInlines.h >@@ -162,7 +162,7 @@ JSValue collectMatches(VM& vm, ExecState* exec, JSString* string, const String& > auto iterate = [&] () { > size_t end = result.end; > size_t length = end - result.start; >- array->putDirectIndex(exec, arrayIndex++, JSRopeString::createSubstringOfResolved(vm, string, result.start, length)); >+ array->putDirectIndex(exec, arrayIndex++, jsSubstringOfResolved(vm, string, result.start, length)); > if 
(UNLIKELY(scope.exception())) { > hasException = true; > return; >diff --git a/Source/JavaScriptCore/runtime/RegExpPrototype.cpp b/Source/JavaScriptCore/runtime/RegExpPrototype.cpp >index b6a5891ca8bf2d1f65600499e70bc1ccfb497e14..60724cccc62e5f850be4b2e58c631d7c457260e7 100644 >--- a/Source/JavaScriptCore/runtime/RegExpPrototype.cpp >+++ b/Source/JavaScriptCore/runtime/RegExpPrototype.cpp >@@ -653,7 +653,7 @@ EncodedJSValue JSC_HOST_CALL regExpProtoFuncSplitFast(ExecState* exec) > return ContinueSplit; > }, > [&] (bool isDefined, unsigned start, unsigned length) -> SplitControl { >- result->putDirectIndex(exec, resultLength++, isDefined ? JSRopeString::createSubstringOfResolved(vm, inputString, start, length) : jsUndefined()); >+ result->putDirectIndex(exec, resultLength++, isDefined ? jsSubstringOfResolved(vm, inputString, start, length) : jsUndefined()); > RETURN_IF_EXCEPTION(scope, AbortSplit); > if (resultLength >= limit) > return AbortSplit; >@@ -667,7 +667,7 @@ EncodedJSValue JSC_HOST_CALL regExpProtoFuncSplitFast(ExecState* exec) > // 20. Let T be a String value equal to the substring of S consisting of the elements at indices p (inclusive) through size (exclusive). > // 21. Perform ! CreateDataProperty(A, ! ToString(lengthA), T). > scope.release(); >- result->putDirectIndex(exec, resultLength, JSRopeString::createSubstringOfResolved(vm, inputString, position, inputSize - position)); >+ result->putDirectIndex(exec, resultLength, jsSubstringOfResolved(vm, inputString, position, inputSize - position)); > > // 22. Return A. > return JSValue::encode(result); >@@ -706,7 +706,7 @@ EncodedJSValue JSC_HOST_CALL regExpProtoFuncSplitFast(ExecState* exec) > return ContinueSplit; > }, > [&] (bool isDefined, unsigned start, unsigned length) -> SplitControl { >- result->putDirectIndex(exec, resultLength++, isDefined ? JSRopeString::createSubstringOfResolved(vm, inputString, start, length) : jsUndefined()); >+ result->putDirectIndex(exec, resultLength++, isDefined ? 
jsSubstringOfResolved(vm, inputString, start, length) : jsUndefined()); > RETURN_IF_EXCEPTION(scope, AbortSplit); > if (resultLength >= limit) > return AbortSplit; >@@ -720,7 +720,7 @@ EncodedJSValue JSC_HOST_CALL regExpProtoFuncSplitFast(ExecState* exec) > // 20. Let T be a String value equal to the substring of S consisting of the elements at indices p (inclusive) through size (exclusive). > // 21. Perform ! CreateDataProperty(A, ! ToString(lengthA), T). > scope.release(); >- result->putDirectIndex(exec, resultLength, JSRopeString::createSubstringOfResolved(vm, inputString, position, inputSize - position)); >+ result->putDirectIndex(exec, resultLength, jsSubstringOfResolved(vm, inputString, position, inputSize - position)); > // 22. Return A. > return JSValue::encode(result); > } >diff --git a/Source/JavaScriptCore/runtime/SmallStrings.cpp b/Source/JavaScriptCore/runtime/SmallStrings.cpp >index 8f26b3e1e2a164e7f3c3347287c0f4e33734747f..a46bffee54f94175bc73ef25f320655647e5f156 100644 >--- a/Source/JavaScriptCore/runtime/SmallStrings.cpp >+++ b/Source/JavaScriptCore/runtime/SmallStrings.cpp >@@ -68,7 +68,10 @@ SmallStrings::SmallStrings() > > void SmallStrings::initializeCommonStrings(VM& vm) > { >- createEmptyString(&vm); >+ ASSERT(!m_emptyString); >+ m_emptyString = JSString::createEmptyString(vm); >+ ASSERT(m_needsToBeVisited); >+ > for (unsigned i = 0; i <= maxSingleCharacterString; ++i) > createSingleCharacterString(&vm, i); > #define JSC_COMMON_STRINGS_ATTRIBUTE_INITIALIZE(name) initialize(&vm, m_##name, #name); >@@ -97,13 +100,6 @@ SmallStrings::~SmallStrings() > { > } > >-void SmallStrings::createEmptyString(VM* vm) >-{ >- ASSERT(!m_emptyString); >- m_emptyString = JSString::createHasOtherOwner(*vm, *StringImpl::empty()); >- ASSERT(m_needsToBeVisited); >-} >- > void SmallStrings::createSingleCharacterString(VM* vm, unsigned char character) > { > if (!m_storage) >diff --git a/Source/JavaScriptCore/runtime/SmallStrings.h 
b/Source/JavaScriptCore/runtime/SmallStrings.h >index 769d494f320c403eff6f1539e22396f09cf83486..a5270d47e96385da24996b3037b910ffcf8b081b 100644 >--- a/Source/JavaScriptCore/runtime/SmallStrings.h >+++ b/Source/JavaScriptCore/runtime/SmallStrings.h >@@ -126,7 +126,6 @@ class SmallStrings { > private: > static const unsigned singleCharacterStringCount = maxSingleCharacterString + 1; > >- void createEmptyString(VM*); > void createSingleCharacterString(VM*, unsigned char); > > void initialize(VM*, JSString*&, const char* value); >diff --git a/Source/WTF/wtf/text/StringImpl.h b/Source/WTF/wtf/text/StringImpl.h >index 77e00d1ca0ffe7554c35780148ded4c292dfca84..e564648c543cd823de8f223c4237cc9269519304 100644 >--- a/Source/WTF/wtf/text/StringImpl.h >+++ b/Source/WTF/wtf/text/StringImpl.h >@@ -200,8 +200,8 @@ class StringImpl : private StringImplShape { > static constexpr const unsigned s_hashFlagStringKindIsAtomic = 1u << (s_flagStringKindCount); > static constexpr const unsigned s_hashFlagStringKindIsSymbol = 1u << (s_flagStringKindCount + 1); > static constexpr const unsigned s_hashMaskStringKind = s_hashFlagStringKindIsAtomic | s_hashFlagStringKindIsSymbol; >- static constexpr const unsigned s_hashFlag8BitBuffer = 1u << 3; >- static constexpr const unsigned s_hashFlagDidReportCost = 1u << 2; >+ static constexpr const unsigned s_hashFlagDidReportCost = 1u << 3; >+ static constexpr const unsigned s_hashFlag8BitBuffer = 1u << 2; > static constexpr const unsigned s_hashMaskBufferOwnership = (1u << 0) | (1u << 1); > > enum StringKind { >@@ -264,10 +264,10 @@ class StringImpl : private StringImplShape { > static Expected<Ref<StringImpl>, UTF8ConversionError> tryReallocate(Ref<StringImpl>&& originalString, unsigned length, UChar*& data); > > static unsigned flagsOffset() { return OBJECT_OFFSETOF(StringImpl, m_hashAndFlags); } >- static unsigned flagIs8Bit() { return s_hashFlag8BitBuffer; } >- static unsigned flagIsAtomic() { return s_hashFlagStringKindIsAtomic; } >- static 
unsigned flagIsSymbol() { return s_hashFlagStringKindIsSymbol; } >- static unsigned maskStringKind() { return s_hashMaskStringKind; } >+ static constexpr unsigned flagIs8Bit() { return s_hashFlag8BitBuffer; } >+ static constexpr unsigned flagIsAtomic() { return s_hashFlagStringKindIsAtomic; } >+ static constexpr unsigned flagIsSymbol() { return s_hashFlagStringKindIsSymbol; } >+ static constexpr unsigned maskStringKind() { return s_hashMaskStringKind; } > static unsigned dataOffset() { return OBJECT_OFFSETOF(StringImpl, m_data8); } > > template<typename CharacterType, size_t inlineCapacity, typename OverflowHandler, size_t minCapacity> >diff --git a/JSTests/ChangeLog b/JSTests/ChangeLog >index 261d8178b3a76236d1fa30ce6ecabe0386111f4a..8653cf5ecce9740ecf5dede76129edf609ed0676 100644 >--- a/JSTests/ChangeLog >+++ b/JSTests/ChangeLog >@@ -1,3 +1,14 @@ >+2019-02-21 Yusuke Suzuki <ysuzuki@apple.com> >+ >+ [JSC] sizeof(JSString) should be 16 >+ https://bugs.webkit.org/show_bug.cgi?id=194375 >+ >+ Reviewed by NOBODY (OOPS!). >+ >+ * microbenchmarks/make-rope.js: Added. >+ (makeRope): >+ * stress/to-lower-case-intrinsic-on-empty-rope.js: Removed. We no longer allow 0 length JSString except for jsEmptyString singleton per VM. 
>+ > 2019-02-19 Joseph Pecoraro <pecoraro@apple.com> > > Web Inspector: Improve ES6 Class instances in Heap Snapshot instances view >diff --git a/JSTests/microbenchmarks/make-rope.js b/JSTests/microbenchmarks/make-rope.js >new file mode 100644 >index 0000000000000000000000000000000000000000..e0f18681707700d4f54121c8c5f1af63c62ea5a4 >--- /dev/null >+++ b/JSTests/microbenchmarks/make-rope.js >@@ -0,0 +1,10 @@ >+function makeRope(str1, str2, str3) >+{ >+ return str1 + str2 + str3; >+} >+noInline(makeRope); >+ >+for (var i = 0; i < 1e6; ++i) { >+ makeRope("Hello", "Another", "World"); >+ makeRope(makeRope("Hello", "Another", "World"), "Another", makeRope("Hello", "Another", "World")); >+} >diff --git a/JSTests/stress/to-lower-case-intrinsic-on-empty-rope.js b/JSTests/stress/to-lower-case-intrinsic-on-empty-rope.js >deleted file mode 100644 >index 58f31244f689e63b07fa9fdb04a60649a2b06baf..0000000000000000000000000000000000000000 >--- a/JSTests/stress/to-lower-case-intrinsic-on-empty-rope.js >+++ /dev/null >@@ -1,26 +0,0 @@ >-function assert(b) { >- if (!b) >- throw new Error("bad!"); >-} >- >-function returnRope() { >- function helper(r, s) { >- // I'm paranoid about RegExp matching constant folding. >- return s.match(r)[1]; >- } >- noInline(helper); >- return helper(/(b*)fo/, "fo"); >-} >-noInline(returnRope); >- >-function lower(a) { >- return a.toLowerCase(); >-} >-noInline(lower); >- >-for (let i = 0; i < 10000; i++) { >- let rope = returnRope(); >- assert(!rope.length); >- assert(isRope(rope)); // Keep this test future proofed. If this stops returning a rope, we should try to find another way to return an empty rope. >- lower(rope); >-}