WebKit Bugzilla
Attachment 359424 Details for Bug 193557: Audit bytecode fields and ensure that LLInt instructions for accessing them are appropriate.
[patch] proposed patch.
bug-193557.patch (text/plain), 119.38 KB, created by Mark Lam on 2019-01-17 16:41:42 PST
Index: Source/JavaScriptCore/ChangeLog
===================================================================
--- Source/JavaScriptCore/ChangeLog (revision 240135)
+++ Source/JavaScriptCore/ChangeLog (working copy)
@@ -1,3 +1,141 @@
+2019-01-17  Mark Lam  <mark.lam@apple.com>
+
+        Audit bytecode fields and ensure that LLInt instructions for accessing them are appropriate.
+        https://bugs.webkit.org/show_bug.cgi?id=193557
+        <rdar://problem/47369125>
+
+        Reviewed by NOBODY (OOPS!).
+
+        1. Rename some bytecode fields so that it's easier to discern whether the LLInt
+           is accessing them the right way:
+           - distinguish between targetVirtualRegister and targetLabel.
+           - name all StructureID fields as structureID (oldStructureID, newStructureID)
+             instead of structure (oldStructure, newStructure).
+
+        2. Use bitwise_cast in struct Fits when sizeof(T) == size.
+           This prevents potential undefined behavior issues arising from doing
+           assignments with reinterpret_cast'ed pointers.
+
+        3. Make Special::Pointer an unsigned type (previously int).
+           Make ResolveType an unsigned type (previously int).
+
+        4. In LowLevelInterpreter*.asm:
+
+           - rename the op macro argument to opcodeName or opcodeStruct respectively.
+             This makes it clearer which argument type the macro is working with.
+
+           - rename the name macro argument to opcodeName.
+
+           - fix operator types to match the field type being accessed. The following
+             may have resulted in bugs before:
+
+             1. The following should be read with getu() instead of get() because they
+                are unsigned ints:
+                    OpSwitchImm::m_tableIndex
+                    OpSwitchChar::m_tableIndex
+                    OpGetFromArguments::m_index
+                    OpPutToArguments::m_index
+                    OpGetRestLength::m_numParametersToSkip
+
+                OpJneqPtr::m_specialPointer should also be read with getu() though this
+                wasn't a bug because it was previously an int by default, and is only
+                changed to an unsigned int in this patch.
+
+             2. The following should be read with loadi (not loadp) because they are of
+                unsigned type (not a pointer):
+                    OpResolveScope::Metadata::m_resolveType
+                    CodeBlock::m_numParameters (see prepareForTailCall)
+
+             3. OpPutToScope::Metadata::m_operand should be read with loadp (not loadis)
+                because it is a uintptr_t.
+
+             4. The following should be read with loadi (not loadis) because they are
+                unsigned ints:
+                    OpNegate::Metadata::m_arithProfile + ArithProfile::m_bits
+                    OpPutById::Metadata::m_oldStructureID
+                    OpPutToScope::Metadata::m_getPutInfo + GetPutInfo::m_operand
+
+                These may not have manifested in bugs because the operations that follow
+                the load are 32-bit instructions which ignore the high word.
+
+        5. Give class GetPutInfo a default constructor so that we can use bitwise_cast
+           on it. Also befriend LLIntOffsetsExtractor so that we can take the offset of
+           m_operand in it.
+
+        * bytecode/ArithProfile.h:
+        * bytecode/BytecodeList.rb:
+        * bytecode/BytecodeUseDef.h:
+        (JSC::computeUsesForBytecodeOffset):
+        (JSC::computeDefsForBytecodeOffset):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::propagateTransitions):
+        (JSC::CodeBlock::finalizeLLIntInlineCaches):
+        * bytecode/Fits.h:
+        * bytecode/GetByIdMetadata.h:
+        * bytecode/GetByIdStatus.cpp:
+        (JSC::GetByIdStatus::computeFromLLInt):
+        * bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:
+        (JSC::LLIntPrototypeLoadAdaptiveStructureWatchpoint::clearLLIntGetByIdCache):
+        * bytecode/PreciseJumpTargetsInlines.h:
+        (JSC::jumpTargetForInstruction):
+        (JSC::updateStoredJumpTargetsForInstruction):
+        * bytecode/PutByIdStatus.cpp:
+        (JSC::PutByIdStatus::computeFromLLInt):
+        * bytecode/SpecialPointer.h:
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::Label::setLocation):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        * jit/JITArithmetic.cpp:
+        (JSC::JIT::emit_compareAndJump):
+        (JSC::JIT::emit_compareUnsignedAndJump):
+        (JSC::JIT::emit_compareAndJumpSlow):
+        * jit/JITArithmetic32_64.cpp:
+        (JSC::JIT::emit_compareAndJump):
+        (JSC::JIT::emit_compareUnsignedAndJump):
+        (JSC::JIT::emit_compareAndJumpSlow):
+        (JSC::JIT::emitBinaryDoubleOp):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_jmp):
+        (JSC::JIT::emit_op_jfalse):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emit_op_jneq_ptr):
+        (JSC::JIT::emit_op_jeq):
+        (JSC::JIT::emit_op_jtrue):
+        (JSC::JIT::emit_op_jneq):
+        (JSC::JIT::compileOpStrictEqJump):
+        (JSC::JIT::emitSlow_op_jstricteq):
+        (JSC::JIT::emitSlow_op_jnstricteq):
+        (JSC::JIT::emit_op_check_tdz):
+        (JSC::JIT::emitSlow_op_jeq):
+        (JSC::JIT::emitSlow_op_jneq):
+        (JSC::JIT::emit_op_profile_type):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_jmp):
+        (JSC::JIT::emit_op_jfalse):
+        (JSC::JIT::emit_op_jtrue):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emit_op_jneq_ptr):
+        (JSC::JIT::emit_op_jeq):
+        (JSC::JIT::emitSlow_op_jeq):
+        (JSC::JIT::emit_op_jneq):
+        (JSC::JIT::emitSlow_op_jneq):
+        (JSC::JIT::compileOpStrictEqJump):
+        (JSC::JIT::emitSlow_op_jstricteq):
+        (JSC::JIT::emitSlow_op_jnstricteq):
+        (JSC::JIT::emit_op_check_tdz):
+        (JSC::JIT::emit_op_profile_type):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        (JSC::LLInt::setupGetByIdPrototypeCache):
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/CommonSlowPaths.cpp:
+        * runtime/GetPutInfo.h:
+
 2019-01-17  Jer Noble  <jer.noble@apple.com>
 
         SDK_VARIANT build destinations should be separate from non-SDK_VARIANT builds
Index: Source/JavaScriptCore/bytecode/ArithProfile.h
===================================================================
--- Source/JavaScriptCore/bytecode/ArithProfile.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/ArithProfile.h (working copy)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -306,6 +306,8 @@ private:
     void setBit(int mask) { m_bits |= mask; }
 
     uint32_t m_bits { 0 }; // We take care to update m_bits only in a single operation. We don't ever store an inconsistent bit representation to it.
+
+    friend class JSC::LLIntOffsetsExtractor;
 };
 
 } // namespace JSC
Index: Source/JavaScriptCore/bytecode/BytecodeList.rb
===================================================================
--- Source/JavaScriptCore/bytecode/BytecodeList.rb (revision 240135)
+++ Source/JavaScriptCore/bytecode/BytecodeList.rb (working copy)
@@ -1,4 +1,4 @@
-# Copyright (C) 2018 Apple Inc. All rights reserved.
+# Copyright (C) 2018-2019 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -142,7 +142,7 @@ op :to_this,
 
 op :check_tdz,
     args: {
-        target: VirtualRegister,
+        targetVirtualRegister: VirtualRegister,
    }
 
 op :new_object,
@@ -449,7 +449,7 @@ op :get_by_id_direct,
    },
    metadata: {
        profile: ValueProfile, # not used in llint
-       structure: StructureID,
+       structureID: StructureID,
        offset: unsigned,
    }
 
@@ -471,9 +471,9 @@ op :put_by_id,
        flags: PutByIdFlags,
    },
    metadata: {
-       oldStructure: StructureID,
+       oldStructureID: StructureID,
        offset: unsigned,
-       newStructure: StructureID,
+       newStructureID: StructureID,
        structureChain: WriteBarrierBase[StructureChain],
    }
 
@@ -598,38 +598,38 @@ op :define_accessor_property,
 
 op :jmp,
    args: {
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :jtrue,
    args: {
        condition: VirtualRegister,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :jfalse,
    args: {
        condition: VirtualRegister,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :jeq_null,
    args: {
        value: VirtualRegister,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :jneq_null,
    args: {
        value: VirtualRegister,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :jneq_ptr,
    args: {
        value: VirtualRegister,
        specialPointer: Special::Pointer,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    },
    metadata: {
        hasJumped: bool,
@@ -655,7 +655,7 @@ op_group :BinaryJmp,
    args: {
        lhs: VirtualRegister,
        rhs: VirtualRegister,
-       target: BoundLabel,
+       targetLabel: BoundLabel,
    }
 
 op :loop_hint
@@ -965,7 +965,7 @@ op :end,
 
 op :profile_type,
    args: {
-       target: VirtualRegister,
+       targetVirtualRegister: VirtualRegister,
        symbolTableOrScopeDepth: int,
        flag: ProfileTypeBytecodeFlag,
        identifier?: unsigned,
Index: Source/JavaScriptCore/bytecode/BytecodeUseDef.h
===================================================================
--- Source/JavaScriptCore/bytecode/BytecodeUseDef.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/BytecodeUseDef.h (working copy)
@@ -94,9 +94,9 @@ void computeUsesForBytecodeOffset(Block*
 
     USES(OpGetScope, dst)
     USES(OpToThis, srcDst)
-    USES(OpCheckTdz, target)
+    USES(OpCheckTdz, targetVirtualRegister)
     USES(OpIdentityWithProfile, srcDst)
-    USES(OpProfileType, target);
+    USES(OpProfileType, targetVirtualRegister);
     USES(OpThrow, value)
     USES(OpThrowStaticError, message)
     USES(OpEnd, value)
@@ -448,7 +448,7 @@ void computeDefsForBytecodeOffset(Block*
     DEFS(OpMov, dst)
     DEFS(OpNewObject, dst)
     DEFS(OpToThis, srcDst)
-    DEFS(OpCheckTdz, target)
+    DEFS(OpCheckTdz, targetVirtualRegister)
     DEFS(OpGetScope, dst)
     DEFS(OpCreateDirectArguments, dst)
     DEFS(OpCreateScopedArguments, dst)
Index: Source/JavaScriptCore/bytecode/CodeBlock.cpp
===================================================================
--- Source/JavaScriptCore/bytecode/CodeBlock.cpp (revision 240135)
+++ Source/JavaScriptCore/bytecode/CodeBlock.cpp (working copy)
@@ -1095,8 +1095,8 @@ void CodeBlock::propagateTransitions(con
         auto instruction = m_instructions->at(propertyAccessInstructions[i]);
         if (instruction->is<OpPutById>()) {
             auto& metadata = instruction->as<OpPutById>().metadata(this);
-            StructureID oldStructureID = metadata.m_oldStructure;
-            StructureID newStructureID = metadata.m_newStructure;
+            StructureID oldStructureID = metadata.m_oldStructureID;
+            StructureID newStructureID = metadata.m_newStructureID;
             if (!oldStructureID || !newStructureID)
                 continue;
             Structure* oldStructure =
@@ -1226,7 +1226,7 @@ void CodeBlock::finalizeLLIntInlineCache
             auto& metadata = curInstruction->as<OpGetById>().metadata(this);
             if (metadata.m_mode != GetByIdMode::Default)
                 break;
-            StructureID oldStructureID = metadata.m_modeMetadata.defaultMode.structure;
+            StructureID oldStructureID = metadata.m_modeMetadata.defaultMode.structureID;
             if (!oldStructureID || Heap::isMarked(vm.heap.structureIDTable().get(oldStructureID)))
                 break;
             if (Options::verboseOSR())
@@ -1236,19 +1236,19 @@ void CodeBlock::finalizeLLIntInlineCache
         }
         case op_get_by_id_direct: {
             auto& metadata = curInstruction->as<OpGetByIdDirect>().metadata(this);
-            StructureID oldStructureID = metadata.m_structure;
+            StructureID oldStructureID = metadata.m_structureID;
             if (!oldStructureID || Heap::isMarked(vm.heap.structureIDTable().get(oldStructureID)))
                 break;
             if (Options::verboseOSR())
                 dataLogF("Clearing LLInt property access.\n");
-            metadata.m_structure = 0;
+            metadata.m_structureID = 0;
             metadata.m_offset = 0;
             break;
         }
         case op_put_by_id: {
             auto& metadata = curInstruction->as<OpPutById>().metadata(this);
-            StructureID oldStructureID = metadata.m_oldStructure;
-            StructureID newStructureID = metadata.m_newStructure;
+            StructureID oldStructureID = metadata.m_oldStructureID;
+            StructureID newStructureID = metadata.m_newStructureID;
             StructureChain* chain = metadata.m_structureChain.get();
             if ((!oldStructureID || Heap::isMarked(vm.heap.structureIDTable().get(oldStructureID)))
                 && (!newStructureID || Heap::isMarked(vm.heap.structureIDTable().get(newStructureID)))
@@ -1256,9 +1256,9 @@ void CodeBlock::finalizeLLIntInlineCache
                 break;
             if (Options::verboseOSR())
                 dataLogF("Clearing LLInt put transition.\n");
-            metadata.m_oldStructure = 0;
+            metadata.m_oldStructureID = 0;
             metadata.m_offset = 0;
-            metadata.m_newStructure = 0;
+            metadata.m_newStructureID = 0;
             metadata.m_structureChain.clear();
             break;
         }
Index: Source/JavaScriptCore/bytecode/Fits.h
===================================================================
--- Source/JavaScriptCore/bytecode/Fits.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/Fits.h (working copy)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2018 Apple Inc. All rights reserved.
+ * Copyright (C) 2018-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -52,10 +52,10 @@ template<typename T, OpcodeSize size>
 struct Fits<T, size, std::enable_if_t<sizeof(T) == size, std::true_type>> {
     static bool check(T) { return true; }
 
-    static typename TypeBySize<size>::type convert(T t) { return *reinterpret_cast<typename TypeBySize<size>::type*>(&t); }
+    static typename TypeBySize<size>::type convert(T t) { return bitwise_cast<typename TypeBySize<size>::type>(t); }
 
     template<class T1 = T, OpcodeSize size1 = size, typename = std::enable_if_t<!std::is_same<T1, typename TypeBySize<size1>::type>::value, std::true_type>>
-    static T1 convert(typename TypeBySize<size1>::type t) { return *reinterpret_cast<T1*>(&t); }
+    static T1 convert(typename TypeBySize<size1>::type t) { return bitwise_cast<T1>(t); }
 };
 
 template<typename T, OpcodeSize size>
Index: Source/JavaScriptCore/bytecode/GetByIdMetadata.h
===================================================================
--- Source/JavaScriptCore/bytecode/GetByIdMetadata.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/GetByIdMetadata.h (working copy)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2018 Apple Inc. All rights reserved.
+ * Copyright (C) 2018-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -39,16 +39,16 @@ union GetByIdModeMetadata {
     { }
 
     struct Default {
-        StructureID structure;
+        StructureID structureID;
         PropertyOffset cachedOffset;
     } defaultMode;
 
     struct Unset {
-        StructureID structure;
+        StructureID structureID;
     } unsetMode;
 
     struct ProtoLoad {
-        StructureID structure;
+        StructureID structureID;
         PropertyOffset cachedOffset;
         JSObject* cachedSlot;
     } protoLoadMode;
Index: Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
===================================================================
--- Source/JavaScriptCore/bytecode/GetByIdStatus.cpp (revision 240135)
+++ Source/JavaScriptCore/bytecode/GetByIdStatus.cpp (working copy)
@@ -66,11 +66,11 @@ GetByIdStatus GetByIdStatus::computeFrom
         // https://bugs.webkit.org/show_bug.cgi?id=158039
         if (metadata.m_mode != GetByIdMode::Default)
             return GetByIdStatus(NoInformation, false);
-        structureID = metadata.m_modeMetadata.defaultMode.structure;
+        structureID = metadata.m_modeMetadata.defaultMode.structureID;
         break;
     }
     case op_get_by_id_direct:
-        structureID = instruction->as<OpGetByIdDirect>().metadata(profiledBlock).m_structure;
+        structureID = instruction->as<OpGetByIdDirect>().metadata(profiledBlock).m_structureID;
         break;
     case op_try_get_by_id: {
         // FIXME: We should not just bail if we see a try_get_by_id.
Index: Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp
===================================================================
--- Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp (revision 240135)
+++ Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp (working copy)
@@ -61,7 +61,7 @@ void LLIntPrototypeLoadAdaptiveStructure
 {
     metadata.m_mode = GetByIdMode::Default;
     metadata.m_modeMetadata.defaultMode.cachedOffset = 0;
-    metadata.m_modeMetadata.defaultMode.structure = 0;
+    metadata.m_modeMetadata.defaultMode.structureID = 0;
 }
 
 
Index: Source/JavaScriptCore/bytecode/PreciseJumpTargetsInlines.h
===================================================================
--- Source/JavaScriptCore/bytecode/PreciseJumpTargetsInlines.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/PreciseJumpTargetsInlines.h (working copy)
@@ -108,7 +108,7 @@ template<typename Op, typename Block>
 inline int jumpTargetForInstruction(Block&& codeBlock, const InstructionStream::Ref& instruction)
 {
     auto bytecode = instruction->as<Op>();
-    return jumpTargetForInstruction(codeBlock, instruction, bytecode.m_target);
+    return jumpTargetForInstruction(codeBlock, instruction, bytecode.m_targetLabel);
 }
 
 template<typename Block, typename Function>
@@ -139,7 +139,7 @@ inline void updateStoredJumpTargetsForIn
     case __op::opcodeID: { \
         int32_t target = jumpTargetForInstruction<__op>(codeBlockOrHashMap, instruction); \
         int32_t newTarget = function(target); \
-        instruction->cast<__op>()->setTarget(BoundLabel(newTarget), [&]() { \
+        instruction->cast<__op>()->setTargetLabel(BoundLabel(newTarget), [&]() { \
             codeBlock->addOutOfLineJumpTarget(finalOffset + instruction.offset(), newTarget); \
             return BoundLabel(); \
         }); \
Index: Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
===================================================================
--- Source/JavaScriptCore/bytecode/PutByIdStatus.cpp (revision 240135)
+++ Source/JavaScriptCore/bytecode/PutByIdStatus.cpp (working copy)
@@ -55,13 +55,13 @@ PutByIdStatus PutByIdStatus::computeFrom
     auto bytecode = instruction->as<OpPutById>();
     auto& metadata = bytecode.metadata(profiledBlock);
 
-    StructureID structureID = metadata.m_oldStructure;
+    StructureID structureID = metadata.m_oldStructureID;
     if (!structureID)
         return PutByIdStatus(NoInformation);
 
     Structure* structure = vm.heap.structureIDTable().get(structureID);
 
-    StructureID newStructureID = metadata.m_newStructure;
+    StructureID newStructureID = metadata.m_newStructureID;
     if (!newStructureID) {
         PropertyOffset offset = structure->getConcurrently(uid);
         if (!isValidOffset(offset))
Index: Source/JavaScriptCore/bytecode/SpecialPointer.h
===================================================================
--- Source/JavaScriptCore/bytecode/SpecialPointer.h (revision 240135)
+++ Source/JavaScriptCore/bytecode/SpecialPointer.h (working copy)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -31,7 +31,7 @@ class CodeBlock;
 class JSGlobalObject;
 
 namespace Special {
-enum Pointer {
+enum Pointer : unsigned {
     CallFunction,
     ApplyFunction,
     ObjectConstructor,
Index: Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
===================================================================
--- Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp (revision 240135)
+++ Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp (working copy)
@@ -102,7 +102,7 @@ void Label::setLocation(BytecodeGenerato
 
 #define CASE(__op) \
     case __op::opcodeID: \
-        instruction->cast<__op>()->setTarget(BoundLabel(target), [&]() { \
+        instruction->cast<__op>()->setTargetLabel(BoundLabel(target), [&]() { \
            generator.m_codeBlock->addOutOfLineJumpTarget(instruction.offset(), target); \
            return BoundLabel(); \
        }); \
Index: Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
===================================================================
--- Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp (revision 240135)
+++ Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp (working copy)
@@ -5090,7 +5090,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_check_tdz: {
         auto bytecode = currentInstruction->as<OpCheckTdz>();
-        addToGraph(CheckNotEmpty, get(bytecode.m_target));
+        addToGraph(CheckNotEmpty, get(bytecode.m_targetVirtualRegister));
         NEXT_OPCODE(op_check_tdz);
     }
 
@@ -5598,7 +5598,7 @@ void ByteCodeParser::parseBlock(unsigned
     case op_profile_type: {
         auto bytecode = currentInstruction->as<OpProfileType>();
         auto& metadata = bytecode.metadata(codeBlock);
-        Node* valueToProfile = get(bytecode.m_target);
+        Node* valueToProfile = get(bytecode.m_targetVirtualRegister);
         addToGraph(ProfileType, OpInfo(metadata.m_typeLocation), valueToProfile);
         NEXT_OPCODE(op_profile_type);
     }
@@ -5615,7 +5615,7 @@ void ByteCodeParser::parseBlock(unsigned
     case op_jmp: {
         ASSERT(!m_currentBlock->terminal());
         auto bytecode = currentInstruction->as<OpJmp>();
-        int relativeOffset = jumpTarget(bytecode.m_target);
+        int relativeOffset = jumpTarget(bytecode.m_targetLabel);
         addToGraph(Jump, OpInfo(m_currentIndex + relativeOffset));
         if (relativeOffset <= 0)
             flushForTerminal();
@@ -5624,7 +5624,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jtrue: {
         auto bytecode = currentInstruction->as<OpJtrue>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* condition = get(bytecode.m_condition);
         addToGraph(Branch, OpInfo(branchData(m_currentIndex + relativeOffset, m_currentIndex + currentInstruction->size())), condition);
         LAST_OPCODE(op_jtrue);
@@ -5632,7 +5632,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jfalse: {
         auto bytecode = currentInstruction->as<OpJfalse>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* condition = get(bytecode.m_condition);
         addToGraph(Branch, OpInfo(branchData(m_currentIndex + currentInstruction->size(), m_currentIndex + relativeOffset)), condition);
         LAST_OPCODE(op_jfalse);
@@ -5640,7 +5640,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jeq_null: {
         auto bytecode = currentInstruction->as<OpJeqNull>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* value = get(bytecode.m_value);
         Node* nullConstant = addToGraph(JSConstant, OpInfo(m_constantNull));
         Node* condition = addToGraph(CompareEq, value, nullConstant);
@@ -5650,7 +5650,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jneq_null: {
         auto bytecode = currentInstruction->as<OpJneqNull>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* value = get(bytecode.m_value);
         Node* nullConstant = addToGraph(JSConstant, OpInfo(m_constantNull));
         Node* condition = addToGraph(CompareEq, value, nullConstant);
@@ -5660,7 +5660,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jless: {
         auto bytecode = currentInstruction->as<OpJless>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareLess, op1, op2);
@@ -5670,7 +5670,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jlesseq: {
         auto bytecode = currentInstruction->as<OpJlesseq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareLessEq, op1, op2);
@@ -5680,7 +5680,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jgreater: {
         auto bytecode = currentInstruction->as<OpJgreater>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareGreater, op1, op2);
@@ -5690,7 +5690,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jgreatereq: {
         auto bytecode = currentInstruction->as<OpJgreatereq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareGreaterEq, op1, op2);
@@ -5700,7 +5700,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jeq: {
         auto bytecode = currentInstruction->as<OpJeq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareEq, op1, op2);
@@ -5710,7 +5710,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jstricteq: {
         auto bytecode = currentInstruction->as<OpJstricteq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareStrictEq, op1, op2);
@@ -5720,7 +5720,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jnless: {
         auto bytecode = currentInstruction->as<OpJnless>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareLess, op1, op2);
@@ -5730,7 +5730,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jnlesseq: {
         auto bytecode = currentInstruction->as<OpJnlesseq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareLessEq, op1, op2);
@@ -5740,7 +5740,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jngreater: {
         auto bytecode = currentInstruction->as<OpJngreater>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareGreater, op1, op2);
@@ -5750,7 +5750,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jngreatereq: {
         auto bytecode = currentInstruction->as<OpJngreatereq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareGreaterEq, op1, op2);
@@ -5760,7 +5760,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jneq: {
         auto bytecode = currentInstruction->as<OpJneq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareEq, op1, op2);
@@ -5770,7 +5770,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jnstricteq: {
         auto bytecode = currentInstruction->as<OpJnstricteq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareStrictEq, op1, op2);
@@ -5780,7 +5780,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jbelow: {
         auto bytecode = currentInstruction->as<OpJbelow>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareBelow, op1, op2);
@@ -5790,7 +5790,7 @@ void ByteCodeParser::parseBlock(unsigned
 
     case op_jbeloweq: {
         auto bytecode = currentInstruction->as<OpJbeloweq>();
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* op1 = get(bytecode.m_lhs);
         Node* op2 = get(bytecode.m_rhs);
         Node* condition = addToGraph(CompareBelowEq, op1, op2);
@@ -6107,7 +6107,7 @@ void ByteCodeParser::parseBlock(unsigned
         JSCell* actualPointer = static_cast<JSCell*>(
             actualPointerFor(m_inlineStackTop->m_codeBlock, specialPointer));
         FrozenValue* frozenPointer = m_graph.freeze(actualPointer);
-        unsigned relativeOffset = jumpTarget(bytecode.m_target);
+        unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
         Node* child = get(bytecode.m_value);
         if (bytecode.metadata(codeBlock).m_hasJumped) {
             Node* condition = addToGraph(CompareEqPtr, OpInfo(frozenPointer), child);
Index: Source/JavaScriptCore/jit/JITArithmetic32_64.cpp
===================================================================
--- Source/JavaScriptCore/jit/JITArithmetic32_64.cpp (revision 240135)
+++ Source/JavaScriptCore/jit/JITArithmetic32_64.cpp (working copy)
@@ -50,7 +50,7 @@ void JIT::emit_compareAndJump(const Inst
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
 
     // Character less.
     if (isOperandConstantChar(op1)) {
@@ -104,7 +104,7 @@ void JIT::emit_compareUnsignedAndJump(co
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
 
     if (isOperandConstantInt(op1)) {
         emitLoad(op2, regT3, regT2);
@@ -145,7 +145,7 @@ void JIT::emit_compareAndJumpSlow(const
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
 
     linkAllSlowCases(iter);
 
@@ -199,7 +199,7 @@ void JIT::emitBinaryDoubleOp(const Instr
 
     auto bytecode = instruction->as<Op>();
     int opcodeID = Op::opcodeID;
-    int target = jumpTarget(instruction, bytecode.m_target);
+    int target = jumpTarget(instruction, bytecode.m_targetLabel);
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
 
Index: Source/JavaScriptCore/jit/JITArithmetic.cpp
===================================================================
--- Source/JavaScriptCore/jit/JITArithmetic.cpp (revision 240135)
+++ Source/JavaScriptCore/jit/JITArithmetic.cpp (working copy)
@@ -178,7 +178,7 @@ void JIT::emit_compareAndJump(const Inst
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
     if (isOperandConstantChar(op1)) {
         emitGetVirtualRegister(op2, regT0);
         addSlowCase(branchIfNotCell(regT0));
@@ -225,7 +225,7 @@ void JIT::emit_compareUnsignedAndJump(co
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
     if (isOperandConstantInt(op2)) {
         emitGetVirtualRegister(op1, regT0);
         int32_t op2imm = getOperandConstantInt(op2);
@@ -269,7 +269,7 @@ void JIT::emit_compareAndJumpSlow(const
     auto bytecode = instruction->as<Op>();
     int op1 = bytecode.m_lhs.offset();
     int op2 = bytecode.m_rhs.offset();
-    unsigned target = jumpTarget(instruction, bytecode.m_target);
+    unsigned target = jumpTarget(instruction, bytecode.m_targetLabel);
 
     // We generate inline code for the following cases in the slow path:
     // - floating-point number to constant int immediate
Index: Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
===================================================================
--- Source/JavaScriptCore/jit/JITOpcodes32_64.cpp (revision 240135)
+++ Source/JavaScriptCore/jit/JITOpcodes32_64.cpp (working copy)
@@ -74,7 +74,7 @@ void JIT::emit_op_end(const Instruction*
 void JIT::emit_op_jmp(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJmp>();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     addJump(jump(), target);
 }
 
@@ -372,7 +372,7 @@ void JIT::emit_op_jfalse(const Instructi
 {
     auto bytecode = currentInstruction->as<OpJfalse>();
     int cond = bytecode.m_condition.offset();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitLoad(cond, regT1, regT0);
 
@@ -387,7 +387,7 @@ void JIT::emit_op_jtrue(const Instructio
 {
     auto bytecode = currentInstruction->as<OpJtrue>();
     int cond = bytecode.m_condition.offset();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitLoad(cond, regT1, regT0);
     bool shouldCheckMasqueradesAsUndefined = true;
@@ -401,7 +401,7 @@ void JIT::emit_op_jeq_null(const Instruc
 {
     auto bytecode = currentInstruction->as<OpJeqNull>();
     int src = bytecode.m_value.offset();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitLoad(src, regT1, regT0);
 
@@ -427,7 +427,7 @@ void JIT::emit_op_jneq_null(const Instru
 {
     auto bytecode = currentInstruction->as<OpJneqNull>();
     int src = bytecode.m_value.offset();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitLoad(src, regT1, regT0);
 
@@ -455,7 +455,7 @@ void JIT::emit_op_jneq_ptr(const Instruc
     auto& metadata = bytecode.metadata(m_codeBlock);
     int src = bytecode.m_value.offset();
     Special::Pointer ptr = bytecode.m_specialPointer;
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
 
     emitLoad(src, regT1, regT0);
     Jump notCell = branchIfNotCell(regT1);
@@ -514,7 +514,7 @@ void JIT::emitSlow_op_eq(const Instructi
 void JIT::emit_op_jeq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJeq>();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     int src1 = bytecode.m_lhs.offset();
     int src2 = bytecode.m_rhs.offset();
 
@@ -554,7 +554,7 @@ void JIT::compileOpEqJumpSlow(Vector<Slo
 void JIT::emitSlow_op_jeq(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     auto bytecode = currentInstruction->as<OpJeq>();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     compileOpEqJumpSlow(iter, CompileOpEqType::Eq, target);
 }
 
@@ -606,7 +606,7 @@ void JIT::emitSlow_op_neq(const Instruct
 void JIT::emit_op_jneq(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpJneq>();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     int src1 = bytecode.m_lhs.offset();
     int src2 = bytecode.m_rhs.offset();
 
@@ -621,7 +621,7 @@ void JIT::emit_op_jneq(const Instruction
 void JIT::emitSlow_op_jneq(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     auto bytecode = currentInstruction->as<OpJneq>();
-    unsigned target = jumpTarget(currentInstruction, bytecode.m_target);
+    unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel);
     compileOpEqJumpSlow(iter, CompileOpEqType::NEq, target);
 }
 
@@ -669,7 +669,7 @@ template<typename Op>
 void JIT::compileOpStrictEqJump(const Instruction* currentInstruction, CompileOpStrictEqType type)
 {
     auto bytecode = currentInstruction->as<Op>();
-    int target = jumpTarget(currentInstruction, bytecode.m_target);
+    int target = 
jumpTarget(currentInstruction, bytecode.m_targetLabel); > int src1 = bytecode.m_lhs.offset(); > int src2 = bytecode.m_rhs.offset(); > >@@ -708,7 +708,7 @@ void JIT::emitSlow_op_jstricteq(const In > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJstricteq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); > emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); > } >@@ -718,7 +718,7 @@ void JIT::emitSlow_op_jnstricteq(const I > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJnstricteq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareStrictEq, JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2)); > emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); > } >@@ -1058,7 +1058,7 @@ void JIT::emit_op_to_this(const Instruct > void JIT::emit_op_check_tdz(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpCheckTdz>(); >- emitLoadTag(bytecode.m_target.offset(), regT0); >+ emitLoadTag(bytecode.m_targetVirtualRegister.offset(), regT0); > addSlowCase(branchIfEmpty(regT0)); > } > >@@ -1271,7 +1271,7 @@ void JIT::emit_op_profile_type(const Ins > auto bytecode = currentInstruction->as<OpProfileType>(); > auto& metadata = bytecode.metadata(m_codeBlock); > TypeLocation* cachedTypeLocation = metadata.m_typeLocation; >- int valueToProfile = bytecode.m_target.offset(); >+ int valueToProfile = bytecode.m_targetVirtualRegister.offset(); > > // Load payload in T0. Load tag in T3. 
> emitLoadPayload(valueToProfile, regT0); >Index: Source/JavaScriptCore/jit/JITOpcodes.cpp >=================================================================== >--- Source/JavaScriptCore/jit/JITOpcodes.cpp (revision 240135) >+++ Source/JavaScriptCore/jit/JITOpcodes.cpp (working copy) >@@ -86,7 +86,7 @@ void JIT::emit_op_end(const Instruction* > void JIT::emit_op_jmp(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpJmp>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > addJump(jump(), target); > } > >@@ -383,7 +383,7 @@ void JIT::emit_op_not(const Instruction* > void JIT::emit_op_jfalse(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpJfalse>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > > GPRReg value = regT0; > GPRReg scratch1 = regT1; >@@ -398,7 +398,7 @@ void JIT::emit_op_jeq_null(const Instruc > { > auto bytecode = currentInstruction->as<OpJeqNull>(); > int src = bytecode.m_value.offset(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > > emitGetVirtualRegister(src, regT0); > Jump isImmediate = branchIfNotCell(regT0); >@@ -422,7 +422,7 @@ void JIT::emit_op_jneq_null(const Instru > { > auto bytecode = currentInstruction->as<OpJneqNull>(); > int src = bytecode.m_value.offset(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > > emitGetVirtualRegister(src, regT0); > Jump isImmediate = branchIfNotCell(regT0); >@@ -448,7 +448,7 @@ void JIT::emit_op_jneq_ptr(const Instruc > auto& metadata = bytecode.metadata(m_codeBlock); > int src = bytecode.m_value.offset(); > Special::Pointer ptr = 
bytecode.m_specialPointer; >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > > emitGetVirtualRegister(src, regT0); > CCallHelpers::Jump equal = branchPtr(Equal, regT0, TrustedImmPtr(actualPointerFor(m_codeBlock, ptr))); >@@ -470,7 +470,7 @@ void JIT::emit_op_eq(const Instruction* > void JIT::emit_op_jeq(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpJeq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1); > emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); > addJump(branch32(Equal, regT0, regT1), target); >@@ -479,7 +479,7 @@ void JIT::emit_op_jeq(const Instruction* > void JIT::emit_op_jtrue(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpJtrue>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > > GPRReg value = regT0; > GPRReg scratch1 = regT1; >@@ -503,7 +503,7 @@ void JIT::emit_op_neq(const Instruction* > void JIT::emit_op_jneq(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpJneq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > emitGetVirtualRegisters(bytecode.m_lhs.offset(), regT0, bytecode.m_rhs.offset(), regT1); > emitJumpSlowCaseIfNotInt(regT0, regT1, regT2); > addJump(branch32(NotEqual, regT0, regT1), target); >@@ -566,7 +566,7 @@ template<typename Op> > void JIT::compileOpStrictEqJump(const Instruction* currentInstruction, CompileOpStrictEqType type) > { > auto bytecode = currentInstruction->as<Op>(); >- int target = jumpTarget(currentInstruction, 
bytecode.m_target); >+ int target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > int src1 = bytecode.m_lhs.offset(); > int src2 = bytecode.m_rhs.offset(); > >@@ -607,7 +607,7 @@ void JIT::emitSlow_op_jstricteq(const In > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJstricteq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareStrictEq, regT0, regT1); > emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); > } >@@ -617,7 +617,7 @@ void JIT::emitSlow_op_jnstricteq(const I > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJnstricteq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareStrictEq, regT0, regT1); > emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); > } >@@ -932,7 +932,7 @@ void JIT::emit_op_create_this(const Inst > void JIT::emit_op_check_tdz(const Instruction* currentInstruction) > { > auto bytecode = currentInstruction->as<OpCheckTdz>(); >- emitGetVirtualRegister(bytecode.m_target.offset(), regT0); >+ emitGetVirtualRegister(bytecode.m_targetVirtualRegister.offset(), regT0); > addSlowCase(branchIfEmpty(regT0)); > } > >@@ -965,7 +965,7 @@ void JIT::emitSlow_op_jeq(const Instruct > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJeq>(); >- unsigned target = jumpTarget(currentInstruction, bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareEq, regT0, regT1); > emitJumpSlowToHot(branchTest32(NonZero, returnValueGPR), target); > } >@@ -975,7 +975,7 @@ void JIT::emitSlow_op_jneq(const Instruc > linkAllSlowCases(iter); > > auto bytecode = currentInstruction->as<OpJneq>(); >- unsigned target = jumpTarget(currentInstruction, 
bytecode.m_target); >+ unsigned target = jumpTarget(currentInstruction, bytecode.m_targetLabel); > callOperation(operationCompareEq, regT0, regT1); > emitJumpSlowToHot(branchTest32(Zero, returnValueGPR), target); > } >@@ -1399,7 +1399,7 @@ void JIT::emit_op_profile_type(const Ins > auto bytecode = currentInstruction->as<OpProfileType>(); > auto& metadata = bytecode.metadata(m_codeBlock); > TypeLocation* cachedTypeLocation = metadata.m_typeLocation; >- int valueToProfile = bytecode.m_target.offset(); >+ int valueToProfile = bytecode.m_targetVirtualRegister.offset(); > > emitGetVirtualRegister(valueToProfile, regT0); > >Index: Source/JavaScriptCore/llint/LLIntSlowPaths.cpp >=================================================================== >--- Source/JavaScriptCore/llint/LLIntSlowPaths.cpp (revision 240135) >+++ Source/JavaScriptCore/llint/LLIntSlowPaths.cpp (working copy) >@@ -122,8 +122,8 @@ namespace JSC { namespace LLInt { > LLINT_END_IMPL(); \ > } while (false) > >-#define JUMP_OFFSET(target) \ >- ((target) ? (target) : exec->codeBlock()->outOfLineJumpOffset(pc)) >+#define JUMP_OFFSET(targetOffset) \ >+ ((targetOffset) ? 
(targetOffset) : exec->codeBlock()->outOfLineJumpOffset(pc)) > > #define JUMP_TO(target) do { \ > pc = reinterpret_cast<const Instruction*>(reinterpret_cast<const uint8_t*>(pc) + (target)); \ >@@ -133,7 +133,7 @@ namespace JSC { namespace LLInt { > bool __b_condition = (condition); \ > LLINT_CHECK_EXCEPTION(); \ > if (__b_condition) \ >- JUMP_TO(JUMP_OFFSET(bytecode.m_target)); \ >+ JUMP_TO(JUMP_OFFSET(bytecode.m_targetLabel)); \ > else \ > JUMP_TO(pc->size()); \ > LLINT_END_IMPL(); \ >@@ -656,7 +656,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > if (!LLINT_ALWAYS_ACCESS_SLOW && slot.isCacheable()) { > auto& metadata = bytecode.metadata(exec); > { >- StructureID oldStructureID = metadata.m_structure; >+ StructureID oldStructureID = metadata.m_structureID; > if (oldStructureID) { > Structure* a = vm.heap.structureIDTable().get(oldStructureID); > Structure* b = baseValue.asCell()->structure(vm); >@@ -672,7 +672,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > Structure* structure = baseCell->structure(vm); > if (slot.isValue()) { > // Start out by clearing out the old cache. 
>- metadata.m_structure = 0; >+ metadata.m_structureID = 0; > metadata.m_offset = 0; > > if (structure->propertyAccessesAreCacheable() >@@ -681,7 +681,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > > ConcurrentJSLocker locker(codeBlock->m_lock); > >- metadata.m_structure = structure->id(); >+ metadata.m_structureID = structure->id(); > metadata.m_offset = slot.cachedOffset(); > } > } >@@ -736,13 +736,13 @@ static void setupGetByIdPrototypeCache(E > > if (slot.isUnset()) { > metadata.m_mode = GetByIdMode::Unset; >- metadata.m_modeMetadata.unsetMode.structure = structure->id(); >+ metadata.m_modeMetadata.unsetMode.structureID = structure->id(); > return; > } > ASSERT(slot.isValue()); > > metadata.m_mode = GetByIdMode::ProtoLoad; >- metadata.m_modeMetadata.protoLoadMode.structure = structure->id(); >+ metadata.m_modeMetadata.protoLoadMode.structureID = structure->id(); > metadata.m_modeMetadata.protoLoadMode.cachedOffset = offset; > metadata.m_modeMetadata.protoLoadMode.cachedSlot = slot.slotBase(); > // We know that this pointer will remain valid because it will be cleared by either a watchpoint fire or >@@ -773,13 +773,13 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > auto mode = metadata.m_mode; > switch (mode) { > case GetByIdMode::Default: >- oldStructureID = metadata.m_modeMetadata.defaultMode.structure; >+ oldStructureID = metadata.m_modeMetadata.defaultMode.structureID; > break; > case GetByIdMode::Unset: >- oldStructureID = metadata.m_modeMetadata.unsetMode.structure; >+ oldStructureID = metadata.m_modeMetadata.unsetMode.structureID; > break; > case GetByIdMode::ProtoLoad: >- oldStructureID = metadata.m_modeMetadata.protoLoadMode.structure; >+ oldStructureID = metadata.m_modeMetadata.protoLoadMode.structureID; > break; > default: > oldStructureID = 0; >@@ -800,7 +800,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > if (slot.isValue() && slot.slotBase() == baseValue) { > // Start out by clearing out the old cache. 
> metadata.m_mode = GetByIdMode::Default; >- metadata.m_modeMetadata.defaultMode.structure = 0; >+ metadata.m_modeMetadata.defaultMode.structureID = 0; > metadata.m_modeMetadata.defaultMode.cachedOffset = 0; > > // Prevent the prototype cache from ever happening. >@@ -812,7 +812,7 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id > > ConcurrentJSLocker locker(codeBlock->m_lock); > >- metadata.m_modeMetadata.defaultMode.structure = structure->id(); >+ metadata.m_modeMetadata.defaultMode.structureID = structure->id(); > metadata.m_modeMetadata.defaultMode.cachedOffset = slot.cachedOffset(); > } > } else if (UNLIKELY(metadata.m_hitCountForLLIntCaching && (slot.isValue() || slot.isUnset()))) { >@@ -857,7 +857,7 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id > && slot.isCacheablePut()) { > > { >- StructureID oldStructureID = metadata.m_oldStructure; >+ StructureID oldStructureID = metadata.m_oldStructureID; > if (oldStructureID) { > Structure* a = vm.heap.structureIDTable().get(oldStructureID); > Structure* b = baseValue.asCell()->structure(vm); >@@ -872,9 +872,9 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id > } > > // Start out by clearing out the old cache. 
>- metadata.m_oldStructure = 0; >+ metadata.m_oldStructureID = 0; > metadata.m_offset = 0; >- metadata.m_newStructure = 0; >+ metadata.m_newStructureID = 0; > metadata.m_structureChain.clear(); > > JSCell* baseCell = baseValue.asCell(); >@@ -896,9 +896,9 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id > auto result = normalizePrototypeChain(exec, baseCell, sawPolyProto); > if (result != InvalidPrototypeChain && !sawPolyProto) { > ASSERT(structure->previousID()->isObject()); >- metadata.m_oldStructure = structure->previousID()->id(); >+ metadata.m_oldStructureID = structure->previousID()->id(); > metadata.m_offset = slot.cachedOffset(); >- metadata.m_newStructure = structure->id(); >+ metadata.m_newStructureID = structure->id(); > if (!(bytecode.m_flags & PutByIdIsDirect)) { > StructureChain* chain = structure->prototypeChain(exec, asObject(baseCell)); > ASSERT(chain); >@@ -908,7 +908,7 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id > } > } else { > structure->didCachePropertyReplacement(vm, slot.cachedOffset()); >- metadata.m_oldStructure = structure->id(); >+ metadata.m_oldStructureID = structure->id(); > metadata.m_offset = slot.cachedOffset(); > } > } >Index: Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (revision 240135) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (working copy) >@@ -35,20 +35,20 @@ macro nextInstructionWide() > jmp [t1, t0, 4], BytecodePtrTag > end > >-macro getuOperandNarrow(op, fieldName, dst) >- loadb constexpr %op%_%fieldName%_index[PC], dst >+macro getuOperandNarrow(opcodeStruct, fieldName, dst) >+ loadb constexpr %opcodeStruct%_%fieldName%_index[PC], dst > end > >-macro getOperandNarrow(op, fieldName, dst) >- loadbsp constexpr %op%_%fieldName%_index[PC], dst >+macro getOperandNarrow(opcodeStruct, fieldName, dst) >+ loadbsp constexpr %opcodeStruct%_%fieldName%_index[PC], dst > end > 
>-macro getuOperandWide(op, fieldName, dst) >- loadi constexpr %op%_%fieldName%_index * 4 + 1[PC], dst >+macro getuOperandWide(opcodeStruct, fieldName, dst) >+ loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst > end > >-macro getOperandWide(op, fieldName, dst) >- loadis constexpr %op%_%fieldName%_index * 4 + 1[PC], dst >+macro getOperandWide(opcodeStruct, fieldName, dst) >+ loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PC], dst > end > > macro makeReturn(get, dispatch, fn) >@@ -62,13 +62,13 @@ macro makeReturn(get, dispatch, fn) > end) > end > >-macro makeReturnProfiled(op, get, metadata, dispatch, fn) >+macro makeReturnProfiled(opcodeStruct, get, metadata, dispatch, fn) > fn(macro (tag, payload) > move tag, t1 > move payload, t0 > > metadata(t5, t2) >- valueProfile(op, t5, t1, t0) >+ valueProfile(opcodeStruct, t5, t1, t0) > get(m_dst, t2) > storei t1, TagOffset[cfr, t2, 8] > storei t0, PayloadOffset[cfr, t2, 8] >@@ -77,13 +77,13 @@ macro makeReturnProfiled(op, get, metada > end > > >-macro dispatchAfterCall(size, op, dispatch) >+macro dispatchAfterCall(size, opcodeStruct, dispatch) > loadi ArgumentCount + TagOffset[cfr], PC >- get(size, op, m_dst, t3) >+ get(size, opcodeStruct, m_dst, t3) > storei r1, TagOffset[cfr, t3, 8] > storei r0, PayloadOffset[cfr, t3, 8] >- metadata(size, op, t2, t3) >- valueProfile(op, t2, r1, r0) >+ metadata(size, opcodeStruct, t2, t3) >+ valueProfile(opcodeStruct, t2, r1, r0) > dispatch() > end > >@@ -598,9 +598,9 @@ macro writeBarrierOnGlobalLexicalEnviron > end) > end > >-macro valueProfile(op, metadata, tag, payload) >- storei tag, %op%::Metadata::m_profile.m_buckets + TagOffset[metadata] >- storei payload, %op%::Metadata::m_profile.m_buckets + PayloadOffset[metadata] >+macro valueProfile(opcodeStruct, metadata, tag, payload) >+ storei tag, %opcodeStruct%::Metadata::m_profile.m_buckets + TagOffset[metadata] >+ storei payload, %opcodeStruct%::Metadata::m_profile.m_buckets + PayloadOffset[metadata] > end > > 
>@@ -682,7 +682,7 @@ macro branchIfException(label) > loadp Callee + PayloadOffset[cfr], t3 > andp MarkedBlockMask, t3 > loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3 >- btiz VM::m_exception[t3], .noException >+ btpz VM::m_exception[t3], .noException > jmp label > .noException: > end >@@ -734,7 +734,7 @@ end) > > llintOpWithReturn(op_get_scope, OpGetScope, macro (size, get, dispatch, return) > loadi Callee + PayloadOffset[cfr], t0 >- loadi JSCallee::m_scope[t0], t0 >+ loadp JSCallee::m_scope[t0], t0 > return (CellTag, t0) > end) > >@@ -756,7 +756,7 @@ end) > > > llintOp(op_check_tdz, OpCheckTdz, macro (size, get, dispatch) >- get(m_target, t0) >+ get(m_targetVirtualRegister, t0) > loadConstantOrVariableTag(size, t0, t1) > bineq t1, EmptyValueTag, .opNotTDZ > callSlowPath(_slow_path_throw_tdz_error) >@@ -786,8 +786,8 @@ llintOpWithReturn(op_not, OpNot, macro ( > end) > > >-macro equalityComparisonOp(name, op, integerComparison) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro equalityComparisonOp(opcodeName, opcodeStruct, integerComparison) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -799,14 +799,14 @@ macro equalityComparisonOp(name, op, int > return(BooleanTag, t0) > > .opEqSlow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > > >-macro equalityJumpOp(name, op, integerComparison) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro equalityJumpOp(opcodeName, opcodeStruct, integerComparison) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -818,17 +818,17 @@ macro equalityJumpOp(name, op, integerCo > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- 
callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end > > >-macro equalNullComparisonOp(name, op, fn) >- llintOpWithReturn(name, op, macro (size, get, dispatch, return) >+macro equalNullComparisonOp(opcodeName, opcodeStruct, fn) >+ llintOpWithReturn(opcodeName, opcodeStruct, macro (size, get, dispatch, return) > get(m_operand, t0) > assertNotConstant(size, t0) > loadi TagOffset[cfr, t0, 8], t1 >@@ -869,8 +869,8 @@ llintOpWithReturn(op_is_undefined_or_nul > end) > > >-macro strictEqOp(name, op, equalityOperation) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro strictEqOp(opcodeName, opcodeStruct, equalityOperation) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -885,14 +885,14 @@ macro strictEqOp(name, op, equalityOpera > return(BooleanTag, t0) > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > > >-macro strictEqualityJumpOp(name, op, equalityOperation) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro strictEqualityJumpOp(opcodeName, opcodeStruct, equalityOperation) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -907,10 +907,10 @@ macro strictEqualityJumpOp(name, op, equ > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end >@@ -932,8 +932,8 @@ strictEqualityJumpOp(jnstricteq, OpJnstr > macro (left, right, target) bineq left, right, target end) > > >-macro preOp(name, op, operation) >- llintOp(op_%name%, op, macro (size, get, dispatch) >+macro preOp(opcodeName, opcodeStruct, operation) >+ 
llintOp(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch) > get(m_srcDst, t0) > bineq TagOffset[cfr, t0, 8], Int32Tag, .slow > loadi PayloadOffset[cfr, t0, 8], t1 >@@ -942,7 +942,7 @@ macro preOp(name, op, operation) > dispatch() > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -992,7 +992,7 @@ end) > llintOpWithMetadata(op_negate, OpNegate, macro (size, get, dispatch, metadata, return) > > macro arithProfile(type) >- ori type, OpNegate::Metadata::m_arithProfile[t5] >+ ori type, OpNegate::Metadata::m_arithProfile + ArithProfile::m_bits[t5] > end > > metadata(t5, t0) >@@ -1015,10 +1015,10 @@ llintOpWithMetadata(op_negate, OpNegate, > end) > > >-macro binaryOpCustomStore(name, op, integerOperationAndStore, doubleOperation) >- llintOpWithMetadata(op_%name%, op, macro (size, get, dispatch, metadata, return) >+macro binaryOpCustomStore(opcodeName, opcodeStruct, integerOperationAndStore, doubleOperation) >+ llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return) > macro arithProfile(type) >- ori type, %op%::Metadata::m_arithProfile[t5] >+ ori type, %opcodeStruct%::Metadata::m_arithProfile + ArithProfile::m_bits[t5] > end > > metadata(t5, t2) >@@ -1063,13 +1063,13 @@ macro binaryOpCustomStore(name, op, inte > dispatch() > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > >-macro binaryOp(name, op, integerOperation, doubleOperation) >- binaryOpCustomStore(name, op, >+macro binaryOp(opcodeName, opcodeStruct, integerOperation, doubleOperation) >+ binaryOpCustomStore(opcodeName, opcodeStruct, > macro (int32Tag, left, right, slow, index) > integerOperation(left, right, slow) > storei int32Tag, TagOffset[cfr, index, 8] >@@ -1130,8 +1130,8 @@ llintOpWithReturn(op_unsigned, OpUnsigne > end) > > >-macro commonBitOp(opKind, name, op, operation) >- opKind(op_%name%, op, macro (size, get, dispatch, return) 
>+macro commonBitOp(opKind, opcodeName, opcodeStruct, operation) >+ opKind(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -1142,17 +1142,17 @@ macro commonBitOp(opKind, name, op, oper > return (t3, t0) > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > >-macro bitOp(name, op, operation) >- commonBitOp(llintOpWithReturn, name, op, operation) >+macro bitOp(opcodeName, opcodeStruct, operation) >+ commonBitOp(llintOpWithReturn, opcodeName, opcodeStruct, operation) > end > >-macro bitOpProfiled(name, op, operation) >- commonBitOp(llintOpWithProfile, name, op, operation) >+macro bitOpProfiled(opcodeName, opcodeStruct, operation) >+ commonBitOp(llintOpWithProfile, opcodeName, opcodeStruct, operation) > end > > >@@ -1328,10 +1328,10 @@ end > llintOpWithMetadata(op_get_by_id_direct, OpGetByIdDirect, macro (size, get, dispatch, metadata, return) > metadata(t5, t0) > get(m_base, t0) >- loadi OpGetByIdDirect::Metadata::m_structure[t5], t1 >+ loadp OpGetByIdDirect::Metadata::m_structureID[t5], t1 > loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdDirectSlow) > loadi OpGetByIdDirect::Metadata::m_offset[t5], t2 >- bineq JSCell::m_structureID[t3], t1, .opGetByIdDirectSlow >+ bpneq JSCell::m_structureID[t3], t1, .opGetByIdDirectSlow > loadPropertyAtVariableOffset(t2, t3, t0, t1) > valueProfile(OpGetByIdDirect, t5, t0, t1) > return(t0, t1) >@@ -1349,10 +1349,10 @@ llintOpWithMetadata(op_get_by_id, OpGetB > > .opGetByIdProtoLoad: > bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength >- loadi OpGetById::Metadata::m_modeMetadata.protoLoadMode.structure[t5], t1 >+ loadp OpGetById::Metadata::m_modeMetadata.protoLoadMode.structureID[t5], t1 > loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdSlow) > loadis OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedOffset[t5], t2 >- bineq 
JSCell::m_structureID[t3], t1, .opGetByIdSlow >+ bpneq JSCell::m_structureID[t3], t1, .opGetByIdSlow > loadp OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedSlot[t5], t3 > loadPropertyAtVariableOffset(t2, t3, t0, t1) > valueProfile(OpGetById, t5, t0, t1) >@@ -1373,17 +1373,17 @@ llintOpWithMetadata(op_get_by_id, OpGetB > > .opGetByIdUnset: > bbneq t1, constexpr GetByIdMode::Unset, .opGetByIdDefault >- loadi OpGetById::Metadata::m_modeMetadata.unsetMode.structure[t5], t1 >+ loadp OpGetById::Metadata::m_modeMetadata.unsetMode.structureID[t5], t1 > loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdSlow) >- bineq JSCell::m_structureID[t3], t1, .opGetByIdSlow >+ bpneq JSCell::m_structureID[t3], t1, .opGetByIdSlow > valueProfile(OpGetById, t5, UndefinedTag, 0) > return(UndefinedTag, 0) > > .opGetByIdDefault: >- loadi OpGetById::Metadata::m_modeMetadata.defaultMode.structure[t5], t1 >+ loadp OpGetById::Metadata::m_modeMetadata.defaultMode.structureID[t5], t1 > loadConstantOrVariablePayload(size, t0, CellTag, t3, .opGetByIdSlow) > loadis OpGetById::Metadata::m_modeMetadata.defaultMode.cachedOffset[t5], t2 >- bineq JSCell::m_structureID[t3], t1, .opGetByIdSlow >+ bpneq JSCell::m_structureID[t3], t1, .opGetByIdSlow > loadPropertyAtVariableOffset(t2, t3, t0, t1) > valueProfile(OpGetById, t5, t0, t1) > return(t0, t1) >@@ -1399,8 +1399,8 @@ llintOpWithMetadata(op_put_by_id, OpPutB > metadata(t5, t3) > get(m_base, t3) > loadConstantOrVariablePayload(size, t3, CellTag, t0, .opPutByIdSlow) >- loadi JSCell::m_structureID[t0], t2 >- bineq t2, OpPutById::Metadata::m_oldStructure[t5], .opPutByIdSlow >+ loadp JSCell::m_structureID[t0], t2 >+ bpneq t2, OpPutById::Metadata::m_oldStructureID[t5], .opPutByIdSlow > > # At this point, we have: > # t5 -> metadata >@@ -1408,7 +1408,7 @@ llintOpWithMetadata(op_put_by_id, OpPutB > # t0 -> object base > # We will lose currentStructureID in the shenanigans below. 
> >- loadi OpPutById::Metadata::m_newStructure[t5], t1 >+ loadp OpPutById::Metadata::m_newStructureID[t5], t1 > > btiz t1, .opPutByIdNotTransition > >@@ -1417,7 +1417,7 @@ llintOpWithMetadata(op_put_by_id, OpPutB > loadp OpPutById::Metadata::m_structureChain[t5], t3 > btpz t3, .opPutByIdTransitionDirect > >- loadi OpPutById::Metadata::m_oldStructure[t5], t2 # Need old structure again. >+ loadp OpPutById::Metadata::m_oldStructureID[t5], t2 # Need old structure again. > loadp StructureChain::m_vector[t3], t3 > assert(macro (ok) btpnz t3, ok end) > >@@ -1431,10 +1431,10 @@ llintOpWithMetadata(op_put_by_id, OpPutB > btpnz t2, .opPutByIdTransitionChainLoop > > .opPutByIdTransitionChainDone: >- loadi OpPutById::Metadata::m_newStructure[t5], t1 >+ loadp OpPutById::Metadata::m_newStructureID[t5], t1 > > .opPutByIdTransitionDirect: >- storei t1, JSCell::m_structureID[t0] >+ storep t1, JSCell::m_structureID[t0] > get(m_value, t1) > loadConstantOrVariable(size, t1, t2, t3) > loadi OpPutById::Metadata::m_offset[t5], t1 >@@ -1507,8 +1507,8 @@ llintOpWithMetadata(op_get_by_val, OpGet > end) > > >-macro putByValOp(name, op) >- llintOpWithMetadata(op_%name%, op, macro (size, get, dispatch, metadata, return) >+macro putByValOp(opcodeName, opcodeStruct) >+ llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return) > macro contiguousPutByVal(storeCallback) > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds > .storeResult: >@@ -1518,7 +1518,7 @@ macro putByValOp(name, op) > > .outOfBounds: > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.vectorLength[t0], .opPutByValOutOfBounds >- storeb 1, %op%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] > addi 1, t3, t2 > storei t2, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0] > jmp .storeResult >@@ -1529,7 +1529,7 @@ macro putByValOp(name, op) > 
get(m_base, t0) > loadConstantOrVariablePayload(size, t0, CellTag, t1, .opPutByValSlow) > move t1, t2 >- arrayProfile(%op%::Metadata::m_arrayProfile, t2, t5, t0) >+ arrayProfile(%opcodeStruct%::Metadata::m_arrayProfile, t2, t5, t0) > get(m_property, t0) > loadConstantOrVariablePayload(size, t0, Int32Tag, t3, .opPutByValSlow) > loadp JSObject::m_butterfly[t1], t0 >@@ -1583,7 +1583,7 @@ macro putByValOp(name, op) > dispatch() > > .opPutByValArrayStorageEmpty: >- storeb 1, %op%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] > addi 1, ArrayStorage::m_numValuesInVector[t0] > bib t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .opPutByValArrayStorageStoreResult > addi 1, t3, t1 >@@ -1591,9 +1591,9 @@ macro putByValOp(name, op) > jmp .opPutByValArrayStorageStoreResult > > .opPutByValOutOfBounds: >- storeb 1, %op%::Metadata::m_arrayProfile.m_outOfBounds[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_outOfBounds[t5] > .opPutByValSlow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -1604,25 +1604,25 @@ putByValOp(put_by_val, OpPutByVal) > putByValOp(put_by_val_direct, OpPutByValDirect) > > >-macro llintJumpTrueOrFalseOp(name, op, conditionOp) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_condition, t1) > loadConstantOrVariablePayload(size, t1, BooleanTag, t0, .slow) > conditionOp(t0, .target) > dispatch() > > .target: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end > > >-macro equalNullJumpOp(name, op, cellHandler, immediateHandler) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, 
dispatch) >+macro equalNullJumpOp(opcodeName, opcodeStruct, cellHandler, immediateHandler) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_value, t0) > assertNotConstant(size, t0) > loadi TagOffset[cfr, t0, 8], t1 >@@ -1633,7 +1633,7 @@ macro equalNullJumpOp(name, op, cellHand > dispatch() > > .target: >- jump(m_target) >+ jump(m_targetLabel) > > .immediate: > ori 1, t1 >@@ -1665,7 +1665,7 @@ equalNullJumpOp(jneq_null, OpJneqNull, > > llintOpWithMetadata(op_jneq_ptr, OpJneqPtr, macro (size, get, dispatch, metadata, return) > get(m_value, t0) >- get(m_specialPointer, t1) >+ getu(size, OpJneqPtr, m_specialPointer, t1) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_globalObject[t2], t2 > bineq TagOffset[cfr, t0, 8], CellTag, .opJneqPtrBranch >@@ -1674,15 +1674,15 @@ llintOpWithMetadata(op_jneq_ptr, OpJneqP > .opJneqPtrBranch: > metadata(t5, t2) > storeb 1, OpJneqPtr::Metadata::m_hasJumped[t5] >- get(m_target, t0) >+ get(m_targetLabel, t0) > jumpImpl(t0) > .opJneqPtrFallThrough: > dispatch() > end) > > >-macro compareUnsignedJumpOp(name, op, integerCompare) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro compareUnsignedJumpOp(opcodeName, opcodeStruct, integerCompare) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariable(size, t2, t0, t1) >@@ -1691,13 +1691,13 @@ macro compareUnsignedJumpOp(name, op, in > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > end) > end > > >-macro compareUnsignedOp(name, op, integerCompareAndSet) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro compareUnsignedOp(opcodeName, opcodeStruct, integerCompareAndSet) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t2) > get(m_lhs, t0) > loadConstantOrVariable(size, t2, t3, t1) >@@ -1708,8 +1708,8 @@ macro compareUnsignedOp(name, op, 
intege > end > > >-macro compareJumpOp(name, op, integerCompare, doubleCompare) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro compareJumpOp(opcodeName, opcodeStruct, integerCompare, doubleCompare) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariable(size, t2, t0, t1) >@@ -1740,10 +1740,10 @@ macro compareJumpOp(name, op, integerCom > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end >@@ -1751,7 +1751,7 @@ end > > llintOpWithJump(op_switch_imm, OpSwitchImm, macro (size, get, jump, dispatch) > get(m_scrutinee, t2) >- get(m_tableIndex, t3) >+ getu(size, OpSwitchImm, m_tableIndex, t3) > loadConstantOrVariable(size, t2, t1, t0) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_rareData[t2], t2 >@@ -1779,7 +1779,7 @@ end) > > llintOpWithJump(op_switch_char, OpSwitchChar, macro (size, get, jump, dispatch) > get(m_scrutinee, t2) >- get(m_tableIndex, t3) >+ getu(size, OpSwitchChar, m_tableIndex, t3) > loadConstantOrVariable(size, t2, t1, t0) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_rareData[t2], t2 >@@ -1814,43 +1814,43 @@ llintOpWithJump(op_switch_char, OpSwitch > end) > > >-macro arrayProfileForCall(op, getu) >+macro arrayProfileForCall(opcodeStruct, getu) > getu(m_argv, t3) > negi t3 > bineq ThisArgumentOffset + TagOffset[cfr, t3, 8], CellTag, .done > loadi ThisArgumentOffset + PayloadOffset[cfr, t3, 8], t0 > loadp JSCell::m_structureID[t0], t0 >- storep t0, %op%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] >+ storep t0, %opcodeStruct%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] > .done: > end > >-macro commonCallOp(name, slowPath, op, prepareCall, prologue) >- llintOpWithMetadata(name, op, macro (size, get, dispatch, metadata, return) >+macro commonCallOp(opcodeName, slowPath, 
opcodeStruct, prepareCall, prologue) >+ llintOpWithMetadata(opcodeName, opcodeStruct, macro (size, get, dispatch, metadata, return) > metadata(t5, t0) > > prologue(macro (fieldName, dst) >- getu(size, op, fieldName, dst) >+ getu(size, opcodeStruct, fieldName, dst) > end, metadata) > > get(m_callee, t0) >- loadp %op%::Metadata::m_callLinkInfo.callee[t5], t2 >+ loadp %opcodeStruct%::Metadata::m_callLinkInfo.callee[t5], t2 > loadConstantOrVariablePayload(size, t0, CellTag, t3, .opCallSlow) > bineq t3, t2, .opCallSlow >- getu(size, op, m_argv, t3) >+ getu(size, opcodeStruct, m_argv, t3) > lshifti 3, t3 > negi t3 > addp cfr, t3 # t3 contains the new value of cfr > storei t2, Callee + PayloadOffset[t3] >- getu(size, op, m_argc, t2) >+ getu(size, opcodeStruct, m_argc, t2) > storei PC, ArgumentCount + TagOffset[cfr] > storei t2, ArgumentCount + PayloadOffset[t3] > storei CellTag, Callee + TagOffset[t3] > move t3, sp >- prepareCall(%op%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, op, dispatch, %op%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) > > .opCallSlow: >- slowPathForCall(size, op, dispatch, slowPath, prepareCall) >+ slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) > end) > end > >@@ -1905,8 +1905,8 @@ commonOp(llint_op_catch, macro() end, ma > andp MarkedBlockMask, t3 > loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3 > >- loadi VM::m_exception[t3], t0 >- storei 0, VM::m_exception[t3] >+ loadp VM::m_exception[t3], t0 >+ storep 0, VM::m_exception[t3] > get(size, OpCatch, m_exception, t2) > storei t0, PayloadOffset[cfr, t2, 8] > storei CellTag, TagOffset[cfr, t2, 8] >@@ -2013,7 +2013,7 @@ macro 
nativeCallTrampoline(executableOff > error > end > >- btinz VM::m_exception[t3], .handleException >+ btpnz VM::m_exception[t3], .handleException > > functionEpilogue() > ret >@@ -2068,7 +2068,7 @@ macro internalFunctionCallTrampoline(off > error > end > >- btinz VM::m_exception[t3], .handleException >+ btpnz VM::m_exception[t3], .handleException > > functionEpilogue() > ret >@@ -2113,7 +2113,7 @@ llintOpWithMetadata(op_resolve_scope, Op > end > > metadata(t5, t0) >- loadp OpResolveScope::Metadata::m_resolveType[t5], t0 >+ loadi OpResolveScope::Metadata::m_resolveType[t5], t0 > > #rGlobalProperty: > bineq t0, GlobalProperty, .rGlobalVar >@@ -2161,10 +2161,10 @@ llintOpWithMetadata(op_resolve_scope, Op > end) > > >-macro loadWithStructureCheck(op, get, operand, slowPath) >+macro loadWithStructureCheck(opcodeStruct, get, operand, slowPath) > get(m_scope, t0) > loadp PayloadOffset[cfr, t0, 8], t0 >- loadp %op%::Metadata::m_structure[t5], t1 >+ loadp %opcodeStruct%::Metadata::m_structure[t5], t1 > bpneq JSCell::m_structureID[t0], t1, slowPath > end > >@@ -2195,7 +2195,7 @@ llintOpWithMetadata(op_get_from_scope, O > end > > metadata(t5, t0) >- loadi OpGetFromScope::Metadata::m_getPutInfo[t5], t0 >+ loadi OpGetFromScope::Metadata::m_getPutInfo + GetPutInfo::m_operand[t5], t0 > andi ResolveTypeMask, t0 > > #gGlobalProperty: >@@ -2253,7 +2253,7 @@ llintOpWithMetadata(op_put_to_scope, OpP > macro putProperty() > get(m_value, t1) > loadConstantOrVariable(size, t1, t2, t3) >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storePropertyAtVariableOffset(t1, t0, t2, t3) > end > >@@ -2270,7 +2270,7 @@ llintOpWithMetadata(op_put_to_scope, OpP > macro putClosureVar() > get(m_value, t1) > loadConstantOrVariable(size, t1, t2, t3) >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storei t2, JSLexicalEnvironment_variables + TagOffset[t0, t1, 8] > storei t3, 
JSLexicalEnvironment_variables + PayloadOffset[t0, t1, 8] > end >@@ -2282,14 +2282,14 @@ llintOpWithMetadata(op_put_to_scope, OpP > btpz t1, .noVariableWatchpointSet > notifyWrite(t1, .pDynamic) > .noVariableWatchpointSet: >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storei t2, JSLexicalEnvironment_variables + TagOffset[t0, t1, 8] > storei t3, JSLexicalEnvironment_variables + PayloadOffset[t0, t1, 8] > end > > > metadata(t5, t0) >- loadi OpPutToScope::Metadata::m_getPutInfo[t5], t0 >+ loadi OpPutToScope::Metadata::m_getPutInfo + GetPutInfo::m_operand[t5], t0 > andi ResolveTypeMask, t0 > > #pLocalClosureVar: >@@ -2368,7 +2368,7 @@ end) > llintOpWithProfile(op_get_from_arguments, OpGetFromArguments, macro (size, get, dispatch, return) > get(m_arguments, t0) > loadi PayloadOffset[cfr, t0, 8], t0 >- get(m_index, t1) >+ getu(size, OpGetFromArguments, m_index, t1) > loadi DirectArguments_storage + TagOffset[t0, t1, 8], t2 > loadi DirectArguments_storage + PayloadOffset[t0, t1, 8], t3 > return(t2, t3) >@@ -2381,7 +2381,7 @@ llintOp(op_put_to_arguments, OpPutToArgu > loadi PayloadOffset[cfr, t0, 8], t0 > get(m_value, t1) > loadConstantOrVariable(size, t1, t2, t3) >- get(m_index, t1) >+ getu(size, OpPutToArguments, m_index, t1) > storei t2, DirectArguments_storage + TagOffset[t0, t1, 8] > storei t3, DirectArguments_storage + PayloadOffset[t0, t1, 8] > dispatch() >@@ -2404,7 +2404,7 @@ llintOpWithMetadata(op_profile_type, OpP > loadp VM::m_typeProfilerLog[t1], t1 > > # t0 is holding the payload, t5 is holding the tag. 
>- get(m_target, t2) >+ get(m_targetVirtualRegister, t2) > loadConstantOrVariable(size, t2, t5, t0) > > bieq t5, EmptyValueTag, .opProfileTypeDone >@@ -2425,7 +2425,7 @@ llintOpWithMetadata(op_profile_type, OpP > storei 0, TypeProfilerLog::LogEntry::structureID[t2] > jmp .opProfileTypeSkipIsCell > .opProfileTypeIsCell: >- loadi JSCell::m_structureID[t0], t3 >+ loadp JSCell::m_structureID[t0], t3 > storei t3, TypeProfilerLog::LogEntry::structureID[t2] > .opProfileTypeSkipIsCell: > >@@ -2456,7 +2456,7 @@ end) > llintOpWithReturn(op_get_rest_length, OpGetRestLength, macro (size, get, dispatch, return) > loadi PayloadOffset + ArgumentCount[cfr], t0 > subi 1, t0 >- get(m_numParametersToSkip, t1) >+ getu(size, OpGetRestLength, m_numParametersToSkip, t1) > bilteq t0, t1, .storeZero > subi t1, t0 > jmp .finish >Index: Source/JavaScriptCore/llint/LowLevelInterpreter64.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (revision 240135) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (working copy) >@@ -36,20 +36,20 @@ macro nextInstructionWide() > jmp [t1, t0, PtrSize], BytecodePtrTag > end > >-macro getuOperandNarrow(op, fieldName, dst) >- loadb constexpr %op%_%fieldName%_index[PB, PC, 1], dst >+macro getuOperandNarrow(opcodeStruct, fieldName, dst) >+ loadb constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst > end > >-macro getOperandNarrow(op, fieldName, dst) >- loadbsp constexpr %op%_%fieldName%_index[PB, PC, 1], dst >+macro getOperandNarrow(opcodeStruct, fieldName, dst) >+ loadbsp constexpr %opcodeStruct%_%fieldName%_index[PB, PC, 1], dst > end > >-macro getuOperandWide(op, fieldName, dst) >- loadi constexpr %op%_%fieldName%_index * 4 + 1[PB, PC, 1], dst >+macro getuOperandWide(opcodeStruct, fieldName, dst) >+ loadi constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst > end > >-macro getOperandWide(op, fieldName, dst) >- loadis constexpr 
%op%_%fieldName%_index * 4 + 1[PB, PC, 1], dst >+macro getOperandWide(opcodeStruct, fieldName, dst) >+ loadis constexpr %opcodeStruct%_%fieldName%_index * 4 + 1[PB, PC, 1], dst > end > > macro makeReturn(get, dispatch, fn) >@@ -61,30 +61,30 @@ macro makeReturn(get, dispatch, fn) > end) > end > >-macro makeReturnProfiled(op, get, metadata, dispatch, fn) >+macro makeReturnProfiled(opcodeStruct, get, metadata, dispatch, fn) > fn(macro (value) > move value, t3 > metadata(t1, t2) >- valueProfile(op, t1, t3) >+ valueProfile(opcodeStruct, t1, t3) > get(m_dst, t1) > storeq t3, [cfr, t1, 8] > dispatch() > end) > end > >-macro valueProfile(op, metadata, value) >- storeq value, %op%::Metadata::m_profile.m_buckets[metadata] >+macro valueProfile(opcodeStruct, metadata, value) >+ storeq value, %opcodeStruct%::Metadata::m_profile.m_buckets[metadata] > end > >-macro dispatchAfterCall(size, op, dispatch) >+macro dispatchAfterCall(size, opcodeStruct, dispatch) > loadi ArgumentCount + TagOffset[cfr], PC > loadp CodeBlock[cfr], PB > loadp CodeBlock::m_instructionsRawPointer[PB], PB > unpoison(_g_CodeBlockPoison, PB, t1) >- get(size, op, m_dst, t1) >+ get(size, opcodeStruct, m_dst, t1) > storeq r0, [cfr, t1, 8] >- metadata(size, op, t2, t1) >- valueProfile(op, t2, r0) >+ metadata(size, opcodeStruct, t2, t1) >+ valueProfile(opcodeStruct, t2, r0) > dispatch() > end > >@@ -710,7 +710,7 @@ end) > > > llintOp(op_check_tdz, OpCheckTdz, macro (size, get, dispatch) >- get(m_target, t0) >+ get(m_targetVirtualRegister, t0) > loadConstantOrVariable(size, t0, t1) > bqneq t1, ValueEmpty, .opNotTDZ > callSlowPath(_slow_path_throw_tdz_error) >@@ -741,8 +741,8 @@ llintOpWithReturn(op_not, OpNot, macro ( > end) > > >-macro equalityComparisonOp(name, op, integerComparison) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro equalityComparisonOp(opcodeName, opcodeStruct, integerComparison) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) 
> get(m_rhs, t0) > get(m_lhs, t2) > loadConstantOrVariableInt32(size, t0, t1, .slow) >@@ -752,14 +752,14 @@ macro equalityComparisonOp(name, op, int > return(t0) > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > > >-macro equalNullComparisonOp(name, op, fn) >- llintOpWithReturn(name, op, macro (size, get, dispatch, return) >+macro equalNullComparisonOp(opcodeName, opcodeStruct, fn) >+ llintOpWithReturn(opcodeName, opcodeStruct, macro (size, get, dispatch, return) > get(m_operand, t0) > loadq [cfr, t0, 8], t0 > btqnz t0, tagMask, .immediate >@@ -799,8 +799,8 @@ llintOpWithReturn(op_is_undefined_or_nul > end) > > >-macro strictEqOp(name, op, equalityOperation) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro strictEqOp(opcodeName, opcodeStruct, equalityOperation) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t0) > get(m_lhs, t2) > loadConstantOrVariable(size, t0, t1) >@@ -819,7 +819,7 @@ macro strictEqOp(name, op, equalityOpera > return(t0) > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -833,8 +833,8 @@ strictEqOp(nstricteq, OpNstricteq, > macro (left, right, result) cqneq left, right, result end) > > >-macro strictEqualityJumpOp(name, op, equalityOperation) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro strictEqualityJumpOp(opcodeName, opcodeStruct, equalityOperation) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariable(size, t2, t0) >@@ -852,10 +852,10 @@ macro strictEqualityJumpOp(name, op, equ > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end >@@ -869,8 +869,8 @@ 
strictEqualityJumpOp(jnstricteq, OpJnstr > macro (left, right, target) bqneq left, right, target end) > > >-macro preOp(name, op, arithmeticOperation) >- llintOp(op_%name%, op, macro (size, get, dispatch) >+macro preOp(opcodeName, opcodeStruct, arithmeticOperation) >+ llintOp(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch) > get(m_srcDst, t0) > loadq [cfr, t0, 8], t1 > bqb t1, tagTypeNumber, .slow >@@ -879,7 +879,7 @@ macro preOp(name, op, arithmeticOperatio > storeq t1, [cfr, t0, 8] > dispatch() > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -929,19 +929,19 @@ llintOpWithMetadata(op_negate, OpNegate, > get(m_operand, t0) > loadConstantOrVariable(size, t0, t3) > metadata(t1, t2) >- loadis OpNegate::Metadata::m_arithProfile[t1], t2 >+ loadi OpNegate::Metadata::m_arithProfile + ArithProfile::m_bits[t1], t2 > bqb t3, tagTypeNumber, .opNegateNotInt > btiz t3, 0x7fffffff, .opNegateSlow > negi t3 > orq tagTypeNumber, t3 > ori ArithProfileInt, t2 >- storei t2, OpNegate::Metadata::m_arithProfile[t1] >+ storei t2, OpNegate::Metadata::m_arithProfile + ArithProfile::m_bits[t1] > return(t3) > .opNegateNotInt: > btqz t3, tagTypeNumber, .opNegateSlow > xorq 0x8000000000000000, t3 > ori ArithProfileNumber, t2 >- storei t2, OpNegate::Metadata::m_arithProfile[t1] >+ storei t2, OpNegate::Metadata::m_arithProfile + ArithProfile::m_bits[t1] > return(t3) > > .opNegateSlow: >@@ -950,12 +950,12 @@ llintOpWithMetadata(op_negate, OpNegate, > end) > > >-macro binaryOpCustomStore(name, op, integerOperationAndStore, doubleOperation) >- llintOpWithMetadata(op_%name%, op, macro (size, get, dispatch, metadata, return) >+macro binaryOpCustomStore(opcodeName, opcodeStruct, integerOperationAndStore, doubleOperation) >+ llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return) > metadata(t5, t0) > > macro profile(type) >- ori type, %op%::Metadata::m_arithProfile[t5] >+ ori type, 
%opcodeStruct%::Metadata::m_arithProfile + ArithProfile::m_bits[t5] > end > > get(m_rhs, t0) >@@ -1007,7 +1007,7 @@ macro binaryOpCustomStore(name, op, inte > dispatch() > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -1052,8 +1052,8 @@ binaryOpCustomStore(mul, OpMul, > macro (left, right) muld left, right end) > > >-macro binaryOp(name, op, integerOperation, doubleOperation) >- binaryOpCustomStore(name, op, >+macro binaryOp(opcodeName, opcodeStruct, integerOperation, doubleOperation) >+ binaryOpCustomStore(opcodeName, opcodeStruct, > macro (left, right, slow, index) > integerOperation(left, right, slow) > orq tagTypeNumber, right >@@ -1083,8 +1083,8 @@ llintOpWithReturn(op_unsigned, OpUnsigne > end) > > >-macro commonBitOp(opKind, name, op, operation) >- opKind(op_%name%, op, macro (size, get, dispatch, return) >+macro commonBitOp(opKind, opcodeName, opcodeStruct, operation) >+ opKind(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_rhs, t0) > get(m_lhs, t2) > loadConstantOrVariable(size, t0, t1) >@@ -1096,17 +1096,17 @@ macro commonBitOp(opKind, name, op, oper > return(t0) > > .slow: >- callSlowPath(_slow_path_%name%) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end > >-macro bitOp(name, op, operation) >- commonBitOp(llintOpWithReturn, name, op, operation) >+macro bitOp(opcodeName, opcodeStruct, operation) >+ commonBitOp(llintOpWithReturn, opcodeName, opcodeStruct, operation) > end > >-macro bitOpProfiled(name, op, operation) >- commonBitOp(llintOpWithProfile, name, op, operation) >+macro bitOpProfiled(opcodeName, opcodeStruct, operation) >+ commonBitOp(llintOpWithProfile, opcodeName, opcodeStruct, operation) > end > > bitOp(lshift, OpLshift, >@@ -1269,7 +1269,7 @@ llintOpWithMetadata(op_get_by_id_direct, > get(m_base, t0) > loadConstantOrVariableCell(size, t0, t3, .opGetByIdDirectSlow) > loadi JSCell::m_structureID[t3], t1 >- loadi 
OpGetByIdDirect::Metadata::m_structure[t2], t0 >+ loadi OpGetByIdDirect::Metadata::m_structureID[t2], t0 > bineq t0, t1, .opGetByIdDirectSlow > loadi OpGetByIdDirect::Metadata::m_offset[t2], t1 > loadPropertyAtVariableOffset(t1, t3, t0) >@@ -1291,7 +1291,7 @@ llintOpWithMetadata(op_get_by_id, OpGetB > .opGetByIdDefault: > bbneq t1, constexpr GetByIdMode::Default, .opGetByIdProtoLoad > loadi JSCell::m_structureID[t3], t1 >- loadi OpGetById::Metadata::m_modeMetadata.defaultMode.structure[t2], t0 >+ loadi OpGetById::Metadata::m_modeMetadata.defaultMode.structureID[t2], t0 > bineq t0, t1, .opGetByIdSlow > loadis OpGetById::Metadata::m_modeMetadata.defaultMode.cachedOffset[t2], t1 > loadPropertyAtVariableOffset(t1, t3, t0) >@@ -1301,7 +1301,7 @@ llintOpWithMetadata(op_get_by_id, OpGetB > .opGetByIdProtoLoad: > bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength > loadi JSCell::m_structureID[t3], t1 >- loadi OpGetById::Metadata::m_modeMetadata.protoLoadMode.structure[t2], t3 >+ loadi OpGetById::Metadata::m_modeMetadata.protoLoadMode.structureID[t2], t3 > bineq t3, t1, .opGetByIdSlow > loadis OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedOffset[t2], t1 > loadp OpGetById::Metadata::m_modeMetadata.protoLoadMode.cachedSlot[t2], t3 >@@ -1324,7 +1324,7 @@ llintOpWithMetadata(op_get_by_id, OpGetB > > .opGetByIdUnset: > loadi JSCell::m_structureID[t3], t1 >- loadi OpGetById::Metadata::m_modeMetadata.unsetMode.structure[t2], t0 >+ loadi OpGetById::Metadata::m_modeMetadata.unsetMode.structureID[t2], t0 > bineq t0, t1, .opGetByIdSlow > valueProfile(OpGetById, t2, ValueUndefined) > return(ValueUndefined) >@@ -1339,7 +1339,7 @@ llintOpWithMetadata(op_put_by_id, OpPutB > get(m_base, t3) > loadConstantOrVariableCell(size, t3, t0, .opPutByIdSlow) > metadata(t5, t2) >- loadis OpPutById::Metadata::m_oldStructure[t5], t2 >+ loadi OpPutById::Metadata::m_oldStructureID[t5], t2 > bineq t2, JSCell::m_structureID[t0], .opPutByIdSlow > > # At this point, we have: >@@ 
-1347,7 +1347,7 @@ llintOpWithMetadata(op_put_by_id, OpPutB > # t2 -> current structure ID > # t5 -> metadata > >- loadi OpPutById::Metadata::m_newStructure[t5], t1 >+ loadi OpPutById::Metadata::m_newStructureID[t5], t1 > btiz t1, .opPutByIdNotTransition > > # This is the transition case. t1 holds the new structureID. t2 holds the old structure ID. >@@ -1380,7 +1380,7 @@ llintOpWithMetadata(op_put_by_id, OpPutB > > .opPutByIdTransitionChainDone: > # Reload the new structure, since we clobbered it above. >- loadi OpPutById::Metadata::m_newStructure[t5], t1 >+ loadi OpPutById::Metadata::m_newStructureID[t5], t1 > > .opPutByIdTransitionDirect: > storei t1, JSCell::m_structureID[t0] >@@ -1574,8 +1574,8 @@ llintOpWithMetadata(op_get_by_val, OpGet > end) > > >-macro putByValOp(name, op) >- llintOpWithMetadata(op_%name%, op, macro (size, get, dispatch, metadata, return) >+macro putByValOp(opcodeName, opcodeStruct) >+ llintOpWithMetadata(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, metadata, return) > macro contiguousPutByVal(storeCallback) > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .outOfBounds > .storeResult: >@@ -1585,7 +1585,7 @@ macro putByValOp(name, op) > > .outOfBounds: > biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.vectorLength[t0], .opPutByValOutOfBounds >- storeb 1, %op%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] > addi 1, t3, t2 > storei t2, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0] > jmp .storeResult >@@ -1595,7 +1595,7 @@ macro putByValOp(name, op) > loadConstantOrVariableCell(size, t0, t1, .opPutByValSlow) > move t1, t2 > metadata(t5, t0) >- arrayProfile(%op%::Metadata::m_arrayProfile, t2, t5, t0) >+ arrayProfile(%opcodeStruct%::Metadata::m_arrayProfile, t2, t5, t0) > get(m_property, t0) > loadConstantOrVariableInt32(size, t0, t3, .opPutByValSlow) > sxi2q t3, t3 >@@ -1650,7 +1650,7 @@ 
macro putByValOp(name, op) > dispatch() > > .opPutByValArrayStorageEmpty: >- storeb 1, %op%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_mayStoreToHole[t5] > addi 1, ArrayStorage::m_numValuesInVector[t0] > bib t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], .opPutByValArrayStorageStoreResult > addi 1, t3, t1 >@@ -1658,9 +1658,9 @@ macro putByValOp(name, op) > jmp .opPutByValArrayStorageStoreResult > > .opPutByValOutOfBounds: >- storeb 1, %op%::Metadata::m_arrayProfile.m_outOfBounds[t5] >+ storeb 1, %opcodeStruct%::Metadata::m_arrayProfile.m_outOfBounds[t5] > .opPutByValSlow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -1670,8 +1670,8 @@ putByValOp(put_by_val, OpPutByVal) > putByValOp(put_by_val_direct, OpPutByValDirect) > > >-macro llintJumpTrueOrFalseOp(name, op, conditionOp) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro llintJumpTrueOrFalseOp(opcodeName, opcodeStruct, conditionOp) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_condition, t1) > loadConstantOrVariable(size, t1, t0) > btqnz t0, ~0xf, .slow >@@ -1679,17 +1679,17 @@ macro llintJumpTrueOrFalseOp(name, op, c > dispatch() > > .target: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end > > >-macro equalNullJumpOp(name, op, cellHandler, immediateHandler) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro equalNullJumpOp(opcodeName, opcodeStruct, cellHandler, immediateHandler) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_value, t0) > assertNotConstant(size, t0) > loadq [cfr, t0, 8], t0 >@@ -1699,7 +1699,7 @@ macro equalNullJumpOp(name, op, cellHand > dispatch() > > .target: >- 
jump(m_target) >+ jump(m_targetLabel) > > .immediate: > andq ~TagBitUndefined, t0 >@@ -1731,7 +1731,7 @@ equalNullJumpOp(jneq_null, OpJneqNull, > > llintOpWithMetadata(op_jneq_ptr, OpJneqPtr, macro (size, get, dispatch, metadata, return) > get(m_value, t0) >- get(m_specialPointer, t1) >+ getu(size, OpJneqPtr, m_specialPointer, t1) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_globalObject[t2], t2 > loadp JSGlobalObject::m_specialPointers[t2, t1, PtrSize], t1 >@@ -1741,13 +1741,13 @@ llintOpWithMetadata(op_jneq_ptr, OpJneqP > .opJneqPtrTarget: > metadata(t5, t0) > storeb 1, OpJneqPtr::Metadata::m_hasJumped[t5] >- get(m_target, t0) >+ get(m_targetLabel, t0) > jumpImpl(t0) > end) > > >-macro compareJumpOp(name, op, integerCompare, doubleCompare) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro compareJumpOp(opcodeName, opcodeStruct, integerCompare, doubleCompare) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariable(size, t2, t0) >@@ -1781,17 +1781,17 @@ macro compareJumpOp(name, op, integerCom > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end > > >-macro equalityJumpOp(name, op, integerComparison) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro equalityJumpOp(opcodeName, opcodeStruct, integerComparison) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariableInt32(size, t2, t0, .slow) >@@ -1800,17 +1800,17 @@ macro equalityJumpOp(name, op, integerCo > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > > .slow: >- callSlowPath(_llint_slow_path_%name%) >+ callSlowPath(_llint_slow_path_%opcodeName%) > nextInstruction() > end) > end > > >-macro compareUnsignedJumpOp(name, 
op, integerCompareMacro) >- llintOpWithJump(op_%name%, op, macro (size, get, jump, dispatch) >+macro compareUnsignedJumpOp(opcodeName, opcodeStruct, integerCompareMacro) >+ llintOpWithJump(op_%opcodeName%, opcodeStruct, macro (size, get, jump, dispatch) > get(m_lhs, t2) > get(m_rhs, t3) > loadConstantOrVariable(size, t2, t0) >@@ -1819,13 +1819,13 @@ macro compareUnsignedJumpOp(name, op, in > dispatch() > > .jumpTarget: >- jump(m_target) >+ jump(m_targetLabel) > end) > end > > >-macro compareUnsignedOp(name, op, integerCompareAndSet) >- llintOpWithReturn(op_%name%, op, macro (size, get, dispatch, return) >+macro compareUnsignedOp(opcodeName, opcodeStruct, integerCompareAndSet) >+ llintOpWithReturn(op_%opcodeName%, opcodeStruct, macro (size, get, dispatch, return) > get(m_lhs, t2) > get(m_rhs, t0) > loadConstantOrVariable(size, t0, t1) >@@ -1839,7 +1839,7 @@ end > > llintOpWithJump(op_switch_imm, OpSwitchImm, macro (size, get, jump, dispatch) > get(m_scrutinee, t2) >- get(m_tableIndex, t3) >+ getu(size, OpSwitchImm, m_tableIndex, t3) > loadConstantOrVariable(size, t2, t1) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_rareData[t2], t2 >@@ -1867,7 +1867,7 @@ end) > > llintOpWithJump(op_switch_char, OpSwitchChar, macro (size, get, jump, dispatch) > get(m_scrutinee, t2) >- get(m_tableIndex, t3) >+ getu(size, OpSwitchChar, m_tableIndex, t3) > loadConstantOrVariable(size, t2, t1) > loadp CodeBlock[cfr], t2 > loadp CodeBlock::m_rareData[t2], t2 >@@ -1903,49 +1903,49 @@ end) > > > # we assume t5 contains the metadata, and we should not scratch that >-macro arrayProfileForCall(op, getu) >+macro arrayProfileForCall(opcodeStruct, getu) > getu(m_argv, t3) > negp t3 > loadq ThisArgumentOffset[cfr, t3, 8], t0 > btqnz t0, tagMask, .done > loadi JSCell::m_structureID[t0], t3 >- storei t3, %op%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] >+ storei t3, %opcodeStruct%::Metadata::m_arrayProfile.m_lastSeenStructureID[t5] > .done: > end > >-macro commonCallOp(name, slowPath, op, 
prepareCall, prologue) >- llintOpWithMetadata(name, op, macro (size, get, dispatch, metadata, return) >+macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue) >+ llintOpWithMetadata(opcodeName, opcodeStruct, macro (size, get, dispatch, metadata, return) > metadata(t5, t0) > > prologue(macro (fieldName, dst) >- getu(size, op, fieldName, dst) >+ getu(size, opcodeStruct, fieldName, dst) > end, metadata) > > get(m_callee, t0) >- loadp %op%::Metadata::m_callLinkInfo.callee[t5], t2 >+ loadp %opcodeStruct%::Metadata::m_callLinkInfo.callee[t5], t2 > loadConstantOrVariable(size, t0, t3) > bqneq t3, t2, .opCallSlow >- getu(size, op, m_argv, t3) >+ getu(size, opcodeStruct, m_argv, t3) > lshifti 3, t3 > negp t3 > addp cfr, t3 > storeq t2, Callee[t3] >- getu(size, op, m_argc, t2) >+ getu(size, opcodeStruct, m_argc, t2) > storei PC, ArgumentCount + TagOffset[cfr] > storei t2, ArgumentCount + PayloadOffset[t3] > move t3, sp > if POISON > loadp _g_JITCodePoison, t2 >- xorp %op%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2 >+ xorp %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2 > prepareCall(t2, t1, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, op, dispatch, t2, JSEntryPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, t2, JSEntryPtrTag) > else >- prepareCall(%op%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >- callTargetFunction(size, op, dispatch, %op%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) >+ prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.machineCodeTarget[t5], JSEntryPtrTag) > end > > .opCallSlow: >- slowPathForCall(size, op, dispatch, slowPath, prepareCall) >+ slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) > end) > end > >@@ -2170,7 +2170,7 @@ llintOpWithMetadata(op_resolve_scope, Op > 
return(t0) > end > >- loadp OpResolveScope::Metadata::m_resolveType[t5], t0 >+ loadi OpResolveScope::Metadata::m_resolveType[t5], t0 > > #rGlobalProperty: > bineq t0, GlobalProperty, .rGlobalVar >@@ -2218,11 +2218,11 @@ llintOpWithMetadata(op_resolve_scope, Op > end) > > >-macro loadWithStructureCheck(op, get, slowPath) >+macro loadWithStructureCheck(opcodeStruct, get, slowPath) > get(m_scope, t0) > loadq [cfr, t0, 8], t0 > loadStructureWithScratch(t0, t2, t1, t3) >- loadp %op%::Metadata::m_structure[t5], t1 >+ loadp %opcodeStruct%::Metadata::m_structure[t5], t1 > bpneq t2, t1, slowPath > end > >@@ -2251,7 +2251,7 @@ llintOpWithMetadata(op_get_from_scope, O > return(t0) > end > >- loadi OpGetFromScope::Metadata::m_getPutInfo[t5], t0 >+ loadi OpGetFromScope::Metadata::m_getPutInfo + GetPutInfo::m_operand[t5], t0 > andi ResolveTypeMask, t0 > > #gGlobalProperty: >@@ -2309,7 +2309,7 @@ llintOpWithMetadata(op_put_to_scope, OpP > macro putProperty() > get(m_value, t1) > loadConstantOrVariable(size, t1, t2) >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storePropertyAtVariableOffset(t1, t0, t2) > end > >@@ -2325,7 +2325,7 @@ llintOpWithMetadata(op_put_to_scope, OpP > macro putClosureVar() > get(m_value, t1) > loadConstantOrVariable(size, t1, t2) >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storeq t2, JSLexicalEnvironment_variables[t0, t1, 8] > end > >@@ -2336,12 +2336,12 @@ llintOpWithMetadata(op_put_to_scope, OpP > btpz t3, .noVariableWatchpointSet > notifyWrite(t3, .pDynamic) > .noVariableWatchpointSet: >- loadis OpPutToScope::Metadata::m_operand[t5], t1 >+ loadp OpPutToScope::Metadata::m_operand[t5], t1 > storeq t2, JSLexicalEnvironment_variables[t0, t1, 8] > end > > macro checkTDZInGlobalPutToScopeIfNecessary() >- loadis OpPutToScope::Metadata::m_getPutInfo[t5], t0 >+ loadi OpPutToScope::Metadata::m_getPutInfo + GetPutInfo::m_operand[t5], t0 > andi 
InitializationModeMask, t0 > rshifti InitializationModeShift, t0 > bineq t0, NotInitialization, .noNeedForTDZCheck >@@ -2352,7 +2352,7 @@ llintOpWithMetadata(op_put_to_scope, OpP > end > > metadata(t5, t0) >- loadi OpPutToScope::Metadata::m_getPutInfo[t5], t0 >+ loadi OpPutToScope::Metadata::m_getPutInfo + GetPutInfo::m_operand[t5], t0 > andi ResolveTypeMask, t0 > > #pLocalClosureVar: >@@ -2466,7 +2466,7 @@ llintOpWithMetadata(op_profile_type, OpP > loadp TypeProfilerLog::m_currentLogEntryPtr[t1], t2 > > # t0 is holding the JSValue argument. >- get(m_target, t3) >+ get(m_targetVirtualRegister, t3) > loadConstantOrVariable(size, t3, t0) > > bqeq t0, ValueEmpty, .opProfileTypeDone >Index: Source/JavaScriptCore/llint/LowLevelInterpreter.asm >=================================================================== >--- Source/JavaScriptCore/llint/LowLevelInterpreter.asm (revision 240135) >+++ Source/JavaScriptCore/llint/LowLevelInterpreter.asm (working copy) >@@ -285,36 +285,36 @@ else > end > end > >-macro dispatch(advance) >- addp advance, PC >+macro dispatch(advanceReg) >+ addp advanceReg, PC > nextInstruction() > end > >-macro dispatchIndirect(offset) >- dispatch(offset) >+macro dispatchIndirect(offsetReg) >+ dispatch(offsetReg) > end > >-macro dispatchOp(size, op) >+macro dispatchOp(size, opcodeName) > macro dispatchNarrow() >- dispatch(constexpr %op%_length) >+ dispatch(constexpr %opcodeName%_length) > end > > macro dispatchWide() >- dispatch(constexpr %op%_length * 4 + 1) >+ dispatch(constexpr %opcodeName%_length * 4 + 1) > end > > size(dispatchNarrow, dispatchWide, macro (dispatch) dispatch() end) > end > >-macro getu(size, op, fieldName, dst) >+macro getu(size, opcodeStruct, fieldName, dst) > size(getuOperandNarrow, getuOperandWide, macro (getu) >- getu(op, fieldName, dst) >+ getu(opcodeStruct, fieldName, dst) > end) > end > >-macro get(size, op, fieldName, dst) >+macro get(size, opcodeStruct, fieldName, dst) > size(getOperandNarrow, getOperandWide, macro (get) >- 
get(op, fieldName, dst) >+ get(opcodeStruct, fieldName, dst) > end) > end > >@@ -334,9 +334,9 @@ macro metadata(size, opcode, dst, scratc > addp metadataTable, dst # return &metadataTable[offset] > end > >-macro jumpImpl(target) >- btiz target, .outOfLineJumpTarget >- dispatchIndirect(target) >+macro jumpImpl(targetOffsetReg) >+ btiz targetOffsetReg, .outOfLineJumpTarget >+ dispatchIndirect(targetOffsetReg) > .outOfLineJumpTarget: > callSlowPath(_llint_slow_path_out_of_line_jump_target) > nextInstruction() >@@ -358,39 +358,39 @@ macro op(l, fn) > end) > end > >-macro llintOp(name, op, fn) >- commonOp(llint_%name%, traceExecution, macro(size) >+macro llintOp(opcodeName, opcodeStruct, fn) >+ commonOp(llint_%opcodeName%, traceExecution, macro(size) > macro getImpl(fieldName, dst) >- get(size, op, fieldName, dst) >+ get(size, opcodeStruct, fieldName, dst) > end > > macro dispatchImpl() >- dispatchOp(size, name) >+ dispatchOp(size, opcodeName) > end > > fn(size, getImpl, dispatchImpl) > end) > end > >-macro llintOpWithReturn(name, op, fn) >- llintOp(name, op, macro(size, get, dispatch) >+macro llintOpWithReturn(opcodeName, opcodeStruct, fn) >+ llintOp(opcodeName, opcodeStruct, macro(size, get, dispatch) > makeReturn(get, dispatch, macro (return) > fn(size, get, dispatch, return) > end) > end) > end > >-macro llintOpWithMetadata(name, op, fn) >- llintOpWithReturn(name, op, macro (size, get, dispatch, return) >+macro llintOpWithMetadata(opcodeName, opcodeStruct, fn) >+ llintOpWithReturn(opcodeName, opcodeStruct, macro (size, get, dispatch, return) > macro meta(dst, scratch) >- metadata(size, op, dst, scratch) >+ metadata(size, opcodeStruct, dst, scratch) > end > fn(size, get, dispatch, meta, return) > end) > end > >-macro llintOpWithJump(name, op, impl) >- llintOpWithMetadata(name, op, macro(size, get, dispatch, metadata, return) >+macro llintOpWithJump(opcodeName, opcodeStruct, impl) >+ llintOpWithMetadata(opcodeName, opcodeStruct, macro(size, get, dispatch, metadata, 
return) > macro jump(fieldName) > get(fieldName, t0) > jumpImpl(t0) >@@ -400,9 +400,9 @@ macro llintOpWithJump(name, op, impl) > end) > end > >-macro llintOpWithProfile(name, op, fn) >- llintOpWithMetadata(name, op, macro(size, get, dispatch, metadata, return) >- makeReturnProfiled(op, get, metadata, dispatch, macro (returnProfiled) >+macro llintOpWithProfile(opcodeName, opcodeStruct, fn) >+ llintOpWithMetadata(opcodeName, opcodeStruct, macro(size, get, dispatch, metadata, return) >+ makeReturnProfiled(opcodeStruct, get, metadata, dispatch, macro (returnProfiled) > fn(size, get, dispatch, returnProfiled) > end) > end) >@@ -895,14 +895,14 @@ macro traceExecution() > end > end > >-macro callTargetFunction(size, op, dispatch, callee, callPtrTag) >+macro callTargetFunction(size, opcodeStruct, dispatch, callee, callPtrTag) > if C_LOOP > cloopCallJSFunction callee > else > call callee, callPtrTag > end > restoreStackPointerAfterCall() >- dispatchAfterCall(size, op, dispatch) >+ dispatchAfterCall(size, opcodeStruct, dispatch) > end > > macro prepareForRegularCall(callee, temp1, temp2, temp3, callPtrTag) >@@ -915,7 +915,7 @@ macro prepareForTailCall(callee, temp1, > > loadi PayloadOffset + ArgumentCount[cfr], temp2 > loadp CodeBlock[cfr], temp1 >- loadp CodeBlock::m_numParameters[temp1], temp1 >+ loadi CodeBlock::m_numParameters[temp1], temp1 > bilteq temp1, temp2, .noArityFixup > move temp1, temp2 > >@@ -970,7 +970,7 @@ macro prepareForTailCall(callee, temp1, > jmp callee, callPtrTag > end > >-macro slowPathForCall(size, op, dispatch, slowPath, prepareCall) >+macro slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) > callCallSlowPath( > slowPath, > # Those are r0 and r1 >@@ -979,7 +979,7 @@ macro slowPathForCall(size, op, dispatch > move calleeFramePtr, sp > prepareCall(callee, t2, t3, t4, SlowPathPtrTag) > .dontUpdateSP: >- callTargetFunction(size, op, dispatch, callee, SlowPathPtrTag) >+ callTargetFunction(size, opcodeStruct, dispatch, callee, 
SlowPathPtrTag) > end) > end > >@@ -1192,8 +1192,8 @@ macro functionInitialization(profileArgS > loadp CodeBlock::m_argumentValueProfiles + RefCountedArray::m_data[t1], t3 > btpz t3, .argumentProfileDone # When we can't JIT, we don't allocate any argument value profiles. > mulp sizeof ValueProfile, t0, t2 # Aaaaahhhh! Need strength reduction! >- lshiftp 3, t0 >- addp t2, t3 >+ lshiftp 3, t0 # offset of last JSValue arguments on the stack. >+ addp t2, t3 # pointer to end of ValueProfile array in CodeBlock::m_argumentValueProfiles. > .argumentProfileLoop: > if JSVALUE64 > loadq ThisArgumentOffset - 8 + profileArgSkip * 8[cfr, t0], t2 >@@ -1433,9 +1433,9 @@ end > > > # Value-representation-agnostic code. >-macro slowPathOp(op) >- llintOp(op_%op%, unused, macro (unused, unused, dispatch) >- callSlowPath(_slow_path_%op%) >+macro slowPathOp(opcodeName) >+ llintOp(op_%opcodeName%, unused, macro (unused, unused, dispatch) >+ callSlowPath(_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -1481,9 +1481,9 @@ slowPathOp(to_index_string) > slowPathOp(typeof) > slowPathOp(unreachable) > >-macro llintSlowPathOp(op) >- llintOp(op_%op%, unused, macro (unused, unused, dispatch) >- callSlowPath(_llint_slow_path_%op%) >+macro llintSlowPathOp(opcodeName) >+ llintOp(op_%opcodeName%, unused, macro (unused, unused, dispatch) >+ callSlowPath(_llint_slow_path_%opcodeName%) > dispatch() > end) > end >@@ -1538,7 +1538,7 @@ compareUnsignedOp(beloweq, OpBeloweq, > > > llintOpWithJump(op_jmp, OpJmp, macro (size, get, jump, dispatch) >- jump(m_target) >+ jump(m_targetLabel) > end) > > >@@ -1674,8 +1674,8 @@ commonCallOp(op_call, _llint_slow_path_c > end) > > >-macro callOp(name, op, prepareCall, fn) >- commonCallOp(op_%name%, _llint_slow_path_%name%, op, prepareCall, fn) >+macro callOp(opcodeName, opcodeStruct, prepareCall, fn) >+ commonCallOp(op_%opcodeName%, _llint_slow_path_%opcodeName%, opcodeStruct, prepareCall, fn) > end > > >@@ -1690,7 +1690,7 @@ end) > callOp(construct, OpConstruct, 
prepareForRegularCall, macro (getu, metadata) end) > > >-macro doCallVarargs(size, op, dispatch, frameSlowPath, slowPath, prepareCall) >+macro doCallVarargs(size, opcodeStruct, dispatch, frameSlowPath, slowPath, prepareCall) > callSlowPath(frameSlowPath) > branchIfException(_llint_throw_from_slow_path_trampoline) > # calleeFrame in r1 >@@ -1705,7 +1705,7 @@ macro doCallVarargs(size, op, dispatch, > subp r1, CallerFrameAndPCSize, sp > end > end >- slowPathForCall(size, op, dispatch, slowPath, prepareCall) >+ slowPathForCall(size, opcodeStruct, dispatch, slowPath, prepareCall) > end > > >Index: Source/JavaScriptCore/runtime/CommonSlowPaths.cpp >=================================================================== >--- Source/JavaScriptCore/runtime/CommonSlowPaths.cpp (revision 240135) >+++ Source/JavaScriptCore/runtime/CommonSlowPaths.cpp (working copy) >@@ -125,8 +125,8 @@ namespace JSC { > bool bCondition = (condition); \ > CHECK_EXCEPTION(); \ > if (bCondition) \ >- pc = bytecode.m_target \ >- ? reinterpret_cast<const Instruction*>(reinterpret_cast<const uint8_t*>(pc) + bytecode.m_target) \ >+ pc = bytecode.m_targetLabel \ >+ ? reinterpret_cast<const Instruction*>(reinterpret_cast<const uint8_t*>(pc) + bytecode.m_targetLabel) \ > : exec->codeBlock()->outOfLineJumpTarget(pc); \ > else \ > pc = reinterpret_cast<const Instruction*>(reinterpret_cast<const uint8_t*>(pc) + pc->size()); \ >Index: Source/JavaScriptCore/runtime/GetPutInfo.h >=================================================================== >--- Source/JavaScriptCore/runtime/GetPutInfo.h (revision 240135) >+++ Source/JavaScriptCore/runtime/GetPutInfo.h (working copy) >@@ -1,5 +1,5 @@ > /* >- * Copyright (C) 2015-2018 Apple Inc. All Rights Reserved. >+ * Copyright (C) 2015-2019 Apple Inc. All Rights Reserved. 
> * > * Redistribution and use in source and binary forms, with or without > * modification, are permitted provided that the following conditions >@@ -38,7 +38,7 @@ enum ResolveMode { > DoNotThrowIfNotFound > }; > >-enum ResolveType { >+enum ResolveType : unsigned { > // Lexical scope guaranteed a certain type of variable access. > GlobalProperty, > GlobalVar, >@@ -210,6 +210,8 @@ public: > static const unsigned modeBits = ((1 << 30) - 1) & ~initializationBits & ~typeBits; > static_assert((modeBits & initializationBits & typeBits) == 0x0, "There should be no intersection between ResolveMode ResolveType and InitializationMode"); > >+ GetPutInfo() = default; >+ > GetPutInfo(ResolveMode resolveMode, ResolveType resolveType, InitializationMode initializationMode) > : m_operand((resolveMode << modeShift) | (static_cast<unsigned>(initializationMode) << initializationShift) | resolveType) > { >@@ -228,7 +230,9 @@ public: > void dump(PrintStream&) const; > > private: >- Operand m_operand; >+ Operand m_operand { 0 }; >+ >+ friend class JSC::LLIntOffsetsExtractor; > }; > > enum GetOrPut { Get, Put };
Flags: ysuzuki: review+