- Mar 18, 2011
-
Jakob Stoklund Olesen authored
This is not supposed to happen, but I have seen the x86 rematter getting confused when rematerializing partial redefs. llvm-svn: 127857
-
Jakob Stoklund Olesen authored
and early clobbers. Assert when trying to find an undefined value. llvm-svn: 127856
-
Ted Kremenek authored
Add new CrashRecoveryContextCleanup subclass: CrashRecoveryContextDeleteCleanup. This deletes the object rather than just calling its destructor. llvm-svn: 127855
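For context, the distinction the message draws is between ending an object's lifetime and also freeing its storage. A minimal sketch of that difference, with invented names rather than LLVM's actual cleanup classes:

    // Illustrative only; these are not LLVM's CrashRecoveryContext classes.
    // A destructor-only cleanup ends the object's lifetime but leaves its
    // memory allocated; a "delete" cleanup runs the destructor and releases
    // the memory as well.
    template <typename T> struct DestructorOnlyCleanupSketch {
      T *Obj;
      void recoverResources() { Obj->~T(); }   // destructor, storage kept
    };
    template <typename T> struct DeleteCleanupSketch {
      T *Obj;
      void recoverResources() { delete Obj; }  // destructor + operator delete
    };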
-
Rafael Espindola authored
llvm-svn: 127853
-
Eli Friedman authored
comparisons on x86. Essentially, the way this works is that SUB+SBB sets the relevant flags the same way a double-width CMP would. This is a substantial improvement over the generic lowering in LLVM. The output is also shorter than the gcc-generated output; I haven't done any detailed benchmarking, though. llvm-svn: 127852
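A hedged sketch of why SUB+SBB answers the wide comparison: subtracting the low halves produces a borrow, and subtracting the high halves with that borrow leaves the same carry a full double-width CMP would. The helper below (an invented name) merely models the flag behaviour in portable C++; it is not the lowering code itself.

    #include <cstdint>

    // Models SUB (low halves) followed by SBB (high halves) for an unsigned
    // 64-bit "a < b" on a 32-bit target. The borrow out of the SBB step is
    // exactly the carry flag a 64-bit CMP would have set.
    static bool ult64_via_sub_sbb(uint64_t a, uint64_t b) {
      uint32_t alo = (uint32_t)a, ahi = (uint32_t)(a >> 32);
      uint32_t blo = (uint32_t)b, bhi = (uint32_t)(b >> 32);

      bool borrow = alo < blo;                                        // SUB alo, blo -> borrow
      uint64_t hi = (uint64_t)ahi - (uint64_t)bhi - (borrow ? 1 : 0); // SBB ahi, bhi
      bool borrow_out = (hi >> 63) != 0;                              // borrow out of the SBB

      return borrow_out;                                              // equals (a < b)
    }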
-
Ted Kremenek authored
Augment CrashRecoveryContext to have registered "cleanup" objects that can be used to release resources during a crash. llvm-svn: 127849
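A rough sketch of the idea, assuming nothing about LLVM's actual interface: cleanups registered while work runs are invoked if the work fails, so resources still get released. The class name and the exception-based failure path are stand-ins for illustration only.

    #include <functional>
    #include <vector>

    // Conceptual sketch only, not LLVM's CrashRecoveryContext API.
    class RecoveryContextSketch {
      std::vector<std::function<void()>> Cleanups;   // registered cleanups
    public:
      void registerCleanup(std::function<void()> Fn) {
        Cleanups.push_back(std::move(Fn));
      }
      // Run Work; if it fails, run the cleanups in reverse registration
      // order so resources acquired during Work are still released.
      bool runSafely(const std::function<void()> &Work) {
        try {
          Work();
          return true;
        } catch (...) {   // stand-in for real crash recovery (signals, etc.)
          for (auto It = Cleanups.rbegin(), E = Cleanups.rend(); It != E; ++It)
            (*It)();
          return false;
        }
      }
    };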
-
Eli Friedman authored
llvm-svn: 127845
-
Johnny Chen authored
Remove the offending logic and update the test cases. llvm-svn: 127843
-
Andrew Trick authored
llvm-svn: 127842
-
Owen Anderson authored
llvm-svn: 127840
-
Andrew Trick authored
SCEV may generate expressions composed of multiple pointers, which can lead to invalid GEP expansion. Until we can teach SCEV to follow strict pointer rules, make sure no bad GEPs creep into IR. Fixes rdar://problem/9038671. llvm-svn: 127839
-
Andrew Trick authored
llvm-svn: 127837
-
- Mar 17, 2011
-
Rafael Espindola authored
instead of copying. llvm-svn: 127835
-
Devang Patel authored
This is done by lowering the dbg.declare intrinsic into a dbg.value intrinsic. Radar 9143931. llvm-svn: 127834
-
Johnny Chen authored
o A8.6.195 STR (register) -- Encoding T1
o A8.6.193 STR (immediate, Thumb) -- Encoding T1
These have been changed so that they now use different addressing modes and thus different MC representations (Operand Infos). Modify the disassembler to reflect the change, and add relevant tests. llvm-svn: 127833
-
Devang Patel authored
llvm-svn: 127832
-
Benjamin Kramer authored
BuildUDIV: If the divisor is even we can simplify the fixup of the multiplied value by introducing an early shift. This allows us to compile
    unsigned foo(unsigned x) { return x/28; }
into
    shrl $2, %edi
    imulq $613566757, %rdi, %rax
    shrq $32, %rax
    ret
instead of
    movl %edi, %eax
    imulq $613566757, %rax, %rcx
    shrq $32, %rcx
    subl %ecx, %eax
    shrl %eax
    addl %ecx, %eax
    shrl $4, %eax
on x86_64. llvm-svn: 127829
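As a sanity check of the arithmetic (and of the known-zero-bits idea the next entry's APInt change builds on), here is the same transformation written out in plain C++ rather than as DAG nodes; 613566757 is the magic constant from the output above.

    #include <cstdint>

    // x/28 == (x/4)/7. Shifting by 2 first means the value fed to the magic
    // multiply has its top two bits clear, which is what makes the plain
    // multiply-high by ceil(2^32 / 7) = 613566757 exact with no fixup.
    static uint32_t udiv28(uint32_t x) {
      uint32_t q = x >> 2;                                    // early shift
      return (uint32_t)(((uint64_t)q * 613566757u) >> 32);    // magic divide by 7
    }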
-
Benjamin Kramer authored
Add an argument to APInt's magic udiv calculation to specify the number of bits that are known zero in the divided number. This will come in handy soon. llvm-svn: 127828
-
Jakob Stoklund Olesen authored
I have convinced myself that it can only happen when a phi value dies. When it happens, allocate new virtual registers for the components. llvm-svn: 127827
-
Stuart Hastings authored
llvm-svn: 127824
-
Richard Osborne authored
llvm-svn: 127821
-
Stuart Hastings authored
llvm-svn: 127814
-
Stuart Hastings authored
llvm-svn: 127813
-
Daniel Dunbar authored
been removed. llvm-svn: 127812
-
Cameron Zwarich authored
llvm-svn: 127809
-
Cameron Zwarich authored
llvm-svn: 127808
-
Cameron Zwarich authored
llvm-svn: 127807
-
Nick Lewycky authored
llvm-svn: 127801
-
NAKAMURA Takumi authored
test/CodeGen/X86/h-registers-1.ll: Add explicit -mtriple=x86_64-linux. It does not need to be checked on x86_64-win32 (aka Win64). llvm-svn: 127800
-
Nick Lewycky authored
llvm-svn: 127788
-
Eli Friedman authored
llvm-svn: 127786
-
Rafael Espindola authored
of a file. llvm-svn: 127781
-
Joerg Sonnenberger authored
While here, add VK_ARM_TPOFF and VK_ARM_GOTTPOFF, too. llvm-svn: 127780
-
Jakob Stoklund Olesen authored
llvm-svn: 127779
-
NAKAMURA Takumi authored
llvm-svn: 127775
-
- Mar 16, 2011
-
Jakob Stoklund Olesen authored
The register allocator needs to adjust its live interval unions when that happens. llvm-svn: 127774
-
Jakob Stoklund Olesen authored
llvm-svn: 127773
-
Jakob Stoklund Olesen authored
The live range of a virtual register may change which invalidates the cached interference information. llvm-svn: 127772
-
Jakob Stoklund Olesen authored
llvm-svn: 127771
-
Cameron Zwarich authored
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not generate incorrect code. This just fixes the zext at the return. We still insert an i32 ZextAssert when reading a function's arguments, but it is followed by a truncate and another i8 ZextAssert so it is not optimized. llvm-svn: 127766
-