- Mar 21, 2011
-
-
Eli Friedman authored
llvm-svn: 127982
-
Evan Cheng authored
Re-apply r127953 with fixes: eliminate empty return block if it has no predecessors; update dominator tree if cfg is modified. llvm-svn: 127981
-
- Mar 19, 2011
-
-
Daniel Dunbar authored
to canonicalize IR", it broke a lot of things. llvm-svn: 127954
-
Evan Cheng authored
to have single return block (at least getting there) for optimizations. This is
general goodness but it would prevent some tailcall optimizations. One specific
case is code like this:

    int f1(void);
    int f2(void);
    int f3(void);
    int f4(void);
    int f5(void);
    int f6(void);
    int foo(int x) {
      switch(x) {
      case 1: return f1();
      case 2: return f2();
      case 3: return f3();
      case 4: return f4();
      case 5: return f5();
      case 6: return f6();
      }
    }

=>

    LBB0_2:                                 ## %sw.bb
            callq   _f1
            popq    %rbp
            ret
    LBB0_3:                                 ## %sw.bb1
            callq   _f2
            popq    %rbp
            ret
    LBB0_4:                                 ## %sw.bb3
            callq   _f3
            popq    %rbp
            ret

This patch teaches codegenprep to duplicate returns when the return value is a
phi and where the phi operands are produced by tail calls followed by an
unconditional branch:

    sw.bb7:                                           ; preds = %entry
      %call8 = tail call i32 @f5() nounwind
      br label %return
    sw.bb9:                                           ; preds = %entry
      %call10 = tail call i32 @f6() nounwind
      br label %return
    return:
      %retval.0 = phi i32 [ %call10, %sw.bb9 ], [ %call8, %sw.bb7 ], ... [ 0, %entry ]
      ret i32 %retval.0

This allows codegen to generate better code like this:

    LBB0_2:                                 ## %sw.bb
            jmp     _f1                     ## TAILCALL
    LBB0_3:                                 ## %sw.bb1
            jmp     _f2                     ## TAILCALL
    LBB0_4:                                 ## %sw.bb3
            jmp     _f3                     ## TAILCALL

rdar://9147433

llvm-svn: 127953
-
Nadav Rotem authored
not have native support for this operation (such as X86). The legalized code uses two vector INT_TO_FP operations and is faster than scalarizing. llvm-svn: 127951
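A minimal scalar sketch of the trick described above, assuming the operation
being legalized is an unsigned integer-to-float conversion (the "two vector
INT_TO_FP operations" would then be signed conversions of the two halves); an
illustration only, not the actual DAG legalization code:

    #include <stdint.h>

    /* Convert an unsigned 32-bit value to float using only signed
     * int-to-float conversions: split into halves, convert each half,
     * then recombine.  The real code does this on whole vectors. */
    static float uint_to_fp(uint32_t x) {
        float hi = (float)(int32_t)(x >> 16);     /* first signed conversion  */
        float lo = (float)(int32_t)(x & 0xffff);  /* second signed conversion */
        return hi * 65536.0f + lo;
    }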
-
Johnny Chen authored
The relevant instruction table entries were changed some time ago to no longer take <Rt2> as an operand. Modify ARMDisassemblerCore.cpp to accommodate the change and add a test case. llvm-svn: 127935
-
- Mar 18, 2011
-
-
Owen Anderson authored
Add support to the ARM asm parser for the register-shifted-register forms of basic instructions like ADD. More work left to be done to support other instances of shifter ops in the ISA. llvm-svn: 127917
-
-
Eli Friedman authored
llvm-svn: 127909
-
Owen Anderson authored
llvm-svn: 127900
-
Owen Anderson authored
llvm-svn: 127899
-
Justin Holewinski authored
- Emit mad instead of mad.rn for shader model 1.0
- Emit explicit mov.u32 instructions for reading global variables
  (most PTX instructions cannot take global variable immediates)
llvm-svn: 127895
-
Owen Anderson authored
llvm-svn: 127888
-
Joerg Sonnenberger authored
For now, only the default segments are supported. llvm-svn: 127875
-
Che-Liang Chiou authored
llvm-svn: 127874
-
Che-Liang Chiou authored
llvm-svn: 127873
-
Eli Friedman authored
comparisons on x86. Essentially, the way this works is that SUB+SBB sets the relevant flags the same way a double-width CMP would. This is a substantial improvement over the generic lowering in LLVM. The output is also shorter than the gcc-generated output; I haven't done any detailed benchmarking, though. llvm-svn: 127852
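A minimal C sketch of the idea, assuming a 64-bit unsigned "less than" compare
on a 32-bit target; the helper name and scalar formulation are illustrative,
not the actual lowering code:

    #include <stdint.h>

    /* SUB of the low halves followed by SBB of the high halves leaves the
     * final borrow (carry flag) equal to the result of the full 64-bit
     * unsigned comparison, which is why no double-width CMP is needed. */
    static int u64_ult(uint32_t a_lo, uint32_t a_hi, uint32_t b_lo, uint32_t b_hi) {
        int borrow_lo = a_lo < b_lo;                      /* SUB a_lo, b_lo */
        int borrow_hi = (a_hi < b_hi) ||
                        (a_hi == b_hi && borrow_lo);      /* SBB a_hi, b_hi */
        return borrow_hi;   /* == (a < b) as 64-bit unsigned values */
    }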
-
Johnny Chen authored
Remove the offending logic and update the test cases. llvm-svn: 127843
-
Owen Anderson authored
llvm-svn: 127840
-
- Mar 17, 2011
-
-
Johnny Chen authored
o A8.6.195 STR (register) -- Encoding T1
o A8.6.193 STR (immediate, Thumb) -- Encoding T1
These have been changed so that they now use different addressing modes and
thus different MC representations (Operand Infos). Modify the disassembler to
reflect the change, and add relevant tests.
llvm-svn: 127833
-
Richard Osborne authored
llvm-svn: 127821
-
Cameron Zwarich authored
llvm-svn: 127809
-
Cameron Zwarich authored
llvm-svn: 127807
-
Nick Lewycky authored
llvm-svn: 127788
-
Eli Friedman authored
llvm-svn: 127786
-
- Mar 16, 2011
-
-
Cameron Zwarich authored
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not generate incorrect code. This just fixes the zext at the return. We still insert an i32 ZextAssert when reading a function's arguments, but it is followed by a truncate and another i8 ZextAssert so it is not optimized. llvm-svn: 127766
-
Richard Osborne authored
llvm-svn: 127761
-
Richard Osborne authored
can event. llvm-svn: 127741
-
- Mar 15, 2011
-
-
Johnny Chen authored
1. The ARM Darwin *r9 call instructions were pseudo-ized recently. Modify the
   ARMDisassemblerCore.cpp file to accommodate the change.
2. The disassembler was unnecessarily adding 8 to the sign-extended imm24:

       imm32 = SignExtend(imm24:'00', 32); // A8.6.23 BL, BLX (immediate)
                                           // Encoding A1

   It has no business doing so. Removed the offending logic.
Add test cases to arm-tests.txt.
llvm-svn: 127707
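A minimal C sketch of the decode rule quoted above (the formula with no extra
offset added); a standalone illustration, not the actual disassembler code:

    #include <stdint.h>

    /* imm32 = SignExtend(imm24:'00', 32) for BL/BLX (immediate), encoding A1:
     * append two zero bits, then sign-extend the resulting 26-bit value. */
    static int32_t decode_branch_offset(uint32_t imm24) {
        uint32_t imm26 = imm24 << 2;             /* imm24:'00'               */
        return ((int32_t)(imm26 << 6)) >> 6;     /* sign-extend from 26 bits */
    }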
-
Bill Wendling authored
accept. If a value in the mask is out of range, it uses the value 0 for VTBL, or leaves the value unchanged for VTBX. llvm-svn: 127700
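A minimal scalar model of the index handling described above, for the one-table
(8-byte) forms; illustrative C only, not the actual instruction selection code:

    #include <stdint.h>

    /* VTBL: an out-of-range index produces 0 in the result element. */
    static void vtbl1(uint8_t dst[8], const uint8_t table[8], const uint8_t idx[8]) {
        for (int i = 0; i < 8; ++i)
            dst[i] = (idx[i] < 8) ? table[idx[i]] : 0;
    }

    /* VTBX: an out-of-range index leaves the destination element unchanged. */
    static void vtbx1(uint8_t dst[8], const uint8_t table[8], const uint8_t idx[8]) {
        for (int i = 0; i < 8; ++i)
            if (idx[i] < 8)
                dst[i] = table[idx[i]];
    }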
-
Bill Wendling authored
llvm-svn: 127694
-
-
Richard Osborne authored
llvm-svn: 127681
-
Richard Osborne authored
llvm-svn: 127680
-
Richard Osborne authored
llvm-svn: 127678
-
Justin Holewinski authored
- Remove PTX 1.4 code generation
- Change type of intrinsics to .v4.i32 instead of .v4.i16
- Add and/or/xor integer instructions
llvm-svn: 127677
-
Duncan Sands authored
when building with assertions disabled. llvm-svn: 127675
-
Sean Callanan authored
in the instruction tables and fixed a few bugs that were causing decode conflicts. Rudimentary tests are coming up in the next patch. llvm-svn: 127646
-
Sean Callanan authored
instruction set. This code adds support for the VEX prefix and for the YMM registers accessible on AVX-enabled architectures. Instruction table support that enables AVX instructions for the disassembler is in an upcoming patch. llvm-svn: 127644
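A minimal C sketch of the prefix recognition this support entails, assuming the
standard VEX encodings (0xC5 for the two-byte form, 0xC4 for the three-byte
form); an illustration only, not the actual LLVM decoder tables:

    #include <stdint.h>

    /* Returns the length of a VEX prefix starting at insn, or 0 if none.
     * In 32-bit mode 0xC4/0xC5 can also be LDS/LES; distinguishing that
     * requires checking the next byte's top two bits, omitted here. */
    static int vex_prefix_length(const uint8_t *insn) {
        if (insn[0] == 0xC5) return 2;   /* two-byte VEX prefix   */
        if (insn[0] == 0xC4) return 3;   /* three-byte VEX prefix */
        return 0;
    }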
-
Johnny Chen authored
register operand was erroneously added. Remove an incorrect assert which triggers the bug. rdar://problem/9131529 llvm-svn: 127642
-