- Dec 23, 2010
Benjamin Kramer authored
llvm-svn: 122453
- Dec 22, 2010
Duncan Sands authored
if both A op B and A op C simplify. This fires fairly often but doesn't make that much difference. On gcc-as-one-file it removes two "and"s and turns one branch into a select. llvm-svn: 122399
Duncan Sands authored
instcombine is compared to instsimplify. llvm-svn: 122397
Owen Anderson authored
I still think that LVI should be handling this, but that capability is some ways off in the future, and this matters for some significant benchmarks. llvm-svn: 122378
- Dec 21, 2010
Owen Anderson authored
llvm-svn: 122371
Benjamin Kramer authored
llvm-svn: 122362
Duncan Sands authored
visit instructions before their uses, since InstructionSimplify does a better job in that case. All this prompted by Frits van Bommel. llvm-svn: 122343
Duncan Sands authored
not very important since the pass is only used for testing, but it does make it more realistic. Suggested by Frits van Bommel. llvm-svn: 122336
Duncan Sands authored
plenty left though!), in particular for multiplication. llvm-svn: 122330
- Dec 20, 2010
Duncan Sands authored
llvm-svn: 122265
Duncan Sands authored
it could only be tested indirectly, via instcombine, gvn or some other pass that makes use of InstructionSimplify, which means that testcases had to be carefully contrived to dance around any other transformations that that pass did. llvm-svn: 122264
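As a rough sketch of the kind of standalone test this makes possible (assuming the new pass is exposed to opt as -instsimplify; the exact flag and FileCheck lines here are illustrative, not taken from the commit):

  ; RUN: opt < %s -instsimplify -S | FileCheck %s

  ; A fold that InstructionSimplify handles on its own, with no help from
  ; instcombine or gvn: x + 0 is just x.
  define i32 @add_zero(i32 %x) {
  ; CHECK: ret i32 %x
    %r = add i32 %x, 0
    ret i32 %r
  }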
Benjamin Kramer authored
llvm-svn: 122258
Benjamin Kramer authored
llvm-svn: 122249
Benjamin Kramer authored
Teach InstCombine to merge (icmp ult (X + CA), C1) | (icmp eq X, C2) into (icmp ult (X + CA), C1 + 1) if C2 + CA == C1. InstCombine creates these so now we compile x == 23 || x == 24 || x == 25 to

  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 3

instead of

  %x.off = add i32 %x, -23
  %1 = icmp ult i32 %x.off, 2
  %cmp3 = icmp eq i32 %x, 25
  %ret2 = or i1 %1, %cmp3

llvm-svn: 122248
Chris Lattner authored
llvm-svn: 122238
Chris Lattner authored
llvm-svn: 122237
Chris Lattner authored
to make sure that the reused alloca has sufficient alignment. llvm-svn: 122236
Chris Lattner authored
llvm-svn: 122235
Chris Lattner authored
argument. The generated alloca has to have at least the alignment of the byval; if not, the client may be making assumptions that the new alloca won't satisfy. llvm-svn: 122234
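As an illustration of the constraint (a made-up example, not code from the commit), in the IR syntax of this era a byval argument can carry an explicit alignment that the inliner's copy has to respect:

  %struct.S = type { [64 x i8] }

  declare void @use(i8*)

  define void @callee(%struct.S* byval align 16 %s) {
    %p = bitcast %struct.S* %s to i8*
    call void @use(i8* %p)
    ret void
  }

  ; When a call to @callee is inlined, the argument copy must live in
  ; something like "alloca %struct.S, align 16"; a default-aligned alloca
  ; could break callers relying on the 16-byte alignment promised by byval.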
Mon P Wang authored
llvm-svn: 122215
Chris Lattner authored
llvm-svn: 122204
- Dec 19, 2010
Chris Lattner authored
llvm-svn: 122190
Chris Lattner authored
llvm-svn: 122183
Chris Lattner authored
This resolves a README entry and technically resolves PR4916, but we still get poor code for the testcase in that PR because GVN isn't CSE'ing uadd with add, filed as PR8817.

Previously we got:

  _test7:                                 ## @test7
          addq    %rsi, %rdi
          cmpq    %rdi, %rsi
          movl    $42, %eax
          cmovaq  %rsi, %rax
          ret

Now we get:

  _test7:                                 ## @test7
          addq    %rsi, %rdi
          movl    $42, %eax
          cmovbq  %rsi, %rax
          ret

llvm-svn: 122182
Chris Lattner authored
result is dead. This is required for my next patch to not regress the testsuite. llvm-svn: 122181
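A sketch of the situation being optimized (illustrative, not from the commit): when only the value element of the result is extracted and the i1 overflow flag is dead, the intrinsic is equivalent to a plain add.

  declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

  define i32 @no_overflow_use(i32 %a, i32 %b) {
    %s = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
    ; Only element 0 (the sum) is used; element 1 (the overflow bit) is dead,
    ; so this can be rewritten as "add i32 %a, %b".
    %v = extractvalue { i32, i1 } %s, 0
    ret i32 %v
  }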
Chris Lattner authored
the old thing end up on the instcombine worklist. Not doing this can cause an extra top-level iteration of instcombine, burning compile time. llvm-svn: 122179
Chris Lattner authored
sadd formed is half the size of the original type. We can now compile this into a sadd.i8:

  unsigned char X(char a, char b) {
    int res = a+b;
    if ((unsigned)(res+128) > 255U)
      abort();
    return res;
  }

llvm-svn: 122178
Chris Lattner authored
checking to see if the high bits of the original add result were dead. Inserting a smaller add and zexting back to that size is not good enough. This is likely to be the fix for 8816. llvm-svn: 122177
Chris Lattner authored
profitable (or safe) to promote code when the add-with-constant has other uses. llvm-svn: 122175
Chris Lattner authored
helper function, clean up comments, and reduce indentation. No functionality change. llvm-svn: 122174
Chris Lattner authored
which doesn't affect the memory address being promoted. llvm-svn: 122172
Chris Lattner authored
does not make the alias set for that pointer volatile, just stores *to* the pointer. llvm-svn: 122171
Chris Lattner authored
llvm-svn: 122168
Chris Lattner authored
which have trapping constant exprs in them due to PHI nodes. Eliminating them can cause the constant expr to be evaluated on new paths if the input edges are critical. llvm-svn: 122164
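A made-up example of the hazard (not from the commit): a constant expression that can trap appears as a PHI input, so it may only be evaluated on the path that actually feeds that edge.

  @g = external global i32

  define i32 @f(i1 %c, i32 %x) {
  entry:
    br i1 %c, label %maybe, label %merge
  maybe:
    br label %merge
  merge:
    ; The udiv constant expression has a divisor that is not a known non-zero
    ; constant, so it is considered able to trap and may only be evaluated
    ; when %c is true.
    %p = phi i32 [ udiv (i32 1000, i32 ptrtoint (i32* @g to i32)), %maybe ], [ %x, %entry ]
    ret i32 %p
  }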
- Dec 18, 2010
Chris Lattner authored
llvm-svn: 122156
Bill Wendling authored
llvm-svn: 122110
Nate Begeman authored
Add vector versions of some existing scalar transforms to aid codegen in matching psign & pblend operations to the IR produced by clang/gcc for their C idioms. llvm-svn: 122105
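For context, one bit-select idiom of this general shape (an illustration of the kind of vector pattern involved, not necessarily the exact forms the commit matches): selecting between two vectors by the sign of a mask, which can be folded to a per-lane select.

  define <4 x i32> @blend_by_sign(<4 x i32> %m, <4 x i32> %a, <4 x i32> %b) {
    ; Broadcast each lane's sign bit: all-ones where m < 0, all-zeros elsewhere.
    %sign  = ashr <4 x i32> %m, <i32 31, i32 31, i32 31, i32 31>
    %nsign = xor <4 x i32> %sign, <i32 -1, i32 -1, i32 -1, i32 -1>
    %t0 = and <4 x i32> %sign, %a
    %t1 = and <4 x i32> %nsign, %b
    ; Equivalent to the per-lane select "m < 0 ? a : b", which the x86 backend
    ; can lower to a blend-style instruction.
    %r = or <4 x i32> %t0, %t1
    ret <4 x i32> %r
  }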
- Dec 17, 2010
Owen Anderson authored
Reapply r121905 (automatic synthesis of @llvm.sadd.with.overflow) with a fix for a bug that manifested itself on the DragonEgg self-host bot. Unfortunately, the testcase is pretty messy and doesn't reduce well due to interactions with other parts of InstCombine. llvm-svn: 122072
Benjamin Kramer authored
llvm-svn: 122054
Chris Lattner authored
comparisons formed by comparisons. For example, this:

  void foo(unsigned x) {
    if (x == 0 || x == 1 || x == 3 || x == 4 || x == 6)
      bar();
  }

compiles into:

  _foo:                                   ## @foo
  ## BB#0:                                ## %entry
          cmpl    $6, %edi
          ja      LBB0_2
  ## BB#1:                                ## %entry
          movl    %edi, %eax
          movl    $91, %ecx
          btq     %rax, %rcx
          jb      LBB0_3

instead of:

  _foo:                                   ## @foo
  ## BB#0:                                ## %entry
          cmpl    $2, %edi
          jb      LBB0_4
  ## BB#1:                                ## %switch.early.test
          cmpl    $6, %edi
          ja      LBB0_3
  ## BB#2:                                ## %switch.early.test
          movl    %edi, %eax
          movl    $88, %ecx
          btq     %rax, %rcx
          jb      LBB0_4

This catches a bunch of cases in GCC, which look like this:

  %804 = load i32* @which_alternative, align 4, !tbaa !0
  %805 = icmp ult i32 %804, 2
  %806 = icmp eq i32 %804, 3
  %or.cond121 = or i1 %805, %806
  %807 = icmp eq i32 %804, 4
  %or.cond124 = or i1 %or.cond121, %807
  br i1 %or.cond124, label %.thread, label %808

turning this into a range comparison.

llvm-svn: 122045