- Apr 16, 2010
  - Gabor Greif: …with a fix for self-hosting: rotate CallInst operands, i.e. move the callee to the back of the operand array. The motivation for this patch is laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, smaller compiler binary. (llvm-svn: 101465)
  - Gabor Greif (llvm-svn: 101434)
- Apr 15, 2010
  - Gabor Greif: …with a fix: rotate CallInst operands, i.e. move the callee to the back of the operand array. The motivation for this patch is laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, smaller compiler binary. (llvm-svn: 101397)
  - Gabor Greif (llvm-svn: 101368)
  - Gabor Greif: …of the operand array. The motivation for this patch is laid out in my mail to llvm-commits: more efficient access to operands and callee, faster callgraph construction, smaller compiler binary. (llvm-svn: 101364; see the sketch below)
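The Apr 15 and Apr 16 entries above all describe the same operand-layout change: the called function moves from the front of a call's operand list to the back, so the arguments occupy a contiguous prefix and the callee sits in the last slot. A minimal sketch of that layout, using a made-up ToyCall struct rather than LLVM's real CallInst API:

    #include <cassert>
    #include <string>
    #include <vector>

    // Toy model of a call's operand array after "rotate CallInst operands":
    // the arguments come first and the callee is stored last.
    struct ToyCall {
        std::vector<std::string> Operands;  // arg0, arg1, ..., callee

        const std::string &getCallee() const { return Operands.back(); }
        unsigned getNumArgs() const { return static_cast<unsigned>(Operands.size()) - 1; }
        // Arguments form a contiguous prefix, so operand i is argument i,
        // with no adjustment for a leading callee slot.
        const std::string &getArg(unsigned i) const { return Operands[i]; }
    };

    int main() {
        ToyCall C{{"x", "y", "callee"}};
        assert(C.getCallee() == "callee");
        assert(C.getNumArgs() == 2);
        assert(C.getArg(0) == "x" && C.getArg(1) == "y");
        return 0;
    }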
- Apr 12, 2010
  - Dan Gohman (llvm-svn: 101009)

- Mar 19, 2010
  - Anton Korobeynikov (llvm-svn: 98911)

- Mar 18, 2010
  - Dan Gohman (llvm-svn: 98853)

- Mar 12, 2010
  - Duncan Sands: …the inner GEP is not a ConstantInt. (llvm-svn: 98359)

- Mar 10, 2010
  - Dan Gohman (llvm-svn: 98178)
- Feb 23, 2010
  - Dan Gohman: …getelementptr. Despite only doing so in the case where x is a known array object and c can be converted to an index within range, this could still be invalid if c is actually the address of an object allocated outside of LLVM. Also, SCEVExpander, the original motivation for this code, has since been improved to avoid inttoptr+ptrtoint in more cases. (llvm-svn: 96950; see the sketch below)
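A stand-alone illustration of the hazard described above: converting inttoptr(ptrtoint(x) + c) into an index into x assumes c is an offset within x, but c may in fact be the absolute address of an unrelated object. This is plain C++, not LLVM's folding code, and the variable names are made up for the example:

    #include <cstdint>
    #include <cstdio>

    // Sketch of why folding inttoptr(ptrtoint(x) + c) into a getelementptr on
    // x is risky: it treats c as an offset into the array x, but c might
    // instead be the absolute address of a completely different object,
    // e.g. one allocated outside of LLVM's view.
    int main() {
        int x[4] = {0, 1, 2, 3};
        int other = 42;

        uintptr_t c = reinterpret_cast<uintptr_t>(&other);   // an address, not an offset
        uintptr_t sum = reinterpret_cast<uintptr_t>(x) + c;  // integer arithmetic is fine...

        // ...but recasting `sum` as "element (c / sizeof(int)) of x" bakes in
        // the assumption that the result still lies within x, which is false here.
        std::printf("&x[0] = %p  c = 0x%llx  sum = 0x%llx\n",
                    static_cast<void *>(x),
                    static_cast<unsigned long long>(c),
                    static_cast<unsigned long long>(sum));
        return 0;
    }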
- Feb 22, 2010
  - Dan Gohman: …operators. The test difference is just due to the multiplication operands being commuted (and thus requiring a more elaborate match). In optimized code, that expression would be folded. (llvm-svn: 96816)
  - Dan Gohman (llvm-svn: 96808)

- Feb 17, 2010
  - Dan Gohman (llvm-svn: 96432)
- Feb 16, 2010
  - Duncan Sands: …and T->isPointerTy(). Convert most instances of the first form to the second form. Requested by Chris. (llvm-svn: 96344)

- Feb 15, 2010
  - Duncan Sands: …isInteger, we now have isFloatTy and isIntegerTy. Requested by Chris! (llvm-svn: 96223; see the sketch below)
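Both entries above concern the same Type-API cleanup: un-suffixed checks give way to member predicates carrying a Ty suffix (isIntegerTy, isFloatTy, isPointerTy). A toy stand-in for the shape of that change, with a hypothetical MyType instead of llvm::Type:

    #include <cassert>

    // Toy stand-in for llvm::Type, just to show the shape of the API change:
    // the checks move onto the type object itself and gain a "Ty" suffix.
    struct MyType {
        enum Kind { Integer, Float, Pointer } K;
        bool isIntegerTy() const { return K == Integer; }   // new style: T->isIntegerTy()
        bool isFloatTy()   const { return K == Float; }
        bool isPointerTy() const { return K == Pointer; }
    };

    // Old style: a free-standing helper such as isInteger(Ty).
    static bool isInteger(const MyType &T) { return T.K == MyType::Integer; }

    int main() {
        MyType T{MyType::Integer};
        assert(isInteger(T));      // first form (being phased out in the commits above)
        assert(T.isIntegerTy());   // second form (what the code was converted to)
        return 0;
    }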
- Feb 08, 2010
  - Dan Gohman (llvm-svn: 95582)

- Feb 01, 2010
  - Dan Gohman: …cases, and implement target-independent folding rules for alignof and offsetof. Also, reassociate reassociative operators when it leads to more folding. Generalize ScalarEvolution's isOffsetOf to recognize offsetof on arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr to getOffsetOfExpr, for consistency with analogous ConstantExpr routines. Make the target-dependent folder promote GEP array indices to pointer-sized integers, to make implicit casting explicit and exposed to subsequent folding. And add a bunch of testcases for this new functionality, and a bunch of related existing functionality. (llvm-svn: 94987; see the sketch below)
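The "offsetof on arrays" generalization mentioned above reduces to simple layout arithmetic: the offset of an array element is the field's base offset plus index times element size. A small self-contained check of that identity using standard C++ offsetof; the struct names here are invented for the example, not taken from the commit:

    #include <cassert>
    #include <cstddef>
    #include <cstdint>

    struct Inner { std::int32_t a; std::int64_t b; };
    struct Outer { char tag; Inner items[4]; };

    int main() {
        Outer o{};
        // offsetof on an array element decomposes into the field's base offset
        // plus index * sizeof(element); that is the arithmetic a
        // target-independent folder can perform once the layout is known.
        std::size_t want = offsetof(Outer, items) + 2 * sizeof(Inner);
        std::size_t got  = static_cast<std::size_t>(
            reinterpret_cast<const char *>(&o.items[2]) -
            reinterpret_cast<const char *>(&o));
        assert(got == want);
        return 0;
    }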
- Jan 08, 2010
  - Chris Lattner: …result int by 8 for the first byte. While normally harmless, if the result is smaller than a byte, this shift is invalid. (llvm-svn: 93018; see the sketch below)
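The bug above is easy to show in miniature: when a folded load is assembled byte by byte, shifting by 8 per byte only makes sense once the result type is at least 8 bits wide, so sub-byte results (such as an i1) have to be refused. A toy sketch of that guard, not the actual ConstantFolding code:

    #include <cassert>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // Assemble a little-endian integer of `resultBits` width from raw bytes.
    // Shifting the accumulator by 8 per byte is only meaningful when the
    // result is at least one byte wide; for sub-byte results, bail out
    // instead of performing an invalid shift.
    static std::optional<uint64_t> foldLoadFromBytes(const std::vector<uint8_t> &Bytes,
                                                     unsigned resultBits) {
        if (resultBits < 8 || resultBits > 64)
            return std::nullopt;          // punt on sub-byte (and oversized) results
        uint64_t Val = 0;
        unsigned NumBytes = resultBits / 8;
        for (unsigned i = 0; i < NumBytes; ++i)
            Val |= uint64_t(Bytes[i]) << (8 * i);   // little-endian byte placement
        return Val;
    }

    int main() {
        std::vector<uint8_t> Bytes = {0x78, 0x56, 0x34, 0x12};
        assert(foldLoadFromBytes(Bytes, 32) == std::optional<uint64_t>(0x12345678));
        assert(!foldLoadFromBytes(Bytes, 1));   // i1-sized result: refuse to fold
        return 0;
    }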
- Jan 02, 2010
  - Chris Lattner: …wrapping up PR3351. (llvm-svn: 92410)

- Dec 04, 2009
  - Chris Lattner: …folding a load from constant. (llvm-svn: 90545)

- Dec 03, 2009
  - Chris Lattner (llvm-svn: 90369)
- Nov 29, 2009
  - Nick Lewycky: This permits the devirtualization of llvm.org/PR3100#c9 when compiled by clang. (llvm-svn: 90099; see the sketch below)
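Devirtualization in this setting is a by-product of constant folding: when a function pointer is loaded from a constant initializer, the load folds to a known function and the indirect call can become a direct one. A minimal stand-alone C++ illustration of that shape (not the PR3100 test case itself):

    #include <cstdio>

    static int impl(int x) { return x + 1; }

    // A constant table of function pointers: the initializer is known at
    // compile time, so a load of slot 0 can be folded to `impl`, and the
    // indirect call below can then be rewritten as a direct call
    // (devirtualization).
    static int (*const table[1])(int) = { &impl };

    int main() {
        int r = table[0](41);   // foldable: table[0] is the constant &impl
        std::printf("%d\n", r);
        return r == 42 ? 0 : 1;
    }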
- Nov 23, 2009
  - Dan Gohman: …ConstantExpr, not just the top-level operator. This allows it to fold many more constants. Also, make GlobalOpt call ConstantFoldConstantExpression on GlobalVariable initializers. (llvm-svn: 89659; see the sketch below)
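The key word above is recursion: fold every operand of a constant expression before folding the expression itself, instead of looking only at the root operator. A toy recursive folder over a tiny expression tree; the Expr type and helpers are invented for the example and are not LLVM's ConstantExpr API:

    #include <cassert>
    #include <memory>

    // Toy constant-expression tree: either a literal or a binary '+' node.
    struct Expr {
        enum Kind { Lit, Add } K = Lit;
        long Value = 0;               // valid when K == Lit
        std::unique_ptr<Expr> L, R;   // valid when K == Add
    };

    static std::unique_ptr<Expr> lit(long v) {
        auto E = std::make_unique<Expr>();
        E->K = Expr::Lit; E->Value = v;
        return E;
    }
    static std::unique_ptr<Expr> add(std::unique_ptr<Expr> a, std::unique_ptr<Expr> b) {
        auto E = std::make_unique<Expr>();
        E->K = Expr::Add; E->L = std::move(a); E->R = std::move(b);
        return E;
    }

    // Fold the *entire* tree: first fold each operand recursively, then the
    // top-level operator. Folding only the root would miss nested expressions.
    static long foldAll(const Expr &E) {
        if (E.K == Expr::Lit) return E.Value;
        return foldAll(*E.L) + foldAll(*E.R);
    }

    int main() {
        // (1 + 2) + (3 + 4): the inner adds must be folded before the outer one.
        auto E = add(add(lit(1), lit(2)), add(lit(3), lit(4)));
        assert(foldAll(*E) == 10);
        return 0;
    }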
- Nov 10, 2009
  - Chris Lattner: …individual operands instead of taking a temporary array. (llvm-svn: 86619)

- Nov 06, 2009
  - Chris Lattner: …from various APIs, addressing PR5325. (llvm-svn: 86231)
- Oct 25, 2009
  - Chris Lattner: This allows us to simplify this:

        union vec2d { double e[2]; double v __attribute__((vector_size(16))); };
        typedef union vec2d vec2d;
        static vec2d a={{1,2}}, b={{3,4}};
        vec2d foo () { return (vec2d){ .v = a.v + b.v * (vec2d){{5,5}}.v }; }

    down to:

        define %0 @foo() nounwind ssp {
        entry:
          %mrv5 = insertvalue %0 undef, double 1.600000e+01, 0    ; <%0> [#uses=1]
          %mrv6 = insertvalue %0 %mrv5, double 2.200000e+01, 1    ; <%0> [#uses=1]
          ret %0 %mrv6
        }

    instead of:

        define %0 @foo() nounwind ssp {
        entry:
          %mrv5 = insertvalue %0 undef, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 0), 0    ; <%0> [#uses=1]
          %mrv6 = insertvalue %0 %mrv5, double extractelement (<2 x double> fadd (<2 x double> fmul (<2 x double> bitcast (<1 x i128> <i128 85174437667405312423031577302488055808> to <2 x double>), <2 x double> <double 3.000000e+00, double 4.000000e+00>), <2 x double> <double 1.000000e+00, double 2.000000e+00>), i32 1), 1    ; <%0> [#uses=1]
          ret %0 %mrv6
        }

    (llvm-svn: 85040)
  - Chris Lattner: …ConstantExpr::getBitCast in various places. (llvm-svn: 85039)
  - Chris Lattner: …instead of returning null on failure. No functionality change. (llvm-svn: 85038)
- Oct 24, 2009
  - Chris Lattner (llvm-svn: 84993)
  - Chris Lattner: …Duncan for the nice tiny testcase. (llvm-svn: 84992)
- Oct 23, 2009
  - Chris Lattner: …implements something out of Target/README.txt producing:

        _foo:                   ## @foo
            movl 4(%esp), %eax
            movapd LCPI1_0, %xmm0
            movapd %xmm0, (%eax)
            ret $4

    instead of:

        _foo:                   ## @foo
            movl 4(%esp), %eax
            movapd _b, %xmm0
            mulpd LCPI1_0, %xmm0
            addpd _a, %xmm0
            movapd %xmm0, (%eax)
            ret $4

    (llvm-svn: 84942)
  - Chris Lattner: …bytes (i256). (llvm-svn: 84941)
  - Chris Lattner: …non-type-safe constant initializers. This sort of thing happens quite a bit for 4-byte loads out of string constants, unions, bitfields, and an interesting endianness check from sqlite, which is something like this:

        const int sqlite3one = 1;
        # define SQLITE_BIGENDIAN    (*(char *)(&sqlite3one)==0)
        # define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1)
        # define SQLITE_UTF16NATIVE  (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE)

    all of these macros now constant fold away. This implements PR3152 and is based on a patch started by Eli, but heavily modified and extended. (llvm-svn: 84936)
- Oct 22, 2009
  - Chris Lattner (llvm-svn: 84841)
  - Chris Lattner: …to libanalysis. Instcombine shrinking... does this even make sense??? (llvm-svn: 84840)
  - Chris Lattner: …Analysis/ConstantFolding.cpp. This doesn't change the behavior of instcombine but makes other clients of ConstantFoldInstruction able to handle loads. This was partially extracted from Eli's patch in PR3152. (llvm-svn: 84836)
- Oct 06, 2009
  - Evan Phoenix (llvm-svn: 83338)

- Oct 05, 2009
  - Dan Gohman: …ConstantFoldLoadThroughGEPConstantExpr. (llvm-svn: 83311)
  - Chris Lattner (llvm-svn: 83295)