- Jan 09, 2018
-
-
Simon Pilgrim authored
Reduced from oss-fuzz #5032 test case llvm-svn: 322078
-
- Jan 03, 2018
-
-
Simon Pilgrim authored
Reduced from oss-fuzz #4871 test case llvm-svn: 321748
-
- Nov 07, 2017
-
-
Craig Topper authored
The Hexagon test should be fixed now.

Original commit message:

This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select. This can allow us to get the select closer to other selects to enable removing one.

Differential Revision: https://reviews.llvm.org/D39222

llvm-svn: 317600
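As a rough illustration of the pattern described above (operand names and constants are invented for this sketch, not taken from the patch's tests), the fold turns IR shaped like:

  %a = add i32 %x, 7
  %s = select i1 %cond, i32 %a, i32 %x    ; the add only happens when %cond is true
  %r = shl i32 %s, 3

into a form where %x is shifted once and the constant is pre-shifted:

  %xs = shl i32 %x, 3
  %as = add i32 %xs, 56                   ; 7 << 3
  %r  = select i1 %cond, i32 %as, i32 %xs

leaving the select free to sit next to other selects that later folds may merge.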
-
- Nov 06, 2017
-
-
Hans Wennborg authored
This broke the CodeGen/Hexagon/loop-idiom/pmpy-mod.ll test on a bunch of buildbots.

> This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select.
>
> This can allow us to get the select closer to other selects to enable removing one.
>
> Differential Revision: https://reviews.llvm.org/D39222
>
> git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@317510 91177308-0d34-0410-b5e6-96231b3b80d8

llvm-svn: 317518
-
Craig Topper authored
This pulls shifts through a select+binop with a constant where the select conditionally executes the binop. We already do this for just the binop, but not with the select. This can allow us to get the select closer to other selects to enable removing one. Differential Revision: https://reviews.llvm.org/D39222 llvm-svn: 317510
-
- Aug 15, 2017
-
-
Amjad Aboud authored
Differential Revision: https://reviews.llvm.org/D36743 llvm-svn: 310949
-
Sanjay Patel authored
Narrow ops are better for bit-tracking, and in the case of vectors, may enable better codegen. As the trunc test shows, this can allow follow-on simplifications. There's a block of code in visitTrunc that deals with shifted ops with FIXME comments. It may be possible to remove some of that now, but I want to make sure there are no problems with this step first.

http://rise4fun.com/Alive/Y3a

Name: hoist_ashr_ahead_of_sext_1
%s = sext i8 %x to i32
%r = ashr i32 %s, 3 ; shift value is less than the source bit width
=>
%a = ashr i8 %x, 3
%r = sext i8 %a to i32

Name: hoist_ashr_ahead_of_sext_2
%s = sext i8 %x to i32
%r = ashr i32 %s, 8 ; shift value is >= the source bit width
=>
%a = ashr i8 %x, 7 ; so clamp this shift value
%r = sext i8 %a to i32

Name: junc_the_trunc
%a = sext i16 %v to i32
%s = ashr i32 %a, 18
%t = trunc i32 %s to i16
=>
%t = ashr i16 %v, 15

llvm-svn: 310942
-
- Aug 08, 2017
-
-
Craig Topper authored
We already support pulling through an add with constant RHS. We can do the same for subtract. Differential Revision: https://reviews.llvm.org/D36443 llvm-svn: 310407
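A minimal sketch of the arithmetic this relies on (operand names and constants here are invented, and the exact sub patterns matched are spelled out in D36443, not here): a left shift distributes over subtraction in two's-complement arithmetic, so

  %d = sub i32 %x, 10
  %r = shl i32 %d, 2

can become

  %xs = shl i32 %x, 2
  %r  = sub i32 %xs, 40    ; 10 << 2

for the same reason the existing add handling is sound: (a - b) << c == (a << c) - (b << c) modulo 2^32.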
-
- Aug 05, 2017
-
-
Craig Topper authored
[InstCombine] Teach the code that pulls logical operators through constant shifts to handle vector splats too. llvm-svn: 310185
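A sketch of the splat case this enables (types and constants invented for illustration): the fold that already pulled a logical op through a constant shift for scalar constants can now also fire when the constants are splat vectors, e.g.

  %m = and <4 x i32> %x, <i32 255, i32 255, i32 255, i32 255>
  %r = shl <4 x i32> %m, <i32 4, i32 4, i32 4, i32 4>
=>
  %xs = shl <4 x i32> %x, <i32 4, i32 4, i32 4, i32 4>
  %r  = and <4 x i32> %xs, <i32 4080, i32 4080, i32 4080, i32 4080>   ; 255 << 4

since (x & 255) << 4 is (x << 4) & 4080 lane by lane.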
-
- Aug 04, 2017
-
-
Benjamin Kramer authored
Avoids unused variable warnings in Release builds. No functional change. llvm-svn: 310064
-
Sanjay Patel authored
Name: narrow_shift
Pre: C1 < 8
%zx = zext i8 %x to i32
%l = lshr i32 %zx, C1
=>
%narrowC = trunc i32 C1 to i8
%ns = lshr i8 %x, %narrowC
%l = zext i8 %ns to i32

http://rise4fun.com/Alive/jIV

This isn't directly applicable to PR34046 as written, but we need to have more narrowing folds like this to be sure that rotate patterns are recognized.

llvm-svn: 310060
-
- Jul 08, 2017
-
-
Craig Topper authored
Previously the InstCombiner class contained a pointer to an IR builder that had been passed to the constructor. Sometimes this was passed to helper functions as a pointer, and sometimes the pointer was dereferenced so it could be passed by reference. This patch makes it a reference everywhere, including in the InstCombiner class itself, so there is more consistency. This is a large but mechanical patch. I've done very minimal formatting changes on it despite what clang-format wanted to do. llvm-svn: 307451
-
- Jun 24, 2017
-
-
Craig Topper authored
llvm-svn: 306205
-
- Jun 12, 2017
-
-
Sanjay Patel authored
This is a follow-up to https://reviews.llvm.org/D33879 / https://reviews.llvm.org/rL304939, and was discussed in https://reviews.llvm.org/D33338.

We prefer this form because a narrower shift may be cheaper, and we can more easily fold a zext than a sext.

http://rise4fun.com/Alive/slVe

Name: shz
%s = sext i8 %x to i12
%r = lshr i12 %s, 4
=>
%a = ashr i8 %x, 4
%r = zext i8 %a to i12

llvm-svn: 305190
-
- Jun 09, 2017
-
-
Craig Topper authored
Summary: This matches the behavior we already had for compares and makes us consistent everywhere.

Reviewers: dberlin, hfinkel, spatel

Reviewed By: dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D33604

llvm-svn: 305049
-
- Jun 07, 2017
-
-
Sanjay Patel authored
This was discussed in D33338. We have larger pattern-matching ending in a truncate that we can reduce or remove by handling these smaller patterns first. Further motivation is that narrower shift ops are easier for value tracking and zext is better than sext.

http://rise4fun.com/Alive/rhh

Name: boolshift
%sext = sext i1 %x to i8
%r = lshr i8 %sext, 7
=>
%r = zext i1 %x to i8

Name: noboolshift
%sext = sext i3 %x to i8
%r = lshr i8 %sext, 7
=>
%sh = lshr i3 %x, 2
%r = zext i3 %sh to i8

Differential Revision: https://reviews.llvm.org/D33879

llvm-svn: 304939
-
- May 26, 2017
-
-
Craig Topper authored
[InstCombine] Pass the DominatorTree, AssumptionCache, and context instruction to a few calls to isKnownPositive, isKnownNegative, and isKnownNonZero.

Every other place in InstCombine that uses these methods in ValueTracking already passes this information. This makes the remaining sites consistent.

Differential Revision: https://reviews.llvm.org/D33567

llvm-svn: 304018
-
- Apr 26, 2017
-
-
Daniel Berlin authored
InstCombine: Use the new SimplifyQuery versions of Simplify*. Use AssumptionCache, DominatorTree, TargetLibraryInfo everywhere. llvm-svn: 301464
-
- Apr 20, 2017
-
-
Craig Topper authored
getSignBit is a static function that creates an APInt with only the sign bit set. getSignMask seems like a better name to convey its functionality. In fact, several places use it and then store the result in an APInt named SignMask. Differential Revision: https://reviews.llvm.org/D32108 llvm-svn: 300856
-
- Apr 18, 2017
-
-
Craig Topper authored
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result. This adds an lshrInPlace(const APInt &) version as well. Differential Revision: https://reviews.llvm.org/D32155 llvm-svn: 300566
-
- Feb 10, 2017
-
-
Sanjay Patel authored
This fold already existed for vectors but only when 'C1' was a splat constant (but 'C2' could be any constant). There were no tests for any vector constants, so I'm adding a test that shows non-splat constants for both operands. llvm-svn: 294650
-
- Feb 01, 2017
-
-
Sanjay Patel authored
Although this is 'no-functional-change-intended', I'm adding tests for shl-shl and lshr-lshr pairs because there is no existing test coverage for those folds. It seems like we should be able to remove some code from foldShiftedShift() at this point because we're handling those patterns on the general path. llvm-svn: 293814
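For reference, a minimal instance of the shl-shl pair being tested here (values invented; this assumes the combined shift amount stays below the bit width):

  %a = shl i32 %x, 3
  %b = shl i32 %a, 4
=>
  %b = shl i32 %x, 7

and an lshr-lshr pair combines into a single lshr the same way.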
-
- Jan 31, 2017
-
-
Sanjay Patel authored
llvm-svn: 293570
-
Sanjay Patel authored
llvm-svn: 293562
-
- Jan 30, 2017
-
-
Sanjay Patel authored
llvm-svn: 293524
-
Sanjay Patel authored
llvm-svn: 293508
-
Sanjay Patel authored
llvm-svn: 293507
-
Sanjay Patel authored
The original shift is bigger, so this may qualify as 'obvious', but here's an attempt at an Alive-based proof:

Name: exact
Pre: (C1 u< C2)
%a = shl i8 %x, C1
%b = lshr exact i8 %a, C2
=>
%c = lshr exact i8 %x, C2 - C1
%b = and i8 %c, ((1 << width(C1)) - 1) u>> C2

Optimization is correct!

llvm-svn: 293498
-
Sanjay Patel authored
llvm-svn: 293489
-
- Jan 29, 2017
-
-
Sanjay Patel authored
llvm-svn: 293435
-
- Jan 26, 2017
-
-
Sanjay Patel authored
We already have this fold when the lshr has one use, but it doesn't need that restriction. We may be able to remove some code from foldShiftedShift().

Also, move the similar:
  (X << C) >>u C --> X & (-1 >>u C)
...directly into visitLShr to help clean up foldShiftByConstOfShiftByConst().

That whole function seems questionable since it is called by commonShiftTransforms(), but there's really not much in common if we're checking the shift opcodes for every fold.

llvm-svn: 293215
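To make that formula concrete (a made-up instance with C = 8 on i32):

  %a = shl i32 %x, 8
  %b = lshr i32 %a, 8
=>
  %b = and i32 %x, 16777215   ; -1 u>> 8 == 0x00FFFFFF

i.e. the shift pair is just a mask of the low 24 bits.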
-
Sanjay Patel authored
llvm-svn: 293208
-
- Jan 21, 2017
-
-
Sanjay Patel authored
We may be able to assert that no shl-shl or lshr-lshr pairs ever get here because we should have already handled those in foldShiftedShift(). llvm-svn: 292726
-
- Jan 17, 2017
-
-
Sanjay Patel authored
llvm-svn: 292230
-
- Jan 16, 2017
-
-
Sanjay Patel authored
llvm-svn: 292164
-
Sanjay Patel authored
It's not clear what 'First' and 'Second' mean, so use 'Inner' and 'Outer' to match foldShiftedShift() and add comments with formulas, so it's easier to see what's going on. llvm-svn: 292153
-
Sanjay Patel authored
Some existing 'FIXME' tests are still not folded because of splat holes in value tracking. llvm-svn: 292151
-
Sanjay Patel authored
Reduces code duplication and makes it easier to extend these folds for vectors. llvm-svn: 292145
-
- Jan 15, 2017
-
-
Sanjay Patel authored
llvm-svn: 292073
-
Sanjay Patel authored
llvm-svn: 292064
-