- Sep 02, 2008
-
Nuno Lopes authored
llvm-svn: 55632
-
Nuno Lopes authored
First commit to llvm, so watch out :) llvm-svn: 55631
-
Matthijs Kooijman authored
llvm-svn: 55628
-
Evan Cheng authored
llvm-svn: 55626
-
Evan Cheng authored
llvm-svn: 55625
-
Evan Cheng authored
llvm-svn: 55624
-
Evan Cheng authored
Change the getBinaryCodeForInstr prototype: the first operand, MachineInstr&, should be const. Make the corresponding changes. llvm-svn: 55623
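A minimal sketch of the prototype change described here; the real function is emitted by TableGen into the target's code emitter, so treat the exact signature as an approximation rather than the committed code.

```cpp
// Sketch of the prototype change (approximate; the real function is
// generated by TableGen for each target's code emitter).
class MachineInstr;

// Before: encoding could, in principle, mutate the instruction.
// unsigned getBinaryCodeForInstr(MachineInstr &MI);

// After: encoding is a read-only query, so the operand is const.
unsigned getBinaryCodeForInstr(const MachineInstr &MI);
```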
-
- Sep 01, 2008
-
Gabor Greif authored
The first can update the SDNode in an SDValue, while the second is called with an SDNode* and returns a possibly updated SDNode*. This patch has no intended functional impact, but helps eliminate ugly temporary SDValues. llvm-svn: 55608
-
Duncan Sands authored
(what matters is that it is added to the worklist), it seems more logical to return it. llvm-svn: 55606
-
Duncan Sands authored
llvm-svn: 55605
-
Duncan Sands authored
attributes on functions, based on the result of alias analysis. It's not hardwired to use GlobalsModRef, even though this is the only (AFAIK) alias analysis that results in this pass actually doing something. Enable as follows:

    opt ... -globalsmodref-aa -markmodref ...

Advantages of this pass:
(1) records the result of globalsmodref in the bitcode, meaning it is available for use by later passes (currently the pass manager isn't smart enough to magically make an advanced alias analysis available to all later passes), which may expose more optimization opportunities;
(2) hopefully speeds up compilation when code is optimized twice, for example when a file is compiled to bitcode, then later LTO is done on it: marking functions readonly/readnone when producing the initial bitcode should speed up alias analysis during LTO;
(3) good for discovering that globalsmodref doesn't work very well :)

Not currently turned on by default. llvm-svn: 55604
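Conceptually, the pass turns an alias-analysis verdict into a persistent function attribute. The standalone sketch below mirrors that idea with stand-in types; the names (markModRef, Behavior, the Function struct) are illustrative, not the actual pass code.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stand-ins for LLVM's Function and the alias-analysis verdict; this
// is a conceptual sketch of the pass, not its real implementation.
enum class Behavior { DoesNotAccessMemory, OnlyReadsMemory, Unknown };

struct Function {
  std::string Name;
  Behavior AAResult;      // what globalsmodref reported
  std::string Attribute;  // what gets recorded in the bitcode
};

// Translate each verdict into an attribute, so later passes (or a
// separate LTO step) can reuse the result without rerunning AA.
void markModRef(std::vector<Function> &Module) {
  for (Function &F : Module) {
    if (F.AAResult == Behavior::DoesNotAccessMemory)
      F.Attribute = "readnone";  // neither reads nor writes memory
    else if (F.AAResult == Behavior::OnlyReadsMemory)
      F.Attribute = "readonly";  // may read, never writes
  }
}

int main() {
  std::vector<Function> M = {
      {"pure_fn", Behavior::DoesNotAccessMemory, ""},
      {"getter", Behavior::OnlyReadsMemory, ""},
      {"stateful", Behavior::Unknown, ""}};
  markModRef(M);
  for (const Function &F : M)
    std::printf("%s: %s\n", F.Name.c_str(),
                F.Attribute.empty() ? "(unchanged)" : F.Attribute.c_str());
}
```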
-
Evan Cheng authored
llvm-svn: 55601
-
Evan Cheng authored
llvm-svn: 55599
-
Evan Cheng authored
llvm-svn: 55598
-
Evan Cheng authored
llvm-svn: 55597
-
Evan Cheng authored
llvm-svn: 55596
-
Evan Cheng authored
llvm-svn: 55594
-
Evan Cheng authored
llvm-svn: 55593
-
- Aug 31, 2008
-
Evan Cheng authored
llvm-svn: 55591
-
Evan Cheng authored
llvm-svn: 55590
-
Gabor Greif authored
llvm-svn: 55588
-
Bill Wendling authored
instructions in CellSPU as "Expand" so that they won't be generated. I added a "FIXME" so that this hack can be addressed and reverted once ISD::ROTR is supported in the .td files. llvm-svn: 55582
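For reference, marking a node as "Expand" is normally a one-liner in the target's TargetLowering constructor. The fragment below shows the general pattern (the exact CellSPU types and lines are an assumption, not the committed change).

```cpp
// Inside the target's TargetLowering constructor (fragment, not a
// standalone program): tell the legalizer never to emit ROTR for i32.
setOperationAction(ISD::ROTR, MVT::i32, Expand);
// FIXME: revert once ISD::ROTR is supported in the .td files.
```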
-
Bill Wendling authored
Dale, could you please review this? llvm-svn: 55581
-
Bill Wendling authored
combiner can now generate ROTR if the backend says that it can handle it. Cell SPU says this, but gets an error from code gen saying that it can't select ROTR. I'm xfailing this test until this can be fixed. llvm-svn: 55579
-
Bill Wendling authored
llvm-svn: 55578
-
Bill Wendling authored
llvm-svn: 55577
-
Bill Wendling authored
llvm-svn: 55576
-
Bill Wendling authored
// fold (or (shl x, (*ext y)), (srl x, (*ext (sub 32, y)))) ->
//   (rotl x, y)
// fold (or (shl x, (*ext y)), (srl x, (*ext (sub 32, y)))) ->
//   (rotr x, (sub 32, y))

Example: (x == 0xDEADBEEF and y == 4)
  (x << 4) | (x >> 28) => 0xEADBEEF0 | 0x0000000D => 0xEADBEEFD
  (rotl x, 4)  => 0xEADBEEFD
  (rotr x, 28) => 0xEADBEEFD

- Fix comment and code for the second version; it wasn't using the rot* patterns properly.
// fold (or (shl x, (*ext (sub 32, y))), (srl x, (*ext y))) ->
//   (rotr x, y)
// fold (or (shl x, (*ext (sub 32, y))), (srl x, (*ext y))) ->
//   (rotl x, (sub 32, y))

  (x << 28) | (x >> 4) => 0xF0000000 | 0x0DEADBEE => 0xFDEADBEE
  (rotr x, 4)  => 0xFDEADBEE
  (rotl x, 28) => 0xFDEADBEE

llvm-svn: 55575
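The worked examples above are easy to verify with a small standalone program. The helpers below are just the shift/or patterns the folds recognize, not the DAG-combiner code itself.

```cpp
#include <cstdint>
#include <cstdio>

// The shift/or patterns the rotate folds match; y must be in 1..31 so
// both shifts are well defined.
static uint32_t rotl32(uint32_t x, unsigned y) {
  return (x << y) | (x >> (32 - y));
}
static uint32_t rotr32(uint32_t x, unsigned y) {
  return (x >> y) | (x << (32 - y));
}

int main() {
  uint32_t x = 0xDEADBEEF;
  printf("%#010x\n", rotl32(x, 4));   // 0xeadbeefd (first example)
  printf("%#010x\n", rotr32(x, 28));  // 0xeadbeefd (same rotation)
  printf("%#010x\n", rotr32(x, 4));   // 0xfdeadbee (second example)
  printf("%#010x\n", rotl32(x, 28));  // 0xfdeadbee (same rotation)
}
```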
-
Gabor Greif authored
llvm-svn: 55574
-
- Aug 30, 2008
-
Gabor Greif authored
llvm-svn: 55571
-
Gordon Henriksen authored
Based on patch by Giorgos Korfiatis. llvm-svn: 55570
-
Gordon Henriksen authored
Breakage was exposed in the OCaml bindings tests after Chris uncommented an assertion in r55084. llvm-svn: 55566
-
Gabor Greif authored
llvm-svn: 55565
-
Evan Cheng authored
Re-apply 55467 with fix. If a copy is being replaced by a remat'ed def, transfer the implicit defs onto the remat'ed instruction. llvm-svn: 55564
-
Evan Cheng authored
llvm-svn: 55563
-
Evan Cheng authored
For now, we can't mark XOR64rr as isAsCheapAsAMove. It's technically correct, but various passes cannot handle rematerializing these. llvm-svn: 55562
-
Evan Cheng authored
Transform (x << (y & 31)) -> (x << y). This takes advantage of the fact that the second operand (the shift count) of x86 shift instructions is limited to 0-31 (or 0-63 in the x86-64 case). llvm-svn: 55558
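The fold is sound because the hardware masks the count anyway. The stand-in below models x86's mod-32 shift-count behavior to make that visible; it is an illustration of the semantics, not the lowering code.

```cpp
#include <cstdint>
#include <cstdio>

// Model of a 32-bit x86 SHL: the hardware uses only the low 5 bits of
// the count, i.e. the count is taken mod 32.
static uint32_t x86_shl32(uint32_t x, uint32_t count) {
  return x << (count & 31);
}

int main() {
  uint32_t x = 0xDEADBEEF;
  // Because the instruction masks the count itself, an explicit
  // (y & 31) in the IR changes nothing, which is why instruction
  // selection can drop it.
  for (uint32_t y = 0; y < 256; ++y)
    if (x86_shl32(x, y & 31) != x86_shl32(x, y)) {
      printf("mismatch at y=%u\n", y);
      return 1;
    }
  printf("shl agrees with and without the explicit mask\n");
  return 0;
}
```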
-
Daniel Dunbar authored
support it. llvm-svn: 55557
-
Dale Johannesen authored
llvm-svn: 55556
-
Owen Anderson authored
Fix an issue where a use might be selected before a def, and then we didn't respect the pre-chosen vreg assignment when selecting the def. This is the naive solution to the problem: insert a copy to the pre-chosen vreg. Other solutions might be preferable, such as:
1) Passing the dest reg into FastEmit_. However, this would require the higher-level code to know about register classes, which it currently doesn't.
2) Selecting blocks in reverse postorder. This has some compile-time cost for computing the order, and we'd need to measure its impact.
llvm-svn: 55555
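A toy rendering of the naive fix described here, with vregs reduced to plain integers; every name below is illustrative, and none of it is the actual FastISel code.

```cpp
#include <cstdio>

using VReg = unsigned;  // stand-in: vregs are just numbers here

// When a use was selected before its def, a vreg was pre-chosen for
// the value. Once the def is finally selected into its own vreg, the
// naive fix is a copy that reconciles the two assignments.
static void emitCopy(VReg Dst, VReg Src) {
  printf("COPY vreg%u <- vreg%u\n", Dst, Src);
}

int main() {
  VReg PreChosen = 7;   // vreg the earlier use already refers to
  VReg DefResult = 12;  // vreg the def was actually selected into
  emitCopy(PreChosen, DefResult);  // uses of vreg7 now see the value
}
```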
-