  1. Feb 13, 2009
  2. Feb 12, 2009
  3. Jan 23, 2009
    • Simplify the logic of getting hold of a PHI predecessor block. · eb61fcf2
      Gabor Greif authored
      There is now a direct way to get from a value-use iterator to the incoming
      block in PHINode's API. This way we avoid the iterator->index->iterator
      trip, and especially the costly getOperandNo() invocation. Additionally,
      there is now an assertion that the iterator really refers to one of the
      PHI's Uses.
      
      llvm-svn: 62869
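      As a minimal sketch of the new direct route, written against the
      present-day API (the range-based uses() and the Use overload of
      getIncomingBlock; the 2009 spelling took a use_iterator, so treat
      the exact names below as assumptions):

        #include "llvm/IR/Instructions.h"
        using namespace llvm;

        void visitPHIUses(Value *V) {
          for (Use &U : V->uses()) {
            if (auto *PN = dyn_cast<PHINode>(U.getUser())) {
              // Old route: Use -> operand index -> incoming-value index ->
              // block, paying for getOperandNo() along the way. New route:
              // hand the Use straight to the PHI, which asserts that U
              // really is one of its uses.
              BasicBlock *BB = PN->getIncomingBlock(U);
              (void)BB;
            }
          }
        }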
  4. Jan 18, 2009
  5. Jan 12, 2009
  6. Jan 05, 2009
  7. Dec 19, 2008
    • - CodeGenPrepare does not split loop back edges but it only knows about back... · 3b3de7c2
      Evan Cheng authored
      - CodeGenPrepare does not split loop back edges, but it only knew about the back edges of single-block loops. It now does a DFS walk to find loop back edges.
      - Use SplitBlockPredecessors to factor out common predecessors of the critical-edge destination. This is disabled for now due to some regressions.
      
      llvm-svn: 61248
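      A self-contained sketch of the back-edge idea (not the actual
      CodeGenPrepare code): during a DFS, an edge u -> v is a back edge
      exactly when v is still on the current DFS stack, i.e. an ancestor
      of u. Plain node indices stand in for basic blocks here.

        #include <utility>
        #include <vector>

        std::vector<std::pair<int, int>>
        findBackEdges(const std::vector<std::vector<int>> &Succ, int Entry) {
          enum Color { White, Grey, Black };  // unvisited / on stack / done
          std::vector<Color> State(Succ.size(), White);
          std::vector<std::pair<int, int>> BackEdges;
          // Explicit (node, next-successor-index) stack instead of recursion.
          std::vector<std::pair<int, unsigned>> Stack{{Entry, 0u}};
          State[Entry] = Grey;
          while (!Stack.empty()) {
            auto &[U, I] = Stack.back();
            if (I < Succ[U].size()) {
              int V = Succ[U][I++];
              if (State[V] == Grey)           // target is a DFS ancestor
                BackEdges.push_back({U, V});
              else if (State[V] == White) {
                State[V] = Grey;
                Stack.push_back({V, 0u});
              }
            } else {
              State[U] = Black;               // all successors finished
              Stack.pop_back();
            }
          }
          return BackEdges;
        }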
  8. Nov 28, 2008
  9. Nov 27, 2008
  10. Nov 26, 2008
    • Turn on my codegen prepare heuristic by default. It doesn't affect · 397a11cc
      Chris Lattner authored
      performance in most cases on the Grawp tester, but does speed some 
      things up (like shootout/hash by 15%).  This also doesn't impact 
      compile time in a noticeable way on the Grawp tester.
      
      It also, of course, gets the testcase it was designed for right :)
      
      llvm-svn: 60120
    • teach the new heuristic how to handle inline asm. · fef04acc
      Chris Lattner authored
      llvm-svn: 60088
    • Improve ValueAlreadyLiveAtInst with a cheap and dirty, but effective · 6d71b7fb
      Chris Lattner authored
      heuristic: the value is already live at the new memory operation if
      it is used by some other instruction in the memop's block.  This is
      cheap and simple to compute (more so than full liveness).
      
      This improves the new heuristic even more.  For example, it eliminates
      two of the three new instructions in 255.vortex:DbmFileInGrpHdr,
      which is one of the functions that the heuristic regressed.  Overall
      this eliminates another 40 instructions from 403.gcc and visibly
      reduces register pressure in 255.vortex (though this only actually
      ends up saving two instructions across the whole program).
      
      llvm-svn: 60084
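      A sketch of that cheap test, using today's API names rather than the
      original ValueAlreadyLiveAtInst signature (so the exact spelling is
      an assumption): the value counts as live at the memory operation if
      any other instruction in the same block uses it.

        #include "llvm/IR/Instructions.h"
        using namespace llvm;

        // Approximate liveness: V is "already live" at MemInst if some user
        // other than MemInst sits in MemInst's block; no real liveness
        // analysis is computed.
        static bool valueLiveAtInst(Value *V, Instruction *MemInst) {
          for (User *U : V->users())
            if (auto *I = dyn_cast<Instruction>(U))
              if (I != MemInst && I->getParent() == MemInst->getParent())
                return true;
          return false;
        }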
    • Start reworking a subpiece of the profitability heuristic to be · e34fe2c5
      Chris Lattner authored
      phrased in terms of liveness instead of as a horrible hack.  :)
      
      In practice, this doesn't change the generated code for either 
      255.vortex or 403.gcc, but it could cause minor code changes in 
      theory.  This is the framework for coming changes.
      
      llvm-svn: 60082
    • add a comment, make save/restore logic more obvious. · 383a797f
      Chris Lattner authored
      llvm-svn: 60076
    • This adds some code (currently disabled unless you pass · eb3e4fb6
      Chris Lattner authored
      -enable-smarter-addr-folding to llc) that gives CGP a better
      cost model for when to sink computations into addressing modes.
      The basic observation is that sinking increases register 
      pressure when part of the addr computation has to be available
      for other reasons, such as having a use that is a non-memory
      operation.  In cases where it works, it can substantially reduce
      register pressure.
      
      This code is currently an overall win on 403.gcc and 255.vortex
      (the two things I've been looking at), but there are several 
      things I want to do before enabling it by default:
      
      1. This isn't doing any caching of results, so it is much slower 
         than it could be.  It currently slows down release-asserts llc 
         by 1.7% on 176.gcc: 27.12s -> 27.60s.
      2. This doesn't think about inline asm memory operands yet.
      3. The cost model botches the case when the needed value is live
         across the computation for other reasons.
      
      I'll continue poking at this, and eventually turn it on as llcbeta.
      
      llvm-svn: 60074
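      A hypothetical C-style fragment illustrating the observation: the
      address p could fold into the load's addressing mode, but the cast
      is a non-memory use that keeps p live anyway, so sinking the
      computation next to the load would only raise register pressure.

        #include <stdint.h>

        long f(char *base) {
          char *p = base + 16;          // the address computation
          intptr_t a = (intptr_t)p;     // non-memory use keeps p live
          return (long)a + p[0];        // p[0] could fold base+16
        }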
    • Teach CodeGenPrepare to look through Bitcast instructions when attempting to · a9ab165b
      Chris Lattner authored
      optimize addressing modes.  This allows us to optimize things like isel-sink2.ll
      into:
      
      	movl	4(%esp), %eax
      	cmpb	$0, 4(%eax)
      	jne	LBB1_2	## F
      LBB1_1:	## TB
      	movl	$4, %eax
      	ret
      LBB1_2:	## F
      	movzbl	7(%eax), %eax
      	ret
      
      instead of:
      
      _test:
      	movl	4(%esp), %eax
      	cmpb	$0, 4(%eax)
      	leal	4(%eax), %eax
      	jne	LBB1_2	## F
      LBB1_1:	## TB
      	movl	$4, %eax
      	ret
      LBB1_2:	## F
      	movzbl	3(%eax), %eax
      	ret
      
      This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
      
      Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
      it is really testing what it thinks it is.
      
      llvm-svn: 60068
  11. Nov 25, 2008
  12. Nov 24, 2008
  13. Sep 24, 2008
  14. Sep 04, 2008
  15. Jul 27, 2008
  16. Jun 08, 2008
    • Remove comparison methods for MVT. The main cause · 11dd4245
      Duncan Sands authored
      of apint codegen failure is the DAG combiner doing
      the wrong thing because it was comparing MVTs using
      < rather than comparing the number of bits.  Removing
      the < method makes this mistake impossible to commit.
      Instead, add helper methods for comparing bits and use
      them.
      
      llvm-svn: 52098
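      A sketch of the new idiom; helpers matching this description survive
      as bitsLT/bitsGT on today's MVT, though the header location and the
      exact 2008 names are assumptions:

        #include "llvm/CodeGen/ValueTypes.h"  // MVT; path varies by version
        using namespace llvm;

        // Compare bit widths explicitly; with operator< removed, an
        // accidental ordering comparison of MVTs no longer compiles.
        bool needsTruncate(MVT SrcVT, MVT DstVT) {
          return DstVT.bitsLT(SrcVT);
        }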
  17. Jun 06, 2008
    • Wrap MVT::ValueType in a struct to get type safety · 13237ac3
      Duncan Sands authored
      and better control over the abstraction.  Rename the type
      to MVT.  To update out-of-tree patches, the main
      thing to do is to rename MVT::ValueType to MVT, and
      rewrite expressions like MVT::getSizeInBits(VT) in
      the form VT.getSizeInBits().  Use VT.getSimpleVT()
      to extract an MVT::SimpleValueType for use in switch
      statements (you will get an assert failure if VT is
      an extended value type - these shouldn't exist after
      type legalization).
      This results in a small speedup of codegen and no
      new testsuite failures (x86-64 linux).
      
      llvm-svn: 52044
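      The update recipe from the message as a before/after sketch, written
      against the API of that era (later splits into MVT/EVT changed the
      spellings again, so take the details as assumptions):

        #include "llvm/CodeGen/ValueTypes.h"
        using namespace llvm;

        unsigned widthOf(MVT VT) {
          // Before: unsigned Bits = MVT::getSizeInBits(VT);
          unsigned Bits = VT.getSizeInBits();  // now a method on the struct
          switch (VT.getSimpleVT()) {          // asserts on extended types
          case MVT::i32: return Bits;
          default:       return 0;
          }
        }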
  18. May 23, 2008
  19. May 16, 2008
  20. May 13, 2008
  21. May 08, 2008
  22. Apr 27, 2008
    • Implement a significant optimization for inline asm: · 22379734
      Chris Lattner authored
      When choosing between constraints with multiple options,
      like "ir", test to see if we can use the 'i' constraint and
      go with that if possible.  This produces better ASM in
      all cases (sparing a register and an instruction to load it),
      and fixes inline asm like this:
      
      void test () {
        asm volatile (" %c0 %1 " : : "imr" (42), "imr"(14));
      }
      
      Previously we would dump "42" into a memory location (which
      is ok for the 'm' constraint) which would cause a problem
      because the 'c' modifier is not valid on memory operands.
      
      Isn't it great how inline asm turns 'missed optimization'
      into 'compile failed'??
      
      Incidentally, this was the todo in 
      PowerPC/2007-04-24-InlineAsm-I-Modifier.ll
      
      Please do NOT pull this into Tak.
      
      llvm-svn: 50315
    • Move a bunch of inline asm code out of line. · 4793515a
      Chris Lattner authored
      llvm-svn: 50313