  1. Nov 27, 2008
  2. Nov 26, 2008
      Turn on my codegen prepare heuristic by default. It doesn't affect · 397a11cc
      Chris Lattner authored
      performance in most cases on the Grawp tester, but does speed some 
      things up (like shootout/hash by 15%).  This also doesn't impact 
      compile time in a noticeable way on the Grawp tester.
      
      It also, of course, gets the testcase it was designed for right :)
      
      llvm-svn: 60120
      Teach the new heuristic how to handle inline asm. · fef04acc
      Chris Lattner authored
      llvm-svn: 60088
      Improve ValueAlreadyLiveAtInst with a cheap and dirty, but effective · 6d71b7fb
      Chris Lattner authored
      heuristic: the value is already live at the new memory operation if
      it is used by some other instruction in the memop's block.  This is
      cheap and simple to compute (more so than full liveness).
      
      This improves the new heuristic even more.  For example, it cuts two
      out of three new instructions out of 255.vortex:DbmFileInGrpHdr, 
      which is one of the functions that the heuristic regressed.  This
      overall eliminates another 40 instructions from 403.gcc and visibly
      reduces register pressure in 255.vortex (though that only nets a
      2-instruction saving across the whole program).
      
      llvm-svn: 60084
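      A minimal sketch of the cheap liveness test described above (the function
      name and the modern llvm/IR/ header paths are illustrative; this is not
      the actual CodeGenPrepare code):

        #include "llvm/IR/Instruction.h"
        #include "llvm/Support/Casting.h"

        // A value is treated as "already live" at the memory instruction if
        // some *other* instruction in the memop's block uses it.  This
        // over-approximates real liveness but needs no dataflow analysis.
        static bool valueAlreadyLiveAtInst(llvm::Value *Val,
                                           llvm::Instruction *MemoryInst) {
          llvm::BasicBlock *BB = MemoryInst->getParent();
          for (llvm::User *U : Val->users())
            if (auto *I = llvm::dyn_cast<llvm::Instruction>(U))
              if (I != MemoryInst && I->getParent() == BB)
                return true;  // a local use keeps Val live in this block
          return false;
        }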
      Start reworking a subpiece of the profitability heuristic to be · e34fe2c5
      Chris Lattner authored
      phrased in terms of liveness instead of as a horrible hack.  :)
      
      In practice, this doesn't change the generated code for either 
      255.vortex or 403.gcc, but it could cause minor code changes in 
      theory.  This is groundwork for coming changes.
      
      llvm-svn: 60082
      add a comment, make save/restore logic more obvious. · 383a797f
      Chris Lattner authored
      llvm-svn: 60076
      This adds some code (currently disabled unless you pass · eb3e4fb6
      Chris Lattner authored
      -enable-smarter-addr-folding to llc) that gives CGP a better
      cost model for when to sink computations into addressing modes.
      The basic observation is that sinking increases register 
      pressure when part of the addr computation has to be available
      for other reasons, such as having a use that is a non-memory
      operation.  In cases where it works, it can substantially reduce
      register pressure.
      
      This code is currently an overall win on 403.gcc and 255.vortex
      (the two things I've been looking at), but there are several 
      things I want to do before enabling it by default:
      
      1. This isn't doing any caching of results, so it is much slower 
         than it could be.  It currently slows down release-asserts llc 
         by 1.7% on 176.gcc: 27.12s -> 27.60s.
      2. This doesn't think about inline asm memory operands yet.
      3. The cost model botches the case when the needed value is live
         across the computation for other reasons.
      
      I'll continue poking at this, and eventually turn it on as llcbeta.
      
      llvm-svn: 60074
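      The idea behind this cost model reads roughly as follows; a hedged
      sketch with a hypothetical name, not the code the commit adds:

        #include "llvm/IR/Instructions.h"

        // Sinking an address computation is only a win if every use of it
        // can be folded into some memory operation's addressing mode; any
        // other use forces the value to stay live in a register anyway, so
        // sinking a copy just raises register pressure.
        static bool isProfitableToSinkAddr(llvm::Instruction *AddrComp) {
          for (llvm::User *U : AddrComp->users()) {
            if (llvm::isa<llvm::LoadInst>(U))
              continue;                        // folds into the load address
            if (auto *SI = llvm::dyn_cast<llvm::StoreInst>(U))
              if (SI->getPointerOperand() == AddrComp)
                continue;                      // folds into the store address
            return false;                      // non-memory use found
          }
          return true;
        }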
      Teach CodeGenPrepare to look through Bitcast instructions when attempting to · a9ab165b
      Chris Lattner authored
      optimize addressing modes.  This allows us to optimize things like isel-sink2.ll
      into:
      
      	movl	4(%esp), %eax
      	cmpb	$0, 4(%eax)
      	jne	LBB1_2	## F
      LBB1_1:	## TB
      	movl	$4, %eax
      	ret
      LBB1_2:	## F
      	movzbl	7(%eax), %eax
      	ret
      
      instead of:
      
      _test:
      	movl	4(%esp), %eax
      	cmpb	$0, 4(%eax)
      	leal	4(%eax), %eax
      	jne	LBB1_2	## F
      LBB1_1:	## TB
      	movl	$4, %eax
      	ret
      LBB1_2:	## F
      	movzbl	3(%eax), %eax
      	ret
      
      This shrinks (e.g.) 403.gcc from 1133510 to 1128345 lines of .s.
      
      Note that the 2008-10-16-SpillerBug.ll testcase is dubious at best; I doubt
      it is really testing what it thinks it is.
      
      llvm-svn: 60068
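      The "look through Bitcast" step amounts to peeling pointer casts off
      the address before matching the addressing mode. A sketch under an
      illustrative name (present-day LLVM exposes the general form as
      Value::stripPointerCasts()):

        #include "llvm/IR/Instructions.h"

        // A bitcast of a pointer is free at the machine level, so the
        // addressing-mode matcher can look straight at the source value.
        static llvm::Value *lookThroughBitcasts(llvm::Value *Addr) {
          while (auto *BC = llvm::dyn_cast<llvm::BitCastInst>(Addr))
            Addr = BC->getOperand(0);
          return Addr;
        }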
  3. Nov 25, 2008
  4. Nov 24, 2008
  5. Sep 24, 2008
  6. Sep 04, 2008
  7. Jul 27, 2008
  8. Jun 08, 2008
      Remove comparison methods for MVT. The main cause · 11dd4245
      Duncan Sands authored
      of apint codegen failures was the DAG combiner doing
      the wrong thing because it compared MVTs using
      < rather than comparing the number of bits.  Removing
      the < method makes this mistake impossible to commit.
      Instead, add helper methods for comparing bits and use
      them.
      
      llvm-svn: 52098
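      For illustration, a check that once ordered types with < would now be
      phrased with the bit-width helpers this commit introduces (the function
      and variable names here are made up):

        #include "llvm/CodeGen/ValueTypes.h"

        // Compare value types by width, not by raw enum order: the enum
        // does not sort scalars and vectors by size, so 'SrcVT < DestVT'
        // asked the wrong question.
        static bool needsExtension(llvm::MVT SrcVT, llvm::MVT DestVT) {
          return SrcVT.bitsLT(DestVT);  // true iff SrcVT is strictly narrower
        }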
  9. Jun 06, 2008
      Wrap MVT::ValueType in a struct to get type safety · 13237ac3
      Duncan Sands authored
      and better control the abstraction.  Rename the type
      to MVT.  To update out-of-tree patches, the main
      thing to do is to rename MVT::ValueType to MVT, and
      rewrite expressions like MVT::getSizeInBits(VT) in
      the form VT.getSizeInBits().  Use VT.getSimpleVT()
      to extract an MVT::SimpleValueType for use in switch
      statements (you will get an assert failure if VT is
      an extended value type - these shouldn't exist after
      type legalization).
      This results in a small speedup of codegen and no
      new testsuite failures (x86-64 linux).
      
      llvm-svn: 52044
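      A before/after sketch of the out-of-tree update the message describes
      (illustrative function; API as of this revision):

        #include "llvm/CodeGen/ValueTypes.h"

        // Before: MVT::ValueType VT = ...;
        //         unsigned Bits = MVT::getSizeInBits(VT);
        // After:  the free functions become methods on the MVT struct.
        static bool isScalarInteger32(llvm::MVT VT) {
          if (VT.getSizeInBits() != 32)   // was MVT::getSizeInBits(VT)
            return false;
          switch (VT.getSimpleVT()) {     // asserts if VT is extended
          case llvm::MVT::i32:
            return true;
          default:
            return false;
          }
        }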
  10. May 23, 2008
  11. May 16, 2008
  12. May 13, 2008
  13. May 08, 2008
  14. Apr 27, 2008
      Implement a significant optimization for inline asm: · 22379734
      Chris Lattner authored
      When choosing between constraints with multiple options,
      like "ir", test to see if we can use the 'i' constraint and
      go with that if possible.  This produces more optimal ASM in
      all cases (sparing a register and an instruction to load it),
      and fixes inline asm like this:
      
      void test () {
        asm volatile (" %c0 %1 " : : "imr" (42), "imr"(14));
      }
      
      Previously we would dump "42" into a memory location (which
      is ok for the 'm' constraint) which would cause a problem
      because the 'c' modifier is not valid on memory operands.
      
      Isn't it great how inline asm turns 'missed optimization'
      into 'compile failed'??
      
      Incidentally, this was the todo in 
      PowerPC/2007-04-24-InlineAsm-I-Modifier.ll
      
      Please do NOT pull this into Tak.
      
      llvm-svn: 50315
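      The selection policy, sketched in plain C++ (a hypothetical helper,
      not the actual TargetLowering code):

        #include <string>

        // Among multi-option constraints such as "imr", prefer 'i' when
        // the operand is a compile-time constant: an immediate costs
        // neither a register nor a load, and it keeps modifiers like
        // %c0 legal (the 'c' modifier is invalid on memory operands).
        static char chooseConstraint(const std::string &Codes,
                                     bool IsConstant) {
          if (IsConstant && Codes.find('i') != std::string::npos)
            return 'i';   // emit the constant directly into the asm string
          if (Codes.find('r') != std::string::npos)
            return 'r';   // otherwise a register beats a memory slot
          return 'm';     // last resort: materialize in memory
        }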
      Move a bunch of inline asm code out of line. · 4793515a
      Chris Lattner authored
      llvm-svn: 50313
  15. Apr 25, 2008
      Remove the code from CodeGenPrepare that moved getresult instructions · ca95a5f4
      Dan Gohman authored
      to the block that defines their operands. This doesn't work in the
      case that the operand is an invoke, because invoke is a terminator
      and must be the last instruction in a block.
      
      Replace it with support in SelectionDAGISel for copying struct values
      into sequences of virtual registers.
      
      llvm-svn: 50279
  16. Apr 06, 2008
  17. Mar 21, 2008
  18. Mar 19, 2008
  19. Feb 26, 2008
  20. Jan 20, 2008
  21. Dec 29, 2007
  22. Dec 25, 2007