Oct 30, 2010
    • Overhaul memory barriers in the ARM backend. Radar 8601999. · 7ed59714
      Bob Wilson authored
      There were a number of issues to fix up here:
      * The "device" argument of the llvm.memory.barrier intrinsic should be
      used to distinguish the "Full System" domain from the "Inner Shareable"
      domain.  It has nothing to do with using DMB vs. DSB instructions.
      * The compiler should never need to emit DSB instructions.  Remove the
      ARMISD::SYNCBARRIER node and also remove the instruction patterns for DSB.
      * Merge the separate DMB/DSB instructions for options only used for the
      disassembler with the default DMB/DSB instructions.  Add the default
      "full system" option ARM_MB::SY to the ARM_MB::MemBOpt enum.
      * Add a separate ARMISD::MEMBARRIER_MCR node for subtargets that implement
      a data memory barrier using the MCR instruction.
      * Fix up encodings for these instructions (except MCR).
      I also updated the tests and added a few new ones to check DMB options
      that were not previously being exercised.
      
      llvm-svn: 117756
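For context, a sketch of the intrinsic the commit refers to, assuming the pre-3.0 five-operand form of llvm.memory.barrier; the exact flag ordering should be checked against the LangRef of that era:

```llvm
; Sketch, not authoritative: the first four i1 flags select which
; load/store orderings the barrier enforces, and the final i1 is the
; "device" flag this commit repurposes to pick the barrier domain
; (true: Full System, i.e. DMB SY; false: Inner Shareable).
declare void @llvm.memory.barrier(i1, i1, i1, i1, i1)

define void @full_system_barrier() {
  call void @llvm.memory.barrier(i1 true, i1 true, i1 true, i1 true, i1 true)
  ret void
}
```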
    • Remove hard tab characters. · 069f38d1
      Jim Grosbach authored
      llvm-svn: 117742
Oct 06, 2010
    • - Add TargetInstrInfo::getOperandLatency() to compute operand latencies. This · 49d4c0bd
      Evan Cheng authored
        allows targets to correctly compute latency for cases where static scheduling
        itineraries aren't sufficient, e.g. variable_ops instructions such as
        ARM::ldm.
        This also allows targets without scheduling itineraries to compute operand
        latencies; e.g. X86 can return (approximated) latencies for high-latency
        instructions such as division.
      - Compute operand latencies for those defined by load-multiple instructions,
        e.g. ldm, and those used by store-multiple instructions, e.g. stm.
      
      llvm-svn: 115755
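An interface sketch of the hook this commit describes, reconstructed from the description above; the exact parameter names and return type in TargetInstrInfo.h may differ:

```cpp
// Sketch only (member of TargetInstrInfo). DefIdx/UseIdx identify the
// specific def and use operands whose latency is being queried, which is
// what lets targets answer per-operand even for variable_ops instructions
// like ARM::ldm, or without any itinerary data at all (e.g. X86).
virtual int getOperandLatency(const InstrItineraryData *ItinData,
                              const MachineInstr *DefMI, unsigned DefIdx,
                              const MachineInstr *UseMI, unsigned UseIdx) const;
```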
Sep 21, 2010
    • fix a long-standing wart: all the ComplexPatterns were being · 0e023ea0
      Chris Lattner authored
      passed the root of the match, even though only a few patterns
      actually needed it (one in X86, several in ARM [which should
      be refactored anyway], and some in CellSPU that I don't feel
      like detangling). Instead of requiring all ComplexPatterns to
      take the dead root, have targets opt in to receiving the root by
      putting SDNPWantRoot on the ComplexPattern.
      
      llvm-svn: 114471
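The opt-in mechanism can be sketched in a target .td file; this is a hypothetical definition (the selector name and operand count are placeholders), assuming ComplexPattern's final field is its property list:

```tablegen
// Sketch: listing SDNPWantRoot in the properties field is what opts this
// pattern's Select function into receiving the match root as an extra
// argument; patterns that omit it no longer get the (usually dead) root.
def addrmode2 : ComplexPattern<i32, 3, "SelectAddrMode2", [], [SDNPWantRoot]>;
```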
Sep 01, 2010
    • temporarily revert r112664, it is causing a decoding conflict, and · 39eccb47
      Chris Lattner authored
      the testcases should be merged.
      
      llvm-svn: 112711
    • We have a chance for an optimization. Consider this code: · 6789f8b6
      Bill Wendling authored
      int x(int t) {
        if (t & 256)
          return -26;
        return 0;
      }
      
      We generate this:
      
           tst.w   r0, #256
           mvn     r0, #25
           it      eq
           moveq   r0, #0
      
      while gcc generates this:
      
           ands    r0, r0, #256
           it      ne
           mvnne   r0, #25
           bx      lr
      
      Scandalous really!
      
      During ISel, we can look for this particular pattern: a "MOVCC" that
      uses the flag from a CMPZ that is itself comparing an AND instruction
      to 0. Something like this (greatly simplified):
      
        %r0 = ISD::AND ...
        ARMISD::CMPZ %r0, 0         @ sets [CPSR]
        %r0 = ARMISD::MOVCC 0, -26  @ reads [CPSR]
      
      All we have to do is convert the "ISD::AND" into an "ARM::ANDS" that sets [CPSR]
      when the result is zero. The zero value will already be in the %r0 register, and
      we only need to change it if the AND result wasn't zero. Easy!
      
      llvm-svn: 112664