  1. Nov 08, 2006
    • optimize single MBB loops better. In particular, produce: · bf3b57f2
      Chris Lattner authored
      LBB1_57:        #bb207.i
              movl 72(%esp), %ecx
              movb (%ecx,%eax), %cl
              movl 80(%esp), %edx
              movb %cl, 1(%edx,%eax)
              incl %eax
              cmpl $143, %eax
              jne LBB1_57     #bb207.i
              jmp LBB1_64     #cond_next255.i
      
      instead of:
      
      LBB1_57:        #bb207.i
              movl 72(%esp), %ecx
              movb (%ecx,%eax), %cl
              movl 80(%esp), %edx
              movb %cl, 1(%edx,%eax)
              incl %eax
              cmpl $143, %eax
              je LBB1_64      #cond_next255.i
              jmp LBB1_57     #bb207.i
      
      This eliminates a branch per iteration of the loop.  This hurt PPC
      particularly, because the extra branch meant another dispatch group for
      each iteration of the loop.
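
      For illustration, a C loop of roughly this shape could lower to the
      single-basic-block body shown above; src, dst, and the trip count are
      assumptions read off the assembly, not taken from the commit:

              /* Hypothetical reconstruction: each iteration loads src[i]
                 and stores it to dst[i + 1], then increments i and compares
                 against 143, matching the movb/incl/cmpl sequence above. */
              void copy_shifted(const char *src, char *dst) {
                  for (int i = 0; i != 143; ++i)
                      dst[i + 1] = src[i];
              }

      With the rotated layout, only the jne back-edge executes per iteration
      (taken while the loop runs), and the jmp to cond_next255.i executes
      once, on exit.  In the old layout both the je and the jmp executed on
      every iteration.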
      
      llvm-svn: 31530
    • Beautify. · 800596d6
      Devang Patel authored
      Clarify comments.
      
      llvm-svn: 31529
  2. Nov 07, 2006