  1. Oct 08, 2004
  2. Oct 07, 2004
  3. Oct 06, 2004
    • Implement GlobalConstifier/trivialstore.llx, and also do some · 1f849a08
      Chris Lattner authored
      simplifications of the resultant program to avoid making later passes
      do it all.
      
      This allows us to constify globals whose only stores write back the same
      constant they were initialized with.
      
      Surprisingly this comes up ALL of the freaking time, dozens of times in
      SPEC, 30 times in vortex alone.
      
      For example, on 256.bzip2, it allows us to constify these two globals:
      
      %smallMode = internal global ubyte 0             ; <ubyte*> [#uses=8]
      %verbosity = internal global int 0               ; <int*> [#uses=49]
      
      Which (with later optimizations) results in the bytecode file shrinking
      from 82286 to 69686 bytes!  Let's hear it for IPO :)
      
      For the record, it's nuking lots of "if (verbosity > 2) { do lots of stuff }"
      code.
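
      The idea can be sketched in a few lines (a hypothetical helper for
      illustration, not the pass's actual code): a global is constifiable when
      every store into it just re-writes its initializer value, at which point
      guards like `if (verbosity > 2)` fold away.

      ```python
      def constifiable(initializer, stores):
          """Sketch of the commit's idea (names are illustrative, not LLVM's):
          a global can be marked constant if every store into it writes the
          same value it was initialized with."""
          return all(value == initializer for value in stores)

      # %verbosity = internal global int 0, and the program only ever runs
      # "verbosity = 0" -- so the global is effectively a constant 0.
      print(constifiable(0, [0, 0]))   # True: constifiable
      print(constifiable(0, [0, 3]))   # False: a store writes a new value
      ```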
      
      llvm-svn: 16793
    • Don't let null nodes sneak past cast instructions · af88fcd4
      Chris Lattner authored
      llvm-svn: 16779
    • Change Type::isAbstract to have better comments, a more correct name · 43e03c9c
      Chris Lattner authored
      (PromoteAbstractToConcrete), and to use a set to avoid recomputation.
      In particular, this set eliminates the potentially exponential cases
      from this little recursive algorithm.
      
      On a particularly nasty testcase, llvm-dis on the .bc file went from 34
      minutes (at which point I killed it; it still hadn't finished) to 0.57s.
      Remember, kids: exponential algorithms are bad.
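
      The fix can be modeled in a few lines of Python (hypothetical names; the
      real code is C++): without a visited set, a recursive walk over a type
      graph with shared subtypes revisits the same nodes exponentially often;
      with one, each node is examined exactly once.

      ```python
      class Ty:
          """Hypothetical stand-in for LLVM's Type nodes, not the real API."""
          def __init__(self, abstract=False, contained=()):
              self.abstract = abstract
              self.contained = contained

      def promote_check(ty, visited=None):
          # Carry a visited set so each node is examined once, even when
          # subtypes are shared; without it this walk is exponential on DAGs.
          if visited is None:
              visited = set()
          if id(ty) in visited:
              return False        # already examined; nothing new to find here
          visited.add(id(ty))
          if ty.abstract:
              return True
          return any(promote_check(sub, visited) for sub in ty.contained)

      # A 60-deep DAG where every node is referenced twice: naive recursion
      # would make ~2^60 calls; the memoized walk makes 61.
      node = Ty()
      for _ in range(60):
          node = Ty(contained=(node, node))
      print(promote_check(node))   # False, and instantly
      ```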
      
      llvm-svn: 16772
    • Correct some typos · f94f985b
      Chris Lattner authored
      llvm-svn: 16770
    • Instcombine: -(X sdiv C) -> (X sdiv -C), tested by sub.ll:test16 · 0aee4b79
      Chris Lattner authored
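
      The identity behind this fold, -(X sdiv C) == X sdiv -C for truncating
      signed division, is easy to spot-check with a quick Python model of
      C-style division (the INT_MIN/-1 overflow edge case is ignored here):

      ```python
      def sdiv(x, c):
          # C-style signed division truncates toward zero (Python's // floors,
          # so build truncation from magnitudes)
          q = abs(x) // abs(c)
          return -q if (x < 0) != (c < 0) else q

      # Negating the quotient is the same as negating the divisor.
      for x in range(-20, 21):
          for c in (1, 2, 3, 7, -1, -2, -5):
              assert -sdiv(x, c) == sdiv(x, -c)
      print("identity holds")
      ```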
      llvm-svn: 16769
    • Remove debugging code, fix encoding problem. This fixes the problems · 93867e51
      Chris Lattner authored
      the JIT had last night.
      
      llvm-svn: 16766
    • Turning on fsel code gen now that we can do so would be good. · 9a1fbaf1
      Nate Begeman authored
      llvm-svn: 16765
    • Implement floating-point select for lt, gt, le, ge using the PowerPC fsel · fac8529d
      Nate Begeman authored
      instruction.
      
      Now, rather than emitting the following loop out of bisect:
      .LBB_main_19:	; no_exit.0.i
      	rlwinm r3, r2, 3, 0, 28
      	lfdx f1, r3, r27
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f2, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fsub f2, f2, f1
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f4, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fcmpu cr0, f1, f4
      	bge .LBB_main_64	; no_exit.0.i
      .LBB_main_63:	; no_exit.0.i
      	b .LBB_main_65	; no_exit.0.i
      .LBB_main_64:	; no_exit.0.i
      	fmr f2, f1
      .LBB_main_65:	; no_exit.0.i
      	addi r3, r2, 1
      	rlwinm r3, r3, 3, 0, 28
      	lfdx f1, r3, r27
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f4, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fsub f4, f4, f1
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f5, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fcmpu cr0, f1, f5
      	bge .LBB_main_67	; no_exit.0.i
      .LBB_main_66:	; no_exit.0.i
      	b .LBB_main_68	; no_exit.0.i
      .LBB_main_67:	; no_exit.0.i
      	fmr f4, f1
      .LBB_main_68:	; no_exit.0.i
      	fadd f1, f2, f4
      	addis r3, r30, ha16(.CPI_main_2-"L00000$pb")
      	lfd f2, lo16(.CPI_main_2-"L00000$pb")(r3)
      	fmul f1, f1, f2
      	rlwinm r3, r2, 3, 0, 28
      	lfdx f2, r3, r28
      	fadd f4, f2, f1
      	fcmpu cr0, f4, f0
      	bgt .LBB_main_70	; no_exit.0.i
      .LBB_main_69:	; no_exit.0.i
      	b .LBB_main_71	; no_exit.0.i
      .LBB_main_70:	; no_exit.0.i
      	fmr f0, f4
      .LBB_main_71:	; no_exit.0.i
      	fsub f1, f2, f1
      	addi r2, r2, -1
      	fcmpu cr0, f1, f3
      	blt .LBB_main_73	; no_exit.0.i
      .LBB_main_72:	; no_exit.0.i
      	b .LBB_main_74	; no_exit.0.i
      .LBB_main_73:	; no_exit.0.i
      	fmr f3, f1
      .LBB_main_74:	; no_exit.0.i
      	cmpwi cr0, r2, -1
      	fmr f16, f0
      	fmr f17, f3
      	bgt .LBB_main_19	; no_exit.0.i
      
      We emit this instead:
      .LBB_main_19:	; no_exit.0.i
      	rlwinm r3, r2, 3, 0, 28
      	lfdx f1, r3, r27
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f2, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fsub f2, f2, f1
      	fsel f1, f1, f1, f2
      	addi r3, r2, 1
      	rlwinm r3, r3, 3, 0, 28
      	lfdx f2, r3, r27
      	addis r3, r30, ha16(.CPI_main_1-"L00000$pb")
      	lfd f4, lo16(.CPI_main_1-"L00000$pb")(r3)
      	fsub f4, f4, f2
      	fsel f2, f2, f2, f4
      	fadd f1, f1, f2
      	addis r3, r30, ha16(.CPI_main_2-"L00000$pb")
      	lfd f2, lo16(.CPI_main_2-"L00000$pb")(r3)
      	fmul f1, f1, f2
      	rlwinm r3, r2, 3, 0, 28
      	lfdx f2, r3, r28
      	fadd f4, f2, f1
      	fsub f5, f0, f4
      	fsel f0, f5, f0, f4
      	fsub f1, f2, f1
      	addi r2, r2, -1
      	fsub f2, f1, f3
      	fsel f3, f2, f3, f1
      	cmpwi cr0, r2, -1
      	fmr f16, f0
      	fmr f17, f3
      	bgt .LBB_main_19	; no_exit.0.i
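
      fsel selects between two registers based on the sign of a third (FRT =
      FRC if FRA >= 0.0, else FRB), which is why each compare-and-branch
      diamond above collapses into a single fsub/fsel pair. A rough Python
      model (ignoring NaN and -0.0, where fsel is not IEEE-equivalent to a
      branch, which is part of why it is only legal in some cases):

      ```python
      def fsel(a, c, b):
          # Model of PowerPC "fsel FRT,FRA,FRC,FRB": FRT = FRC if FRA >= 0.0
          # else FRB.  NaN inputs are not modeled here.
          return c if a >= 0.0 else b

      # The branchy bge/fmr diamonds become single fsub/fsel pairs:
      def fmax(x, y):
          return fsel(x - y, x, y)   # fsub f5,f0,f4 ; fsel f0,f5,f0,f4

      def fmin(x, y):
          return fsel(x - y, y, x)   # fsub f2,f1,f3 ; fsel f3,f2,f3,f1

      print(fmax(2.0, 5.0), fmin(2.0, 5.0))   # 5.0 2.0
      ```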
      
      llvm-svn: 16764
    • Codegen signed mod by 2 or -2 more efficiently. Instead of generating: · 6835dedb
      Chris Lattner authored
      t:
              mov %EDX, DWORD PTR [%ESP + 4]
              mov %ECX, 2
              mov %EAX, %EDX
              sar %EDX, 31
              idiv %ECX
              mov %EAX, %EDX
              ret
      
      Generate:
      t:
              mov %ECX, DWORD PTR [%ESP + 4]
      ***     mov %EAX, %ECX
              cdq
              and %ECX, 1
              xor %ECX, %EDX
              sub %ECX, %EDX
      ***     mov %EAX, %ECX
              ret
      
      Note that the two marked moves are redundant, and should be eliminated by the
      register allocator, but aren't.
      
      Compare this to GCC, which generates:
      
      t:
              mov     %eax, DWORD PTR [%esp+4]
              mov     %edx, %eax
              shr     %edx, 31
              lea     %ecx, [%edx+%eax]
              and     %ecx, -2
              sub     %eax, %ecx
              ret
      
      or ICC 8.0, which generates:
      
      t:
              movl      4(%esp), %ecx                                 #3.5
              movl      $-2147483647, %eax                            #3.25
              imull     %ecx                                          #3.25
              movl      %ecx, %eax                                    #3.25
              sarl      $31, %eax                                     #3.25
              addl      %ecx, %edx                                    #3.25
              subl      %edx, %eax                                    #3.25
              addl      %eax, %eax                                    #3.25
              negl      %eax                                          #3.25
              subl      %eax, %ecx                                    #3.25
              movl      %ecx, %eax                                    #3.25
              ret                                                     #3.25
      
      We would be in great shape if not for the moves.
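
      The branchless sequence is easier to follow in a small model (Python ints
      standing in for 32-bit registers, valid over the tested range): cdq
      materializes the sign as all-ones or zero, and the xor/sub pair then
      conditionally negates the low bit.

      ```python
      def smod2(x):
          s = x >> 31               # models cdq: -1 (all ones) if x < 0, else 0
          return ((x & 1) ^ s) - s  # conditionally negate the low bit

      # C's truncating remainder: -1, 0, or 1
      for x in range(-8, 9):
          expected = x % 2 if x >= 0 else -((-x) % 2)
          assert smod2(x) == expected
      print("matches C's x % 2")
      ```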
      
      llvm-svn: 16763
    • Really fix FreeBSD, which apparently doesn't tolerate the extern. · e4c60eb7
      Chris Lattner authored
      Thanks to Jeff Cohen for pointing out my goof.
      
      llvm-svn: 16762
    • Fix a scary bug with signed division by a power of two. We used to generate: · 7bd8f133
      Chris Lattner authored
      s:   ;; X / 4
              mov %EAX, DWORD PTR [%ESP + 4]
              mov %ECX, %EAX
              sar %ECX, 1
              shr %ECX, 30
              mov %EDX, %EAX
              add %EDX, %ECX
              sar %EAX, 2
              ret
      
      When we really meant:
      
      s:
              mov %EAX, DWORD PTR [%ESP + 4]
              mov %ECX, %EAX
              sar %ECX, 1
              shr %ECX, 30
              add %EAX, %ECX
              sar %EAX, 2
              ret
      
      Hey, this also reduces register pressure :)
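
      The difference between the two sequences is whether the rounding bias
      ever reaches the register that gets shifted. A quick model (32-bit
      masking assumed, so the logical shift produces a bias of 3 for negative
      inputs):

      ```python
      def sdiv4_fixed(x):
          bias = ((x >> 1) & 0xffffffff) >> 30  # 3 for negative 32-bit x, else 0
          return (x + bias) >> 2                # == trunc(x / 4)

      def sdiv4_buggy(x):
          # The old sequence computed the bias but shifted the *original*
          # value, yielding floor(x / 4): wrong for negative x.
          return x >> 2

      print(sdiv4_fixed(-7), sdiv4_buggy(-7))  # -1 -2
      ```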
      
      llvm-svn: 16761
    • Codegen signed divides by 2 and -2 more efficiently. In particular · 147edd2f
      Chris Lattner authored
      instead of:
      
      s:   ;; X / 2
              movl 4(%esp), %eax
              movl %eax, %ecx
              shrl $31, %ecx
              movl %eax, %edx
              addl %ecx, %edx
              sarl $1, %eax
              ret
      
      t:   ;; X / -2
              movl 4(%esp), %eax
              movl %eax, %ecx
              shrl $31, %ecx
              movl %eax, %edx
              addl %ecx, %edx
              sarl $1, %eax
              negl %eax
              ret
      
      Emit:
      
      s:
              movl 4(%esp), %eax
              cmpl $-2147483648, %eax
              sbbl $-1, %eax
              sarl $1, %eax
              ret
      
      t:
              movl 4(%esp), %eax
              cmpl $-2147483648, %eax
              sbbl $-1, %eax
              sarl $1, %eax
              negl %eax
              ret
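
      The cmp/sbb pair is a branchless way to add 1 exactly when the input is
      negative: comparing against -2147483648 sets the carry flag precisely for
      non-negative values (it is an unsigned compare), and sbb $-1 computes
      eax + 1 - CF. A quick model over 32-bit inputs:

      ```python
      def sdiv2(x):
          carry = 1 if (x & 0xffffffff) < 0x80000000 else 0  # cmpl: CF iff x >= 0
          x = x + 1 - carry                                  # sbbl $-1: +1 only if x < 0
          return x >> 1                                      # sarl: truncates toward zero

      # X / 2 and X / -2 with C's truncating semantics
      for x in range(-9, 10):
          assert sdiv2(x) == int(x / 2)
          assert -sdiv2(x) == int(x / -2)   # the trailing negl
      print("matches C's truncating division by 2 and -2")
      ```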
      
      llvm-svn: 16760
    • Add some new instructions. Fix the asm string for sbb32rr · e9bfa5a2
      Chris Lattner authored
      llvm-svn: 16759
    • Reduce code growth implied by the tail duplication pass by not duplicating · 2ce32df8
      Chris Lattner authored
      an instruction if it can be hoisted to a common dominator of the block.
      This implements: test/Regression/Transforms/TailDup/MergeTest.ll
      
      llvm-svn: 16758
    • FreeBSD uses GCC. Patch contributed by Jeff Cohen! · 32ed828f
      Chris Lattner authored
      llvm-svn: 16756
  4. Oct 05, 2004