  10. Jan 06, 2010
    • 
      Teach dag combine to fold the following transformation more aggressively: · 166a4e6c
      Evan Cheng authored
      (OP (trunc x), (trunc y)) -> (trunc (OP x, y))
      
      Unfortunately, this simple change causes dag combine to loop infinitely. The problem is that the shrink-demanded-ops optimizations tend to canonicalize expressions in the opposite direction, which is bad. This patch disables those optimizations in dag combine; instead, they are done as a late pass in sdisel.
      
      This also exposes some deficiencies in dag combine and x86 setcc / brcond lowering. Teach them to look past ISD::TRUNCATE in various places.
      
      llvm-svn: 92849
  23. Nov 16, 2009
    • 
      Make X86-64 in the Large model always emit 64-bit calls. · 10d3604a
      Jeffrey Yasskin authored
      The large code model is documented at
      http://www.x86-64.org/documentation/abi.pdf and says that calls should
      assume their target doesn't live within the 32-bit pc-relative offset
      that fits in the call instruction.
      
      To do this, we turn off the global-address->target-global-address
      conversion in X86TargetLowering::LowerCall(). The first attempt at
      this broke the lazy JIT because it can separate the movabs(imm->reg)
      from the actual call instruction. The lazy JIT receives the address of
      the movabs as a relocation and needs to record the return address from
      the call; and then when that call happens, it needs to patch the
      movabs with the newly-compiled target. We could thread the call
      instruction into the relocation and record the movabs<->call mapping
      explicitly, but that seems to require at least as much new
      complication in the code generator as this change.
      
      To fix this, we make lazy functions _always_ go through a call
      stub. You'd think we'd only have to force lazy calls through a stub on
      difficult platforms, but that turns out to break indirect calls
      through a function pointer. The right fix for that is to distinguish
      between calls and address-of operations on uncompiled functions, but
      that's complex enough to leave for someone else to do.
      
      Another attempt at this defined a new CALL64i pseudo-instruction,
      which expanded to a 2-instruction sequence in the assembly output and
      was special-cased in the X86CodeEmitter's emitInstruction()
      function. That broke indirect calls in the same way as above.
      
      This patch also removes a hack forcing Darwin to the small code model.
      Without far-call-stubs, the small code model requires things of the
      JITMemoryManager that the DefaultJITMemoryManager can't provide.
      
      Thanks to echristo for lots of testing!
      
      llvm-svn: 88984
  24. Nov 12, 2009
    • 
      Add a bool flag to StackObjects telling whether they reference spill slots. · 1fbe0544
      David Greene authored
      The AsmPrinter will use this information to determine whether to
      print a spill/reload comment.
      
      Remove default argument values.  It's too easy to pass a wrong argument
      value when multiple arguments have default values.  Make everything
      explicit to trap bugs early.
      
      Update all targets to adhere to the new interfaces.
      
      llvm-svn: 87022
    • 
      Add compare_lower and equals_lower methods to StringRef. Switch all users of StringsEqualNoCase (from StringExtras.h) to it. · 68e4945c
      Benjamin Kramer authored
      
      llvm-svn: 87020
  25. Nov 08, 2009
    • 
      x86 vector shuffle cleanup/fixes: · 3a313df6
      Nate Begeman authored
      1. Rename the movhp patfrag to movlhps, since that's what it actually matches.
      2. Eliminate the bogus movhps load and store patterns; they were incorrect. The load transforms are already handled (correctly) by shufps/unpack.
      3. Revert a recent test change to its correct form.
      
      llvm-svn: 86415