  1. Oct 19, 2012
  2. Oct 18, 2012
    • Fix a bug where a 32-bit address with the high bit set does not get symbolicated · b23926d3
      Kevin Enderby authored
      because the value is incorrectly being sign-extended when passed to
      SymbolLookUp().
      
      llvm-svn: 166234
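      A minimal C sketch of the class of bug described (illustrative only, not the
      disassembler code itself; the variable names are hypothetical): if a 32-bit
      address with its high bit set passes through a signed 32-bit intermediate,
      widening to 64 bits sign-extends it, corrupting the value handed to the
      symbol lookup routine.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          uint32_t addr = 0x80001000u;  /* high bit set */

          /* Buggy path: the value goes through a signed 32-bit type,
             so widening to 64 bits sign-extends it. */
          uint64_t bad  = (uint64_t)(int64_t)(int32_t)addr;

          /* Correct path: zero-extend the unsigned 32-bit address. */
          uint64_t good = (uint64_t)addr;

          printf("bad  = 0x%016llx\n", (unsigned long long)bad);
          printf("good = 0x%016llx\n", (unsigned long long)good);

          assert(bad  == 0xffffffff80001000ull);
          assert(good == 0x0000000080001000ull);
          return 0;
      }
      ```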
    • This patch fixes failures in the SingleSource/Regression/C/uint64_to_float · d34b5bd6
      Ulrich Weigand authored
      test case on PowerPC caused by rounding errors when converting from a 64-bit
      integer to a single-precision floating-point value. The reason for this is
      double rounding: on PowerPC we have to convert to an intermediate
      double-precision value first, which then gets rounded to the final
      single-precision result.
      
      The patch fixes the problem by preparing the 64-bit integer so that the
      first conversion step to double-precision will always be exact, and the
      final rounding step will result in the correctly-rounded single-precision
      result.  The generated code sequence is equivalent to what GCC would generate.
      
      When -enable-unsafe-fp-math is in effect, that extra effort is omitted
      and we accept possible rounding errors (just like GCC does as well).
      
      llvm-svn: 166178
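      The double-rounding effect can be reproduced in a few lines of portable C
      (an illustrative sketch, not the PowerPC code sequence the patch generates):
      for some 64-bit integers, rounding through an intermediate double yields a
      different float than converting directly.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          /* 2^63 + 2^39 + 1 needs all 64 bits, so both paths must round. */
          uint64_t x = (1ull << 63) + (1ull << 39) + 1;

          float direct  = (float)x;          /* one correctly-rounded step      */
          float via_dbl = (float)(double)x;  /* rounds twice: 64 -> 53 -> 24 bits */

          printf("direct  = %f\n", direct);
          printf("via_dbl = %f\n", via_dbl);

          /* The intermediate double lands exactly halfway between two floats,
             so the second rounding (ties-to-even) goes the other way than the
             single correctly-rounded conversion. */
          assert(direct != via_dbl);
          return 0;
      }
      ```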
    • Temporarily revert the TargetTransform changes. · d6d9ccca
      Bob Wilson authored
      The TargetTransform changes are breaking LTO bootstraps of clang.  I am
      working with Nadav to figure out the problem, but I am reverting it for now
      to get our buildbots working.
      
      This reverts svn commits: 165665 165669 165670 165786 165787 165997
      and I have also reverted clang svn 165741
      
      llvm-svn: 166168
    • Add conditional branch instructions and their patterns. · 6743924a
      Reed Kotler authored
      llvm-svn: 166134
  3. Oct 17, 2012
  4. Oct 16, 2012
    • Support v8f32 to v8i8/v8i16 conversion through custom lowering · 02ca3454
      Michael Liao authored
      - Add custom FP_TO_SINT on v8i16 (and on v8i8, which is legalized as v8i16
        due to vector element-wise widening) to reduce DAG combiner work and the
        overhead it adds in the X86 backend.
      
      llvm-svn: 166036
    • This patch addresses PR13949. · 48081cad
      Bill Schmidt authored
      For the PowerPC 64-bit ELF Linux ABI, aggregates of size less than 8
      bytes are to be passed in the low-order bits ("right-adjusted") of the
      doubleword register or memory slot assigned to them.  A previous patch
      addressed this for aggregates passed in registers.  However, small
      aggregates passed in the overflow portion of the parameter save area are
      still being passed left-adjusted.
      
      The fix is made in PPCTargetLowering::LowerCall_Darwin_Or_64SVR4 on the
      caller side, and in PPCTargetLowering::LowerFormalArguments_64SVR4 on
      the callee side.  The main fix on the callee side simply extends
      existing logic for 1- and 2-byte objects to 1- through 7-byte objects,
      and corrects a constant left over from 32-bit code.  There is also a
      fix to a bogus calculation of the offset to the following argument in
      the parameter save area.
      
      On the caller side, again a constant left over from 32-bit code is
      fixed.  Additionally, some code for 1, 2, and 4-byte objects is
      duplicated to handle the 3, 5, 6, and 7-byte objects for SVR4 only.  The
      LowerCall_Darwin_Or_64SVR4 logic is getting fairly convoluted trying to
      handle both ABIs, and I propose to separate this into two functions in a
      future patch, at which time the duplication can be removed.
      
      The patch adds a new test (structsinmem.ll) to demonstrate correct
      passing of structures of all seven sizes.  Eight dummy parameters are
      used to force these structures to be in the overflow portion of the
      parameter save area.
      
      As a side effect, this corrects the case when aggregates passed in
      registers are saved into the first eight doublewords of the parameter
      save area:  Previously they were stored left-justified, and now are
      properly stored right-justified.  This requires changing the expected
      output of existing test case structsinregs.ll.
      
      llvm-svn: 166022
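      As an illustration of the right-adjustment rule (a hypothetical helper, not
      the PPCTargetLowering code): a small aggregate placed right-adjusted in its
      8-byte slot starts at byte offset 8 - size within the doubleword, rather
      than at offset 0 as with left-adjusted passing.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Byte offset of a right-adjusted aggregate within its 8-byte slot. */
      static unsigned right_adjusted_offset(unsigned size) {
          return size < 8 ? 8 - size : 0;
      }

      int main(void) {
          /* A 3-byte struct occupies bytes 5..7 of its doubleword,
             not bytes 0..2 as with left-adjusted passing. */
          for (unsigned size = 1; size <= 7; ++size)
              printf("size %u -> offset %u\n", size, right_adjusted_offset(size));
          assert(right_adjusted_offset(3) == 5);
          assert(right_adjusted_offset(7) == 1);
          return 0;
      }
      ```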
    • Issue: · e59a920b
      Stepan Dyatkovskiy authored
      The stack is formed improperly for long structures passed as byval arguments
      in EABI mode.
      
      If we consult the AAPCS, we find the following statements:
      
      A: "If the argument requires double-word alignment (8-byte), the NCRN (Next
      Core Register Number) is rounded up to the next even register number." (5.5
      Parameter Passing, Stage C, C.3).
      
      B: "The alignment of an aggregate shall be the alignment of its most-aligned
      component." (4.3 Composite Types, 4.3.1 Aggregates).
      
      So if we have a structure of doubles (9 double fields) and 3 unused core
      registers (r1, r2, r3), the caller should use only the r2 and r3 registers.
      Currently the set r1, r2, r3 is used, which is invalid.
      
      The callee VA routine should also use only the r2 and r3 registers. All is OK
      here: this behaviour is achieved by rounding up the SP address with ADD+BFC
      operations.
      
      Fix:
      Main fix is in ARMTargetLowering::HandleByVal: if AAPCS mode and 8-byte
      alignment are detected, we skip the odd register.
      
      P.S.:
      I also improved the LDRB_POST_IMM regression test, since the ldrb instruction
      will no longer be generated by the current regression test after this patch.
      
      llvm-svn: 166018
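      A small C sketch of the NCRN rounding rule quoted above (a hypothetical
      helper, not the ARMTargetLowering code): for an argument requiring 8-byte
      alignment, the Next Core Register Number is rounded up to the next even
      register, so with r1-r3 free only r2/r3 get used.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* AAPCS 5.5, Stage C, C.3: round the Next Core Register Number up to an
         even register for arguments requiring double-word (8-byte) alignment. */
      static unsigned round_ncrn(unsigned ncrn, unsigned align) {
          return (align == 8) ? (ncrn + 1u) & ~1u : ncrn;
      }

      int main(void) {
          /* r0 holds another argument, so NCRN is 1; an 8-byte-aligned byval
             struct must then start at r2, leaving r1 unused. */
          printf("NCRN 1, align 8 -> r%u\n", round_ncrn(1, 8));
          assert(round_ncrn(1, 8) == 2);  /* odd register skipped  */
          assert(round_ncrn(2, 8) == 2);  /* already even: no change */
          assert(round_ncrn(1, 4) == 1);  /* no rounding for 4-byte alignment */
          return 0;
      }
      ```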
    • Reapply r165661, Patch by Shuxin Yang <shuxin.llvm@gmail.com>. · 1705a999
      NAKAMURA Takumi authored
      Original message:
      
      The attached is the fix to radar://11663049. The optimization can be outlined by following rules:
      
         (select (x != c), e, c) -> (select (x != c), e, x),
         (select (x == c), c, e) -> (select (x == c), x, e)
      where <c> is an integer constant.
      
       The reason for this change is that, on x86, a conditional move from a
      constant needs two instructions, whereas a conditional move from a register
      needs only one instruction.
      
        While LowerSELECT() sounds like the most convenient place for this optimization, it turns out to be a bad place. The reason is that by replacing the constant <c> with a symbolic value, it obscures some instruction-combining opportunities which would otherwise be very easy to spot. For that reason, I have to postpone the change to the last instruction-combining phase.
      
        The change passes the test of "make check-all -C <build-root>/test" and "make -C project/test-suite/SingleSource".
      
      Original message since r165661:
      
      My previous change had a bug: I negated the condition code of a CMOV, and went ahead creating a new CMOV using the *ORIGINAL* condition code.
      
      llvm-svn: 166017
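      The rewrite rules above are easy to check in plain C (an illustrative sketch
      of why the transformation is sound, not the X86 combine itself): when the
      condition is false in (x != c ? e : c), x equals c, so substituting the
      register x for the constant c changes nothing.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* (select (x != c), e, c): conditional-move-from-constant form. */
      static int sel_const(int x, int c, int e) { return x != c ? e : c; }

      /* (select (x != c), e, x): conditional-move-from-register form. */
      static int sel_reg(int x, int c, int e)   { return x != c ? e : x; }

      int main(void) {
          /* Exhaustively check the equivalence over a small range. */
          for (int x = -4; x <= 4; ++x)
              for (int e = -4; e <= 4; ++e)
                  assert(sel_const(x, 2, e) == sel_reg(x, 2, e));
          puts("forms are equivalent");
          return 0;
      }
      ```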
    • Pass in the context to the Attributes::get method. · 4f69e148
      Bill Wendling authored
      llvm-svn: 166007
    • Add __builtin_setjmp/_longjmp support in X86 backend · 97bf363a
      Michael Liao authored
      - Besides being used in SjLj exception handling, __builtin_setjmp/__longjmp is
        also used as a light-weight replacement for setjmp/longjmp to implement
        continuations, user-level threading, etc. The support added in this patch
        ONLY addresses this usage and is NOT intended to support SjLj exception
        handling, as zero-cost DWARF exception handling is used by default in X86.
      
      llvm-svn: 165989
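      For context, a minimal sketch of the setjmp/longjmp idiom these builtins
      mirror (using the portable <setjmp.h> interface rather than the GCC
      builtins, whose jump-buffer layout is different and fixed):

      ```c
      #include <assert.h>
      #include <setjmp.h>
      #include <stdio.h>

      static jmp_buf env;
      static volatile int steps = 0;

      static void worker(void) {
          steps = steps + 1;
          longjmp(env, 42);   /* non-local jump back to the setjmp site */
          /* not reached */
      }

      int main(void) {
          if (setjmp(env) == 0) {  /* 0 on direct return, nonzero after longjmp */
              worker();
          }
          printf("steps = %d\n", steps);
          assert(steps == 1);      /* worker ran once and was cut short */
          return 0;
      }
      ```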
  5. Oct 15, 2012
  6. Oct 14, 2012
  7. Oct 13, 2012
  8. Oct 12, 2012
  9. Oct 11, 2012
    • Revert 165732 for further review. · 0c61134d
      Micah Villmow authored
      llvm-svn: 165747
    • Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly. · 08318973
      Micah Villmow authored
      
      llvm-svn: 165726
    • This patch addresses PR13947. · 22162470
      Bill Schmidt authored
      For function calls on the 64-bit PowerPC SVR4 target, each parameter
      is mapped to as many doublewords in the parameter save area as
      necessary to hold the parameter.  The first 13 non-varargs
      floating-point values are passed in registers; any additional
      floating-point parameters are passed in the parameter save area.  A
      single-precision floating-point parameter (32 bits) must be mapped to
      the second (rightmost, low-order) word of its assigned doubleword
      slot.
      
      Currently LLVM violates this ABI requirement by mapping such a
      parameter to the first (leftmost, high-order) word of its assigned
      doubleword slot.  This is internally self-consistent but will not
      interoperate correctly with libraries compiled with an ABI-compliant
      compiler.
      
      This patch corrects the problem by adjusting the parameter addressing
      on both sides of the calling convention.
      
      llvm-svn: 165714
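      A hedged sketch of the mapping rule (hypothetical code, not the PPC lowering
      itself): a 4-byte single-precision value stored right-adjusted in its 8-byte
      slot sits at byte offset 4, the second word of the doubleword, which is the
      low-order word on the big-endian 64-bit SVR4 target.

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          unsigned char slot[8] = {0};  /* one doubleword of the save area */
          float f = 1.5f;

          /* ABI-compliant placement: the 32-bit float occupies the second
             (rightmost, low-order on big-endian) word of the doubleword. */
          memcpy(slot + 4, &f, sizeof f);

          float back;
          memcpy(&back, slot + 4, sizeof back);
          printf("read back %g from offset 4\n", back);

          assert(back == 1.5f);
          /* The first word stays zero; the buggy left-adjusted (offset 0)
             store would have put the bits there instead. */
          assert(slot[0] == 0 && slot[1] == 0 && slot[2] == 0 && slot[3] == 0);
          return 0;
      }
      ```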