  Aug 08, 2013
    • Reflow for loop. · d25f7fc4
      Eric Christopher authored
      llvm-svn: 187954
    • Be more rigorous about the sizes of forms and attributes. · 31b0576b
      Eric Christopher authored
      llvm-svn: 187953
    • Reapply r185872 now that the address sanitizer has been changed to support this. · b80f9791
      Bill Wendling authored
      Original commit message:
      
      Stop emitting weak symbols into the "coal" sections.
      
      The Mach-O linker has been able to support the weak-def bit on any symbol for
      quite a while now. The compiler however continued to place these symbols into a
      "coal" section, which required the linker to map them back to the base section
      name.
      
      Replace the sections like this:
      
        __TEXT/__textcoal_nt   instead use  __TEXT/__text
        __TEXT/__const_coal    instead use  __TEXT/__const
        __DATA/__datacoal_nt   instead use  __DATA/__data
      
      <rdar://problem/14265330>
      
      llvm-svn: 187939
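The replacement described above is a fixed table. A minimal C sketch of that mapping (the function name `remap_coal_section` is hypothetical, not LLVM's API; it only mirrors the three pairs listed in the commit message):

```c
#include <string.h>

/* Weak-def symbols move out of the "coal" sections and back into the
 * corresponding base sections. Any other section name is left untouched. */
const char *remap_coal_section(const char *section) {
    if (strcmp(section, "__textcoal_nt") == 0) return "__text";
    if (strcmp(section, "__const_coal") == 0)  return "__const";
    if (strcmp(section, "__datacoal_nt") == 0) return "__data";
    return section;
}
```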
    • Add ISD::FROUND for libm round() · 171817ee
      Hal Finkel authored
      All libm floating-point rounding functions, except for round(), had their own
      ISD nodes. Recent PowerPC cores have an instruction for round(), and so here I'm
      adding ISD::FROUND so that round() can be custom lowered as well.
      
For the most part, this is straightforward. I've added an intrinsic
and a matching ISD node, just like those for nearbyint() and friends. I've
named the SelectionDAG pattern frnd (because ISD::FP_ROUND has already
claimed fround).
      
      This will be used by the PowerPC backend in a follow-up commit.
      
      llvm-svn: 187926
  Aug 06, 2013
    • Refactor isInTailCallPosition handling · a4415854
      Tim Northover authored
This change came about primarily because of two issues in the existing code.
Neither of:
      
      define i64 @test1(i64 %val) {
        %in = trunc i64 %val to i32
        tail call i32 @ret32(i32 returned %in)
        ret i64 %val
      }
      
      define i64 @test2(i64 %val) {
        tail call i32 @ret32(i32 returned undef)
  ret i64 42
      }
      
      should be tail calls, and the function sameNoopInput is responsible. The main
      problem is that it is completely symmetric in the "tail call" and "ret" value,
      but in reality different things are allowed on each side.
      
      For these cases:
      1. Any truncation should lead to a larger value being generated by "tail call"
         than needed by "ret".
      2. Undef should only be allowed as a source for ret, not as a result of the
         call.
      
      Along the way I noticed that a mismatch between what this function treats as a
      valid truncation and what the backends see can lead to invalid calls as well
      (see x86-32 test case).
      
      This patch refactors the code so that instead of being based primarily on
      values which it recurses into when necessary, it starts by inspecting the type
      and considers each fundamental slot that the backend will see in turn. For
      example, given a pathological function that returned {{}, {{}, i32, {}}, i32}
      we would consider each "real" i32 in turn, and ask if it passes through
      unchanged. This is much closer to what the backend sees as a result of
      ComputeValueVTs.
      
      Aside from the bug fixes, this eliminates the recursion that's going on and, I
      believe, makes the bulk of the code significantly easier to understand. The
      trade-off is the nasty iterators needed to find the real types inside a
      returned value.
      
      llvm-svn: 187787
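The two asymmetric rules above can be modeled in a few lines of C. This is a hypothetical illustration (the name `valid_tail_forwarding` and the bit-width parameters are mine, not LLVM's code): a call result may be forwarded to `ret` only if any truncation narrows from the call side to the ret side, and undef may appear only as ret's source, never as the call's result.

```c
#include <stdbool.h>

/* Rule 1: the tail call must generate at least as many bits as "ret"
 * needs (truncation only in that direction).
 * Rule 2: undef may feed "ret" directly, but a call whose result is
 * undef cannot stand in for the returned value. */
bool valid_tail_forwarding(unsigned call_bits, unsigned ret_bits,
                           bool call_result_is_undef) {
    if (call_result_is_undef)
        return false;              /* rule 2: undef can't come from the call */
    return call_bits >= ret_bits;  /* rule 1: call must cover ret's bits */
}
```

Under this model, test1 above fails rule 1 (the call produces an i32 but ret needs an i64) and test2 fails rule 2.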
    • NAKAMURA Takumi · e359e856
    • Recommit previous cleanup with a fix for C++98 ambiguity. · 0062f2ed
      Eric Christopher authored
      llvm-svn: 187752
    • TargetLowering: Add getVectorIdxTy() function v2 · d42c5949
      Tom Stellard authored
This virtual function can be implemented by targets to specify the type
to use for the index operand of INSERT_VECTOR_ELT, EXTRACT_VECTOR_ELT,
INSERT_SUBVECTOR, and EXTRACT_SUBVECTOR. The default implementation returns
the result from TargetLowering::getPointerTy().
      
      The previous code was using TargetLowering::getPointerTy() for vector
      indices, because this is guaranteed to be legal on all targets.  However,
      using TargetLowering::getPointerTy() can be a problem for targets with
      pointer sizes that differ across address spaces.  On such targets,
      when vectors need to be loaded or stored to an address space other than the
      default 'zero' address space (which is the address space assumed by
TargetLowering::getPointerTy()), having an index that
is a different size than the pointer can lead to inefficient
pointer calculations (e.g. 64-bit adds for a 32-bit address space).
      
      There is no intended functionality change with this patch.
      
      llvm-svn: 187748
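The idea behind the hook can be sketched in C. This is a hypothetical model (the function name and parameters are illustrative, not LLVM's API): by default the index width is the address-space-0 pointer width, as getPointerTy() returns, but a target whose pointer sizes differ across address spaces can override it to match the space actually being addressed.

```c
/* Choose the bit width for a vector-index operand.
 * default_ptr_bits: pointer width of address space 0 (getPointerTy()).
 * as_ptr_bits:      pointer width of the address space being accessed.
 * target_overrides: nonzero if the target implements the override. */
unsigned vector_idx_bits(unsigned default_ptr_bits, unsigned as_ptr_bits,
                         int target_overrides) {
    return target_overrides ? as_ptr_bits : default_ptr_bits;
}
```

For example, on a target with 64-bit default pointers but a 32-bit address space, the override avoids the 64-bit index arithmetic described above.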
    • Revert "Use existing builtin hashing functions to make this routine more" · 432c99af
      Eric Christopher authored
      This reverts commit r187745.
      
      llvm-svn: 187747
    • Use existing builtin hashing functions to make this routine more · d728355a
      Eric Christopher authored
      simple.
      
      llvm-svn: 187745
  Jul 31, 2013
    • Fix crashing on invalid inline asm with matching constraints. · e6656ac8
      Eric Christopher authored
      For a testcase like the following:
      
       typedef unsigned long uint64_t;
      
       typedef struct {
         uint64_t lo;
         uint64_t hi;
       } blob128_t;
      
       void add_128_to_128(const blob128_t *in, blob128_t *res) {
         asm ("PAND %1, %0" : "+Q"(*res) : "Q"(*in));
       }
      
      where we'll fail to allocate the register for the output constraint,
      our matching input constraint will not find a register to match,
      and could try to search past the end of the current operands array.
      
      On the idea that we'd like to attempt to keep compilation going
      to find more errors in the module, change the error cases when
      we're visiting inline asm IR to return immediately and avoid
      trying to create a node in the DAG. This leaves us with only
      a single error message per inline asm instruction, but allows us
      to safely keep going in the general case.
      
      llvm-svn: 187470
    • Reflow this to be easier to read. · 029af150
      Eric Christopher authored
      llvm-svn: 187459
  Jul 29, 2013
    • Use proper section suffix for COFF weak symbols · 7fdaee8f
      Nico Rieck authored
32-bit symbols have "_" as a global prefix, but this prefix is ignored
when forming the names of COMDAT sections. The current behavior assumes
the prefix is always present, which is not the case for 64-bit, so those
names are truncated.
      
      llvm-svn: 187356
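The fix can be illustrated with a short C sketch. This is hypothetical (the function name and the ".text$sym" suffix scheme are illustrative, not LLVM's actual code): when deriving the section name from a symbol, the leading "_" must be stripped only when it is actually present, rather than unconditionally skipping the first character.

```c
#include <stdio.h>

/* Form a COMDAT-style section name from a symbol name. On 32-bit COFF
 * the symbol carries a leading "_" that must be dropped; 64-bit symbols
 * have no prefix, so skipping the first character unconditionally would
 * truncate their names (the bug described above). */
const char *comdat_section_name(const char *sym) {
    static char buf[128];
    if (sym[0] == '_')
        sym++;  /* strip the global prefix only if it is there */
    snprintf(buf, sizeof buf, ".text$%s", sym);
    return buf;
}
```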