- Dec 06, 2012
-
NAKAMURA Takumi authored
llvm-svn: 169504
-
Kostya Serebryany authored
llvm-svn: 169503
-
Dmitry Vyukov authored
llvm-svn: 169502
-
Dmitry Vyukov authored
llvm-svn: 169501
-
Daniel Jasper authored
llvm-svn: 169500
-
Kostya Serebryany authored
llvm-svn: 169499
-
Matthew Curtis authored
llvm-svn: 169498
-
Kostya Serebryany authored
llvm-svn: 169497
-
Kostya Serebryany authored
llvm-svn: 169496
-
Matthew Curtis authored
paths - Inherit from Linux rather than ToolChain - Override AddClangSystemIncludeArgs and AddClangCXXStdlibIncludeArgs to properly set include paths. llvm-svn: 169495
-
Dmitry Vyukov authored
llvm-svn: 169494
-
Dmitry Vyukov authored
With this change reports say what mutexes the threads hold around the racy memory accesses. llvm-svn: 169493
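For illustration only (my example, not from the commit): a minimal racy program where each thread holds a different mutex around the conflicting writes. Built with -fsanitize=thread, ThreadSanitizer still flags the race, and with this change the report can also show that one access was made under m1 and the other under m2.

    #include <mutex>
    #include <thread>

    int global;          // racy: written under two different mutexes
    std::mutex m1, m2;

    void writer1() { std::lock_guard<std::mutex> l(m1); global = 1; }
    void writer2() { std::lock_guard<std::mutex> l(m2); global = 2; }

    int main() {
      std::thread t1(writer1), t2(writer2);
      t1.join();
      t2.join();
    }

Seeing the held mutexes in the report makes the lock mismatch (m1 vs. m2) obvious at a glance.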
-
NAKAMURA Takumi authored
clang/test/CodeGen/2008-01-07-UnusualIntSize.c: Add triple x86_64. It doesn't assume 32-bit target, for now. llvm-svn: 169492
-
Evgeniy Stepanov authored
llvm-svn: 169491
-
Evgeniy Stepanov authored
Instead of unconditionally storing the origin with every application store, only do this when the shadow of the stored value is != 0. This change also delays instrumentation of stores until after the walk over the function's instructions, because adding new basic blocks confuses InstVisitor. We only keep one origin value per 4 bytes of application memory. This change fixes the bug where a store of a single clean byte wiped the origin for the whole 4-byte area. Since stores of uninitialized values are relatively uncommon, this change improves the performance of track-origins mode by 5% (median) and by up to 47% on SPEC benchmarks. llvm-svn: 169490
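A rough sketch of the new store instrumentation, written as plain C++ rather than the IR MSan actually emits; shadow_for and origin_for are made-up helpers standing in for MSan's shadow/origin address arithmetic:

    #include <cstdint>

    // Hypothetical stand-ins for MSan's shadow/origin address computation.
    static uint32_t g_shadow[1024], g_origin[1024];
    static uint32_t *shadow_for(void *p) {
      return &g_shadow[reinterpret_cast<std::uintptr_t>(p) % 1024];
    }
    static uint32_t *origin_for(void *p) {
      return &g_origin[(reinterpret_cast<std::uintptr_t>(p) >> 2) % 1024];
    }

    // Instrumented 4-byte store: the shadow is always propagated, but the
    // origin slot (one per 4 bytes of application memory) is written only
    // when the stored value's shadow is non-zero, i.e. it may be uninitialized.
    void instrumented_store(uint32_t *addr, uint32_t value,
                            uint32_t value_shadow, uint32_t value_origin) {
      *addr = value;                      // the original application store
      *shadow_for(addr) = value_shadow;   // unconditional shadow update
      if (value_shadow != 0)              // skip the origin for fully clean values
        *origin_for(addr) = value_origin;
    }

The conditional is also what fixes the bug mentioned above: a clean single-byte store no longer overwrites the origin slot shared by its 4-byte neighborhood.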
-
Chandler Carruth authored
generally support the C++11 memory model requirements for bitfield accesses by relying more heavily on LLVM's memory model. The primary change this introduces is to move from a manually aligned and strided access pattern across the bits of the bitfield to a much simpler lump access of all bits in the bitfield, followed by math to extract the bits relevant for the particular field. This simplifies the code significantly, but relies on LLVM to intelligently lower these integer accesses. I have tested LLVM's lowering both synthetically and in benchmarks. The lowering appears to be functional, and there are no really significant performance regressions. Different code patterns accessing bitfields will vary in how this impacts them. The only real regressions I'm seeing are a few patterns where LLVM's code generation for loads that feed directly into a mask operation doesn't take advantage of the x86 ability to do a smaller load and a cheap zero-extension. This doesn't regress any benchmark in the nightly test suite on my box past the noise threshold, but my box is quite noisy. I'll be watching the LNT numbers, and will look into further improvements to the LLVM lowering as needed. llvm-svn: 169489
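Conceptually, the "lump access followed by math" scheme looks like the sketch below (my illustration, not the generated IR, and it assumes the usual x86/Itanium layout where earlier fields occupy the low bits): the whole storage unit is loaded once, and the field is extracted with shifts and masks, leaving it to LLVM to narrow the access when profitable.

    #include <cstdint>
    #include <cstring>

    struct S {
      unsigned a : 3;
      unsigned b : 7;
      unsigned c : 22;
    };

    // Lump access: read the full 32-bit storage unit, then do the bit math.
    unsigned load_b(const S *s) {
      uint32_t bits;
      std::memcpy(&bits, s, sizeof(bits));  // one wide load of all the bits
      return (bits >> 3) & 0x7Fu;           // skip 'a' (3 bits), mask b's 7 bits
    }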
-
Daniel Jasper authored
Also, small fix for handling the first token correctly. Review: http://llvm-reviews.chandlerc.com/D177 llvm-svn: 169488
-
Andy Gibbs authored
llvm-svn: 169487
-
Bill Wendling authored
s/getLowerBoundDefault/getDefaultLowerBound/ for consistency. Also put the more natural check first in the if-then statement. llvm-svn: 169486
-
Bill Wendling authored
llvm-svn: 169485
-
Bill Wendling authored
Some languages, e.g. Ada and Pascal, allow you to specify that the array bounds are different from the default (1 in these cases). If we have a lower bound that's non-default, then we emit the lower bound. We also calculate the correct upper bound in those cases. llvm-svn: 169484
-
Craig Topper authored
Remove intrinsic specific instructions for (V)MOVQUmr with patterns pointing to the normal instructions. llvm-svn: 169482
-
Ted Kremenek authored
Use the BlockDecl captures list to infer the direct captures for a BlockDataRegion. Fixes <rdar://problem/12415065>. We still need to do a recursive walk to determine all static/global variables referenced by a block, which is needed for region invalidation. llvm-svn: 169481
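A small illustration of the distinction (my example, compiled with Clang's -fblocks): locals appear in the BlockDecl's capture list, while globals and statics are only reachable through the recursive walk mentioned above.

    static int g_counter;        // referenced by the block, but not a capture
    void demo() {
      int local = 42;            // a direct capture, listed on the BlockDecl
      void (^blk)(void) = ^{
        g_counter += local;      // 'local' is captured by value; 'g_counter' is not
      };
      blk();
    }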
-
Ted Kremenek authored
This is a nice conceptual cleanup. llvm-svn: 169480
-
Ted Kremenek authored
llvm-svn: 169479
-
Ted Kremenek authored
llvm-svn: 169478
-
Craig Topper authored
llvm-svn: 169477
-
Richard Smith authored
http://stackoverflow.com/questions/13521163
Don't require that, during template deduction, a template specialization type as a function parameter has at least as many template arguments as one used in a function argument (not even if the argument has been resolved to an exact type); the additional parameters might be provided by default template arguments in the template. We don't need this check, since we now implement [temp.deduct.call]p4 with an additional check after deduction. llvm-svn: 169475
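A minimal sketch of the pattern this is about (my example, assuming it captures the scenario): the function parameter spells out fewer template arguments than the argument's type carries, with the remainder supplied by default template arguments.

    template <typename T, typename U = int>
    struct Pair {};

    // 'Pair<T>' names only one argument; U comes from its default, so deduction
    // against a Pair<char, int> argument should succeed with T = char rather
    // than being rejected up front for having "too few" arguments.
    template <typename T>
    void take(Pair<T> p) {}

    int main() {
      Pair<char> p;   // really Pair<char, int>
      take(p);        // deduces T = char
    }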
-
Kostya Serebryany authored
llvm-svn: 169474
-
Richard Smith authored
Don't use dyn_cast on a Type* which might not be canonical. Fixes an extremely obscure record layout bug. llvm-svn: 169467
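A sketch of the underlying pitfall using Clang's AST API (my illustration of the general issue, not the record-layout code this commit touches): dyn_cast inspects only the node it is handed, so it misses sugared types, whereas getAs<> looks through the sugar.

    #include "clang/AST/Type.h"
    using namespace clang;

    const RecordType *getRecord(QualType T) {
      // Risky when T is sugared (a typedef, elaborated type, etc.): dyn_cast
      // succeeds only if the pointer already is a RecordType node.
      //   const RecordType *RT = dyn_cast<RecordType>(T.getTypePtr());

      // Safe whether or not T is canonical: getAs<> desugars first.
      return T->getAs<RecordType>();
    }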
-
Jason Molenda authored
RegisterIsCalleeSaved. Add ebp back to the list of registers that are callee saved. <rdar://problem/12817918> llvm-svn: 169466
-
Greg Clayton authored
rdar://problem/12560257
Fixed zero-sized arrays to work correctly. This will only happen once we get a clang that emits correct debug info for zero-sized arrays. For now I have marked TestStructTypes.py as an expected failure. llvm-svn: 169465
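For reference, the zero-sized-array idiom in question (a GNU extension; my minimal example) looks like this; lldb needs the compiler to describe the array's zero extent correctly in the debug info before the fix can take effect.

    struct Packet {
      int length;
      char payload[0];   // zero-sized trailing array (GNU extension)
    };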
-
Evan Cheng authored
llvm-svn: 169464
-
NAKAMURA Takumi authored
llvm/test/CodeGen/ARM/extload-knownzero.ll: Try to unbreak by adding -O0. I guess Chad expects fast-isel here. llvm-svn: 169463
-
NAKAMURA Takumi authored
It broke many builders. llvm-svn: 169462
-
Sean Callanan authored
of the "self"/"this" pointer for the current stack frame before wrapping expressions in C++ or Objective-C methods. This works around bad debug info where the compiler emits a "this" or "self" but doesn't give any way to find its location. <rdar://problem/12809985> llvm-svn: 169461
-
Chad Rosier authored
rdar://12821569 llvm-svn: 169460
-
Evan Cheng authored
and extloads. If they are implemented as zero-extend, or implicitly zero-extend, then this can enable more demanded-bits optimizations. e.g.

define void @foo(i16* %ptr, i32 %a) nounwind {
entry:
  %tmp1 = icmp ult i32 %a, 100
  br i1 %tmp1, label %bb1, label %bb2
bb1:
  %tmp2 = load i16* %ptr, align 2
  br label %bb2
bb2:
  %tmp3 = phi i16 [ 0, %entry ], [ %tmp2, %bb1 ]
  %cmp = icmp ult i16 %tmp3, 24
  br i1 %cmp, label %bb3, label %exit
bb3:
  call void @bar() nounwind
  br label %exit
exit:
  ret void
}

This compiled to the following before the change:

  push {lr}
  mov r2, #0
  cmp r1, #99
  bhi LBB0_2
@ BB#1:                 @ %bb1
  ldrh r2, [r0]
LBB0_2:                 @ %bb2
  uxth r0, r2
  cmp r0, #23
  bhi LBB0_4
@ BB#3:                 @ %bb3
  bl _bar
LBB0_4:                 @ %exit
  pop {lr}
  bx lr

The uxth is not needed since ldrh implicitly zero-extends the high bits. With this change it's eliminated. rdar://12771555 llvm-svn: 169459
-
NAKAMURA Takumi authored
llvm-svn: 169458
-
Fariborz Jahanian authored
<declaration> XML tag. // rdar://12378714 llvm-svn: 169457
-