- May 07, 2012
- Eric Christopher: Patch by Jack Carter. llvm-svn: 156278
- Eric Christopher: … non-floating point general registers allow 8 and 16-bit elements. Patch by Jack Carter. llvm-svn: 156277
- Jim Grosbach (llvm-svn: 156276)
- Aaron Ballman (llvm-svn: 156275)
- Richard Smith: … in-class initializer for one of its fields. Value-initialization of such a type should use the in-class initializer! The former was just a bug; the latter is a (reported) standard defect. llvm-svn: 156274
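A minimal illustration of the value-initialization behavior described above (the type and field names are my own, not the commit's testcase):

  ```cpp
  #include <cassert>

  // Sketch of the scenario: a type with an explicitly-defaulted
  // default constructor and an in-class initializer for a field.
  struct S {
    S() = default;   // explicitly-defaulted default constructor
    int x = 42;      // in-class initializer
  };

  int main() {
    S s{};              // value-initialization
    assert(s.x == 42);  // must use the in-class initializer
    return 0;
  }
  ```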
- David Blaikie (llvm-svn: 156273)
- Aaron Ballman: Detecting illegal instantiations of abstract types when using a function-style cast. Fixed PR12658. llvm-svn: 156271
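For context, the kind of code this check now rejects looks roughly like the following (type names are mine, not from the commit; the ill-formed line is shown commented out so the sketch still compiles):

  ```cpp
  #include <cassert>

  struct Shape {
    virtual int sides() const = 0;  // pure virtual: Shape is abstract
    virtual ~Shape() {}
  };

  struct Triangle : Shape {
    int sides() const override { return 3; }
  };

  int main() {
    // Shape();  // ill-formed: a function-style cast would
    //           // instantiate an abstract type (PR12658)
    Triangle t;
    assert(t.sides() == 3);
    return 0;
  }
  ```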
- Jordy Rose: [analyzer] Reduce parallel code paths in SimpleSValBuilder::evalBinOpNN, and handle mixed-type operations more generally.

  The logical change is that the integers in SymIntExprs may not have the same type as the symbols they are paired with. This was already the case with taint-propagation expressions created by SValBuilder::makeSymExprValNN, but I think those integers may never have been used. SimpleSValBuilder should be able to handle mixed-integer-type SymIntExprs fine now, though, and the constraint managers were already being defensive (though not entirely correct). All existing tests pass.

  The logic in evalBinOpNN has been simplified so that conversion is done as late as possible. As a result, most of the switch cases have been reduced to do the minimal amount of work, delegating to another case when they can by substituting ConcreteInts and (as before) reversing the left and right arguments when useful.

  Comparisons require special handling in two places (building SymIntExprs and evaluating constant-constant operations) because we don't /know/ the best type for comparing the two values. I've approximated the rules in Sema [C99 6.3.1.8], but it'd be nice to refactor Sema's actual algorithm into ASTContext. This is also groundwork for handling mixed-type constraints better than we do now. llvm-svn: 156270
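As a reminder of the C99 6.3.1.8 "usual arithmetic conversions" the commit approximates: when operands of different integer types are compared, both are first converted to a common type, which can be surprising for mixed signedness. A small illustration (my own example, not from the patch):

  ```cpp
  #include <cassert>
  #include <climits>

  int main() {
    int a = -1;
    unsigned b = 1;
    // Usual arithmetic conversions: 'a' is converted to unsigned,
    // becoming UINT_MAX, so the comparison goes the "wrong" way.
    assert((unsigned)a == UINT_MAX);
    assert(a > b);  // true: UINT_MAX > 1 after conversion
    return 0;
  }
  ```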
- May 06, 2012
- Rafael Espindola: … for having a uniform logic for adding attributes to a decl. This in turn is needed to fix the FIXME:

  ```cpp
  // FIXME: This needs to happen before we merge declarations. Then,
  // let attribute merging cope with attribute conflicts.
  ProcessDeclAttributes(S, NewFD, D,
                        /*NonInheritable=*/false, /*Inheritable=*/true);
  ```

  The idea is that mergeAvailabilityAttr will become a method. Once attributes are processed before merging, it will be called from handleAvailabilityAttr to handle multiple attributes in one decl:

  ```cpp
  void f(int) __attribute__((availability(ios,deprecated=3.0),
                             availability(ios,introduced=2.0)));
  ```

  and from SemaDecl.cpp to handle multiple decls:

  ```cpp
  void f(int) __attribute__((availability(ios,deprecated=3.0)));
  void f(int) __attribute__((availability(ios,introduced=2.0)));
  ```

  As a bonus, use the new structure to diagnose incompatible availability attributes added to different decls (see included testcases). llvm-svn: 156269
- Craig Topper: Use MVT instead of EVT as the argument to all the shuffle decode functions. Simplify some of the decode functions. llvm-svn: 156268
- Craig Topper: Add VPERMQ/VPERMPD to the list of target specific shuffles that can be looked through for DAG combine purposes. llvm-svn: 156266
- Craig Topper (llvm-svn: 156265)
- Filipe Cabecinhas (llvm-svn: 156264)
- Jim Grosbach: Previously, if an instruction definition was missing the mnemonic, the next line would just assert(). Issue a real diagnostic instead. llvm-svn: 156263
- Rafael Espindola (llvm-svn: 156261)
- Chris Lattner (llvm-svn: 156260)
- Aaron Ballman (llvm-svn: 156259)
- Benjamin Kramer: The primitive conservative heuristic seems to give a slight overall improvement while not regressing anything. Make it available for wider testing. If you notice any speed regressions (or significant code size regressions), let me know! llvm-svn: 156258
- Jakub Staszak (llvm-svn: 156257)
- Hongbin Zheng (llvm-svn: 156256)
- Hongbin Zheng (llvm-svn: 156255)
- Hongbin Zheng (llvm-svn: 156254)
- NAKAMURA Takumi: It caused test/Index/index-many-call-ops.cpp to fail in stage2 c-index-test on selfhosting i686-cygwin and x86_64-linux since r156229 (reverting making RecursiveASTVisitor data recursive). llvm-svn: 156253
- NAKAMURA Takumi (llvm-svn: 156252)
- NAKAMURA Takumi: FIXME: GetRandomNumber() is not implemented in Win32. llvm-svn: 156251
- Richard Smith: … in the same class, even if they convert to the same type. Fixes PR12712. llvm-svn: 156247
- Chris Lattner: … of work for a drive-by fix :) llvm-svn: 156246
- Chris Lattner (llvm-svn: 156245)
- Chris Lattner (llvm-svn: 156244)
- May 05, 2012
- Chris Lattner: refactor some code to expose column numbers more and make diagnostic printing slightly more efficient. llvm-svn: 156243
- Jim Grosbach (llvm-svn: 156241)
- Daniel Dunbar (llvm-svn: 156240)
- Daniel Dunbar (llvm-svn: 156239)
- Daniel Dunbar: Just use sys::Process::GetRandomNumber instead of having two poor implementations. This is ~70 times (!) faster on my OS X machine. llvm-svn: 156238
- Daniel Dunbar: Primitive API, but we rarely have need for random numbers. llvm-svn: 156237
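A hedged sketch of what a Unix-side helper in this style might look like, reading from /dev/urandom. The actual llvm::sys::Process::GetRandomNumber implementation may differ; this is only an illustration of the idea:

  ```cpp
  #include <cassert>
  #include <cstdio>

  // Hypothetical GetRandomNumber()-style helper on Unix. The real
  // LLVM implementation may differ; this is only a sketch.
  static unsigned GetRandomNumber() {
    unsigned Result = 0;
    if (FILE *F = std::fopen("/dev/urandom", "rb")) {
      if (std::fread(&Result, sizeof(Result), 1, F) != 1)
        Result = 0;
      std::fclose(F);
    }
    return Result;
  }

  int main() {
    unsigned A = GetRandomNumber();
    unsigned B = GetRandomNumber();
    unsigned C = GetRandomNumber();
    // Three draws are overwhelmingly unlikely to all be zero.
    assert(A != 0 || B != 0 || C != 0);
    return 0;
  }
  ```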
- Daniel Dunbar (llvm-svn: 156236)
- Benjamin Kramer: We might just use symlinks here, but I'm afraid of possible portability issues. llvm-svn: 156235
- Benjamin Kramer: This came up when a change in block placement formed a cmov and slowed down a hot loop by 50%:

  ```
  ucomisd (%rdi), %xmm0
  cmovbel %edx, %esi
  ```

  cmov is a really bad choice in this context because it doesn't get branch prediction. If we emit it as a branch, an out-of-order CPU can do a better job (if the branch is predicted right) and avoid waiting for the slow load+compare instruction to finish. Of course it won't help if the branch is unpredictable, but those are really rare in practice.

  This patch uses a dumb conservative heuristic: it turns all cmovs that have one use and a direct memory operand into branches. cmovs usually save some code size, so we disable the transform in -Os mode. In-order architectures are unlikely to benefit as well; those are covered by the "predictableSelectIsExpensive" flag.

  It would be better to reuse branch probability info here, but BPI doesn't support select instructions currently. It would make sense to use the same heuristics as the if-converter pass, which does the opposite direction of this transform.

  Test suite shows a small improvement here and there on corei7-level machines, but the actual results depend a lot on the used microarchitecture. The transformation is currently disabled by default and available by passing the -enable-cgp-select2branch flag to the code generator.

  Thanks to Chandler for the initial test case and to Evan Cheng for providing me with comments and test-suite numbers that were more stable than mine :) llvm-svn: 156234
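The source pattern this heuristic targets can be sketched in C++ (the function and names are mine, not from the patch): a select with one use whose condition depends on a memory operand.

  ```cpp
  #include <cassert>

  // Illustrative pattern: without the transform, this condition
  // typically lowers to a load + ucomisd + cmov; with
  // -enable-cgp-select2branch the select becomes a compare-and-branch,
  // letting the CPU speculate past the slow load when the branch
  // predicts well.
  static int pick(const double *p, double x, int a, int b) {
    return x <= *p ? a : b;
  }

  int main() {
    double limit = 10.0;
    assert(pick(&limit, 5.0, 1, 2) == 1);
    assert(pick(&limit, 15.0, 1, 2) == 2);
    return 0;
  }
  ```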
- Benjamin Kramer: This will be used to determine whether it's profitable to turn a select into a branch when the branch is likely to be predicted. Currently enabled for everything but Atom on X86 and Cortex-A9 devices on ARM. I'm not entirely happy with the name of this flag; suggestions welcome ;) llvm-svn: 156233
- Benjamin Kramer (llvm-svn: 156232)