- Feb 03, 2009
-
Dan Gohman authored
is given, override the subtarget settings and enable 64-bit support. This restores the earlier behavior and fixes regressions on non-64-bit-capable x86-32 hosts. This isn't necessarily the best approach, but the most obvious alternative is to require -mcpu=x86-64 or -mattr=+64bit to be used with -march=x86-64 when the host doesn't have 64-bit support. That would be a little more consistent, but it's less convenient, and it has the practical drawback of requiring lots of test changes, so I opted for the above approach for now. llvm-svn: 63642
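A minimal sketch of the restored behavior (hypothetical helper, not the actual X86Subtarget code):

```cpp
// Hypothetical sketch: when -march=x86-64 is given, force 64-bit
// support on, overriding whatever host autodetection decided
// (e.g. on a 32-bit-only x86 host).
void applyMArchOverride(bool MArchIsX8664, bool &HasX86_64) {
  if (MArchIsX8664)
    HasX86_64 = true;  // restore the earlier always-on behavior
}
```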
-
Bill Wendling authored
created. Specifically, those BuildMIs which use "DebugLoc::getUnknownLoc()". I'll remove them soon. llvm-svn: 63584
-
Dan Gohman authored
SSE2; however, it's possible to disable SSE2, and the subtarget support code thinks that if 64-bit implies SSE2 and SSE2 is disabled, then 64-bit should also be disabled. Instead, just mark all the 64-bit subtargets as explicitly supporting SSE2. Also, move the code that makes -march=x86-64 enable 64-bit support by default so that it only applies when there is no explicit subtarget. If you need to specify a subtarget and you want 64-bit code, you'll need to select a subtarget that supports 64-bit code. llvm-svn: 63575
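A standalone sketch of the pitfall (illustrative pseudologic, not the real feature resolver):

```cpp
// Illustrative feature-resolution logic showing the problem: with the
// implication "64-bit requires SSE2", disabling SSE2 (-mattr=-sse2)
// also silently disables 64-bit support.
struct X86Features {
  bool HasX86_64;
  bool HasSSE2;
};

void resolveImplications(X86Features &F) {
  if (F.HasX86_64 && !F.HasSSE2)
    F.HasX86_64 = false;  // the surprising cascade described above
}
// The fix: mark every 64-bit subtarget as explicitly having SSE2,
// so this implication never fires against it.
```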
-
- Feb 02, 2009
-
Torok Edwin authored
Add an assert to check HasX86_64 status. llvm-svn: 63552
-
Torok Edwin authored
llvm-svn: 63542
-
Sanjiv Gupta authored
Made the common case, the default address space directive, non-virtual for performance reasons. Provide a single virtual interface for directives of all sizes in non-default address spaces. llvm-svn: 63521
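A minimal sketch of that split, with hypothetical names (the real hooks live in TargetAsmInfo):

```cpp
// Hypothetical sketch of the devirtualized design described: the
// default-address-space lookup is a plain inline call; only non-default
// address spaces go through a single virtual hook.
class AsmInfoSketch {
public:
  virtual ~AsmInfoSketch() {}

  const char *getData8bitsDirective(unsigned AddrSpace = 0) const {
    if (AddrSpace == 0)
      return "\t.byte\t";                 // common case: no virtual call
    return getASDirective(8, AddrSpace);  // rare case: target-overridable
  }

protected:
  // One virtual interface for all sizes in non-default address spaces.
  virtual const char *getASDirective(unsigned Size,
                                     unsigned AddrSpace) const {
    return 0;  // default: no special directive
  }
};
```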
-
Evan Cheng authored
llvm-svn: 63509
-
Evan Cheng authored
llvm-svn: 63506
-
Evan Cheng authored
Teach LowerBRCOND to recognize (xor (setcc x), 1). The xor inverts the condition. It's normally transformed away by the dag combiner, unless the condition is set by an arithmetic op with overflow. llvm-svn: 63505
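Why the xor is an inversion, shown standalone (this illustrates the arithmetic, not the DAG pattern-matching itself):

```cpp
#include <cassert>

// setcc produces 0 or 1; xor-ing that with 1 flips the bit, so
// (xor (setcc x), 1) is exactly the inverted condition.
int main() {
  for (int x = -2; x <= 2; ++x) {
    int cond = (x == 0);      // stands in for setcc
    int inverted = cond ^ 1;  // the xor LowerBRCOND now recognizes
    assert(inverted == (x != 0));
  }
  return 0;
}
```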
-
- Feb 01, 2009
-
Torok Edwin authored
var-args, and don't allow FP return values llvm-svn: 63495
-
Duncan Sands authored
crashes or wrong code in codegen of large integers: eliminate the legacy getIntegerVTBitMask and getIntegerVTSignBit methods, which returned their value as a uint64_t and so couldn't handle huge types. llvm-svn: 63494
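One way to compute the same masks at arbitrary width with APInt; an illustrative replacement, not necessarily the exact code that landed:

```cpp
#include "llvm/ADT/APInt.h"

// Width-safe equivalents of the removed helpers: build the masks as
// APInts sized to the type, so e.g. i256 works where uint64_t could not.
llvm::APInt integerVTBitMask(unsigned BitWidth) {
  return llvm::APInt::getMaxValue(BitWidth);  // all bits set
}

llvm::APInt integerVTSignBit(unsigned BitWidth) {
  return llvm::APInt::getOneBitSet(BitWidth, BitWidth - 1);  // top bit only
}
```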
-
- Jan 31, 2009
-
Dale Johannesen authored
argument. Adjust all callers and overloaded versions. llvm-svn: 63444
-
Bill Wendling authored
llvm-svn: 63442
-
- Jan 30, 2009
-
Sanjiv Gupta authored
llvm-svn: 63387
-
Sanjiv Gupta authored
llvm-svn: 63382
-
Mon P Wang authored
an illegal type. llvm-svn: 63380
-
Sanjiv Gupta authored
Enable emitting of constant values in non-default address spaces as well. The APIs emitting constants now take an additional parameter signifying the address space in which to emit. APIs like getData8BitsDirective() are made virtual, enabling targets to define appropriate directives for various sizes and address spaces. llvm-svn: 63377
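A rough sketch of the all-virtual interface shape, with hypothetical names (the Feb 02 change above later makes the default-address-space case non-virtual):

```cpp
// Hypothetical sketch of the design described: one overridable hook per
// data size, each taking the target address space. The defaults below
// ignore AddrSpace; targets override to emit space-specific directives.
class AsmDirectives {
public:
  virtual ~AsmDirectives() {}
  virtual const char *getData8BitsDirective(unsigned AddrSpace = 0) const {
    return "\t.byte\t";
  }
  virtual const char *getData16BitsDirective(unsigned AddrSpace = 0) const {
    return "\t.short\t";
  }
  virtual const char *getData32BitsDirective(unsigned AddrSpace = 0) const {
    return "\t.long\t";
  }
  virtual const char *getData64BitsDirective(unsigned AddrSpace = 0) const {
    return "\t.quad\t";
  }
};
```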
-
- Jan 29, 2009
-
Dan Gohman authored
dagcombines that help it match in several more cases. Add several more cases to test/CodeGen/X86/bt.ll. This doesn't yet include matching for BT with an immediate operand; it just covers more register+register cases. llvm-svn: 63266
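The register+register idiom in question, illustrated standalone (this shows the semantics BT implements, not the combine itself):

```cpp
#include <cassert>

// BT tests a single bit of a register: the (x >> n) & 1 idiom below is
// the kind of pattern the new dagcombines help turn into a BT.
int main() {
  unsigned x = 0xAu;  // bits 1 and 3 set
  for (unsigned n = 0; n < 8; ++n) {
    bool bit = (x >> n) & 1;  // register+register bit test
    assert(bit == (n == 1 || n == 3));
  }
  return 0;
}
```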
-
Mon P Wang authored
llvm-svn: 63252
-
- Jan 28, 2009
-
Duncan Sands authored
llvm-svn: 63198
-
Evan Cheng authored
The memory alignment requirement on some of the mov{h|l}p{d|s} patterns is 16 bytes. That is overly strict. These instructions read/write f64 memory locations without an alignment requirement. llvm-svn: 63195
-
Mon P Wang authored
llvm-svn: 63193
-
Evan Cheng authored
llvm-svn: 63161
-
- Jan 27, 2009
-
Anton Korobeynikov authored
mergeable string section. I don't see any bad impact of this decision (rather than placing it into a mergeable const section, as it was before), but at least the Darwin linker won't complain anymore. The problem in LLVM is that we don't have a special type for string constants (like gcc does). Even more, we have two separate types: ConstantArray for non-null strings and ConstantAggregateZero for null stuff.... It's a bit weird :) llvm-svn: 63142
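To illustrate the two constant classes, a hedged sketch using today's C++ API (in 2009 the non-zero case was a ConstantArray; ConstantDataArray came later):

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"

using namespace llvm;

// The same [5 x i8] type yields two different constant classes
// depending on the contents, which is the quirk described above.
void demo(LLVMContext &Ctx) {
  Constant *Str = ConstantDataArray::getString(Ctx, "abcd");  // "abcd\0"
  ArrayType *Ty = ArrayType::get(Type::getInt8Ty(Ctx), 5);
  Constant *Zeros = ConstantAggregateZero::get(Ty);  // the all-zero case
  (void)Str;
  (void)Zeros;
}
```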
-
Dan Gohman authored
llvm-svn: 63121
-
Dan Gohman authored
llvm-svn: 63119
-
Dan Gohman authored
instead of via a by-reference argument. No functionality change. llvm-svn: 63118
-
Evan Cheng authored
llvm-svn: 63090
-
Dan Gohman authored
llvm-svn: 63088
-
Dan Gohman authored
llvm-svn: 63078
-
Dan Gohman authored
Don't use the Red Zone when dynamic stack realignment is needed. This could be implemented, but most x86-64 ABIs don't require dynamic stack realignment, so it isn't urgent. llvm-svn: 63074
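For context, a standalone example of the kind of function that needs dynamic realignment (and so, after this change, forgoes the Red Zone):

```cpp
// A local whose alignment exceeds the ABI stack alignment (16 bytes on
// x86-64) forces the frame to be realigned at runtime; a small leaf
// function like this would otherwise be a natural Red Zone candidate.
int leaf() {
  alignas(64) char buf[64];  // over-aligned stack object
  buf[0] = 1;
  return buf[0];
}
```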
-
- Jan 26, 2009
-
Scott Michel authored
- Update DWARF debugging support. llvm-svn: 63059
-
Scott Michel authored
doesn't support it. The default is set to 'true', so this should not impact any other target backends. llvm-svn: 63058
-
Dan Gohman authored
disabled by default; I'll enable it when I hook it up with the llvm-gcc flag which controls it. llvm-svn: 63056
-
Evan Cheng authored
Enhance the logic in X86DAGToDAGISel::PreprocessForRMW that moves a load inside callseq_start so it can be folded into a call. It was not considering cases where a token factor sits between the load and the callseq_start. llvm-svn: 63022
-
Dan Gohman authored
tidy up SDUse and related code.
- Replace the operator= member functions with a set method, like LLVM Use has, and variants setInitial and setNode, which take care of updating use lists, like LLVM Use's do. This simplifies code that calls these functions.
- getSDValue() is renamed to get(), as in LLVM Use, though most places can either use the implicit conversion to SDValue or the convenience functions instead.
- Fix some more node vs. value terminology issues.
Also, eliminate the one remaining use of SDOperandPtr, and SDOperandPtr itself. llvm-svn: 62995
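A rough outline of the resulting SDUse shape (illustrative declarations only; the real class lives in LLVM's SelectionDAG headers):

```cpp
class SDNode;
struct SDValue { SDNode *Node; unsigned ResNo; };

// Interface outline matching the description above; bodies and the
// intrusive use-list plumbing are elided.
class SDUse {
  SDValue Val;  // the value this use refers to
public:
  void set(const SDValue &V);         // replaces operator=, updates use lists
  void setInitial(const SDValue &V);  // first hookup: no unlinking needed
  void setNode(SDNode *N);            // swap the node, keep the result number
  const SDValue &get() const { return Val; }        // renamed from getSDValue()
  operator const SDValue &() const { return Val; }  // implicit conversion
};
```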
-
Scott Michel authored
llvm-svn: 62991
-
Scott Michel authored
- Rename fcmp.ll test to fcmp32.ll, start adding new double tests to fcmp64.ll
- Fix select_bits.ll test
- Capitulate to the DAGCombiner and move i64 constant loads to instruction selection (SPUISelDAGtoDAG.cpp). <rant>DAGCombiner will insert all kinds of 64-bit optimizations after operation legalization occurs and now we have to do most of the work that instruction selection should be doing twice (once to determine if v2i64 build_vector can be handled by SelectCode(), which then runs all of the predicates a second time to select the necessary instructions.) But, CellSPU is a good citizen.</rant> llvm-svn: 62990
-
Nate Begeman authored
llvm-svn: 62989
-
Nate Begeman authored
llvm-svn: 62988
-