- Apr 09, 2013
-
-
Eric Christopher authored
therefore not at all) of the pc or statement list. We also don't need to emit the compilation dir, so save space and time and don't bother. Fix up the testcase accordingly and verify that we don't emit the attributes or the items that they use. llvm-svn: 179114
-
Hal Finkel authored
Some general cleanup, and only scan the end of a BB for branches (once we're done with the terminators and debug values, there should not be any other branches). These address post-commit review suggestions by Bill Schmidt. No functionality change intended. llvm-svn: 179112
-
Nadav Rotem authored
llvm-svn: 179111
-
Chad Rosier authored
rather than deriving the StringRef from the Start and End SMLocs. Using the Start and End SMLocs works fine for operands such as [Symbol], but not for operands such as [Symbol + ImmDisp]. All existing test cases that reference a variable exercise this patch. rdar://13602265 llvm-svn: 179109
-
Benjamin Kramer authored
This pattern occurs in SROA output due to the way vector arguments are lowered on ARM. The testcase from PR15525 now compiles into this, which is better than the code we got with the old scalarrepl:

```
_Store:
        ldr.w   r9, [sp]
        vmov    d17, r3, r9
        vmov    d16, r1, r2
        vst1.8  {d16, d17}, [r0]
        bx      lr
```

Differential Revision: http://llvm-reviews.chandlerc.com/D647 llvm-svn: 179106
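A hedged sketch of the kind of source that produces this pattern (the type name and signature are illustrative, not the actual PR15525 testcase): a 16-byte vector passed by value arrives split across r1-r3 and the stack under the ARM calling convention, and SROA rebuilds it piecewise before the store.

```cpp
// Illustrative only: with the pointer in r0 and the by-value vector split
// across r1-r3 and one stack slot, the new combine lets the rebuilt value
// be stored with a single vst1.8, as in the assembly above.
typedef unsigned char uchar16 __attribute__((vector_size(16)));

void Store(uchar16 *Dst, uchar16 V) {
  *Dst = V;
}
```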
-
Hal Finkel authored
On PowerPC, non-vector loads and stores have r+i forms; however, in functions with large stack frames these were not being used to access slots far from the stack pointer because such slots were out of range for the signed 16-bit immediate offset field. This increases register pressure because we need a separate register for each offset (when the r+r form is used). By enabling virtual base registers, we can deal with large stack frames without unduly increasing register pressure. llvm-svn: 179105
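As an illustration (not taken from the commit), a frame large enough to push some locals beyond the reach of a signed 16-bit displacement from the stack pointer is all it takes to hit this case:

```cpp
// Illustrative only: the 64 KiB buffer guarantees that slots at the far end
// of this frame sit more than 32767 bytes from the stack pointer, out of
// range of the signed 16-bit immediate in an r+i load/store. With virtual
// base registers, a register anchored near those slots restores the r+i
// form instead of materializing a fresh offset register for each access.
void consume(char *);

void bigFrame() {
  char buffer[65536];
  char tail[64];
  consume(buffer);
  consume(tail);
}
```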
-
Hal Finkel authored
llvm-svn: 179104
-
Eli Bendersky authored
Some translations here are not one-to-one because there are grep|grep chains that are non-trivial to implement in terms of FileCheck features. I made an effort for the tests to remain as similar as possible; do let me know if you notice anything fishy. The good news is that some buggy tests were fixed (grep | not grep is a bug waiting to happen). llvm-svn: 179102
-
Rafael Espindola authored
For now it is templated only on being 64 or 32 bits. I will add little/big endian next. llvm-svn: 179097
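The shape of that parametrization is roughly the following (a minimal sketch, not the actual code; the trait and struct names are invented):

```cpp
#include <cstdint>
#include <type_traits>

// Sketch: select 32- or 64-bit ELF field widths from a single boolean
// template parameter. Endianness would become a second parameter later.
template <bool Is64Bits>
struct ELFWidths {
  typedef typename std::conditional<Is64Bits, std::uint64_t, std::uint32_t>::type Addr;
  typedef typename std::conditional<Is64Bits, std::uint64_t, std::uint32_t>::type Off;
};

template <bool Is64Bits>
struct SectionHeaderSketch {
  typename ELFWidths<Is64Bits>::Addr sh_addr;   // virtual address of the section
  typename ELFWidths<Is64Bits>::Off  sh_offset; // offset of the section in the file
};
```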
-
Alexey Samsonov authored
DWARF parser: Fix a DWARF-2/3 incompatibility: the size of DW_FORM_ref_addr is the same as DW_FORM_addr in DWARF2, and is 4/8 bytes for 32/64-bit DWARF starting from DWARF3. Adding a test for this is a huge pain - generating and uploading a pre-built binary with DWARF3 debug info is way too ugly, and writing fine-grained unittests for DebugInfo is currently impossible, as it doesn't expose any headers in include/llvm. That said, I'm going to choose the second approach and submit a patch exposing the DebugInfo headers for review soon enough. llvm-svn: 179095
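The rule being fixed can be summarized in a few lines (a hedged sketch of the logic described above, not the parser's actual code; the function name is invented):

```cpp
// Size in bytes of a DW_FORM_ref_addr attribute.
// DWARF2:  same size as an address on the target (i.e. DW_FORM_addr).
// DWARF3+: 4 bytes in 32-bit DWARF, 8 bytes in 64-bit DWARF.
unsigned getRefAddrSize(unsigned DwarfVersion, unsigned AddressByteSize,
                        bool Is64BitDwarf) {
  if (DwarfVersion == 2)
    return AddressByteSize;
  return Is64BitDwarf ? 8 : 4;
}
```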
-
Michael Gottesman authored
llvm-svn: 179087
-
Jakob Stoklund Olesen authored
llvm-svn: 179086
-
Nadav Rotem authored
llvm-svn: 179085
-
Nadav Rotem authored
llvm-svn: 179084
-
Jakob Stoklund Olesen authored
The save area is twice as big and there is no struct return slot. The stack pointer is always 16-byte aligned (after adding the bias). Also eliminate the stack adjustment instructions around calls when the function has a reserved stack frame. llvm-svn: 179083
-
Rafael Espindola authored
llvm-svn: 179076
-
Rafael Espindola authored
Use it when we don't need to know if we have a 32 or 64 bit SymbolTableEntry. llvm-svn: 179074
-
Joe Groff authored
Some parts of PointerIntPair assumed that the IntType of the pair was implicitly convertible to intptr_t, which is not the case for enum class values. Add a static_cast<intptr_t> to make these conversions explicit and allow PointerIntPair to be used with an enum class IntType. While we're here, rename some of the argument values so we don't have variables named "Int" floating around. llvm-svn: 179073
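For illustration, this is the kind of use that now works (the enum and pointee types here are invented, not from the patch):

```cpp
#include "llvm/ADT/PointerIntPair.h"

enum class Color : unsigned { Red, Green, Blue };

struct Node { int Value; };

// Pack a Node* and a Color into a single pointer-sized word. Before this
// change the enum class value would not implicitly convert to intptr_t,
// so PointerIntPair could not be instantiated with an enum class IntType.
void example(Node *N) {
  llvm::PointerIntPair<Node *, 2, Color> P(N, Color::Green);
  if (P.getInt() == Color::Green)
    P.setInt(Color::Blue);
}
```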
-
Rafael Espindola authored
Use it to share code and when we don't need to know if we have a 32 or 64 bit Section. llvm-svn: 179072
-
Nadav Rotem authored
Users may override operator new and implement any function that they like. llvm-svn: 179071
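The situation being guarded against looks like this (an invented example; any class or translation unit may do something similar):

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

// A user-provided operator new is an ordinary function: it may log, hand
// back previously returned memory, or have any other visible side effect,
// so the optimizer cannot treat calls to `new` like a plain malloc.
struct Tracked {
  int Data;
  static void *operator new(std::size_t Size) {
    std::printf("allocating %zu bytes\n", Size);
    return ::operator new(Size);
  }
};

Tracked *make() { return new Tracked(); }
```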
-
NAKAMURA Takumi authored
llvm-svn: 179066
-
Shuxin Yang authored
I brazenly think this change is slightly simpler than r178793 because:
- no "state" in the functor
- "OpndPtrs[i]" looks simpler than "&Opnds[OpndIndices[i]]"

While I can reproduce the problem in Valgrind, it is rather difficult to come up with a standalone test case. The reason is that when an iterator is invalidated, the stale invalidated elements are not yet clobbered by nonsense data, so the optimizer can still proceed successfully. Thanks to Benjamin for fixing this bug and generously providing the test case. llvm-svn: 179062
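The underlying hazard is ordinary pointer/iterator invalidation; a generic illustration (unrelated to the reassociation code itself):

```cpp
#include <vector>

// Generic illustration of the hazard: growing the vector may reallocate its
// storage, leaving `First` dangling. Under Valgrind the stale memory is
// flagged, but in a normal run the old bytes are often still intact, which
// is why such a bug can go unnoticed and is hard to reduce to a test case.
int staleRead() {
  std::vector<int> Opnds = {1, 2, 3};
  int *First = &Opnds[0];
  Opnds.push_back(4);   // may invalidate First
  return *First;        // undefined behavior if reallocation happened
}
```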
-
- Apr 08, 2013
-
-
Nadav Rotem authored
llvm-svn: 179060
-
Rafael Espindola authored
llvm-svn: 179051
-
Rafael Espindola authored
llvm-svn: 179048
-
Eli Bendersky authored
llvm-svn: 179047
-
Eli Bendersky authored
llvm-svn: 179043
-
Matt Arsenault authored
The first feature is not the CPU subtype anymore since r134127. llvm-svn: 179038
-
Eli Bendersky authored
llvm-svn: 179036
-
Arnold Schwaighofer authored
The costs are overfitted so that I can still use the legalization factor. For example the following kernel has about half the throughput vectorized than unvectorized when compiled with SSE2. Before this patch we would vectorize it.

```c
unsigned short A[1024];
double B[1024];

void f() {
  int i;
  for (i = 0; i < 1024; ++i) {
    B[i] = (double) A[i];
  }
}
```

radar://13599001 llvm-svn: 179033
-
Chad Rosier authored
rdar://13521249 llvm-svn: 179030
-
Hal Finkel authored
PowerPC has a conditional branch to the link register (return) instruction: BCLR. This should be used any time when we'd otherwise have a conditional branch to a return. This adds a small pass, PPCEarlyReturn, which runs just prior to the branch selection pass (and, importantly, after block placement) to generate these conditional returns when possible. It will also eliminate unconditional branches to returns (these happen rarely; most of the time these have already been tail duplicated by the time PPCEarlyReturn is invoked). This is a nice optimization for small functions that do not maintain a stack frame. llvm-svn: 179026
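A tiny example of the shape that benefits (illustrative only, not from the commit):

```cpp
// Illustrative only: in a small, frame-less function like this, the branch
// taken on one side of the condition can become a conditional return (BCLR)
// instead of a branch to a separate block that merely returns.
int clampNegative(int X) {
  if (X < 0)
    return 0;
  return X;
}
```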
-
Alexey Samsonov authored
llvm-svn: 179023
-
Rafael Espindola authored
llvm-svn: 179021
-
Vincent Lejeune authored
llvm-svn: 179020
-
Chandler Carruth authored
nested quoting schemes, and they're not important here... llvm-svn: 179014
-
Chandler Carruth authored
llvm-svn: 179010
-
Chandler Carruth authored
llvm-svn: 179009
-
Tim Northover authored
llvm-svn: 179006
-
Tim Northover authored
I've managed to convince myself that AArch64's acquire/release instructions are sufficient to guarantee C++11's required semantics, even in the sequentially-consistent case. llvm-svn: 179005
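Concretely, the claim covers code like the following (an invented example): both atomics below default to sequentially-consistent ordering, and the commit's argument is that lowering them to AArch64's load-acquire/store-release instructions (LDAR/STLR) still provides C++11's required single-total-order semantics without extra fences.

```cpp
#include <atomic>

std::atomic<int> Flag{0};
std::atomic<int> Data{0};

// Both the stores and the loads use memory_order_seq_cst by default.
void publish(int Value) {
  Data.store(Value);
  Flag.store(1);
}

int consume() {
  if (Flag.load() == 1)
    return Data.load();
  return -1;
}
```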
-