- Feb 26, 2014
-
Chandler Carruth authored
address spaces. This isn't really a correctness issue (the values are truncated), but it's much cleaner. Patch by Matt Arsenault! llvm-svn: 202252
-
Chandler Carruth authored
integers. Complements the interfaces it is wrapping. llvm-svn: 202251
-
Evgeniy Stepanov authored
__android_log_write has an implicit message length limit. Print one line at a time. llvm-svn: 202250
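A minimal sketch of the line-splitting workaround described above (the tag and the LogLines helper are illustrative; only __android_log_write is the real API):

    // Split a message on newlines and log each line separately, since
    // __android_log_write truncates messages past an implicit limit.
    #include <android/log.h>
    #include <string.h>

    void LogLines(const char *msg) {
      char buf[512];
      while (const char *nl = strchr(msg, '\n')) {
        size_t len = (size_t)(nl - msg);
        if (len >= sizeof(buf))
          len = sizeof(buf) - 1; // clamp overly long lines
        memcpy(buf, msg, len);
        buf[len] = '\0';
        __android_log_write(ANDROID_LOG_INFO, "sanitizer", buf);
        msg = nl + 1;
      }
      if (*msg)
        __android_log_write(ANDROID_LOG_INFO, "sanitizer", msg);
    }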
-
Evgeniy Stepanov authored
llvm-svn: 202249
-
Rui Ueyama authored
If all input files are compatible with Structured Exception Handling, the linker is supposed to create an executable with a table of SEH handlers. The table consists of the entry point addresses of the exception handlers. The basic idea of SEH in the x86 Microsoft ABI is to list all valid entry points of exception handlers in read-only memory, so that an attacker cannot override the addresses in it. In the x86 ABI, data for exception handling lives mostly on the stack, so it is vulnerable to stack overflow attacks. To protect against them, the Windows runtime uses the table to check a return address, ensuring that the address is really a valid entry point of an exception handler. The compiler emits a list of exception handler functions to the .sxdata section. It also emits a marker symbol, "@feat.00", to indicate that the object is compatible with SEH. SEH is a relatively new feature for COFF, and mixing SEH-compatible and SEH-incompatible objects results in an invalid executable, hence the marker. If all input files are compatible with SEH, LLD emits a SEH table. The SEH table needs to be pointed to by the Load Configuration structure, so when emitting a SEH table LLD emits that too. The address of the Load Configuration is stored in the file header. llvm-svn: 202248
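A minimal sketch of the gating logic described above (types and helper names are illustrative, not the actual LLD code):

    #include <string>
    #include <vector>

    struct InputFile {
      std::vector<std::string> definedSymbols; // simplified stand-in
    };

    // An object advertises SEH compatibility by defining the marker
    // symbol "@feat.00".
    static bool isSEHCompatible(const InputFile &f) {
      for (const std::string &sym : f.definedSymbols)
        if (sym == "@feat.00")
          return true;
      return false;
    }

    // Mixing SEH-compatible and SEH-incompatible objects would produce
    // an invalid executable, so the SEH table (and the Load Configuration
    // that points to it) is emitted only when *all* inputs are compatible.
    bool shouldEmitSEHTable(const std::vector<InputFile> &inputs) {
      for (const InputFile &f : inputs)
        if (!isSEHCompatible(f))
          return false;
      return !inputs.empty();
    }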
-
Chandler Carruth authored
the default. Based on the patch by Matt Arsenault, D1764! I switched one place to use the more direct pointer type to compute the desired address space, and I reworked the memcpy rewriting section to reflect significant refactorings that this patch helped inspire. Thanks to several of the folks who helped review and improve the patch as well. llvm-svn: 202247
-
Evgeniy Stepanov authored
llvm-svn: 202246
-
Evgeniy Stepanov authored
llvm-svn: 202245
-
Alexey Samsonov authored
llvm-svn: 202244
-
Todd Fiala authored
Bug fix for pr18841: http://llvm.org/bugs/show_bug.cgi?id=18841 This change creates a stub Python readline.so module that does almost nothing. Its whole purpose is to prevent Python from loading the real module, something it does during the embedded Python interpreter's initialization sequence (and long before lldb ever requests it within embedded_interpreter.py). On Ubuntu 12.04 and 13.10 x86_64, and in the Python 2.7.6 tree, the stock Python readline module links against the GNU readline library. This appears to be the case on all Pythons except where __APPLE__ is defined. LLDB now requires linking against the libedit library. Something about having both libedit.so and libreadline.so linked into the same process space causes the Python readline.so to trigger a NULL memory access. I have put in a separate patch to python.org. This suppression of embedded-interpreter readline support can be removed if at least one of the following happens:
1. The stock Python distribution accepts a patch similar to what I submitted to Python 2.7.6's Modules/readline.c file.
2. The stock Python distribution implements Modules/readline.c in terms of libedit's readline compatibility mode (i.e. essentially compiles it the way __APPLE__ compiles that module) under Linux.
3. A clean-room implementation of the Python readline module is written against libedit (either its readline compatibility mode or native libedit). This could be implemented within the readline.cpp file that this change introduces. It cannot be a fork of Python's readline.c module due to LLVM licensing.
The net effect of this change on Linux is that the embedded Python's readline support will not exist. llvm-svn: 202243
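A hedged sketch of what such a stub could look like (Python 2 C API; the real file is the readline.cpp this change adds, which may differ):

    // A do-nothing "readline" extension module. Importing it is harmless,
    // but it shadows the stock readline module that links GNU readline
    // and crashes when libedit is loaded into the same process.
    #include <Python.h>

    static PyMethodDef ReadlineMethods[] = {
        {NULL, NULL, 0, NULL} // no functions exported at all
    };

    PyMODINIT_FUNC initreadline(void) {
      Py_InitModule("readline", ReadlineMethods);
    }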
-
Chandler Carruth authored
to work independently for the slice side and the other side. This allows us to only compute the minimum of the two when we actually rewrite to a memcpy that needs to take the minimum, and preserve higher alignment for one side or the other when rewriting to loads and stores. This fix was inspired by seeing the result of some refactoring that makes addrspace handling better. llvm-svn: 202242
-
NAKAMURA Takumi authored
[CMake] Use target_link_libraries(INTERFACE|PRIVATE) on CMake 2.8.12 to increase the opportunity for parallel builds. target_link_libraries(INTERFACE) doesn't introduce inter-target build dependencies in add_library, although final targets still depend on all the libraries they use. This lets most libraries be built in parallel. target_link_libraries(PRIVATE) is used for shared libraries. Each dependent library is linked into the target's .so, and its users will not see its grandchildren. For example:
- libclang.so contains all the libclang*.a it needs.
- c-index-test requires only libclang.so.
FIXME: lld is tweaked only minimally. Adding INTERFACE in each library would be better. llvm-svn: 202241
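An illustrative CMake sketch of the two keywords (target names are examples, not the exact targets this patch touches):

    # INTERFACE: record the usage requirement without serializing the
    # build; dependents still link the library at final-link time.
    add_library(clangAST STATIC ${AST_SOURCES})
    target_link_libraries(clangAST INTERFACE clangBasic)

    # PRIVATE: dependencies are linked into the shared library and are
    # not propagated to its users.
    add_library(libclang SHARED ${LIBCLANG_SOURCES})
    target_link_libraries(libclang PRIVATE clangAST clangBasic)

    # A user of libclang.so needs only libclang itself.
    add_executable(c-index-test c-index-test.c)
    target_link_libraries(c-index-test PRIVATE libclang)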
-
Craig Topper authored
llvm-svn: 202240
-
NAKAMURA Takumi authored
For now, use both keywords, INTERFACE and PRIVATE, via the variables:
- ${cmake_2_8_12_INTERFACE}
- ${cmake_2_8_12_PRIVATE}
They can be cleaned up when we move to CMake 2.8.12. llvm-svn: 202239
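A sketch of how such variables might be defined so one call site works both before and after the CMake 2.8.12 switch (the exact definition in the tree may differ):

    if(NOT CMAKE_VERSION VERSION_LESS 2.8.12)
      set(cmake_2_8_12_INTERFACE INTERFACE)
      set(cmake_2_8_12_PRIVATE PRIVATE)
    else()
      # Older CMake: expand to nothing, falling back to the old signature.
      set(cmake_2_8_12_INTERFACE "")
      set(cmake_2_8_12_PRIVATE "")
    endif()

    target_link_libraries(someLib ${cmake_2_8_12_INTERFACE} otherLib)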
-
NAKAMURA Takumi authored
llvm-svn: 202238
-
NAKAMURA Takumi authored
llvm-svn: 202237
-
NAKAMURA Takumi authored
Please add LLVMObject explicitly in each subdirectory that requires it. llvm-svn: 202236
-
NAKAMURA Takumi authored
llvm-svn: 202235
-
Craig Topper authored
llvm-svn: 202234
-
Craig Topper authored
llvm-svn: 202233
-
Chandler Carruth authored
D1764, which in turn set off the other refactorings that made 'getSliceAlign()' a sensible thing. There are two possible inputs to the required alignment of a memory transfer intrinsic: the alignment constraints of the source and the destination. If we are *only* introducing a (potentially new) offset onto one side of the transfer, we don't need to consider the alignment constraints of the other side. Use this to simplify the logic feeding into alignment computation for unsplit transfers. Also, hoist the clamping of these intrinsics' magical zero alignment to the more customary alignment of one so it happens early. That lets several other conditions melt away. No functionality changed. There is a further improvement this exposes which *will* change functionality, but that's arriving in a separate patch. llvm-svn: 202232
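A minimal sketch of the two rules named above, with plain integers standing in for the IR alignments:

    #include <algorithm>
    #include <cstdint>

    // An alignment of zero on these intrinsics conventionally means
    // "align 1", so clamp it early.
    static uint64_t clampAlign(uint64_t align) {
      return align == 0 ? 1 : align;
    }

    // When both sides constrain the transfer, the required alignment is
    // the minimum of the two; when only one side gains a new offset, only
    // that side's constraint needs to be consulted.
    uint64_t requiredTransferAlign(uint64_t srcAlign, uint64_t dstAlign) {
      return std::min(clampAlign(srcAlign), clampAlign(dstAlign));
    }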
-
Chandler Carruth authored
rewriting logic: don't pass custom offsets for the adjusted pointer to the new alloca. We always passed NewBeginOffset here. Sometimes we spelled it BeginOffset, but only when the two were in fact equal. What's worse, the API is set up so that you can't reasonably call it with anything else -- it assumes that you're passing it an offset relative to the *original* alloca that happens to fall within the new one. That's the whole point of NewBeginOffset; it's the clamped beginning offset. No functionality changed. llvm-svn: 202231
-
Chandler Carruth authored
alignment of the slice being rewritten, not any arbitrary offset. Every caller is really just trying to compute the alignment for the whole slice, never for some arbitrary offset within it. Callers also just pass a type, when they have one, to see if we can skip an explicit alignment in the IR by using the type's alignment. This makes for a much simpler interface. Another refactoring inspired by the addrspace patch for SROA, although only loosely related. llvm-svn: 202230
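A loose sketch of the simplified interface, using plain integers rather than the actual SROA types (MinAlign mirrors llvm::MinAlign: the largest power of two dividing both operands):

    #include <cstdint>

    static uint64_t MinAlign(uint64_t a, uint64_t b) {
      return (a | b) & (~(a | b) + 1); // lowest set bit of a|b
    }

    // Alignment of the whole slice: the alloca's alignment propagated
    // through the slice's offset. If a type is given and its natural
    // alignment already matches, return 0 to mean "no explicit alignment
    // needed in the IR".
    uint64_t getSliceAlign(uint64_t allocaAlign, uint64_t sliceOffset,
                           uint64_t typeABIAlign /* 0 if no type */) {
      uint64_t align = MinAlign(allocaAlign, sliceOffset);
      return (typeABIAlign != 0 && align == typeABIAlign) ? 0 : align;
    }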
-
Chandler Carruth authored
consistency with memcpy rewriting, and fix a latent bug in the alignment management for memset. The alignment issue is that getAdjustedAllocaPtr computes the *relative* offset into the new alloca, but the alignment wasn't being set to the relative offset; it was using the absolute offset into the old alloca. I don't think it's possible to write a test case that actually reaches this code where the resulting alignment would be observably different, but the intent was clearly to use the relative offset within the new alloca. llvm-svn: 202229
-
Chandler Carruth authored
rather than passing them as arguments. While I generally prefer actual arguments, in this case the readability loss is substantial. By using members we avoid repeatedly calculating the offsets, and once we're using members it is useful to ensure that those names *always* refer to the original-alloca-relative new offset for a rewritten slice. No functionality changed. Follow-up refactoring, all toward getting the address space patch merged. llvm-svn: 202228
-
Chandler Carruth authored
slice being rewritten. We had the same code scattered across most of the visits. Instead, compute the new offsets and the slice size once when we start to visit a particular slice, and use the member variables from then on. This removes quite a bit of code duplication. No functionality changed. This refactoring was done to make it easier to apply the address space patch to SROA. llvm-svn: 202227
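A schematic of the refactoring (member names follow the commit text; the class shape is illustrative, not the actual rewriter):

    #include <algorithm>
    #include <cstdint>

    struct SliceRewriter {
      // Extent of the new alloca, in offsets relative to the old one.
      uint64_t NewAllocaBeginOffset = 0, NewAllocaEndOffset = 0;
      // Extent of the slice currently being visited.
      uint64_t BeginOffset = 0, EndOffset = 0;
      // Computed once per slice, then reused by every visit method.
      uint64_t NewBeginOffset = 0, NewEndOffset = 0, SliceSize = 0;

      void beginSliceVisit() {
        NewBeginOffset = std::max(BeginOffset, NewAllocaBeginOffset);
        NewEndOffset = std::min(EndOffset, NewAllocaEndOffset);
        SliceSize = NewEndOffset - NewBeginOffset;
      }
    };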
-
NAKAMURA Takumi authored
llvm-svn: 202226
-
Ben Langmuir authored
llvm-svn: 202225
-
Chandler Carruth authored
checking in SROA. The primary change is to just rely on uge for checking that the offset is within the allocation size. This removes the explicit checks against isNegative, which were terribly error prone (including the reversed logic that led to PR18615) and prevented us from supporting stack allocations larger than half the address space... Ok, so maybe the latter isn't *common*, but it's a silly restriction to have. Also, we used to try to support a PHI node which loaded from before the start of the allocation if any of the loaded bytes were within the allocation. This doesn't make any sense; we have never really supported loading or storing *before* the allocation starts. The simplified logic just doesn't care. We continue to allow loading past the end of the allocation, in part to support cases where there is a PHI and some loads are larger than others, so the larger ones reach past the end of the allocation. We could solve this in a different and more conservative way, but I'm still somewhat paranoid about this. llvm-svn: 202224
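A minimal sketch of the unsigned-comparison idea (plain integers instead of APInt; uge/ugt become ordinary unsigned comparisons):

    #include <cstdint>

    // True if [offset, offset + accessSize) stays inside an allocation of
    // allocSize bytes. With unsigned arithmetic, a "negative" (wrapped)
    // offset is just a huge value and fails the first test naturally, so
    // no separate isNegative check is needed.
    bool offsetInBounds(uint64_t offset, uint64_t accessSize,
                        uint64_t allocSize) {
      return offset <= allocSize && accessSize <= allocSize - offset;
    }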
-
Richard Trieu authored
is converted to a true value. Detected by Clang's improved -Wbool-conversion. llvm-svn: 202223
-
Nick Lewycky authored
llvm-svn: 202222
-
Eric Christopher authored
llvm-svn: 202221
-
Eric Christopher authored
llvm-svn: 202220
-
Eric Christopher authored
llvm-svn: 202219
-
Nick Lewycky authored
Delete two declared overloads of CallInst::CallInst that are never defined or used. No functionality change. llvm-svn: 202218
-
Rui Ueyama authored
llvm-svn: 202217
-
Richard Trieu authored
null comparison when the pointer is known to be non-null. This catches array-to-pointer decay, function-to-pointer decay, and taking the address of a variable. It does not catch taking the address of a function, since that has previously been used to silence warnings. Pointer-to-bool conversion is under -Wbool-conversion. Pointer-to-null comparison is under -Wtautological-pointer-compare, a sub-group of -Wtautological-compare.

    void foo() {
      int arr[5];
      int x;
      // warn on these conditionals
      if (foo);
      if (arr);
      if (&x);
      if (foo == null);
      if (arr == null);
      if (&x == null);
      if (&foo);  // no warning
    }

llvm-svn: 202216
-
Rui Ueyama authored
The IMAGE_DLL_CHARACTERISTICS_NO_SEH flag should be set only when SEH is disabled. llvm-svn: 202215
-
Marshall Clow authored
Implement LWG issue 2306: match_results::reference should be value_type&, not const value_type&. This is a general move by the LWG to have the reference type of read-only containers be a non-const reference; however, there are no methods that return a non-const reference to a match_results entry, so there's no worry about getting a non-const reference to a constant object. llvm-svn: 202214
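A sketch of the typedef change on a simplified read-only container (the real edit is to std::match_results in libc++):

    template <class T>
    struct read_only_container {
      typedef T value_type;
      // Before LWG 2306: typedef const value_type& reference;
      typedef value_type& reference;
      typedef const value_type& const_reference;

      // Every accessor still returns a const reference, so callers can
      // never actually obtain a mutable reference to an element.
      const_reference operator[](unsigned i) const;
    };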
-
Paul Robinson authored
llvm-svn: 202213
-