Mar 13, 2009
-
Chris Lattner authored
really horrible extensions that are disabled by default but that can be accepted by -fheinous-gnu-extensions (but which always emit a warning when enabled). As our first instance of this, implement PR3788/PR3794, which allows non-lvalues in inline asms in contexts where lvalues are required. bleh. llvm-svn: 66910
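A hypothetical illustration (not from the commit; all names made up) of the kind of code PR3788/PR3794 describe: an inline-asm output operand wrapped in a cast, which makes it a non-lvalue. By default this is rejected; under -fheinous-gnu-extensions it would be accepted with a warning.

    typedef unsigned int USItype;
    void f(void) {
      USItype x;
      /* The cast makes the output operand a non-lvalue;
         GCC historically accepted this pattern. */
      __asm__("movl $1, %0" : "=r"((USItype)(x)));
    }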
-
Chris Lattner authored
llvm-svn: 66909
-
Daniel Dunbar authored
llvm-svn: 66908
-
Daniel Dunbar authored
-ccc-print-options. llvm-svn: 66907
-
Daniel Dunbar authored
llvm-svn: 66906
-
Ted Kremenek authored
llvm-svn: 66905
-
Steve Naroff authored
rdar://problem/6675489. Also changed BlockDecl API to be more consistent (wrt FunctionDecl). llvm-svn: 66904
-
Chris Lattner authored
codegen to the same thing as integer truncates to i8 (the top bits are just undefined). This implements rdar://6667338 llvm-svn: 66902
-
Ted Kremenek authored
conditions. Currently the analyzer does not reason well about promotions/truncations of symbolic values, so at branch conditions, when we see "if (condition)" and condition is something like a 'short' or 'char', we essentially ignore the promotion to 'int' so that we track constraints on the original symbolic value. We only ignore the casts if the underlying type has the same or fewer bits than the converted type. This fixes: <rdar://problem/6619921> llvm-svn: 66899
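An illustrative snippet (not from the commit) of the pattern described: the 'char' is implicitly promoted to 'int' in the branch condition, and the analyzer now keeps tracking the constraint on the original symbol rather than the promoted value.

    void use(char flag) {
      if (flag) {  // implicit promotion: char -> int
        /* the analyzer can now record that the underlying
           'char' symbol is constrained to be nonzero here */
      }
    }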
-
Chris Lattner authored
errors when thrown. This gets us nice errors like this from tblgen:

    CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
    /Users/sabre/llvm/Debug/bin/tblgen: error: Included from X86.td:116:
    Parsing X86InstrInfo.td:922: In CMOVL32rr: X86cmov node requires exactly 4 operands!
    def CMOVL32rr : I<0x4C, MRMSrcReg,  // if <s, GR32 = GR32
        ^

instead of just:

    CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
    /Users/sabre/llvm/Debug/bin/tblgen: In CMOVL32rr: X86cmov node requires exactly 4 operands!

This is all I plan to do with this, but it should be easy enough to improve if anyone cares (e.g. keeping more loc info in "dag" expr records in tblgen).
llvm-svn: 66898
-
Chris Lattner authored
llvm-svn: 66897
-
Steve Naroff authored
rdar://problem/6451399. This solution is much simpler (and doesn't add any per-scope overhead, which concerned Chris). The only downside is that the LabelMap is now declared in two places (Sema and BlockSemaInfo). My original fix tried to unify the LabelMap in "Scope" (which would support nested functions in general). In any event, this fixes the bug given the current language definition. If/when we decide to support GCC-style nested functions, this will need to be tweaked. llvm-svn: 66896
-
Chris Lattner authored
llvm-svn: 66895
-
Ted Kremenek authored
llvm-svn: 66894
-
Steve Naroff authored
Remove ActiveScope (revert http://llvm.org/viewvc/llvm-project?view=rev&revision=65694 and http://llvm.org/viewvc/llvm-project?view=rev&revision=66741). Will replace with something better today... llvm-svn: 66893
-
Ted Kremenek authored
is 64-bit. I used his suggestion of doing a direct bitwidth/signedness conversion of the 'offset' instead of just changing the sign. For more information, see: http://lists.cs.uiuc.edu/pipermail/cfe-dev/2009-March/004587.html llvm-svn: 66892
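A minimal sketch (function and parameter names hypothetical) of what a direct bitwidth/signedness conversion of an offset could look like with llvm::APSInt, as opposed to merely flipping the sign:

    #include "llvm/ADT/APSInt.h"

    llvm::APSInt convertOffset(const llvm::APSInt &Offset,
                               unsigned TargetBits, bool TargetSigned) {
      // Resize to the target bit width, then adopt the target signedness.
      llvm::APSInt Result = Offset.extOrTrunc(TargetBits);
      Result.setIsSigned(TargetSigned);
      return Result;
    }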
-
Douglas Gregor authored
llvm-svn: 66891
-
Duncan Sands authored
llvm-svn: 66890
-
Daniel Dunbar authored
llvm-svn: 66889
-
Daniel Dunbar authored
llvm-svn: 66888
-
Daniel Dunbar authored
llvm-svn: 66887
-
Daniel Dunbar authored
llvm-svn: 66886
-
Daniel Dunbar authored
to perform). Still doesn't do anything interesting.
 - This code came out much cleaner than in ccc, with the reworked phases and the mapping of types to lists of compilation steps (phases) to perform.
llvm-svn: 66885
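An illustrative sketch (all names hypothetical, not the driver's actual API) of the idea of mapping an input type to the list of compilation phases to perform:

    #include <map>
    #include <string>
    #include <vector>

    enum Phase { Preprocess, Compile, Assemble, Link };

    // Map a file extension to the phases a driver would run for it.
    std::vector<Phase> phasesForType(const std::string &Ext) {
      static const std::map<std::string, std::vector<Phase>> Table = {
          {"c", {Preprocess, Compile, Assemble, Link}},
          {"i", {Compile, Assemble, Link}},  // already preprocessed
          {"s", {Assemble, Link}},
          {"o", {Link}},
      };
      auto It = Table.find(Ext);
      return It == Table.end() ? std::vector<Phase>{} : It->second;
    }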
-
Gabor Greif authored
llvm-svn: 66884
-
Daniel Dunbar authored
llvm-svn: 66883
-
Daniel Dunbar authored
llvm-svn: 66882
-
Daniel Dunbar authored
provide information about what steps should be done for a particular file type. llvm-svn: 66881
-
Daniel Dunbar authored
llvm-svn: 66880
-
Daniel Dunbar authored
llvm-svn: 66879
-
Bill Wendling authored
instructions. Prevent that if we don't want implicit uses of SSE. llvm-svn: 66877
-
Argyrios Kyrtzidis authored
llvm-svn: 66876
-
Evan Cheng authored
Fix some significant problems with constant pools that resulted in unnecessary padding between constant pool entries, larger-than-necessary alignments (e.g. 8-byte alignment for .literal4 sections), and potentially other issues.

Problems:
1. The ConstantPoolSDNode alignment field is the log2 value of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with plain alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created, but the asm printer groups them by section, which invalidates the offsets. The asm printer nevertheless uses them to determine the size of padding between entries.
5. The asm printer uses an expensive multimap data structure to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries, which is non-deterministic.

Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep a non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it is replaced with an alignment field. Offsets are not computed when constant pool entries are created; they are computed on the fly in the asm printer and JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. The JIT code is changed to compute entry offsets on the fly.

llvm-svn: 66875
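A minimal sketch of the log2 mix-up described in problem 3 (values illustrative): storing a plain byte alignment into a field that is interpreted as a log2 value inflates the effective alignment, e.g. 8 becoming 2^8 = 256.

    #include <cstdio>

    int main() {
      unsigned ByteAlign = 8;                // intended 8-byte alignment
      unsigned Log2Field = ByteAlign;        // bug: raw value stored as log2
      unsigned Effective = 1u << Log2Field;  // 2^8 = 256-byte alignment
      std::printf("effective alignment: %u\n", Effective);
      return 0;
    }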
-
Evan Cheng authored
llvm-svn: 66874
-
Chris Lattner authored
lexer into its own TGSourceMgr class. llvm-svn: 66873
-
Owen Anderson authored
llvm-svn: 66870
-
Chris Lattner authored
for i32/i64 expressions (we could also do i16 on cpus where i16 lea is fast, but I didn't add this). On the example, we now generate:

    _test:
            movl    4(%esp), %eax
            cmpl    $42, (%eax)
            setl    %al
            movzbl  %al, %eax
            leal    4(%eax,%eax,8), %eax
            ret

instead of:

    _test:
            movl    4(%esp), %eax
            cmpl    $41, (%eax)
            movl    $4, %ecx
            movl    $13, %eax
            cmovg   %ecx, %eax
            ret

llvm-svn: 66869
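A plausible reconstruction (not from the commit) of source that would produce the code above: a select between two small constants, where the setcc result feeds an lea computing 9*x + 4 (13 when the comparison is true, 4 otherwise).

    int test(int *p) {
      return *p < 42 ? 13 : 4;  // becomes setl + lea instead of cmov
    }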
-
Chris Lattner authored
example to:

    _test:
            movl    4(%esp), %eax
            cmpl    $41, (%eax)
            setg    %al
            movzbl  %al, %eax
            orl     $4294967294, %eax
            ret

instead of:

            movl    4(%esp), %eax
            cmpl    $41, (%eax)
            movl    $4294967294, %ecx
            movl    $4294967295, %eax
            cmova   %ecx, %eax
            ret

which is smaller in code size and faster. rdar://6668608
llvm-svn: 66868
-
Bill Wendling authored
llvm-svn: 66867
-
Bill Wendling authored
llvm-svn: 66866
-
Dan Gohman authored
operands can't both be fully folded at the same time. For example, in the included testcase, a global variable is being added with an add of two values. The global variable wants RIP-relative addressing, so it can't share the address with another base register, but it's still possible to fold the initial add. llvm-svn: 66865
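An illustrative example (not the commit's actual testcase) of the shape described: a global's address added to the sum of two values. The global wants RIP-relative addressing, so it can't share a base register with another value, but the inner add can still be folded into the addressing mode.

    extern int table[];
    int get(long i, long j) {
      return table[i + j];  // address = table + 4*(i + j)
    }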
-