- Jan 23, 2005
-
Chris Lattner authored
llvm-svn: 19791
-
Chris Lattner authored
llvm-svn: 19789
-
Chris Lattner authored
it. llvm-svn: 19788
-
Chris Lattner authored
llvm-svn: 19787
-
Chris Lattner authored
llvm-svn: 19786
-
Chris Lattner authored
llvm-svn: 19782
-
Chris Lattner authored
The first half of correct chain insertion for libcalls. This is not enough to fix Fhourstones yet though. llvm-svn: 19781
-
Chris Lattner authored
the new TLI that is available. Implement support for handling out of range shifts. This allows us to compile this code (a 64-bit rotate):

    unsigned long long f3(unsigned long long x) {
      return (x << 32) | (x >> (64-32));
    }

into this:

    f3:
            mov %EDX, DWORD PTR [%ESP + 4]
            mov %EAX, DWORD PTR [%ESP + 8]
            ret

GCC produces this:

    $ gcc t.c -masm=intel -O3 -S -o - -fomit-frame-pointer
    ..
    f3:
            push %ebx
            mov %ebx, DWORD PTR [%esp+12]
            mov %ecx, DWORD PTR [%esp+8]
            mov %eax, %ebx
            mov %edx, %ecx
            pop %ebx
            ret

The Simple ISEL produces (eww gross):

    f3:
            sub %ESP, 4
            mov DWORD PTR [%ESP], %ESI
            mov %EDX, DWORD PTR [%ESP + 8]
            mov %ECX, DWORD PTR [%ESP + 12]
            mov %EAX, 0
            mov %ESI, 0
            or %EAX, %ECX
            or %EDX, %ESI
            mov %ESI, DWORD PTR [%ESP]
            add %ESP, 4
            ret

llvm-svn: 19780
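(Editorial note: a minimal sketch of what "out of range" shift handling means here, written in plain C++ rather than LLVM's SelectionDAG code. When a 64-bit value is shifted by a constant amount of 32 or more on a 32-bit target, each half of the result is either zero or a copy of one source half, so the rotate above reduces to a word swap with no shift instructions at all. The function name is invented for illustration.)

    #include <cstdint>

    // Sketch only: models the legalization reasoning, not the compiler code.
    // For x << 32, the low word of the result is 0 and the high word is the
    // original low word; for x >> 32, the high word is 0 and the low word is
    // the original high word. OR-ing the two together is just a word swap.
    uint64_t rotate_u64_by_32(uint64_t x) {
      uint32_t lo = static_cast<uint32_t>(x);        // low 32 bits of x
      uint32_t hi = static_cast<uint32_t>(x >> 32);  // high 32 bits of x
      uint32_t result_lo = hi;  // contributed by (x >> 32)
      uint32_t result_hi = lo;  // contributed by (x << 32)
      return (static_cast<uint64_t>(result_hi) << 32) | result_lo;
    }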
-
Chris Lattner authored
llvm-svn: 19779
-
Chris Lattner authored
llvm-svn: 19778
-
Chris Lattner authored
llvm-svn: 19777
-
Reid Spencer authored
llvm-svn: 19776
-
Reid Spencer authored
doesn't support certain directives, and symbols on cygwin are prefixed with an underscore. This patch makes the necessary adjustments to the output. llvm-svn: 19775
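(Editorial note: a minimal sketch of the underscore quirk being worked around, with an invented helper name; this is not the actual LLVM AsmPrinter code.)

    #include <string>

    // Hypothetical helper, for illustration only: on Cygwin, C-level symbols
    // appear in the generated assembly with a leading underscore, so "main"
    // is emitted as "_main".
    std::string decorateSymbolForCygwin(const std::string &Name, bool IsCygwin) {
      return IsCygwin ? "_" + Name : Name;
    }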
-
Chris Lattner authored
llvm-svn: 19774
-
Chris Lattner authored
llvm-svn: 19773
-
Chris Lattner authored
llvm-svn: 19772
-
Chris Lattner authored
Delete dead functions. llvm-svn: 19771
-
Chris Lattner authored
Also, make DiffFilesWithTolerance take sys::Path objects instead of std::strings. llvm-svn: 19770
-
Chris Lattner authored
llvm-svn: 19769
-
Chris Lattner authored
llvm-svn: 19768
-
Chris Lattner authored
used by other tools. llvm-svn: 19767
-
Chris Lattner authored
llvm-svn: 19766
-
Chris Lattner authored
llvm-svn: 19765
-
Andrew Lenharth authored
llvm-svn: 19764
-
Chris Lattner authored
llvm-svn: 19763
-
- Jan 22, 2005
-
Reid Spencer authored
won't be propagated to the configure script until there's a need to change configure.ac for some larger purpose. llvm-svn: 19762
-
Chris Lattner authored
llvm-svn: 19761
-
Chris Lattner authored
differences, which means that identical instructions (after stripping off the first literal string) do not run any different code at all. On the X86, this turns this code:

    switch (MI->getOpcode()) {
    case X86::ADC32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::ADC32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::ADC32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::ADD32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::ADD32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::ADD32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::AND32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::AND32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::AND32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::CMP32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::CMP32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::MOV32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::MOV32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::OR32mi:     printOperand(MI, 4, MVT::i32); break;
    case X86::OR32mi8:    printOperand(MI, 4, MVT::i8);  break;
    case X86::OR32mr:     printOperand(MI, 4, MVT::i32); break;
    case X86::ROL32mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::ROR32mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::SAR32mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::SBB32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::SBB32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::SBB32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::SHL32mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::SHLD32mrCL: printOperand(MI, 4, MVT::i32); break;
    case X86::SHR32mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::SHRD32mrCL: printOperand(MI, 4, MVT::i32); break;
    case X86::SUB32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::SUB32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::SUB32mr:    printOperand(MI, 4, MVT::i32); break;
    case X86::TEST32mi:   printOperand(MI, 4, MVT::i32); break;
    case X86::TEST32mr:   printOperand(MI, 4, MVT::i32); break;
    case X86::TEST8mi:    printOperand(MI, 4, MVT::i8);  break;
    case X86::XCHG32mr:   printOperand(MI, 4, MVT::i32); break;
    case X86::XOR32mi:    printOperand(MI, 4, MVT::i32); break;
    case X86::XOR32mi8:   printOperand(MI, 4, MVT::i8);  break;
    case X86::XOR32mr:    printOperand(MI, 4, MVT::i32); break;
    }

into this:

    switch (MI->getOpcode()) {
    case X86::ADC32mi:    case X86::ADC32mr:  case X86::ADD32mi:
    case X86::ADD32mr:    case X86::AND32mi:  case X86::AND32mr:
    case X86::CMP32mi:    case X86::CMP32mr:  case X86::MOV32mi:
    case X86::MOV32mr:    case X86::OR32mi:   case X86::OR32mr:
    case X86::SBB32mi:    case X86::SBB32mr:  case X86::SHLD32mrCL:
    case X86::SHRD32mrCL: case X86::SUB32mi:  case X86::SUB32mr:
    case X86::TEST32mi:   case X86::TEST32mr: case X86::XCHG32mr:
    case X86::XOR32mi:    case X86::XOR32mr:
      printOperand(MI, 4, MVT::i32); break;
    case X86::ADC32mi8:   case X86::ADD32mi8: case X86::AND32mi8:
    case X86::OR32mi8:    case X86::ROL32mi:  case X86::ROR32mi:
    case X86::SAR32mi:    case X86::SBB32mi8: case X86::SHL32mi:
    case X86::SHR32mi:    case X86::SUB32mi8: case X86::TEST8mi:
    case X86::XOR32mi8:
      printOperand(MI, 4, MVT::i8); break;
    }

After this, the generated asmwriters look pretty much as though they were generated by hand. This shrinks the X86 asmwriter.inc files from 55101->39669 and 55429->39551 bytes each, and PPC from 16766->12859 bytes.

llvm-svn: 19760
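(Editorial note: a rough sketch of how this kind of grouping can be computed in a generator, under the assumption that the remaining printing code for each opcode is rendered to a string and opcodes with byte-identical bodies are bucketed together. Names and types are invented for illustration; this is not the actual AsmWriterEmitter code.)

    #include <map>
    #include <string>
    #include <vector>

    // Map from rendered case-body text to the opcodes that share it, so the
    // generator can emit one body preceded by a run of case labels.
    std::map<std::string, std::vector<std::string>>
    groupCasesByBody(const std::map<std::string, std::string> &OpcodeToBody) {
      std::map<std::string, std::vector<std::string>> Groups;
      for (const auto &Entry : OpcodeToBody)
        Groups[Entry.second].push_back(Entry.first);
      return Groups;
    }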
-
Chris Lattner authored
strings starts out with a constant string, we emit the string first, using a table lookup (instead of a switch statement). Because this is usually the opcode portion of the asm string, the differences between the instructions have now been greatly reduced. This allows many more case statements to be grouped together. This patch also allows instruction cases to be grouped together when the instruction patterns are exactly identical (common after the opcode string has been ripped off), and when the differing operand is a MachineInstr operand that needs to be formatted. The end result of this is a mean and lean generated AsmPrinter! llvm-svn: 19759
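(Editorial note: a minimal sketch of the table-lookup idea described above; the opcode numbering, strings, and function name are invented for illustration and are not the generated code itself.)

    #include <ostream>

    // Hypothetical string table indexed by opcode: the constant leading part
    // of each asm string is printed with one array lookup instead of a switch.
    static const char *const OpcodePrefix[] = {
      "add ",   // opcode 0 (invented numbering)
      "addc ",  // opcode 1
      "adde ",  // opcode 2
    };

    void printOpcodePrefix(unsigned Opcode, std::ostream &O) {
      O << OpcodePrefix[Opcode];
    }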
-
Chris Lattner authored
llvm-svn: 19758
-
Jeff Cohen authored
llvm-svn: 19757
-
Chris Lattner authored
llvm-svn: 19756
-
Chris Lattner authored
emitting code like this:

    case PPC::ADD: O << "add "; printOperand(MI, 0, MVT::i64); O << ", ";
      printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64);
      O << '\n'; break;
    case PPC::ADDC: O << "addc "; printOperand(MI, 0, MVT::i64); O << ", ";
      printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64);
      O << '\n'; break;
    case PPC::ADDE: O << "adde "; printOperand(MI, 0, MVT::i64); O << ", ";
      printOperand(MI, 1, MVT::i64); O << ", "; printOperand(MI, 2, MVT::i64);
      O << '\n'; break;
    ...

Emit code like this:

    case PPC::ADD:
    case PPC::ADDC:
    case PPC::ADDE:
    ...
      switch (MI->getOpcode()) {
      case PPC::ADD:  O << "add ";  break;
      case PPC::ADDC: O << "addc "; break;
      case PPC::ADDE: O << "adde "; break;
      ...
      }
      printOperand(MI, 0, MVT::i64);
      O << ", ";
      printOperand(MI, 1, MVT::i64);
      O << ", ";
      printOperand(MI, 2, MVT::i64);
      O << "\n";
      break;

This shrinks the PPC asm writer from 24785->15205 bytes (even though the new asmwriter has much more whitespace than the old one), and the X86 printers shrink quite a bit too. The important implication of this is that GCC no longer hits swap when building the PPC backend in optimized mode. Thus this fixes PR448.

-Chris

llvm-svn: 19755
-
Chris Lattner authored
llvm-svn: 19754
-
Chris Lattner authored
llvm-svn: 19753
-
Jeff Cohen authored
llvm-svn: 19752
-
Jeff Cohen authored
llvm-svn: 19751
-
Jeff Cohen authored
llvm-svn: 19750
-
Chris Lattner authored
and more understandable. It also allows us to do simple things like fold consecutive literal strings together. For example, instead of emitting this for the X86 backend:

    O << "adc" << "l" << " ";

we now generate this:

    O << "adcl ";

*whoa* :) This shrinks the X86 asmwriters from 62729->58267 and 65176->58644 bytes for the intel/att asm writers respectively.

llvm-svn: 19749
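(Editorial note: a small sketch of the folding step itself, with hypothetical types and names rather than the TableGen backend's actual representation: model the asm string as a sequence of pieces and merge adjacent literal pieces before emitting the writer.)

    #include <string>
    #include <vector>

    struct AsmPiece {
      bool IsLiteral;    // true: raw text; false: placeholder for an operand print
      std::string Text;
    };

    // Merge runs of adjacent literal pieces, e.g. "adc", "l", " " -> "adcl ".
    std::vector<AsmPiece> foldLiterals(const std::vector<AsmPiece> &Pieces) {
      std::vector<AsmPiece> Out;
      for (const AsmPiece &P : Pieces) {
        if (P.IsLiteral && !Out.empty() && Out.back().IsLiteral)
          Out.back().Text += P.Text;
        else
          Out.push_back(P);
      }
      return Out;
    }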
-
Jeff Cohen authored
llvm-svn: 19748
-