- Dec 01, 2008
- Chris Lattner authored (llvm-svn: 60365)
- Douglas Gregor authored (llvm-svn: 60364)
- Daniel Dunbar authored (llvm-svn: 60363)
- Daniel Dunbar authored: ... __ASSEMBLER__ properly. Patch from Roman Divacky (with minor formatting changes). Thanks! (llvm-svn: 60362)
- Douglas Gregor authored (llvm-svn: 60361)
- Douglas Gregor authored (llvm-svn: 60360)
- Douglas Gregor authored (llvm-svn: 60359)
- Scott Michel authored:
  - Fix v2[if]64 vector insertion code before IBM files a bug report.
  - Ensure that zero (0) offsets relative to $sp don't trip an assert (add $sp, 0 gets legalized to $sp alone, tripping an assert).
  - Shuffle masks passed to SPUISD::SHUFB are now v16i8 or v4i32.
  (llvm-svn: 60358)
- Douglas Gregor authored (llvm-svn: 60357)
- Douglas Gregor authored (llvm-svn: 60355)
- Chris Lattner authored (llvm-svn: 60354)
- Chris Lattner authored (llvm-svn: 60353)
- Chris Lattner authored: ... damaged approximation. This should fix it on big endian platforms and on 64-bit. (llvm-svn: 60352)
- Duncan Sands authored: ... MERGE_VALUES node with only one operand, so get rid of special code that only existed to handle that possibility. (llvm-svn: 60349)
- Duncan Sands authored: ... ReplaceNodeResults: rather than returning a node which must have the same number of results as the original node (which means mucking around with MERGE_VALUES, and which is also easy to get wrong since SelectionDAG folding may mean you don't get the node you expect), return the results in a vector. (llvm-svn: 60348)
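  A rough sketch of the vector-returning hook this describes; the exact signature has varied across LLVM releases, and the enclosing class and body below are placeholders, not the commit's code:

      #include "llvm/ADT/SmallVector.h"
      #include "llvm/CodeGen/SelectionDAG.h"
      using namespace llvm;

      // Hypothetical target lowering class, for illustration only.
      struct MyTargetLowering {
        // The target pushes one replacement value per original result into
        // Results; the legalizer pairs them with N's results positionally,
        // so no MERGE_VALUES node has to be built or pattern-matched.
        void ReplaceNodeResults(SDNode *N,
                                SmallVectorImpl<SDValue> &Results,
                                SelectionDAG &DAG) {
          // e.g. for a node producing a value and a chain:
          //   Results.push_back(NewValue);
          //   Results.push_back(NewChain);
        }
      };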
- Bill Wendling authored: ... don't have overlapping bits. (llvm-svn: 60344)
- Bill Wendling authored (llvm-svn: 60343)
- Bill Wendling authored (llvm-svn: 60341)
- Bill Wendling authored: Move pattern check outside of the if-then statement. This prevents us from fiddling with constants unless we have to. (llvm-svn: 60340)
- Chris Lattner authored (llvm-svn: 60339)
- Chris Lattner authored: ... that it isn't reallocated all the time. This is a tiny speedup for GVN: 3.90->3.88s. (llvm-svn: 60338)
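  The pattern being applied, in a self-contained sketch (names are illustrative, not from the commit): hoist the scratch vector out of the hot loop so its storage is reused across iterations instead of reallocated each time.

      #include <vector>

      void processBlocks(const std::vector<int> &Blocks) {
        std::vector<int> Scratch;   // constructed once, outside the loop
        for (int N : Blocks) {
          Scratch.clear();          // drops elements but keeps capacity
          for (int I = 0; I < N; ++I)
            Scratch.push_back(I);   // rarely allocates after the first pass
          // ... use Scratch ...
        }
      }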
- Chris Lattner authored (llvm-svn: 60337)
- Chris Lattner authored (llvm-svn: 60336)
- Chris Lattner authored: ... instead of std::sort. This shrinks the release-asserts LSR.o file by 1100 bytes of code on my system. We should start using array_pod_sort where possible. (llvm-svn: 60335)
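  For reference, a minimal use of array_pod_sort, assuming the usual entry point in llvm/ADT/STLExtras.h. It routes through a single qsort-style comparator rather than instantiating a full std::sort for every element type and call site, which is where the code-size win comes from:

      #include "llvm/ADT/STLExtras.h"
      #include "llvm/ADT/SmallVector.h"

      void sortOffsets(llvm::SmallVectorImpl<unsigned> &Offsets) {
        // Only valid for contiguous ranges of trivially copyable elements.
        llvm::array_pod_sort(Offsets.begin(), Offsets.end());
      }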
- Anders Carlsson authored (llvm-svn: 60334)
- Anders Carlsson authored (llvm-svn: 60333)
- Chris Lattner authored: ... This is a lot cheaper and conceptually simpler. (llvm-svn: 60332)
- Anders Carlsson authored (llvm-svn: 60331)
- Chris Lattner authored: ... DeadInsts ivar, just use it directly. (llvm-svn: 60330)
- Chris Lattner authored: ... buggy rewrite, this notifies ScalarEvolution of a pending instruction about to be removed and then erases it, instead of erasing it then notifying. (llvm-svn: 60329)
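  The ordering this fixes, sketched with the present-day ScalarEvolution interface (the 2008 method had a different name, but the required order is the same): notify the analysis while the instruction still exists, then erase it.

      #include "llvm/Analysis/ScalarEvolution.h"
      #include "llvm/IR/Instruction.h"

      void eraseAndNotify(llvm::ScalarEvolution &SE, llvm::Instruction *I) {
        SE.forgetValue(I);     // the analysis can still inspect I safely here
        I->eraseFromParent();  // only now does the pointer become invalid
      }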
- Chris Lattner authored: ... xor in testcase (or is a substring). (llvm-svn: 60328)
- Chris Lattner authored: ... new instructions it simplifies. Because we're threading jumps on edges with constants coming in from PHIs, we are inherently exposing a lot more constants to the new block. Folding them and deleting dead conditions allows the cost model in jump threading to be more accurate as it iterates. (llvm-svn: 60327)
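  For intuition, the kind of source pattern involved (a hypothetical C++ fragment, not taken from the commit): each predecessor of the second branch already determines the PHI's value, so the branch can be threaded, and the constants flowing in can then be folded in the duplicated blocks.

      int g();
      int h();

      int f(int x) {
        int c;
        if (x > 0) c = 1; else c = 0; // c becomes phi(1, 0) in SSA form
        if (c)                        // branch on the phi: each incoming edge
          return g();                 // already knows the outcome, so jump
        return h();                   // threading can bypass this test
      }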
- Chris Lattner authored: ... prevents the passmgr from adding yet another domtree invocation for Verifier if there is already one live. (llvm-svn: 60326)
- Chris Lattner authored: ... instead of using FoldPHIArgBinOpIntoPHI. In addition to being more obvious, this also fixes a problem where instcombine wouldn't merge two phis that had different variable indices. This prevented instcombine from factoring big chunks of code in 403.gcc. For example:

      insn_cuid.exit:
      -  %tmp336 = load i32** @uid_cuid, align 4
      -  %tmp337 = getelementptr %struct.rtx_def* %insn_addr.0.ph.i, i32 0, i32 3
      -  %tmp338 = bitcast [1 x %struct.rtunion]* %tmp337 to i32*
      -  %tmp339 = load i32* %tmp338, align 4
      -  %tmp340 = getelementptr i32* %tmp336, i32 %tmp339
         br label %bb62
      bb61:
      -  %tmp341 = load i32** @uid_cuid, align 4
      -  %tmp342 = getelementptr %struct.rtx_def* %insn, i32 0, i32 3
      -  %tmp343 = bitcast [1 x %struct.rtunion]* %tmp342 to i32*
      -  %tmp344 = load i32* %tmp343, align 4
      -  %tmp345 = getelementptr i32* %tmp341, i32 %tmp344
         br label %bb62
      bb62:
      -  %iftmp.62.0.in = phi i32* [ %tmp345, %bb61 ], [ %tmp340, %insn_cuid.exit ]
      +  %insn.pn2 = phi %struct.rtx_def* [ %insn, %bb61 ], [ %insn_addr.0.ph.i, %insn_cuid.exit ]
      +  %tmp344.pn.in.in = getelementptr %struct.rtx_def* %insn.pn2, i32 0, i32 3
      +  %tmp344.pn.in = bitcast [1 x %struct.rtunion]* %tmp344.pn.in.in to i32*
      +  %tmp341.pn = load i32** @uid_cuid
      +  %tmp344.pn = load i32* %tmp344.pn.in
      +  %iftmp.62.0.in = getelementptr i32* %tmp341.pn, i32 %tmp344.pn
         %iftmp.62.0 = load i32* %iftmp.62.0.in

  (llvm-svn: 60325)
- Anders Carlsson authored (llvm-svn: 60324)
- Anders Carlsson authored (llvm-svn: 60323)
- Chris Lattner authored: ... important because it is sinking the loads using the GEPs, but not the GEPs themselves. This triggers 647 times on 403.gcc and makes the .s file much much nicer. For example, before:

        je      LBB1_87         ## bb78
      LBB1_62:                  ## bb77
        leal    84(%esi), %eax
      LBB1_63:                  ## bb79
        movl    (%eax), %eax
      ...
      LBB1_87:                  ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
        jmp     LBB1_62         ## bb77

  after:

        jne     LBB1_63         ## bb79
      LBB1_62:                  ## bb78
        movl    $0, 4(%esp)
        movl    %esi, (%esp)
        call    L_make_decl_rtl$stub
      LBB1_63:                  ## bb79
        movl    84(%esi), %eax

  The input code was (and the GEPs are merged and the PHI is now eliminated by instcombine):

        br i1 %tmp233, label %bb78, label %bb77
      bb77:
        %tmp234 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22
        br label %bb79
      bb78:
        call void @make_decl_rtl(%struct.tree_node* %t_addr.3, i8* null) nounwind
        %tmp235 = getelementptr %struct.tree_node* %t_addr.3, i32 0, i32 0, i32 22
        br label %bb79
      bb79:
        %iftmp.12.0.in = phi %struct.rtx_def** [ %tmp235, %bb78 ], [ %tmp234, %bb77 ]
        %iftmp.12.0 = load %struct.rtx_def** %iftmp.12.0.in

  (llvm-svn: 60322)
- Anders Carlsson authored (llvm-svn: 60321)
- Anders Carlsson authored (llvm-svn: 60320)
- Anders Carlsson authored: Add Sema::isNullPointerConstant, which extwarns if necessary. Use it in Sema::CheckConditionalOperands. (llvm-svn: 60319)