Enhance the truncstore optimization code to handle shifted
values and propagate demanded bits through them in simple cases.
This allows this code:

void foo(char *P) {
   strcpy(P, "abc");
}

to compile to:

_foo:
	ldrb r3, [r1]
	ldrb r2, [r1, #+1]
	ldrb r12, [r1, #+2]!
	ldrb r1, [r1, #+1]
	strb r1, [r0, #+3]
	strb r2, [r0, #+1]
	strb r12, [r0, #+2]
	strb r3, [r0]
	bx lr

instead of:

_foo:
	ldrb r3, [r1, #+3]
	ldrb r2, [r1, #+2]
	orr r3, r2, r3, lsl #8
	ldrb r2, [r1, #+1]
	ldrb r1, [r1]
	orr r2, r1, r2, lsl #8
	orr r3, r2, r3, lsl #16
	strb r3, [r0]
	mov r2, r3, lsr #24
	strb r2, [r0, #+3]
	mov r2, r3, lsr #16
	strb r2, [r0, #+2]
	mov r3, r3, lsr #8
	strb r3, [r0, #+1]
	bx lr

testcase here: test/CodeGen/ARM/truncstore-dag-combine.ll

This also helps occasionally for X86 and other cases not
involving unaligned load/stores.

llvm-svn: 42954
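As a rough sketch (not part of the original commit), the kind of pattern this
combine targets is a series of truncating byte stores of a shifted word, where
each store only demands 8 bits of the shifted value. The function name below is
hypothetical; it just makes the pattern explicit at the source level:

void store_word_bytes(char *P, unsigned X) {
	/* Each assignment truncates to i8, so only the low byte of the
	   shifted value is demanded. Propagating that demand backwards
	   through the shift (a mask of 0xFF on (X >> 16) demands only
	   bits 16..23 of X) lets the backend emit one plain byte store
	   per line instead of materializing the full 32-bit word and
	   shifting it down each time. */
	P[0] = (char) X;         /* truncstore i8 of X           */
	P[1] = (char)(X >> 8);   /* truncstore i8 of (srl X, 8)  */
	P[2] = (char)(X >> 16);  /* truncstore i8 of (srl X, 16) */
	P[3] = (char)(X >> 24);  /* truncstore i8 of (srl X, 24) */
}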