  1. Apr 12, 2012
  2. Apr 06, 2012
    • Add a new option to the test driver, -N dsym or -N dwarf, in order to exclude tests decorated with · f1548d4f
      Johnny Chen authored
      either @dsym_test or @dwarf_test from being executed during the test suite run.  There are still lots of
      Test*.py files which have not yet been decorated with the new decorators.
      
      An example:
      
      # From TestMyFirstWatchpoint.py ->
      class HelloWatchpointTestCase(TestBase):
      
          mydir = os.path.join("functionalities", "watchpoint", "hello_watchpoint")
      
          @dsym_test
          def test_hello_watchpoint_with_dsym_using_watchpoint_set(self):
              """Test a simple sequence of watchpoint creation and watchpoint hit."""
              self.buildDsym(dictionary=self.d)
              self.setTearDownCleanup(dictionary=self.d)
              self.hello_watchpoint()
      
          @dwarf_test
          def test_hello_watchpoint_with_dwarf_using_watchpoint_set(self):
              """Test a simple sequence of watchpoint creation and watchpoint hit."""
              self.buildDwarf(dictionary=self.d)
              self.setTearDownCleanup(dictionary=self.d)
              self.hello_watchpoint()
      
      
      # Invocation ->
      [17:50:14] johnny:/Volumes/data/lldb/svn/ToT/test $ ./dotest.py -N dsym -v -p TestMyFirstWatchpoint.py
      LLDB build dir: /Volumes/data/lldb/svn/ToT/build/Debug
      LLDB-137
      Path: /Volumes/data/lldb/svn/ToT
      URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
      Repository Root: https://johnny@llvm.org/svn/llvm-project
      Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
      Revision: 154133
      Node Kind: directory
      Schedule: normal
      Last Changed Author: gclayton
      Last Changed Rev: 154109
      Last Changed Date: 2012-04-05 10:43:02 -0700 (Thu, 05 Apr 2012)
      
      
      
      Session logs for test failures/errors/unexpected successes will go into directory '2012-04-05-17_50_49'
      Command invoked: python ./dotest.py -N dsym -v -p TestMyFirstWatchpoint.py
      compilers=['clang']
      
      Configuration: arch=x86_64 compiler=clang
      ----------------------------------------------------------------------
      Collected 2 tests
      
      1: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
         Test a simple sequence of watchpoint creation and watchpoint hit. ... skipped 'dsym tests'
      2: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
         Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
      
      ----------------------------------------------------------------------
      Ran 2 tests in 1.138s
      
      OK (skipped=1)
      Session logs for test failures/errors/unexpected successes can be found in directory '2012-04-05-17_50_49'
      [17:50:50] johnny:/Volumes/data/lldb/svn/ToT/test $ 
      
      llvm-svn: 154154
  3. Mar 20, 2012
    • Add a '-E' option to the test driver for the purpose of specifying some extra CFLAGS · e344486e
      Johnny Chen authored
      to pass to the toolchain in order to build the inferior programs to be run/debugged
      during the test suite.  The architecture might dictate special CFLAGS which are
      more easily specified in a central place (such as the command line) instead of inside
      make rules.
      
      For Example,
      
      ./dotest.py -v -r /shared/phone -A armv7 -E "-isysroot your_sdk_root" functionalities/watchpoint/hello_watchpoint
      
      will relocate the particular test directory ('functionalities/watchpoint/hello_watchpoint' in this case) to a
      new directory named '/shared/phone'.  The test support files for this particular
      architecture-compiler combination are therefore found under:
      
      /shared/phone.arch=armv7-compiler=clang/functionalities/watchpoint/hello_watchpoint
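      The directory naming above can be sketched with a small path helper (an illustrative re-creation; `relocated_dir` is a hypothetical name, not actual dotest.py code):

```python
import os

def relocated_dir(base, arch, compiler, testdir):
    """Build the relocated test-support directory name described above:
    '<base>.arch=<arch>-compiler=<compiler>/<testdir>'.
    Illustrative helper, not the actual dotest.py implementation."""
    variant = "%s.arch=%s-compiler=%s" % (base, arch, compiler)
    return os.path.join(variant, testdir)
```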
      
      The building of the inferior programs under testing is now working.
      
      The actual launching/debugging of the inferior programs is not yet working,
      nor is the setting of a watchpoint on the phone.
      
      llvm-svn: 153070
  4. Mar 12, 2012
  5. Mar 09, 2012
    • Add the capability on OS X to utilize 'xcrun' to locate the compilers used for... · 934c05d2
      Johnny Chen authored
      Add the capability on OS X to utilize 'xcrun' to locate the compilers used for building the inferior programs
      to be debugged while running the test suite.  By default, compilers is set to ['clang'] and can be overridden
      using the "-C compilerA^compilerB" option.
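      A minimal sketch of the xcrun lookup (hypothetical helper, not the actual dotest.py code; it falls back to the bare compiler name when xcrun is unavailable, e.g. off OS X):

```python
import subprocess

def find_compiler(name="clang"):
    """Ask 'xcrun -find' for the full path to a compiler.
    Falls back to the bare name if xcrun is missing or fails.
    Illustrative sketch only."""
    try:
        out = subprocess.check_output(["xcrun", "-find", name])
        return out.decode("utf-8").strip() or name
    except (OSError, subprocess.CalledProcessError):
        return name
```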
      
      llvm-svn: 152367
    • Change the test driver so that, by default, it takes into consideration of... · 5a9a9883
      Johnny Chen authored
      Change the test driver so that, by default, it takes both the 'x86_64' and 'i386' architectures
      into consideration when building the inferior programs.
      
      Example:
      
      /Volumes/data/lldb/svn/ToT/test $ ./dotest.py -v functionalities/watchpoint
      LLDB build dir: /Volumes/data/lldb/svn/ToT/build/Debug
      LLDB-123
      Path: /Volumes/data/lldb/svn/ToT
      URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
      Repository Root: https://johnny@llvm.org/svn/llvm-project
      Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
      Revision: 152244
      Node Kind: directory
      Schedule: normal
      Last Changed Author: gclayton
      Last Changed Rev: 152244
      Last Changed Date: 2012-03-07 13:03:09 -0800 (Wed, 07 Mar 2012)
      
      
      
      Session logs for test failures/errors/unexpected successes will go into directory '2012-03-08-16_43_51'
      Command invoked: python ./dotest.py -v functionalities/watchpoint
      
      Configuration: arch=x86_64
      ----------------------------------------------------------------------
      Collected 21 tests
      
       1: test_hello_watchlocation_with_dsym (TestWatchLocation.HelloWatchLocationTestCase)
          Test watching a location with '-x size' option. ... ok
       2: test_hello_watchlocation_with_dwarf (TestWatchLocation.HelloWatchLocationTestCase)
          Test watching a location with '-x size' option. ... ok
       3: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
          Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
       4: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
          Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
       5: test_watchpoint_multiple_threads_with_dsym (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
          Test that lldb watchpoint works for multiple threads. ... ok
       6: test_watchpoint_multiple_threads_with_dwarf (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
          Test that lldb watchpoint works for multiple threads. ... ok
       7: test_rw_disable_after_first_stop__with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint but disable it after the first stop. ... ok
       8: test_rw_disable_after_first_stop_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint but disable it after the first stop. ... ok
       9: test_rw_disable_then_enable_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint, disable initially, then enable it. ... ok
      10: test_rw_disable_then_enable_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint, disable initially, then enable it. ... ok
      11: test_rw_watchpoint_delete_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test delete watchpoint and expect not to stop for watchpoint. ... ok
      12: test_rw_watchpoint_delete_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test delete watchpoint and expect not to stop for watchpoint. ... ok
      13: test_rw_watchpoint_set_ignore_count_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test watchpoint ignore count and expect to not to stop at all. ... ok
      14: test_rw_watchpoint_set_ignore_count_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test watchpoint ignore count and expect to not to stop at all. ... ok
      15: test_rw_watchpoint_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint and expect to stop two times. ... ok
      16: test_rw_watchpoint_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint and expect to stop two times. ... ok
      17: test_watchpoint_cond_with_dsym (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
          Test watchpoint condition. ... ok
      18: test_watchpoint_cond_with_dwarf (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
          Test watchpoint condition. ... ok
      19: test_watchlocation_with_dsym_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
          Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
      20: test_watchlocation_with_dwarf_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
          Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
      21: test_error_cases_with_watchpoint_set (TestWatchpointSetErrorCases.WatchpointSetErrorTestCase)
          Test error cases with the 'watchpoint set' command. ... ok
      
      ----------------------------------------------------------------------
      Ran 21 tests in 74.590s
      
      OK
      
      Configuration: arch=i386
      ----------------------------------------------------------------------
      Collected 21 tests
      
       1: test_hello_watchlocation_with_dsym (TestWatchLocation.HelloWatchLocationTestCase)
          Test watching a location with '-x size' option. ... ok
       2: test_hello_watchlocation_with_dwarf (TestWatchLocation.HelloWatchLocationTestCase)
          Test watching a location with '-x size' option. ... ok
       3: test_hello_watchpoint_with_dsym_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
          Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
       4: test_hello_watchpoint_with_dwarf_using_watchpoint_set (TestMyFirstWatchpoint.HelloWatchpointTestCase)
          Test a simple sequence of watchpoint creation and watchpoint hit. ... ok
       5: test_watchpoint_multiple_threads_with_dsym (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
          Test that lldb watchpoint works for multiple threads. ... ok
       6: test_watchpoint_multiple_threads_with_dwarf (TestWatchpointMultipleThreads.WatchpointForMultipleThreadsTestCase)
          Test that lldb watchpoint works for multiple threads. ... ok
       7: test_rw_disable_after_first_stop__with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint but disable it after the first stop. ... ok
       8: test_rw_disable_after_first_stop_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint but disable it after the first stop. ... ok
       9: test_rw_disable_then_enable_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint, disable initially, then enable it. ... ok
      10: test_rw_disable_then_enable_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint, disable initially, then enable it. ... ok
      11: test_rw_watchpoint_delete_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test delete watchpoint and expect not to stop for watchpoint. ... ok
      12: test_rw_watchpoint_delete_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test delete watchpoint and expect not to stop for watchpoint. ... ok
      13: test_rw_watchpoint_set_ignore_count_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test watchpoint ignore count and expect to not to stop at all. ... ok
      14: test_rw_watchpoint_set_ignore_count_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test watchpoint ignore count and expect to not to stop at all. ... ok
      15: test_rw_watchpoint_with_dsym (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint and expect to stop two times. ... ok
      16: test_rw_watchpoint_with_dwarf (TestWatchpointCommands.WatchpointCommandsTestCase)
          Test read_write watchpoint and expect to stop two times. ... ok
      17: test_watchpoint_cond_with_dsym (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
          Test watchpoint condition. ... ok
      18: test_watchpoint_cond_with_dwarf (TestWatchpointConditionCmd.WatchpointConditionCmdTestCase)
          Test watchpoint condition. ... ok
      19: test_watchlocation_with_dsym_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
          Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
      20: test_watchlocation_with_dwarf_using_watchpoint_set (TestWatchLocationWithWatchSet.WatchLocationUsingWatchpointSetTestCase)
          Test watching a location with 'watchpoint set expression -w write -x size' option. ... ok
      21: test_error_cases_with_watchpoint_set (TestWatchpointSetErrorCases.WatchpointSetErrorTestCase)
          Test error cases with the 'watchpoint set' command. ... ok
      
      ----------------------------------------------------------------------
      Ran 21 tests in 67.059s
      
      OK
      
      llvm-svn: 152357
  6. Jan 31, 2012
  7. Jan 18, 2012
  8. Jan 17, 2012
  9. Nov 18, 2011
  10. Nov 17, 2011
    • Add an option '-S' to skip the build and cleanup while running the test. · 0fddfb2c
      Johnny Chen authored
      Use this option with care, as you would need to build the inferior(s) by hand
      and build the executable(s) with the correct name(s).  This option can be used
      with '-# n' to stress test certain test cases n times.
      
      An example:
      
      [11:55:11] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ ls
      Makefile		TestValueAPI.pyc	linked_list
      TestValueAPI.py		change_values		main.c
      [11:55:14] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ make EXE=test_with_dsym
      clang -gdwarf-2 -O0  -arch x86_64   -c -o main.o main.c
      clang -gdwarf-2 -O0  -arch x86_64   main.o -o "test_with_dsym"
      /usr/bin/dsymutil  -o "test_with_dsym.dSYM" "test_with_dsym"
      [11:55:20] johnny:/Volumes/data/lldb/svn/trunk/test/python_api/value $ cd ../..
      [11:55:24] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v -# 10 -S -f ValueAPITestCase.test_with_dsym
      LLDB build dir: /Volumes/data/lldb/svn/trunk/build/Debug
      LLDB-89
      Path: /Volumes/data/lldb/svn/trunk
      URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
      Repository Root: https://johnny@llvm.org/svn/llvm-project
      Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
      Revision: 144914
      Node Kind: directory
      Schedule: normal
      Last Changed Author: gclayton
      Last Changed Rev: 144911
      Last Changed Date: 2011-11-17 09:22:31 -0800 (Thu, 17 Nov 2011)
      
      
      
      Session logs for test failures/errors/unexpected successes will go into directory '2011-11-17-11_55_29'
      Command invoked: python ./dotest.py -v -# 10 -S -f ValueAPITestCase.test_with_dsym
      ----------------------------------------------------------------------
      Collected 1 test
      
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 1.163s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.200s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.198s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.199s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.239s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 1.215s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.105s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.098s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.195s
      
      OK
      1: test_with_dsym (TestValueAPI.ValueAPITestCase)
         Exercise some SBValue APIs. ... ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 1.197s
      
      OK
      [11:55:34] johnny:/Volumes/data/lldb/svn/trunk/test $ 
      
      llvm-svn: 144919
  11. Nov 01, 2011
  12. Oct 28, 2011
  13. Oct 25, 2011
    • Benchmark the turnaround time of starting a debugger and running to the breakpoint with lldb vs. gdb. · fc9e79fb
      Johnny Chen authored
      An example (with /Developer/usr/bin/lldb vs. /usr/bin/gdb):
      
      [13:05:04] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -n -p TestCompileRunToBreakpointTurnaround.py
      1: test_run_lldb_then_gdb (TestCompileRunToBreakpointTurnaround.CompileRunToBreakpointBench)
         Benchmark turnaround time with lldb vs. gdb. ... 
      lldb turnaround benchmark: Avg: 4.574600 (Laps: 3, Total Elapsed Time: 13.723799)
      gdb turnaround benchmark: Avg: 7.966713 (Laps: 3, Total Elapsed Time: 23.900139)
      lldb_avg/gdb_avg: 0.574214
      ok
      
      ----------------------------------------------------------------------
      Ran 1 test in 55.462s
      
      OK
      
      llvm-svn: 142949
  14. Oct 22, 2011
    • Add bench.py as a driver script to run some benchmarks on lldb. · b8da4262
      Johnny Chen authored
      Add benchmarks for expression evaluations (TestExpressionCmd.py) and disassembly (TestDoAttachThenDisassembly.py).
      
      An example:
      [17:45:55] johnny:/Volumes/data/lldb/svn/trunk/test $ ./bench.py 2>&1 | grep -P '^lldb.*benchmark:'
      lldb startup delay (create fresh target) benchmark: Avg: 0.104274 (Laps: 30, Total Elapsed Time: 3.128214)
      lldb startup delay (set first breakpoint) benchmark: Avg: 0.102216 (Laps: 30, Total Elapsed Time: 3.066470)
      lldb frame variable benchmark: Avg: 1.649162 (Laps: 20, Total Elapsed Time: 32.983245)
      lldb stepping benchmark: Avg: 0.104409 (Laps: 50, Total Elapsed Time: 5.220461)
      lldb expr cmd benchmark: Avg: 0.206774 (Laps: 25, Total Elapsed Time: 5.169350)
      lldb disassembly benchmark: Avg: 0.089086 (Laps: 10, Total Elapsed Time: 0.890859)
      
      llvm-svn: 142708
  15. Oct 21, 2011
  16. Oct 20, 2011
    • Parameterize the iteration count used when running benchmarks, instead of... · 38f9daa3
      Johnny Chen authored
      Parameterize the iteration count used when running benchmarks, instead of hard-coding it inside the test cases.
      Add a '-y count' option to the test driver for this purpose.  An example:
      
       $  ./dotest.py -v -y 25 +b -p TestDisassembly.py
      
      ...
      
      ----------------------------------------------------------------------
      Collected 2 tests
      
      1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
         Test disassembly on a large function with lldb vs. gdb. ... 
      gdb benchmark: Avg: 0.226305 (Laps: 25, Total Elapsed Time: 5.657614)
      lldb benchmark: Avg: 0.113864 (Laps: 25, Total Elapsed Time: 2.846606)
      lldb_avg/gdb_avg: 0.503146
      ok
      2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
         Test disassembly on a large function with lldb vs. gdb. ... 
      lldb benchmark: Avg: 0.113008 (Laps: 25, Total Elapsed Time: 2.825201)
      gdb benchmark: Avg: 0.225240 (Laps: 25, Total Elapsed Time: 5.631001)
      lldb_avg/gdb_avg: 0.501723
      ok
      
      ----------------------------------------------------------------------
      Ran 2 tests in 41.346s
      
      OK
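      The 'Avg (Laps, Total Elapsed Time)' figures in the output above can be produced by a simple lap-timing accumulator; this is a hypothetical sketch, not the test suite's own timing class:

```python
import time

class Stopwatch(object):
    """Lap-timing accumulator that yields the Avg/Laps/Total Elapsed
    Time figures shown in the benchmark output.  Illustrative sketch."""
    def __init__(self):
        self.laps = 0
        self.total_elapsed = 0.0
        self._start = None

    def __enter__(self):
        # Start timing one lap.
        self._start = time.time()
        return self

    def __exit__(self, *exc):
        # Accumulate the lap's elapsed time.
        self.total_elapsed += time.time() - self._start
        self.laps += 1
        return False

    def avg(self):
        return self.total_elapsed / self.laps if self.laps else 0.0
```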
      
      llvm-svn: 142598
  17. Oct 11, 2011
  18. Sep 16, 2011
  19. Aug 26, 2011
    • Add a new attribute self.lldbHere, representing the fullpath to the 'lldb' executable · d890bfc9
      Johnny Chen authored
      built locally from the source tree.  This is distinguished from self.lldbExec, which
      can be used by test/benchmarks to measure performance against other debuggers.
      
      You can use the environment variable LLDB_EXEC to specify self.lldbExec to the dotest.py
      test driver; otherwise it is populated with self.lldbHere.
      
      Modify the regular tests under the test dir, i.e., not test/benchmarks, to use self.lldbHere.
      Also modify the benchmarks tests to use self.lldbHere when they need an 'lldb' executable
      with debug info to do the performance measurements.
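      The described fallback can be sketched as follows (the function name and build path are placeholder assumptions, not dotest.py internals):

```python
import os

def resolve_lldb_exec(lldb_here="/path/to/build/Debug/lldb"):
    """Prefer the LLDB_EXEC environment variable when set; otherwise
    fall back to the locally built lldb ('lldb_here' is a placeholder)."""
    return os.environ.get("LLDB_EXEC") or lldb_here
```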
      
      llvm-svn: 138608
  20. Aug 16, 2011
  21. Aug 13, 2011
  22. Aug 12, 2011
  23. Aug 04, 2011
  24. Jul 30, 2011
    • Add a @benchmarks_test decorator for test methods we want to categorize as benchmarks tests. · 5ccbccfc
      Johnny Chen authored
      The test driver now takes an option "+b" which enables running just the benchmarks tests.
      By default, tests decorated with the @benchmarks_test decorator do not get run.
      
      Add an example benchmarks test directory which contains nothing for the time being,
      just to demonstrate the @benchmarks_test concept.
      
      For example,
      
      $ ./dotest.py -v benchmarks
      
      ...
      
      ----------------------------------------------------------------------
      Collected 2 tests
      
      1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
         Test repeated expressions with gdb. ... skipped 'benchmarks tests'
      2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
         Test repeated expressions with lldb. ... skipped 'benchmarks tests'
      
      ----------------------------------------------------------------------
      Ran 2 tests in 0.047s
      
      OK (skipped=2)
      $ ./dotest.py -v +b benchmarks
      
      ...
      
      ----------------------------------------------------------------------
      Collected 2 tests
      
      1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
         Test repeated expressions with gdb. ... running test_with_gdb
      benchmarks result for test_with_gdb
      ok
      2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
         Test repeated expressions with lldb. ... running test_with_lldb
      benchmarks result for test_with_lldb
      ok
      
      ----------------------------------------------------------------------
      Ran 2 tests in 0.270s
      
      OK
      
      Also mark some Python API tests which are missing the @python_api_test decorator.
      
      llvm-svn: 136553
    • Add a redo.py script which takes a session directory name as an argument and digs into the directory · 4a57d122
      Johnny Chen authored
      to find the tests which failed/errored and need re-running.  The dotest.py test driver
      script is modified to allow specifying multiple -f testclass.testmethod options on the command line
      to accommodate the redo functionality.
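      The redo.py idea can be sketched as a directory scan (the log-file naming scheme assumed here is illustrative, not the actual session-log format):

```python
import os
import re

def collect_filterspecs(session_dir):
    """Scan a session directory for log-file names that encode
    'SomeTestCase.test_method' and return the filterspecs to re-run
    with '-f'.  The naming scheme matched here is an assumption."""
    pattern = re.compile(r"([A-Za-z]\w*TestCase)\.(test_\w+)")
    specs = []
    for name in sorted(os.listdir(session_dir)):
        m = pattern.search(name)
        if m:
            specs.append("%s.%s" % (m.group(1), m.group(2)))
    return specs
```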
      
      An example,
      
       $ ./redo.py -n 2011-07-29-11_50_14
      adding filterspec: TargetAPITestCase.test_find_global_variables_with_dwarf
      adding filterspec: DisasmAPITestCase.test_with_dsym
      Running ./dotest.py -v  -f TargetAPITestCase.test_find_global_variables_with_dwarf -f DisasmAPITestCase.test_with_dsym
      
      ...
      
      ----------------------------------------------------------------------
      Collected 2 tests
      
      1: test_with_dsym (TestDisasmAPI.DisasmAPITestCase)
         Exercise getting SBAddress objects, disassembly, and SBAddress APIs. ... ok
      2: test_find_global_variables_with_dwarf (TestTargetAPI.TargetAPITestCase)
         Exercise SBTarget.FindGlobalVariables() API. ... ok
      
      ----------------------------------------------------------------------
      Ran 2 tests in 15.328s
      
      OK
      
      llvm-svn: 136533
  25. Jun 25, 2011
  26. Jun 21, 2011
  27. Jun 20, 2011
  28. Jun 14, 2011
  29. May 18, 2011