CS-314, Spring 2016
Assignment 3. Testing the Adventure Game

Unit testing with Black Box techniques. Unit testing with White Box techniques.

This is an assignment with both team and individual portions.


Check the A3 discussion on the Discussion board regularly.

(Expect Updates.)


DUE Wednesday 9 March 2016, 11:55 PM

CREDIT: 100 assignment points possible.

Updates:


User Stories:

User Story 1: As a student in CS 314, I want to develop unit tests for the Adventure game, so that I write the fewest number of tests that uncover the most problems.

Acceptance Criteria:

  1. Split story A: As a CS 314 student, I want to write Black Box (BB) tests for the Adventure Game description and a specific list of classes and methods in the Adventure game (see the Description section below), so that I get high branch coverage just by writing BB tests. (See the Notes in the Description section for additional information you need to deliver the acceptance criteria below.)
  2. Acceptance Criteria (AC):
    1. Appropriate BB tests are identified, beginning with tests derived from the game description and APIs of the list of classes/methods that need to be tested.
    2. For each test, the input domain(s) are identified, their equivalence classes determined, appropriate boundary conditions identified, and complete test cases derived.
    3. For each JUnit test class, a short description of what is being tested, the information determined from AC 1 and 2, and the initial state of the object under test are documented as comments at the beginning of the test class file. Actual test cases are added as comments just above the JUnit methods that implement them. A short description of how the output state is checked for correctness must be part of the test case comments. (A sketch of such a test class appears after this list.)
    4. The new BB tests are run and their results documented in the overview.txt file, along with the number of defects that they find. These defects must be fixed, and all the tests must pass.
    5. The new BB tests are run (along with any previously-written JUnit tests) under the EclEmma coverage tool, and the coverage results for classes, methods, lines, and branches are documented in the overview.txt file.
    6. A list of all defects found through BB testing is documented in the overview.txt file: for each defect, identify and describe the test and the inputs, and the expected and actual behavior.
    7. Meet 'Definition of Done' (DoD), below.
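
For reference, here is a minimal sketch of a BB test class that satisfies AC 2 and 3 of Split User Story A. The real class and method names must come from the list in the Description section; Door, Key, unlock(), and isLocked() below are hypothetical placeholders, and the comment layout follows AC 3.

    import static org.junit.Assert.*;
    import org.junit.Before;
    import org.junit.Test;

    /*
     * Black Box tests for Door.unlock(Key). (Hypothetical names --
     * substitute a class/method from the Description section.)
     * Input domain: the Key passed to unlock().
     * Equivalence classes: (1) the matching key, (2) a non-matching key.
     * Boundary condition: unlocking an already-unlocked door.
     * Initial state: a locked Door constructed with a known matching key.
     * Output check: Door.isLocked() is queried after each call.
     */
    public class DoorBBTest {
        private Door door;
        private Key rightKey;
        private Key wrongKey;

        @Before
        public void setUp() {
            rightKey = new Key("red");
            wrongKey = new Key("blue");
            door = new Door(rightKey);   // doors start locked
        }

        // Test case: equivalence class 1 -- the matching key unlocks the door.
        @Test
        public void unlockWithMatchingKey() {
            door.unlock(rightKey);
            assertFalse(door.isLocked());
        }

        // Test case: equivalence class 2 -- a non-matching key leaves it locked.
        @Test
        public void unlockWithWrongKey() {
            door.unlock(wrongKey);
            assertTrue(door.isLocked());
        }
    }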

  3. Split story B: As a CS 314 student, I want to write White Box (WB) tests that increase unit test coverage of the Adventure game so that I have more coverage than with just BB tests, and none of the tests overlap.
  4. Acceptance Criteria (AC):
    1. Based on the coverage results from Split User Story A, the classes/methods that need higher coverage are determined.
    2. Specific WB test input that exercises the code is determined, and JUnit tests are created. Each new WB JUnit test class contains comments at the top that indicate that this is a WB test, the purpose of the tests in the class, and the test cases, similar to the comments required by AC 3 of Split User Story A. (See the sketch after this list.)
    3. All tests (BB, WB, and all previously-written JUnit tests) are run under the EclEmma coverage tool and the coverage results for classes, methods, lines, and branches are documented, along with the number of defects that are found. The defects must be fixed, and all tests must pass.
    4. WB tests are added, using the instructions in AC 2 and 3 of Split User Story B, until 100% branch coverage is achieved for the specific list of classes and methods, or until branch coverage no longer increases as new WB tests are added.
    5. If 100% branch coverage is not achieved, the reasons for this are documented in the overview.txt file.
    6. WB tests do not overlap BB tests. That is, as each new test is added, coverage results increase. The goal is that nothing is tested with both BB and WB tests.
    7. A list of all defects found through WB testing is documented in the overview.txt file: for each defect, identify and describe the test and the inputs, and the expected and actual behavior.
    8. Meet 'Definition of Done' (DoD), below.
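
As an illustration of AC 2, suppose the coverage run from Split User Story A shows that one branch of a hypothetical Player.take(Treasure) method (say, the inventory-full case) is never exercised by the BB tests. A WB test written to drive execution down that branch might look like the sketch below; again, every name here is a placeholder for whatever the EclEmma report actually flags.

    import static org.junit.Assert.*;
    import org.junit.Test;

    /*
     * White Box test: this class exists solely to cover branches the BB
     * tests missed. Hypothetical target: the inventory-full branch of
     * Player.take(Treasure), reported uncovered by EclEmma.
     * Test case: fill the inventory to capacity, then take one more
     * Treasure; expect take() to return false and the inventory size to
     * remain at capacity.
     */
    public class PlayerWBTest {

        // Test case: drive execution into the inventory-full branch.
        @Test
        public void takeFailsWhenInventoryIsFull() {
            Player player = new Player();
            for (int i = 0; i < Player.INVENTORY_CAPACITY; i++) {
                assertTrue(player.take(new Treasure("coin " + i)));
            }
            assertFalse(player.take(new Treasure("one too many")));
            assertEquals(Player.INVENTORY_CAPACITY, player.inventorySize());
        }
    }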

Definition of Done (DoD):

For all user stories:

For CS314 development:

Description:
The game description is as follows:

Playing the Adventure Game, you (the player) visit a series of interconnected underground rooms in which treasure is hidden. You can move from room to room searching for treasure. The objective is to bring the treasure out of the cave. Some rooms have doors that are locked; to open a locked door, you must find its key, which is placed somewhere in the labyrinth. As you go from room to room, you can

Besides the functionality outlined in the game description, you must develop tests for the following list of classes and methods:

Notes:
  1. Your tests should bypass the program's interface: they should not run the GUI code. Instead, the tests should create instances of the classes under test and set them up so that different configurations can be tested. (A sketch appears after these notes.)

  2. Use the EclEmma Eclipse plugin coverage tool to track test coverage. Some further information:

    • Install the EclEmma Eclipse plugin (from http://update.eclemma.org) as described here. (The plugin is installed if you see the coverage launcher in the toolbar of the Java perspective: a green circle with a white arrowhead pointing to the right and two squares at the bottom right of the circle, the left one green and the right one red.)

    • Instructions for using the plugin are given here.

    • You can highlight your AllTests.java class in the test package and click the coverage icon to run all the JUnit tests with coverage.

    • A 'Coverage' tab should be displayed, showing the results of the coverage run. Expand the project and packages under the 'Element' column to see data for each class, and expand a class to see method data. On the far right of this window there is a drop-down menu that lets you select the type of coverage reported. Line counters, branch counters, and method counters are probably the most useful. (The 'type' counter appears to record how many classes are covered.)

    • Information about counters can be found here.

  3. Create coverage reports. To do this, first run the tests you want, merging the sessions if you need to. Next go to File->Export, choose Java->Coverage Report, and choose HTML report. Save this report. Two files and a folder will be generated: the files are index.html and .sessions.html, and the directory is .resources. (You can also export the report as a zip file.) If you open the index.html file, you will see the summary for the project; click on the project to see package and class data. The measurements reported for classes are coverage for bytecode instructions, branches, complexity, lines, methods, and classes. Record these numbers in the overview.txt file. See Split User Story A, AC 5 for the specific items that must be included in the coverage numbers reported.

  4. Make sure to test under a variety of Adventure Game cave configurations, such as caves with several Treasure and Key objects and rooms with several doors.
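
Putting Notes 1 and 4 together, tests can assemble cave configurations directly in code rather than through the GUI. Below is a sketch of such a fixture; as before, the class names (Room, Door, Key, Treasure, Player) and their constructors are hypothetical stand-ins for the real Adventure game classes.

    import static org.junit.Assert.*;
    import org.junit.Before;
    import org.junit.Test;

    /*
     * Example of building a cave configuration directly (Note 1), with
     * several objects in one room (Note 4). All names are placeholders
     * for the real Adventure game classes.
     */
    public class CaveConfigTest {
        private Room entrance;
        private Room vault;

        @Before
        public void buildTwoRoomCave() {
            entrance = new Room("entrance");
            vault = new Room("vault");

            Key goldKey = new Key("gold");
            entrance.connect(vault, new Door(goldKey));  // locked door between rooms

            // Several items in one room, per Note 4.
            entrance.add(goldKey);
            vault.add(new Treasure("crown"));
            vault.add(new Treasure("chalice"));
        }

        @Test
        public void vaultUnreachableWhileDoorIsLocked() {
            Player player = new Player(entrance);
            assertFalse(player.move(vault));  // cannot pass a locked door
        }
    }

If your test package does not already have the AllTests class mentioned in Note 2, a standard JUnit 4 suite that EclEmma can launch looks like this (list your actual test classes):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({ DoorBBTest.class, PlayerWBTest.class, CaveConfigTest.class })
    public class AllTests { }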

Tasks:

1. Continue splitting the user stories if needed. With your team, estimate story points for all split user stories. If you find a split story still has 13 or more story points, split it.

2. With your team, create the tasks needed to implement each split user story so that the Acceptance Criteria can be met. Make sure to meet the definition of done listed above. Add to your team's definition of done if needed. Estimate the number of hours for each task. Create tasks that are between 1 and 10 hours long.

3. Decide which team members will be working on which tasks.

4. Add the split user stories and tasks to the UserStory folder of your project, one text file for each user story: e.g. A3-US-1.txt. Please see A2-US-1.txt from the previous assignment (A2) for an example of the file format; estimated story points, along with estimated and actual task times, must be included.

5. As each person finishes a task, update the task information in the respective User Story file to include the time it actually took to complete.

6. Make sure the plain text file overview.txt in the src directory has been modified by each person so that it contains the information required in A3rubric.txt.

7. Create a single jar file of the code on the master branch that includes sources, test sources, the overview.txt file, and the user story file(s) (e.g. A3-US-1.txt). (See A3rubric.txt, PART 1 for details on the required jar file structure.) Part of your grade for A3 is that your program compiles and runs on the Computer Science Department's Linux systems, so test your program appropriately. Your program will be tested in Eclipse.

8. Submit your jar file to the Canvas dropbox for Assignment 3 (A3) to check in your assignment.

Post questions or comments about this assignment, or problems with the design, to the Assignment 3 Discussion on the Canvas discussion board, or send email to cs314@cs.colostate.edu.


Grading: (for details see A3rubric.txt)

1. You will receive a team grade based on the Canvas submission, your tests, test results, coverage results, and the game functionality.

2. You will receive an individual grade based on GitHub logs of your repository submissions and review system comments. Each team member is expected to have created and used branches for their own work for A3, with frequent pushes from their local systems to GitHub, and review requests when they are ready to merge to the master branch. Each team member is expected to review code changes proposed by their teammates and provide comments.

