Literature

We will mainly be using Algorithms, 4th Edition by Robert Sedgewick & Kevin Wayne, Addison-Wesley, 2011 (ISBN 9780321573513); see the author site and the book site. (You need a copy of this book.)

Figure 1. Algorithms, 4th ed. (book cover)

Alda Run

Do not abuse annotations in tests.

Disabling coverage by means of an annotation has become trendy. Although it may have its place in production code (which we doubt, see the remarks below), it is not meant to defeat our teacher tests. Therefore, before we allow your code into our 'test arena', we disable such annotations, because although the spirit might be willing, the flesh is sometimes weak: the annotation might be put on code that is not actually @Generated by some framework or IDE. Since we have no time to write a full-fledged parser, we simply comment out the annotations by means of a one-line sed script and a loop.

# comment out the coverage-defeating annotations in the arena copy of the sources
for f in $(grep -Prl '@(Generated|NoTeacherTestCoverage)' "${arena}/src/main/java" --include "*.java"); do
    sed -i 's#@Generated#//@Generated#;s#@NoTeacherTestCoverage#//@NoTeacherTestCoverage#' "$f"
done

The script does not modify your code in the repository; it is applied to a copy just before the tests are run. The project is copied to a ramdisk for speed and to avoid disk wear. After the test run, the resulting reports are copied to the publication server.

  • The best way to improve your coverage is to remove code that is not required by the business. It is bad style to implement equals and hashCode just for testing.

  • You do not always need equals and hashCode. If their only purpose is testing, use the AssertJ facilities to compare the relevant fields through getters, such as a field-by-field comparison; see the sketch below.
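
  A minimal sketch, assuming AssertJ 3.x and JUnit 5 on the test classpath; the Student stand-in below is hypothetical and only illustrates the idea:

  import static org.assertj.core.api.Assertions.assertThat;

  import org.junit.jupiter.api.Test;

  class StudentComparisonTest {

      // minimal stand-in for the class under test, deliberately without equals/hashCode
      static class Student {
          final int studentNumber;
          final String name;

          Student(int studentNumber, String name) {
              this.studentNumber = studentNumber;
              this.name = name;
          }
      }

      @Test
      void studentsHaveEqualFields() {
          Student expected = new Student(1234567, "Pieter");
          Student actual = new Student(1234567, "Pieter");

          // compares the declared fields one by one; equals on Student is not needed
          assertThat(actual)
                  .usingRecursiveComparison()
                  .isEqualTo(expected);
      }
  }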

  • If your class is immutable (only final fields), then equals and hashCode might be useful if you want to use the instances as keys in a HashMap. If not, you typically do not need equals and friends; a Comparable implementation or a Comparator might have its use instead. That also allows mapping by means of a SortedMap<K,V>, with TreeMap<K,V> as the first implementation that comes to mind; see the sketch below.
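
  A minimal sketch of that alternative, using a hypothetical immutable Student as key in a TreeMap via a Comparator, so no equals/hashCode is required on the key type:

  import java.util.Comparator;
  import java.util.SortedMap;
  import java.util.TreeMap;

  class SortedStudentRegistry {

      // hypothetical immutable key type, no equals/hashCode overridden
      static class Student {
          final int studentNumber;
          final String name;

          Student(int studentNumber, String name) {
              this.studentNumber = studentNumber;
              this.name = name;
          }
      }

      public static void main(String[] args) {
          // the map orders and looks up keys via the comparator, not via equals/hashCode
          Comparator<Student> byNumber = Comparator.comparingInt(s -> s.studentNumber);
          SortedMap<Student, String> grades = new TreeMap<>(byNumber);

          grades.put(new Student(1234567, "Pieter"), "8.5");
          grades.put(new Student(7654321, "Marie"), "7.0");

          // lookup works through the comparator, even with a fresh Student instance
          System.out.println(grades.get(new Student(1234567, "Pieter")));
      }
  }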

  • If you really want the comfort of equals in your test code, write a custom static boolean equals(my, other) method in your test code that does what your equals would do. With a few tricks you can even reuse the IDE-generated code by replacing this with my.

  // requires import java.util.Objects
  static boolean equals(Student my, Student other) {
    // no type test needed as in a normal equals(Object): both parameters are already Student
    if (my == other) return true;                   // also covers both references being null
    if (my == null || other == null) return false;  // this is what Objects.equals(a, b) does too
    // both are non-null now, so method invocations and field tests are safe
    if (!Objects.equals(my.getBirthDay(), other.getBirthDay())) return false;
    // for the primitive fields
    if (my.studentNumber != other.studentNumber) return false;
    // etc.
    return true;
  }

and use it in your own assert method.

  void studentAssertEquals(Student a, Student b) {

    if (!equals(a,b)){
      throw new AssertionError(String.format("expected student a=%s and b=%s to be equal, but they were not.", a, b));
    }
  }
  • During development, having a toString, equals, and hashCode might be useful, but throw them out (best) or comment them out (potential technical debt) when the business does not need them. If the methods were generated, dropping them is no real loss, and the IDE will gladly re-generate them when needed.

    • In a real project, having code that is not tested or used is technical debt, because such code suggests that it is well defined business-wise (as in which fields to consider or how to format) without proper requirements or proof. I have been in the same situation in projects and simply dropped the methods that were not demanded by the business, or made sure that the business code actually uses them.

  • For those cases where equals and hashCode are defined for (hash) mapping (use the final fields only), the test recipe from PRC2 is well defined. You might want to add that to your test toolbox; a generic sketch of such a test follows below.
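
  A sketch of what such a test could look like (a generic contract check, not necessarily the exact PRC2 recipe), assuming AssertJ, JUnit 5, and Java 17+ records; the Student record is hypothetical:

  import static org.assertj.core.api.Assertions.assertThat;

  import org.junit.jupiter.api.Test;

  class StudentEqualsContractTest {

      // hypothetical immutable class; the record supplies equals/hashCode over all fields
      record Student(int studentNumber, String name) {}

      @Test
      void equalsAndHashCodeContract() {
          Student ref = new Student(1234567, "Pieter");
          Student equal = new Student(1234567, "Pieter");
          Student other = new Student(7654321, "Marie");

          assertThat(ref).isEqualTo(ref);                          // reflexive
          assertThat(ref).isEqualTo(equal);                        // same field values
          assertThat(ref).isNotEqualTo(other);                     // different field values
          assertThat(ref).isNotEqualTo(null);                      // never equal to null
          assertThat(ref).isNotEqualTo("not a student");           // never equal to another type
          assertThat(ref.hashCode()).isEqualTo(equal.hashCode());  // hashCode consistent with equals
      }
  }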

  • If you find a business case (== test case) that uses the API only and covers your otherwise uncovered code, you can of course suggest that we include such test code in our teacher test suite.

Tip

When debugging and testing, a useful toString() is invaluable.

  • In the teacher's tests, we invoke toString() when producing error messages on failed tests, or even before an assert is done.

  • Sometimes this object.toString() triggers a NullPointerException. It makes finding problems in your code really hard if the very tool used to report them (toString) throws a NullPointerException itself.

  • Not overriding toString at all would then be the better choice, because the default implementation never throws exceptions.

  • To test if your toString is the culprit, comment it out and re-run your tests.

  • If you want a toString and your class has nullable fields, wrap each such field in Objects.toString(nullableField), as in
    … + "nullableField=" + Objects.toString(nullableField) + …; see the sketch after this list.

  • The nullable field should of course have a well-behaved toString() too.
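
A minimal sketch of such a null-safe toString, with a hypothetical Student that has a nullable birthDay field:

  import java.time.LocalDate;
  import java.util.Objects;

  class Student {

      final int studentNumber;
      final LocalDate birthDay; // may be null

      Student(int studentNumber, LocalDate birthDay) {
          this.studentNumber = studentNumber;
          this.birthDay = birthDay;
      }

      @Override
      public String toString() {
          // Objects.toString is null-safe: a null birthDay becomes the string "null"
          // instead of risking a NullPointerException by invoking methods on it
          return "Student{studentNumber=" + studentNumber
                  + ", birthDay=" + Objects.toString(birthDay) + "}";
      }

      public static void main(String[] args) {
          System.out.println(new Student(1234567, null)); // Student{studentNumber=1234567, birthDay=null}
      }
  }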

The test results of the student assignments, including test and optional coverage reports as well as the teacher's tests, can be found per topic in the following places:

Alda Run Rules

All testing described below is done on a best-effort basis. Things may break when running alien (that is: your) code.

The student repositories are regularly checked for a new code revision.

  1. The student code (and test code) is compiled. On any compiler error, the compiler output is published, then STOP.

  2. The student tests are run on the student's code. If there are errors, the error report is published, then STOP.

  3. A coverage test run is performed with the code and test code above. The resulting coverage report is published.

  4. If all tests are green in the above and the code coverage lies above the threshold, the teacher's tests are run.

    1. The teacher’s tests are combined with the student’s code and compiled. On any compiler error, the compiler output is published, then STOP.

    2. The teacher’s tests are run on the student’s code. If there are any errors, the error report is published, then STOP.

    3. If there are no errors in the previous step, a coverage test run is done and a coverage report is published.

The goal of the exercise is to pass the teacher’s tests with no errors. The winner is the student who meets the goal at the earliest date-time. The date-time used is the svn commit time.