CSC223, Class 16: Unit Testing and JUnit

Admin:
* Sam is unhappy about attendance.
* Homework due.
  * Two groups: Implementation of WorldState
  * Two groups: Implementation of Bugs that read instructions from a file
  * Present on Monday! (10 minutes)
* Next homework: Comprehensively test java.util.Vector using JUnit.
* More homework: Read paper on JUnit.
* Task for SamTang.

Overview:
* Testing a particular class.
* Generalizing: What should a testing infrastructure look like?
* JUnit basics.

Background: What is Unit Testing?
* Testing units.
* Making sure that the small, independent components of your program function properly.
  * Imperative programming: "components of your algorithm"
  * Object-oriented programming: "individual classes"

Background: Why do we do unit testing?
* Rapid feedback about whether your code works or not. (XP)
* Supports the extreme version of testing: Test all the time! (XP)
* Easier integration: You're sure your classes (and your colleagues' classes) work, so the only problems are from integration.
* Testing provides a form of documentation and specification: "This is what I expect it to do."
* Testing lets you be brave: If you make lots of changes, you can quickly see whether or not you've broken things.
* Testing lets you have group ownership.
* Outside of XP, are there reasons?
  * Documentation/specification
  * Rapid feedback

If testing is good, we probably benefit from a standard way to test.
* Sam believes that we should think generally about the issue of testing and design a framework.
* Sam's sarcastic approach to XP: We'll grow a testing framework by writing tests.

Point class, problem one:
* Test concordance of the constructor, double getX(), and double getY().
  * The constructor is Point(double xcoord, double ycoord).
* Design question: If multiple tests fail, do we want to report all the errors, or just the first one?
  * SamTang: All of 'em, since it's a specification issue.
  * Ryan: Later ones may depend on earlier ones, so you might have propagating errors.
* Can't we just print out the point? No. Automated testing.
* Question: Are we better off
  * Ryan: starting with the assumption that everything is okay and then correcting that assumption when things fail, or
  * Evan: assuming that something is wrong and correcting that assumption when everything succeeds? (We could use a big, ugly vector of Booleans.)
* Do we simply return "failure" or enumerate the errors?
  * The latter.
  * How? Keep a vector of all the error messages.
  * Can we use exceptions?
    * We want all the errors! (Except Sam's spelling errors.)
    * Perhaps we want a vector of exceptions.
* Have we tested enough? No! We need to test a variety of points.
  * Including coordinates of 0.
  * Including negative coordinates.
  * Perhaps randomly.
  * Nested for loop:
    for (x = -10; x < 20; x += 0.3)
      for (y = -7; y < 23; y += 0.1)
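The discussion above could be sketched in plain Java: collect every error message in a list rather than stopping at the first failure, then sweep a variety of points (negative, zero, and positive) with the nested loop. The Point class here is a minimal stand-in with the constructor signature named in the notes; a real run would test the class under development instead.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the class under test, matching the constructor
// signature from the notes: Point(double xcoord, double ycoord).
class Point {
    private final double x, y;

    Point(double xcoord, double ycoord) {
        x = xcoord;
        y = ycoord;
    }

    double getX() { return x; }
    double getY() { return y; }
}

public class PointTest {
    // Check one point; append a message to errors for every failure
    // instead of stopping at the first one.
    static void checkPoint(double x, double y, List<String> errors) {
        Point p = new Point(x, y);
        if (p.getX() != x) {
            errors.add("getX: expected " + x + ", got " + p.getX());
        }
        if (p.getY() != y) {
            errors.add("getY: expected " + y + ", got " + p.getY());
        }
    }

    // Sweep a variety of points -- negative, zero, and positive
    // coordinates -- using the nested loop from the notes, and
    // return the full list of error messages.
    static List<String> sweep() {
        List<String> errors = new ArrayList<>();
        for (double x = -10; x < 20; x += 0.3) {
            for (double y = -7; y < 23; y += 0.1) {
                checkPoint(x, y, errors);
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        List<String> errors = sweep();
        if (errors.isEmpty()) {
            System.out.println("all points passed");
        } else {
            errors.forEach(System.out::println);
        }
    }
}
```

Note that getX() == x compares the very double the constructor stored, so exact comparison is safe here; tests of computed coordinates would need a tolerance.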