Thursday, May 28, 2009

The Cycle

Hi all,

Due to work commitments, I am not going to be able to come in today, but that shouldn't be much of a problem. You are in a bugfix-and-polish phase now, and there is a pretty well-defined way you should be working, so you don't need me much at all. The Cycle goes like this:

0. Look on the buglist and fix any bugs assigned to you.
1. Play the game
2. Find Bugs
3. Lodge Bugs
4. Goto 0
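The Cycle really is just a loop. Here's a throwaway Python sketch of it — the bug list and the play-and-find step are placeholders, since in real life those steps are people playing and fixing the game, not code:

```python
# The Cycle sketched literally as a loop. The bug list structure and
# play_and_find_bugs() are placeholders for illustration only.

def play_and_find_bugs():
    # Placeholder: a human plays the game and lodges what they find.
    return []

def run_cycle(buglist, lessons=3):
    for _ in range(lessons):
        # Step 0: fix any bugs assigned to you.
        buglist = [bug for bug in buglist if bug["assigned_to"] != "me"]
        # Steps 1-3: play the game, find bugs, lodge them.
        buglist.extend(play_and_find_bugs())
        # Step 4: goto 0 (the loop does that for us).
        if not buglist:
            break
    return buglist

print(run_cycle([{"id": 1, "assigned_to": "me"}]))  # []
```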

Step 0 can actually be broken down a bit. Firstly, it is tightly integrated with step 3 - Lodge Bugs. You need to make sure that whatever system you are using to keep track of bugs has a way to assign them to particular people, or at least a way for people to look through the list, take a bug they think they could work on, and mark it as taken (so that two people don't start working on the same thing at the same time).
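The 'take a bug and mark it as taken' part doesn't need anything fancy — a shared spreadsheet does it. As a Python sketch (the field names here are made up for illustration, not from any real tracker):

```python
# Minimal 'claim a bug' sketch. Field names (id, summary, assigned_to)
# are illustrative -- the point is that a claim gets recorded BEFORE
# anyone starts work, so two people never fix the same bug.

bugs = [
    {"id": 1, "summary": "Player falls through floor", "assigned_to": None},
    {"id": 2, "summary": "Score HUD overlaps timer", "assigned_to": None},
]

def claim_bug(bugs, bug_id, person):
    """Mark a bug as taken, refusing if someone already has it."""
    for bug in bugs:
        if bug["id"] == bug_id:
            if bug["assigned_to"] is not None:
                raise ValueError(f"Bug {bug_id} already taken by {bug['assigned_to']}")
            bug["assigned_to"] = person
            return bug
    raise KeyError(f"No bug with id {bug_id}")

claim_bug(bugs, 1, "Timmy")
print(bugs[0]["assigned_to"])  # Timmy
```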

The other thing about step 0 is the way your changes (art, code, whatever) are integrated into the game. When this is done right, in a fluid manner, we call it continuous integration - the last thing you want is for everyone to have to stop work and wait while Jimmy updates the codebase with Johnny's new title screen. Nobody wants to hear 'nobody do anything to sprites.py - Timmy is working on it!'.

Team B is doing it this way:

Andrew (P) has a Subversion repository set up on his portable drive, and he has installed TortoiseSVN on his local computer. This gives him the ability to check things in and out of the repository, label 'snapshots', and so on. He is the 'integrator' for the team: it is his responsibility to take whatever code/art changes people are making and put them into the game. Normally, everyone would have their own Subversion checkout, but since we could never get that working, we have a central person arbitrating all the changes. It's not ideal, but it will work.

So at the beginning of the lesson on a Friday, Andrew takes any work that people have done during the week and commits it to the repository, changing code to reference new art, play nice with other new code, whatever. Once all that is done (hopefully in the first 10-20 minutes of the lesson), he tags it in Subversion as DEV_xxx (some development label). He then does an SVN Export of that tag, and this export is distributed to all members of the team to test or work on (on a USB drive, whatever).

Every time there is some new code to fix a bug, someone comes to Andrew with their changes, and he integrates them into the release. By the end of the lesson he has integrated a number of changes, and now he makes another SVN tag, REL_xxx - a 'release' label. He exports that release label onto his teammates' USB drives, and they take it home to work on and test during the week.
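The DEV_xxx and REL_xxx labels are just sequential names, so the integrator only ever needs 'the next number'. A tiny helper, if you wanted to automate it (the three-digit zero padding is an assumption, matching the DEV_034 style):

```python
# Generate the next sequential tag name, e.g. DEV_002 after DEV_001.
# The three-digit zero padding is an assumption for illustration.

def next_tag(existing_tags, prefix):
    """Return the next tag for a prefix, given all tags made so far."""
    numbers = [
        int(tag.split("_", 1)[1])
        for tag in existing_tags
        if tag.startswith(prefix + "_")
    ]
    highest = max(numbers, default=0)
    return f"{prefix}_{highest + 1:03d}"

tags = ["DEV_001", "REL_001", "DEV_002"]
print(next_tag(tags, "DEV"))  # DEV_003
print(next_tag(tags, "REL"))  # REL_002
```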

The good thing about labels/tags (same thing) is that anybody finding a bug should enter it into the team's bug database as being found on that particular label - so the bug has a 'Found In' field, which would contain 'DEV_034' or 'REL_002'. That way, you know when it appeared and when it was fixed (yep, you need a 'Fixed In' field, too).
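A bug record with those two fields might look something like this — the field names are illustrative, use whatever your tracker actually supports:

```python
# Illustrative bug record with 'Found In' / 'Fixed In' label fields.
# The structure is made up; the point is that every bug is pinned to
# the tag it appeared in and the tag whose fix contains it.

bug = {
    "id": 17,
    "summary": "Jump animation stuck after landing",
    "found_in": "DEV_034",   # label the tester was running
    "fixed_in": None,        # filled in once the fix is integrated
    "assigned_to": "Johnny",
}

def mark_fixed(bug, release_tag):
    """Record which release label contains the fix."""
    bug["fixed_in"] = release_tag
    return bug

mark_fixed(bug, "REL_002")
print(bug["fixed_in"])  # REL_002
```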

The reason we Export onto people's USB drives instead of doing an SVN Checkout is that the rest of the team, other than the integrator, aren't using Subversion. Export copies out all the files in Subversion without any of Subversion's metadata directories - the '.svn' stuff. This is useful because when a team member fixes a bug, they simply hand Andrew their USB drive, and he drags and drops their file tree onto the checkout/working copy on his local machine. The updated files immediately show up as changed, and there are no conflicts or SVN metadata weirdness.
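Mechanically, that drag-and-drop is just copying the exported tree over the working copy while leaving the working copy's '.svn' directories alone. A rough Python equivalent, purely to show the mechanics (TortoiseSVN's drag-and-drop is what the team actually uses):

```python
import os
import shutil
import tempfile

def merge_export(export_dir, working_copy):
    """Copy files from an SVN export over a working copy, skipping any
    '.svn' metadata directories, and return the relative paths copied.
    Rough equivalent of dragging an export onto a TortoiseSVN checkout."""
    copied = []
    for root, dirs, files in os.walk(export_dir):
        dirs[:] = [d for d in dirs if d != ".svn"]  # exports shouldn't have these anyway
        rel = os.path.relpath(root, export_dir)
        dest_root = os.path.join(working_copy, rel)
        os.makedirs(dest_root, exist_ok=True)
        for name in files:
            shutil.copy2(os.path.join(root, name), os.path.join(dest_root, name))
            copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied

# Demo with throwaway directories standing in for the USB drive and checkout.
export = tempfile.mkdtemp()
wc = tempfile.mkdtemp()
with open(os.path.join(export, "sprites.py"), "w") as f:
    f.write("# Timmy's new sprites")
os.makedirs(os.path.join(wc, ".svn"))           # working copy has metadata
changed = merge_export(export, wc)
print(changed)                                  # ['sprites.py']
print(os.path.isdir(os.path.join(wc, ".svn")))  # True -- metadata untouched
```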

The only thing Andrew needs to be aware of is making sure he has updated and committed BEFORE he drops someone's changes onto his checkout. You do NOT want local changes in your working copy when you are dropping someone else's export in - if they have changed the same file as you, your changes will be overwritten. You want a nice, clean working copy to drop changes onto (i.e. there should be nothing but green ticks next to all the filenames/directories before you drop changes in).
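'Green ticks everywhere' is TortoiseSVN telling you there are no local modifications. Just to illustrate what that check means, here's a sketch that fakes it with file hashes: snapshot the tree right after committing, and compare before dropping anyone's export in. (This is illustration only — on Andrew's machine, SVN itself tracks this properly.)

```python
import hashlib
import os
import tempfile

def snapshot(tree):
    """Record a hash of every file under a directory tree."""
    hashes = {}
    for root, dirs, files in os.walk(tree):
        dirs[:] = [d for d in dirs if d != ".svn"]
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                hashes[os.path.relpath(path, tree)] = hashlib.sha1(f.read()).hexdigest()
    return hashes

def modified_since(tree, old_snapshot):
    """Return files changed since the snapshot -- should be empty
    before you drop someone else's export onto the working copy."""
    current = snapshot(tree)
    return sorted(
        path for path in set(old_snapshot) | set(current)
        if old_snapshot.get(path) != current.get(path)
    )

# Demo: a clean tree reports nothing; an edited tree reports the file.
wc = tempfile.mkdtemp()
with open(os.path.join(wc, "game.py"), "w") as f:
    f.write("print('hi')\n")
clean = snapshot(wc)
print(modified_since(wc, clean))   # [] -- safe to drop changes in
with open(os.path.join(wc, "game.py"), "w") as f:
    f.write("print('uh oh')\n")
print(modified_since(wc, clean))   # ['game.py'] -- commit first!
```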

These little gotchas all exist because we don't have a central server - things are a lot less painful if you do. Still, doing it this way (with a central 'integrator') is better than nothing.

Thursday, May 7, 2009

Planning to Test

The only way to make sure software works correctly is to test it. But what do you test? What are you testing against? What criteria determine whether a test passes or fails? This is all part of the Test Plan. Each team's Test Lead is about to get busy.

You need to write a test plan. It is going to be the guideline for your team when they come to test your game. There are lots of different ways to write these - we are going to keep ours simple. We pretty much want to generate a testing checklist for the team to run through when they test the game.

Starting Points:
In general, what assertions does the game design document (GDD) make that you need to verify in a test?
  • e.g. "The player uses the arrow keys to move". Does this actually work? This probably ends up as a bullet point on the testing checklist: 'Verify that arrow keys move the player'
  • e.g. "The counter counts down from 5:00 mins. When it reaches 0, the game ends". Does it actually count down from 5 minutes? Does the game end when it reaches 0?
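Each GDD assertion becomes one line on the checklist. A tiny sketch of that mapping — the items echo the examples above, and the structure is made up for illustration:

```python
# Each GDD assertion becomes one numbered line on the testing
# checklist, so testers can report 'item 2 failed'. Illustrative only.

checklist = [
    "Verify that arrow keys move the player",
    "Verify the timer counts down from 5:00",
    "Verify the game ends when the timer reaches 0",
]

def format_checklist(items):
    """Number the checklist and give each item a tick box."""
    return [f"{n}. [ ] {item}" for n, item in enumerate(items, start=1)]

for line in format_checklist(checklist):
    print(line)
# 1. [ ] Verify that arrow keys move the player
# 2. [ ] Verify the timer counts down from 5:00
# 3. [ ] Verify the game ends when the timer reaches 0
```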

The Test Lead makes a list of the 'assertions' in the GDD, plus any extra ones specified by team members where the GDD doesn't cover something.

The Test Lead then defines simple processes to test each of these points. Sometimes this isn't so 'simple', and there is only so much that you can reproducibly test, but do what you can :).
  • e.g. Run the game, verify that the title screen shows for 3 seconds.
  • e.g. Verify that the player walking animation is shown when pressing right or left, but the jumping animation plays when pressing up.
  • e.g. When the player collects a tomato, their score increases by 3.
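Tying this back to the labels from the last post: when a tester runs the checklist against a build, every failed item is exactly what gets lodged as a bug, stamped with the label it was found in. A sketch (all structures illustrative):

```python
# Sketch of recording checklist results against a build label. Each
# failed item becomes a bug to lodge, tagged with the label it was
# found in. All structures are made up for illustration.

def run_checklist(items, results, label):
    """Pair each checklist item with its pass/fail result; return the
    bugs to lodge (one per failed item), stamped with the build label."""
    bugs = []
    for item, passed in zip(items, results):
        if not passed:
            bugs.append({"summary": item, "found_in": label, "fixed_in": None})
    return bugs

items = [
    "Title screen shows for 3 seconds",
    "Walking animation plays on left/right, jumping on up",
    "Collecting a tomato adds 3 to the score",
]
bugs = run_checklist(items, [True, False, True], "REL_002")
print(len(bugs))           # 1
print(bugs[0]["summary"])  # Walking animation plays on left/right, jumping on up
```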
The next part will be to define what happens once someone finds something wrong. A team member is running through the checklist and finds a defect. What do they do? How do they record it? How does it get fixed? How do we know it is fixed?

These questions and more, answered next time...