Testing your mod [WIP]

All you could need to know about QA management for mods. Suitable for mod project leads and QA leads, this tutorial covers everything from setting up bugtrackers and content distribution systems, to testing strategies, scheduling and recommendations on free QA management software. See 'The Mod Tester's Handbook' for a guide to testing and bug reporting techniques.

Posted by Stephen 'Crispy' Etheridge - Intermediate QA/Testing

Testing your mod

A tutorial on mod QA management and testing strategies


by Stephen 'Crispy' Etheridge

crispy[dot]pie[at]gmail[dot]com



Table of Contents

Introduction

  1. Why test?

How should I test my mod?

  1. Playtesting
  2. Destructive testing
  3. Playtesting or destructive testing?

Tools for testing

  1. Bugtracking
  2. Content distribution

Setting up a test schedule

  1. Scheduling a playtest

The QA Process

  1. Lifecycle of a bug

Ingredients of a bug report

Prioritisation

Testing strategies

  1. Testplans
  2. Destructive

Conclusion

  1. What testing can do for you
  2. About the Author
  3. Extra Reading

Appendix I: Using Forums for Bugtracking

Introduction

Why test?

When you design and create a game you usually do so according to a vision. Your aim is to get the game to accurately convey this envisaged experience to the player. Since your vision is usually focused on the fun and enjoyable parts of the game, you will probably overlook areas that could result in confusion, frustration or boredom. In essence, the purpose of testing is to give developers a new perspective on their creation. On almost every occasion, testing will have a developer saying to themselves "Oh, I didn't think of that" or "I can't believe I forgot to do that". Testing is about anticipating players' reactions to the game and giving developers a second chance at improving their creations before they reach the end-user.

Before going into more detail about testing your mod, a word to the wise: if you are a small mod with a small or very niche playerbase, you probably shouldn't be spending large amounts of time and manpower on testing. If you are a specialist mod you probably have something no other game can offer your players, so you can afford to make mistakes and gather feedback from your community. If you haven't released yet, you are in a similar situation: the quickest way to get feedback on your mod is to release early and test the waters, then go back and make changes and improvements where they are needed.

The sorts of mods that should be more open to a dedicated testing process are the established big mods with stable playerbases, and the story-based single-player mods that rely heavily on 'progression'-based, play-once gameplay and want to avoid letting spoilers into the public domain. Don't be under the misapprehension that you need testers for a mod to succeed. The most important ingredient for a mod to succeed is a release; everything else (including, to some degree, polish) is secondary.

Test Methodology: How should I test my mod?

Opinion polls versus bughunting

Generally speaking, testing can be broken down into three main areas:

Feature testing

When a feature is first implemented into a game someone needs to make sure it functions correctly. If this feature is subsequently changed, someone needs to check those changes are having the desired effect on the game. This type of testing is also known as Alpha testing, but can be carried over into early Beta testing if changes to features are made later in development due to deadlines (would take too long to finish), recurring functionality issues (it just won't work cleanly), or very poor player response (bad designer judgement, but it's never too late to re-design).

Usually developers will feature-test their own work, but when changes to a feature can have more varied ramifications, or if the developer simply doesn't have enough free time to test their own work, it falls to a testing team to check these things.

Playtesting

Although 'playtesting' is generally used simply to refer to the testing of a game, the word 'play' has undirected, free-form connotations, so I prefer to distinguish between 'playtesting' and the other, more focused methods of testing.

This type of testing is more about gauging subjective aspects of gameplay; it is based on opinion and latent response. Is this boss fight too hard? Is this puzzle too obvious? Did the player get the same emotional response from a part of the game as the designers intended? Is a weapon too powerful? The best results for this type of testing come from virgin testers, i.e. players who have never played the game before. The results are also better if this type of testing isn't too heavily structured; don't lead players through the game as you intended it: let the game do the talking.

In an ideal world you could invite new players round to your house, watch them play the game and have a chat with them afterwards to get their opinions on things. In the real world of modding this isn't always possible, so some sort of feedback response document for virgin testers to fill in is an option if you have the time to compile one (more about how to write one of these, including the use of open versus closed questions, later). Nevertheless there are many people you can call on to test your game. In your immediate surroundings you have your friends, your family, neighbours and siblings' friends, all of whom can give you a fresh perspective on things. Next up you have new team members. Whenever a new member of the team joins, get them to play the game and write down everything they can tell you about their first impressions.

Destructive testing

This type of testing is more calculated, more repetitive and, often, more mundane. It's about finding any weak points in the game and exploiting them. It depends more on the tester being experienced, persistent, dedicated, focused, aware, creative, logical, analytical and deliberately counter-intuitive. In short, it depends on them knowing the game and coming up with as many ways to break it as possible. Testplans or tasks can help testers focus on an area to test destructively, and break the job up into sections that, once accomplished, give a sense of completion and make the task less repetitive.

For your destructive testing you want a dedicated team of testers. These testers can double as developers, but they should never be relied on to test their own work, because more often than not the preconceptions they have about how their work should be interpreted by a player will prevent them from looking at it from different angles. If you are using testplans, you'll want a fair few testers so you can rotate tasks between them and get a fresh set of eyes (and ears) on things. The more perspectives and methods used to try to break the game, the better.

An attempt at definitions
Feature testing is evaluating the functionality of the game according to a developer's goals, as communicated to the tester by the developer.

Playtesting is evaluating the functionality of the game according to a developer's goals, as communicated to the player via the game.

Destructive testing is evaluating the functionality of the game contrary to a developer's goals, as communicated to the player via the game, or to the tester by the developer.

The three terms are all used in Quality Assurance, but the way I have arranged and defined them here is according to my own opinion. The above definitions are an attempt at clearly separating the goals of each testing method by comparing who is testing and who told them what to test for. For feature testing, the developer will tell the tester (which may be themselves) to check that a certain element of the game fulfils its function. In playtesting, ideally the player and developer should have no direct contact, so that anything the developer hopes to communicate to the player is done solely via the game interface and the game space. Here the developer must assess the success of their communications based on the player's behaviour both in-game and 'in real life'. Destructive testing is trying to do anything that the developer has expressly said the player should not be able to do (or even what the dev has not expressly said the player should be able to do). This information may have come from the developer (e.g. the design doc describes how a feature should function), or it could be something the game tells the player (e.g. a conversation icon tells the player to talk to a character, but what happens if the player attacks the character or pushes them off a cliff?).

Test Methodology: Destructive testing versus Playtesting

Which method is better for testing multiplayer and single-player?

It is important to remember that both these types of testing matter for both single-player and multiplayer games. It's obvious that the puzzles and challenges presented to a player in an SP title will be straightforward for some players and harder for others, but 'playtesting' (as I have defined the term) is also crucial to keeping your multiplayer mod popular.

The importance of accessibility in multiplayer
You see, multiplayer games rely on large playerbases and a constant influx of new players. Therein lies a problem: newer, less experienced players are no match for experienced players, who are both more practised at the game and more knowledgeable in terms of tactics and strategy. The big ask of a multiplayer game is to bring the newer players up to speed as quickly as possible, so the experienced players don't get bored and the newer players don't feel like they're losing out unfairly. By gauging the reactions of virgin testers to the game, multiplayer designers can improve map navigation, menu navigation, HUD layouts, naming conventions and so on with intuitiveness at the forefront of their minds.

Destructive in single-player
On the other hand, destructive testing can find some very hard-to-reproduce bugs that we call 'tester bugs': bugs that require very specific, difficult or unintuitive actions from the user. Tester bugs will usually get waived in the late stages of development on a single-player title. If you look hard enough you will find them everywhere in single-player games, regardless of who made them (I even found a couple in Valve's The Lost Coast tech demo by using physics props to enter parts of the level the player is not supposed to be able to access). They're the sorts of bugs you only find if you're looking for them. If the average end-user stumbles across one that's, say, due to timing, they won't have the determination or the inclination to reproduce it. In The Lost Coast example I was just able to look at some untextured backfaces of geometry the user is not supposed to see, but in other single-player mods I've tested, just a bit of trick jumping or using props to climb to areas has left me stuck 'behind the scenes' with no way of escape. So you should be getting your testers to 'attack' single-player levels and scripts for destructive bugs; just know that not everything they find will be a big priority to fix.

Destructive testing in multiplayer
Now, while these may seem very harmless bugs in single-player games, in a multiplayer game any exploit, no matter how difficult to find or perform, will become public knowledge within weeks if it gives the player a significant advantage. This means that even if your game has been thoroughly balance-tested, just one exploit and a few savvy griefers can ruin your game until you get an update patch out. In single-player the ramifications of tester bugs usually aren't quite so disastrous and won't affect such a large proportion of players.

An example of an unintuitive bug becoming mainstream in Valve's Team Fortress 2

More food for thought on the types of testing in later sections.

Test Organisation: Tools for testing


The time may come where you want to dedicate a significant amount of time to testing your mod. For this you will need the following:

  1. A method of presenting and listing your bugs to all team members
  2. A clear way of communicating testplans and schedules to your testers
  3. A method for distributing your game files to the development team so everyone is working from the same build

Bugtracking software versus forums

For a small mod being made by a small group of individuals, especially groups that can communicate with each other directly at a convenient time of day (i.e. via VOIP or phone), it's fine to store all your bugs on a forum and track them manually. If you have any questions you can always chat to the team member in question. On the other hand, if your mod is big in scale, if it is complex and has a lot of levels or areas where things could go wrong, or if you have a large team spread over many different timezones, I would recommend using some free bugtracking software.

Bugtracking software will typically allow you to label your bugs according to how severe they are, what part of the game they affect, who found them, who needs to fix them and when they need to be fixed. Most bugtracking software takes a bit of technical know-how to set up, so you're best leaving this to a webmaster or programmer if you're not comfortable fiddling with command prompts behind the scenes.

Content distribution

In addition to a method of collating your bugs, you'll also need a way to distribute your game files to everyone involved. Here, again, there are options available for small-scale and large-scale teams. If you're working on a small mod with only a few developers making changes to the game files, it's best to assign one person to manage file changes and upload them to an FTP server for everyone to download and work on. When you're working with a larger team and fixes and additions are flying round in all directions, you may want to invest in a version control system.

Version control systems allow you to keep a 'master' copy of your game files on a central server so that anyone from your team can download the latest build, make changes to their particular files and then update the master copy with these changes. Subversion is one type of version control system that has many free clients available for download. Again, when setting up a version control system, you will need someone technically-minded to do the back-end stuff. On top of that you will need sturdy, reliable hosting for the repository server (i.e. where the 'master' files are kept).

See the 'recommended' page for some examples of recommended software.

Test Organisation: Setting up a test schedule


When development is fully underway and you have something in a largely playable state, you may want to begin testing in a more organised format. Here are some tips to help run your tests smoothly and to make them more productive:

Playtest regularly

Playtests work best if they are regular events. Depending on how far along in your project you are, or how quickly your team as a whole is producing new content and fixes, you may be testing once a month or you may be testing once a fortnight. The important thing is that you schedule playtests as a recurring event that happens on a regular basis. In the months leading up to release I would recommend you have a playtest once every two weeks. This way your development team has two weeks to fix all new major issues; one heavy week leading up to the playtest, one quiet week where they can take it a bit easier. Knowing that fixes or changes should be implemented for the next playtest will give your team members something to aim for and keep them motivated.

Timezones!

Make sure everyone is aware of what time the playtest starts... in their own timezone! If at all possible, compile a list of all the team members and their timezones so that you can give at least a rough guide in the publicised schedule itself, i.e. at least one timezone per continent as a point of reference, such as Eastern Standard Time (EST) for North America, Coordinated Universal Time (UTC) for Europe and Australian Eastern Standard Time (AEST) for Australia/New Zealand. Beware of how Daylight Saving Time affects certain timezones, e.g. British Summer Time (BST) and Eastern Daylight Time (EDT).
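If you publish the schedule from a script or a dev site, Python's zoneinfo module can generate that per-continent listing for you and handles Daylight Saving automatically. A minimal sketch, where the date, time and zone picks are purely illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Playtest start time, anchored to one reference zone (UTC here).
playtest_utc = datetime(2024, 6, 15, 18, 0, tzinfo=ZoneInfo("UTC"))

# One reference zone per continent your team spans (illustrative picks).
zones = {
    "North America (Eastern)": "America/New_York",
    "Europe (UK)": "Europe/London",
    "Australia (Eastern)": "Australia/Sydney",
}

# Convert the one canonical time into each local time for the schedule post.
schedule = {
    label: playtest_utc.astimezone(ZoneInfo(tz)).strftime("%a %H:%M %Z")
    for label, tz in zones.items()
}

for label, local in schedule.items():
    print(f"{label}: {local}")
```

Note how June dates come out in EDT and BST rather than EST and GMT: the timezone database applies the DST shift for you, which is exactly the mistake a hand-maintained list tends to make.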

Keep it short

Remember to keep your playtests short and sweet. Begin them on time (do not wait for stragglers) and try to keep them to an hour max so they don't feel too much like a chore. When playtests are allowed to become drawn-out affairs your dev team and testers have less reason to attend. Make sure you keep to your original schedule so that people are only giving up as much time as they originally agreed to. You can deal with latecomers separately at another time, but don't let their disorganisation have a negative effect on the more dedicated members of the team.

Downtime

If done well, testing can be just as demanding as developing. If you are running things well, your development will be geared towards hitting targets for each playtest, which means the few days around a playtest will be hard work for all involved. Remember to give your testers and developers downtime. Try to avoid scheduling tests two weeks in a row, especially if you test on weekends. Ideally, working on the mod should be something people want to give up time for, but you need to give them a choice in the matter or it stops being fun. Making sure that playtests don't get postponed will allow people to have a healthier on/off cycle from weekend to weekend.

Repo-lockdown

At least one hour before the test is due to begin, lock down your game files repository. This will be easier to do if you just have one person looking after an FTP, but if you are using a version control system (e.g. Subversion), you really need to make sure NONE of your team is committing changes after this time. Even if they think it's a small change, any delay you make for them is a delay for everyone else. Be firm on this point, because sometimes in a dev's haste to get a fixed version ready, they will make small errors and have to re-compile, and end up wasting monumental amounts of time.

Smoketest

During this time before the playtest, one of your team needs to go through the playtest build and test it to make sure it works properly. This is a simple check to make sure that the game boots, maps load, levels can be completed and the user can progress to the next level correctly, and is known as a smoketest. Whoever is performing the smoketest should also be aware of any major changes that have been made to the code since the last playtest so that they can focus their checks around that.

The smoketest is designed to make the organisers of the playtest aware of problems in advance so actual playtest time isn't wasted in the event of a last-minute hitch. If the game has a serious error that makes it virtually or actually unplayable, you know almost an hour ahead to cancel the playtest. However, sometimes it may be a problem with a map or one asset that doesn't necessarily mean you have to cancel the whole test. In this case you can simply test on a different map instead or re-direct focus and circulate a 'known issues' alert to your testers. The repo-lockdown and smoketest are there to give you a buffer zone to make last minute changes to your plans when problems arise, so do not ignore them.
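The smoketest boils down to a short checklist run in a fixed order: boot, load, complete, progress. As a sketch only, here is what that checklist might look like if you scripted it; the check functions are hypothetical stand-ins for actually launching the build and loading maps:

```python
# Toy smoketest runner: each check is a named function returning True/False.
# In practice each stand-in below would launch the game, load a map, etc.
def game_boots():
    return True

def maps_load():
    return True

def levels_completable():
    return False  # pretend tonight's build has a broken level transition

checks = [game_boots, maps_load, levels_completable]

# Collect the names of failed checks so organisers can decide whether to
# cancel, switch maps, or circulate a 'known issues' alert to testers.
failures = [check.__name__ for check in checks if not check()]
playtest_go = not failures
```

The value is less in the automation than in the fixed order: a failed boot makes the later checks moot, and a single failed map need not sink the whole playtest.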

Gather beforehand

Get your team to meet 15-30 minutes beforehand so they are all in one place and can be reached easily. During this time they can chat to each other or check websites, just as long as they can be contacted. This is also a great setting for introducing new members of the team. Teamspeak for a test is preferable, but can be fiddly and time-consuming to set up. Skype is good for conference-style chat/voice meetings. IRC is also an obvious choice, except that a lot of first-timers will have trouble setting it up. Other instant messaging programs like Live! Messenger, Yahoo, AIM and GoogleTalk are okay, but aren't as well-suited to group chat.

The other reason for meeting beforehand is that it creates another buffer zone for latecomers to filter in, and gives you time to locate no-shows.

Take notes

Get your testers and developers to keep a pad of paper close to hand for taking notes. Comparing notes on what actions preceded a crash or a looping sound/animation will help immensely in tracking down the root causes of bugs. It's also useful for writing down any error messages that cannot be recorded via screen capture. In addition to this, having a note-taking resource available that doesn't require the player to quit the game will make the playtest run more smoothly.

Remind them to take screenshots of any issue they see, and remember that a lot of games write dump files to the game directory when they crash (e.g. Half-Life 2 writes *.mdmp files), which programmers will find immeasurably helpful in resolving issues.

Debrief

Hold a meeting soon after to discuss any new or recurring issues. It's best to do this immediately after the playtest, when experiences are at their freshest and most vivid in the memory. Allow the people who need to leave soonest to speak first, and try to keep this debrief to under 30 minutes as an absolute maximum. If there are issues that still need discussion you should arrange another meeting, but avoid drawing out the testing process too much or it risks becoming a tedious affair. Keep meetings text-based so you can log what has been said (IRC, Skype, MSN, GTalk, Yahoo and AIM all permit you to keep logs). After the meeting put the logs up on your dev webspace so anyone who missed the test or had to leave early can read up on important discussions.

Set targets

After discussing all the issues that have arisen from a playtest and documenting them, the Project Lead and/or the QA Lead need to meet with the senior members of the team to prioritise these issues and assign the most important ones to be fixed in time for the next playtest, along with any other features that need implementing. Let your team know where you want the project to be in two weeks' time. Some may complain, but most will be encouraged by the fact the project is actually moving forward with direction and momentum.

Actioning Bugs: The QA process

The lifecycle of a bug

During testing you will inevitably discover bugs. This section deals with how those bugs need to be actioned (it also appears in my other testing tutorial for Moddb).

As the QA Lead, you need to make sure that bugs are never lost in the no-man's land between one state and another. Often, a member of the team might deal with a bug and then forget to indicate this in the bug report. Part of your role is taking ownership of the bug database and making sure it is up-to-date at all times. You need to make sure the devs always have sufficient information to diagnose bugs, you need to make sure that postponed suggestions and fixes are reinvestigated at a later date and you need to make sure attempted fixes work and produce no other ill effects.

  1. Discovery The bug is discovered by a developer, tester or perhaps a player.
  2. Communication The testers/developers are notified of the problem. A player will make a forum post or send an email, or perhaps tell someone in-game or in an IRC channel. This needs to be logged immediately and passed on to whoever handles QA. A tester or developer, on the other hand, will need to talk to their teammates: perhaps the issue has already been found and a report exists, or an existing report needs updating with new information.
  3. Investigation A developer or tester should investigate the bug. Perhaps it is a design feature that has been misinterpreted by the user. If it is a bug, the tester will need to document the conditions under which the bug can be reproduced and how reliable those steps are.
  4. Documentation A bug report is logged in a bug database with information including a search-friendly summary, description, steps to reproduce, reproduction rate, priority, severity, etc.
  5. Verification The bug is checked by a senior member of QA for clarity, accuracy, significance and urgency.
  6. Assignation The bug is assigned to a particular member of staff to be fixed or discussed. This process can go back and forth between team members. Whenever a bug is assigned to someone a reason must be given, e.g. "NMI" (need more info), "NAB" (not a bug), "not an art issue - maybe an error in the code?", etc.
  7. Fix An attempt at fixing the bug is made. Some comments briefly describing the solution should be added to the notes so that expertise is shared across the board and similar future issues can be dealt with quickly.
  8. Regression The fix is committed to a build of the game and that build is tested. The result of the regression is logged in the bug entry, e.g. "Fixed" (plus the name of who reported it fixed), "Fail" (plus details of why it failed; the attempted fix may have created a different issue), "CNR" (cannot reproduce; plus the reasons for this).
  9. Closure The bug may be closed after regression or even before a fix has been attempted, but a reason for closure must be given, e.g. "Fixed" (regressed as 'Fixed'), "Closed - as designed" (not actually a bug), "Waived" (the bug has been deemed too low a priority for this release), "Duplicate" (the bug has already been reported; a link to the duplicated bug should be given).
  10. Re-opening If a bug has been waived for one version, it may be re-opened to be fixed for a later patch when priorities and focus are re-assessed.
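To make the "no bug lost in no-man's land" rule concrete, the lifecycle above can be sketched as a small state machine that refuses silent or illegal moves. The state names and allowed transitions here are one possible reading of the steps, not a standard; adapt them to your own tracker:

```python
# Allowed transitions between lifecycle stages, following the steps above.
TRANSITIONS = {
    "open":       {"assigned", "closed"},        # closed early: NAB/duplicate
    "assigned":   {"fixed", "open"},             # bounced back: NMI, wrong owner
    "fixed":      {"regression"},
    "regression": {"closed", "open"},            # Fail/CNR re-opens the bug
    "closed":     {"open"},                      # waived bugs can be re-opened
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "open"
        self.history = ["open"]

    def move_to(self, new_state, reason):
        """Change state, requiring a stated reason so nothing moves silently."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if not reason:
            raise ValueError("every transition needs a stated reason")
        self.state = new_state
        self.history.append(f"{new_state}: {reason}")

# A bug walking the full cycle, with a reason logged at every step:
bug = Bug("Crash when loading map02 with no audio device")
bug.move_to("assigned", "repro'd 5/5, handing to programming")
bug.move_to("fixed", "null check added to audio init")
bug.move_to("regression", "fix committed to build 0.4.2")
bug.move_to("closed", "Fixed - regressed OK in 0.4.2")
```

The point of the `history` list is exactly the QA Lead's job described above: at any moment you can see where a bug is, who moved it there and why.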

Test Organisation: Ingredients of a Bug Report

Depending on what bug tracking software you have available, you may not be able to provide all of this information in a bug entry. Nevertheless, these are the types of things that you could expect to see in industry-standard software. When you are customising your own bug database with your own personalised set of data fields, this information may be useful as a reference source.

Later chapters of this tutorial cover the key areas in more depth.

Summary
The summary should contain keywords that make it searchable and immediately understandable.
Description
Where you can go into more detail about how you found the bug, plus any other notes you have to add about it.
Steps to reproduce
This may be part of the description or a separate section of the bug, but the steps should always be included (along with a reproduction rate if this isn't represented in a separate data field).
Reproduction rate
How often your steps lead to successful reproduction of the issue.
Attachments
If you can get a screen capture, a dump file, a video or a demo file, you should always try to include it with the bug.
Category
What part of development does this affect? Is it a graphical, audio, AI, scripting, animation, collision, UI or controls issue? Is it a soft lock or a hard lock? Is it an issue with the gameplay design?
Location
Where in the game does this issue arise? Does it happen on a particular level? On a particular menu?
Class
How severe the issue is. See the 'Bug Classification' chapter for a thorough explanation.
Priority
How quickly should this issue be fixed? This could be a number from 1 to 5 or it could be a milestone.
Version found
The version on which the bug was first observed.
Version fixed
Not necessarily the version the bug has definitely been fixed on, but the version on which an attempted fix has been implemented.
Status
What stage of the QA cycle this bug has reached. See the chapter entitled 'The QA Process' for more info.
Found by
The developers will need to be able to contact the tester who originally experienced the issue if they require more information.
Assigned to
The team member assigned to deal with the bug at this stage.
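If you end up customising your own bug database, the fields above map naturally onto a simple record type. A sketch in Python, where the field names, types and example values are illustrative rather than any particular tracker's schema:

```python
from dataclasses import dataclass, field

# One possible in-code representation of the bug report fields listed above.
@dataclass
class BugReport:
    summary: str                 # search-friendly keywords
    description: str
    steps_to_reproduce: list
    repro_rate: str              # e.g. "5/5", "2/10"
    category: str                # GFX, AUD, AI, UI, COL, ...
    location: str                # map, menu, etc.
    bug_class: str               # severity, A-D
    priority: int                # 1 (must-fix) to 5
    version_found: str
    found_by: str
    assigned_to: str = ""
    status: str = "open"
    version_fixed: str = ""      # version an attempted fix landed in
    attachments: list = field(default_factory=list)

# A hypothetical entry showing which fields the tester fills in at logging
# time; assignment, status changes and fix versions come later in the cycle.
report = BugReport(
    summary="Player falls through floor in cellar of map03",
    description="Trick-jumping onto the barrel clips the player out of world.",
    steps_to_reproduce=["Load map03", "Jump onto barrel", "Crouch-jump at wall"],
    repro_rate="4/5",
    category="COL",
    location="map03 / cellar",
    bug_class="B",
    priority=2,
    version_found="0.3.1",
    found_by="Crispy",
)
```

Splitting the fields into required (set at discovery) and defaulted (set during triage and fixing) mirrors the lifecycle: a report can be logged immediately without waiting for assignment decisions.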


Actioning Bugs: Prioritisation

The numerical scale of importance
In my other tutorial for mod testing I talked about the bug classes from A to D. These classes only indicate the severity of a bug, and while in general the more severe bugs are the ones that take priority for getting fixed, sometimes bugs can be severe but hard to come across in normal play. The other scale deals with prioritisation by numbers, with 1 being the highest priority and anything lower being less of a 'must-fix'.

For example, in a team multiplayer game, if I can build a crude staircase from 9 players standing on top of each other to allow a 10th to jump over a wall to somewhere they shouldn't be able to reach, it is an issue. If it gives one team a significant advantage it could be a B bug. But what if your game is only designed for 6v6? If your default server settings only allow 12 players at a time, this bug is highly unlikely to crop up in normal play, and it's also very unlikely to be discovered in the first place. In this case you'd probably call this bug a B2 or even a B3. In other words, the bug itself has big consequences, but actually finding and reproducing it is very unlikely under normal circumstances, and even once known, it would only be possible on non-default settings.

The second element to prioritisation comes in response to testing the game. In any given organised test you're going to come across issues that prevent you from testing certain features of the game. You need to identify any bugs that are preventing test and get those fixed for the next scheduled test. The longer you leave a bug stopping you from testing an area of the game unfixed, the more problems you're going to encounter when you finally come to test that area.

The third element to prioritisation is that when you develop a game with a small team, fixing bugs is a sliding scale. A bug that would be looked at and fixed early in the project may have to be sidelined in the later stages of development. This may sound like you're not doing your job properly, but in reality it's very important to help your team prioritise their workloads or you will never reach a release. At some point you're going to have to go through all those C bugs and begin waiving them (postponing them either for a future release or indefinitely). You will then have to look at the B and A classes and begin waiving the ones with lower priorities. Eventually you'll reach an acceptable state of polish and, for once, your team will have a clear view of the finish line!
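The class-then-priority ordering, and the late-project waiver pass, can be sketched in a few lines. The bugs and the waiver cutoff below are made up for illustration; your own cutoff will shift as release approaches:

```python
# Each bug carries a summary, a severity class (A-D) and a priority (1-5).
bugs = [
    ("Server crash on map change", "A", 1),
    ("9-player stack lets a 10th over the wall", "B", 3),  # the 6v6 example
    ("Texture seam on map01 skybox", "C", 4),
    ("Rocket jump does too little self-damage", "B", 1),
]

def fix_order(bug):
    """Sort by severity class first, then by priority number within a class."""
    _, cls, prio = bug
    return ("ABCD".index(cls), prio)

bugs.sort(key=fix_order)

# Late in the project, waive anything below a chosen cutoff -- here,
# everything that isn't class A/B with priority 1-2 gets postponed:
active = [b for b in bugs if b[1] in "AB" and b[2] <= 2]
waived = [b for b in bugs if b not in active]
```

Note that the B3 player-stack bug from the example above falls on the waived side of the cutoff even though its class is serious: its priority number captures how unlikely it is to matter in normal play.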

This is where the QA Lead has a lot of crossover with the mod's management. In the games industry, Producers have a lot more influence over which issues get fixed for the final deadline; the QA Lead is usually just carrying out their orders. In modding there is no need for a dedicated Producer position, so the Project Lead and/or QA Lead can assume many of a Producer's responsibilities. The point here is that QA Leads need to be able to see the game as a whole, and they are instrumental in getting a mod to release.



Testing Strategies

Conclusion

  1. What this handbook can do for you
    1. Help you control the level of quality of your mod. Keep track of issues
  2. About the Author
  3. Extra Reading
    1. QA management for mod


Appendix I: Using Forums for Bugtracking

How to get the most out of a forums-based bug database

It's true, most mod teams will not have the means to invest in, or be large enough to warrant, professional-style bug databases. Most will simply have some public forums with a private section used for development discussion. This section takes a look at how you can still keep track of bugs with just a simple forum setup.

The big difference between forums and proper bug databases is that forums don't provide many search options at all. Typical bugtracking software allows you to filter by 10-20 different fields and allows you to combine multiple filters into one search. With forums, the most you can do is search by keywords and by the creator of the post. But this doesn't mean you can't keep a tight lid on the status of your bugs. The main thing you need to do is keep all your bugs to a very strict naming format.

The way this works is that every time a bug is entered, it gets actioned by a senior member of the dev team. If you have a QA Lead this would be their job; if you don't, it would fall to the Project Lead or the Lead Programmer. Don't just leave it to chance: make sure someone knows it's their job. Every time a bug gets actioned, a prefix at the beginning of the bug is changed to show its status. Typical 'bug status' prefixes would be 'Active', 'Claimed Fixed <+version number>', 'Fixed Verified', 'Regress', 'Waived <+version number>', 'NAB'. The idea here is that if you sort your bug forum by name, all the bugs in need of regression will be together and all the fixed bugs will be together. Optionally, you can also add a second prefix to show which part of the game the error affects. Later in this section there is a list of suggested prefixes to use.

E.g. If you can list posts in alphabetical order and the list looks something like:

[Active][TXT] ...
[Active][UI] ...
[Claimed Fixed][AUD] ...
[Claimed Fixed][COL] ...
[Fixed][BAL] ...
[Fixed][GFX] ...
[Regress][COL] ...
[Regress][CRH] ...

...then if you're looking for an audio bug that is still open you know you can find it at the top of either the 'Active', 'Claimed Fixed' or 'Regress' lists. You know that collision bugs (COL) will all be grouped together above crash bugs (CRH) near the top of a list, and anything to do with the menus will come at the end of the lists (UI). This makes it easier for individual members to see all their relevant bugs together, so the artists can just focus on the bugs marked [GFX], the sound artists on [AUD] and so on.

In many forums there is a character limit for the summary line of a thread, so in practice '[Regress]' would be changed to '[R]', '[Claimed Fixed]' to '[CF]' and so on. I just used the full terms to illustrate the example.
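It's easy to sanity-check that strict prefixes really do group bugs once a forum sorts threads alphabetically. A small sketch using the abbreviated prefix forms; the thread titles are invented:

```python
# Thread titles using abbreviated status prefixes ([A]ctive, [CF] Claimed
# Fixed, [F]ixed, [R]egress) plus a category tag.
threads = [
    "[CF][AUD] Footsteps loop after reload",
    "[A][UI] Options menu ignores Escape key",
    "[R][COL] Player clips through fence on map02",
    "[A][TXT] Typo in intro cutscene subtitles",
    "[F][GFX] Flickering decal on map01 wall",
    "[R][CRH] Crash when spectating in overtime",
]

threads.sort()  # what 'sort threads by title' gives you on the forum

# All [A] (Active) bugs now sit together at the top of the list:
active = [t for t in threads if t.startswith("[A]")]

# ...and within any status, category tags group related work, e.g. audio:
audio = [t for t in threads if "[AUD]" in t]
```

This is the whole trick: plain alphabetical sorting does the filtering that a real bug database would do with dedicated fields, as long as everyone keeps to the naming format.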

The plus points to this system are:
- Bug status is clearly indicated for every bug summary; at a glance you can see how many bugs you have to fix
- No time needs to be spent setting up a full bugtracker, everything you need is on the forums
- Everyone knows how to use forums, so no significant training is required for using the bug database
