In Part 1, I explained how I went about collecting the content for my quiz. Hopefully, those who were there were pleasantly surprised by the chosen format. Here's how I went about it:
Most quizzes don't allow for any tactical decisions. An Infinite Rebounds-based quiz is uniform, which is kind to the serious quizzer who is fed up with artificial ways to inject drama into quizzes. "Showy" quizzes have disruptive rounds, such as buzzer rounds or rounds with very uneven scoring, which may be fun for the audience but painful for the good teams on stage. But IR can be a tad boring and monotonous: all teams need to do is focus on the next question, with no tactics to worry about, except in the odd long connect. It's a classical format, but one for the purists.
Worse, a theme quiz can aggravate these problems, since not everyone is keen on the subject of the quiz. Round-and-round-the-mulberry-bush can put you off to zzz-land.
I wanted the format to:
1. Have equal weightage for all three topics
2. Force teams to make a few tactical decisions throughout the quiz
3. Not overly penalise teams that aren't good at any one topic; conversely, not make it possible for teams that are good at only one theme to get lucky (say, by somehow getting all the questions on one theme in a mixed set)
I chose three separate "sets", one per topic. To normalise performance across sets, teams would get points on a common scale, based on their performance in that set. Initially, I thought of assigning scores from 1 to 6 based on position, but that would be unfair if a team completely swept the round - the others still wouldn't be too far away. We have been experimenting with different scoring methods in our BC theme quizzes, so I chose the relative method: the winner of a set gets 5 points; the rest get a score from 0 to 5, relative to the winner's score. So if the winner had 100, they get 5. If the next team gets 80, they end with a relative score of (80/100)*5 = 4. A team that gets 65? A score of 3.25.
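The relative scoring above boils down to a one-line calculation. A minimal sketch in Python (team names and raw scores are illustrative, taken from the example):

```python
def relative_scores(raw_scores, max_points=5.0):
    """Scale each team's raw set score relative to the set winner:
    the winner gets max_points, everyone else a proportional share."""
    top = max(raw_scores.values())
    if top <= 0:
        # Degenerate case: nobody scored, so nobody gets relative points
        return {team: 0.0 for team in raw_scores}
    return {team: round(max_points * score / top, 2)
            for team, score in raw_scores.items()}

# Illustrative raw scores matching the example in the text
print(relative_scores({"A": 100, "B": 80, "C": 65}))
# {'A': 5.0, 'B': 4.0, 'C': 3.25}
```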
The winner would be the team that gets the maximum aggregate across all three sets. This sounded fine, until I realised the flaw.
What if the same team single-handedly won the first two sets? They would have 10, and no one else could catch them. The race for top spot would be over 2/3rds of the way! So I put in a final-within-a-final concept. The teams with the top two scores after the three sets would make it to this last round (as it turned out, I put 3 teams in the finals as 2nd and 3rd were closely clustered). That way, I could ensure interest would remain alive till the end of the three sets. Also, this gave trailing teams a chance to catch up, especially if they could dominate the last round and push the others back.
I designed each set with an identical structure - after playing around with ideas, I settled on starting with 4 common qns of 5 pts each (at 10 pts each, teams could build a huge lead), followed by IR for 10 qns (initially, I wanted at least 12 so that all 6 teams would be assured of at least 2 qns each, but changed my mind since I had so many common qns).
The last 2 qns (10 pts each) would be for one individual per team. Each person on a team had to take part in a set of their choice. This was one of the tactical decisions in the quiz.
I wanted to try this because I think it gives each person a chance to shine, as well as making it hard for teams to hide a weak member. We have a system where the top 6 teams after the prelims qualify outright, while the next 3 teams are split up, with each member added to one of these 6. In this case, I allowed the teams (in ascending order of scores) to pick a member. Normally, a team would simply pick the best available person (hence the lowest-to-highest order, to keep teams somewhat balanced), but here they also had to weigh the need for three people with complementary skills.
The other tactical decisions:
1. Each team got 1 attempt to go for an answer "out of turn", i.e. you can whisper the answer to the QM without waiting for your turn in IR. I usually keep this rule to prevent inadvertent sitters, which can penalise other teams. The top 2 teams got an additional chance (to compensate for their late picks in the 'draft'). Since teams had only 1 or 2 "out of turn"-ers in the entire quiz, they had to decide when to use one and when to gamble that the teams ahead of them would be unable to answer.
2. The final round for the top teams had 6 questions, 2 in each topic. Ordinarily, they would get 10 for a correct answer. They could choose to double if confident, with a penalty of -5.
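A minimal sketch of how a final-round question might be scored under the doubling rule. Note the assumptions: the text doesn't spell out that a doubled correct answer is worth 20, or that an undoubled wrong answer costs nothing - both are my reading of the rules:

```python
def final_round_points(correct, doubled):
    """Score one final-round question.

    Assumptions (not stated explicitly in the rules):
    doubled + correct = 20, undoubled + wrong = 0.
    Stated in the rules: correct = 10, doubled + wrong = -5.
    """
    if doubled:
        return 20 if correct else -5
    return 10 if correct else 0
```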
It was interesting to watch teams apply these at various points. I think the rules weren't as confusing as initially feared!
I must confess that the last round could be a little unfair to a team that swept all three sets only to lose in the small last round. My hope was to choose questions of equal weight that would do justice to the finalists and not distort the final standings (not entirely successful in reality). That said, I was prepared to tolerate some unfairness (easy, as I wasn't participating :-)). I also realised that it is very easy to engineer a close finish if you have very few questions - a common trick employed by most of the 'show pony' quizzes.
Finally, a mention that the 4 starter qns were set around the old game of 'Name, Place, Animal, Thing' and that even the prelims had three sets of 8 qns each. And despite the ineptness of my local photocopy guy, it is probably the first ever horizontal (landscape) prelims sheet!