Overthinking: September 2008 Archives


September 12, 2008

Writing in the comments section, Nick Weaver expresses the not unreasonable concern that if you randomly select people who are eligible for extensions, people will try to game the system:
If you do the randomization suggestion, you really don't want to do it for a security conference, because then you will get security researchers playing games (eg, 3-4 different titles/abstracts/author orders, withdraw the lowest extending ones)

This really is the way security people think, but I think it is possible to design technical mechanisms to stop people from doing this. One obvious thing you could do is simply forbid any individual from being an author on two papers that apply for extensions. This sounds a bit restrictive but one could certainly argue that if you're asking for extensions on two papers, you don't have time to work on them both and should just focus on one.
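Just to make that concrete, here's a quick R sketch of how a program chair could check for violations; the request format (a title plus a vector of author names) is made up for illustration.

# Hypothetical format: each extension request is a list with a title
# and a character vector of author names.
requests <- list(
  list(title = "Paper A", authors = c("Alice", "Bob")),
  list(title = "Paper B", authors = c("Bob", "Carol"))
)

# Count how many extension requests each author appears on.
author.counts <- table(unlist(lapply(requests, function(r) r$authors)))

# Anyone appearing on more than one request violates the rule.
names(author.counts)[author.counts > 1]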

There is a less restrictive approach, however: cut-and-choose. Nick's attack relies on people submitting extension requests for papers they don't really intend to submit. This implies a countermeasure where you force authors to prove that their requests are legitimate. You can't really do this for the papers where you reject their extension requests (since they could reasonably claim they're not ready to submit, which is why they asked for an extension), but you can for any request where you grant an extension.

Here's one natural approach:

  • Force people to submit all their requests at once before you tell them whether any are granted.
  • Randomly (with a cryptographically secure random source) select whichever subset of the papers will get extensions.
  • Require the authors to actually submit the papers for which requests were granted.

(Obviously you'd need to notify authors ahead of time that these were the terms).
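To make the random-selection step concrete, here's a rough sketch in R of granting each request independently with probability p. (runif() is just R's ordinary PRNG, used as a stand-in; a real implementation would want a cryptographically secure, and ideally publicly verifiable, source of randomness.)

# Grant each extension request independently with probability p.
# runif() is for illustration only; substitute a cryptographically
# secure random source in a real deployment.
grant.extensions <- function(titles, p) {
  granted <- runif(length(titles)) < p
  titles[granted]
}

grant.extensions(c("Paper A", "Paper B", "Paper C"), p = 0.5)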

Now, obviously, an author who is trying to cheat you can submit bogus papers (or slight variants), but I'm assuming that if you actually force them to submit, the PC can detect them and appropriately punish the authors.

I'm being deliberately vague around the words "require" and "punish" here, because we're now leaving the realm of cryptography and entering the world of politics. One thing you could do is simply refuse to accept any of an author's papers unless they submit every paper for which they were granted an extension. More severely, you could refuse to accept any papers from them in the future (this sanction is sometimes used in other cases of clear misconduct). In any case, it's clear we could create sanctions that authors wouldn't like.

Let's do the math on how well this would work. For mathematical convenience, assume that you are only submitting a single real paper and that the penalty for being caught cheating is that all your submissions to this conference are summarily rejected. Assume further that if the extension isn't granted, you don't submit. Thus, we have the following outcomes:

Outcome                  Payoff
No extensions granted    0
One extension granted    1
>1 extension granted     0

If the probability of extensions being granted is p and you submit n times, then the probability of each of these events is:

Outcome                  P(outcome)
No extensions granted    (1-p)^n
One extension granted    n*p*(1-p)^(n-1)
>1 extension granted     1 - (1-p)^n - n*p*(1-p)^(n-1)
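
These are just the binomial probabilities for the number of granted requests out of n independent trials, so as a sanity check the three entries should sum to 1. A couple of lines of R (with p and n picked arbitrarily) confirm that the closed forms above agree with the binomial tail:

p <- 0.3; n <- 4
p.none <- (1 - p)^n                 # no extensions granted
p.one  <- n * p * (1 - p)^(n - 1)   # exactly one extension granted
p.many <- 1 - pbinom(1, n, p)       # more than one granted (binomial tail)
p.none + p.one + p.many             # should be exactly 1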

However, because we've assumed that the payoffs for the first and third cases are zero and the payoff for the second case is 1, the expected payoff is just the probability of the second case, i.e., n*p*(1-p)^(n-1). To find the best strategy for the submitter, we want to find the n that gives the maximum expected payoff. Since we can only choose discrete values of n, it's easiest to find this value experimentally. Here's the relevant R code to find the maximum:

pay <- function(p,n){
    # Expected payoff from making n submissions when each extension
    # request is granted independently with probability p: the payoff
    # is 1 exactly when a single request is granted, 0 otherwise.
    n * p * ((1-p) ^ (n-1))
}

max.pay <- function(p){
  # Evaluate the payoff for n = 1..100 and return the n that maximizes it.
  n <- seq(1,100)
  pays <- sapply(n,pay,p=p)
  n[order(pays,decreasing=TRUE)][1]
}
(Note 1: this code does not work properly for very small values of p, where the optimal n is greater than 100; that can be fixed by making the maximal n value bigger. Note 2: we happen to know that there is a single maximum, so we could have written a loop that stopped as soon as the payoff for n+1 was less than the payoff for n, but that was too much effort.)
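
If you want to see the result for yourself, you can probe max.pay() over a few values of p, e.g.:

# Optimal number of submissions for a range of grant probabilities.
probs <- c(0.05, 0.1, 0.25, 0.5, 0.75)
data.frame(p = probs, best.n = sapply(probs, max.pay))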

It turns out that for all p >= .5, the optimal number of submissions is 1, which is the behavior you want to encourage. For p < .5, it makes sense for the submitters to make multiple submissions (with the optimal number increasing as p decreases). Accordingly, if you want to grant extensions less than 1/2 the time, you need more draconian punishments (e.g., banning authors from future submissions) to enforce single submissions.

I should note that this sort of suspicious attitude probably isn't the best way to get the result you want. In general, the academic community depends on people's voluntary compliance with norms (we don't really check that people don't make up their data, for instance), and if you simply tell people that this is the norm for this conference (and perhaps make them explicitly affirm their compliance), I expect they will do so. On the other hand, if you make it a game, they're likely to take it as a challenge to cheat (especially in the security community; cf. the famous paper A Fine Is a Price). Then again, that's less fun than designing a new mechanism.

For extra credit, consider the following extensions to the model: (1) the punishment is greater than just having all your papers rejected; (2) authors have more than one real paper; (3) submitting without an extension is worth less than submitting with an extension but still has nonzero value.
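
As a hint for extension (1): it amounts to giving the ">1 extension granted" outcome a negative payoff instead of zero, so the pay() function above picks up a penalty term. A minimal sketch (the penalty parameter is my own addition) looks like:

# Expected payoff when getting caught costs `penalty` instead of zero:
# exactly one grant pays 1, more than one grant pays -penalty.
pay.with.penalty <- function(p, n, penalty) {
  p.one  <- n * p * ((1 - p)^(n - 1))
  p.many <- 1 - (1 - p)^n - p.one
  p.one - penalty * p.many
}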