Bruce Schneier contends that reciprocal transparency doesn't work because the benefits of transparency accrue to the side with the most power (i.e., the government): the individual ends up knowing what the other guy is trying to do to him but remains powerless to stop it. He does, however, concede that government transparency is essential.
After missing the point for a while, David Brin finally gets on track and counters that, first, individual transparency is inevitable, so we might as well institutionalize it in as benign a form as possible. Brin then argues that you can't make some entities transparent and others opaque, because that merely invites those who would accrue undue power to hide behind the protected entities. Finally, Brin advocates a divide-and-conquer strategy for ensuring that the government can't gain too much of an advantage by using the individual's transparency against him: if large numbers of individuals are drawn to expose a particular government abuse, the power of that portion of the government is reduced. In other words, the government may be powerful, but there are a lot more of us than there are of them.
Although Brin treats some of Schneier's arguments as incoherent straw men, I think he's on the right track here. Surveillance is inevitable; the best you can hope for is the ability to watch the watchers. But the crucial dynamic that allows the public to self-organize against government abuses has a problem: the public is too easily manipulated. The self-organizing exposures of abuse that are theoretically possible using the Internet are adroitly anticipated and perverted by mass media. Until the majority of citizens are able to pull their information instead of having it pushed at them, we remain vulnerable to manipulation. This ought to be a solvable problem, but we're missing some key technology.