Engineers fall into two broad categories. There are optimistic engineers, who see the possibilities in an idea and become enthusiastic about it. Then there are pessimistic engineers, who like to spot problems with an idea before attempting to develop it.
In general, you need a few of both kinds of engineers for a successful project. I tend to be a pessimistic engineer--I like to find the problems and fix 'em before we try to build something.
But being a pessimistic engineer has an odd property. If you find a problem, and you fix it, you immediately start to behave like an optimistic engineer; you start to advocate for your solution, because you can see that it's better than the original, or some other solution. In short, on most projects, pessimistic engineers eventually become optimistic.
This temperament spills over into my views on public policy. So, for example, if I can work out a probable dynamic that causes public health insurance to erode the viability of private health insurance over time, I can come up with a health care scheme that avoids that problem. (Don't do that!) And if I can see that a bill that's all about insurance reform won't do anything to control health care costs, I can immediately start working on a solution that might be able to control costs. (Beef up HSAs, allow employers to dump the cash they spend on insurance into the HSA, and give employees a broad range of choices on how they wish to design their own health care.)
But, of course, my intuitions on health care policy suck.
Why, you ask? It's really very simple: Everybody's intuitions on health care suck, as do everybody's econometric models and other analyses. See, there's this little thing we learn about when we're geeks, called non-linear differential equations, which govern the dynamics of most problems. The thing is, we can't solve non-linear diffEQs as a rule. We can simulate them, but then chaos and complexity theory show us that our models are sensitive to all kinds of things we can't measure very well, and are therefore hopelessly flawed. The system involved in almost any public policy endeavor is so complex that the best we can do is try something, monitor it closely, and see if we can tweak it when things go wrong.
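The sensitivity described above can be made concrete with a toy example. The sketch below uses the logistic map, a textbook one-line non-linear system (not any real policy model); the function name and parameter values here are purely illustrative. Two starting points that differ by one part in ten million — far tighter than any measurement of a real social system — stay close for a while and then diverge completely:

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # "true" initial condition
b = logistic_trajectory(0.2000001)    # same, with a 1e-7 measurement error

# Early on, the trajectories agree closely...
print(abs(a[1] - b[1]))   # tiny difference
# ...but errors roughly double each step, so after a few dozen
# iterations the two trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Each iteration roughly doubles the initial error, so a 1e-7 discrepancy swamps the signal within a few dozen steps — which is the point: if a one-line equation defeats prediction this badly, a whole economy does worse.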
Occasionally, we blunder into a policy issue where the behavior of the system is almost linear. Preventing monopolies and requiring bank reserves and things like that are solutions that operate either in simple areas of an economic system, or act to reduce excursions that are so obviously bad that we're almost certain that the disease is worse than the cure.
However, here I am, yet another pessimistic engineer, modeling stuff in my head on public policy. Being a pessimistic engineer, I can spot the soft spots in the policy and I can propose solutions.
And then I'm hooked. I become an optimistic engineer. I want to twiddle with the system to implement my proposal for fixing the problem I found. But of course I'm now in a position where I actually think we ought to do something to modify the system, even though I know intellectually that whatever I propose is likely to have flaws as bad as or worse than the ones I was attempting to solve.
If I'm feeling incredibly disciplined, I'll eventually realize that the proper solution to a social engineering problem is usually to do nothing or very little. I'll also remember that, even though I probably can't engineer a policy to make things better, the emergent properties of complex systems will usually self-organize to create a better (and stranger) solution than I could possibly have imagined.
I think this makes me a conservative. It may also go a long way towards explaining why the majority of public policy geeks are liberals. Even if you're a pessimistic social engineer, the temptation to fix problems in other people's solutions is irresistible, eventually turning you into a temporarily optimistic engineer. You want to see something done.
The fact that what you propose doing will almost certainly make things worse never occurs to you until much later.