Sunday, September 11, 2016

In Which I Go Full Tinfoil Hat on the SpaceX Explosion

The SpaceX explosion last week was a big bummer.  But it turns out that it's merely the leading edge of The Alien Invasion.

Well, probably not.  But something very weird is going on.  Here's a GIF I built (lovingly from screen shots) of the video that everybody's now seen:

See the little speck coming in from the right?  If you go to the video (watch it in HD to do this right) and step through at full screen (which it turns out you can do in YouTube by pausing, then hitting '.' to go forward and ',' to go back--who knew?), it looks like a bird.  It even kinda flaps.  But there's something strange about this "bird":

It's moving at about mach 1.

Now, if you watch the video, there are all kinds of birds and bugs flying around between the camera and the rocket.  And of course, the closer those critters are to the camera, the faster their apparent angular velocity, so we shouldn't get too worried about them.

Except this "bird" seems to go behind the northeast lightning tower.  That means that it's actually behind the vehicle, about 2.7 miles away, not gamboling carefree a few yards in front of the camera.  There's exactly one frame where the "bird" is in the line-of-sight of one of the towers.  It's one of the last frames in the GIF, in the leftmost tower.  Here's a blowup of it:

This isn't quite definitive, but it sure looks like it's behind to me.  So let's run with it:

The slowest an object in the field of view can be moving is when its motion is perpendicular to the line of sight.  To work out that line of sight, I used Google Maps/Earth to match the view of SLC-40 we see in the video.  That gives us a rough bearing:  west-southwest.

To get it more accurate, we can time the gap between when we see the first explosion on the video and when we hear it on the audio.  It comes out to 12.5 seconds.  Assuming that it was 80° F at test time (the high was 88, the low was 74, and it was a morning test), the speed of sound would have been 1139 ft/sec.  That makes the distance from the vehicle to the camera (or at least to the microphone) about 2.7 miles.
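That calculation, spelled out (remember, the 80° F temperature is an assumption, not a measurement):

```python
# Rough check of the distance estimate: speed of sound in air from
# temperature, then distance from the 12.5 s light-to-sound delay.

def speed_of_sound_fps(temp_f: float) -> float:
    """Approximate speed of sound in air (ft/s) at a given temperature (F)."""
    rankine = temp_f + 459.67
    return 49.03 * rankine ** 0.5

delay_s = 12.5                      # gap between seeing and hearing the blast
c = speed_of_sound_fps(80.0)
distance_ft = c * delay_s
distance_mi = distance_ft / 5280.0

print(f"speed of sound: {c:.0f} ft/s")        # ~1139 ft/s
print(f"distance:       {distance_mi:.2f} mi")  # ~2.70 mi
```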

Turns out that there's a yard with a bunch of junk in it 2.7 miles from the pad to the WSW.  Here it is:

So now we have a pretty good bearing.

Note that, in the GIF, the flying object is right over a hemispherical pressure tank to the right of the image, and it takes exactly 12 video frames, each @ 33.3 ms, to get to the point where it's behind the tower.  That's almost exactly 400 ms of time.

If you draw a line that's perpendicular to the line of sight behind these two objects, you get a distance of about 470 feet:

470 ft in 400 ms gives you a speed of 1175 ft/sec.  Remember the speed of sound was 1139 ft/sec?  That "bird" is moving at just a tad over mach 1.
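In code, using the 30 fps frame timing assumed here:

```python
# The "bird's" apparent speed: 12 frames to cover ~470 ft measured
# perpendicular to the line of sight, assuming 33.3 ms per frame.

frames = 12
frame_ms = 1000.0 / 30.0            # 33.3 ms per frame at 30 fps
distance_ft = 470.0                 # read off the map, perpendicular to LOS

elapsed_s = frames * frame_ms / 1000.0   # ~0.4 s
speed_fps = distance_ft / elapsed_s      # ~1175 ft/s
mach = speed_fps / 1139.0                # vs. the local speed of sound

print(f"{speed_fps:.0f} ft/s = mach {mach:.2f}")
```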

Birds don't do that.

This is all nuts, obviously.  Even if the... I guess we'd better call it the "flying object", and at this point we can't exactly call it identified...  really is moving at mach 1, it never touches the vehicle, so it would have to:
  1. Somehow release something to bomb the rocket.  This is probably at the hairy edge of possible with technology that's sorta-kinda almost there.  You could probably trash the thing pretty well with a decent-sized ball bearing at that speed.  Or...
  2. Use some kind of death ray as it flew past.  But this simply doesn't exist in a bird-sized form factor.  I'm not sayin' it's aliens (but it's aliens...).
Massive holes in both of these:  First, when you drop things, they can't suddenly leap ahead of you, so our ball bearing has to have some form of propulsion.  (Aliens!!!!)  And second, why would you go to all this trouble when it would be easier to plant somebody in the scrub a mile or so away with a .50 caliber sniper rifle?  (Unless they were aliens...)  And of course there ain't no stinkin' death rays (except the ones created by aliens...).

Finally, if we've got a trans-sonic UFO, where's the sonic boom?  Well, there might be an answer to that one.  If you listen to the video with headphones, starting at about 1:15, we can hear the following events (the ranges are frame numbers from that spot):

000-000 Distant thump, left channel
030-030 Distant thump, right channel
044-050 Metal-on-metal squeal
096-103 Metallic clank
125-125 Click
174-175 Hiss from some sort of pressure release
251-251 First explosion heard

Remember, this is all happening 2.7 miles away on a humid Florida morning, which means that any high end sound will have gone bye-bye long before it reaches the mic.  So the metal-on-metal squeal, the clank, the click, and the hiss are probably from some source much nearer to the mic.  But the distant thumps sound an awful lot like sonic booms.  Unfortunately, you'd expect to hear them first to the right, then the left, instead of the way they are on the video.  But it's not inconceivable that somebody plugged their boom mic in backwards and didn't notice.  That would give a pretty reasonable accounting of the object from a sonic perspective.

And, since the tinfoil hat fits very snugly this evening, two more things:
  1. If you watch the "bird's" "wing" two frames after the explosion, it's illuminated by the flash.  That's a lot more likely to happen if the wing is behind the vehicle and reflecting it back toward the camera.  (On the other hand, if it's a bug instead of a bird and very close, light might shine through a transparent wing.  Doesn't look like a bug, though.)
  2. What's Elon Musk doing asking for photos and videos from the public?  They've got that pad blanketed with high-speed cameras.  It doesn't make sense unless SpaceX thinks that the cause might be external to the pad (or maybe external to this world...).
That's all I've got.  If it turns out that there's some artifact that makes the bird/bug/UFO actually in front of the tower, then we're done.  But the evidence for it being behind the tower is pretty compelling.

So yeah:  It's aliens.  They want to scuttle our space program, and nobody told them, "Klaatu berada nikto".

Update, in the cold light of day:  It's a bug. It's a particularly determined and disciplined bug that flies in an almost-straight line, but it's a bug nonetheless.  It has legs hanging down.  It has transparent wings, visible in more than one frame.  It doesn't quite fly in a straight line, or at the same speed.

As for it going behind the tower, it looks like it's an artifact of the video encoding.  During the early phases of the explosion, another bug, moving lower left to upper right, crosses the southwest tower after being obviously in front of the explosion (you can see it silhouetted against the fireball), and it shows the same sort of artifact, where the encoder prioritizes the in-focus static structure and simply refuses to render the moving stuff in front of it:

Oh, one other thing:  The GIF was made from a 1080p60 video, so each frame is only 16.67 ms.  So the object would be moving at mach 2, not mach 1.

And it did occur to me that one of the reasons that Elon would want other media is because if you want to prove conclusively that it's a bug, all you need is a shot from any other angle--which won't have the bug--and you're done.

So:  not aliens.  But of course that's exactly what they want us to think...

Yet Another Update, 9/18/16:  I've clearly been retired from the MPEG business for too long, because there's an obvious explanation for the video artifact.

MPEG video (H.264, in this case) encodes three different kinds of frames.  I-frames occur periodically and encode the entire picture.  In between the I-frames (which have to be very large) are P- and B-frames, which encode only changes:  P-frames predict from earlier frames, and B-frames can predict from both earlier and later frames.

If you've got a bug flying in front of the tower, the tower is obviously not moving, so the pixels covering the tower (encoded in 16x16-pixel squares called macroblocks) would only be re-encoded in a subsequent P-frame if something substantially different covered up part of the picture.

Encoders are all a little bit different, but a huge amount of engineering time and effort has been spent on deciding when the human eye would detect that something about the picture has changed, and then only generating the relevant macroblocks in the P-frame if that threshold was exceeded.  Since the part of the bug that covered up the tower is pretty much the same color and greyscale as the tower itself, the encoder simply didn't generate the macroblock.  On the other hand, the part of the bug that's silhouetted against the sky is an obvious change, so the encoder updated that macroblock.

Result:  only the part of the bug against the sky was encoded, making it look like the rest of the object is behind the tower.
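As a toy illustration of that skip decision (real encoders use much more elaborate rate-distortion metrics, and the threshold here is invented for the example):

```python
# Toy model of the P-frame skip decision described above.  A macroblock is
# re-encoded only if it differs "enough" from the reference frame; a bug
# whose pixels nearly match the tower falls under the threshold and gets
# dropped, while the same bug against the sky gets encoded.

def encode_p_frame(reference, current, threshold=8.0):
    """Return the set of macroblock indices that get re-encoded.

    reference, current: lists of macroblocks, each a flat list of pixel
    values.  A block is re-encoded when its mean absolute difference from
    the reference exceeds the threshold.
    """
    updated = set()
    for i, (ref_mb, cur_mb) in enumerate(zip(reference, current)):
        sad = sum(abs(r - c) for r, c in zip(ref_mb, cur_mb))
        if sad / len(ref_mb) > threshold:
            updated.add(i)
    return updated

tower = [100] * 256          # grey tower macroblock (16x16 pixels)
sky = [220] * 256            # bright sky macroblock
bug_on_tower = [104] * 256   # bug pixels nearly match the tower
bug_on_sky = [110] * 256     # bug silhouetted against the sky

reference = [tower, sky]
current = [bug_on_tower, bug_on_sky]
print(encode_p_frame(reference, current))   # {1}: only the sky block updates
```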

Tinfoil hat completely removed and tossed in the garbage.

Friday, May 27, 2016

The Decline and Fall of the Low-Skill Labor Market

I've finally gotten around to drawing some diagrams that help explain why the bottom half of the income distribution in the US is (quite rightly) so fearful, angry, and... well, Trumpy.

For convenience, I'm going to divide the job market into low-skill (you can get the job and work immediately if you have a couple of years of high school), medium skill (you don't need book smarts, but you do need to be trained or at least practice your trade to get good at it), and professional (you need extensive training and theory before even being hired, and then you need substantial experience on the job to get good at it).

Here's a chart of the three labor markets, ca. 1980, and how they got their labor, and how that labor eventually flowed up and out of them:

In 1980, all three of these segments functioned well, because there were numerous pathways between the low- and medium-skill markets.  High school kids or low-skill immigrants would start with a job that would give them the work habits they needed either to get promoted into a medium-skill position or to find a medium-skill job based on their resume.  Because the low-skill market was a stepping stone, it had high turnover, supply matched demand pretty well, and wages were stable.

Only people with a reasonable chance of completing higher education went to college.  The best of them got professional jobs.  Those that finished or almost finished could expect that their major mattered a lot less than the credential, and could expect to be ushered into at least a decent medium-skill job.

By the oh-oh's (I'm still flogging that one over the "aughts"--it seems to capture the basic mood considerably better), some bad things were starting to happen to the low-skill labor market, and they're really kicking in right now:

The root of all of these problems is that automation kicked in first in the medium-skill jobs.  All of those knowledge and clerical jobs got their productivity boosted enormously by automation.  They may not have been done completely by a robot, but some hunk of software allowed the job that used to be done by ten people to be done by one or two.  The other eight or nine joined the ranks of what I think of as the "previously skilled":  they're people who have good work habits and demonstrable ability to learn a medium skill job, but there simply aren't enough medium skill jobs to go around.

Some of them got repurposed into organic job growth, or into new niches that opened up as businesses changed.  That's just the old "creative destruction" dynamic and, while it's uncomfortable for the displaced worker, it's not a disaster.

What is a disaster is that when the medium-skill labor market isn't growing very well, there's no need to go fishing for new recruits from the low-skill market; the ones you have will do just fine.  So suddenly, the best that the high school grads, the low-skill immigrants, and the previously skilled can hope for is to hang onto the low skill job that they landed.  Mobility up into medium-skill positions dries up.

The dearth of medium-skill jobs has had another effect:  All of those college kids that graduated with a philosophy degree or a BA in English Literature?  There are people floating around in the medium skill pool who have better qualifications for the remaining jobs than they do.  What growth there is in that market can be filled by explicitly hunting for grads who are trained in the fields that they need.  No need to pay any attention to those "well-rounded" applicants.  So even workers who graduated find themselves playing Chutes 'n' Ladders into the low-skill pool, with no way out.  Maybe we should call these people the "inappropriately skilled".

But it gets worse still.  Not only have we clogged the outlet from the low-skill pool but we've also added in new sources of bodies, in the form of the previously- and inappropriately-skilled.  So we have the supply of low-skill labor exploding and, while demand in the market is growing at a pretty normal rate, it can't possibly keep up with the supply.  That means that low-skill wages are going down.

Note that the upper end of the job market--the professional jobs and the medium-skill ones that consume experienced workers or appropriately educated grads--is still working just fine.  But the low-skill pool is in terrible distress.  Those jobs don't provide a living wage and they are now truly dead-end jobs.

The only way out is through some sort of retraining, but retraining is expensive.  And how do you support yourself while you're in school?  Even worse, the odds of a retraining program actually helping you aren't great.  If you're coming from the high school track, you may not have the study skills necessary to learn something new.  Humans start to lose their plasticity in their twenties, so suddenly becoming a good student if you weren't one already is not a very good bet.  And even if you are retrained, the medium-skill pool is still in flux; there's no guarantee that the job you're training for won't be automated by the time you're competent to perform it.

If you're in a dead-end job that doesn't pay a living wage, social problems start to set in.  You start losing jobs, because they're all equally dismal and all equally non-remunerative.  You go through periods of unemployment, which makes it even less likely that you'll climb back into the medium-skill stuff.  Eventually, you may drop out of the labor market entirely.

So what happens from here?  I'd like to say that things get better, but I doubt it:

The next big, big thing to hit the beleaguered low-skill market is that, just as automation stunted demand in the medium-skill sector, it will soon do so in the low-skill market.  Add in self-driving trucks, robo-burger-flippers, and the odd construction bot, and suddenly we have the same dynamic of driving people out of the market with no way to get back in.  But now there's no place to go.  The chute drops you into unemployment, with no ladder back out.  Eventually, you become discouraged and drop out of the labor force.

Note also that the job market for new high school graduates becomes wildly untenable.  They may bull their way into the few remaining low-skill jobs, but the "unemployed" bin will always beckon.  Some of them may not even make it into the labor force at all, graduating directly into long-term unemployment.

About the only bright spot in the picture is that the education system will eventually adjust.  It's likely that it will have better success choosing qualified applicants (and the applicants will have better success in choosing a sustainable career), and better pedagogy can make the educational experience more successful.  But none of these trends is likely to halt the relentless attack of the robots.  Unless you possess extraordinary skills, you're in for a very uncertain future--one where downward mobility is unlikely to be just a temporary setback, but instead a sharp ratcheting down of one's standard of living, with little hope of recovering it.

There are four policy implications for this, three medium-sized ones and one huge one:
  1. Minimum wage laws are counterproductive, because they encourage automation.  Even local minimum wage laws can be devastating, because the robo-burger-flipper only has to be invented once, in LA or Seattle, for the innovation to poison the entire labor pool.  Forgoing minimum wage increases won't stop the progression of automation, but at least it can avoid accelerating it.
  2. I've avoided talking about immigration above, but it's pretty clear that you need to consider low-skill immigration separately from medium- and high-skill immigration.  Yes, everybody's annoyed that evil employers are cheating on H-1B visas, but most of those are rolling into stuff that straddles the line between medium-skill and professional.  Those immigrants may displace domestic labor, but they're unlikely to send the displaced down the chute.  On the other hand, low-skill immigration is yet another inlet into the low-skill labor pool.  It's not even close to being the most serious of them, but it sure doesn't help.  I can't really justify an immigration policy that makes things even worse for the most economically vulnerable natives.  At the very least, we ought to take a breather on low-skill immigration.
  3. I also haven't talked about offshoring and outsourcing above.  That's because those are really just variations on the automation theme.  It's true that we're displacing medium-skill labor with cheap overseas labor, but that labor would be useless without numerically controlled drill presses and just-in-time logistics and remote management.  If you were suddenly to repatriate all of those offshore, previously medium-skill jobs, you'd discover that they'd all fallen into the low-skill category--which is why they were able to go offshore in the first place.  Ending offshoring (or free trade, or whatever hideous Trumpian nightmare is being proposed this week) is a lot like goosing the minimum wage:  If anything, it will hasten the end, rather than making things better.
  4. And now for the big one:  That circle in the upper right-hand corner of all three charts, the one that says "retired or out of labor force"?  That's just a giant entitlement sink, soaking up every possible federal and state dollar that can be scrounged--usually from things that are essential governmental functions.  We're going to need to figure out how to build a functional welfare state, or we're going to have to learn to live with a lot of our fellow citizens leading third-world existences--complete with third-world violence and the occasional armed insurrection.
That welfare state, or the lack thereof, is the thing that puts us in the direst peril.  We're almost certainly going to have to raise taxes on the functional parts of the economy, at least a little, but even more important is that we're going to have to reform the system so that we acknowledge that a whole bunch of people can't just be coerced into working if only we squeeze them hard enough.

When you actually look at how the money is doled out today, a lot of it goes to things that aren't particularly helpful in keeping one's head above water if one is unemployable.  We're spending a lot of money on Pell grants for inappropriate education (see above).  We're sending Social Security old-age benefits and Medicare to some people who don't need them--or at least who don't need them as much as somebody with no income or wealth at all.  And, maybe the biggest sink of all, we're spending huge sums on Medicaid, which may provide mediocre health care for poor people, but it can't keep them housed, clothed, and fed.  Somebody needs to take a long, hard look at Maslow's hierarchy and do some very unpleasant things.

Saturday, February 6, 2016

Security for Wind and Solar Energy

I've slowly become convinced that an electric power grid that was predominantly wind and solar might actually work and be cost-competitive.  It obviously depends on a big maturation of energy storage technology, but that seems to be moving forward.  As long as it scales as fast as fossil plants or nukes, and is as cheap, reliable, and dispatchable, wind and solar can fill the bill.

But in addition to those usual metrics for the power market, there's another that we don't talk about very much:  security.

We've had a couple of high-profile hacks on power grids recently.  I'm unconvinced that the software security issues for renewables are any worse than those for fossil fuels or nuclear.  But what if a state or non-state actor embarked on a coordinated sabotage campaign?

It's relatively hard to bring down power lines, and power networks are relatively robust.  Switching stations are small and can be effectively guarded.  Generating plants are already well guarded, and the cost of attacking them would be high for any organized group of saboteurs.  Dams are hard to destroy without specialized explosives, and they're easy to guard.

Solar and wind farms are kind of a nightmare, though.

First, they're big.  They consume a lot of area.  They require a lot of fencing to secure.  In many cases, the fencing has to accommodate public and private rights of way.  And even if you have the fencing, guarding it will be hugely expensive.

Once you get through the fencing, you can take out solar panels with a hammer.  Or a sand blaster.  Or acid.  In some cases, exploding some sort of corrosive agent over a solar farm could degrade a significant chunk of it all at once.  How hard would it be to rig a couple of drones to spray hydrofluoric acid down a row of solar panels?

Wind farms are a little harder to take out with low-tech sabotage, but they're even more distributed than solar farms.  And wind turbines have much higher power density per localized area.  It doesn't take much explosive on a wind turbine pillar to take out more than a megawatt of nameplate capacity.  And what happens if you fly a drone into a turbine blade, at just the right spot, with just the right hardware, to inflict maximum damage?

Rooftop solar might be a bit more immune from sabotage.  But how many solar panels do you need to take offline before the power company has to make big changes to the external power delivery to a neighborhood?  And I'm still unconvinced that the bulk of residential power is going to come from rooftops, especially in cities.

One of the nice things about coal, gas, and nuclear is that the power is generated indoors, inside hefty buildings, with simple fencing and easy tasks for guards.  Wind and solar must be sited outdoors, with fragile generation mechanisms.  We should think through the security issues carefully.

Friday, April 3, 2015

The Iran Nuclear Program and the Boiling Frog Syndrome

The framework is out for the Iran nuke deal, and the shrieking has begun.  However, based on a little arithmetic that I'm kicking myself for not having done about four years ago, the deal may--now--be the best we can do.

From the framework, it is asserted that Iran currently has 19,000 centrifuges deployed and 10 tonnes (metric tonnes, each equal to 1000 kg) of low-enriched uranium (LEU).  I knew about the 19,000 centrifuges, but the 10 tonnes of LEU took me by surprise.  And it's the cause of the self-kicking.

It turns out it's a lot harder to take natural uranium (NU), which is about 0.7% U-235, and enrich it to 3.7% LEU than it is to take 3.7% LEU and enrich it to 90% highly-enriched uranium (HEU).  Here's a handy blurb on how it's done, but the key graph, reproduced below, is the second one:

This takes a little explaining.

First, we're dealing with a standard metric of how much energy it takes to separate a given amount of feed stock uranium of a particular assay to a product stock of a particular enriched assay, called a separative work unit (SWU).  Reading the definition will make your head hurt, but the SWU is very handy, because it tells you how much spinning of centrifuges has to happen to get to a certain level of enrichment.  And if you know how many SWUs a year a particular centrifuge can produce, you can figure out what the enrichment capacity is.

The IR-1 centrifuges that make up most of Iran's separation capacity did about 0.7-0.9 SWUs per centrifuge per year by the end of 2011, and I'd guess that performance has improved somewhat in the intervening three years.  It's gonna be a little less than 1 SWU/year-centrifuge.  I'm going to call it 0.9 SWU/year-centrifuge but remember that that number is an educated guess.

But back to the graph above.  This is the enrichment curve for 1 tonne of NU feedstock.  The horizontal axis is the enrichment level that that uranium has reached, and the vertical axis tells you how many SWUs you have to apply to get there.  Labeled along the curve are some key milestones for typical LEU (4%, although we're going to eyeball that back to 3.7% in a moment, because that's what the agreement implies), and the amount of enriched product stock you get out of that original 1 tonne of feed.  The rest of the stock is assumed to be "tails", or "depleted", meaning that it's had most of the U-235 removed from it (somewhere about 0.2% U-235).

Note that it takes about 800 SWUs to turn the 1 tonne of NU feedstock into 133 kg of 3.7% LEU, but it only takes another 500 SWUs to turn that 133 kg of LEU into 5.6 kg of 90% HEU, fully capable of going boom.

That sounds weird.  Why does it take fewer SWUs to make the stock much more enriched?  The answer lies in the amount of material you're enriching.  You go from 1 tonne of NU to 133 kg of LEU, and then to only 5.6 kg of HEU.  SWUs measure the work applied to a quantity of feedstock.  If you treat the LEU, rather than the NU, as the feedstock for the HEU, there are far fewer kilograms to work on, so the total work is smaller even though the enrichment goes much deeper.

So, to get 1 kg of HEU from NU, you need to put in 232 SWU/kg.
But to get 1 kg of HEU from 3.7% LEU, you only need to put in 89 SWU/kg.  Call it 100 SWU/kg.

Another thing to note:  Enriching from NU to LEU reduces the amount of product by a factor of 7.5.  Enriching NU to HEU reduces the amount of product by a factor of 179.  But enriching LEU to HEU only reduces the amount of product by a factor of about 24, which we will round off to 25 for ease of use.
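For the curious, the graph's numbers can be roughly reproduced from first principles with the textbook separative-work formula.  This is just a sketch:  it uses the standard value function and the 0.2% tails assay mentioned above, and the modest differences from the graph's ~800 and ~500 SWU presumably come down to rounding and tails assumptions.

```python
import math

# Sanity check on the graph's SWU figures using the standard value
# function V(x) = (2x - 1) * ln(x / (1 - x)) plus a feed/product/tails
# mass balance.  Assays are U-235 mass fractions; 0.2% tails assumed.

def V(x: float) -> float:
    return (2 * x - 1) * math.log(x / (1 - x))

def enrich(feed_kg: float, xf: float, xp: float, xt: float = 0.002):
    """Return (product_kg, swu) for enriching feed at assay xf to assay xp."""
    product_kg = feed_kg * (xf - xt) / (xp - xt)
    tails_kg = feed_kg - product_kg
    swu = product_kg * V(xp) + tails_kg * V(xt) - feed_kg * V(xf)
    return product_kg, swu

# Stage 1: 1 tonne of natural uranium (0.711%) to 4% LEU (the graph's label)
leu_kg, swu1 = enrich(1000.0, 0.00711, 0.04)
# Stage 2: that LEU on to 90% HEU
heu_kg, swu2 = enrich(leu_kg, 0.04, 0.90)

print(f"LEU: {leu_kg:.0f} kg after {swu1:.0f} SWU")   # ~134 kg, ~880 SWU
print(f"HEU: {heu_kg:.1f} kg after {swu2:.0f} SWU")   # ~5.7 kg, ~410 SWU
```

The headline fact survives the sloppiness in the constants:  the second jump, LEU to bomb-grade, costs noticeably less separative work than the first.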

So how much 90% HEU do we need for a bomb?

The Little Boy bomb dropped on Hiroshima was a "gun-type fission weapon", and it required 64 kg of 80% HEU.  Using conventional cannon technology, it fired a hollow cylinder of HEU over a thinner cylinder of HEU that fit just inside the hollow, to create a critical mass in the world's nastiest game of ring-toss.  It had a yield of about 15 kilotons.

The Fat Man Nagasaki bomb used plutonium in an "implosion" configuration.  In this, shaped charges went off to compress a solid sphere of plutonium to criticality.  Harder to do, but it requires less material, and it generates a better yield.  Fat Man yielded 21 kilotons using 6.2 kg of Pu-239.

Turns out that you can build implosion weapons out of HEU as well, and the state of the art requires a lot less material.  This paper examines three different implosion technologies for both HEU and Pu bombs, labelled "low-tech" (equivalent to Fat Man technology), "medium-tech", and "high-tech".  The "medium-tech" scenario, of which I'd think we'd have to assume a country like Iran was capable, requires only 9 kg of 90% HEU to produce a 20 kiloton bomb, equivalent to what was used on Nagasaki.

Now let's go back to our 10 tonnes of Iranian LEU, and 19,000 centrifuges:
  • 10,000 kg of 3.7% LEU / 25 LEU-to-HEU factor = 400 kg of 90% HEU.
  • 19,000 centrifuges * 0.9 SWU/yr-cfuge = 17,000 SWU/yr.
  • 17,000 SWU/yr / 100 SWU/kg = 170 kg/yr of HEU.
  • That's 14 kg/month of HEU.
  • So, assuming Iran has a bomb design ready to go (a conservative bet), it could enrich its whole stock of 10 tonnes of LEU and produce 1.5 Nagasaki-sized bombs a month, for two and a half years, ending up with 44 bombs.

Holy shit.

Iran can break out whenever it wants.  It can have 3 bombs, one to test and two to deploy, in two months, give or take some machining and assembly time.  If the IAEA comes to inspect once a month, one trumped-up excuse to force it to skip a visit is all the wiggle room they need.
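The bullets above reduce to a few lines of arithmetic; all the rule-of-thumb constants come from earlier in this post:

```python
# Breakout arithmetic, packaged as a function so different stockpile and
# centrifuge scenarios can be compared.  Rule-of-thumb constants from
# this post: 25 kg of 3.7% LEU per kg of 90% HEU, ~100 SWU per kg of HEU
# starting from LEU, and 9 kg of HEU per medium-tech implosion bomb.

LEU_PER_KG_HEU = 25.0
SWU_PER_KG_HEU = 100.0
KG_HEU_PER_BOMB = 9.0

def breakout(leu_kg, centrifuges, swu_per_centrifuge_year=0.9):
    """Return (total HEU kg, bombs supported, months per bomb)."""
    heu_kg = leu_kg / LEU_PER_KG_HEU
    heu_per_month = centrifuges * swu_per_centrifuge_year / SWU_PER_KG_HEU / 12
    bombs = heu_kg / KG_HEU_PER_BOMB
    months_per_bomb = KG_HEU_PER_BOMB / heu_per_month
    return heu_kg, bombs, months_per_bomb

# Pre-deal: 10 tonnes of LEU, 19,000 centrifuges
heu, bombs, m = breakout(10_000, 19_000)
print(f"{heu:.0f} kg HEU, {bombs:.0f} bombs, {m:.2f} months/bomb")
```

Calling `breakout(300, 5_000)` runs the same arithmetic against the deal's caps.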

Based on that, a deal that limits Iran to about 5000 centrifuges and 300 kg of LEU on-hand sounds fuckin' awesome.

But even with the deal, the "one year break out" is silly:
  • 300 kg of LEU / 25 LEU-to-HEU factor = 12 kg of 90% HEU
  • 5000 centrifuges * 0.9 SWU/yr-cfuge = 4500 SWU/yr.
  • 4500 SWU/yr / 100 SWU/kg = 45 kg/yr of HEU.
  • That's 3.75 kg/month of HEU.
  • We can't use the same 3-bomb criterion for a break-out, because there isn't enough LEU to support 3 bombs' worth of HEU.  But a 1-bomb break out would take 2.4 months.
That doesn't sound like "more than a year" to me.  You can argue whether one untested bomb constitutes a break-out, of course.  But consider the following two game propositions:
  1. If I respond to a breakout, I run a risk that the one bomb will be successfully used against me.  Upside: better foreign policy leverage = medium.  Downside * probability of downside: Huge times kinda small = medium.  Downside-to-upside ratio: 1.
  2. If I don't respond to a breakout, there is no risk that the bomb will be successfully used against me.  Upside:  Maybe the horse will sing = small.  Downside * probability of downside: foreign policy hemmed in * medium probability = medium.  Downside-to-upside ratio:  less than 1.

But I digress from the main point by pointing out that even the fuckin' awesome deal isn't so fuckin' awesome.  And the main point is this:

How for fuck's sake did we allow this to happen?

Let's go back to the IR-1 centrifuge performance blurb.  There we learn (fig. 1) that Iran had only 7000 centrifuges in May, 2009, and this NYT article states that Iran had about 1 tonne of LEU in February of 2009.

Let's run the numbers:
  • 1000 kg of LEU / 25 LEU-to-HEU factor = 40 kg of 90% HEU
  • 7000 centrifuges * 0.6 SWU/yr-cfuge (based on performance estimates cited above--yaaaay Stuxnet!) = 4200 SWU/yr.
  • 4200 SWU/yr / 100 SWU/kg= 42 kg/yr of HEU.
  • That's 3.5 kg/month.
  • Using the 3-bomb breakout criterion, that's a 7.7 month breakout time.
So, being charitable, the Obama Administration's "deal" pretty much undoes the damage they allowed to happen on their watch.

So how did we get to 10 tonnes and 19,000 centrifuges?

Well, I have to admit that I feel pretty stupid for not running these numbers back then.  There's no magic here.  But the real answer appears to be that the administration was... really vague, in a happy-talk kinda way, without ever actually lying its ass off:
If Tehran has no hidden fuel-production facilities, to create a bomb it would have to convert its existing stockpile of low-enriched uranium into bomb-grade material. International inspectors, who visit Natanz regularly, would presumably raise alarms. Iran would also have to produce or buy a working weapons design, complete with triggering devices, and make it small enough to fit in one of its missiles.

The official American estimate is that Iran could produce a nuclear weapon between 2010 and 2015, probably later rather than sooner. Meir Dagan, the director of the Mossad, Israel’s main spy agency, told the Israeli Parliament in June that unless action was taken, Iran would have its first bomb by 2014, according to an account in the Israeli newspaper Haaretz that Israeli officials have confirmed.
It's hard not to come to the conclusion that things were pretty bad when Obama took office, and now they're not only bad in a "Waaaahhh, Iran's going to destabilize the region" kind of way; now it's more like "Oh shit oh god, Iran could have a strategic nuclear capability in about two years".

I've been pretty sure that Obama and Clinton's booting of the Status of Forces Agreement for Iraq was the single worst foreign policy blunder of the administration--until now.  This situation's gone from a grease fire in the kitchen at the beginning of Obama's tenure to a five-alarm fire.

Make no mistake:  the deal, if it actually gets done, is not only pretty good, it is now existentially necessary.  But it should never have come to this.  Like the frog that sits in the pot of water and slowly boils to death, we just assumed that the constant accumulation of stockpile and capacity would never make a qualitative difference.

Wednesday, November 5, 2014

Who Pays for Obamacare?

The dust seems to have settled on ACA a bit, so now is probably a good time to look at who's paying for what.   A lot of this question is boring and obvious:  when it comes to things like Medicaid expansion and the exchange subsidies, the answer is simply that we all are.  Revenues for those entitlements come out of the general fund.  Sure, there are a vast array of new taxes and the Great Medicare Advantage Robbery, but ultimately the burden of this stuff is spread, if not exactly evenly, at least widely.

Instead, I'd like to look at just the impact of the parts of ACA that deal with the Qualified Health Plans and the exchange subsidies.  In this case, the burden is far from even.

A quick refresher:  ACA mandates that the vast majority of individual insurance policies be Qualified Health Plans.  QHPs have the following attributes:
  1. They have a mandatory set of coverages, including preventive care, psychiatric care, substance abuse treatment, maternity coverage, and a variety of other women's health coverage.  Those coverages cost something.  Pre-ACA, a lot of that cost was borne by women, who were more expensive to insure.  Now those costs are borne by both sexes equally.
  2. QHPs are guaranteed issue, which means that poor risks can't be denied coverage.  Again, costs increase.
  3. QHPs have community rating restrictions.  The one with the largest impact is that premiums based on age are capped at 3 times the best unadjusted rate in the plan.
I have problems with the mandatory coverages and the size of the community rating cap, but in general these are all reasonable.  But they cost something.  The question is how much, and who pays.

To answer this, let's start with some stats that HHS released in June 2014 (see table 2):
  • Before any subsidies are applied, the average premium for an individual QHP is $346/mo.
  • 87% of all people insured with a QHP through the exchanges received subsidies.
  • Based only on those who received a subsidy, the average subsidy was $264.
  • Therefore, the average subsidy reduced the monthly premium to $82/mo.
Next, we need to know how many people are signed up on the exchanges and how many people have purchased non-group QHPs outside of the exchanges.  This should give us a pretty good number for the total number of people in the individual insurance market.  It's not a perfect number, because it doesn't account for grandfathered non-QHPs.  I'm going to ignore the grandfathered plans on the grounds that I'm trying to find out what QHPs are costing, and who's paying for them.  By the same token, I'm ignoring the vast majority of people who are covered by employer-sponsored group health plans.

This site states that, as of this writing (and the numbers have already been revised down two days later):
  • There are 7.3 million people current on their premiums for policies purchased on the exchanges.
  • There are another 8.0 million who have purchased QHPs off of the exchanges.  By definition, none of these people receive subsidies.
Next, now that we know that the average policy costs $346/mo, we need to calculate what it would reasonably have cost in the absence of ACA's QHP provisions.  To do that, we first note that, in 2010, before ACA took effect, the Kaiser Family Foundation calculated that the average individual policy premium was $215/mo.

Now, if we know how much that premium would have gone up due to health care cost inflation, we can come up with a "what if there were no ACA" number for the average premium.  Here are some World Bank numbers for 2010-2012 for per-capita health expenditures:

Year    Expenditure     Percent Change
2010    $8,254
2011    $8,467          2.6%
2012    $8,895          5.1%
2013    $9,125 (est)    2.6% (est)
2014    $9,361 (est)    2.6% (est)

I'm using the 2010-2012 average growth to compute the numbers for 2013 and 2014.  When you do the whole thing, growth from 2010 (where we had the average premium of $215) to 2014 is 13.4%.  That would make the new average premium $244.  Note that this number is conservative; in reality, a lot of the per-capita spending growth occurs with the elderly, who are covered by Medicare.  So we might reasonably expect that the "if ACA didn't exist" premium, based solely on cost inflation, could be even lower.
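For anyone who wants to check the math, here's the whole inflation adjustment in a few lines of Python (the 2013 and 2014 spending figures are my estimates from the table above):

```python
# Reproduce the "what if there were no ACA" premium from the
# per-capita health spending figures above (2013-14 are estimates).
spending = {2010: 8254, 2011: 8467, 2012: 8895, 2013: 9125, 2014: 9361}

growth_2010_2014 = spending[2014] / spending[2010] - 1       # ~13.4%
premium_2010 = 215                                           # KFF average, $/mo
premium_2014_no_aca = premium_2010 * (1 + growth_2010_2014)  # ~$244/mo
```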

OK, we're getting there.  Now, we know that the average person who got a subsidy probably did very well under ACA.  But what about the people who didn't get a subsidy?  On average, they're paying $346/mo for a QHP, when without the ACA laws they could expect to be paying $244.  So they're paying $102/mo, or $1224/yr, more under ACA.

We're now in a position to calculate the total amount spent on the QHP provisions and subsidies for ACA.

87% of the 7.3M on the exchanges get a subsidy of $264, so 7.3M * 87% * $264/mo * 12 mo = $20.1B/yr in subsidies.  (Note that this number doesn't jibe with the CBO number of $17B in Table 1, presumably because the $20.1B is for a full year and the $17B is for about 7 months of a fiscal year, with February, March, and April having a big ramp-up.)

Then we have 13% of 7.3M = 950K people on the exchanges with no subsidy, and another 8.0M people off-exchange, for a total of 8.9M who are paying the full freight of the new laws, which we computed to be $1224/yr, so they're paying $10.9B/yr.

$20.1B in subsidies + $10.9B for the full freight people = $31.0B a year as the total cost of the QHP and subsidy provisions of ACA.  (Note that I'm leaving out the risk corridor payments.  They make things even worse, but not that much worse.)

Now, I'm going to make the assumption that the $20.1B is distributed evenly across every taxpayer in the US.  The most recent IRS statistics on the number of returns filed in 2012, classified by filing status (Excel), show:

Filing Status              Returns    Taxpayers
Married Filing Jointly     53.7M      107.4M
Married Filing Separately   2.7M        2.7M
Heads of Household         21.8M       21.8M
Surviving Spouses           0.1M        0.1M
Filing Single              66.7M       66.7M
Total Taxpayers                       198.6M

Again, this number is conservative, because it doesn't account for people who paid no federal tax.  But based on this number, every one of those 198.6M taxpayers in the country (feel free to tweak this number up by a bit to account for it being 2014) is paying out $101/yr for the ACA subsidies.  So far so good.

But wait!  The 8.9M people who don't get subsidies are paying not only that $101/yr, they're also paying the $1224/yr for the increased premiums, for a grand total of $1325/yr. That's more than 13 times what the average taxpayer pays.

Put another (and even more horrifying) way, 4% of all taxpayers are paying for ~~35%~~ 38% of the cost of the QHP and subsidy provisions of ACA.
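If you want to audit the whole chain of arithmetic above, here it is in one place, with all inputs as cited earlier in the post:

```python
# Inputs from the HHS/exchange figures cited above.
on_exchange = 7.3e6          # people current on exchange-purchased premiums
off_exchange = 8.0e6         # off-exchange QHP purchasers (no subsidies)
subsidized_share = 0.87
avg_subsidy_mo = 264         # $/mo, among those receiving a subsidy
excess_premium_yr = (346 - 244) * 12   # $1224/yr over the "no ACA" premium
taxpayers = 198.6e6

subsidy_total = on_exchange * subsidized_share * avg_subsidy_mo * 12  # ~$20.1B/yr
full_freight = on_exchange * (1 - subsidized_share) + off_exchange    # ~8.9M people
excess_total = full_freight * excess_premium_yr                       # ~$10.9B/yr
total_cost = subsidy_total + excess_total                             # ~$31.0B/yr

subsidy_per_taxpayer = subsidy_total / taxpayers                      # ~$101/yr
full_freight_burden = subsidy_per_taxpayer + excess_premium_yr        # ~$1325/yr
share_of_taxpayers = full_freight / taxpayers                         # ~4%
share_of_cost = (excess_total + full_freight * subsidy_per_taxpayer) / total_cost  # ~38%
```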

That doesn't sound very fair, does it?

But maybe those 8.9M are very wealthy, you say.  I don't have any data on the income distribution.  We know that their incomes are greater than 400% of the poverty level, or they'd get subsidies.  But consider the small business owner or the independent contractor.  Some of those people are wealthy, but the vast majority are living solidly middle-class existences.  I'd be surprised if the majority of them made more than $100K/yr.  Those are the people we're burdening with a hefty chunk of this law.

On-balance, I favor a lot of the provisions of the QHP and subsidy portions of ACA.  But if you're going to embark upon this grand redistribution of wealth, the least you can do is spread the burden fairly.

UPDATE 11/8/14:  If you take out the filers with less than $10,000 adjusted gross income, you wind up with about 172M taxpayers.  That's probably a better proxy for who's paying for the general fund stuff.  If you use that number, then every taxpayer is paying $122/yr for the subsidies and the 8.9M full-freight people are paying $1346/yr each, for a total of $12.0B.  So then 5% of taxpayers are paying for 39% of the total.  Not a lot better.

Also fixed computation error (see strikethru).

UPDATE #2:  I found a whole bunch of CMS data on per-capita healthcare spending and consumption (see here, here, and here) and was able to cook it down to get a better number for the 2010-2014 increase in personal health care consumption, ages 0-64 (so no Medicare taken into account).  The number is 16.1%.  Using that number and the 172M taxpayers revision would make the following changes to the model:
  • "What if there is no ACA" average premium: $250/mo.
  • Contribution of excess premiums to total cost of QHP and subsidies: $96/mo/person, or $1152/yr/person, for a total of $10.3B/yr.
  • Subsidy cost shared by 172M taxpayers:  $117/taxpayer
  • Total cost of QHP and subsidies: $30.4B.
  • Yearly amount paid by full-freight QHP payers: $1275/yr, or $11.3B.
  • So 5% of taxpayers pay 37% of the total cost of ACA.

Tuesday, December 17, 2013

GDP vs. Productivity Growth, Updated

I keep referencing this chart in comments elsewhere, so I thought I'd update it:

GDP and productivity numbers courtesy of the US BEA and BLS.  The moving average and growth rates are courtesy of Excel.

Friday, November 22, 2013

My Love-Hate Relationship With Like

I write a lot more comments on other blogs than I do original posts on this blog.  Given my massive following of 2 people who may or may not have pushed the wrong button somewhere along the way, it seems like a better way of getting other people to see my writing.  If I'm working something new out, I'll write it here.  If I'm merely pontificating on one of the topics du jour, I'll often do it somewhere else.

An odd thing has happened:  On Disqus, my "like"-to-comment ratio has been rising recently, going from considerably less than one to 2.6 as of today.  What changed?

What changed is I started commenting on sites that agreed with me.

It's a small but distinct rush to discover that thirty or forty strangers read my comment, thought to themselves, "Damn, that TRM is really on to something there," and took the vast quantity of effort necessary to reach for the mouse and click the little up-arrow, or thumbs-up icon, just for me.  If I write a popular comment, I'll go back and monitor it.  I certainly want to read response comments, but I have to confess, I love the likes.

Now, when I comment on sites where people disagree with me, I don't get the likes.  It's a bummer.  It makes me less likely to comment again.

This is a terrible dynamic.

It is fair to say that I've learned more when I was wrong on an issue, and sharpened my arguments more when I still thought I was right, by getting ripped by smart people on a site where I'm arguing against the consensus, rather than by receiving the little squirt of "like" dopamine.  But dopamine will do what dopamine does; I find myself spending more time reading, and commenting on, sites where people will reward me.

I've written ad nauseam about the dangers of the self-selecting properties of the blogosphere (my, that word's become somewhat quaint, hasn't it?).  That dynamic existed before social media took over the web, and it was dangerous even back then.  But the invention of the "like" paradigm has distilled the problem down to its essence and provided the ultimate binary self-selector.  If people like you, you come back.  If they don't, you stop coming.

We're going to have to come to grips with the fact that our most powerful communication tools are terrible for public discourse.  I'm not a fan of the bland network news.  (Speaking of which:  is there some kind of new standards and practices edict that requires somebody to lose their shit and sob uncontrollably in at least one segment of each of the ABC/CBS/NBC broadcasts?)  But we sure could use a medium that had a natural tendency to form consensus instead of sharpening contrasts.

Thursday, November 14, 2013

The Semi-Frozen Flounder of Reality

I'm pretty sure that you're not going to learn much new from Jonah Goldberg's indictment of l'affaire Obamacare, but you're going to laugh so hard that you may pop a ligament in your back.

Friday, November 8, 2013

There's a Fairer Way to Do Universal Coverage

I've spent most of the morning wandering through the endless wasteland that is health care insurance analysis.  First observation:  it's so difficult to find a centralized source of data to answer simple questions that only a modicum of paranoia is required before you wonder if obfuscation isn't in a few people's best interests.  Of course, the less paranoid explanation is simply that this thing is so friggin' complex that nobody really knows what's going on.

But I did find some stuff, mostly courtesy of the Kaiser Family Foundation and the CBO.  I know the source of insurance for the US population in 2011:

It's good to know these stats in actual number of people, too:

Employer       149,350,600
Individual      15,416,100
Medicaid        50,670,200
Medicare        39,996,700
Other Public     3,846,400
Uninsured       48,611,600
Total          307,891,600

(Note that these are 2011 numbers.  The number of uninsured went down in 2012--see below.)

And, of the uninsured, I know the 2012 income distribution:

I know some other handy facts:

  • There were 47.3 million uninsured in the US in 2011.
  • The average individual health care policy cost $2580/year in 2010.
  • The average group plan policy for a single employee cost $5615/year in 2012 (total of both employer and employee contributions).
  • The average group plan policy for a family cost $15,745/year in 2012 (total of both employer and employee contributions).
  • CBO projected cost of exchange subsidies is $2B for 2012, $4B for 2013, and $25B for 2014.  Let's assume $2B a year for administrative stuff and move the other $2B in 2013 to 2014 to account for the giant ongoing exchange clusterfuck, and we'll use $25B for subsidies in 2014.
  • CBO also projects that one third of the people newly eligible for Medicaid will reside in states that will extend coverage in 2014, another one third will be covered by 2015, and the last third will not be covered ever, or at least until after 2015.  In 2014, CBO estimates that 7 million additional people will be covered by Medicaid.
I think we're ready to do some computation now.  So we have 47M uninsured minus 7M going to Medicaid, for 40 million people to insure in 2014.  Let's suppose we get half of them, taking it down to 20M.

Let's goose the $2580/year for the average individual plan premium in 2010 by, say, 16.7% over the last 3 years, making the yearly premium $3010.  At roughly 15 million people in the individual market, we're looking at $45.2B in premiums.  Now we're going to add 20M people to the market who will be, on average, older, sicker, and more pregnant than the current market.  Let's say their unadjusted premiums would be $4000/year.  That's $80 billion more in individual premiums.

  • Total getting individual insurance in 2014:  15M (current) + 20M (new) = 35M.
  • Total amount owed in premiums by those 35M:  $45.2B (current) + $80B (new) - $25B (subsidies) = $100 billion.  Or, to put it another way, 35M people will pay $55B extra in insurance premiums.
  • That's an average of $1570 to be paid by 11% of the population.
Mind you, we're talking about 11% of the population that is much poorer and much more insecure than the US average.  And we're forcing them to pay an extra $1570.

Let's give these people a median household income of $42,000 (US median is $51,000) and the average household size of 2.55.  So $1570 * 2.55 / 42,000 = a 9.5% tax levied on a class of people that is pretty much living hand-to-mouth at best.

In what universe is this fair, to say nothing of compassionate?

Now let's perform a little experiment:  We've got 149 million people who have group insurance worth an average of $15,700 a year per family or, using the same household size (which isn't quite right, because the characteristics of employed households are different from those of unemployed ones, but we're going back-of-napkin anyway), $6150 per person.

If, instead of 35M people, we have 149M + 35M = 184M people paying the extra $55B, that amounts to a tax of $299 per person, instead of $1570.  Using the US household median income of $51,000 and 2.55 people per household, you're looking at $299 * 2.55 / 51,000 = a 1.5% tax paid by 60% of the population.
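Here's the side-by-side arithmetic for the two scenarios, using the assumptions above:

```python
# Extra premium burden under the individual-market scenario above.
new_premiums = 20e6 * 4000        # $80B from the 20M new entrants
subsidies = 25e9                  # CBO 2014 subsidy estimate, as adjusted above
extra_total = new_premiums - subsidies   # $55B

household_size = 2.55

# Scenario 1: 35M individual-market buyers carry the whole burden.
narrow_per_person = extra_total / 35e6                      # ~$1570/yr
narrow_rate = narrow_per_person * household_size / 42000    # ~9.5% of median income

# Scenario 2: spread across group + individual markets (184M people).
broad_per_person = extra_total / (149e6 + 35e6)             # ~$299/yr
broad_rate = broad_per_person * household_size / 51000      # ~1.5%
```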

Want to make ACA fair?  Here's what you do:
  1. Get rid of group insurance.
  2. Remove the caps on health savings accounts.
  3. Force employers to contribute the amount that they previously spent on each employee to that employee's HSA.
  4. Everybody buys individual plans.
  5. Keep community rating as-is.
  6. Get rid of these stupid minimum health benefits and bronze/silver/gold/platinum classifications and let people buy the plans they want.
Want to do something even better?  Dump Medicare and Medicaid into the pool at the same time.

This is obviously going to impact more people than the current law.  And the people it's going to impact are a lot more powerful than the lower-middle-class voters you're hitting with ACA right now.  But either something close to universal coverage is a national imperative, or it's not.  This kind of reform wouldn't make ACA a good law.

But at least it wouldn't be an obscenity.

Saturday, November 2, 2013

Snowden-Induced Cognitive Dissonance

I believe both of the following statements:
  1. Edward Snowden is a straight-up traitor, who has done irreparable harm to the national security of the US, and has damaged its diplomatic standing in the world.
  2. Edward Snowden has done the American people a huge service by revealing to them the extent of the modern surveillance state, and by provoking a long-overdue discussion about the tradeoffs we need to make as we balance freedom and security.
Believing both statements makes my head hurt.  Mr. Snowden asking for the US to stop treating him like a traitor makes my head hurt even more, because the derisive laughter it provokes sends little pressure waves through my skull, exacerbating the pain.

I don't buy that Snowden dumped all of the NSA's secrets out of any civic-spirited impulse for one second.  Snowden is a narcissistic sociopath who wanted to be famous.  If anything else were the case, he would have leaked a minimum amount of information anonymously.  He also would have found more benign host countries than China and Russia.

Still, we have to come to grips with the intersection of Big Data and national security.  Evgeny Morozov had an excellent essay on this topic a couple of weeks ago, where he pointed out that the data collection is only a small part of the problem.  The bigger problem is that the data can be processed in ways that can't be understood, either by the watchers or the watched:
Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?

This logic of preëmption is not different from that of the NSA in its fight against terror: let's prevent problems rather than deal with their consequences. Even if we tie the hands of the NSA—by some combination of better oversight, stricter rules on data access, or stronger and friendlier encryption technologies—the data hunger of other state institutions would remain. They will justify it. On issues like obesity or climate change—where the policy makers are quick to add that we are facing a ticking-bomb scenario—they will say a little deficit of democracy can go a long way.

Here’s what that deficit would look like: the new digital infrastructure, thriving as it does on real-time data contributed by citizens, allows the technocrats to take politics, with all its noise, friction, and discontent, out of the political process. It replaces the messy stuff of coalition-building, bargaining, and deliberation with the cleanliness and efficiency of data-powered administration.

This phenomenon has a meme-friendly name: “algorithmic regulation,” as Silicon Valley publisher Tim O’Reilly calls it. In essence, information-rich democracies have reached a point where they want to try to solve public problems without having to explain or justify themselves to citizens. Instead, they can simply appeal to our own self-interest—and they know enough about us to engineer a perfect, highly personalized, irresistible nudge.
Thanks to Snowden, we can have discussions like this.  And perhaps because of this, maybe the US should be merciful and ignore the man.  But if he ever comes in easy range of US law enforcement, he must be prosecuted to the full extent of the law.

Now where's that Advil?

Obamacare at the Cusp

I can see the following outcomes from the current crisis in the Affordable Care Act, aka Obamacare:
  1. The exchange starts working, the brouhaha over the baldfaced "if you like your policy you can keep it" lie dies down, and ACA settles into a system that, with a few tweaks, actually works.  (This doesn't mean that it's better than the legacy system; it merely means that it becomes the new normal.)  I'll put the odds of this occurring at about 45%.
  2. The outrage builds and ACA collapses quickly and cleanly, with the legacy system remaining in place.  30% probability.
  3. ACA collapses slowly and messily, taking the legacy system with it, and we wind up with a single payer system.  15% probability.
  4. ACA collapses, takes the legacy system with it, and something new emerges from the rubble.  10% probability.
I support the idea that we don't wheel the gurney out of the ER and dump you in the hospital parking lot if you can't pay, so the most obvious solution to getting near-universal insurance enrollment is excluded.  After that, you can use a single-payer system (Medicaid and Medicare), but that will only work for the genuinely poor and won't cover the young invincibles who've had bad luck.  For them, you can either put a mandate in place (which is what ACA did), or you can do something only slightly more humane than dumping them in the parking lot, namely make medical debts undischargeable through bankruptcy.

Long story short, I think there's nothing wrong in principle with the ACA tripod of mandates, guaranteed issue/community rating, and subsidies.  As for the details, ACA has clearly screwed the pooch.

The pain in the individual insurance market is just the beginning, as Megan McArdle has pointed out:
A lot of folks with employer-sponsored insurance are also going to see their insurance changed, though not quite as quickly. And not “The benefits will get so much more awesome!” but “The Cadillac tax kicked in and we had to drop most of our plans except for the ones with high deductibles.” A friend who sits on the benefits committees of two organizations says that their experts predict that pretty much all plans will end up being of the “consumer-driven” (read: high-deductible) model once the so-called Cadillac tax kicks in.
Let's not kid ourselves:  The ultimate point of this little exercise was to change the incentives in health care enough to put downward pressure on prices, so anybody who thinks this was going to happen without significant pain is an idiot.  The question is whether we're going to get the maximum amount of efficiency for the minimum amount of pain out of this system.  I think the answer to that is going to be a resounding "no".

Would a single payer system have a lower pain-to-efficiency ratio?  Yes.  But that's not saying much, and the total amount of efficiency generated would still be quite low.  The idea of US single-payer health care scares the shit out of me;  there are too many interest groups and too many providers bellying-up to the public trough to make single-payer anything other than a disaster that simultaneously damages public health while destroying medical innovation, one of the US's core competencies.

Would the legacy system have a lower pain-to-efficiency ratio than ACA?  I honestly don't know.  Certainly not for the poor.  We also have a lot of evidence that the legacy system sucks at controlling costs.  Ultimately, I think that ACA would wind up being a modest improvement, but it's so complex that there's no telling what unintended consequence will ooze out of the woodwork next.

And it's possible to do this simply, fairly, and honestly--three adverbs that will never be applied to ACA.  Here are the main principles:
  1. You can't dump most of the cost on the individually insured alone, and you have to be honest with the majority who get their insurance from their employers.  The simplest thing to do is to eliminate group insurance and throw everybody into the individual market.  They won't get as good a deal as they do from a group, but they'll be significantly less screwed than the people in the individual market are right now under ACA.  Let the employers put the money they spend on insurance into a health savings account--that ought to keep the tax incentives about the same.
  2. Keep the mandate.  Keep the subsidies.  But just pay them out of the general fund, with a reasonable income tax increase to cover them.  All this revenue sleight-of-hand is ridiculous.
  3. Keep the exchanges, but get the government out of the business of specifying acceptable insurance plans.  They're not smart enough, or adaptable enough, to understand the market and how it will change in the future.
  4. Stop transferring wealth from the young, who have low incomes and low net worths, to the not-quite-old-enough-to-retire, who have high incomes and high net worths.  For that matter, stop transferring wealth to retirees that have a large enough net worth to fend for themselves.  I'm fine with community rating, as long as you rate in age cohorts.  This is the single most immoral thing in ACA.
  5. Whatever you wind up doing, fold Medicaid and Medicare into it.  Yes, lots of poor people are ignorant and lots of old people are stupid--that's just a fact of life.  If you want to produce a menu of government-approved plans for people who are being 100% subsidized, fine.  If you want to set things up so that they can't use their subsidies for anything other than premiums, also fine.  But this maintenance of five huge health programs (Medicare, Medicaid, individual insurance, group insurance, and price increases caused by the poor defaulting on their medical debts) is just stupid.
  6. Never, ever, ever, allow the government to set prices for medical reimbursement.
You could tweak ACA to do all of this, but the opportunity for new and exotic rent-seeking is much higher doing these kinds of tweaks.  It'd be easier to repeal and replace if Obama's fragile ego would allow it.  But it won't.

As usual, the result will be even messier than what we have now.  Eventually, the incredibly stupid parts of the law will atrophy and we'll hopefully be left with a system something like what I outlined above.  Of course, there will be fifteen completely intolerable sacred cows left over, lovingly maintained by the groups that milk them.  So what else is new?

Sunday, October 13, 2013

How American Political Parties Dissolve

As I write this, the US government has been shut down for two weeks, and we are 4 days away from the "X date" where the Treasury Department runs out of workarounds on the debt ceiling and has to stop paying somebody.  (Odds are that this will not lead to a default on bonds, per the link above, but things are going to be dicey.)  I can't decide whether to call the set of House and Senate Republicans driving the standoff the "Obamacare defunders" over their (now fading) insistence on defunding ACA, or just the "Tea Party"; the two groups may be congruent.  I think I'll settle on the Tea Party, with the proviso that there may be a few Tea Party folks who haven't completely bought into this level of brinksmanship.

There are a lot of theories for why we've gotten into a situation where there appears to be no ZOPA over such a fundamental set of government functions.  Some of the most popular:

  • Republican leadership authority has eroded.
  • This kind of brinksmanship is virtually cost-free for Tea Party representatives in safe districts.
  • This is merely a principled stand against something that's worth risking a financial and/or constitutional crisis.
  • This is an excellent way to demonstrate the irrelevance of the federal government to the average voter, sort of a natural follow-on to the Obama Administration crying wolf over the sequester.
  • This is necessary to re-energize the Republican Party.
All of these reasons have some truth in them, but all of them are iffy propositions, resting on extremely dubious judgment calls.  Politicians are notoriously risk-averse, and it's hard to believe that very many of them are willing to bet their careers on any of these rationales.  What's going on?

The Tea Party has been heedless of the damage they're doing to the Republican brand.  I think that's because they're no longer concerned with that brand in its current form.  The Tea Party has been essentially conducting an ideological purge, drawing a clear demarcation between "us" and "them".  Any person espousing the slightest deviation from Tea Party orthodoxy is labelled "them" and is dismissed.  What remains is a clear minority, but it's a minority with a frightening level of unity.

This seems crazy, but there's one situation where it might be astute:  If the Tea Party believes that the moment has come when it can seize control of the Republican Party, then the gamble might be worth it.  The goal of the shutdown and debt ceiling standoff is then to eject Boehner from the speakership and to eliminate the power of the business conservatives and social moderates in the party.  The Tea Party is effectively seceding from the GOP, but it's hoping to carry off enough support to be able to keep the name and the party infrastructure while changing the ideology.

The US has scant experience with the wholesale dissolution of political parties, but what history tells us is instructive:
  • In the early 1800's, the Federalist and Democratic Republican (aka the Jeffersonian Republican) parties had grown so close together in their positions that several internal schisms arose.  The "corrupt bargain" election of John Quincy Adams in 1824 and the follow-on election of Andrew Jackson in 1828 shattered the one-party system, leading to the formation of the Democratic party, which supported Jackson's muscular view of executive power, while the Whigs formed around the idea that, well, they didn't like Jackson.  In this case, the Democrats were reacting to the increasing sophistication of the northern financial and industrial system and wanted it stopped.  In short, they had a strong animating principle.  The Whigs formed in an attempt to preserve the emerging institutions, but their only true animating principle was opposition to the reactionary forces mobilized by the Democrats.  The Democrats won, temporarily; they got to keep half of the party name in the divorce.

  • The Whigs finally got an animating principle:  abolition.  By 1840, it was clear that the US was headed for a massive territorial expansion and, absent significant opposition, the new territories were likely to allow slavery.  The Kansas-Nebraska Act in 1854 shattered the Whigs, with the militant abolitionists forming the Republican Party.  We all know how that ended.
The Tea Party thinks they've found their animating principle:  60% of Americans now believe that the federal government has too much power. They hope to forge a new coalition around the intent to reduce the size and power of the feds.

This is a goal that I support.  Note that it is, despite the negative connotations of the word, a "reactionary" goal:  "Government has gotten too big, and we want the trend reversed."  To that extent, the Tea Party's formation looks much like the formation of the Jacksonian Democrats in the late 1820's.  But there was more than one reactionary thread at work in the 1820's.  When the Democrats formed, the top-line issue was about rolling back financial and industrial power.  But the coalition worked because a strong anti-abolition sentiment flowed through the rural and southern factions.  And that issue ultimately overwhelmed the Democratic Party, leading to 1861.

The Tea Party also has other reactionary threads running through it.  Its members are socially conservative, isolationist, and anti-immigration.  They don't like the current US culture.  The subtext here is more diffuse than was the anti-abolition subtext of the Jacksonian Democrats.  The closest you can come to a one-line description would be, "We don't want to live in a sick, militantly secular culture."  But note that, unlike the 1820's and abolition, nobody's threatening to ban religion in the US.  The cultural forces are more complex than that, and the level of support for and against them has a whole bunch of people staking out positions somewhere in the middle.  (I'm one of them.)

If the Tea Party were truly just a small-government party, they might easily attract enough factions from the rump of the Republican Party, independents, and conservative Democrats to form a viable party, and the current Republican Party would wither away, in fact if not in name.  But the subtext is problematic for them, and their methods make it even more problematic.  It's unlikely that there are enough people willing to save the village by destroying it for their play to work.

So let's summarize the possible outcomes:
  • Tea Party takes over the Republican Party, then attracts enough small-government fellow travelers to make a working near-majority.  (Unlikely, per above.)
  • Tea Party fails to wrest control of the GOP, but winds up destroying it.  A new center-right party forms from the rubble.
  • The GOP establishment acts to crush the Tea Party in short order, leveraging the shutdown debacle to do so.  The Tea Party members capitulate and accept their role as simply a part of the GOP coalition and work to nominate an electable 2016 candidate.  (Note that if this succeeds, Paul Ryan is going to get the lion's share of the credit, and goes to the head of the class for 2016.)
  • Tea Party succeeds in gaining control of the GOP, but dramatically shrinks its influence.  They lose two or three more elections and wither away, then the more moderate elements of the GOP re-establish the status quo ante.
I'm terribly afraid that this last scenario is the most likely, and it's terrible news for the country.  By the time the old conservative coalition recovers from this disruption, we will have had 16 years of a Democratic executive.  Any hope of reining in the government will have gone out the window (if it hasn't already), and we'll be well on our way to the debt crisis that could easily tear the country apart.

A final, cautionary, historical note:  The emergence of the Jacksonian Democrats isn't a definitive event.  Things limp along for 30 years, waiting for the animating oppositional principle (abolition) to become clear.  When it does, the result is the biggest calamity the US has ever faced.  Small government is a fine animating principle, but I have no idea what the principle is that eventually emerges to oppose it.  If history is a guide, we're not going to like the result.

Thursday, August 15, 2013

The NSA, Domestic Surveillance, and the Era of Big Data

I've been off the radar all through l'affaire Snowden, but I've been wrestling with how we should think about this.  Conclusions are a bit thin on the ground, and this post is going to ramble a bit as a result.

First, let's acknowledge that Big Data is going to be the most revolutionary scientific and technological advance of the early 21st century.  Giant, mine-able data sets are already changing how we do biology, sociology, epidemiology, economics, and just about any other -ology or -nomics that you care to name.  Big Data makes problems of intractable complexity suddenly tractable.

And this goes for counterterrorism intelligence analysis as well.  By definition, looking for terrorists is about looking for needles in haystacks.  Big data lets you find needles by looking at how they've changed the shape of the haystack in which they've been placed.  Big Data drastically amplifies the effectiveness of traffic analysis.
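To make that "shape of the haystack" idea concrete, here's a toy sketch (the records, account names, and threshold are all invented for illustration, not anything any agency is known to use):  build a contact graph from call metadata, then flag the accounts whose calling pattern deviates sharply from the baseline.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical call-detail records: (caller, callee) pairs.
cdrs = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "x1"), ("dave", "x2"), ("dave", "x3"),
    ("dave", "x4"), ("dave", "x5"), ("dave", "x6"),
]

# The shape of the haystack: how many distinct contacts each account has.
contacts = defaultdict(set)
for caller, callee in cdrs:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

degrees = {acct: len(peers) for acct, peers in contacts.items()}

# Flag accounts whose contact count sits far above the baseline.
mu = mean(degrees.values())
sigma = stdev(degrees.values())
flagged = [a for a, d in degrees.items() if d > mu + 2 * sigma]
print(flagged)  # the one account fanning out to many one-off contacts
```

Note that nothing here looks at the content of a single call--the anomaly falls out of the aggregate structure alone, which is exactly why bulk metadata is so powerful.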

So, to the extent that you really, really want to stop every last whack job from setting pressure cookers filled with black powder in public places, then you should be really, really happy that the NSA has mountains of data under its control and the computing power to sift through them.  But of course the fly in the ointment is that "mountains of data under its control" bit.  If they can sift through them looking for terrorists, they can sift through them looking for drug kingpins.  Or child pornographers.  Or embezzlers.  Or Republicans.

Seems to me that this boils down to two basic questions:
  1. Who owns the data set?
  2. What techniques can the owner perform on the data?
We're pretty comfortable these days with businesses owning the data sets that they create during the course of business.  AT&T and Verizon own their call detail records--the ever-popular "metadata" that the Obama Administration has been droning on and on about.  Google owns every search you've ever made and (for a lot of us) every email you've ever sent or received, to say nothing of every four-beer blog post you've made.  And speaking of four beers, Facebook owns every photo of you exercising your constitutional right to join the whoo people from time to time.

We're not exactly happy that these guys own all this stuff, but we're used to it.  Why?  Because we trust corporations to mind their business, even when their business is making money off of our semi-public behavior.  Which brings us to the second question:  what limits do we place on the owners of data when they process it?

To answer this question, I like to imagine doing something that I'd be ashamed of (of course this never happens in real life...) and then imagining various people looking at the electronic breadcrumbs.

Do I care if companies provide access to my data for advertising?  If Amazon is trying to sell me a romance novel (hey, my wife and I share an account!), not so much.  But if BDSM-Mart tried to sell me whips and chains, I'd be perturbed.  Is there a difference between these two cases?  Maybe.

First, Amazon tries to sell me things that Amazon sells.  If I didn't do business with Amazon, they wouldn't try to sell me romance novels.  And if Amazon started trying to sell me whips and chains, I'd probably close my account (no doubt causing Amazon's revenues to decline by at least 25%).

On the other hand, BDSM-Mart would have to acquire my data from somebody else.  Maybe they could do this through Google, but Google understands where its bread is buttered and would never give BDSM-Mart direct access to my metadata.  Instead, what they do is allow BDSM-Mart to place advertising with Google, and Google inserts that advertising into the pages of people whose search and email patterns indicate that they might like bondage paraphernalia.  At no point does Google say, "Hey, BDSM-Mart!  The Radical Moderate looks like he might be a good customer for you!"  If they did, I'd have a big problem with Google.  At the very least, I'd be doing a lot more private browsing, which would hurt Google's revenue.  At the most, I might be complaining to my congressman or filing lawsuits, which, writ large, might put Google out of business.

Next hypothetical:  Imagine two guys at Google's water cooler, swapping stories about those hilarious searches that The Radical Moderate made last week.  Would I be creeped out about this?  The idea doesn't make me happy, but my guess is that if this happens (and maybe Google has a policy about their employees making directed queries and maybe they don't--I'd guess that developers have to make directed queries in the course of debugging stuff all the time), they don't talk about me by name; they talk about "some guy".  Do I care now?  Ennh.

Now, let's imagine that somebody in Google takes my data and posts my searches on a public website, with the express purpose of shaming or humiliating me.  That's lawsuit territory, and I'm suing Google, not their employee.  Google has a vested interest in making sure that this doesn't happen, the same way that they're not going to let BDSM-Mart see my data.

Bottom line:  I trust private companies with this sort of data because they value it properly, or they don't remain in business.

But now let's suppose that Google's "customer" is the government.  What kinds of questions can they ask?

Well, some of them might be close to innocuous.  They might want to know the correlation between the number of accounts that search for BDSM stuff and certain types of pornography, for purposes of formulating law enforcement policy.  If all of those correlations are anonymous, no harm done.  But the logical next step isn't so wonderful.  If the feds manage to convince some judge that a certain pattern of BDSM searches indicates that there's a high likelihood that the account will also contain searches for kiddie porn, that would be not so harmless.  I can't imagine a judge issuing a warrant under those circumstances, but you never know--and the way the FISA courts are set up, you never will know.

But notice what had to happen for me to be at risk from the government:  Google had to turn over account-specific information.  So, not only did Google no longer control the data, but they gave it over to somebody who would process it in ways that Google would never dream of.  Here's where the ownership of the data becomes crucial.  Without the data sitting in some server farm controlled by the feds, they can't control the post-processing and queries, and they can't overstep any bounds.

So maybe that's the first step toward a sensible data-mining policy:  the government shouldn't be able to control data.  Or, at the very least, it can't control data with identifiable account info.  The NSA seems to have paid lip-service to anonymizing account info, but it's also pretty clear that the names and addresses were just a protected table, with lots of people able to gain access.  If that information remains safely back with the company that owns it, retrievable only with a warrant, things are somewhat more private.  But we haven't solved the problem of how to keep the feds from mining the anonymous data for whatever they want.
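Here's a minimal sketch of the owner-side anonymization I'm describing (all of the names, keys, and records below are made up for illustration):  the company replaces account identifiers with opaque tokens before anything leaves its servers, and the token-to-identity mapping--the "protected table"--stays home, consulted only when a warrant names a specific token.

```python
import hmac, hashlib, secrets

# The data owner's secret key never leaves its servers.
OWNER_KEY = secrets.token_bytes(32)

def pseudonymize(account_id: str) -> str:
    """Deterministic opaque token: the same account always maps to the
    same token, so analysts can correlate activity across records
    without ever seeing whose account it is."""
    return hmac.new(OWNER_KEY, account_id.encode(), hashlib.sha256).hexdigest()[:16]

# The reverse mapping is the protected table.  It stays with the owner
# and is consulted only when a warrant names a specific token.
protected_table = {}

def export_record(account_id: str, payload: dict) -> dict:
    token = pseudonymize(account_id)
    protected_table[token] = account_id   # owner-side only; never exported
    return {"account": token, **payload}

rec = export_record("alice@example.com", {"query": "pressure cookers"})
assert "alice" not in rec["account"]      # exported data carries no identity
```

The point of the deterministic token is that the feds can still do their pattern analysis on the anonymous side; de-anonymizing any one token requires going back to the owner with paper.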

Another possibility is to require warrants for the actual post-processing and search queries.  This is something that would obviously require a pretty sophisticated court (I don't see many judges being able to conduct a code review), but it has the advantage that all data now stays with a private owner.  The feds can donate a firewalled, sterilized server farm to the company for doing the processing, and they can get the anonymous results back.  But the government can't be allowed to own the data, or it will find some way to abuse it.

Does this impede the NSA in its quest to identify real security threats?  It certainly makes the process more ponderous.  If you've got a real-time threat, your engineers aren't going to be able to poke at the data willy-nilly until some interesting answer pops out.  But that's an unlikely way for these guys to work; they're much more likely to come up with specific data-mining programs that they'd like to run on well-established data sets, over a long period of time.  Adding one more hoop to jump through before they conduct this sort of surveillance doesn't seem like a big price to pay.

The much more troubling form of data collection is the taps that the NSA is alleged to have placed to collect raw internet traffic.  There's no way to firewall that sort of collection with a private company.  On the other hand, the only "account" associated with such data is an IP address.  Keeping that information firewalled from the feds without a warrant should allow them to do bulk analysis on the traffic without compromising privacy.  And if they find something alarming, they can get a warrant.

So we can reduce the problem, through fairly simple means, to a question of how much we trust the courts.  Per reports on how FISA courts run currently, the answer to that question is, "not very much".  Fixing the operation of these courts is difficult, but paramount.  At the very least, the application for warrants needs to be an adversarial process.  Currently, outside groups can file friend-of-the-court briefs, but it's pretty hard to quash a warrant when you don't know what it's for.  Appointing and providing security clearances for a pool of outside parties with an aggressive anti-surveillance agenda would go a long way toward leveling the playing field.  I'd feel a lot better if an EFF lawyer were opposing every warrant application.  I'd gladly contribute to a fund for the billable hours.  (I'd rather not have the feds footing the bill...)

Ultimately, though, we're going to have to live with the fact that applying Big Data techniques to the detritus of daily life potentially gives the government huge power over us.  The only way to prevent that power from being abused will be the exercise of constant vigilance and aggressively proscribing the activities of the security agencies.  I'm not sure I approve of how Snowden outed the whole system, but it's hard to deny that his actions have opened up a very useful conversation.  I just hope that we put some real reforms in place before we all go back to sleep.

Tuesday, May 14, 2013

Neocorticalism and Its Weaknesses

[Blogger's note:  I went dark halfway through writing this.  Started it back in November, then got diverted into a self-imposed software project and lost interest for a while.  Not sure I'm back to blogging for real--we'll see.  No doubt you, my three readers, are incredibly excited!]

Just finished reading [well, last November] Ray Kurzweil's How to Create a Mind.  This has the usual Kurzweilian arguments about why all information technologies--including those associated with biology and neuroscience--grow at an exponential rate, irrespective of the actual technology base.  It extends his singularity books by going a bit deeper into how to build artificial cortical pattern recognizers, and makes the argument that a hierarchy of these modules pretty much constitutes the human mind and its consciousness.

Kurzweil references Jeff Hawkins's On Intelligence, one of my favorite books of all time.  Kurzweil takes issue with some of the details of Hawkins's model, but the two authors both agree on several central points:
  • Both note the existence of a "cortical algorithm", where all areas of the neocortex work pretty much the same and project axons in pretty much the same way.  This means that patterns get recognized the same way everywhere, and project up to higher level pattern recognizers and down to excite, reinforce, or inhibit lower level patterns.
  • Kurzweil thinks that consciousness is an emergent property of a big neocortex, while Hawkins thinks that consciousness is "what it feels like to have a cortex".
  • Both of them spend a lot of time on input pattern recognition, but very little time on motor outputs.
  • Both treat the thalamus and brain stem as I/O devices, with little to do with consciousness (although Kurzweil does note that the thalamus is essential for consciousness).
  • Hawkins thinks that the secret sauce for complex cognition lies in the time-dependent behavior of cortical activation, where pattern #1 primes the downward links for pattern #2 to be activated if things go as the patterns expect them to in the immediate future.  Kurzweil is strangely silent on time-dependent behavior, which is weird, since speech recognition is so heavily time-biased.  I think he's glossing over some behavior of hidden Markov models in which I'm not expert.  He may also be slightly cagey about trade secrets.
I think that they're barking up the right tree with respect to the cortical algorithm, but they're both minimizing the importance of the older brain structures.  I have some trouble with this "neocorticalist" approach.  Here are my big objections:
  • Neocorticalism can't really explain attention.  As I've said before, I think that attention is likely an incredibly ancient property and is intimately involved with consciousness.  I'm prepared to believe that human-style self-reflection and theory of mind might be new, emergent properties, but they can only emerge because of some repurposing of older mechanisms.
  • Simple pattern matching doesn't get you very good motor performance.  I'm prepared to believe that the motor cortex is coordinating intentions and high-level actions, but the whole business seems way too asynchronous to allow us to be catching balls or playing the piano.  The cerebellum is clearly involved, but I suspect that you need a way to make the neocortex semi-synchronous.  There's some evidence that the basal ganglia are involved at least in time perception; I'll bet we're going to discover that the same structures are "clocking" groups of neocortical patterns to coordinate activities in the motor cortex.
  • One of the intriguing things about Hawkins's architecture is the hint that the learning process self-organizes the cortical hierarchy:  novel or unlearned activities are first handled, somewhat hesitantly, at high levels, and learning then pushes the salient features down into lower-level recognizers.  But no mechanism was described for how this happens.  Kurzweil pretty much ignored this, going so far as to posit some kind of central allocator of pattern recognizers.  There's something subtle going on here that pure neocorticalism can't capture.
Since this book was published, Kurzweil has taken a high-level technologist position at Google, which has also bought DNNresearch, Geoff Hinton's startup.  Deep learning, a term coined by Hinton, has been getting a lot of press recently.  It's pretty clear that Google has decided to pour money into this.  It's a problem that's amenable to implementation in cloud computing, so the fit is pretty good.

People implementing deep learning train each layer in a multi-layer network separately, using unsupervised feature detection.  ("Unsupervised" means that you don't tell the layer when it's done something right or wrong--you merely let it classify inputs as it sees fit.)  This still can't be quite what biological systems do, because they can figure out their own layering, which has to be imposed for deep learning.  My guess is that layering in humans is governed by chunks of the cortex that are genetically predisposed to accept axons from I/O-like areas of the thalamus and other parts of the brainstem.  They therefore learn to detect features associated with that kind of input and project out to a more amorphous set of regions, which can then combine multiple projections into novel features.

This still doesn't define a mechanism where layers compete with each other to identify features at the appropriate level of detail, but you can see how that might emerge with enough feedback between layers.

My guess is that we're going to discover that a huge amount of our cognition is dependent on a fairly arbitrary set of input regions mapping to another fairly arbitrary set of cross-regions, and so on.  But note that "arbitrary" doesn't mean that they're not genetically predetermined.  The stability of those mappings across most humans is what allows us to communicate with each other and what allows us to perceive most individuals as "sane".  One of the most interesting things about figuring out how all this hangs together is the possibility of producing entities that think radically differently from us but which still can extract insights about the universe that we're not optimized to perceive.

Things are going to move pretty quickly from here.  We aren't close to a strong AI using this kind of technology, but the Watson/Jeopardy thing shows that you can get an awful lot of interesting work out of something that you wouldn't necessarily want at your dinner party.