December 2007 Archives

Christmas in Texas

The sister and dad waiting for the rest of the carolers to arrive at our Posada's starting point last night.

Obviously, catching up with family and eating foolish quantities of tamales and fruitcake take precedence over blogging. But I do want to mention that the yearly Posada was a hit, as always. I can only assume that for most people it's quite rare for carolers to wander by and sing at their doorstep, because newcomers to the neighborhood are generally delighted but also confused by the whole affair. In our culture, after all, it's almost never a good thing when somebody knocks on your door unexpectedly. Even if it's just your neighbors. I guess that's been true for a while, though, if you listen to the lyrics of A-Wassailing:

We are not daily beggars who beg from door to door / but we are neighbors' children whom you have seen before!

So merry Christmas or preferred holiday of choice, everyone! I'll be here in Texas, thawing out, for the next week.

Time and the Modern Computer

So I ran across something insanely cool today (at least if you're me), but it'll take some explaining. However, for the hardcore geeks in the audience, the following output sums up the story nicely:

...@ebexgs:~$ uname -a
Linux ebexgs 2.6.18-3-686 #1 SMP Mon Dec 4 16:41:14 UTC 2006 i686 GNU/Linux

...@ebexgs:~$ ./nanosleep
Looped 1000 times:
Used timeout of 50 microseconds.
Slept average of 7989.000000 microseconds.

...@vader:~$ uname -a
Linux vader 2.6.22-2-686 #1 SMP Fri Aug 31 00:24:01 UTC 2007 i686 GNU/Linux

...@vader:~$ ./nanosleep
Looped 1000 times:
Used timeout of 50 microseconds.
Slept average of 56.000000 microseconds.

So let's back up a bit. If you've used practically any computer in the last twenty years (and if not, how exactly are you reading this?) you may have noticed that the machine is capable of doing several things at once. Say, playing an MP3 while loading a web page. That's not as simple as it sounds, because for most of that time, most personal computers had just one CPU that was capable of running just one instruction at a time. Thus your computer, all appearances to the contrary, was truly single-minded. This was visible in primitive operating systems like DOS: you could only run one program at a time. (And DOS really was primitive; this hadn't been true of "real" operating systems since the mid-60s at the latest.)

When more advanced operating systems (also, Windows) made it to personal computers, multitasking came with them. This is the juggling act that lets a computer appear to divide its attention between multiple users or programs. The operating system contrives this by running each program in sequence, switching between them many times a second. Just like the persistence of vision that keeps you from seeing the gaps between frames of a film, if the context switches come fast enough you never notice that each of your programs is actually spending most of its time frozen in place.

There are a number of ways you could do this, but almost all systems today use some kind of timer interrupt. The problem, of course, is that while your program is running, the operating system isn't; thus, how can the system manage the context switches? One way is to require that all programs occasionally give the operating system a chance to run -- such "cooperative" multitasking is unreliable since just one unhelpful program can jam up the works, but it is fairly simple and was used by early versions of both Windows and MacOS. Better to use a signal that periodically triggers the operating system, regardless of what the programs are doing. Conveniently, when IBM originally designed the PC they included a clock chip that can be programmed to send out just such a periodic signal. It ran at about 20 Hz, and was mostly intended to help DOS keep track of the time.

While not really ideal, this was already good enough to enable some "preemptive" multitasking; instead of hoping your programs all cooperate, the operating system could start and stop programs on its own, by using that clock tick to trigger a bit of its own code. And while the interrupt frequency has gone from 20 Hz to as much as 1000 Hz, this scheme has gone mostly unchanged over the last few decades.

This has some nasty side-effects. Now that computers can run a billion or more instructions per second, a thousandth of a second is kind of an eternity. Usually that isn't too important, because lots of other things can cause the operating system to kick in and stop or restart a program -- things like moving a mouse, or a packet arriving on the network, or a disk finishing a write -- and most programs spend most of their time waiting for events like that to happen. But if you have a program that needs to wait some arbitrary amount of time before doing something, rather than being woken up when an event occurs, most operating systems give you no reliable way to do that precisely. For instance, say you're controlling a widget where you need to send some data, then wait a couple of milliseconds before sending more. On a modern computer a millisecond is a long time, but if your system's clock only ticks once every four milliseconds (fairly common), there's no way to do it. If your program gives up control and asks to be woken up in two milliseconds, the operating system itself probably won't kick in again to do the scheduling for twice that long. On the other hand, if you try to just spin your wheels for that time, there's a good chance the operating system will wake up in the meanwhile and take control away from you, only to return who knows when.

This bit me hard during one of my college jobs. I was working for a psychology lab that wanted software to measure people's reaction times with high accuracy in various situations, and these unpredictable few-millisecond delays made the task much more complicated than it needed to be. The simplest fix would, of course, be to program the timer to run faster (it still involves hacking in the operating system's guts, but mostly in straightforward ways). However, there's a limit to how fast anyone wants one of these to run: eventually the system would be spending all of its time running the operating system's scheduling code, and most programs don't need high-resolution timing. So in most systems the actual clock chip won't go more than a few times faster, anyway.

There's a much cleverer solution. Way back when, we all learned that the code to handle your clock interrupt had to be as simple as possible, so that it wouldn't take too much time. But a millisecond is forever these days, so you can do a bit more with it. Plus, there's lots of other hardware lying around that will wake up the operating system if something needs to happen. So, if you're willing to live without the regular tick, it turns out there's another mode you can use. While these clock chips might have a restricted range of tick frequencies, they're very precise, and besides sending periodic ticks they can also do one-shot countdowns. Rather than wake up every so many milliseconds, before finishing its work the operating system can instead look at a list of upcoming timeouts and set the clock chip like an alarm for the next one. If there's nothing to do for half a second, the system can go idle and save power, but if a program wants to get woken up a few microseconds from now, in this scheme the operating system can set its wake-up call for then.
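A toy sketch of that alarm-clock scheme (hypothetical code, not remotely how a kernel actually does it): keep the pending timeouts in a heap, and whenever one is added, re-arm a one-shot countdown for whichever deadline comes soonest.

```python
import heapq

timeouts = []  # (deadline_seconds, callback) pairs, soonest first

def program_one_shot_timer(deadline):
    # Stand-in for programming the clock chip's one-shot countdown mode.
    print("timer armed for t=%s" % deadline)

def add_timeout(deadline, callback):
    heapq.heappush(timeouts, (deadline, callback))
    # Always (re-)arm the hardware for whatever is due soonest.
    program_one_shot_timer(timeouts[0][0])

add_timeout(0.5, lambda: None)    # nothing sooner: the system could idle for half a second
add_timeout(50e-6, lambda: None)  # a 50-microsecond request re-arms the timer at once
```

The point of the heap is that the nearest deadline is always at the top, so deciding how long the hardware can stay quiet is cheap no matter how many timeouts are pending.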

In very recent versions, the Linux kernel has started using this scheme, and in Debian testing the 2.6.22 kernel has the required bits enabled (NO_HZ turns off the regular tick, and HIGH_RES_TIMERS enables the wake-up calls). So for the output that I started with, I wrote a little program that tries to sleep for 50 microseconds and keeps track of how long each sleep actually takes. Under the 2.6.18 kernel shipped with the Debian Etch release, each one takes about 8000 microseconds -- 160 times longer than the requested delay. (8 milliseconds, or two 250 Hz timer ticks -- hilariously, this loop takes half as long on a much slower computer with only one CPU.) Under the new kernel, 56 microseconds. Which is a feature I've been wanting for about eight years now, and is therefore awesome.
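The little program itself isn't shown above (judging by the output, it was presumably C calling nanosleep()), but a Python re-creation of the same measurement looks like this -- time.sleep() bottoms out in the same kernel timer machinery, so it exhibits the same granularity effect:

```python
import time

LOOPS = 1000
TIMEOUT_US = 50

total_us = 0.0
for _ in range(LOOPS):
    start = time.monotonic()
    time.sleep(TIMEOUT_US / 1e6)   # ask for a 50-microsecond nap
    total_us += (time.monotonic() - start) * 1e6  # how long did we really sleep?

print("Looped %d times:" % LOOPS)
print("Used timeout of %d microseconds." % TIMEOUT_US)
print("Slept average of %f microseconds." % (total_us / LOOPS))
```

On a tick-based kernel the average lands on a multiple of the tick period (milliseconds); with high-resolution timers it comes out within a few tens of microseconds of the request.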

Revenge Wing

I guess Republicans just can't help themselves sometimes, but it really seems like we get some of our very best framing from their lame attempts to insult us. They really are just that incompetent. So after some Bush aide tried to diss the academics, bloggers, and other fact-using types as the "reality-based community", within a matter of days the phrase had caught fire and you had Dems everywhere declaring themselves proud members of the reality-based community.

So anyway, I'm hoping this one catches on as well, because it really captures something of what's necessary in these times. One of digby's commentators is apparently dubbing us the "Revenge Wing of the Democratic Party" now:

It took this recent post by Digby and this morning's column by Krugman for me to "get it." If you are a conservative, you should read the two pieces, not to criticize them nor ridicule them, but to understand their perspective. As briefly as possible, Krugman and Digby are speaking for the 'Revenge Wing' of the Democratic Party. "The GOP and big corporations are evil incarnate and we need to be ready to rumble, willing to do 'what it will take to turn a progressive agenda into reality.'"

These 21st Century Savonarolas believe that the next (Democratic) president must be willing to take the fight to the enemy (the Republicans) and be willing to do whatever is necessary.

Um, hear, hear!

The Broken Now

I want to point out this article by WIRED's Noah Shachtman on the intersection of technology and the present state of American warfare. In particular in Iraq, technology -- and its limitations -- has engendered various competing myopias about the nature of military power. And yet, the article kind of isn't even about technology.

As WIRED's blogger in Iraq, Noah gets shown around the latest and shiniest gadgets, but he's also getting a peculiar sort of backstage pass to Mesopotamian warfare. After all, he represents neither the superstar opinion-maker nor the scrappy muckraker families of journalism. This nerd shows up in the desert wanting to tell the kids back home about your toys, and he leaves with an unexpectedly honest, oblique slice of how things actually get done.

Yes, the result smacks of rose-tinting; or rather, sepia. Apparently everyone is a little bit Lawrence of Arabia. Like almost everything WIRED publishes, between the words is always a hit of existential ennui, but this has little to do with Iraq. For these writers, it's perpetually just after the fin de siecle, the bleak dawn after the balloons drop. The future started yesterday and we were supposed to have starships and cyberbrains and flying cars, but instead we're stuck with this broken world where everything is complicated and everything falls apart, forever. This article almost comes right out and says it, too, which is a bit refreshing. To wit:

They were supposed to be the wars of the future. And the future lost.

Spamming Snow

So I'm walking home a couple of nights ago, and I note that somebody's been writing in the drifted snow. This is one of those blocks where the front yards slope down steeply to meet the sidewalk, so it's possible to reach a fair amount of snow from the sidewalk. The printing is fairly neat, the letters a couple of feet tall, making a line of text parallel to the sidewalk for the length of several front yards. Like a news ticker, except I'm the one that's moving.

Obviously the content had been objectionable to someone, because large chunks have been rubbed out. Just a letter here and there remain, until I reach what must be the end of a sentence. Evidently the message was meant to be exciting. What remains reads:

...(stuff mostly rubbed out)...e!!!1!11!!one!!!1!!eleven!!1! ...(more stuff rubbed out)... AWESOME (cartoon of male genitals) !!!

It's like somebody transcribed spam onto a snowbank.

Apparently the meme is catching, because walking home last night there were more rubouts, but also more text showing up in the snowdrifts. Plus a large number of Jesus-fish. And the epigram, "Kucinich is J'aai!" I'm not sure if J'aai is meant to indicate support or opposition.

Burnin' Dark Matter

Yesterday I gave a talk on "Stars Powered by Dark Matter" that I think came out pretty well. The overall gist, after a really brief overview of WIMPy dark matter, was that stars tend to accumulate dark matter particles in their interiors, and at high enough concentrations this starts to have potentially interesting effects. In particular, WIMPs (i.e. particles of dark matter) probably self-annihilate. That is to say, if you bring two WIMPs together, they will annihilate just like matter and antimatter. However, you have to get them really, really close together for this to happen -- we know this, because otherwise you'd see lots of gamma rays and whatnot from all over due to WIMPs annihilating in the halo of dark matter surrounding our own galaxy.

I considered four cases. First, working mostly off this 2002 paper by Bottino, I discussed the amount of dark matter likely to build up in our own Sun. You get some 10^24 or so particles per second, which annihilate to add a few petawatts of energy to the Sun's core. Sounds like a lot, but it's completely negligible to a star, and the effects are too small for us to detect.
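As a sanity check on those numbers, here's a back-of-envelope version of the conversion, assuming a benchmark WIMP mass of 100 GeV (the mass is my assumption here; the paper considers a range, and the power scales linearly with it):

```python
capture_rate = 1e24        # WIMPs captured by the Sun per second
m_wimp_gev = 100.0         # assumed WIMP mass in GeV (hypothetical benchmark)
gev_to_joule = 1.602e-10   # 1 GeV expressed in joules
l_sun = 3.8e26             # solar luminosity in watts, for comparison

# In equilibrium, annihilation destroys WIMPs as fast as they're captured;
# each annihilating pair releases 2 * m * c^2 of energy, so the injected
# power is just (capture rate) * (rest energy per particle).
power_watts = capture_rate * m_wimp_gev * gev_to_joule

print("%.0f petawatts" % (power_watts / 1e15))
print("fraction of solar luminosity: %.1e" % (power_watts / l_sun))
```

That works out to tens of petawatts at this mass (a lighter WIMP scales it down proportionally), which is a ~10^-11 perturbation on the Sun's output -- hence completely negligible and undetectable.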

Next, Fairbairn et al have a preprint out discussing what happens if you ramp up the dark matter concentration. Turns out, if you make the dark matter particles a billion times more common than we think they are around us (here, we think it's probably several per cubic meter, depending on how massive you think the particles are), the dark matter annihilation produces more energy in the star's core than would nuclear burning, and you get a so-called "WIMP burner". Of course, finding such a high concentration of dark matter might be tricky...

So I also talked about some work from earlier this year (this conference proceeding and this paper) by Igor Moskalenko and Lawrence Wai. They think you can get such huge concentrations of WIMPs near the supermassive black hole at the center of a galaxy. Apparently, you could have a white dwarf -- a dense, dead, burned-out star -- that swings within a milliparsec of the black hole and captures enough dark matter to put out ten times the luminosity of the sun. That's twice the distance from us to the Voyager probes, but still pretty close when you're talking about a million-solar-mass black hole.

Finally I mentioned this paper by Spolyar et al that suggests that WIMP annihilation could have prevented the first stars from forming right away (or possibly, at all). Instead, they would remain the dense, dark clouds that we normally expect to form protostars, prevented from continuing their collapse because they can't get rid of the dark matter energy fast enough. If this model is right, that's an effect that the folks who study the first stars and their effects are going to have to find a way around.

Observing with Knobs and Gears

Photo post, just because I can!

Observing with the refractor. This was taken back in November during one of my comet Holmes observing runs.


Earlier this week I was fairly pleased with myself for writing a reasonably elegant piece of code that takes a bunch of variously-sized chunks of data and works out how to efficiently and predictably squeeze them into the spare bandwidth of another data stream.

The next day, I think while walking to the grocery store, it occurred to me that I could do the same thing with a mapping trick and some modular arithmetic.

I was prematurely pleased with myself. That version took two lines of code.
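The actual two lines aren't in the post, so what follows is purely a hypothetical illustration of the kind of mapping trick involved: a fixed piece of integer arithmetic that assigns each chunk a slot deterministically, spreading the chunks evenly through the available spare slots instead of working the assignment out procedurally.

```python
def slot_for(chunk_index, num_chunks, num_slots):
    # Map chunk i to slot floor(i * num_slots / num_chunks): a fixed,
    # predictable assignment that spreads the chunks evenly over the slots.
    return (chunk_index * num_slots) // num_chunks

# 4 chunks distributed over 10 spare slots:
slots = [slot_for(i, 4, 10) for i in range(4)]
print(slots)  # [0, 2, 5, 7]
```

The appeal of a closed-form mapping like this is that sender and receiver can each compute it independently -- no packing table has to be transmitted alongside the data.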

Felis Domesticus*


A good friend just lost a dear old pet. How like a cat, really -- they're cleverly engineered to exploit all our empathy and parenting circuits, they make us love them, and then they have the bad grace to have a lifespan one fifth our own.

(And notice, I'm doing it myself. Anthropomorphizing them, that is. It really is remarkable, how precisely evolved they are to make us identify with them, as fellow intelligent, self-centered predators, but react to them as fluffy and playful. They'd eat us in a heartbeat if we were just smaller, and yet we invite them to take up residence in our homes!)

So yes, I like cats, I've loved a few of them, and been sad when they eventually died. But I'd seriously consider a box turtle for my next pet. They've also managed the trick of looking fairly intelligent (when they're awake, anyway), and might not age at all. Probably outlive my grandchildren, at any rate.

*I know, that's not the real scientific classification. But Felis silvestris catus just doesn't have the same ring to it.

XML Tree Pruning

A little while back I had an XML document that I needed to prune down. Tools like XPath will easily let you pick out certain bits of an XML tree, but what I needed was kind of the complement: keep the document intact, but zap specific bits of it. It turns out that libxml2 provides an easy way to do this. For instance, in Python:

import libxml2

doc = libxml2.parseFile("file.xml")
xpc = doc.xpathNewContext()
for n in xpc.xpathEval('/root/item|/root/folder[./title/text()!="Keeper"]'):
    n.unlinkNode()  # detach the node from the tree
    n.freeNode()    # then release its memory
doc.saveFormatFile("pruned.xml", True)

I.e. if I have an XML tree with a root element containing a bunch of item and folder nodes, this will toss out all of them except the folder I want to keep (here, the one with title "Keeper"). The procedure should be about the same from any language that can use libxml2.

December and the Busy Blogger


First of December, and right on cue it's snowing. A lot. Apparently there is some concern that, now that everything is all white, there will be a mad rush to set up camp on the lakes, and the authorities would like to publicize that the ice isn't all that thick yet. So don't do that.

These Minnesotans, sometimes they baffle me greatly.

I think I just need to acknowledge that, between now and when EBEX flies sometime next year, blogging hereabouts is going to be somewhat intermittent. There's just a great deal to do in the last half-year before an experiment (literally) takes flight. That said, there are a few forms of content that more-or-less produce themselves. For one thing, I still burn through an awful lot of political writing online, and I should really get back in the habit of flagging the better bits here. Astrophotography always seems popular.

For another, slightly longer-term, project that could generate a lot of posts, I've been thinking for a while that I need to get with the Web 2.0 and get my photo archive online. I'm less likely to lose them to a disk crash (and thousands of hi-res photos do start to use some disk space) that way, plus Flickr now has geo-tagging features that would let me, for instance, display maps tagged with the locations where photos were taken. Which I think would be really cool, even if the actual tagging would be a bit time-consuming. No great hurry, though. Maybe my next camera will have a built-in GPS.

It's a real pity that I'm not allowed to say all that much in public about my research, because there's a lot of interesting things going on here. Me, I think a blog about the months leading up to a major balloon-experiment flight would make for moderately interesting reading. I guess you'll have to watch the BLAST documentary whenever it finally comes out to get the flavor of the process.

About this Archive

This page is an archive of entries from December 2007 listed from newest to oldest.



Powered by Movable Type 4.31-en