Wednesday, December 31, 2008

Tech Talk on Wii security model (and breaking it)

A very thorough talk describing the Nintendo Wii game console security model and the bugs and weaknesses that allowed the Wii to be compromised:
Console Hacking 2008: Wii Fail

In a nutshell, security is provided by an embedded ARM CPU that sits between the main CPU and the IO devices, and handles all the IO. The two main flaws were (a) a bug in the code that compared security keys, which made it possible to forge keys, and (b) secret break-once-run-everywhere information stored unencrypted in RAM, where it could be extracted using hardware modifications.
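For illustration, the comparison flaw was reportedly of the classic strncmp-vs-memcmp variety: hashes were compared with a string-style routine that stops at the first zero byte. Here's a minimal Python sketch of that class of bug -- the function and the hash values are made up, not the Wii's actual code:

```python
# Illustrative sketch (not actual Wii code): comparing hashes with a
# string-style compare that stops at the first zero byte, instead of a
# fixed-length memcmp, lets forged data through.

def broken_compare(a: bytes, b: bytes) -> bool:
    """strncmp-style compare: treats a zero byte as end-of-string."""
    for x, y in zip(a, b):
        if x != y:
            return False
        if x == 0:          # both bytes are zero: treated as "end of string"
            return True     # ...so the rest of the hash is never checked
    return True

real_hash   = bytes([0x00, 0xAA, 0xBB, 0xCC])  # signed hash starting with 0x00
forged_hash = bytes([0x00, 0x11, 0x22, 0x33])  # attacker brute-forces content
                                               # until its hash also starts 0x00

assert broken_compare(real_hash, forged_hash)  # accepted, even though they differ
assert real_hash != forged_hash
```

An attacker only has to tweak the content until its hash begins with a zero byte, which takes a few hundred tries on average.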

There's a nice table at the end of the presentation showing a number of recent consumer devices, what their security model was, and how long it took to break them.

The PS3 is the only current console that remains unbroken. Its security model seems similar to the Xbox 360's, but somewhat weaker. This seems to be due to the existence of an official PS3 Linux port, which means most Linux kernel hackers are not motivated to hack the PS3's security. (Only the ones who want full access to the GPU from Linux are motivated, and only to the extent that they can access the GPU.)

Thursday, December 25, 2008

Larrabee papers from SIGGRAPH Asia 2008

...as seen on the Beyond3D GPGPU forum, here are the presentations from the recent (December 12th 2008) "Beyond Programmable Shading" course:

SIGGRAPH Asia 2008: Parallel Computing for Graphics: Beyond Programmable Shading

There are good presentations from both GPU vendors and academics. My favorite presentations are the Intel ones on Larrabee, just because I'm so interested in that architecture:

Parallel Programming on Larrabee - describes the Larrabee fiber/task programming model.

Next-Generation Graphics on Larrabee - how Larrabee's standard renderer is structured, and how it can be extended / modified.

IBM / Sony missed a bet by not presenting here. That's too bad, because Cell sits between the ATI / NVIDIA parts and Larrabee in terms of programmability. And Cell's been available for long enough that there should be a number of interesting results to report.

Note to self: consider buying a PS3 and learning Cell programming, just to get ready for Larrabee. Heh, yeah, that's the ticket. Being able to play PS3-specific games like Little Big Planet and Flower would be just a coincidental bonus.

Monday, December 8, 2008

Fun with Git

This weekend I reorganized my home source code projects. I have a number of machines, and over the years each one had accumulated several small source-code projects. (Python scripts, toy games, things like that.) I wanted to put these projects under source code control. I also wanted to make sure they were backed up. Most of these little projects are not ready to be published, so I didn't want to use one of the many web-based systems for source-code management.

After some research, I decided to use replicated git repositories.

I created a remote git repository on an Internet-facing machine, and then created local git repositories on each of my development machines. Now I can use git push and git pull to keep the repositories synchronized. I use git's built-in ssh transport, so the only thing I had to do on the Internet-facing machine was make sure that the git executables were in the non-interactive ssh shell's path. (Which I did by adding them to my .bashrc file.)

Git's ability to work off-line came in handy this Sunday, as I was attending an elementary-school chess tournament with my son. Our local public schools don't have open WiFi, so there was no Internet connectivity. But I was able to happily work away using my local git, and later easily push my changes back to the shared repository.

Wednesday, November 19, 2008

Microsoft New Xbox Experience Avatars


I just tried creating an avatar on Microsoft's new Xbox dashboard. As you can see (at least when the Microsoft server isn't being hammered) on the left, they provide a URL for displaying your current Avatar on a web page.

The character creation system is not too bad. In some ways it's more flexible than Nintendo's Mii (for example more hair styles and clothing), but in other ways it's more limited (less control over facial feature placement).

My avatar looks better on the Xbox than it does here -- they should consider sharpening the image. For example, the T-shirt my avatar is wearing has a thin-lined Xbox symbol.

I think they do a good job of avoiding the Uncanny Valley effect. I look forward to seeing how avatars end up being used in the Xbox world.

In other Xbox-related news, I'm enjoying playing Banjo-Kazooie: Nuts & Bolts with my son. All we have right now is the demo, but it's great fun for anyone who likes building things. It's replaced Cloning Clyde as my son's favorite Xbox game.

Internals of the Azul Systems Multi-core Java processor

I'm a big fan of CPU architectures. Here's a conversation between David Moon, formerly of Symbolics Lisp Machines, and Cliff Click Jr. of Azul Systems. They discuss details of both the Lisp Machine architecture and Azul's massively multi-core Java machine.

http://blogs.azulsystems.com/cliff/2008/11/a-brief-conversation-with-david-moon.html

The claim (from both Symbolics and Azul) is that adding just a few instructions to an ordinary RISC instruction set can make GC much faster. With so much code being run in Java these days, I wonder if we'll see similar types of instructions added to mainstream architectures.
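To make the claim concrete: one such instruction is a read barrier executed on every reference load, so a concurrent compacting GC can move objects while the program runs. In software it looks roughly like the sketch below (the names and design are illustrative, not Azul's actual hardware):

```python
# Conceptual sketch of a GC read barrier -- the kind of per-pointer-load
# check that a dedicated instruction can make nearly free in hardware.
# All names here are illustrative.

class Obj:
    def __init__(self, value):
        self.value = value
        self.forwarded_to = None   # set when the collector moves the object

def read_barrier(ref: "Obj") -> "Obj":
    """Executed on every reference load: if the object was relocated by a
    concurrent compacting GC, follow the forwarding pointer ("self-heal")."""
    while ref.forwarded_to is not None:
        ref = ref.forwarded_to
    return ref

old = Obj(42)
new = Obj(42)
old.forwarded_to = new    # the collector relocated the object

assert read_barrier(old) is new   # mutator transparently sees the new copy
assert read_barrier(new) is new   # fast path: no forwarding, no extra work
```

Done in software, that check costs a test and branch on every load; done in hardware, it's essentially free, which is the point of adding the instruction.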

Monday, October 20, 2008

Can a comic strip make you more productive?

This one can:

XKCD: Someone is Wrong on the Internet

This comic's punchline has saved me at least an hour a week since it came out. That's more than I've saved by learning Python. :-)

Tuesday, September 30, 2008

Next gen video console speculation suggests we aim low

The next generation of video game consoles should start in 2011, give or take a year. It takes about three years to develop a video game console, so work should be ramping up at all three video game manufacturers.

Nintendo's best course of action is pretty clear: do a slightly souped-up Wii. Perhaps with lots of SD-RAM for downloadable games. Probably with low-end HD-resolution graphics. Definitely with an improved controller (for example, with the recent gyroscope slice built in).

Sony and Microsoft have to decide whether to aim high or copy Nintendo.

Today a strong rumor has it that Sony is polling developers to see what they think of a PlayStation 4 that is similar to a cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price.)

http://forum.beyond3d.com/showthread.php?t=50037

That makes sense as Sony has had problems this generation due to the high launch cost of the PS3. The drawback of this scheme is that it does nothing to make the PS4 easy to program.

In the last few weeks we've seen other rumors that Microsoft's being courted by Intel to put the Larrabee GPU in the next-gen Xbox. I think that if Sony aims low, it's likely that Microsoft will be forced to aim low too, which would make a Larrabee GPU unlikely. That makes me sad -- in my dreams, I'd love to see an Xbox 4 that used a quad-core x86 CPU and a 16-core Larrabee GPU.

Well, the great thing is that we'll know for sure, in about 3 years. :-)

Wednesday, September 24, 2008

Woot! I'm 19th place in the ICFP 2008 Programming Contest

Team Blue Iris (that's me and my kids!) took 19th place, the top finish for a Python-based entry!

Check out the ICFP Programming Contest 2008 Video. The winning team list is given at 41:45.

Friday, September 19, 2008

Will Smart Phones replace PCs?

That's the question Dean Kent asks over at Real World Tech's forums. I replied briefly there, but thought it would make a good blog post as well.

I'm an Android developer, so I'm probably biased, but I think most people in the developed world will have a smart phone eventually, just as most people already have access to a PC and Internet connectivity.

I think the ratio of phone / PC use will vary greatly depending upon the person's lifestyle. If you're a city-dwelling 20-something student you're going to be using your mobile phone a lot more than a 60-something suburban grandpa.

This isn't because the grandpa's old fashioned, it's because the two people live in different environments and have different patterns of work and play.

Will people stop using PCs? Of course not. At least, not most people. There are huge advantages to having a large screen and a decent keyboard and mouse. But I think people will start to think of their phone and their PC as two views on the same thing -- the Internet. And that will shape what apps they use on both the phone and the PC.

And this switching will be a strong force towards having people move their data into the Internet cloud, so that they can access their data from whatever device they're using. This tendency will be strongest with small-sized data that originates in the cloud (like email), but will probably extend to other forms of data over time.

Peter Moore on Xbox

Peter Moore on Xbox

I always liked Peter Moore, and I was sorry when he left Xbox for EA. He's given a very good interview on his time at Sega and Microsoft. (He ran the Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of insight into the Xbox part of the game industry.

Here he is talking about Rare:
...and you know, Microsoft, we'd had a tough time getting Rare back – Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark… but we were trying all kinds of classic Rare stuff and unfortunately I think the industry had past Rare by – it's a strong statement but what they were good at, new consumers didn't care about anymore, and it was tough because they were trying very hard - Chris and Tim Stamper were still there – to try and recreate the glory years of Rare, which is the reason Microsoft paid a lot of money for them and I spent a lot of time getting on a train to Twycross to meet them. Great people. But their skillsets were from a different time and a different place and were not applicable in today's market.

Tuesday, September 16, 2008

Pro tip: Try writing it yourself

Sometimes I need to get a feature into the project I'm working on, but the developer who owns the feature is too busy to implement it. A trick that seems to help unblock things is if I hack up an implementation of the feature myself and work with the owner to refine it.

This is only possible if you have an engineering culture that allows it, but luckily both Google and Microsoft cultures allow this, at least at certain times in the product lifecycle when the tree isn't frozen.

By implementing the feature myself, I'm (a) reducing risk, as we can see the feature sort of works, (b) making it much easier for the overworked feature owner to help me, as they only have to say "change these 3 things and you're good to go", rather than having to take the time to educate me on how to implement the feature, (c) getting a chance to implement the feature exactly the way I want it to work.

Now, I can think of a lot of situations where this approach won't work: at the end of the schedule where no new features are allowed, in projects where the developer is so overloaded that they can't spare any cycles to review the code at all, or in projects where people guard the areas they work on.

But I've been surprised how well it works. And it's getting easier to do, as distributed version control systems become more common, and people become more comfortable working with multiple branches and patches.

Monday, September 15, 2008

Tim Sweeney on the Twilight of the GPU

Ars Technica published an excellent interview with Tim Sweeney on the Twilight of the GPU. As the architect of the Unreal Engine series of game engines, Tim has almost certainly been disclosed on all the upcoming GPUs. Curiously he only talks about NVIDIA and Larrabee. Is ATI out of the race?

Anyway, Tim says a lot of sensible things:

  • Graphics APIs at the DX/OpenGL level are much less important than they were in the fixed-function-GPU era.
  • DX9 was the last graphics API that really mattered. Now it's time to go back to software rasterization.
  • It's OK if NVIDIA's next-gen GPU still has fixed-function hardware, as long as it doesn't get in the way of pure-software rendering. (Fixed-function hardware will be useful for getting high performance on legacy games and benchmarks.)
  • Next-gen NVIDIA will be more Larrabee-like than current-gen NVIDIA.
  • The next-gen programming language ought to be vectorized C++ for both CPU and GPU.
  • Possibly the GPU and CPU will be the same chip on next-gen consoles.

Tuesday, August 12, 2008

The Future of Graphics APIs

The OpenGL 3.0 spec was released this week, just in time for SIGGRAPH. It turns out to be a fairly minor update to OpenGL, little more than a codification of existing vendor extensions. While this disappoints OpenGL fans, it's probably the right thing to do. Standards tend to be best when they codify existing practice, rather than when they try to invent new ideas.

What about the future? The fundamental forces are:

+ GPUs and CPUs are going to be on the same die
+ GPUs are becoming general purpose CPUs.
+ CPUs are going massively multicore

Once a GPU is a general-purpose CPU, there's little reason to provide a standard all-encompassing rendering API. It's simpler and easier to provide an OS, a C compiler, and a reference rendering pipeline, and then let application writers customize the pipeline for their applications.

The big unknown is whether any of the next-generation video game consoles will adopt the CPU-based-graphics approach. CPU-based graphics may not be cost competitive soon enough for the next generation of game consoles.

Sony's a likely candidate - it's a natural extension to the current Cell-based PS3. Microsoft would be very comfortable with a Larrabee-based solution, given their OS expertise and their long and profitable relationship with Intel. Nintendo's pretty unlikely, as they have made an unbelievable amount of money betting on low-end graphics. (But they'd switch to CPU-based graphics in an instant if it provided cost savings. And for what it's worth, the N64 did have DSP-based graphics.)

Sunday, July 27, 2008

Mac Mini HTPC take two

I just bought another Mac Mini to use as a HTPC (home theater PC). I tried this a year ago, but was not happy with the results. But since then I've become more comfortable with using OS X, so today I thought I'd try again.

Here's my quick setup notes:
  • I'm using a Mac Mini 1.83 Core 2 Duo with 1 GB of RAM. This is the cheapest Mac Mini that Apple currently sells. I thought about getting an AppleTV, but I think the Mini is easier to modify, has more CPU power for advanced codecs, and can be used as a kid's computer in the future, if I don't like using it as an HTPC. I also have dreams of writing a game for the Mini that uses Wiimotes. I think this would be easier to do on a Mini than an AppleTV, even though the AppleTV has a better GPU.
  • I'm using "Plex" for viewing problem movies, and I think it may end up becoming my main movie-viewing program. It's the OS X version of Xbox Media Center. (Which is a semi-legal program for a hacked original Xbox. The Plex version is legal because it doesn't use the unlicensed Xbox code.) The UI is a little rough. (Actually, by Mac standards it's _very_ rough. :-) ) Plex has very good codec support and lots of options for playing buggy or non-standard video files.
  • I connected my Mac Mini to my media file server using gigabit ethernet. This made Front Row feel much snappier than when I was using an 802.11g wireless connection.
  • I installed the Perian plugin, which adds support for many popular codecs to QuickTime and Front Row.
  • I set up my Mac Mini to automatically mount my file server share at startup and when coming out of sleep. Detailed instructions here. Synopsis: Create an AppleScript utility to mount the share, put the utility in your Login Items so that it's run automatically at startup, and finally use SleepWatcher to run the script after a sleep.
  • I added FrontRow to my Login Items (Apple Menu:System Preferences...:Accounts:Login Items) to start Front Row at startup.
  • I administer my Mini HTPC using VNC from a second computer. I don't have a keyboard or mouse hooked up to the HTPC normally. I disabled the Bluetooth keyboard detection dialog using Apple Menu:System Preferences...:Bluetooth:Advanced... then uncheck "Open Bluetooth Setup Assistant at startup when no input device present".
Things I'm still working on:
  • No DVR-MS codec support in Perian, and therefore none in Front Row. I have to use my trusty Xbox 360 or VLC to view my Microsoft Windows Media Center recordings.

Monday, July 14, 2008

ICFP 2008 post-mortem

This year's ICFP contest was a traditional one: Write some code that solves an optimization problem with finite resources, debug it using sample data sets, send it in, and the judging team will run it on secret (presumably more difficult) data sets, and see whose program does the best. The problem was to create a control program for an idealized Martian rover that had to drive to home base while avoiding craters, boulders, and moving enemies.

I read the problem description at noon on Friday, but didn't have time to work on the contest until Saturday morning.

The first task was to choose a language. On the one hand, the strict time limit argued for an easy-to-hack "batteries included" language like Python, for which libraries, IDEs, and cross-platform runtime were all readily available. On the other hand, the requirement for high performance and ability to correctly handle unknown inputs argued for a type safe, compiled language like ML or O'Caml.

I spent half an hour trying to set up an O'Caml IDE under Eclipse, but unfortunately I was not able to figure out how to get the debugger to work. Then I switched to Python and the PyDev IDE, and never ran into a problem that made me consider switching back.

I realize that the resulting program is much slower than a compiled O'Caml would be, and it probably has lurking bugs that the O'Caml type system would have found at compile time. But it's the best I could do in the limited time available for the contest.

It was very pleasant to develop in Python. It's got a very nice syntax. I was never at a loss for how to proceed. Either it "just worked", or else a quick web search would immediately find a good answer. (Thanks Google!)

The main drawback was that the Python compiler doesn't catch simple mistakes like uninitialized variables until run time. Fortunately that wasn't too much of a problem for this contest, as the compile-edit-debug cycle was only a few seconds long, and it only took a few minutes to run a whole test suite.

The initial development went smoothly: first I wrote the code to connect to the simulation server and read simulation data from it. Then I created classes for the various types of objects in the world, plus a class to model the world as a whole. Next I wrote a method that examined the current state of the world and decided what the Martian rover should do. Finally, I wrote a method that compared the current and desired rover control states and sent commands back to the simulation server to update the rover's controls.
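Sketched in Python, that structure looked roughly like the skeleton below. The message format and command strings here are simplified stand-ins, not the exact ICFP 2008 wire protocol:

```python
# Skeleton of the rover control loop described above. The telemetry format
# ("T time x y dir speed ;") and the command set ('a;' accelerate,
# 'l;'/'r;' turn) are illustrative placeholders, not the real protocol.

import socket

def parse_telemetry(msg: str) -> dict:
    """Parse a (simplified) semicolon-terminated telemetry message."""
    fields = msg.split()
    return {
        "time":  int(fields[1]),
        "x":     float(fields[2]),
        "y":     float(fields[3]),
        "dir":   float(fields[4]),
        "speed": float(fields[5]),
    }

def decide(world: dict) -> str:
    """Compare current state to desired state and emit a control command."""
    if world["dir"] > 0:
        return "r;"   # steer back toward the goal direction
    return "a;"       # otherwise, floor it

def run(host: str, port: int):
    """Main loop: read ';'-terminated messages, respond with commands."""
    sock = socket.create_connection((host, port))
    buf = ""
    while True:
        buf += sock.recv(4096).decode("ascii")
        while ";" in buf:
            msg, buf = buf.split(";", 1)
            if msg.strip().startswith("T"):
                world = parse_telemetry(msg)
                sock.sendall(decide(world).encode("ascii"))

# The parse/decide halves can be exercised without a server:
world = parse_telemetry("T 3450 12.5 -3.0 0.7 2.0")
assert decide(world) == "r;"
```

Keeping the parsing, the world model, and the decision logic in separate functions made it easy to test each piece against the sample data sets.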

The meat of the problem is deciding how to move the rover. The iterative development cycle helped a lot here -- by being able to run early tests, I quickly discovered that the presence of fast-moving enemies put a premium on high speed movement. You couldn't cautiously analyze the world and proceed safely, you had to drive for the goal as quickly as possible.

My initial approach was to search for the closest object in the path of the rover, and steer around it. This worked, but had issues in complicated environments. Then I switched to an idea from Craig Reynolds' Not Bumping Into Things paper: I rendered the known world into a 1D frame buffer, and examined the buffer to decide which way to go. That worked well enough that I used it in my submission.
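A minimal sketch of that 1D-frame-buffer idea: render each obstacle's angular extent into a small buffer of candidate headings, then scan the buffer for the clear heading closest to the goal. The bucket count, field of view, and obstacle numbers below are made up for illustration:

```python
# Sketch of 1D-frame-buffer steering (after Reynolds' "Not Bumping Into
# Things"): mark blocked headings in a 1D buffer, pick the clearest one.

import math

BUCKETS = 64          # 1D "frame buffer": one cell per candidate heading
FOV = math.pi         # consider headings within +/- 90 degrees of ahead

def heading_of(i: int) -> float:
    """Heading angle (radians, 0 = straight ahead) for bucket i."""
    return -FOV / 2 + FOV * i / (BUCKETS - 1)

def render(obstacles):
    """obstacles: list of (bearing, distance, radius) relative to the rover.
    Returns a buffer where buf[i] is True if that heading is blocked."""
    buf = [False] * BUCKETS
    for bearing, dist, radius in obstacles:
        half_width = math.atan2(radius, dist)   # angular half-extent
        for i in range(BUCKETS):
            if abs(heading_of(i) - bearing) < half_width:
                buf[i] = True
    return buf

def steer(buf, goal_bearing: float):
    """Pick the unblocked heading closest to the goal bearing."""
    best, best_err = None, float("inf")
    for i, blocked in enumerate(buf):
        if blocked:
            continue
        err = abs(heading_of(i) - goal_bearing)
        if err < best_err:
            best, best_err = heading_of(i), err
    return best   # None means every heading is blocked

# A boulder dead ahead: the chosen heading should swerve around it.
buf = render([(0.0, 10.0, 2.0)])
heading = steer(buf, goal_bearing=0.0)
assert heading is not None
assert abs(heading) >= math.atan2(2.0, 10.0) - 1e-9
```

The appeal of the approach is that overlapping obstacles compose for free: each one just paints more of the buffer, and the scan at the end handles any resulting clutter.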

I spent about fourteen hours on the contest: Two hours reading the problem and getting the IDE together, ten hours over two days programming and debugging, and about two hours testing the program on the Knoppix environment and figuring out how to package and submit the results.

Things I wish I had had time to do
  • My rover is tuned for the sample data sets. The organizers promised to use significantly different data sets in the real competition. Unfortunately, I didn't have time to adapt the program to these other data sets, beyond some trivial adjustments based on potential differences in top speed or sensor range.
  • I model the world at discrete times, and don't account for the paths objects take over time. I can get away with this because I'm typically traveling directly towards or away from important obstacles, so their relative motion is low. But I would have trouble navigating through whirling rings of Martians.
  • I don't take any advantage of knowledge of the world outside the current set of sensor data. The game explicitly allows you to remember the world state from run to run during a trial. This could be a big win for path planning when approaching the goal during the second or later trials.
  • I don't do any sort of global path planning. A simple maze around the goal would completely flummox my rover.
I very much enjoyed the contest this year. I look forward to finding out how well I did, as well as reading the winning programs. The contest results will be announced at the actual ICFP conference in late September.

Wednesday, July 9, 2008

Getting ready for ICFP 2008

The rules for this year's ICFP contest have just been posted. Although the actual problem won't be posted until Friday July 11th, the rules themselves are interesting:
  • Your code will be run on a 1GB RAM 4GB swap 2GHz single-processor 32-bit AMD x86 Linux environment with no access to the Internet.
  • You have to submit source code.
  • You may optionally submit an executable as well (useful if for example you use a language that isn't one of the short list of languages provided by the contest organizers.)
  • Teams are limited to 5 members or less.
I have mixed feelings about these rules. The good news is:
  • It should be possible for most interested parties to recreate the contest environment by using the contest-provided Live CD. A computer capable of running the contest could be purchased new for around $350.
  • It seems that the focus will be on writing code in the language of the contestant's choice, rather than in the language of the contest organizers' choice. This wasn't the case in some previous years' contests.
  • It provides a level playing field in terms of CPU resources available to contestants.
  • It ensures that the winning entry is documented. (A few years ago the contest winner never wrote up their entry, which was quite disappointing.)
The bad news is:
  • It penalizes contestants with low Internet bandwidth. The Live CD image is not yet available for download, and I anticipate some contestants will have difficulty downloading it in time to compete in the contest.
  • It penalizes non-Linux users, who are forced to use an alien development environment and operating system.
  • It penalizes languages too obscure to make the contest organizer's list. That goes against the whole "prove your language is the best" premise of the contest.
  • The target system is 32 bits and single core, which is at least five years out of date, and does little to advance the state of the art. This penalizes many languages and runtimes. For example OCaml has a harsh implementation limit on array size in 32 bit runtimes that is relaxed in 64-bit runtimes.
  • It seems as if there won't be any during-the-contest scoring system, so we will have to wait until the ICFP conference to find out how the contestants did.
Still, I'm hopeful that the contest itself will still be enjoyable. I look forward to reading the actual programming problem on Friday.

Saturday, June 28, 2008

Network Attached Storage Notes

I just bought a Buffalo LinkStation Mini 500GB Network Attached Storage (NAS) device. It's a very small fanless Linux file server with two 250 GB hard drives, 128 MB of RAM, a 266 MHz ARM CPU, and a gigabit Ethernet port.

My reasons for buying a NAS

  • I wanted to provide a reliable backup of family photos and documents, and I was getting tired of burning CDs and DVDs.
  • I wanted a small Linux-based server I could play with.

My reason for buying the LinkStation Mini

  • It's fanless.
  • It's tiny.
  • Buffalo has a good reputation for NAS quality.
  • There is a decent sized Buffalo NAS hacking community.
  • Fry's had it on sale. :-)

Setting it up

Setup was very easy -- I unpacked the box, plugged everything in, and installed a CD of utility programs. The main feature of the utility program is that it helps find the IP address of the NAS. All the actual administration of the NAS is done via a Web UI.

To RAID or not to RAID

The LinkStation Mini comes with two identical drives, initially set up as RAID0, which stripes files across both drives: if either drive fails, all your files are lost. Using the Web UI, I reformatted the drives to RAID1, which mirrors each file on both drives. This halves the amount of disk space available for files, but I thought the added security was worth it. The switch-over was fairly easy to do, but it erases all the data on the drives and takes about 80 minutes.

RAID1 is more secure than RAID0, but it is not perfectly secure. There's still a chance of losing all the data if the controller goes bad, or if the whole device is stolen or destroyed. So for extra security I will probably end up buying a second NAS (or a USB 2.0 drive) and setting up automatic backups from the primary device to it. The Mini can be set to perform periodic automatic backups to a second LinkStation for this very reason. Once I do that, I'll probably reformat my NAS's drives back to RAID0 to enjoy the extra storage space.
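The trade-off is easy to put numbers on. A back-of-envelope sketch, using a made-up per-drive failure probability (real drive statistics vary widely):

```python
# Back-of-envelope comparison of the two layouts on a two-drive NAS.
# p is a hypothetical per-drive annual failure probability, for illustration.

p = 0.03   # made-up chance that a given drive fails this year

# RAID0 stripes files across both drives: lose everything if EITHER fails.
p_raid0_loss = 1 - (1 - p) ** 2

# RAID1 mirrors every file: lose data only if BOTH fail (before a rebuild).
p_raid1_loss = p ** 2

assert p_raid1_loss < p_raid0_loss
print(f"RAID0 loss risk: {p_raid0_loss:.4f}, RAID1 loss risk: {p_raid1_loss:.4f}")
```

Note that the mirrored case assumes the two failures are independent; a bad controller, theft, or fire takes out both copies at once, which is why an off-device backup still matters.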

Getting Access to Linux root

There is a program called acp_commander, that enables you to remotely log in as root on any Buffalo LinkStation Mini on the same LAN as your PC. Once logged in as root you can read and write any file on the NAS. You can use this power to install software and reconfigure your system.

Yes, this is a security hole -- it means anyone with access to your local LAN can bypass all the security on the file server. Very advanced users can patch the security hole by following the instructions at this web forum. I think it's extremely negligent of Buffalo to configure their NAS devices in this way. Imagine the uproar if Microsoft shipped a product with this kind of security hole.

Playing with Linux

Once I obtained root access to the Mini I was able to install additional software. I installed the Optware package system, which gives access to a wide variety of precompiled utility programs, as well as tools for writing new programs.
(Yeah, I know, it's crazy to run software on a file server that's supposed to be backing up important data. Right now I'm just having fun playing with my new toy, but eventually I'm going to have to get serious about making it work reliably.)

From looking at what other people have done, I am thinking that I might set up a small web server, or perhaps a media server for streaming music and video.

Thinking of the Future

There's an active LinkStation hacking community at buffalo.nas-central.org. Unfortunately the Linkstation Mini is so new that nobody in the NAS hacking community knows much about it. Right now it seems to be similar to a LinkStation Pro Duo, but only experience will show if this is true.
The Mini comes with a USB 2.0 port, to which you can attach a printer and/or a hard disk. While the hard disk isn't part of a RAID array, it could be used to back up the RAID array, providing an additional layer of security.

Alternatives

There must be 20 different NAS vendors, although many of them just repackage reference designs made by the SOC (System-on-Chip) vendors. Marvell seems to be the dominant player in the NAS SOC market these days. A good overview of available NAS products can be found at Small Net Builder. Some brands, like Revolution, QNAP, and Synology, cater to enthusiasts who are interested in using the NAS as a mini Linux server. The only things that stopped me from buying those brands are that (a) they're more expensive, and (b) they don't currently have fanless RAID1 form factors.
The Revolution brand is actually owned by Buffalo. They add hardware daughter boards to standard Buffalo products. The daughter boards have extra flash chips and I/O connectors. It's possible that there will be a Revolution "Kuro box" version of the Mini some day.

The venerable (out-of-production, but still available in stores) Linksys NSLU2 is fanless, cheap, and very popular with hackers, but you need to add hard drives, and I don't think its networking performance is very good compared to more recent products.

Another approach is to use a PC, either running a regular OS like Windows XP, Windows Server, OSX or Linux, or a special-purpose stripped-down NAS version. I do have an old PC currently running Windows Media Center that I could use for this purpose, but I didn't seriously consider this option because I wanted something small, low-power, and quiet. (And I was looking for an excuse to learn how to administer a Linux system anyway.)

Apple makes NAS products too. Their Airport Extreme and Time Capsule products both look OK, but neither one supports RAID1. And there doesn't seem to be a software hacking community around these products. There is one around the AppleTV, which you could make into a NAS by adding some USB 2.0 hard drives.

Some routers (like the Apple Airport Extreme mentioned above) have USB 2.0 ports, but I think they avoid advertising themselves as NAS products because they don't have enough RAM (or CPU) to act as both routers and file servers. As a result, these products tend to have relatively low NAS performance.

Some people would laugh at a NAS that has only 240GB of storage. They are more interested in the high-end NASes that use four or five 1TB disks. When formatted in a RAID5 configuration those NASes have 3TB or more of usable space. But they also cost $600 plus the cost of the drives ($160 each), which is much more than I wanted to spend. Besides the cost, another drawback is that these products are nearly as large and noisy as regular PCs. Still, if you've got a lot of video (or anticipate generating a lot of video in the future), the larger NASes are the way to go.

A NAS in Every Garage?

While all my friends and I are setting up file servers to store their family's videotapes, I'm not sure if the product will become universally popular. I think it will depend on how people's secure storage needs evolve.
We're already seeing small files (email, photos, low-res videos) being stored in the cloud. It seems like it's just a matter of time before everything is. Unless people suddenly come up with compelling new applications that use dramatically more data (holographic TV perhaps?), it seems likely that people's personal storage needs are going to top out in the next decade. If disk capacity and network bandwidth keep growing at a rapid pace for several decades beyond that, then it seems inevitable that cloud storage will eventually take over.

In any event, by the time this happens my little Mini will long since have been retired. (I remember paying $100 apiece for 1GB Jaz disks back in the day. It's amazing how far and how fast storage prices have fallen.) If all goes well, my family's photos and other important documents will still be around!

Saturday, June 7, 2008

I saw the original Spacewar! on a PDP-1 today

I went to the Computer History Museum today. I saw the Visible Storage exhibit, which is a collection of famous computers; the Babbage Difference Engine, which is a very elaborate reproduction of a never-actually-built Victorian-era mechanical calculator; and the PDP-1 demo. This last demo was very special to me, because I finally got to play the original Spacewar! game, and meet and chat with Steve Russell, the main developer. (Perusing Wikipedia, I now realize that Steve was also an early Lisp hacker. D'oh! I was going to ask a question about Lisp on the PDP-1, but I got distracted.)

There's a Java Spacewar! emulator, but it doesn't properly convey the look of the PDP-1's radar-scope-based display. The scope draws individual dots, up to 20,000 per second. Each dot starts as a fuzzy bright blue-white spot, then fades quickly to a dim yellow-green one, which takes another 10 seconds to fade to black. This means that dim yellow-green trails form behind the ships as they fly around. These trails add a lot to the game's distinctive look. (In addition, due to time multiplexing, the stars of the starfield are much dimmer than the spaceships or the sun.) The fuzziness of the dots means that the spaceships look much smoother on the PDP-1 scope than they do in the Java simulator.
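As a rough illustration of that two-phase fade (the time constants below are guesses for illustration, not measurements of the actual tube), the brightness of a single dot could be modeled as a fast-decaying flash plus a slowly decaying persistent glow:

```python
import math

def phosphor_brightness(t, flash_tau=0.05, persist_tau=3.0):
    """Toy model of a two-layer phosphor: a fast-decaying bright
    blue-white flash plus a dim, slowly decaying yellow-green
    afterglow. t is seconds since the dot was drawn; the return
    value is relative brightness."""
    flash = math.exp(-t / flash_tau)            # bright initial component
    persist = 0.1 * math.exp(-t / persist_tau)  # dim persistent component
    return flash + persist

for t in (0.0, 0.5, 5.0, 10.0):
    print(f"{t:5.1f}s  {phosphor_brightness(t):.4f}")
```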

According to Steve Russell and the other docents, the Java version also runs faster than a real PDP-1.

I also got to see several other cool PDP-1 hacks, including the original Munching Squares, 4-voice square-wave computer-synthesized music, and the famed Minskytron. The author of the music synth program, Peter Samson, was present, and explained how he carefully patched into four of the console lights to make a four-voice D/A converter to get music out of the machine.

They keep all the hacks loaded into the PDP-1 core memory at the same time, and just use the front panel to choose which one to jump to. The core memory is non-volatile, so the PDP-1 even boots in a few seconds -- just the time it takes the power supply to come up to speed.

The PDP-1 demo is given twice a month, on the second and fourth Saturdays. I highly recommend it for adults and children over 12. (It's 45 minutes long, so younger kids might get bored.)

Thursday, June 5, 2008

Thoughts on In-Flight Entertainment systems

I recently spent a lot of time using two different in-flight entertainment systems: one on Eva Air, and another on Virgin Atlantic. For people who haven't flown recently, I should explain that these systems consist of a touch-sensitive TV monitor combined with a remote-control-sized controller. The systems typically offer music, TV, movies, flight status, and video games.

I believe both systems were based on Linux: I saw the Eva system crash and reboot, and the Virgin Atlantic system had a number of Linux freeware games.

The GUI frameworks were pretty weak -- both systems made poor use of the touch screen and had obvious graphical polish issues. The Virgin system was much higher resolution, and was 16:9 aspect ratio. I expect it was running on slightly higher-spec hardware.

Both systems worked pretty well for playing music and watching TV or movies. The media controls were pretty limited - neither system allowed seeking to a particular point in a movie, or even reliably fast forwarding. Both systems provided enough media to entertain your average customer for the duration of the flight.

One cool feature of the EVA system was a backwards-compatibility mode with the older "channel" music system from the '70s. The controller came with the traditional "channel" UI. If you used the channel buttons, the system simply acted like the old system, cycling through a limited number of preset channels. One nice difference from the old channel system is that these new virtual channels always started from the beginning when you switched to them, rather than joining the looping presentation at whatever point it happened to be at.

The game portions of both systems were very weak; none of the games were very good. Perhaps the best was a port of the shareware Doom on the Virgin Atlantic system. (I used an in-flight entertainment system on Singapore Air many years ago that had Nintendo games. It was more fun.)

The Virgin system allowed you to order food and drink, which was nice. Both systems had credit card swipers, and offered some for-pay options. Both systems allowed you to make in-flight phone calls. EVA allowed you to send SMS messages and emails.

Both systems allowed you to create "play lists" of music tracks that would then be played while you did other tasks. I enjoyed this, but I suspect it's not used much, as anyone with the sophistication and interest to use this UI would probably have their own MP3 player.

The Virgin system had two other very nice features: 1) laptop power in most seats (although only two plugs for every three seats), and 2) Ethernet connections. Unfortunately the ethernet connections were not yet active.

Virgin allowed you to "chat" between seats. I didn't try this, but it seems like it would be fun for some situations (e.g. when a high school class takes a trip.) I expect that the Doom game can play between seats as well, but didn't investigate.

Virgin also had normal mini stereo headphone plugs, which I think was a good idea. Eva had two kinds of audio plug, but neither one was the normal mini stereo plug. I tried using "Skull candy" noise-canceling headphones with the Virgin system, and while they helped suppress the airplane noise, they didn't eliminate it completely.

It will be interesting to see how these systems evolve over time. I think that once in-flight internet access becomes practical, people will prefer surfing the Internet to most of the other services (besides movie watching). And with in-seat power, I think many people will prefer using their own laptops to the in-seat system. On the other hand, the in-seat system is very space efficient. There's a chance people will use it as a remote display for their own laptop or mobile phone, which could then remain tucked away in their carry-on luggage.

Saturday, May 31, 2008

Wii long term strategy

Here's a very long, quite good post on Nintendo's strategy with the Wii:

http://malstrom.50webs.com/birdman.html


The thesis is that the mainstream video game market's arms race of ever-more-complicated games overshot the needs of enough of the potential market to leave an opening for simpler "down-market" games, and that the Wii was able to exploit this opening. The article predicts that Nintendo will now move up-market, producing more complicated games over time and pushing the PS3 and Xbox 360 into very up-market niches -- sort of like how consoles took over from PC games.

Wednesday, May 14, 2008

OS X will hang if your VPN connection is flaky

OS X is by and large a good OS, but once you get past the sexy UI you find a lot of rough edges.

For example, this month I've been working remotely over a flaky DSL connection. I ran into a very frustrating problem: if you're using a PPTP-based VPN, and your network connection is poor quality, the whole Apple UI will frequently freeze up with the "spinning beachball" cursor for minutes at a time.

Luckily for me the work-around is to reboot my DSL modem. But it seems like poor system design for the VPN packet performance to affect the UI of non-networked applications.

Tuesday, May 13, 2008

Yegge's rant on dynamic languages

Another superb rant by Steve Yegge on dynamic languages:

http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html

The comment section's good too -- especially the long comment by Dan Weinreb of Lisp / ITA software fame.

Steve's got the same problem some of my self-taught friends do (hi Bob, hi Jim!): he'll say something in a strongly opinionated way, without giving supporting evidence. I think that makes people think he doesn't know what he's talking about. So people tend to write him off. But if you talk with him, it almost always turns out his strong opinions are backed by some pretty deep experience and insight. I've learned to give Steve (and my self-taught friends) the benefit of the doubt.

Friday, April 18, 2008

Wow, TVs are complicated

Check out this teardown of a Sony OLED TV. It looks like Sony has a standardized architecture for their TVs, which makes sense, but which also means that some TVs have unused capabilities (such as a multi-core CPU powerful enough to run a web browser). I wish an enterprising hacker would figure out how to download code and run it on the TVs -- my understanding from reading Sony's GPL web site is that they already have Linux and BusyBox installed. Oh well, maybe GPL 3.0 will force Sony to make their TVs user-upgradable in the future.

It's significant that Sony's not using the Cell CPU in their TVs. That was part of the justification for spending so much on Cell. I assume this means that Cell's just not cost-effective for TVs.

Tom Forsyth on Larrabee

Tom Forsyth, who recently left RAD to work at Intel on the Larrabee project, has posted to his tech blog explaining that Larrabee is going to be primarily a traditional OpenGL/DirectX rasterizer, not some crazy raytracer:
Larrabee and Raytracing

Sunday, April 6, 2008

Dusty Decks

Back in the '90s I had a home page where I posted some of my code hacks and articles. If you want to see what I was doing 10 years ago, check out:

Jack's Hacks

(Mostly Java and Anime. Both of which were leading-edge back then, but are kind of main-stream now.)

Sunday, March 30, 2008

ThinLisp

Some notes on ThinLisp, a dialect of Lisp for real-time systems. ThinLisp was written by Gensym Corporation in the '90s. The general idea is that you develop your program using a subset of Common Lisp, and then compile it into efficient C. Garbage collection is avoided by using object pools, arenas, and similar tricks familiar to advanced C programmers.
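The object-pool trick can be sketched in any language; here's a minimal Python version of the pattern (ThinLisp itself compiles to C, and `Pool` here is my own illustrative name, not ThinLisp API): allocate a fixed set of objects up front and recycle them, so no allocation happens after startup.

```python
class Pool:
    """A fixed-size free-list pool: acquire() hands out a
    pre-allocated object, release() returns it for reuse."""
    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        if not self._free:
            raise RuntimeError("pool exhausted")
        return self._free.pop()

    def release(self, obj):
        self._free.append(obj)

# Example: a pool of reusable 3-element "point" lists.
points = Pool(lambda: [0.0, 0.0, 0.0], size=16)
p = points.acquire()
p[0] = 1.5       # mutate in place instead of allocating a new point
points.release(p)
```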

The current home of ThinLisp seems to be Vladimir Sedach's Code Page. Vladimir seems to have used it for one small OpenGL project before abandoning it. He seems to be happily hacking ParenScript (a Lisp to Javascript translator) these days.

The Scheme guys have similar, but more modest, systems: Schelp, and PreScheme (part of Scheme48).

Wednesday, March 26, 2008

One year at Google!

Happy Anniversary to me! Google's automated HR script just emailed me its congratulations.

Although I miss my friends and former colleagues at Microsoft, and I miss the games industry, overall I'm still glad I made the switch. I'm enjoying the new work, and learning all the cool Google technologies. Now if only the stock price didn't keep going down. :-)

Some things I like:
  • Switching from Windows to Macintosh. It took me six months to get used to the subtle differences, but as a user I'm just happier with the Mac. It's easier for me to use. Now, to be fair, at home I still maintain a Windows Vista machine for the excellent Windows Media Center, but for everything else I use the Mac.
  • Better corporate politics. There seems to be less infighting between groups. And while my overall compensation is about the same as it was at Microsoft, the way it's managed and delivered makes it seem less competitive than MS. Perhaps it's an illusion, but it feels better.
  • Better equipment. I love using a 30" LCD monitor and a high-end laptop, and I love the "we'll just give it to you" technical support. At MS sometimes I felt that I had to fight to justify minor hardware purchases.
  • Fancier food in the cafeterias. I usually eat lunch and dinner at work, so the tasty and relatively healthy food is much appreciated. Interestingly enough, I think Google Seattle uses the same food caterer as Microsoft does. I guess we just asked them to cook different food.
  • More connected to the Web / Valley culture. Microsoft's pretty insular. It was good to get closer to the Silicon Valley culture again.
  • Able to use open-source projects. Sometimes an open-source project is the best way of doing something. But Microsoft's not able to use them, due to a combination of pride, loyalty to its own products, and fear of viral licenses. For example, using Linux for embedded devices is much better than Windows CE. I also found I liked using non-Microsoft technologies like Java, Python and Ruby.
Since you may be wondering, what do I miss from Microsoft?
  • I miss the Xbox project. Right now my old team is probably starting to plan the next generation of Xbox, and it would have been a blast to have been a part of that process.
  • I miss the free game betas, and using beta software in general. (I'm a sucker for new features!)
  • I miss some of the Microsoft-specific technologies like C# (still better than Java), Visual Studio (still better than Eclipse), and F#.
  • Surprisingly, I don't miss my old single-person office very much. It's true that a group office is distracting. But it's also helpful for sharing ideas and for keeping focused on the project.

Well, that's it, better get back to work!

Tuesday, March 18, 2008

Insomniac Games Shares Technology

One very nice habit of Western game companies is that many of them share their technical knowledge with competitors. Insomniac Games is a very good third-party console game developer that has concentrated mostly on the PlayStation platform. Their recent games include the shooter "Resistance: Fall of Man" and the action-platformer Ratchet & Clank series.

At this year's GDC they announced the "Nocturnal" initiative. It's not a whole game engine, but rather a collection of useful utilities: things like logging code, C++ object serialization, and a cross-platform performance monitor. Some of the utilities are PlayStation 3 specific, but most are applicable to any modern game platform.

Much of this code would be right at home in a "Game Gems" book, but it's even better to have it freely available, on the web, with a BSD-style license. Good for you Insomniac!

Insomniac also publishes technical papers in a GDC-presentation-like format on their Game R & D Page.

Why do so many game companies share information like this? I think it's for a number of mutually supportive reasons:
  1. It's a form of advertising, to show off how smart and competent the developers are. This is helpful in attracting job applicants and impressing publishers and game reviewers.
  2. It educates all game developers, some of whom will eventually end up working for the original developer.
  3. It encourages other developers to share their technology, which benefits the original game developers.
  4. It reduces the value of commercial middleware, driving down its cost.
Game developers can give away source code because, unlike other kinds of software, the major intellectual property in a game is in the copyrighted and trademarked art assets (the data) rather than in the code. Yet, at the same time, game quality is directly tied to the performance of the code. This creates a unique economy in which it is profitable for game developers to exchange performance tips with their competitors.

And it's a lot of fun for armchair developers like me. Now if only we can get Naughty Dog to open-source their GOOL and GOAL Lisp-based game engines. :-)

Saturday, March 15, 2008

TaxCut 2007 vs. Case-sensitive file systems

OS X allows you to format your file system as case-insensitive (the default) or case-sensitive (like Linux). I use case-sensitive, to simplify porting and working on Linux software.

Unfortunately, H&R Block's TaxCut 2007 program won't work if installed on a case-sensitive file system. It fails because it can't find files, probably due to mismatches between the case of the file names used by the programmer and the actual case of the file names on disk.

A work-around is to use the Disk Utility program to create a case-insensitive image, and install TaxCut on the image. I used a 600 MB image, so that I can store all my tax forms there too, and eventually burn the whole thing to CD to archive it.
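If you're not sure which way a volume is formatted, a quick probe is to create a file and see whether a differently-cased name resolves to it. A small sketch (the function name is my own):

```python
import os
import tempfile

def volume_is_case_sensitive(directory):
    """Create a lowercase-named file, then check whether the
    uppercased name also resolves. On a case-insensitive volume
    it will, so the volume is case-sensitive only if it doesn't."""
    fd, path = tempfile.mkstemp(prefix="casetest_", dir=directory)
    os.close(fd)
    try:
        swapped = os.path.join(os.path.dirname(path),
                               os.path.basename(path).upper())
        return not os.path.exists(swapped)
    finally:
        os.remove(path)

print(volume_is_case_sensitive(tempfile.gettempdir()))
```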

Tim Sweeney Too - DX 10 last relevant graphics API

A good, three-part interview with Tim Sweeney (the other FPS graphics guru):

Part 1: http://www.tgdaily.com/content/view/36390/118/
Part 2: http://www.tgdaily.com/content/view/36410/118/
Part 3: http://www.tgdaily.com/content/view/36436/118/

His main thesis is that GPUs will soon become so programmable that you won't bother using a standard graphics API to program them. You'll just fire up a C compiler.

I think he's right.

Wednesday, March 12, 2008

Carmack speaks on real next-gen graphics

John Carmack is experimenting with a "sparse octree" data structure for accelerating 3D graphics rendering:

http://www.pcper.com/article.php?aid=532&type=overview

Best quote:

"The direction that everybody is looking at for next generation, both console and eventual graphics card stuff, is a "sea of processors" model, typified by Larrabee or enhanced CUDA and things like that, and everybody is sort of waving their hands and talking about “oh we’ll do wonderful things with all this” but there is very little in the way of real proof-of-concept work going on. There’s no one showing the demo of like, here this is what games are going to look like on the next generation when we have 10x more processing power - nothing compelling has actually been demonstrated and everyone is busy making these multi-billion dollar decisions about what things are going to be like 5 years from now in the gaming world. I have a direction in mind with this but until everybody can actually make movies of what this is going to be like at subscale speeds, it’s distressing to me that there is so much effort going on without anybody showing exactly what the prize is that all of this is going to give us."

Second-best quote is that he wants lots of bit-twiddling operations per second to traverse the data structures, rather than lots of floating-point operations per second. Must be scary for the Larrabee and NVIDIA architects to hear that, this late in their design cycles.

Hopefully John will come up with a cool demo that helps everyone understand whether his approach is a good one or not. Hopefully the Larrabee / NVIDIA architectures are flexible enough to cope. (Interestingly, no mention of ATI -- have they bowed out of the high-end graphics race?)

Sunday, March 9, 2008

ForumWarz - a game about the web forum culture

This is an interesting role-playing-game set in current-day web forum culture:

http://www.forumwarz.com/

It's somewhat not-safe-for-work, and the humor is pretty low-brow. But what's neat is that you play it through your browser, and it recreates the look-and-feel of web forum culture perfectly. It wouldn't surprise me if the authors just captured the HTML for various real-world forums to create the resources for the game. (Or alternately, created their own fictional forums using web tools, and then captured the HTML from those fictional forums.)

The actual game didn't hold my interest for very long, but it's free and it's fun for a few days.

Thursday, February 21, 2008

Good web site for following the Microsoft / Yahoo Merger

Silicon Alley Insider seems to have the best coverage of the Microsoft / Yahoo Merger.

But for the grumpy inside-Microsoft point-of-view you can't beat Mini-Microsoft .

We write for posterity

The Google Analytics numbers for this blog are dismal. (Hi Mom! Hi Friends!) I think it's because right now I don't have much to say that's both unique and interesting. Partly this is because so much of my life is off limits: I don't want to talk about the joys & cares of raising a family, and I mustn't talk about the joys & cares of raising a new product. What's left are comments on the general state of the web, and essays on general topics like this one.

Why write, then, and who am I writing for? I write because something inside me compels me to, and because getting my ideas down in written form helps me think. Who do I write for? From my Analytics numbers it's clear that I'm writing primarily for search engines (Hi Googlebot!) rather than people. And that's something interesting to think about: barring a world-wide disaster or cultural revolution, what I write today will persist for thousands and probably even millions of years, and will be read countless times by search engines, and only occasionally, if at all, by people.

My words will be torn apart and merged with other web pages from other authors, becoming a mulch out of which new insights will be gleaned. (Hmm, not unlike how my body will be recycled when I die, its atoms used to make new things.)

Perhaps the last time my essay will ever be read by a live human is in some far distant future when some graduate student is writing an essay on early-web-era civilization, and is trying to find out what those poor benighted souls thought of the future. (Hi posterity!)

No doubt my words will be automatically translated from 21st-century English into whatever language wins the world-wide language wars. Perhaps my essay will even be automatically annotated, with a description of who I was, and a best guess at what I looked like, gleaned from searching the world's photo archives. There will be footnotes and links to explain the archaic topics I'm referencing. "Search engine": they used to store data on separate computers and brute-force build the search index. How primitive! How quaint!

And no doubt the grad-student-of-the-future will glance over my words, then move on to the hundreds of other essays on similar themes. (Good luck with your own essay, future-guy!)

Tuesday, January 29, 2008

Hey, Paul Graham's Arc programming language is out!

I just noticed (while reading the 4chan prog forum for the first time) that Paul Graham has put up a web site for his minimal Lisp language Arc:

http://arclanguage.org/

The language looks like a nice quiet Scheme-like Lisp dialect. And it has a nice tutorial, as you would expect from a Paul Graham language.

Friday, January 25, 2008

3dMark price/performance charts

3DMark is a GPU/CPU benchmark used by PC gamers to measure system performance. Here are some great charts showing 3DMark price/performance comparisons.
My home computer system is very weak compared to these charts, except in one dimension, which is that my 1600 x 1200 display puts me in the top 10% of gamers. Woot!

While many people (myself included) have switched to laptops and/or all-in-ones, if you're planning on building a new desktop, check out the Ars Technica system guide. The guide does a good job of speccing out a "Budget Box", a "Hot Rod", and a "God Box", and it's updated every quarter.

Thursday, January 24, 2008

Languages that look interesting

Currently I'm reading up on the following computer languages:

Python - fun, easy to learn, batteries included
Boo - fun like Python, but with macros and type declarations so that it can run fast.
Erlang - very brief code. I'm impressed by how concise the Wings3D source code is.
Typed Scheme - Scheme with type checking. (Could in theory run fast.)

I may try implementing my old "Dandy" game in these languages to see how they feel.

Darwin Ports issue with "patch"

Ever since I upgraded to Apple Macintosh OS X 10.5 Leopard, I've run into problems using the DarwinPorts "port" command to install new software.

The problem is that for some reason the version of GNU "patch" that I have installed in /usr/bin/patch is version 2.5.8, and it doesn't operate the way that DarwinPorts expects. A typical error message is:

---> Applying patches to erlang
Error: Target org.macports.patch returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_erlang/work/erlang-R12B-0" && patch -p0 < '/opt/local/var/macports/sources/rsync.macports.org/release/ports/lang/erlang/files/patch-toolbar.erl'" returned error 2
Command output: Get file lib/toolbar/src/toolbar.erl from Perforce with lock? [y]
Perforce client error:
Connect to server failed; check $P4PORT.
TCP connect to perforce failed.
perforce: host unknown.
patch: **** Can't get file lib/toolbar/src/toolbar.erl from Perforce

Error: Status 1 encountered during processing.


The work-around is to define the environment variable POSIXLY_CORRECT=1 , as in:

POSIXLY_CORRECT=1 sudo port install erlang

Now, I've done some web searching, and I haven't seen anyone else complaining about this problem, so perhaps there's something odd about my setup.

Monday, January 7, 2008

Web scraping in Java, F#, Python, and not Lisp

Yesterday I wrote a web scraper. A web scraper is a program that crawls over a set of web pages, following links and collecting data. Another name for this kind of program is a "spider", because it "crawls" the web.

In the past I've written scrapers in Java and F#, with good results. But yesterday, when I wanted to write a new scraper, I thought I'd try using a dynamically-typed language instead.

What's a dynamically-typed language you ask? Well, computer languages can generally be divided into two camps, depending on whether they make you declare the type of data that can be stored in a variable or not. Declaring the type up front can make the program run faster, but it's more work for the developer. Java and F#, the languages I previously used to write a web scraper, are statically typed languages, although F# uses type inference so you don't actually have to declare types very often -- the computer figures it out for you.
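To make the distinction concrete, here's what dynamic typing looks like in Python: variables carry no declared type, and a single function works on any data that supports the operations it uses (in Java, by contrast, you'd have to declare the types up front):

```python
# Dynamic typing: no declarations; x can hold anything.
x = 42          # an integer...
x = "hello"     # ...now a string; the language doesn't object.

# Types still exist, they're just checked at run time:
def double(value):
    return value * 2   # works for numbers *and* sequences

print(double(21))      # prints 42
print(double("ab"))    # prints abab
```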

In order to scrape HTML you need three things:
  1. a language
  2. a library that fetches HTTP pages
  3. a library that parses the HTML into a tree of HTML tags
Unless you're using Mono or Microsoft's Common Language Runtime, the language you choose will restrict the libraries that you can use.

So, the first thing I needed to do was choose a dynamic language.

Since I just finished reading "Practical Common Lisp", an excellent advanced tutorial on the Lisp language, I thought I'd try using Lisp. But that didn't work out very well at all. Lisp has neither a standard implementation nor a set of standard libraries for downloading web pages and parsing HTML. I did some Googling to try to find some combination of parts that would work for me. Unfortunately, it seemed that every web page I visited recommended a different combination of libraries, and none of the combinations I tried worked for me. In the end I just gave up in frustration.

Then, I turned to Python. I had not used Python much, but I knew it had a reputation as an easy-to-use language with a lot of easy-to-use libraries. And you know what? It really was easy! I did some web searches, copied some example code, and voila, I had a working web spider in about an hour. And the program was easy to write every step of the way. I used the standard CPython implementation for the language, Python's built-in urllib2 library to fetch the web data, and the Beautiful Soup library for parsing the HTML.
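A minimal sketch of the link-collecting part of such a spider, using only Python's standard library (`html.parser` stands in for Beautiful Soup here, and the modern Python 3 module names differ from the `urllib2` used at the time):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag seen in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def links_in(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

# A real spider would fetch each page, e.g.:
#   html = urlopen(url).read().decode("utf-8", errors="replace")
# then queue up links_in(html), remembering which URLs it has visited.

print(links_in('<p><a href="/next">next</a> <a href="/prev">prev</a></p>'))
# prints ['/next', '/prev']
```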


How does Python compare to Java and F# for web scraping?

Python Benefits:
  • Very brief, easy to write code
  • Libraries built in or easy to find
  • Lots of web examples
  • I didn't have to think: I just used for loops and subroutine calls.
  • Very fast turn-around.
  • Easy to create and iterate over lists of strings.
Python non-issues for this application:
  • Didn't matter that the language was slow, because this task is totally I/O bound.
  • Didn't matter that the IDE is poor; using print and developing interactively was fine.
F# Benefits:
  • Good IDE (Visual Studio)
  • Both URL fetching and HTML parsing libraries built in to CLR
F# Drawbacks:
  • The CLR libraries for URL fetching and HTML parsing are more difficult to use than Python. It takes more steps to complete similar operations.
  • Strong typing gets in the way of writing simple code.
  • Odd language syntax compared to Algol-derived languages.
  • Hard-to-understand error messages from the compiler.
  • Mixed functional/imperative programming is more complicated than just imperative programming.
  • The language and library encourages you to use advanced concepts to do simple things. In my web scraper I wrote a lot of classes and had methods that took complicated curried functions as arguments. This made the code hard to debug. In retrospect perhaps I should have just used lists of strings, the same as I did in Python. Since F# supports lists of strings pretty well, maybe this is my problem rather than F#'s. ;-)
Java benefits:
  • Good debugger
  • Good libraries
  • Multithreading
Java drawbacks:
  • Very wordy language
  • Very wordy libraries
Lisp drawbacks:
  • No standard implementation
  • No standard libraries
Looking to the future, I'd be interested in writing a web scraper in IronPython, which has good IDE support, and in C# 3.0, which has some support for type inference.

In any event, I'm left with a very favorable impression of Python, and plan to look into it some more. In the past I was put off from it because it was slow, but now I see how useful it is when speed doesn't matter.

[Note: When I first wrote this article I was under the impression that CPython didn't support threads. I since discovered (by reading the Python in a Nutshell book) that it does support threads. Once I knew this, I was able to easily add multi-threading to the web scraper. CPython's threads are somewhat limited: only one thread is allowed to run Python code at a time. But that's fine for this application, where the multiple threads spend most of their time blocked waiting for C-based network I/O. ]
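The threading point in the note can be sketched as follows; `time.sleep` stands in for a blocking network fetch, since both release the interpreter lock while waiting. (`ThreadPoolExecutor` is a later convenience API; 2008-era code would have used the `threading` module directly.)

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    """Stand-in for a network fetch: sleeping releases the global
    interpreter lock, just as blocking socket I/O does."""
    time.sleep(0.2)
    return f"contents of {url}"

urls = [f"http://example.com/page{i}" for i in range(8)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    pages = list(pool.map(fetch, urls))
elapsed = time.monotonic() - start

# Eight 0.2s "fetches" overlap, so this takes ~0.2s, not ~1.6s.
print(f"{len(pages)} pages in {elapsed:.2f}s")
```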