Showing posts from September, 2008

Next gen video console speculation suggests we aim low

The next generation of video game consoles should arrive around 2011, give or take a year. It takes about three years to develop a console, so work should be ramping up at all three manufacturers. Nintendo's best course of action is pretty clear: do a slightly souped-up Wii, perhaps with lots of SDRAM for downloadable games, probably with low-end HD resolution graphics, and definitely with an improved controller (for example, with the recently announced gyroscope attachment built in). Sony and Microsoft have to decide whether to aim high or copy Nintendo. Today a strong rumor has it that Sony is polling developers to see what they think of a PlayStation 4 that is essentially a cost-reduced PlayStation 3 (same Cell, cheaper RAM, cheap launch price): http://forum.beyond3d.com/showthread.php?t=50037 That makes sense, as Sony has had problems this generation due to the high launch cost of the PS3. The drawback of this scheme is that it does nothing to make the PS4 easier to program…

Woot! I'm 19th place in the ICFP 2008 Programming Contest

Team Blue Iris (that's me and my kids!) took 19th place, the top finish for a Python-based entry! Check out the ICFP Programming Contest 2008 Video. The winning team list is given at 41:45.

Will Smart Phones replace PCs?

That's the question Dean Kent asks over at Real World Tech's forums. I replied briefly there, but thought it would make a good blog post as well. I'm an Android developer, so I'm probably biased, but I think most people in the developed world will eventually have a smart phone, just as most people already have access to a PC and Internet connectivity. I think the ratio of phone-to-PC use will vary greatly depending upon a person's lifestyle. If you're a city-dwelling 20-something student, you're going to use your mobile phone a lot more than a 60-something suburban grandpa does. This isn't because the grandpa is old-fashioned; it's because the two people live in different environments and have different patterns of work and play. Will people stop using PCs? Of course not. At least, not most people. There are huge advantages to having a large screen and a decent keyboard and mouse. But I think people will start to think of their phone and their PC…

Peter Moore on Xbox

I always liked Peter Moore, and I was sorry when he left Xbox for EA. He's given a very good interview on his time at Sega and Microsoft. (He ran the Xbox game group at Microsoft before moving on to Electronic Arts.) Lots of insight into the Xbox part of the game industry. Here he is talking about Rare: ...and you know, Microsoft, we'd had a tough time getting Rare back – Perfect Dark Zero was a launch title and didn't do as well as Perfect Dark… but we were trying all kinds of classic Rare stuff and unfortunately I think the industry had passed Rare by – it's a strong statement, but what they were good at, new consumers didn't care about anymore, and it was tough because they were trying very hard – Chris and Tim Stamper were still there – to try and recreate the glory years of Rare, which is the reason Microsoft paid a lot of money for them and I spent a lot of time getting on a train to Twycross to meet them. Great people. But their skillsets were…

Pro tip: Try writing it yourself

Sometimes I need to get a feature into the project I'm working on, but the developer who owns the feature is too busy to implement it. A trick that seems to help unblock things is if I hack up an implementation of the feature myself and work with the owner to refine it. This is only possible if you have an engineering culture that allows it, but luckily both Google and Microsoft cultures allow this, at least at certain times in the product lifecycle when the tree isn't frozen. By implementing the feature myself, I'm (a) reducing risk, as we can see the feature sort of works, (b) making it much easier for the overworked feature owner to help me, as they only have to say "change these 3 things and you're good to go", rather than having to take the time to educate me on how to implement the feature, (c) getting a chance to implement the feature exactly the way I want it to work. Now, I can think of a lot of situations where this approach won't work…

Tim Sweeney on the Twilight of the GPU

Ars Technica published an excellent interview with Tim Sweeney on the Twilight of the GPU. As the architect of the Unreal Engine series of game engines, Tim has almost certainly been disclosed on all the upcoming GPUs. Curiously, he only talks about NVIDIA and Larrabee. Is ATI out of the race? Anyway, Tim says a lot of sensible things:

- Graphics APIs at the DX/OpenGL level are much less important than they were in the fixed-function-GPU era. DX9 was the last graphics API that really mattered.
- Now it's time to go back to software rasterization.
- It's OK if NVIDIA's next-gen GPU still has fixed-function hardware, as long as it doesn't get in the way of pure-software rendering. (Fixed-function hardware will be useful for getting high performance on legacy games and benchmarks.)
- Next-gen NVIDIA will be more Larrabee-like than current-gen NVIDIA.
- The next-gen programming language ought to be vectorized C++ for both CPU and GPU.
- Possibly the GPU and CPU will be the same chip on next-gen consoles…
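
To make the "go back to software rasterization" point a bit more concrete, here's a minimal toy triangle rasterizer in plain C++ (my own sketch, not code from the interview or from Unreal). It fills one triangle into a framebuffer using edge functions; the per-pixel inner loop has no cross-pixel dependencies, which is exactly the kind of loop the vectorized C++ Sweeney describes could spread across SIMD lanes or GPU threads.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

// Signed area of the parallelogram spanned by (a->b, a->p).
static float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Fill one triangle (consistent winding) into a W x H framebuffer.
void rasterize(const Vec2& v0, const Vec2& v1, const Vec2& v2,
               std::vector<uint32_t>& fb, int width, int height, uint32_t color) {
    // Bounding box of the triangle, clipped to the framebuffer.
    int minX = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int maxX = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int minY = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int maxY = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {       // each pixel is independent,
            Vec2 p = { x + 0.5f, y + 0.5f };       // so this loop could run 4/8/16 wide
            float w0 = edge(v1, v2, p);
            float w1 = edge(v2, v0, p);
            float w2 = edge(v0, v1, p);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)     // inside test: all edges agree
                fb[y * width + x] = color;
        }
    }
}

int main() {
    const int W = 64, H = 32;
    std::vector<uint32_t> fb(W * H, 0);
    rasterize({4, 2}, {60, 8}, {20, 30}, fb, W, H, 1);
    for (int y = 0; y < H; ++y) {                  // crude ASCII dump of the framebuffer
        for (int x = 0; x < W; ++x) putchar(fb[y * W + x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}

A real renderer would add sub-pixel precision, tile binning, and shading, but the shape of the problem is the same: lots of small, independent per-pixel computations, which is why it maps so naturally onto either wide CPU vector units or a Larrabee-style many-core chip.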