Made it to Columbia, MO just before it started sleeting. It was an odd trip weather-wise. I took off from Seattle and landed in Denver for a short layover – the temperature was quite cool and there was a lovely dusting of snow all around. When I left Seattle it was lightly raining and in the 40s. Denver was dry (duh, that's what Denver is) and in the 30s.
When I arrived in St. Louis, it was positively balmy! The reported temp was 60 (!) and it was overcast. Not surprisingly, the rain started after a little while. After wandering around St. Louis running a few errands (since when did St. Louis get Trader Joe's and Whole Foods?), we headed down to Columbia.
Just as we got home in Columbia, the rain switched over to sleet. Dan (my brother-in-law) and I decided to make a run out to the mall around 9pm, and by that point we had a nice layer of icy snow and blowing wind. It was really cool – hadn't been out in a good snowstorm in quite a while. This sort of thing would cripple Seattle in short order, but true to form a good number of lunatics were happily out and about in Columbia. Dan and I did snicker about the number of pickups whose drivers clearly hadn't put much weight in the rear end… they were all over the place. We got our errands done and returned (the snow does help reduce the crowds at the mall), sliding a bit but really enjoying it.
It was certainly nice waking up this morning to a white field and shiny ice-trees all over the place. Add in coffee, muffins, and chillin’ out sitting in the kitchen overlooking a wildlife area – this is shaping up to be a lovely Christmas break.
Oh – and I've been told that I need to stop writing so much about "concurrent pythons" and "that energy stuff". Guess my family doesn't quite care for my random technical meanderings… heh. Wait till I shove out that draft I've been working on about multi-core computer architectures.
It used to be just the data center folks that really dug into power consumption and trying to optimize power efficiencies. But the newer machines are eating enough power that even smaller development labs should start taking a serious look when they go to buy hardware – and many are.
I've been working around those issues myself for the past several weeks, so I was really pleased to read that SPEC is coming out with a standard benchmark for measuring compute performance relative to power consumption. SPEC has a formal press release on it, as well as details on the methodology. I'd been doing a crude variation on the theme – Watts/GHz – but I'm well aware that in using that as a measure I'm making a number of hasty generalizations.
One of the most frustrating components of doing this research is actually finding out how much power something is consuming. I’m looking forward to comparing desktops and laptops to the server estimations I did a little earlier. My hasty estimations were mostly done around HP server hardware, using their specs and a downloadable power consumption estimation tool.
What I found was that the DL385 series was running around 42 Watts/GHz, with the DL585 slightly more efficient at 39 to 41 Watts/GHz (depending on how you load out the memory). Look at blades with their shared infrastructure, and you get some even better numbers: the BL465 (equivalent to a DL385 kind of server) ran at 29 Watts/GHz – the best efficiency estimate I got out of the lot – and the BL685c ran around 36 Watts/GHz.
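For what it's worth, the "crude variation" really is just arithmetic: estimated draw divided by the aggregate clock across all the cores in the box. Here's a quick sketch of it – the machine configuration and wattage below are made-up illustrative numbers, not anything out of HP's spec sheets:

```python
def watts_per_ghz(est_watts, sockets, cores_per_socket, ghz):
    """Estimated power draw divided by total GHz across all cores."""
    total_ghz = sockets * cores_per_socket * ghz
    return est_watts / total_ghz

# hypothetical 2-socket, dual-core box at 2.6 GHz drawing an estimated 437 W:
print(round(watts_per_ghz(437, 2, 2, 2.6)))  # ~42 W/GHz
```

The obvious hasty generalization baked in: a GHz of clock on one microarchitecture isn't worth a GHz on another, which is exactly the gap a real benchmark like SPEC's closes by measuring work done per watt instead.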
I’ve borrowed a power meter – so now it’s on to measuring some actual consumption numbers…
It’s been forever… well – forever computer time anyway… since I upgraded this blog. It was high time to get things in place, so I grabbed the latest goodies and upgraded the whole kit.
Don’t really notice a difference? Good. If you do, let me know – okay?
Michael Sparks has been a busy guy – obviously continuing to think and fiddle in the concurrent python space quite a bit. He posted some thoughts and some initial API concepts for software transactional memory on his blog yesterday, and then today kicked out a message to the Kamaelia list with those concepts at least roughly cemented into place with a stand-alone Axon implementation.
My knowledge of transactional memory is, frankly, pretty limited. I listened, rapt, to Simon Peyton-Jones talk about it at OSCON 2007 (slides available too). He's a great speaker, and I got the gist of the talk, but I wasn't ready (and still haven't gathered myself) to make a leap into some of the new kid languages (Haskell, in particular) to try all this stuff out.
So on a brief reading of the API and description, it looks like Michael has implemented the critter. So far, all the fiddling I've done has taken advantage of just a single core – in short, the little tasklets aren't running concurrently – they're explicitly sharing back and forth. Michael's clearly thought further down the road there and determined that it would be nice to get all 8 cores of an Octo-Mac into the action. (Well, I expect that's not QUITE what he thought, but that was my immediate translation of it)
Mix Axon components across tasklets and threads, and you can do it – Axon's got all the components to drive that thing hard. And as soon as you do, you lose the safety net that I counted on earlier and need something exactly like STM.
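I haven't dug into Michael's actual implementation yet, so the names below (Store, usevar, set) are just my guesses at the shape of the thing – a toy sketch of the core idea as I understand it: reads hand you a value plus a version, and a commit gets refused if somebody else committed in between. Not his API, just the concept:

```python
import threading

class ConcurrentUpdate(Exception):
    """Raised when a commit is based on a stale version of the value."""
    pass

class Store:
    """Toy versioned store sketching the STM check-on-commit idea."""
    def __init__(self):
        self.lock = threading.Lock()
        self.data = {}  # name -> (value, version)

    def usevar(self, name, default=None):
        """Read a value; returns (value, version) for a later commit."""
        with self.lock:
            return self.data.get(name, (default, 0))

    def set(self, name, value, version):
        """Commit a new value, but only if nobody committed in between."""
        with self.lock:
            _, current = self.data.get(name, (None, 0))
            if version != current:
                raise ConcurrentUpdate(name)
            self.data[name] = (value, current + 1)

store = Store()
value, ver = store.usevar("count", 0)
store.set("count", value + 1, ver)   # clean commit: version moves to 1
try:
    store.set("count", 99, ver)      # same stale version again -> refused
except ConcurrentUpdate:
    print("stale update rejected")
```

The nice property: the threads never block each other on the data itself – a loser of the race just re-reads and retries, which is exactly the safety net you want once the tasklets really are running on separate cores.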
I don't normally post much about work or the work I do (at least anymore… heh). But after today's "off-hours" upgrade, I'm reminded that Atlassian is a hell of a dev crew, and deserves some serious kudos. In the process, I'd like to take a back-handed (and front-handed) swipe at your traditional "enterprise software" – because it mostly sucks to manage.
First – the kudos: I upgraded an instance of JIRA Enterprise today. It had languished at the office for several years – the big compliment there being that a stock install from nearly 3 years ago worked just fine across a whole big span of time with nobody really paying all that much attention to it. So today I took it up to the latest version, including the latest version of the Perforce source control plugin. Sweet. The directions were incredibly straightforward and a process that I feared would take 8+ hours was done in roughly 30 minutes. In fact, I spent far more time dealing with OS upgrades and the inevitable thousand patches than I actually did upgrading JIRA.
In the process I also seamlessly switched database engine backends – it all just worked.
No other enterprise software that I’ve dealt with has this level of maturity of software upgrades and intelligent defaults. The fact that a stock instance grew from 30 users to over 500 without anyone really noticing is really freakin’ amazing. In addition to the really effective software development and defaults choices, the way it handles itself for installation, migration, and backup is simply wonderful. In one step you have an entire cross-database encapsulation of all the data you could use – and that very process has enough hooks in it that a new instance will recognize an old version and automatically bring it up to speed with a current internal schema.
Now take all that coolness, and add on top of it a completely open API for you to extend the system any way you like. If you look at most other enterprise software, the APIs are often not documented, or simply not available at all.
I didn’t want to particularly work today, but Atlassian made it pretty darn OK with their software.
Rogue Sheep hosted a great holiday party tonight, and Mike Lee definitely kicked our butts for best dressed:
Okay, yeah – it’s a crappy iPhone photo – but you get the drift.
Rob at Good Services Plumbing has resolved Lake Garfield into a few puddles – thank god. Longer term, I think we're looking at a sump near the back door where most of the inflow comes from. There's also the option of "2 1/2 days of Jose" – which would mean trenching a route through the middle of the basement to set some drain pipes.
The sump has the benefit of being a tad quicker, maybe a little cheaper and doesn’t involve a jackhammer in our basement for 2+ days. The downside is that if the power goes out – back to Lake Garfield. At least for the “big dump” rain days like I suspect we’ll be seeing more and more frequently into the future.
Rob was fantastic – and this is a brazen plug for those guys. If you’re looking for a plumber to do some work, check out Good Services Plumbing. We’ve used them on quite a number of jobs now, and they’ve always treated us very well.
PS: Karen's decided that she wants to watch the movie The Poseidon Adventure tonight. I damn near choked when she said that.
Three storms converging on Seattle in one weekend – yeah, the expected happened. I just didn’t expect so much of the expected.
Our basement's flooded, and another round of lift, move, drape to get boxes rearranged on things relatively impervious to water has taken up the late morning and afternoon. A plumber is coming by sometime this afternoon to see about a basement drain that is either not working, or working very, very slowly. At this point, the water is flowing in under the back door to the basement (sunken steps), through the basement, out the door to the garage, and from there into the street and away. That means it's as deep as it's going to get – thank god – but still damned annoying. Maybe it's more like "Garfield Creek" rather than "Lake Garfield". A few points in the basement are just over 1″ deep – but mostly it's finding its own way out. We'll have some impressive standing puddles when this is done, but we've worked the broom-sweep-the-water sequence before – so I think we can knock it down pretty quick. It's just damn wearing.
The really surprising thing this time is that it's coming from all corners of the basement. Usually it's just the back door area – a place where a few sunken steps lead to the basement. There's a poorly installed french drain that's more like a french well when the water levels get high enough. We've figured that the outflow from most of the houses immediately around us gets directed into a relatively small patch of grassy area – our "back yard".
I poked around in Eventlet this morning – between shopping runs and helping Karen move things around in the basement. I'd hoped to knock out a quick hackysack experiment with that stuff – since it sounded about the same – and see how it did. I over-estimated my ability to quickly grok Eventlet, and its underpinnings in greenlet as well.
Bob Ippolito hacked this stuff together after the same conversations that revolved around how stackless does its magic. That dude is always doing something wild and forward.
What I've finally concluded is that the Eventlet stuff is a nice socket-based processing engine (read: web server thingy) using the greenlets – but way more than I'd need for the basics to pull together the hackysack example. Seems like heading straight down to the greenlets themselves is what I'd want to do to make it – only I'm not fully understanding all the pieces and parts needed to make the hackysack example. I was thinking that Eventlet would have higher level APIs that made it similar to a Stackless Tasklet or a Kamaelia Component. Alas, it doesn't appear so… or I'm just too dense to clue in to it anyway.
Anyway, I’ve poked around this morning, realized I’m a bit lost – and so I’m going to give it a break to see if my rear-brain can catch up with some of the core concepts here.
Oh – here’s the code that doesn’t work as yet:
import random
from greenlet import greenlet

turns = 5

class hackeysacker:
    counter = 0

    def __init__(self, name, circle):
        self.name = name
        self.circle = circle
        circle.append(self)
        # pass the method itself - calling self.goLoop() here was one bug
        self.g = greenlet(self.messageLoop)

    def messageLoop(self, msg):
        while True:
            if msg == 'exit':
                return
            print("%s got hackeysack from %s" % (self.name, msg.name))
            if hackeysacker.counter >= turns:
                return  # returning drops us back into the main greenlet
            hackeysacker.counter += 1
            kickTo = self.circle[random.randint(0, len(self.circle) - 1)]
            while kickTo is self:
                kickTo = self.circle[random.randint(0, len(self.circle) - 1)]
            print("%s kicking hackeysack to %s" % (self.name, kickTo.name))
            msg = kickTo.g.switch(self)  # hand off; resumes when kicked back

def runit(hs=5, ts=5):
    global turns
    hackeysackers = hs
    turns = ts
    hackeysacker.counter = 0
    circle = []
    one = hackeysacker('1', circle)
    for i in range(hackeysackers - 1):
        hackeysacker(str(i + 2), circle)
    one.g.switch(one)  # the first kick starts the circle

if __name__ == '__main__':
    runit()