Reactivated

I’ve been intentionally lying low for the past several months – well, more than several really. Since January.

I decided to leave Nebula, where I was working with some of the most fantastic co-workers you could ever wish for, and stepped out on my own. With the OpenStack Atlanta 2014 summit having just wrapped up, I thought it was probably worth peeking out from hiding a bit.

I’m not actively working on OpenStack right now. I’ve gone back a bit to using clouds and cloud services, rather than just building them: EC2, Azure, GCE, Nebula’s private cloud offering, Rackspace’s cloud, HP’s, etc. All the layers I want to use, and getting back to building on top of those services to provide some really interesting value.

I enjoyed the coverage from this last summit. The project is continuing to truck forward at incredible speed – not without its hiccups of course – but making good progress.

I found myself very frustrated with the structure and progress of the foundation 18 months ago. I’m very glad that something is starting to gel in terms of DefCore and a real effort to standardize the options enough to ensure interoperability. From my earliest involvement with the project, that was the end goal in my head – the “big win” that was possible.

When I started with OpenStack I was with Disney. This morning I was reading a quote from Chris Launey about Disney’s use of OpenStack. It’s great to see that continuing to gel and come to fruition. It’s only with really solid interoperability that we’ll see the goal that I was running with at Disney, and that Chris reiterated in Atul’s review of the summit.

Ryan Lane’s presentation of the User Committee’s OpenStack User Survey results is really bearing the fruit of what the gang started capturing over a year ago. I think OpenStack has a natural internal culture of wanting to push and grow very fast. The “what’s core” argument has dragged on for ages, and the basics still aren’t solid – I think there’s plenty of work left there. I personally want to see solid interoperability, so I can use any provider easily and effectively. APIs and libraries like Fog are making it somewhat possible, but there are still a lot of “continuity” disconnects, and a lot less solid support for tooling to _use_ an OpenStack cloud than I’d like to see.

But hey, like I said – forward motion, continued traction, and continued involvement. It’s looking positive for the project.

Thankfuls…

Yeah, it’s been ages since I posted here. Not sure anyone is even reading here anymore, but if you are, well – you’ll be surprised to see a new entry in the RSS feed, or however else you’ve kept track of this otherwise dormant blog.

It’s the day after Thanksgiving, and Karen and I were talking about all the various decisions we’ve made leading us to today. Living in Seattle, a great little house, doing things we love. Karen described it, stretching back two decades, as generally “erring on the side of adventure”. Moving to Seattle – now 13 years ago – leaving Singingfish/Thompson/AOL, joining Docusign, leaving Docusign, joining up and working for Disney, in turn leaving Disney, etc. Nothing we’ve done has been a sure bet. Lots of them were “pretty out there” in terms of “can it, or will it even work out”.

Probably the strangest part to me is that I tend to think of myself as being risk averse. I’m sure there’s plenty of my family that would smack me upside the head for that. We’ve taken quite a number of flyers, and the sum total of the game has been pretty darned good. We definitely have a lot to be thankful for.

OpenStack docs and tooling in 20 minutes

I’ve gone through the routine several times now, so I decided to make it easy to replicate, to help some friends get started with all the tooling and setup needed to build, review, and contribute to OpenStack Documentation.

I’m a huge fan of CloudEnvy, so I’ve created a public GitHub repository with the envy configuration and setup scripts to set up a VM and completely build out all the existing documentation in roughly 20-25 minutes.

First, we install cloudenvy. It’s a python module, so it’s really easy to install with pip. My recommended installation process:

pip install -U cloudenvy

If you’re working on a Mac laptop (as I do), you may need to use

sudo pip install -U cloudenvy

Once cloudenvy is installed, you need to set up the credentials to your handy-dandy local OpenStack cloud (y’all have one of those, don’t you?). For cloudenvy, you create a file in your home directory named .cloudenvy akin to this:

cloudenvy:
  clouds:
    cloud01:
      os_username: username
      os_password: password
      os_tenant_name: tenant_name
      os_auth_url: http://keystone.example.com:5000/v2.0/

Obviously, put in the proper values for your cloud.

Now you just need to clone the doctools envyfile setup, switch to that directory, and kick off Envy!

git clone https://github.com/heckj/envyfile-openstack-docs.git
cd envyfile-openstack-docs
envy up

20-25 minutes later, you’ll have a virtual machine running with all the tooling installed, the build run through, and the output generated for all the documentation in the openstack-manuals repository. The Envyfile puts all this into your virtual machine at

~/src/openstack-manuals

To get there, you can use the command envy ssh to connect to the machine and do what you need.

For more on the how-to with contributing to OpenStack documentation, check out the wiki page https://wiki.openstack.org/wiki/Documentation/HowTo.

Do photons have mass?

My grandmother in Burlington, IA had this massive house overlooking the Mississippi there. In the windows, she had these ornaments – little glass bulbs with pinwheel-looking things in them that spun and spun and spun in the sunlight streaming through the huge windows overlooking the river.

Years later, in college, I learned that the window trinket was a classic science experiment regarding photons having mass. I saw one on ThinkGeek some time ago, and got one for my house:

Making the keystoneclient python library a little easier to work with

A few weeks prior to the Grizzly OpenStack Design Summit, I was digging around in various python-*client libraries for OpenStack. Glanceclient had just started to use python-keystoneclient to take care of its auth needs, but everyone else was doing it themselves – inertia from having it in the base project from the early days and never refactoring things as clients replicated and split in the Essex release.

Looking at what glanceclient did, and had to do, I got really annoyed and wanted the client to have a much easier-to-use interface. At the same time, I was also digging around trying to allow the keystoneclient CLI to accept and use an override for the endpoint from the command line. It turns out the various machinations to make the original client setup work with a system with two distinct URL endpoints were quite a mess under the covers, and that mess just propagated through to anyone trying to use the library.

We just landed some new code updates to keystoneclient to make it much easier to use. So this little article is intended to be a quick guide to using the python keystoneclient library and some of its new features. While we’re getting v3 API support in place, we’re still very actively using the v2 APIs, so we’ll use v2 API examples throughout.

The first step is just getting a client object established.


>>> from keystoneclient.v2_0 import client
>>> help(client)

We’ve expanded the documentation extensively to make it easier to use the library. The base client is still working from httplib2 – I didn’t rage-change it into the requests library (although it was damned close).

There are a couple of common things that you’ll want to do when initializing the client. The first is to authorize the client with the bootstrapping pieces so you can use it to configure keystone. In general, I’m sort of expecting this to be done mostly from the CLI, but you can also do it from the python code directly. To use this setup, you’ll need to initialize the client with two pieces of data:

  • token
  • endpoint

Token is what you’ll have configured in your keystone.conf file under admin_token, and endpoint is the URL to your keystone service. If you were using devstack, it would be http://localhost:35357/v2.0.
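
In keystone.conf, that’s the admin_token option in the [DEFAULT] section – a quick sketch, using the same made-up token as the example below:

[DEFAULT]
admin_token = 9fc31e32f61e78f114a40999fbf594c2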

A bit of example code (making up the admin_token):

from keystoneclient.v2_0 import client

adminclient = client.Client(token='9fc31e32f61e78f114a40999fbf594c2',
                            endpoint='http://localhost:35357/v2.0')

Now at this point, you’ll have an instance of the client, and can start interacting with all the internal structures in keystone – adminclient.tenants.list(), for example.
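
For example, bootstrapping a tenant and a user through the admin client looks roughly like this (a minimal sketch – the tenant name, user name, and password are all made up):

from keystoneclient.v2_0 import client

# bootstrap with the admin_token/endpoint shortcut described above
adminclient = client.Client(token='9fc31e32f61e78f114a40999fbf594c2',
                            endpoint='http://localhost:35357/v2.0')

# create a tenant (project), then a user belonging to it
tenant = adminclient.tenants.create(tenant_name='demo-project',
                                    description='an example project',
                                    enabled=True)
user = adminclient.users.create(name='demo',
                                password='demo-secret',
                                tenant_id=tenant.id)

# verify the new tenant shows up
print adminclient.tenants.list()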

You may have spotted the authenticate() method on the client. If you’re using the token/endpoint setup, you do not want to call this method. When you’re using the admin_token setup, you don’t have a full authorization token as retrieved from keystone; you’re short-cutting the system. This mode is really only intended to be used to bootstrap in projects, users, etc. Once you’ve done that, you’re better off using the username/password setup with the client.

To do that, you minimally need to know the username, the password, and the “public” endpoint of Keystone. With the v2 API, the public and administrative endpoints are separate. With devstack, the example public API endpoint is http://localhost:5000/v2.0.

A bit of an example:

from keystoneclient.v2_0 import client
kc = client.Client(username='heckj', password='e2112EFFd3ff',
                   auth_url='http://localhost:5000/v2.0')

At this point, the client has been initialized, and by default it will immediately attempt to authenticate() to the endpoint, so it already has some authorization data. With the updated keystoneclient library, this authorization info is stashed into an attribute, auth_ref. You can check out the code in more detail – the class is keystoneclient.access.AccessInfo, and it represents the token we retrieved after calling authenticate() against keystone.

With only a username and password provided, the token is really only useful for about two things – getting a list of tenants this user can authorize against (to get a ‘scoped’ token, where the token represents authorization to a project), and then retrieving that scoped token.

>>> kc.username
'heckj'
>>> kc.auth_ref
{u'token': {u'expires': u'2012-11-12T23:28:58Z', u'id': u'97913f8839634946afab2897ac19908d'}, u'serviceCatalog': {}, u'user': {u'username': u'heckj', u'roles_links': [], u'id': u'c8d112a0932a454097dfba0f3b598bdc', u'roles': [], u'name': u'heckj'}}
>>> kc.auth_ref.scoped
False
>>> kc.tenants.list()
[<Tenant {u'id': u'7dbf826d086c4580a28cf860a6d13046', u'enabled': True, u'description': u'', u'name': u'heckj-project'}>]
>>> kc.authenticate(tenant_name='heckj-project')
True
>>> kc.auth_ref.scoped
True
>>> kc.auth_ref
{u'token': {u'expires': u'2012-11-12T23:37:10Z', u'id': u'6d811d7c39034813b6cab2ad083cdf3e', u'tenant': {u'id': u'7dbf826d086c4580a28cf860a6d13046', u'enabled': True, u'description': u'', u'name': u'heckj-project'}}, u'serviceCatalog': [{u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046', u'region': u'RegionOne', u'internalURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046', u'publicURL': u'http://localhost:8776/v1/7dbf826d086c4580a28cf860a6d13046'}], u'type': u'volume', u'name': u'Volume Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:9292/v1', u'region': u'RegionOne', u'internalURL': u'http://localhost:9292/v1', u'publicURL': u'http://localhost:9292/v1'}], u'type': u'image', u'name': u'Image Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046', u'region': u'RegionOne', u'internalURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046', u'publicURL': u'http://localhost:8774/v2/7dbf826d086c4580a28cf860a6d13046'}], u'type': u'compute', u'name': u'Compute Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:8773/services/Admin', u'region': u'RegionOne', u'internalURL': u'http://localhost:8773/services/Cloud', u'publicURL': u'http://localhost:8773/services/Cloud'}], u'type': u'ec2', u'name': u'EC2 Service'}, {u'endpoints_links': [], u'endpoints': [{u'adminURL': u'http://localhost:35357/v2.0', u'region': u'RegionOne', u'internalURL': u'http://localhost:5000/v2.0', u'publicURL': u'http://localhost:5000/v2.0'}], u'type': u'identity', u'name': u'Identity Service'}], u'user': {u'username': u'heckj', u'roles_links': [], u'id': u'c8d112a0932a454097dfba0f3b598bdc', u'roles': [{u'name': u'Member'}], u'name': u'heckj'}, u'metadata': {u'is_admin': 0, u'roles': [u'08ccc339c0074a548104b9050bdf9492']}}

You might have noticed that you can now call authenticate() on the client and just pass in values that are missing from previous authenticate() calls, or you can switch them out entirely. You can change the username, password, project, etc – anything that you’d otherwise normally initialize with the client to do what you need.
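
For example (a sketch – the user, password, and project here are hypothetical):

kc.authenticate(username='alice', password='alice-secret',
                tenant_name='alice-project')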

OpenStack Keystone plans for the Grizzly release

I posted this information to the OpenStack-dev mailing list, but thought it would be worthwhile as a blog post as well.

Here is an overview of what’s looking to happen in Keystone over the Grizzly release cycle.

From the summit, we had the state of the project slides, which might be of interest: http://www.slideshare.net/ccjoe/oct-2012-state-of-project-keystone

Since then, we’ve been working on fleshing out more details around those initial discussions, and we’ve been correlating who’s working on what to get an overview of what’s coming up for Keystone. If you’re into reading raw notes, take a look at https://etherpad.openstack.org/keystone-grizzly-plans.

For those looking for more of a tl;dr:

grizzly-1 plans:
* merging in V3 API work – “tech preview”
https://blueprints.launchpad.net/keystone/+spec/implement-v3-core-api

* move auth_token middleware to keystoneclient repo
https://blueprints.launchpad.net/keystone/+spec/authtoken-to-keystoneclient-repo

* AD LDAP extensions
https://blueprints.launchpad.net/keystone/+spec/ad-ldap-identity-backend

* enabling policy & RBAC access for V3 API
https://blueprints.launchpad.net/keystone/+spec/rbac-keystone-api

grizzly-2 plans:
* pre-authenticated token
https://blueprints.launchpad.net/keystone/+spec/pre-auth

* pluggable authentication handlers
https://blueprints.launchpad.net/keystone/+spec/pluggable-identity-authentication-handlers

* consolidated policy documentation/recommendations
https://blueprints.launchpad.net/keystone/+spec/document-deployment-suggestions-policy

* PKI future work
https://blueprints.launchpad.net/keystone/+spec/delegation
– starting into delegation, signing of tokens
– annotations on signing for authorization

grizzly-3 plans:
* delegation
https://blueprints.launchpad.net/keystone/+spec/delegation

* multifactor authN
https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn

Much of the work and the desires around delegation have yet to be fully defined and nailed down, and rely on a lot of additions to make PKI-based tokens a stable, solid, default mechanism. I’m sure there will be some redirection once we get a few weeks down the road and see what’s happening with the V3 API rollout and the PKI token extensions to support delegation, pre-auth, and so forth.

CloudEnvy – Vagrant for OpenStack

I work on OpenStack, I work in OpenStack. Seems like everyone I know that’s been working on, in, or with OpenStack has their own little script to “set up their environment” – meaning getting a VM spun up with their dotfiles, tools, etc, all configured and ready to roll. I had one myself for quite a while, and recently I threw it away.

CloudEnvy (https://github.com/cloudenvy/cloudenvy) is what started that cascade. Brian Waldon started it some time ago as a script that emulated the ease of spinning up VMs with Vagrant, except wrapped over the OpenStack clients. I always wanted to like Vagrant, but it never really clicked for me – I think mostly because I was in never-ending kernel panic hell with VirtualBox. CloudEnvy is a different story.

I think the most interesting illustration of CloudEnvy is using it to spin up an instance in a cloud, and then run devstack in that instance.

CloudEnvy relies on two files to get started – an Envyfile that’s specific to the project (DevStack in this case), and your personal cloud configuration (~/.cloudenvy).

Here’s my .cloudenvy file (with the hostname and password redacted):
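
It follows the same format shown in the docs tooling write-up above – something akin to this, with placeholder values:

cloudenvy:
  clouds:
    cloud01:
      os_username: heckj
      os_password: REDACTED
      os_tenant_name: heckj-project
      os_auth_url: http://keystone.example.com:5000/v2.0/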

And the Envyfile I use with devstack:
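
Roughly, it looks like this (a sketch – the exact keys vary across cloudenvy versions, but image_name and the provision script hookup match what’s described below):

project_config:
  name: devstack
  image_name: precise-server-cloudimg-amd64
  remote_user: ubuntu
  flavor_name: m1.large
  provision_scripts:
    - cloudenvy-setup.sh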

You’ll notice that the Envyfile references a script I named “cloudenvy-setup.sh” – this is the basic script that cloudenvy uploads to the instance it creates to automatically provision things. You could easily replace this with Puppet, Chef, or whatever it is you like to use to configure VMs in your world.

Here’s what I’m doing:
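
The real script lives in the gist below; the core of it is just a stock devstack bootstrap, akin to this sketch:

#!/bin/bash
# sketch of a devstack provision script (the real cloudenvy-setup.sh is in the gist)
set -e
sudo apt-get update
sudo apt-get install -y git
git clone https://github.com/openstack-dev/devstack.git
cd devstack
# devstack reads its settings from a localrc file; add one here if you need overrides
./stack.sh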

(all three of these files are in the gist https://gist.github.com/3969250)

The Envyfile also refers to an image_name. I’m using a stock UEC precise image that I uploaded to our instance of OpenStack. Pretty shortly, CloudEnvy will be replacing “image_name” with just “image”, and they recommend that you use an image ID (guaranteed uniqueness) over a name. For my immediate use, the name works pretty well.

Once this is all in place:


envy up

Creates the instance, assigns it a floating IP address, SSH’s into the instance, uploads the provision script, and starts cranking on the provisioning. 763 seconds later, a fully operational devstack in an instance running on OpenStack.


envy ssh

Gets you in, lets you do what you want.


envy list

Shows you the instance(s) you have running.

There’s more, a lot more, but hopefully this is sufficient to get you started.

OpenStack Design Summit – wrap-up and links

This fall’s OpenStack design summit in San Diego is wrapped up, and we’re all back to being distributed across the globe. I was pleased with the summit, and pleased to see the project I’m helping coordinate (Keystone) move forward with a lot of ideas, growing interest in contributions, and concrete feedback from a wide mix of folks.

The design sessions are definitely less about actually hacking code than they were a year ago, offset, though, by the increasing diversity of backgrounds and interests participating in the sessions. The core team developers joined me throughout Thursday and drove most of the discussions, with fantastic input from David Chadwick, Khaja, and Ryan Lane. There were way more people in the sessions than that, but to me these three represent a set of fresh inputs from folks with a deep academic background, previous experience building identity systems, and active operator points of view. They and the previous contributors provided tremendous feedback, asked great questions, and set the stage for a lot of interesting ideas.

This year all the project technical leads gave a “state of the project” overview, but we did that on Tuesday – so like John Griffith (the project technical lead for cinder), I was doing the “state of the project” routine prior to getting the feedback and doing the brainstorming in the sessions. The slides from that presentation are online at http://www.slideshare.net/ccjoe/oct-2012-state-of-project-keystone if you’re interested. The coordinators video-taped those segments, and as I understand it, they should be appearing on the OpenStack channel on YouTube in the next couple of days.

There was also a very active session led by Gabriel Hurley seeking to drive more continuity into the OpenStack APIs, and a matching session by Doug Hellmann and Dean Troyer for the OpenStack CLIs. The continued focus on bringing in new ideas while keeping the interfaces consistent and clear is a great sign for the project overall, and I was pleased to see a large number of like-minded folks wanting to continue to move things forward in those areas.

This was also the first summit under the auspices of the OpenStack Foundation – they all met for an extended period of time early in the conference, and the Technical Committee managed to pull off a first all-in-person meeting over dinner and scattered conversation Tuesday evening.

And not at all related to any core projects or overall effort: check out the very creative riff on the OpenStack theme, Dope’n’Stack (video on YouTube as well). Gabriel and Erik were working their tails off prior to and during the conference to pull this off, culminating in a great presentation Wednesday evening at Piston’s party. (I was disappointed that Gabriel lost the mohawk for the summit, but he said he was sick of wearing it after three weeks.)

OpenStack Folsom RC1

It’s been a busy couple of weeks, and I expect the next several to be busy as well, leading up to the next OpenStack Design Summit (Oct 15th-18th in San Diego, CA).

We rolled RC1 for the Keystone Folsom release out the door this past week, and at this point I think all the projects have an initial release candidate out the door. The original release date is 5 days away, and it’s looking pretty good for hitting it. If you want a quick overview of what’s coming in this release, I’d recommend a look at Emilien Macchi’s Folsom overview, which is a pretty nice high-level summary.

While we’ve been busy nailing down bugs and wrapping this release together, the OpenStack Foundation has finally come into form. As the Keystone project technical lead, I’m on the OpenStack technical committee – picture and title, but I haven’t written a bio yet. (Sorry, Lauren.) I find it really quite difficult to write a bio for myself. Regardless, it’s great to finally see this moving into a foundation external to any single corporate interest. That’s not to say it’s all in the land of milk and cookies; there are just a lot of people, all with slightly different interests, jumping into the pool to push this little rowboat in different directions.

One thing that we did early, out of excellent foresight, was keeping the direction of the core projects democratically oriented around the contributors to those projects. The people who show up to write, update, and support the code are the ones that are ultimately making the decisions on what features get implemented, and when. Lots of folks talk about what could be, but it’s the contributors that make it happen.

A perfect example of this is Adam Young, who in the past 6 months drove the implementation of PKI-based tokens in Keystone. I’m not even sure I’ve met Adam face to face, but I definitely know him – he’s been a fantastic contributor, and in the Folsom development cycle he was promoted to the core Keystone team.
