Saturday 29 March 2008

going google part 2 - imapsync

I've already mentioned that I'm going google.

The largest part of this is transferring my mail from my IMAP server up to gmail. That kept me up until 5 this morning, and is about half done. And let me tell you, it's been a pain.

I started in Thunderbird. I knew it would take some time, but I'd got all weekend; shift a thousand at a time, done before you know it. Nuh-uh. Thunderbird would only shift a couple of hundred at a time. Any more, and it grinds to a halt.

So I did a little googling, and found imapsync, which as I write this is doing the job. It's a perl program, with lots of command-line switches. Here's mine (sanitised)..


imapsync --host1 10.1.2.3 --user1 myusername --password1 xxxxxx \
--prefix1 /home/sites/site1/users/myusername/imapmail/ --authmech1 LOGIN \
--host2 imap.gmail.com --user2 my.googleid --password2 xxxxxx \
--ssl2 --skipsize --noauthmd5 --subscribed --delete --expunge \
--exclude 'SPAM|HAM|Drafts' --split1 200 --split2 200 \
--nofoldersizes --regextrans2 's/[a-z]\///' --fast --syncinternaldates

  • --prefix1 /home/sites/site1/users/myusername/imapmail/ is to strip off the full path to my IMAP folder tree.
  • --regextrans2 s/[a-z]\/// is because I've got my IMAP organised like a/alan, a/andy, d/dave, etc. It whips the initial letter and forward slash out of the gmail label.
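
If you want to sanity-check what --regextrans2 will do to your folder names before letting it loose on six years of mail, the substitution is easy to mimic. A rough stand-in (Python purely for illustration; imapsync runs the perl substitution itself, and to_label is my name, not imapsync's):

```python
import re

# Mimic imapsync's --regextrans2 s/[a-z]\/// rule: strip the first
# lowercase-letter-and-slash from the folder name, which is what
# turns 'a/alan' into the gmail label 'alan'.
def to_label(folder):
    return re.sub(r'[a-z]/', '', folder, count=1)

for folder in ['a/alan', 'a/andy', 'd/dave', 'INBOX']:
    print(folder, '->', to_label(folder))
```

Worth noting the original regex isn't anchored to the start of the name, so a stray lowercase-letter-and-slash deeper in a folder name would also get stripped. Mine don't have any, so it doesn't bite.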

The command line's a bit gnarly, but it's getting the job done. So big up imapsync.

Friday 28 March 2008

networkshop 2008 blog aggregator goes live!

JaNET hold an event every year, called networkshop, and it's a three day gathering of networking (fibre and routers, not facebook and twitter) professionals from UK academe.

We get to talk dirty tech, and swap stories, and socialise with our peers. And betters. Mostly betters.

This year, JaNET training are having a crack at supporting the conference with what I'm interpreting as a VLE sort of approach. It's currently private to networkshop attendees, and I'm sure there's plenty of room to debate whether that level of privacy is needed. We're a pretty paranoid bunch. Or 'reasonably careful'.

Not the point, though. The point is, they've built a blog aggregator in yahoo pipes. So for anyone who doubted it, I present this as evidence. JaNET training are getting with the program. They're drinking the kool-aid.

going google

I'm gonna take the red pill. As of now, I'm shifting 6 years' worth of email from my cobalt raq in the back bedroom up to gmail. I've got to the point where I trust google's ability to look after my mail better than I trust my own.

What's prompted me is the eeepc. It's got a tiny little solid state disk (I know it's not actually a disk, so sue me), and thunderbird's indexes would take a good chunk of that. So if I want to access all my email from the eeepc, gmail's the only way to fly.

We've also started looking at google docs, and I've been using google calendar for a while. Not to mention google talk.

So it's time to go the whole hog. Wish me luck.

eeepc!

Delivered yesterday, by a nice man from DHL, a spanky new little eeepc. First impressions so far.

This is, without a doubt, the most integrated Linux box I've ever used. I opened it up, went through the little quickstart guide, and was up and running in five minutes. Startup time is good, less than 30 seconds from power up to being able to use it. Nice.

Nothing to write home about yet. I've only browsed and run IM (for twitter, of course), though I have run aptitude update and aptitude upgrade, naturally.

It's a pretty happy circumstance when linux on a laptop just works without any, yknow, WORK. In my experience, at least.

Plans:

  • See about getting it to use the blackberry as a modem.
  • See if I can get it running as a VPN client. This will make the difference between a nice little toy and a machine I can realistically use for work.
Anyway, it's a sweet little thing.

On another note entirely - me mother was round most of yesterday, and by the time she left I had no kitchen sink. There's a big hole where it used to be. H and Tornado Boy are coming home tomorrow afternoon. The man at the builder's merchant has apparently said it will arrive today. So, assuming all goes to plan, we'll have a new kitchen installed by the time they get home. D'you see the potential problem there?

Wednesday 26 March 2008

Twitter as platform

Here I really want to just jot thoughts.

What excites me about twitter, and yahoo pipes, and jabber, and the way this stuff can be wrangled into working together, is that it starts to bring the network to life. In a science-fiction kinda way.

One of sci-fi's signature technologies is the 'personal terminal'. It's a small thing, with perhaps some intelligence, but its key feature is a permanent, on-demand connection to 'the network'. Iain M Banks in his Culture novels has terminals as a part of the hyper-powerful post-AI thang. Your terminal knows where you are, and you can contact society through it, both machine and organic.

Wonderful sci-fi stuff. IMHO. But the truth is it's not far off. I've a phone with GPS, it runs Java applications and can communicate over jabber with 'the network'. I can listen to my online friends' public voices. I can, in theory at least, access all sorts of user generated content from my phone, based on my location, which my phone knows.

Next on the list is an exploration of location-based status reporting - the example is traffic conditions.

  • I'm stuck on the motorway - I twitter '@uktrafbot M5J3 Southbound stopped.' Or better, software running on my phone does.
  • You're planning a journey. Your route planning software checks along the route and looks for (a) recent traffic twits along the route if you're leaving now, or (b) patterns of twits during the time you'll be travelling, if you're leaving tomorrow morning. And lets you know if your route might run into problems.
  • You're travelling. You're heading along the M5. Your phone knows where you are, and periodically checks for recent twits along your route, alerting you to problems ahead.
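
To make the bot end concrete: a status like '@uktrafbot M5J3 Southbound stopped.' is trivially machine-readable. A sketch of the parsing (the tweet format, the field names and parse_traffic are all my invention, not an existing service):

```python
import re

# Hypothetical tweet format: @uktrafbot <road><junction> <direction> <status>
TRAFFIC = re.compile(r'@uktrafbot\s+(M\d+)J(\d+)\s+(\w+)\s+(.+)')

def parse_traffic(tweet):
    m = TRAFFIC.match(tweet)
    if not m:
        return None  # not a traffic report
    road, junction, direction, status = m.groups()
    return {'road': road, 'junction': int(junction),
            'direction': direction, 'status': status.rstrip('.')}

print(parse_traffic('@uktrafbot M5J3 Southbound stopped.'))
```

A route planner could then filter reports by road and junction, and by the tweet's timestamp for the pattern-matching case.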

I'm looking for a collaborator, with mobile java dev skills.

brummie twits part 3 - mawhin_bot1 settles down.

OK. That'll do for now. @mawhin_bot1 is pretty well complete in its first incarnation.

I've posted before, here and here, but to save the clicking I'll describe it again.

It's a twitter account, inspired by @peteashton and @BhamPostJoanna, which follows people who claim their location to be in the West Midlands.

It sits and watches some of the public twitter feed, and when it spots a tweet from a midlander, it tweets about it, and starts following the twitterer.
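
The 'spotting a midlander' part is nothing clever, just a case-insensitive match of the claimed location against a list of places. Roughly this (the place list below is my own cut-down illustration, not the bot's actual one):

```python
# A cut-down place list for illustration; the real bot's list is longer.
WEST_MIDLANDS = ['birmingham', 'wolverhampton', 'coventry', 'solihull',
                 'walsall', 'dudley', 'west midlands']

def is_midlander(location):
    # Claim-based: does the profile's free-text location mention
    # anywhere in the conurbation?
    loc = (location or '').lower()
    return any(place in loc for place in WEST_MIDLANDS)

print(is_midlander('Birmingham, UK'))
print(is_midlander('London'))
```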

A couple of issues have come up.

  • @kevin_rapley raised the issue I had feared, that not everyone located in Birmingham claims brummiehood. Some are, sad to say, even offended. And we'd best not even mention the Black Country. So announcements are a lot more cagey these days.
  • Do you want to be followed like this? No? Block @mawhin_bot1. It won't try again. Unless it's broken, in which case, let me know and I'll (1) fix it, and (2) hard-code it so it doesn't ever follow you again.
So far, mawhin_bot1 has collected 47 twitterers, ( actually 46 plus @bbcfooty, who also ain't real ). There's a bunch of background work still going on, and I'm hoping to extend this to handle several other cities/conurbations.

On the subject of locality, I think, I suppose, at the public transport level. If you're within a city's regular public transport network, that's a level of locality that's interesting. I think.

I'm quite excited about twitter as an application delivery platform. Especially with location thrown into the mix. Onward and upward!

Sunday 23 March 2008

Brummie Twits part two

Well, that was a start. But it's a bit broke, and doesn't seem to update in a bloglines feed or anywhere.

On to mark 2. This time it's a twitterbot, name of @mawhin_bot1, whom I have chained to the computer and is searching for brummies. ( If by brummies you're prepared to accept anyone in the west midlands conurbation. Call us what you will ).

It's doing a couple of things I thought were interesting.

  • It's using XMPP. I can get a pretty decent feed of 'tracked' terms over XMPP, something I just can't do using twitter's REST API.
  • It's using the twittervision API for determining location.

A coupla things to do:

  • Add support for @dangerday, a twitter version of fireeagle.
  • perhaps add support for using 'whois' over IM. But that's gonna be harder. And I don't know that it will add any further data than twittervision.
  • Build reflection, so @mawhin_bot1 reflects everything said by those it follows. Or possibly this could be another twitter user. Thoughts on this would be welcome.

Friday 21 March 2008

Brummie Twits

@peteashton and @BhamPostJoanna wondered "Is it possible to get a feed for twitters from a specific city?".

@aeioux pointed out http://twittermap.com

Twittermap, while it relies on twitterers twittering their location, gives results according to where twitterers are / were recently.

This yahoo pipe instead searches a twitterer's friends (any twitterer whose friends you can view) for the location in their profile. So the result is not 'live', but reflects where the friends are based.

Here's a first hack using yahoo pipes. The page parameter is for when there's more than 100 friends to deal with.

Sunday 16 March 2008

IT Support in a large environment.

I wanted to jot down a few guiding principles that have worked (I think) well for me. They work in the environment I'm in (large FE college, very few 'special' users).

  • Provide simple services. The question's not "what'll it do?", but "how well can I support it?". If it can do everything brilliantly, but it's broken often enough that it's not used, you lost. There are exceptions. Keep them limited.
  • Monitor everything that matters up the wazoo. Don't wait for your users to tell you it's broke. They won't. They'd rather carve it on a siberian rock than tell you about it. And by the time someone does overcome their disdain for you and let you know there's a problem, it's been going on for ages, and you lost. There are exceptions. They are your friends.
  • Recognise, and hammer into your colleagues when they forget, that your users, by and large, don't care about computers. They're not interested. They use the computer 'cos they've been told to. For work at least. And all they want to do is their job. And this is as it should be. When you wash your face, should you care about the details of water supply? No. All you want is water that's not brown. If you expect users to be interested in anything outside their job, you lost.
  • Automate everything that's feasible. Once a script is right, it's right, and it will be tomorrow. If you're relying on users or tech support people to do task J the same today as they did yesterday, you lost.
  • Document your recovery procedures. The last time you want to be trying to think is when it's all gone to hell and the phone's on fire. That's when you want to be following instructions blindly. If you're having to think what to do under pressure, you lost. I lose pretty often here. But it's worth trying.

Using nagios to monitor print configuration

Now this isn't a copy of a previous post, which was about monitoring print queues. This is about monitoring our quite complex printing configuration system.

Again, a bit of background. We keep as much configuration info as we can in an LDAP directory. And we install printers on our windows desktops ( about 1,200 spread over several sites ) during either the startup scripts (for locally attached printers) or the login script (for network printers). We use the concept of a 'nearest printer', assigned to the workstation dependent on physical location.

So, sticking to network printers, we have:

  • A CUPS queue per printer, served by Samba to windows desktops.
  • A 'nearest printer' attribute in LDAP, attached to the workstation entry. This 'points to'..
  • A printer entry per printer, with an 'installcommand' attribute, which gives an appropriate command line to install that printer on a workstation. This gets run during the login script.
Perhaps it would be clearer to describe how a workstation gets a printer. I'm focusing on network printing, so this happens in the login script.
  • Look up my (the workstation's) LDAP entry.
  • From my LDAP entry, get the nearest printers.
  • For each nearestprinter, get the installcommand from the printer's LDAP entry.
  • And run it.
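
In code terms that's just two lookups and a dereference. A sketch with a toy in-memory dict standing in for the LDAP directory (the DNs and queue name here are made up; the con2prt command shape matches what we actually store):

```python
# Toy stand-in for the LDAP directory described above.
directory = {
    'cn=ws101,ou=hosts,dc=example,dc=com': {
        'nearestprinter': ['cn=lib_laser,ou=hosts,dc=example,dc=com'],
    },
    'cn=lib_laser,ou=hosts,dc=example,dc=com': {
        'installcommand': [r'con2prt /cd \\printserver\lib_laser'],
    },
}

def printer_install_commands(workstation_dn):
    # 1. look up the workstation's entry, 2. follow each nearestprinter
    # DN, 3. collect that printer's installcommand ready to run.
    entry = directory.get(workstation_dn, {})
    commands = []
    for printer_dn in entry.get('nearestprinter', []):
        printer = directory.get(printer_dn, {})
        commands.extend(printer.get('installcommand', []))
    return commands

print(printer_install_commands('cn=ws101,ou=hosts,dc=example,dc=com'))
```

The real thing does the same dance with ldapsearch against the directory, and the login script runs whatever installcommands come back.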
Now we like this. We can have several printers associated with one workstation, and we could (though we don't) associate nearestprinters with user accounts as well as workstations. It's real easy to change a workstation's printer(s), and users need know nothing about it. When it works, it just works.

But there's no referential integrity going on here. We can have orphans anywhere. A CUPS queue with no install commands. An installcommand referring to a non-existent CUPS queue. A workstation with a nearestprinter that doesn't exist. etc. Basically, config rot, caused by human failure to attend to detail.

And here we get to the point of it all. We have a nagios check called 'check_print_config', which checks all this, and creates a warning state if something's out of whack. It's posted below. As with most code posted here, it's finished to the point where it works. It's not great code. It does, I'd posit, do something interesting.


#!/usr/bin/perl -w
use strict;

my @nagiosCupsQueues;
my @nearestPrinterQueues;
my @installCommandQueues;
my @output = ();

# print "Getting monitored queues from nagios...\n";
@nagiosCupsQueues = (`grep check_cups_queue /etc/nagios2/conf.d/allPrintQueues.cfg | cut -f2 -d'!'`);
chop @nagiosCupsQueues;

# print "Getting installCommand queues from LDAP...\n";
@installCommandQueues = (`ldapsearch -LLL -x -b "ou=hosts,dc=example,dc=com" '(installcommand=*con2prt*)' installcommand | grep '/cd ' | grep -iv idcard | grep -iv tmu220 | grep -iv null | sort | uniq | cut -f4 -d" " | cut -f4 -d"\\\\"`);
chop @installCommandQueues;

# print "Getting nearestprinter queues from LDAP...\n";
@nearestPrinterQueues = (`ldapsearch -LLL -x -b "ou=hosts,dc=example,dc=com" '(&(objectclass=computer)(nearestprinter=*))' nearestprinter | grep nearestprinter | grep -iv lpt | grep -iv archicad | grep -iv tmu220 | grep -iv idcard | grep -iv null | sort | uniq | cut -f1 -d"," | cut -f2 -d"="`);
chop @nearestPrinterQueues;

foreach my $icq ( @installCommandQueues ) {
next if $icq =~ /^$/;
# \Q...\E so queue names containing regex metacharacters can't break the match
push(@output, "ICQ: $icq ") if ! grep(/^\Q$icq\E$/i, @nagiosCupsQueues);
}
foreach my $npq ( @nearestPrinterQueues ) {
next if $npq =~ /^$/;
my $npqNotInCups = 0;
my $npqNoInstallCommand = 0;
$npqNotInCups = 1 if ! grep(/^\Q$npq\E$/i, @nagiosCupsQueues);
$npqNoInstallCommand = 1 if ! grep(/^\Q$npq\E$/i, @installCommandQueues);
# push(@output, "NPQ:$npq:") if ( $npqNoInstallCommand && $npqNotInCups );
my @duffClients = `ldapsearch -LLL -x -b "ou=hosts,dc=example,dc=com" "nearestprinter=cn=$npq,ou=hosts,dc=example,dc=com" dn | grep dn: | cut -f1 -d"," | cut -f2 -d"="`;
chop @duffClients;
push(@output, "NPQ:$npq: " . join(",", @duffClients) . " ") if ( $npqNoInstallCommand && $npqNotInCups );
}

#print Dumper([ \@output, ]);
if ( @output > 0 ) {
print "WARNING: " . join(" ",@output) . "\n";
exit 1;
} else {
print "OK\n";
exit 0;
}

Friday 14 March 2008

Automating nagios configurations.

At the last count, we run something like 140 print queues, and as offices move, and printers get replaced, and 'stuff changes', queues are created and deleted and renamed. This post is about how I've addressed ensuring that nagios is monitoring all our queues, and minimising the opportunity for operator error.

A little background. We use CUPS to queue print jobs, and our technicians are free to create and delete queues as need be. They do not have access to the nagios configs.

So, the basic idea is that we periodically run a script on the nagios server that:

  • Queries each of our print servers for a list of existing queues
  • Creates a nagios config file for all print queues in the list
  • signals nagios to restart, and re-read its configuration

So we get a monitoring configuration that doesn't miss any print queues, and doesn't alarm about print queues that no longer exist. And no-one has to remember.

Which is nice.

So, ( and I apologise in advance for the code. I'm a sysadmin. Whaddya expect. ). The following is a perl script called from cron, once for each CUPS server. We pass the server address, and a human-readable site name, and we get nagios code out on stdout, which is piped into the appropriate nagios config directory. It depends on lpstat, which queries the CUPS server.



#!/usr/bin/perl

$cupsServer = $ARGV[0];
$site = $ARGV[1];

@queues = `lpstat -h $cupsServer -p | grep printer | grep -iv "sent" | grep -iv "off-line" | grep -iv "unable" | grep -iv "attempt" | cut -f2 -d" "`;
chop @queues;

foreach $queue ( @queues ) {
print "define service{\n";
print "\tuse generic-service\n";
print "\thost_name $cupsServer\n";
print "\tservice_description CUPS_" . $queue . "\n";
print "\tservicegroups " . $site . "PrintQueues\n";
print "\tcontact_groups " . $site . "-printer-admins\n";
print "\tcheck_command check_cups_queue!" . $queue . "\n";
print "\tregister 1\n}\n\n";

print "define serviceextinfo{\n";
print " host_name " . $cupsServer . "\n";
print " service_description CUPS_" . $queue . "\n";
print " notes_url http://wiki.example.com/wiki/index.php?title=Nagios/" . $queue . "&action=edit&preload=Nagios/NewServiceTemplate\n";
print " action_url http://" . $cupsServer . ".example.com:631/printers/" . $queue . "\n";
print " icon_image HPlj4550p.gif\n}\n\n";
}



Coupla notes - the nagios action_url shows a clickable icon taking the user to the CUPS queue in question. The notes_url points to a wiki page. We use this to keep notes about the service.

This is all very well, but nagios won't pick up the changes without a restart. So once cron has built the config file, it does this:


export now=$( /bin/date "+\%s" ); #current unix time ( the \% is escaped because this lives in a crontab, where bare % is special )
export commandfile='/var/lib/nagios2/rw/nagios.cmd'; #the file nagios reads for external commands
/usr/bin/printf "[\%lu] RESTART_PROGRAM\n" $(( now + 30 )) > $commandfile #tell nagios to restart in 30 seconds


And Bob's yer uncle. Monitoring our CUPS queues with nagios means we become aware of problems quicker, and respond quicker. And automating the config makes this practical.

Sunday 9 March 2008

What's your guiding question?

I know mine's changing again, and just for once, I'm aware of it happening.

Maybe (almost certainly) it's been put better elsewhere, but I made this up all by my own self. Your guiding question is the one you always ask. The one you measure everything you do against. The first question. The last question.

I'll try to explain.

My official job title is 'network and systems manager'. In practice, I'm a significant chunk of a team who make all the technical calls, and do all the fixing. We're generalists, who specialise in whatever's the problem right now. Not an unusual situation.

The pertinent part is where we 'make all the technical calls'. And technical decisions aren't always simple. Interesting technical decisions are never simple. Security versus ease of use. Customisability versus maintainability. Everything versus budgets. And the technical decisions I make and influence are affected, I hope strongly, by my guiding question.

The guiding question, for me, has evolved.

  1. What do I need to do to make this machine work?
  2. What do I need to do to make this service work?
  3. What do I need to do to make this service work well for my users?
  4. What do I need to do to make this set of services work together well for my users?
  5. How should these services work together to best support what my users are doing?

And today, it's
  • What service infrastructure should I be providing and supporting to equip my users to do what they do, but better?

What's your guiding question?

I moved from bloglines for the comments

and then I forgot to turn comments on. Whoops. Sorted now.

More monitoring with twitter

Or, the little twitterbot that could.

As I've mentioned before, we've got two identical nagios boxes running, one notifies us of problems via email, one via a special private twitter account that the systems team follow. So if email service or one of the nagios boxes goes down, we'll still get notified.

This is an improvement, and we're already getting to problems quicker. Great smashing super. But sometimes we trip over each other. I'll log in to fix something to find that P is already working on it. This hasn't bitten us yet, but rest assured, if we don't deal with it, it will bite us one day. So.

The little protocol we're working with now is as follows: when you take on a problem, you IM the others that you're working on it. But that's n-1 messages before you start working on the fix. A pain and a waste of time.

So I'm working on a little bot. It watches the direct messages feed for the monitoring twitter account ( let's call it skaffen ), and when it gets a new direct message, sends it back as an update to the skaffen account, with the original sender prepended. Like this:

skaffen: WARNING -- stuff is borken
mawhin: d skaffen fixing stuff
... up to a minute, because of twitter rate limiting
skaffen: mawhin is fixing stuff

So to pick up a problem you direct message the monitor. I think that's sweet.
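
The transformation at the heart of it is tiny; reading the transcript above, something like this (a sketch only, and my reading of the format: the real bot polls the twitter API and copes with the rate limiting):

```python
def reflect(sender, text):
    # Turn an incoming direct message into a public status update,
    # with the original sender prepended, so 'fixing stuff' from
    # mawhin goes out as 'mawhin is fixing stuff'.
    return '%s is %s' % (sender, text)

print(reflect('mawhin', 'fixing stuff'))
```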

Thursday 6 March 2008

Planning for networkshop 36

I and a colleague will be attending #networkshop2008 (the UKERNA-run conference for UK academic network folks).

I'm pretty excited about this one, as most every session is of interest. And already there's a clash. In the first set of sessions, the two of us have three choices - Voice Services, Network Security or Network Engineering - all of which promise to be interesting, and more to the point, relevant to stuff we are doing or intend to be doing or need to be doing.

What to do?

Well, the presentations are usually available online after the event. Sometimes before. I'd like to think that UKERNA will be videoing the sessions and putting them up somewhere. I'm hoping there'll be a number of delegates blogging. I intend to be twittering/blogging. I wonder what else would help?

fireeagle

Got a #fireeagle invite off of twitter. This is cool.

It's a Yahoo location service/framework/thingy. Yahoo holds a 'location hierarchy', and you allow applications to see/change some level of that hierarchy.

So you might want one application to know where you are down to the city level ( restaurant recommendations, for instance ), and another to know your exact location. Like perhaps http://rescuemefromthemadkidnapper.com.

Sticking with my current twitter fetish, http://twitter.com/dangerday is a bot which lets you update fireeagle with your location and query users' locations.

So on twitter you can find out where I was last prepared to admit to being with 'd dangerday q mawhin'.

Deeply cool.

Twitter use case

Colleges / tutors should have a twitter feed that students subscribe to.

So, students could get: last-minute timetable changes, reminders of assignments due, event announcements, freebies (only via twitter, to encourage usage).

And of course, it works the other way too. So if a student is working, and needs to ask a question, twitter it. Anyone following (the rest of the class, the tutor, possibly other tutors with expertise) can answer, and everybody gets the benefit.

I'm sure this has been done to death in HE, but it ain't about fashion, is it? It's about what helps.

Monday 3 March 2008

web 2.0 agogo!!

I've not 'got' this web 2.0 stuff up 'til now, and have sorta gone along with the slashdot 'yeah yeah, get a life, nothing new, whatever' approach.

Until this weekend.

Now I'm beginning to get with the program.

Google Talk on my blackberry ( with unlimited business data plan, and that's the kicker, I suppose ) is lovely. 'Cos it's one way of linking up to twitter.

And through twitter I get - system status notifications from my network and service monitoring systems - and it's a separate notification channel to email.

And through twitter I get - iwantsandy.com.

And my colleagues and partner can see my personal calendar 'cos iwantsandy publishes an ical feed that google calendar can understand.

And I've recently started using blackberrytracker.com, and with yahoo pipes' help I'm thinking I'll be able to geocode my twitters, retrospectively, using the REST API that blackberrytracker provides.

It all relies on not caring about the cost of mobile data.

But given that, it all hangs together. I can finally get close to running my shizzle entirely from my phone, and that not be crap.

GPS Timesheet idea

Now I imagine I'm your average tech geek, in that I'd much rather spend my time developing 'cool stuff' than drudge work, like filling in my timesheets.

In fact, so much so that I'm regularly in schtuck with my boss over it. He's werry understanding, but still...

So, I've got use of a blackberry 8820 with GPS, and a couple of applications that will produce tracklogs. And I can relatively easily send them to my PC via bluetooth. And I know where I work.

So, how about this:

  • keep the tracklogger on permanently.
  • upload to my PC whenever I remember.
  • persuade my PC to upload this to a webapp which..
    • knows where I work.
    • Will output a table (CSV export?) of when I arrived at work and when I left. Or at least a radius around where I work.
Now I think I can do this with what's available now. And I'm gonna have a go.
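
The 'radius around where I work' bit is just a distance check between each tracklog point and the office. A sketch using the haversine formula (the coordinates and the 200m radius are invented for illustration):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

WORK = (52.4862, -1.8904)  # made-up office location, Birmingham-ish

def at_work(lat, lon, radius_km=0.2):
    # Is this tracklog point within the radius of the office?
    return haversine_km(lat, lon, *WORK) <= radius_km

print(at_work(52.4863, -1.8905))   # a few metres from the office
print(at_work(51.5074, -0.1278))   # central London: no
```

Run over a day's tracklog, the first and last points that pass the check give arrival and departure times, which is all the timesheet needs.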

What would be better is the blackberry app that can twitter my current location every n minutes. So I build the whereaminow facebook app, etc, etc.

UPDATE Someone's already done it, sorta. It's called blackberrytracker, but I'm stuck waiting for the registration email.

UPDATE Works great. Ish. Stay indoors too long, and the GPS fix starts to drift. So my three days at home sick looked a bit more like a busy day for a drug dealer. Hmmm. There's something called a Kalman filter that should help.