Edinburgh 2011.... Day One (The Journey)

August 25, 2011

Despite having lived in Edinburgh for almost a year on and off while doing my MSc, I've never been up there at festival time. We decided to rectify this, so earlier on in the year (way earlier on in the year) we booked some accommodation for the second week of the festival and made our plans to hit the city for a festival holiday. Over a few posts I'll describe in laborious detail what we did, what we saw, and what we (or I) thought of it. Which you'll enjoy, I'm sure.

Getting to Edinburgh from Cardiff is pretty easy, but fairly expensive. You can fly, drive, or get the train. Flying is quick, but moderately expensive. The train is pretty cheap if booked in advance, but takes a long time, and if you don't book in advance it basically requires the sale of your first-born son to get a ticket. (Don't get me started on the cost of rail travel blah blah blah). I didn't bother to work out the cost of driving in our car; given the appalling state of it I wasn't totally convinced it could do another trip up to Scotland and back - for sure if we were going to drive we'd need to get 'whatever it is that makes the funny noise while braking' fixed, plus there's the petrol, the fact we had nowhere to park up there, and the whole actually-having-to-drive-500-miles thing. After some back and forth we settled on getting the train. Luckily, because we're the organised type (read: Lisa is organised and drags me along for the ride), we were on the ball for the advance tickets, and managed to get singles from Cardiff to Manchester for £13 each, then from Manchester to Edinburgh for £17ish each (and the same on the way back), giving us a return cost of £60 each - pretty good.

We caught a nice early train out of Cardiff on a Sunday morning, bagged a table seat and sat back for the million-hour trip to Manchester. Unfortunately for us it seemed like everyone in the world wanted to take the same train as us; by the time we got to Hereford it was rammed. A nice man (an ex-quality-assurance chap with a nice terrier dog) took one of the seats opposite us, and he was soon joined by a skinhead Chelsea fan on his way to Stoke to watch the match. I managed to bury myself in my laptop watching a movie, leaving Lisa to make polite conversation. Ha. From what I can gather the conversation seemed to revolve around working out the price of drinks when serving at a bar. Or 'mental arithmetic' as it's sometimes known. The train from Manchester was a bit better in terms of overcrowding, but not much. This time the conversation was totally football dominated. We sat by a young guy from Sunderland, who wasn't just an avid Sunderland fan, he was an avid football fan. He seemed to know every player in every team in the whole football league, and every transfer that had occurred over the summer. Usually when people ask who I support, and I reply 'Shrewsbury Town', there follows a long conversation explaining what exactly that is, and inevitably some explanation about how there's a whole bunch of football going on aside from the Premier League. Not with this guy - within a millisecond of the words 'Shrewsbury Town' leaving my mouth he was engaging me in conversation about the Town players he'd played with when he was at Carlisle, where the ex-manager was now, how his son was doing, how we were looking for the new season, and on and on. Once he got going there was no stopping him. Some Scottish guys got on at Preston, and it was soon revealed that this chap's football knowledge didn't stop at English football - oh no, he could engage the Scots in any amount of conversation about the SPL as well. I wondered for a bit what he could do with a memory and passion like his if he applied it to something other than chasing Sunderland up and down the country and harvesting every bit of football columnist opinion he could find. It scared me a bit, so I stopped talking to him and read my book instead. At some point I fell asleep, and when I woke up he'd gone. Probably had a five-a-side match to get to.

We got into Edinburgh at about 5.30pm. It was gloriously sunny and warm, and it felt good to be there. I've always loved the city, more than any other I've found myself in. Most times I've been there I've arrived by train into Waverley, so walking up out of the station felt right and a bit like coming home. It's a strange feeling to experience in a city that you've spent a relatively short amount of time in, but I was genuinely excited to arrive back in Edinburgh again. Our accommodation was only a ten/fifteen minute walk from the station so we headed straight there to get checked in before our first show.

We booked accommodation in March of this year, opting for a University-run studio apartment in the city centre, in the Richmond Place Apartments. Edinburgh University actually give an alumni discount when booking rooms in _some_ of their halls and holiday flats, but not the ones we booked, so it came in at about £100 a night. Fairly pricey, but pretty cheap by festival standards, especially considering the location and the facilities available.

The accommodation is actually in a university hall of residence - basically it seems the university have converted a floor of a residence tower block into studio apartments. Above the apartments are six or seven floors of normal university halls. Because it was summer I guess the place was pretty empty and therefore pretty quiet, although there did seem to be some students coming and going. I can imagine in term time it must be a pretty noisy place to stay, but in summer and at festival time it's fine. The flat itself was really nice, way better than my house; although that's not hard to achieve. The room was decently sized; you could swing a cat in it for sure. It was kitted out with all mod cons, everything one could want for a week away in Edinburgh. About here would be a good place for a picture, but I don't have any, because I forgot to take any - if you are desperate to get an idea of what the place is like, go here and click the link for 'Richmond Place studio apartment with mezzanine'. Our room was almost exactly different from that, while being pretty much the same.

So, we checked in, threw clothes in the wardrobe, and left to go get some festival...

Why write?

August 24, 2011

It's a fairly sensible question: why do I write anything here? There aren't going to be that many people reading, if any. Probably the only people that come past are going to be the other people in COMSC having a nose around to see what people are putting on their websites, and the few visitors I get from the random Google searches that seem to appear in my analytics reports every now and again. Also probably my mum, because she's a bit of an internet stalker. Hi mum.

Despite the fact that no-one is reading, I think it's important for me to write fairly regularly. It's a good skill to practice and keep up, and I feel that even in these post-blog social network dominated days, the writing of a few lengthy blog posts on a particular topic every now and then is a fairly healthy habit to get into. The other reason to post regularly is the hope that when I post a bit of code that I've been working on, or an explanation of some work, someone somewhere will find it useful. So, I'll continue to post, until I get bored.

With that in mind, recently I went back up to Edinburgh for a week for the festival. Because I have little work that I feel like writing about on here (or that feels ready to be written about), but still feel that writing semi-regularly is good for me, I'm going to post some reviews of my week up there. Almost a 'what I did on my holidays' type thing. So look forward to that over the next week or so.

More Augmented Conversation

August 4, 2011

Another update on the summer project? Already? Yes.

The project is really cracking on. We're two weeks from the end and beginning to see the results roll in; every meeting brings a new version of the software with more functionality. Nick has successfully written a nice framework that allows us to input conversations and automatically retrieve search results based on the topics of those conversations. Even the voice input works (almost), and we've got enough time to try and move on to some content extraction ideas. I've now written a script to do some automatic evaluation, and we're in a position to subject the attendees of next week's MobiSoc meeting to a human evaluation, which I'm sure will be fun for all concerned.

CUROP summer project update

July 12, 2011

I figure it's about time I posted an update on the summer project that Ian and I are supervising. Our summer student Nick is now two weeks into the project and seems to be getting to grips nicely with the problem. He also seems to be coping pretty well with two novice supervisors babbling away at him whenever we meet up! If he leaves the office without being completely overwhelmed with information I think it's been a successful meeting.

So far Nick has managed to code some software that takes textual input, passes it to several web services for keyword detection and then performs searches for relevant content on Google, Yahoo! and Bing. The next stage is to get to grips with the temporal nature of conversations so that the keyword detection and searches are carried out continually, with some control of time scales and how much text is used as input. After that we'll start to look at algorithms for combining keywords to get the most successful search queries possible. Meanwhile, we need to come up with a human evaluation that isn't going to make everyone at MobiSoc hate us when we ask them to do it, and an algorithmic evaluation that makes sense and actually evaluates the right parts of the system.
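
To give a rough flavour of the shape of the thing (this isn't Nick's actual code - the function names, the stand-in keyword picker and the single Bing query below are all just placeholders for illustration), the pipeline looks something like this:

import urllib
import urllib2

def extract_keywords(text):
    # stand-in for the keyword-detection web services: just pick the three
    # longest distinct words, so the sketch runs without any API keys
    words = [w.strip('.,!?').lower() for w in text.split()]
    return sorted(set(words), key=len, reverse=True)[:3]

def search(query):
    # stand-in for the Google/Yahoo!/Bing searches - here we just pull back
    # the Bing results page for the combined keywords
    url = 'http://www.bing.com/search?' + urllib.urlencode({'q': query})
    return urllib2.urlopen(url).read()

snippet = "so we booked the train up to Edinburgh for the festival"
keywords = extract_keywords(snippet)
results_page = search(' '.join(keywords))
print keywords

The real system swaps the stand-in keyword picker for calls to the external keyword-detection services and fires the resulting query at all three search engines.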

Overall though we're ahead of schedule, which is a very good place to be. I'll continue to post about progress as the project goes on.

WowMom 2011

June 29, 2011

Last week I attended WowMom 2011 in Lucca, Italy. The conference was pretty good, but I was mainly there for the Autonomic and Opportunistic Computing (AOC) workshop where I was presenting some work, as I mentioned in an earlier post. The workshop was really interesting; a lot of the work was relevant to work we'd done in the past on the SocialNets project and to work we'll be doing in the future with the Recognition project. There were some very interesting discussions on areas such as mobility models, mobility traces and the capturing of user data, particularly in the keynote from Tristan Henderson and also in the panel session at the end of the day. I also met some interesting people from around the place and hopefully will run into them again at some conference or other down the line.

I've just got to sort through all my notes now so I can talk about it all at MobiSoc tomorrow lunchtime!

Bad Foursquare Day...

June 17, 2011

I can understand losing mayorships, but when it's somebody close to you and she steals two from you in one day, it's ridiculous:

I will have my revenge...

Curop Project

June 14, 2011

Some excellent news yesterday as we've heard that we've got the CUROP funding for the summer project that I previously mentioned.

All being well, it should start within the next couple of weeks, so I'll update with the progress once there is some!

Logging in to websites with python

June 9, 2011

As previously explained, I needed a python script to log in to a website so I could access data. There are loads of examples out on the web of how to do this; my solution (mashed together from many examples) is described below. For the whole script, jump to the end.

Firstly, we need to set some simple variables about the website we're trying to log in to. Obviously, I'm trying to log in to MyFitnessPal, but this process should work with most websites that use a simple form-and-cookie-based login process. We need to set the URL we are trying to access, where to post the login information to, and a file to store cookies in:

# url for website        
base_url = 'http://www.myfitnesspal.com'
# login action we want to post data to
login_action = '/account/login'
# file for storing cookies
cookie_file = 'mfp.cookies'

Then we need to set up our cookie storage and URL opener. We want the opener to be able to handle cookies and redirects:

import urllib, urllib2
import cookielib

# set up a cookie jar to store cookies
cj = cookielib.MozillaCookieJar(cookie_file)

# set up opener to handle cookies, redirects etc
opener = urllib2.build_opener(
    urllib2.HTTPRedirectHandler(),
    urllib2.HTTPHandler(debuglevel=0),
    urllib2.HTTPSHandler(debuglevel=0),
    urllib2.HTTPCookieProcessor(cj)
)
# pretend we're a web browser and not a python script
opener.addheaders = [('User-agent',
    ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) '
     'AppleWebKit/535.1 (KHTML, like Gecko) '
     'Chrome/13.0.782.13 Safari/535.1'))
]

Next we need to open the front page of the website once to set any initial tracking cookies:

# open the front page of the website to set
# and save initial cookies
response = opener.open(base_url)
cj.save()

Then finally we can call the login action with our username and password and log in to the website:

# parameters for login action
login_data = urllib.urlencode({
    'username' : 'my_username',
    'password' : 'my_password',
    'remember_me' : True
})
# construct the url
login_url = base_url + login_action
# then open it
response = opener.open(login_url, login_data)
# save the cookies and return the response
cj.save()

The parameters for the POST request (and the action to POST to) can usually be found by examining the source of the login page.

There you have it - you should now be logged into the website and can access any pages that the logged in user can normally access through a web browser. Any calls using the 'opener' created above will present the right cookies for the logged in user. The cookies are saved to file, so next time you run the script you can check for cookies, try and use them, and only re-login if that doesn't work.
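
As a rough sketch, the 'check for cookies first, only re-login if needed' bit could look something like the following - the page fetched and the 'Log In' marker used to decide whether the session is still valid are just guesses for illustration, not something taken from the attached script (it reuses the cookie_file, base_url and login_action variables from above):

import os
import urllib, urllib2
import cookielib

# reuse saved cookies if we have them
cj = cookielib.MozillaCookieJar(cookie_file)
if os.path.exists(cookie_file):
    cj.load()

opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

def still_logged_in(opener):
    # fetch a page only a logged-in user should see, and look for the
    # login prompt in the returned HTML (both are assumptions here)
    page = opener.open(base_url + '/food/diary').read()
    return 'Log In' not in page

if not still_logged_in(opener):
    # cookies missing or stale - do the full login dance again and re-save
    login_data = urllib.urlencode({'username': 'my_username',
                                   'password': 'my_password'})
    opener.open(base_url + login_action, login_data)
    cj.save()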

My full version is attached to this post; it's under a CC-BY-SA license, so feel free to use it for whatever.

Quite how this will cope when websites catch up to the new EU cookie legislation is anyone's guess. My guess is it won't.

Scraping data from MyFitnessPal with python

June 9, 2011

Following my success with extracting my Google Search History in a simple manner, I've decided that I should do something similar to extract all the data I've been feeding into MyFitnessPal for the last 5 months. As I briefly mentioned in the review of the app + website, the progress graphs leave a lot to be desired and there's very little in the way of analysis of the data. I have a lot of questions about my progress and there is no easy way to answer all of them using just the website. For instance, what is my average sugar intake? Is this more or less than my target intake? How does my weekend nutrition compare to my weekday nutrition? How much beer have I drunk since starting to log all my food?

Unfortunately there isn't an API for the website yet, so I'm going to need to resort to screen scraping to extract it all. This should be pretty easy using the BeautifulSoup python library, but first I need to get access to the data. My food diary isn't public, so I need to be logged in to get access to it. This means my scraping script needs to pretend to be a web browser and log me in to the website in order to access the pages I need.

I initially toyed with the idea of reading cookies from the web browser sqlite cookie database, but this is overly complex. It's actually much easier just using python to do the login as a POST request and to store any cookies received back from that. Fortunately I'm not the first person to try and do this, so there's plenty of examples on StackOverflow of how to do it. I'll post my own solution once it's done.
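
As a taste of the parsing side, here's a rough sketch of the kind of thing I have in mind - it assumes BeautifulSoup 3, a logged-in 'opener' like the one from the login post, and a made-up diary URL and table class, since I haven't properly inspected the page structure yet:

from BeautifulSoup import BeautifulSoup

# the date parameter and the 'food-diary' class name are guesses for
# illustration - check the real page source before relying on them
diary_url = base_url + '/food/diary/my_username?date=2011-06-01'
html = opener.open(diary_url).read()

soup = BeautifulSoup(html)
table = soup.find('table', {'class': 'food-diary'})
if table:
    for row in table.findAll('tr'):
        cells = [''.join(td.findAll(text=True)).strip()
                 for td in row.findAll('td')]
        print cells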

Losing Weight in 2011 continued... Libra

June 8, 2011

Another of the very useful apps I've been using since the start of the "New Regime" is Libra. It totally makes up for the crappy progress graphs on the MyFitnessPal website or in the mobile app.

It's an app that has a singular purpose: it's just for tracking weight. But it does it very well. You enter a weight every day; it works out statistics based on those weights, calculates a trend value for your weight (smoothing out the daily fluctuations caused by water intake etc.), and predicts when you'll hit your target. It performs a weekly backup of data to the SD card, and will export data in CSV format too.
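
For the curious, a trend line like that can be produced by something as simple as an exponentially smoothed moving average - the sketch below is just my guess at the general idea, and the 0.1 smoothing factor is an assumption rather than a value taken from Libra:

def trend(weights, alpha=0.1):
    # exponentially smoothed moving average: each day's trend value moves a
    # fraction 'alpha' of the way towards that day's measured weight
    smoothed = [weights[0]]
    for w in weights[1:]:
        smoothed.append(smoothed[-1] + alpha * (w - smoothed[-1]))
    return smoothed

daily = [82.4, 82.9, 82.1, 81.8, 82.3, 81.5, 81.2]
print trend(daily)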

The Libra app is available on the Android Market for free, with a paid ad-free version also available.