WJEC Paris

Friday, Jul 12, 2019

Me in Paris for WJEC

Just a few days after a very successful DataJConf I hit the road again, this time heading to Paris for the World Journalism Education Congress. An entirely different sort of conference, WJEC aims to bring together educators from all over the world to discuss the past, present and future of journalism education. The conference is big, with over 500 attendees, and has been running for over fifteen years (although, as it only takes place once every three years, this was just its fifth edition).

It was interesting to attend an education conference outside what I’d consider my usual area. I wasn’t the only computer scientist there (in fact there were a number of us), but of course the main focus was on general journalism education, and I’ll be the first to admit that while I know the area reasonably well, I’m not an expert. However, the issue of data journalism education (and to a lesser extent computational journalism education) loomed large over many parts of the conference, and that is a topic I do know very well. We’ve been teaching in this area for over five years now, and it was great to talk to other educators who’ve been teaching similar material and to have our experiences and approaches validated.

It was also rewarding to contribute to discussions from a position of expertise, particularly in helping people who were looking to start similar programmes, or who want to begin covering data journalism more widely in their own schools. I spent some time in a syndicate on data and computational journalism education, working towards an understanding of the skills that need to be covered ‘as a minimum’ if you’re going to address these areas within general journalism education. I was pleased to introduce the idea of computational thinking to a number of educators, and to discuss how useful it can be as a thought process outside computer science, and how well it maps onto some of the processes involved in journalistic investigation and data-driven journalism.

In fact, many of the conversations at the conference made me realise just how cutting-edge our MSc in Computational and Data Journalism is, and how important the skills we teach are. A number of employers told me they need people in the newsroom with the abilities our course develops, and many other educators mentioned that they wish they could convince their own school to create a similar programme.

We’d actually had a paper accepted to the conference, presenting a reflective look at the first five years of the CompJ MSc. I’ll post more about the talk I gave (and the slides!) in a future post, but it went well and we had some interesting discussions afterwards. Unfortunately, the paper sessions were, in my opinion, some of the weaker parts of the conference. There were many parallel tracks running at the same time, so audiences tended to be very small. At the same time, the conference app (which was very cool) listed only the titles of the papers in each session, so it was hard to know what a session would be about, and in many cases the theming was quite loose (maybe even a bit random!), so papers in the same session didn’t always have much to do with each other. Still, with that number of submissions, papers and contributors it’s a massive logistical challenge to put such a programme together, so I think a little leniency is allowed!

Overall it was a really great experience. I met some lovely new people, caught up with a few old acquaintances, and I look forward to working more with them in the future.

DataJConf III

Friday, Jul 5, 2019

Me in Malaga for DataJConf

Last week we held the third edition of the European Data and Computational Journalism Conference in Malaga, Spain. This is the conference that Glyn, Bahareh and I run, which started in Dublin in 2017, had a second edition in Cardiff last year, and has now embarked upon a journey around the European mainland. It was a fantastic event, with around 100 attendees representing different organisations from across industry and academia, and from 14 different countries. As with last year we had a first day presenting a mix of industry and academic talks, and a second day with a focus on more practical workshops and tutorials.

When Bahareh initially pitched the idea of running a conference in this field one of the main drivers for us was to encourage a mix of industry and academia, as we think the conference can be more useful this way. It has been a great pleasure to see this happen over the last three years, with both academics and practitioners putting forward talk proposals, coming along to the conference, and starting what I hope are interesting and fruitful conversations. By not splitting the conference into ‘industry’ and ‘academic’ tracks we’re able to ensure that all involved in data and computational journalism get to see all sides of the field, and this is a really positive outcome for the community. I’m excited to see some collaborations starting to happen that may not have happened if people hadn’t got together at the conference, and I think this is really one of the key benefits of this conference as a venue.

There were some truly interesting talks on both days, and some very interesting practical workshops too. Our keynotes, Daniele Grasso and Meredith Broussard, did a fantastic job of opening and closing our first day of talks. Our local hosts did a spectacular job with the organisation: Bella and the team put together a local programme that was welcoming and inclusive, and showed off Malaga to its fullest. We as organisers all agreed that the job the team had done was fantastic, and they’ve set the bar very high indeed for wherever the conference goes next.

The roundup video gives a really good overview of what the conference looked like:

Fitness Data Downloading

Wednesday, May 29, 2019

Like many (most?) people these days who have the foolish notion to exercise outside I track my activities with GPS and a logging app. Well, I say ‘app’, but of course with the way all things link together my data ends up shared between four or five different services.

I recently had call to download my location data for an upcoming project. One of the sites that stores my running and cycling activity is Runkeeper, and it was from here I chose to pull my data.

Runkeeper has an API, and the first step in pulling our data out of the service is signing up to use it via the Health Graph website. The link is buried slightly in the documentation - but you can register an application at https://runkeeper.com/partner/applications. You need to enter a few details for the application - as we’re only going to be using this to access our own data, it doesn’t really matter what information we put in here:

App Registration

Once we’ve signed up and got access to the API, we are provided with a client_id and a client_secret. These are the OAuth keys that identify our application to Runkeeper:

Keys

To access user data from Runkeeper we need to follow a typical OAuth flow. We open a URL, sending Runkeeper our client_id. Runkeeper asks our user to log in and give our application permission to access their data. It then sends them back to our redirect address with a single-use access code. We then make a request to Runkeeper, this time including both our client_id and our client_secret, to exchange this code for an access_token. This token will allow us to make requests to the Health Graph API and access the data of the user that authorised our application. If we were building a public-facing application we’d need to write a bit of server code and get it online somewhere to handle this flow, but for our purposes here we only need to authorise ourselves, so we won’t bother and we’ll do it all manually.

Putting together the first request URL is quite straightforward:

import json
import os
import time
from urllib.parse import urlencode, quote

import requests

from _credentials import client_id, client_secret, access_token

DATA_DIR = os.path.join(os.getcwd(), "data")


def get_auth_url():
    # create an authorisation url to copy paste into a browser window
    # so we can get an access token to store in _credentials.py

    base_url = "https://runkeeper.com/"
    endpoint = "apps/authorize"
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": "http://www.martinjc.com",
    }

    auth_url = base_url + endpoint + "?" + urlencode(params, quote_via=quote)
    return auth_url

Once we have the auth_url we can open it in a web browser. This will open the Runkeeper website and ask us to give permission to our application to access the necessary data:

Authorising the Application

If we authorise the application, it will send us back to our redirect URL. A proper server would listen for these requests coming in, capture the code provided in the address bar and carry on with the authentication flow. Again, we’re just doing it manually, so we’ll just copy and paste that code out of the address bar so we can use it in the next step of the process:

def do_auth():
    # exchange the single-use code copied from the address bar
    # for a longer-lived access_token
    base_url = "https://runkeeper.com/"
    endpoint = "apps/token"
    params = {
        "grant_type": "authorization_code",
        "code": "CODE_WE_JUST_COPIED",
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": "http://www.martinjc.com",
    }

    r = requests.post(base_url + endpoint, params)
    print(r.json())

If all is well, the response from the server will contain the access token we need to make authenticated requests to the server.
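The field we care about in that JSON is the token itself. The exact response shape below is an assumption based on the standard OAuth2 spec rather than copied from the Health Graph docs, but pulling the token out looks something like this:

```python
# hypothetical token response, shaped per the OAuth2 standard
response_json = {"token_type": "Bearer", "access_token": "EXAMPLE_TOKEN"}

access_token = response_json["access_token"]
# paste this value into _credentials.py alongside client_id and client_secret
print(access_token)  # EXAMPLE_TOKEN
```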

We can then use this access_token every time we need to make a request to the API:

def get_data(endpoint, params=None):
    # endpoints may arrive with a leading slash (the 'next' URIs
    # returned by the API do), so normalise before joining
    base_url = "https://api.runkeeper.com/"
    url = base_url + endpoint.lstrip("/")

    headers = {"Authorization": "Bearer " + access_token}

    r = requests.get(url, params=params, headers=headers)
    if r.status_code != 200:
        return None
    return r.json()

The above function makes a request to the given endpoint and (assuming it succeeds) returns the data to the calling code. If we want to get the list of all activities for the user, we can use it like so:

def get_fitness_activities():
    # store activity URIs separately
    with open("activities.json", "w") as activities_output_file:

        activities = []
        activity_data = get_data("fitnessActivities")
        if activity_data is None:
            return

        # the feed is paginated, 25 activities at a time
        num_activities = activity_data["size"]
        num_calls = int(num_activities / 25)
        call_count = 0

        if activity_data.get("items"):
            activities.extend(activity_data["items"])
            print(call_count, len(activity_data["items"]), len(activities))

        while activity_data.get("next") and call_count < num_calls:
            activity_data = get_data(activity_data["next"])
            call_count += 1

            if activity_data is None:
                break

            if activity_data.get("items"):
                activities.extend(activity_data["items"])
                print(call_count, len(activity_data["items"]), len(activities))

        json.dump(activities, activities_output_file)

The list of activities in the Health Graph is paginated, so we need to call the endpoint repeatedly, fetching each page of activities.
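As a quick sanity check of the arithmetic in the function above (the page size of 25 is assumed from the hard-coded divisor; I haven’t verified it against the API docs):

```python
# hypothetical numbers: 137 activities at an assumed 25 per page
page_size = 25
num_activities = 137

# the initial request fetches the first page; num_calls bounds the
# follow-up requests made through the 'next' links
num_calls = int(num_activities / page_size)
total_pages = 1 + num_calls

print(num_calls, total_pages)  # 5 6
```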

Once we have downloaded the full list of activities, we can then download the full details of each activity. The API is rate-limited to 100 calls every 15 minutes per user, so we’ll put the code to sleep for 9 seconds between calls to ensure we don’t go over the rate limit:

def download_activities(activity_list):

    for activity_uri in activity_list:
        id_str = activity_uri.replace("/fitnessActivities/", "")
        filename = os.path.join(DATA_DIR, "%s.json" % id_str)
        if not os.path.exists(filename):
            activity_data = get_data(activity_uri)
            time.sleep(9)
            if activity_data is not None:
                with open(filename, "w") as output_file:
                    json.dump(activity_data, output_file)
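The 9-second sleep in the loop above isn’t arbitrary - it falls straight out of the published limit:

```python
# 100 calls per 15 minutes means at most one call every 9 seconds
window_seconds = 15 * 60
max_calls = 100

min_interval = window_seconds / max_calls
print(min_interval)  # 9.0
```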

And that’s it. Run the code, give it enough time, and you’ll end up with all your activity data downloaded from Runkeeper. Easy.

The full code for the script is below, or in this gist here.

Academic Parent

Wednesday, Nov 29, 2017

I, like many others, made note of a recent tweet by Nathan Hall discussing the attitude of his colleagues when he prioritises family over career:

The ensuing thread revealed plenty of people who felt the same way or who had experienced the same reactions and attitudes in their own place of work.

It made me reflect upon the level of privilege I have to work somewhere where I can genuinely say that this is not my personal experience. Since my son was born I have been working from home one day a week, both to save on childcare costs and to get some quality time with him. When he was a small baby, this really was working from home, punctuated by periods of playing with or feeding him whenever he woke up from one of his many, many naps. As he’s got older the amount of work I do on a Friday has decreased massively, but thankfully my job is flexible enough that I can take up the slack by working longer days, or committing some time in the evenings and weekends. The important thing is that I’ve never felt the need or pressure to work more than a ‘usual’ working week. I’m not one of those academics who feel they must work 60+ hour weeks, and I never will be. I love my work and enjoy what I do, but ultimately my family and home are far more important and far more enjoyable.

After my daughter was born, my wife and I rearranged our work schedules, as she was missing out on the time I had with the children on Fridays. As a result, we both now work a half day on Friday, swapping over at lunchtime. This gives us each a few hours alone with the children, and by each working slightly longer hours the other four days of the week, neither of us struggles to get our hours in and get the job done.

This has always been an informal arrangement; although my employer offers formal flexible working which I could apply for and am fairly sure I would get, I’ve never bothered. Despite this, my colleagues are supportive of my working arrangements, and I’ve never been pushed into attending a Friday meeting or been questioned on why I’m not in the office five days a week.

The same message has come from higher up in the University. During a recent training course, one of the University’s Pro-Vice-Chancellors made the point that at the end of the day this is only a job, and other things are more important. It’s nice to know your own view is shared by at least some of upper management too.

From the comments and discussion around Nathan’s tweet, and from speaking to colleagues around the world, it seems that some of my experiences of life in academia are sadly atypical, and this is a real shame. Whether it’s a systemic structural issue to do with workloads in academia or a form of societal pressure, I do feel we can improve as a profession in supporting those colleagues for whom work is not the sole driving force.

DataJConf Debrief

Saturday, Aug 5, 2017

DataJConf at UCD

A month ago we held the first European Data and Computational Journalism Conference in Dublin, Ireland. This is a long overdue post about that event.

The conference idea started as all good ideas do, in the pub and with a tweet. It was at a social event during ICWSM ‘16 where I was first introduced to Bahareh Heravi, a data journalism lecturer from UCD. We talked briefly about the things we’re doing in Cardiff with the CompDJ MSc, and she spoke about her plans to introduce something similar in Dublin. A long time passed, and she got in touch over Twitter to ask if I was interested in organising a conference with her, to cover Data and Computational Journalism. Always keen to say yes to things that aren’t technically part of my day-to-day job and that will cause me a lot of work, I jumped straight in, dragging m’colleague Glyn along for the ride.

We spent several weeks having Skype calls to discuss and plan the conference, getting a website together, releasing the call for papers, organising the programme committee, managing the reviews, selecting talks, creating the programme, and then getting tickets on sale for the conference. It was a bit of a mad rush, but by June we were starting to see tickets sold, and had an excellent line up of speakers for the day. All I had to do then was sit back and wait to see if people turned up. Bahareh had less of an easy time, as she was hosting the thing, so spent many hundreds of hours organising the logistics of the event, the catering, lanyards, bags, souvenirs and all the other things that go into making a successful conference - a huge amount of work for which we are truly grateful!

When we initially spoke about the conference, we wanted to make sure we had a mix of industry and academia, and that it really was a mix. Bahareh had had a disappointing time at another data journalism conference where an academic track was included but kept totally separate from the industry track, which resulted in a lack of discussion between the two groups of participants. This was something we were determined to avoid at all costs. We were also unsure whether there was an appetite for this sort of conference: our initial aim was that if about 50 people turned up, we’d count it as a success. In the end, we had just over 100 people through the doors, which was amazing, and there was a real mix of people from academia and industry. There was a diverse set of talks on a range of topics, and it was really nice to see industry types asking questions of the academics, and vice versa. We also avoided the dreaded ‘all-male’ line-up, with the majority of talks given by women. The proceedings from the conference are now available, if you’re interested.

The conference was followed by a couple of half-day workshops: an introduction to Data Journalism, and an Unconference, both of which were received very well.

All in all, a really successful event. I met a lot of interesting people and made some good contacts for the future. There were a lot of interesting discussions and I came home full of ideas for things to introduce within our teaching and research.

It was such a good time, we’re doing it all over again. DataJConf 2018 will be held here in Cardiff. So I guess this time it’ll be Glyn’s turn to do all the running around organising things…

CompDJ Team Selfie

Post-Graduation Thoughts

Friday, Aug 4, 2017

Last month I took part in my first graduation ceremony as part of the academic procession. This is the bit where staff members from the school(s) that are graduating get dressed up in their silly robes, ‘process’ into the graduation ceremony, and sit on stage for an hour or so clapping as all their students stride across the stage to shake the VC’s hand and graduate from their degree.

It’s a lot of fun, because who doesn’t like dressing up in silly robes and a hat? But it’s also good for the students: I think it shows them that we genuinely care about the fact they’re graduating, and it’s nice for them to see familiar faces up on the stage celebrating their hard work. I know I enjoyed that part of my own graduation, so I’m happy now to be able to take part myself. I actually went to two ceremonies this year: the ceremony for the School of Computer Science & Informatics, and the ceremony for the School of Journalism, Media and Cultural Studies.

In both ceremonies I was really happy to see a number of students that I know and have taught. In the COMSC ceremony there were a lot of MSc students from the various programmes from my second year of lecturing the Web Apps and Visualisation modules, as well as a couple of students whose dissertation projects I supervised and a few undergraduates I’ve worked with on summer projects. The JOMEC ceremony was the first at which students from the MSc in Computational Journalism have graduated, which was really nice. I felt a real sense of pride as they read out the name of the degree programme I helped create, and more as the students strode across the stage.

It’s really pleasing to see students you’ve taught start making their way in the world. Even more so when you see them creating great work and doing interesting things in ‘the Industry’. Take one of our first students Charles, who’s followed a successful stint at Trinity Mirror with a move to go push things forward at The Bureau Local. Or one of his colleagues Nikita, who’s working at one of the first data journalism outfits in India. Or last year’s grad Niko, who after a successful Google News Lab fellowship at The Guardian last year is now working on their vis team. Even this year’s students are at it before they’ve even finished: Jess is busy on a GNL fellowship for Trinity Mirror, Laura is on an internship at The Telegraph and Haluka is doing the same at The Financial Times. Four of this year’s students have job offers already, with Rae having left for the US to go run the Springfield bureau of The Daily Line.

It’s a bit weird, knowing that a few years ago we had an idea that we needed a course to train people to do a thing, and now there are people out there doing just that thing. It’s exciting, and I’m looking forward to seeing more of it.

Thoughts from the CEI Learning & Teaching Conference 2017

Wednesday, Jul 5, 2017

Yesterday was the annual ‘Learning & Teaching Conference’ of the Cardiff University Centre for Education Innovation. This year the theme was ‘To Tech or not to Tech, is that the Question?’. It’s the first time I’ve attended this conference, and I thought I’d get some of my thoughts from the day down in pixels.

(Reading this back later, I realise what a tangent this went on. From “I’m going to review #CUCEI17” to “there are some teachers out there that could do better but I don’t know how to help them”. That was quite a diversion, for which I apologise. For an actual summary of the conference there’s a great storify of the main discussions which I think sums it all up very nicely.)

It was a very interesting and at times thought-provoking conference. I felt the main thematic question was solved fairly early on, perhaps even before we entered the room. I think most of us realise it’s not about the tech, it’s about the teaching, and the tech is just one tool in our toolkit that helps us do that effectively. The answer to the question is therefore ‘no’ and the real question is: ‘what tech and how much?’.

The keynote was an interesting look at how technology can be embraced by a whole institution, but what really caught my attention during this talk (with one eye on our new building with MATHS) were the lecture theatres that are arranged for group work:

Learning Spaces

How fantastic would it be to teach in a space like that: you could do so many activities beyond just standing at the front ‘lecturing’. I’m becoming more and more vehement in my belief that a traditional ‘didactic’ lecture is the wrong way to teach most topics in Computer Science, and is actually harmful to our students’ ability to learn and think independently. Breaking the link between the idea of a ‘lecture theatre’ and a ‘lecture’ would be a good start towards changing the way both staff and students think about these things we call ‘lectures’. One of the frequent comments I’ve heard (and apologies to whoever said it as I can’t remember who it was - possibly Vince or Dafydd) is “wouldn’t it be great to have lecture theatres that don’t have a ‘front’”. I couldn’t agree more.

But of course, this, a lecture theatre with a funky design, isn’t technology. Yes, the group tables can be tech-enabled, with power and interactive displays and all sorts of other gadgets and gizmos, but really we’re just talking here about rethinking our way of teaching to a more interactive, collaborative (and collectivist?) paradigm.

This is where I think the problem comes in. Show a lecture theatre like that to a room full of academics who have all managed to carve out the time to attend a conference on education innovation and of course they’re all going to start thinking ‘Wow, the things I could do with that’. We’re the same people who are already trying to innovate in our teaching. We used lecture capture as soon as they put webcams in our classrooms (or even before). We’ve tried out all the polls and live Q&A systems during lectures. We’re creating long-lasting communities for our students in Slack, asking them to text in questions during lectures, and open-sourcing our lecture materials. We’ve already moved past an “I’ll stand at the front and talk, you sit there and listen” model of teaching. Some of us have stopped ‘lecturing’ entirely, are fully committed to a flipped learning model, and now spend all our contact time interacting with students, working on activities or problems, and really delivering ‘value’. The conference in this regard was preaching to the choir. Yes, it was helping those of us keen on innovation to discover new tools for our toolkit, talk to other like-minded teachers, and validate our own approaches, but it wasn’t really attempting to answer the big problem in our own work: how do we convince everyone else to change with us?

Because the big problem isn’t with the people who are trying to innovate. The problem is the academic who doesn’t want to do any of that stuff. The academic who thinks “well, I learnt through traditional lectures, so that’ll be fine for all these students”. I actually had a colleague say to me the other day: “I learnt from a book, so I don’t need to make videos for my students. They can just read the book like me”. They genuinely didn’t realise the benefits that can be had by moving the passive learning to non-contact time and creating an active learning environment within their lectures.

We have stuffed universities with the kind of academics who don’t realise that they’re there because they’re the sort of academically minded studious individual who could have taught themselves off some notes scribbled on the underside of a table if they had to. Anyone who’s dragged themselves through three years of undergraduate education, a year of masters, three to five years of a PhD, and a probable multitude of short-term RA contracts on many different topics before finally reaching the relative stability of a lectureship is going to be the kind of person who can learn things themselves and quickly in whatever circumstance. They don’t know what it is to be the person who struggles in class, or who finds things difficult, or who just doesn’t respond well to a fifty minute ‘lecture’.

We have a situation where people who have no difficulty in learning are having to teach.

And I find that this means they have no desire to try to do things differently, because the way they’re doing things “worked for them”. So how do we communicate to these individuals that ‘it worked for me’ is not a valid argument? How do we show them that there is a better way? That they can actually engage with students in a more meaningful fashion? How do we make them understand that

Just reading from a fucking powerpoint is not education.

Those are the questions I want answered next.

Catching a Bug

Monday, Jun 12, 2017

I’m doing some data analysis, and I just caught a showstopper of a bug. Want to see it? Here’s the code as it was before:

new_index = [LIKERT[value] for value in LIKERT.keys() if value in data_counts.index]

and here’s a simple fix for the code:

new_index = [LIKERT[value] for value in data_counts.index]

Doesn’t look like much of a problem, but it completely changed the way my data was analysed. Both lines are creating a new index for a pandas dataframe. I have a dataframe that is indexed:

[0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

and I want to replace the index with the correct names from a likert scale that these values refer to:

['N/A', 'Disagree Strongly', 'Disagree', 'Neither Agree nor Disagree', 'Agree', 'Agree Strongly']

so I create a dictionary that maps from keys in the first index, to values for the new index:

LIKERT = {
    0.0: 'N/A',
    1.0: 'Disagree Strongly',
    2.0: 'Disagree',
    3.0: 'Neither Agree nor Disagree',
    4.0: 'Agree',
    5.0: 'Agree Strongly'
}

I then do a little list comprehension that adds the correct new value to the new index, if its key is in the old index. If the key isn’t there, it gets skipped:

new_index = [LIKERT[value] for value in LIKERT.keys() if value in data_counts.index]

All fine, right? Sure, if the index is always in numerical order. Which it isn’t. Using this code, if the index is in a different order, you can get ‘5’ being replaced with ‘Disagree Strongly’ (or any of the values other than ‘Agree Strongly’), and suddenly your analysis is completely wrong.

The second line fixes this by looping through the index, not the dictionary, and so creates the new index in the correct order.
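A tiny reproduction makes the difference obvious (the out-of-order index here is invented for illustration; in practice something like value_counts() will happily hand you an index sorted by frequency rather than by value):

```python
import pandas as pd

LIKERT = {
    0.0: 'N/A',
    1.0: 'Disagree Strongly',
    2.0: 'Disagree',
    3.0: 'Neither Agree nor Disagree',
    4.0: 'Agree',
    5.0: 'Agree Strongly'
}

# counts whose index is NOT in numerical order
data_counts = pd.Series([10, 3, 7], index=[5.0, 1.0, 3.0])

# buggy: iterates the dict, so labels come out in LIKERT order,
# and the 10 responses for 5.0 would be labelled 'Disagree Strongly'
buggy = [LIKERT[value] for value in LIKERT.keys() if value in data_counts.index]

# fixed: iterates the index, so labels stay aligned with the data
fixed = [LIKERT[value] for value in data_counts.index]

print(buggy)  # ['Disagree Strongly', 'Neither Agree nor Disagree', 'Agree Strongly']
print(fixed)  # ['Agree Strongly', 'Disagree Strongly', 'Neither Agree nor Disagree']
```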

A better fix is actually to use the .rename() function, which can rename the index of a dataframe (or the column names) using a dictionary as a lookup, like so:

data.rename(index=LIKERT, inplace=True)

Any values present in the index but not in the lookup are left alone, values in the lookup but not in the index are ignored, and the result is exactly what I need: all my ‘5s’ replaced with ‘Agree Strongly’, and so on.
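For example, with the same lookup (sketching with a small Series rather than my real dataframe; note the 99.0, which is in the index but not in the lookup):

```python
import pandas as pd

LIKERT = {0.0: 'N/A', 1.0: 'Disagree Strongly', 2.0: 'Disagree',
          3.0: 'Neither Agree nor Disagree', 4.0: 'Agree', 5.0: 'Agree Strongly'}

counts = pd.Series([10, 3, 2], index=[5.0, 1.0, 99.0])
relabelled = counts.rename(index=LIKERT)

print(list(relabelled.index))
# ['Agree Strongly', 'Disagree Strongly', 99.0]
```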

So I guess the lesson learnt here is RTFM, and don’t try to be clever and re-invent functionality that already exists.

Weeknotes - 29th May 2017

Sunday, Jun 4, 2017

Monday 29th May

BANK HOLIDAY, innit

Tuesday 30th May

OH GOD, NOW I’VE GOT A DAY LESS TO DO EVERYTHING!

Finished my visualisation coursework marking today. Generally really good quality across the board, and a really enjoyable set of work to mark. As time goes on, I’m liking this visualisation course more and more. It’s fun to teach as it’s an interesting and quite subjective field, which is not usual in a ‘normal’ Computer Science course. There’s lots of scope for discussion and argument and plenty of chances for students to really get stuck into some data analysis and communication and really show off their skills. I get the feeling the mark distribution skewed a little higher than last year, but I haven’t checked that yet.

Also met with the last student who has expressed an interest in our CUROP project for this summer, so we’ll be able to make a decision on that soon and get that project rolling. I also met with another of our CompDJ students about their dissertation project - they’re looking to build a bot that will write articles automatically about particular topics. A very ambitious project, but one that looks to be really interesting.

The other major task on Tuesday was a Skype call with the rest of the organising committee of DataJConf to finalise the accepted talks and sort out the schedule. We had a really great set of submissions, with a good mix from industry and academia. Our programme committee did a great job of reviewing them, so it was a fairly simple task to conduct a quick meta-review of the papers, decide where our cut-off point was, and take the top 8 papers forward to the conference. Sadly, the fact that the main track is only one day this year meant we had to lose some very good submissions, but I’m hopeful those authors will still come along and pitch their discussion topics for the Unconference on the day after DataJConf (we invited them to in their notification emails). The schedule is now online, and it looks like it’s going to be a really good day. Tickets are selling, and the attendee numbers are ticking up. We were supposed to make a decision this week on which room to go for, the big room or the bigger room, but we put that off to see how numbers look in a week’s time. It’s a bit of a gamble, as there’s always a chance that if we do need to switch, the other room will be unavailable by the time we make our minds up, but who doesn’t like a bit of risk in their conference planning?

Wednesday 31st May

A day in which very little was accomplished towards my own goals, but which had to be done. Most of the morning was taken up with a meeting with my counterpart in undergraduate operations, the school manager, and various faces from college about our generally low survey response rates in the School, and how we might do better at communicating with students to foster a culture that encourages these response rates to improve. One of the key points we came up with was that while we’re very good at listening to students as a school, and then acting upon their feedback, we’re pretty rubbish at communicating those actions and changes back to the students. The outcome of this discussion was a need to empower the operations teams for postgrads and undergrads to do more with the various surveys and module feedback questionnaires, to bring actions and recommendations to the teaching and learning quality committee and boards of studies, and to work with the comms team to make sure students know that what they tell us is listened to and acted upon, and is therefore quite important. Essentially it’s about a culture change within the school, and we all know how easy that is, right…

Also had some interesting discussions with my Head of School today about a number of projects I’ve got going on at the moment. I already wrote about trying to coordinate the large number of new programme / programme change approvals that we have happening within the school, but we also discussed a couple of other projects. One, looking at end-of-module feedback, has been going on for a while but is close to being ready for launch. The other was around module review, and how I want to improve that process by moving to a git-based approach, which will allow better oversight and review of module changes and data collection. I’ll talk more about that as the project develops.

At lunchtime we had our first official meeting with Stuart, our third-year student who is working with us for the summer on our Education Innovation chatbot project. He seems to have really hit the ground running and is getting stuck in to building code and designing solutions. Really great to see, and it looks highly likely that we’re going to have something working to test with students in the Autumn.

The afternoon was taken up with an Academic Approval Event in the School of Modern Languages. I was on the panel as the internal member from outside the college. It’s the second approval event I’ve done, and was a fairly pleasant experience. The programme we were looking at was well thought out, and would clearly be a benefit to the school in question. There were the usual typos and small inconsistencies in the documentation, and we had some recommendations that might improve the student experience, particularly around assessment, where there were a lot of essays that might be replaced with some more interesting types of assessment. Overall though it looks good, and I hope they make a success of it.

While all that was going on, we were hosting a hackday over in Bute, a collaboration with The Bureau Local. A team came over from The Bristol Cable and along with our students spent the day looking at voter numbers within local constituencies. I wrote a tiny write-up over at the CompDJ blog, but I was a bit annoyed I couldn’t get more involved, what with everything else that was going on yesterday. Hopefully I’ll be able to get stuck in at the next one, as I’m sure this won’t be the last hackday type event the Bureau organises.

Thursday 1st June

Today was spent interviewing students for another of our summer projects, looking at Journalism Education. We’ve been carrying out a data collection experiment since last summer looking at the skill requirements of the media industry as exposed through job advertising and mailing list postings, and now we’re looking to back that up through a qualitative analysis of journalism school educators and their syllabi. We had 12 students from a range of schools express an interest in working on this project with us, and choosing between them is a very hard task indeed. Luckily m’colleague is leading this project, so most of that particular burden falls on him. Hopefully we’ll have someone in place very soon and we can get the third of our summer projects up and running.

Friday 2nd June

Y.A.D.A.F

Sunday 4th June

My ‘Friday’ was spent working on some analysis of module evaluation feedback. As I mentioned in Tuesday’s notes, we need to do more and better with the feedback given to us by students. I’ve been working for a while on creating some simple dashboards that transform the quite poor output of the module evaluation system into something that is firstly a little more usable by module leaders, but that also looks more like the survey dashboards (NSS, PTES, etc) that we are used to dealing with.

Module Evaluation Dashboards - WiP

The idea is that consistency between the types of visualisations and analysis used will reduce the cognitive burden when trying to assess the feedback and compare across surveys. I’m now starting to put together a system that will create individual dashboards for lecturers and module leaders, and that will also allow comparison between modules on the various programmes and years of our degree schemes, and allow comparison to the school as a whole. With any luck I’ll be able to present this at the next TLAQC and we can start to deliver these to lecturers and operations teams to help them understand what the students are telling them. Today was mostly refactoring my existing analysis code that takes the raw survey data and converts it into percent agree/disagree scores as per the NSS dashboards, and collects the data across the different groupings (years/programmes) of modules.
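The agree/disagree conversion itself is straightforward to sketch. Assuming the NSS convention of a 5-point Likert scale where 4–5 counts as agree and 1–2 as disagree (the module and question names below are made up for illustration, and this isn’t the actual analysis code):

```python
# Sketch: convert raw 1-5 Likert responses into percent agree/disagree
# scores per module and question, NSS-style. Column names are hypothetical.
import pandas as pd

def percent_agree(responses: pd.DataFrame) -> pd.DataFrame:
    """Melt wide per-question columns into long form, then score each
    module/question group: 4-5 = agree, 1-2 = disagree."""
    long = responses.melt(id_vars="module", var_name="question", value_name="score")
    grouped = long.groupby(["module", "question"])["score"]
    return pd.DataFrame({
        "pct_agree": grouped.apply(lambda s: (s >= 4).mean() * 100),
        "pct_disagree": grouped.apply(lambda s: (s <= 2).mean() * 100),
    }).reset_index()

# One row per respondent, one column per survey question.
raw = pd.DataFrame({
    "module": ["CM1101", "CM1101", "CM2102"],
    "Q1": [5, 4, 2],
    "Q2": [3, 5, 4],
})
print(percent_agree(raw))
```

From there it’s just a matter of aggregating the same scores across the year/programme groupings to get the comparison views the dashboards need.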

Weeknotes - 22nd May 2017

Sunday, May 28, 2017

Strange week this week - coming back from holiday, lots of time spent catching up, arranging meetings and organising more meetings for next week.

Monday 22.05.17

Most of Monday morning was spent dealing with all the emails I’d received while away last week. The usual mix of requests for information from admin, queries from current, potential and past students, and a number of things relating to projects that are either about to start or were supposed to have started by now! It took an absolute age to crack through it all. The apocryphal tale of the colleague who just ‘deleted everything’ on the return from holiday with the assumption that anything important would be chased up loomed large in my mind as I replied to my fiftieth message. In a world where ‘responsiveness to communication’ is one of the questions in any number of student feedback surveys, I just don’t think that path is the right one to take.

Monday afternoon saw myself and Glyn working on our talk for Wednesday, taking the usual divide and conquer approach to put together something interesting (we hoped) for the ‘Investigating (with) Big Data’ symposium being held by the Digital Cultures Network.

Tuesday 23.05.17

Another morning of marking. I mentioned last week how pleased I was with the quality of the submissions this year, and it has held up through this latest batch too. The students really seem to have engaged with the module, have thought about what the data says and the message they want to communicate, and have then brought the technical skills to the table to implement their solution. I’m really pleased with how it’s gone. Over halfway through the marking now. It’s supposed to be done by the end of this week, but with two days of training courses and a very busy Wednesday, that’s just not going to happen. I have supplied the necessary apologies to the admin staff and I’m fairly sure they’re not going to hurt me too much.

The afternoon was taken up with meetings with m’colleague, potential CUROP students, and a couple of our MSc CompDJ students who are beginning to think about their dissertation projects for this summer. One of the things Glyn and I discussed was our lack of self-promotion around the activities we do as the ‘Computational and Data Journalism’ team. In the last couple of months we’ve scored research project funding, student project funding, international workshop funding and our students have landed a number of prestigious summer internships, and we’re really not doing a good enough job at shouting about this activity. I’ve resolved to drive this forward a bit better, so came up with a list of potential items for promotion, and I’ll be trying to push those out over the summer, and then keep things ticking over during term time next year.

There was also some movement on the Untappd data project front, as I was finally pushed into responding to my co-investigators with some plans on how to progress from last year’s ICWSM conference paper to a fuller journal paper submission. This is one of those side projects that it’s a real shame to not have more time for, as I think we have a lot of interesting things that we can do, but are all lacking the time to really get stuck in to the analysis. Hopefully we’ll be able to push things forward over summer and get something delivered.

Wednesday 24.05.17

Wednesday started with my first catch up meeting with the DoT for a couple of weeks. I’ve been deputy DoT since September(ish), and we’ve probably not had enough of these meetings. The plan is to make them more regular in the future, and that will probably help with keeping all the plates spinning, as I’m now working on a lot of different projects for the School. We discussed the programme approval process, as we have a number of new programmes in the pipeline as well as some changes to existing programmes going on, and we need to make sure we keep everything coherent. I’ve been tasked with setting up some meetings with the key proposers and the usual suspects within the school to make sure there’s enough coordination going on.

In the afternoon, it was over to John Percival Building to give a talk as part of the ‘Investigating (with) Big Data’ symposium. This was a double hander with m’colleague, and we’d chosen to discuss some issues around large data investigations within news media. Glyn started by presenting some of the more recent large-scale collaborative data investigations that have been carried out by news orgs. I followed that up with a discussion around data openness, transparency, and some of the technical issues that are holding back data journalism. I think the talk went well, people seemed interested and receptive to the ideas we presented.

Sadly I couldn’t hang around for the rest of the symposium as I’d double booked myself for the afternoon, having agreed to go to a briefing for exam board chairs being held over in main building. There’s a few new people taking on the exam board chair role within the school, and although I’m not one of them it was ‘decided’ (no idea who by) that I should also attend the briefing, as I’m probably going to be one of the people called upon to step in if the usual chair isn’t available. It was a fairly dull but not entirely useless presentation on the process of getting ready for and dealing with the aftermath of an exam board. It ticks the boxes though, so now I’m trained and can step into that particular set of shoes if necessary.

Thursday 25.05.17 & Friday 26.05.17

Days 2 and 3 of the ‘Leading Teaching Teams’ training programme that I’d managed to score a place on. This part of the programme was run by the Leadership Foundation for Higher Education, and was probably one of the best training courses I’ve been on so far. I spent a long time reflecting on the way I work, and it really delivered some useful insights. We did a lot of self-assessment and analysis of how our individual approaches may or may not be helpful in managing teams, and I’m looking forward to putting some of the ideas into practice.

As with many of these training courses, one of the added benefits was being able to spend time with colleagues from across the University. It’s always fascinating to find out how others work and to hear about common problems or issues across different schools and colleges, and how they’ve been solved (or not!) in different ways. It’s also nice to get an opportunity to discuss things and to hear that others feel the same way. There was a lot of discussion and dissatisfaction expressed over the two days about the increased corporatisation and commodification of Higher Education. I’d love to tell you that we’ve solved that particular issue, but sadly not. Many did get righteously angry about it though. I suspect a wider societal change is needed to fix it, and all we can do at this level is to keep pushing for that change.