Animated Strava Maps with D3.js

February 13, 2026

In a previous post, I explained how to download your Strava data using a couple of simple Python scripts. But once you’ve got a local copy of your data, what can you do with it? Well, how about an animated map of runs that builds up over time?

In this post, we’ll explore how to create an animated map of running routes using D3.js. We’ll assume you already have your Strava data downloaded and available as a JSON file (activities.json). Our goal is to take this raw data, convert the route information into a format D3 can understand, and then animate the drawing of each run on a map, adding a date label that updates as we go.

All of the code is available in the GitHub repository, and this post walks through how it works.

1. Converting Polylines to GeoJSON

Strava stores each activity’s route as an encoded polyline (a compact string of characters like _p~iF~ps|U_ulLnnqC_mqNvxq…). To map these, we first need to convert them into GeoJSON, a standard format for encoding geographic data structures.

We can use the mapbox/polyline library for this.

// Inside our data processing loop
let mapline = run.map.summary_polyline;

// Decode the polyline string into a GeoJSON object
let route = polyline.toGeoJSON(mapline);

// We also attach the date to the properties for later use
route.properties = { date: run.start_date_local };

This gives us a LineString geometry for each run, which D3 can easily project onto an SVG.
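If you’re curious what’s happening under the hood, the encoding is Google’s polyline algorithm: each latitude/longitude delta is stored as a sequence of 5-bit chunks, offset by 63 to keep the characters printable. Here’s a minimal decoder sketch, using the example string from Google’s documentation rather than one of my runs (in practice you’d just use the library, which also handles edge cases):

```javascript
// Minimal polyline decoder (Google's encoded polyline algorithm format).
// Returns GeoJSON-style [longitude, latitude] pairs, like polyline.toGeoJSON.
function decodePolyline(encoded) {
  const coords = [];
  let index = 0, lat = 0, lng = 0;

  // Read one varint/zig-zag encoded delta from the string
  function nextDelta() {
    let result = 0, shift = 0, byte;
    do {
      byte = encoded.charCodeAt(index++) - 63;  // characters are offset by 63
      result |= (byte & 0x1f) << shift;         // low 5 bits carry the payload
      shift += 5;
    } while (byte >= 0x20);                     // 6th bit set means more chunks
    return (result & 1) ? ~(result >> 1) : (result >> 1); // undo zig-zag sign
  }

  while (index < encoded.length) {
    lat += nextDelta();                         // deltas accumulate point to point
    lng += nextDelta();
    coords.push([lng / 1e5, lat / 1e5]);        // values are scaled by 1e5
  }
  return coords;
}

// Example string from Google's polyline documentation:
// decodes to (38.5, -120.2), (40.7, -120.95), (43.252, -126.453)
const pts = decodePolyline('_p~iF~ps|U_ulLnnqC_mqNvxq`@');
```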

2. Setting up the D3 Map

With our data ready, we need to set up the D3 environment. We start by defining a projection (which converts longitude/latitude into x/y pixel coordinates) and a path generator (which translates GeoJSON into SVG path data).

// Define a Mercator projection
let projection = d3.geoMercator();

// Create a path generator that uses this projection
let geo = d3.geoPath().projection(projection);

// Select our container and append an SVG
let routeSVG = d3.select('.routes').append('svg')
  .attr('width', 500)
  .attr('height', 500);

In my case, I only want to draw the runs within a specific area - if we drew every run in the data we’d have routes from all over the world (or at least the bits I’ve visited and run in). We define a bounding box for the geographic area we’re interested in, then filter the runs down to those that have at least one coordinate inside that box:

let cardiff = [[-3.32322, 51.38586], [-3.14065, 51.51634]];

let runs = data.filter(activity => activity.type === 'Run');
let count = 0;
let runsInCardiff = { type: 'FeatureCollection', features: [] };
runs.forEach(run => {
  let mapline = run.map.summary_polyline;
  // Decode the polyline
  let route = polyline.toGeoJSON(mapline);
  route.properties = { date: run.start_date_local };
  if (route.coordinates.length > 0) {
    if (route.coordinates.some((coord) => {
      return coord[0] >= cardiff[0][0] && coord[0] <= cardiff[1][0] &&
             coord[1] >= cardiff[0][1] && coord[1] <= cardiff[1][1];
    })) {
      count++;
      runsInCardiff.features.push(route);
    }
  }
});

To make sure all the routes in this filtered collection fit on screen and fill our drawing area, we use projection.fitSize() to scale and translate the projection automatically:

// Create a MultiLineString of all the filtered runs to calculate bounds
let runRoutes = {
  type: 'MultiLineString',
  coordinates: runsInCardiff.features.map(run => run.coordinates)
};

// Automatically adjust projection scale and translate to fit the 500x500 box
projection.fitSize([500, 500], runRoutes);

3. The Animation: Dasharray and Dashoffset

The core of the “drawing” effect is a clever CSS/SVG trick that uses stroke-dasharray and stroke-dashoffset to hide each line initially and then gradually reveal it.

  1. stroke-dasharray: We set the dash pattern to be length, length. This creates a single dash that is exactly as long as the line itself, followed by a gap that is also as long as the line.
  2. stroke-dashoffset: We initially set the offset to length. This effectively pushes the “dash” (the visible line) completely out of view, leaving only the “gap” visible. The line appears invisible.
  3. Animation: We transition the stroke-dashoffset to 0. This pulls the dash back into view, making it look like the line is being drawn from start to finish.

The extra element that lets us draw the runs sequentially, rather than having all the routes draw in parallel, is the delay: we stagger the start time of each transition so that each run begins just as the previous one finishes.

routeSVG.selectAll('path')
  .data(runsInCardiff.features)
  .join('path')
  .attr('d', geo)
  .attr('fill', 'none')
  .attr('stroke', 'green')
  .attr('stroke-width', 0.5)
  // Set dasharray to [length, length]
  .style("stroke-dasharray", (d, i, nodes) => {
    const length = nodes[i].getTotalLength();
    return `${length} ${length}`;
  })
  // Hide the line by offsetting it by its full length
  .style("stroke-dashoffset", (d, i, nodes) => nodes[i].getTotalLength());

// Animate!
routeSVG.selectAll('path')
  .transition()
  .duration(50) // Each line takes 50ms to draw
  .delay((d, i) => i * 50) // Stagger start times (next run starts 50ms after previous)
  .ease(d3.easeLinear)
  .style("stroke-dashoffset", 0); // Reveal the line
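As a variation, if you wanted each run to take time proportional to its length rather than a fixed 50ms, the simple i * 50 delay no longer works: each delay needs to be the cumulative duration of everything drawn before it. A sketch of that calculation (msPerUnit is a hypothetical scaling factor, not something from the code above):

```javascript
// Sketch: start each transition when the previous one finishes, with
// per-run durations proportional to (hypothetical) path lengths.
function cumulativeDelays(lengths, msPerUnit) {
  const delays = [];
  let elapsed = 0;
  for (const len of lengths) {
    delays.push(elapsed);        // this run starts when all previous runs finish
    elapsed += len * msPerUnit;  // and draws for a length-proportional duration
  }
  return delays;
}

// e.g. cumulativeDelays([100, 200, 50], 0.5) → [0, 50, 150]
```

You’d then use .duration((d, i) => lengths[i] * msPerUnit) and .delay((d, i) => delays[i]) in the transition chain.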

4. Updating the Date Label

To show the date of the current run, we attach an event listener to the transition. D3 transitions emit a start event when they begin. We use this to update a text element.

First, create the text label:

let dateLabel = routeSVG.append('text')
  .attr('x', 20)
  .attr('y', 30)
  .style('font-family', 'sans-serif')
  .style('font-size', '16px')
  .text('');

Then, update it in the transition chain:

.on("start", function (d) {
  // 'd' is the data bound to the path, which includes our properties
  let date = new Date(d.properties.date);
  dateLabel.text(date.toLocaleDateString());
});

This ensures that exactly when a route starts “drawing” itself on the map, the date label updates to match that specific run.


And there you have it! A data-driven, animated map visualization using Strava data and D3.js.

Simple Strava Downloader in Python

February 12, 2026

Like many runners and cyclists, I use Strava to record activities, and enjoy the analysis and community aspects of the app. But I also wanted to be able to ask questions of my data that the standard dashboard couldn’t answer. I wanted a simple, reliable way to get all my activity data out of Strava and into a format I could easily manipulate and use within other projects.

Enter strava-data.

How It Works

This project is a couple of compact Python scripts that handle the two main hurdles of the Strava API: Authentication and Pagination.

The Authentication Dance

Strava uses OAuth 2.0, which can be a bit of a headache for simple scripts, especially if you don’t want to write a whole web app to handle the authentication flow. Instead, I implemented a straightforward command-line authentication flow. It’s not as slick to use (there’s a bit of copy/pasting between the browser and the terminal), but it gets the job done.

In order to use it, you’ll need to register your own API application with Strava. This gives you access to your own data and is quick to do: you can register at https://www.strava.com/settings/api. Creating an API application will give you a CLIENT_ID and a CLIENT_SECRET, which you’ll use to identify yourself to Strava when authenticating. You’ll need to add these to a .env file in the root directory of the project.

```env
STRAVA_CLIENT_ID=your_client_id_here
STRAVA_CLIENT_SECRET=your_client_secret_here
```

When you run the strava/authenticate.py script, it reads this .env file and generates an authorization URL for you to visit. Open the link in your browser and you’ll be asked to approve the app’s access to your Strava data. Once you approve, you’re redirected to a localhost URL, which will almost certainly show an error page (unless you happen to have a server on your machine waiting for that callback, which is unlikely). The error doesn’t matter, though - the information you need is in the address bar.

All you have to do is copy the code from that URL back into the terminal, and the script handles the rest - exchanging that code for access and refresh tokens, which are saved locally.
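To make the flow concrete, here’s the shape of the two manual steps, sketched in JavaScript to match the examples elsewhere on this site (the actual script is Python; the redirect URI and scope below are illustrative choices, not taken from the repository):

```javascript
// Sketch of the two halves of the command-line OAuth flow.

// 1) Build the authorization URL the user opens in their browser.
function buildAuthUrl(clientId) {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: 'code',
    redirect_uri: 'http://localhost',  // illustrative; nothing needs to be listening
    scope: 'activity:read_all',        // illustrative scope choice
  });
  return `https://www.strava.com/oauth/authorize?${params}`;
}

// 2) After approval, the browser lands on an error page at an address like
//    http://localhost/?state=&code=abc123&scope=read,activity:read_all
//    We only need the `code` query parameter from that address.
function extractCode(redirectedUrl) {
  return new URL(redirectedUrl).searchParams.get('code');
}
```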

Smart Downloads

The core logic to get your data lives in strava/download_activities.py. It’s designed to be run repeatedly.

  1. Load Cache: It first checks your local data/activities.json file.
  2. Find the Gap: It identifies the timestamp of your most recent downloaded activity.
  3. Fetch New Data: It asks the Strava API only for activities that happened after that timestamp.
  4. Update: It appends the new activities to your file and saves it.
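The update step boils down to a timestamp comparison and a list merge. Sketched in JavaScript for illustration (the real script is Python, and these function names are mine, not the repository’s):

```javascript
// Sketch: merge freshly fetched activities into the cached list.
// Each activity carries a start_date string, as in Strava's API responses.
function latestTimestamp(activities) {
  // Epoch 0 when the cache is empty, so a first run keeps everything
  return activities.reduce(
    (latest, a) => Math.max(latest, Date.parse(a.start_date)), 0);
}

function mergeActivities(cached, fetched) {
  const cutoff = latestTimestamp(cached);
  // Keep only activities strictly newer than anything already cached
  const fresh = fetched.filter(a => Date.parse(a.start_date) > cutoff);
  return cached.concat(fresh);
}
```

In practice the cutoff is also what gets passed to the API, so Strava only returns the new activities in the first place; the filter is just a guard against overlap.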

This incremental approach is efficient and keeps you well within the API rate limits, since it never re-downloads activities you already have.

Key Features

  • Zero Database Overhead: Data is stored in a flat JSON file. Simple to read, simple to back up.
  • Automatic Token Refreshing: The script checks if your access token is expired and uses the refresh token to get a new one automatically. You authenticate once, and it keeps working.
  • Rate Limit Safe: Built on top of stravalib, it handles the nuances of API communication.

Getting Started

If you want to try this out yourself, you’ll need a Strava account and a few minutes to set up an API application.

  1. Clone the repository.
  2. Install the dependencies: pip install -r requirements.txt.
  3. Set up your .env file with your Client ID and Secret.
  4. Run python strava/authenticate.py to log in.
  5. Run python strava/download_activities.py to grab your data.

From there, the sky is the limit. You can use Pandas, Plotly, or any other tool to visualize your year in running. I’ve been playing with D3 to do some animation … and I’ll talk about that in my next post.

Senedd 2026

October 31, 2025

Next year is an election year here in Wales. Notably, it’s the first election under a new proportional representation system, which will see 6 members elected from each of 16 new constituencies. It’s an interesting time politically for those of us who live in Wales, but also an interesting one from a data visualisation perspective.

Up until now, election maps in Wales have been relatively simple, and not just because you take a map of Wales and colour it mostly red. The country is divided into several distinct areas, each of which elects a member of parliament (or, in our case, a Member of the Senedd), which means election maps are just a case of drawing the area boundaries and colouring them according to the winner, as we’ve seen in election map after election map. Of course, the Senedd election has always had an interesting wrinkle, in that there has always been an element of proportional representation included: a First Past the Post system used to elect constituency members, and a proportional system used to elect members from the regional lists in 4 different electoral regions.

Representing the results of proportional representation elections geographically is much trickier than with FPTP: we don’t have a single winner in each geographic area, so we can’t just draw the boundaries and colour each area in. I’ve previously played with a couple of different ways to do this - attempting a proportional striping of geographic areas (as proposed by Ondrejka, 2012), or using a fixed shape on top of the geographic area to represent the proportional winners. Neither works particularly well.

However, that attempt also included a hexmap, in which 4 hexes were used to represent each of the electoral regions - and this was perhaps a little more successful. Could this be the solution for the 2026 election results? If we’re going to use hexagons to represent the results, then we need a hexmap of Wales, and the ones I’d created earlier weren’t going to work, because the number of boundaries and areas has changed since 2021. There are some nice tools out there to help with creating hexmaps, and a flavour of JSON for describing them, HexJSON, so all that was needed was to work out how to draw Wales using the right number of hexagons. Time to break out the colouring pencils!

With a little bit of trial and error, I managed to assemble the new constituencies into a reasonable approximation of a hexmap of Wales. All that was left was to turn this into the HexJSON format that describes such maps, and to start playing with how to visualise the results. Fortunately the Open Innovations team have lots of useful tools for working with hexmaps, including their hexmap editor, which made the process relatively straightforward.
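For reference, a HexJSON file is just a JSON object that names a grid layout and gives each hex a q,r position. A tiny illustrative fragment (the keys, names and positions here are made up, not taken from my actual Wales map):

```json
{
  "layout": "odd-r",
  "hexes": {
    "example-1": { "n": "Example Constituency One", "q": 0, "r": 0 },
    "example-2": { "n": "Example Constituency Two", "q": 1, "r": 0 }
  }
}
```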

In a future post, I’ll talk about what else we can do for visualising the proportional results for the Senedd elections in 2026.

DataJConf 5.0

September 18, 2025

Wow, has it been 2 years? Unbelievable.

Last week we held the 5th edition of the European Data & Computational Journalism Conference (#datajconf) in Athens, Greece. It was our first conference since Zurich in 2023, and once again it was, I think, very successful. As always, it was fascinating to see the progress that has been made in the last two years, and how both academic research and industry practice has evolved.

I’d previously thought in Zurich that AI had dominated the agenda, but that was nothing compared to the schedule this year, where it was so pervasive that the few authors whose work didn’t involve AI felt the need to mention this at the beginning of their talks, and perhaps even to excuse it. I think we could count on one hand the number of talks that didn’t mention some sort of generative AI or AI-adjacent technology. I’d also commented last time around that many news organisations were quite immature in their adoption of AI - there were pockets of experimentation, but it was not really baked into newsroom practice or workflows. This conference revealed a vastly changed landscape, with most industry talks showing how AI had become a standard part of the workflow, and how mature AI adoption now was in many newsrooms. Tellingly though, all were keen to stress that they were not working without humans in the loop - nobody has yet reached the point of trusting a pipeline of AI into public-facing output without some human editorial oversight - and it’s not clear that anyone would want to get to that point.

All in all, a fantastic set of academic and industry speakers, some incredibly useful and practical workshops, all rounded off with a truly exciting set of keynote speakers. I genuinely can’t wait for the next one.

DataJConf 4.0 & C+J

June 23, 2023

Last week we held the 4th edition of the European Data & Computational Journalism Conference (#datajconf) in Zurich, Switzerland, and this time it was held jointly with the Computation + Journalism Conference (#cplusj), which is usually held in the US.

I wrote about the genesis of DataJConf back when we held the first edition in Dublin in 2017, and I spoke a bit more about the general ethos of the conference with the third edition in Malaga in 2019. We’ve always encouraged a strong multi-disciplinary approach, aiming to attract journalists, developers, industry professionals and academics from across many fields, including journalism and media studies, computer science and data science. It was really gratifying to see that this year was no different, with a great mix of all these groups, and more, in attendance. Our teaming up with C+J meant that we had more attendees from the US than ever before, which I think will have been a great opportunity to strengthen links between Europe and the US, and has also helped raise the profile of our European conference, the ‘little sister’ to the better-established C+J.

This was our first conference back since the pandemic, it had been 4 years since we last got together and discussed all things Computational + Data Journalism. It was fascinating to see the progress that has been made in the last few years, and how both academic research and industry practice has evolved.

It’s no surprise that AI dominated much of the agenda, with some very interesting and revealing discussion of the use of generative AI by news and media organisations. What was most revealing, though, was that widespread adoption of generative AI tools is still some way off, if it ever happens at all. Most organisations that have experimented with these technologies have found that the unreliability and ‘hallucinations’ they can introduce create all sorts of integrity and trust issues when using them to create content, issues which generally outweigh the benefits. Many newsrooms and organisations are instead, for now, sticking with rule-based/templating automation for content generation, which is much more reliable and controllable. Where generative AI has found use is in transforming and summarising existing content, which again can be more controlled and reliable. On the AI front there was also some discussion of deepfakes, and the potential issues arising there, though little discussion of solutions - perhaps because we don’t have them, or perhaps because existing fact-checking and verification techniques are already sufficient to deal with the problem?

The other big development noticeable at this conference was the increase in algorithmic accountability efforts - news organisations and others working to investigate the impacts and biases present in algorithmic decision-making processes that have a real effect on people and society. It’s an increasingly important area of concern, one that was not really touched on in previous editions of the conference but which is now a focus for many teams.

Team structure was also an interesting revelation from some of the talks and discussions. Where previously the EU has perhaps lagged behind the US in the development and position of data/computational teams in newsrooms and other media organisations, it does feel like there has been a change in the intervening years: in some organisations these teams are now bigger, more established, and perhaps more central. Certainly, whereas in our first edition much of the talk was about small, niche teams on the fringes of media operations, this time a number of talks showed data/computational teams as key to organisational strategy. A good development to see, though it is also clear from several talks that there is still work to do in this area.

For a really nice roundup of some of the issues I’ve touched on above, and more, see this thread from Jim Haddadin.

You can also go back and check through the hashtag to see how the conference unfolded.

Overall it was a really successful conference. Personally I didn’t have to get involved in much of the organisation this time around. The local team did a great job pulling it together, with Bahareh doing most of the steering effort. It was great to catch up with familiar faces from previous editions and conferences, and to meet new people too. I’m looking forward to the next edition …

Students on a scale

April 5, 2022

After the first teaching workshop of our review of undergraduate programmes, we had some initial thoughts about what students in 2025 might be like, and what they might learn.

I’d like to unpack the framing that Carl Jones came up with after that workshop, presenting the complementary, but perhaps also conflicting, elements that might make up a future Computer Science or Software Engineering graduate as a series of points along a set of different dimensions:

CS/SE Specific

  • Fabricators v Assemblers (or White-box v Black-box)
  • Low-level v High-level
  • Work with computers v work with people

General/Transferable Skills

  • Work alone v work with others
  • Rigidly structured v Flexible learning
  • Research-aligned v industry-aligned

With apologies to Carl for anything I get wrong, and with apologies to Computer Scientists and Software Engineers for the cliches and gross generalisations, here’s how I think this works and where it might fit in developing or redeveloping degree programmes:

Fabricators vs. Assemblers

Fabricators build the bits that the Assemblers put together to build the products.

Fabricators work at the level of components and services, they design and implement algorithms, APIs and interfaces. Their work is often about implementing the theoretical ‘new’.

Assemblers take the components and services and put them together into applications and products. They architect solutions to problems using existing code and technologies, sometimes with little (or no) coding. They are focused on the solutions, and the trade-offs that must be made to achieve them.

Low-level vs. High-level

At the low level, the focus is on efficiency: speed, memory consumption, CPU cycles, power usage. Low-level developers work close to the algorithms and the data, often delivering marginal gains that translate to larger gains at higher levels.

At the high-level the focus is more on wider systems, with concerns for system reliability, data consistency and coherence, uptime and response.

At both high-level and low-level there is an explicit understanding of the trade-offs and impact that the design decisions made at that level have on the project/product/output.

Work with Computers vs. Work with People

At the computer end of the scale, we are familiar with a wide range of languages, tools and environments. We write code, we are on the command line and in the text editor.

At the human end of the scale, we talk to lots of different people - our colleagues, our clients, the stakeholders. We can translate and communicate between these groups.

Work alone vs. Work with others

Working alone is not about the solo coder, at home in the garage delivering a project single-handedly, but is about autonomy, independence and self-direction. It is about being able to align yourself with the goals of the project and team but to determine your own direction along that alignment.

Working with others is about collaboration and communication. It is about sharing of all kinds (ideas, time, respect), about being an effective colleague, offering and responding to feedback, and keeping the team’s goals in sight and mind.

Research aligned vs. Industry aligned

Research aligned work focuses on the cutting edge of CS and SE in both the pure theoretical and applied senses. It may not yet have practical applications in industry but will be pushing at the boundaries of the subject area.

Industry aligned links CS and SE to a more practical applied sphere, delivering necessary skills for the digital economy and powering the wider adoption of CS and SE technologies across business and society.

So what?

If we have these six dimensions (there may be more, or we may need to lose some…) what does this mean for our teaching and our degree programmes?

To me, this is a framework within which our teaching sits. It could even be considered a menu. There are clearly overlaps between some of these areas, and I don’t really think a student would get away with only sitting at one end or the other of any one of these dimensions without at least knowing about the other end. However, I can envision a student coming out of Year 1 of a degree scheme, a solid grounding in core skills under their belt, having experienced both ends of the scales, and now they are making the decisions about where to specialise and using these dimensions as a guide for some of those decisions. Perhaps they are leaning towards being a high-level fabricator type who works with people … so we steer them towards modules that cover or lead towards those areas. Perhaps they are a low-level assembler who is very research-aligned, and so again, we steer them in the direction of the modules that will enable those sorts of outcomes.

I think this is a model we’ll come back to as we continue to explore the space surrounding our degree programmes, and it will be interesting to discover the other dimensions, if there are any, and to see if this remains a useful metaphor to base some of the programme design decisions on.

Computer Science Students in 2025

April 1, 2022

Our first teaching workshop of our review of undergraduate programmes was student-centred, and focused on what our cohort of undergraduates might be like in 2025. Our staff got together online and in-person to try and tease out some of the answers to questions like these:

  • who are they?
  • what skills and knowledge do they have?
  • how do they learn?
  • what will they do and learn on our programmes?
  • where will they go once they leave us?

A wide range of opinions were gathered, and there were some interesting common themes. What follows is a summary that sets the base from which we can explore what the future of our programmes may look like.

Who are the students, and what skills and knowledge do they have?

A diverse set of entry qualifications is probably one of the key things that will mark out our student intake in 2025. We’re looking at students with and without A-Levels or T-Levels. Some may have Maths qualifications beyond GCSE; others won’t. The same is true of formal study of Computer Science, the curriculum for which differs substantially between England and Wales. The larger changes to the Welsh school curriculum will not yet have filtered through to affect incoming students in 2025, but they won’t be far off, and they are something that needs to be considered for the future.

While school/college leavers are likely to remain the bulk of the cohort, there is potential for older students at undergraduate level, particularly those who want more depth than a postgraduate conversion course may provide, or perhaps a more applied programme.

We need to remember that Cardiff as a city and a destination is a big draw for applicants, who will come from across the UK. Our international intake at undergraduate level has also grown significantly over the last few years.

How will they learn?

A level of flexibility will be key here. Different students with different support needs will access teaching through a range of methods and modes that enable their learning. While some may appreciate or expect the ‘traditional’ rigidly timetabled lecture, lab and tutorial experience of University, others will be looking for a more dynamic, chunked education accessed on their terms when it is convenient for them - most likely delivered predominantly online, with additional flexible in-person support. Between these two extremes is a cohort of other students who may expect or want something in-between the ‘traditional’ and the ‘new’. To fully support this wide range of preferences across all teaching may be an impossible task. As we move to a more blended model of learning, we’ll need to help our students learn how to become self-paced and self-led learners - something that (as in many industries) is important in CS, given the fast pace of change and the need to keep developing and learning throughout a career. We need to help students realise that we are not the only source of information and knowledge.

Where are they going, and what will they learn?

Students who graduate from our programmes in 2028/2029 will be heading to a wide range of destinations. This range will be getting wider all the time as Computer Science continues to become more pervasive throughout society, as it continues to reach across disciplinary barriers and entwine itself in more and more subject areas, and as the Software Engineering principles that underpin the applications of CS become more and more important. The vast majority of our students will come to us seeking a career in industry rather than research, though the pipeline of students from undergraduates to PhD will remain an important part of what we do as a Research-led institution.

One of the main considerations will be the skills that go alongside the subject-specific knowledge that we deliver. While a grounding in the core concepts of CS and SE is essential, a lot of the technology-specific content of the degrees is less important than developing students as independent, reflective, lifelong learners, able to adapt, change and reskill as the technologies move on and the subject evolves. An emphasis on skills alongside the technical is crucial: team working and collaboration, writing, reflection, critical thinking and analysis, decision making, entrepreneurship. Consideration must be given throughout the degree programmes to crucial factors such as sustainability, ethics, and employability.

Our excellent Deputy DLT for undergraduate, Carl Jones, related a lot of this rather nicely to an Agile-manifesto-style concept: the idea of a range of opposing views, with students sitting somewhere along each of these scales:

  • Fabricators v Assemblers (or White-box v Black-box)
  • Low-level v High-level
  • Work with computers v work with people
  • Work alone v work with others
  • Research-aligned v industry-aligned

Each scale reveals something about the type of Computer Scientist or Software Engineer that a student will become, and you could see this model being refined into a tool that could guide a student through the decision points and learning in their degree as they choose modules and pathways. They’ll need to see both ends of the scales throughout their time, but will likely end up specialising towards one end or the other on some of them by the time they leave. We’ll dig into this a bit more later…

Hybrid Teaching Workshop

March 16, 2022

As I’ve mentioned previously, we’re running some workshops to review some of our programmes, with a view to the changes needed in the future. As part of this process we held our first hybrid teaching workshop this week, and I thought I’d write about the general process before I write about the outputs.

“Our intention is to run this workshop in a hybrid fashion - with people in a location tbc … and also online. We’ll see how much of a car crash that is …”

That’s from the invite to our latest teaching workshop, which we ran yesterday with several participants physically present in one of our seminar rooms, and more participants joining in from a Teams meeting online. It was our first hybrid teaching workshop with staff (though not our first hybrid teaching session by some distance[1]). I was a bit concerned whether anyone would actually turn up in person, especially as the weather was a typically rainy Cardiff day, but in the end we had a bunch of ‘real’ people in the room, and a bunch of equally ‘real’ but less physically present people online. It worked well, and I think it’s worth getting some thoughts down about the process.

The structure

The first thing that helped was that the session was structured as an active one, designed to facilitate reflection among the participants and to gather their input, with a slight cheat in that the amount of ‘whole group’ discussion was artificially limited. The basic structure was:

  1. A short introduction from me to set the context of the workshop, lay out the plan for the session, and how it fits in with the overall review we’re doing. The aim was for this to take 5 minutes. It took 15.
  2. A first activity looking at a potential University applicant in 3-4 years time
  3. A second activity looking at a potential University graduate in 8-9 years time
  4. A third activity asking ‘what would we do if there were no rules?’

Each activity was run as a group discussion, with groups of 6-8 people talking about the topic in question for 15-20 minutes, followed by a feedback session. To capture the discussions, the group present in the room had access to flipcharts and pens, but being a bunch of academics they naturally gravitated to the whiteboard. Online, the groups were in breakout rooms within the Teams meeting, and they all had access to a shared PowerPoint file, with an individual slide for each room on which they could capture their discussion points.

The setup

The online meeting was in Teams, and my laptop was plugged into the room’s AV system so that the Teams meeting itself was displayed on the screens in the room. I kept the directional microphone on the lectern pointed at the laptop so the participants in the room could definitely hear the people online. A few of the ‘physical’ participants had their laptops with them and were also signed into Teams, so whenever they wanted to contribute to the discussion they could unmute and speak into a device near them rather than relying on my laptop mic in the centre of the room. I had to remember to mute the audio on my laptop during those moments to avoid horrible feedback.

The reflection

Did it work? Yes. Participants both online and in the room were able to take part fully in the session. Some of the things that worked well:

  • Shared .pptx files were great. This is a technique I’ve used before, and one that has been written about in a few places online. Having each group working on a different slide makes it very easy to see which groups are getting on with the task without needing to hop in and out of the breakout rooms. I added an extra slide in each deck for the team in the room, so when we were wrapping up the discussion I could take a photo of the ‘physical’ team’s whiteboard and drop it into the shared PowerPoint, letting the people online see the full contribution from the room.
  • The feedback for each activity was also made simple by the shared PowerPoint - we just shared it within the meeting and moved through the slides, asking each room to highlight anything on their slide that they thought was pertinent.
  • By monitoring the online text chat, the people in the room were also able to participate in the side-channel discussion that often happens in online meetings alongside the main thread of audio conversation.

Some caveats on this though:

  • It was a smallish workshop. We had 7 people in the room, and another 24 online. I think it’s scalable - certainly the notion of sharing the file between all groups would work - but with more groups in the room there’d need to be a bit more time for adding the ‘in room’ content to the online presentation.
  • I assume (though not with any evidence) that there’s an upper limit somewhere on the number of people who can be editing a PowerPoint presentation at the same time before something craps out on you.
  • You do need to be comfortable with multi-tasking for this. During ‘whole group’ sessions you’re trying to watch the online text chat for pertinent comments, look out for people online wanting to contribute, and also keep an eye on the room. During the breakout activity you need to keep an eye on each room’s chat, the activity in the shared presentation, and the conversation going on in the room.
  • As I said, the amount of ‘whole group’ discussion was limited. This was partially down to time, as there was a lot to do in a short session, and partly because there is a further asynchronous feedback/discussion space available online after the workshop. It was also because whole-group discussion is probably the one activity that may not work so well in a hybrid session such as this, so minimising the amount of discussion between groups made sense.

Really though, the only thing I’d change in future is to try to have more of a balance of in-room vs. online participants, so perhaps at the next one we’ll try and have cake to entice a few more out of the digital space. We’ll also avoid clashing with an event in the main school building that’s booked all the good seminar rooms, so we can hold it in the building that people actually work in.

The feedback

Other people also seemed to think it worked.

The outro

Some thoughts there on the hybrid workshop we ran. I thought it would be useful for people to hear how it went before we dig into what we actually got out of it.


  1. I’m not here to rehash any of the arguments about hybrid teaching, or hyflex, or whatever the hell people are calling it now. My experience of hybrid teaching has been good to date, and I’ve had numerous students thank me for running hybrid sessions when they’ve not been able to make it in in person. However, as my Head of School noted today, it’s probably not for everyone; it likely suits the sort of educator who thrives on a bit of chaos in their teaching sessions, and it turns out that’s me. ↩︎

Programme Review

March 8, 2022

So you’ve decided to embark upon a review of your teaching programmes

We’ve started looking at what we do in some of our programmes. For now we’re starting the process in-house, which means the Scholarship group within the School get to set the direction, speed and path that we take[1]. I thought it would be helpful to lay down some ideas and principles for this process. There are a few things I’d like this review to be, in no particular order:

  • open - we’ll be sharing outputs from workshops (and the inputs to workshops) widely within the School and inviting contributions at all stages from all key ‘stakeholders’[2]. I’m also going to be posting here whenever key outputs are created, so they can get sanity checked and have pressure applied by public scrutiny, which I think will help keep us on the right track.
  • inclusive - this isn’t just going to be a cabal of five or six individuals deciding how we should do things. All staff who want to be involved will be able to be involved. Students both past and present will be included in the design process, as will other key parties such as our external examiners, our industry partners and our external advisory board.
  • comprehensive - this isn’t tinkering around the edges. Let’s really test ourselves here - prod and poke at all the things we do and question why we do them. We’re only going to get to do this once for a good few years, so let’s not squander the opportunity by shying away from the difficult questions.
  • limitless - at least initially, let’s not worry so much about practicalities; those can come later. To begin with, let’s think about how and what we’d teach in an ideal world unconstrained by academic regulations, workload, and academic systems and processes. We can compromise the dream later.
  • iterative - if we need to go back to an earlier stage based on things we’ve worked out in a later stage, we will. Each stage may need many repetitions to capture contributions from all the involved parties, and that’s fine. We’ll go round as many times as we need to get it right…

Why do these things matter? It’s all based on my past experiences of programme development. We’ve done some of the above in most of our previous developments, and when we have, they’ve been a positive part of the process - but we’ve never done all of them at the same time. I’ve also seen quite a few programme developments that have not been up to scratch, and they’ve usually done the exact opposite of these things. So these, then, are the principles that I think will lead us to a successful conclusion, whether we decide that a full redevelopment is needed, or that there’s nothing that actually needs to change.

The first task for us is to determine how much change is needed in our undergraduate programmes, if any at all. The model for this is fairly simple: decide what the ideal Computer Science and Software Engineering degree programmes would look like in 2025, then examine how our degree programmes currently are, and if the two are not the same, some change is needed. So that’s what we’re going to do next…


  1. I’m aware that there’s an education development service coming down the track at Cardiff. I got to see a snippet of it, and thankfully it matched my expectations in many ways. However I have no idea how far along the track we are, so we’re forging ahead a bit and we’ll let them catch up to us later as I’m sure they will. At least for now with high-level programme design I’m confident in our abilities, because not to blow our own trumpets but we are really quite good at this. ↩︎

  2. god this management speak is depressing, but I’m struggling for a better and less nausea-inducing phrase. ↩︎

Teaching CS

March 7, 2022

in which we start to think about redeveloping our programmes

Back in early 2020 we had plans. Big plans. We were going to look ahead, far ahead, and think about what Computer Science education really was, what it meant, and how it might change over the next decade. I was planning to scope out a project that enabled us to go and visit people, people wiser than us (like we did at TuE), and see what they thought about it, and how they did it, and then we were going to synthesise all that into a plan for the next decade of how we were going to teach Computer Science.

Then there was the small matter of a global pandemic, and instead of spending a couple of years looking at what might change in teaching over the next decade, we implemented a decade’s worth of change in six months. The last two years have seen an unprecedented shift in how we deliver CS education, compressing changes we could barely see on the horizon, and had only begun to think through in our plans, into six months; a slow, planned, cautious rollout became a fast, reactive, dynamic launch. We went from ‘the old way’ to entirely online to blended learning in the space of two academic years, and now it is time to reflect on those changes and decide what they mean for the future.

At the same time, this is CS that we’re talking about: that ever-shifting, always-developing, forever-moving subject with its tentacles firmly embedded in all of modern society, and its boundaries constantly blurring as it impacts disciplines that previously had nothing to do with computers and computing. Our student intakes arrive ever more familiar with computational thinking, with algorithms, with data structures, with programming, with technology, and the places they go after us become ever more many and varied.

So the review of how and what we teach, delayed for 2 years, is back on.

There are a lot of drivers and inputs to this review, not just the changes to teaching in the last two years and the pace of change within the subject. The School itself has changed, growing in size and shifting in research focus. We’re doing a wider strategy review in the School at the same time, which is also looking at the future of CS teaching. The University itself has changed, with new support for teaching development, and further changes on the horizon. The School is due for academic revalidation of everything we do in a couple of years, and there’s no harm in getting a head start on that. The QAA are about to publish new benchmark statements for the subject, accreditation criteria have been revised, and there’s a significant amount of learning from our involvement in projects like the IoC to put into practice.

I’m planning to do this in as open a way as possible. I’ll be sharing plans, outputs and thinking as we go. As I said in my invite to the first staff teaching workshop as part of this review, this could be a car crash, but let’s see what happens anyway…