Digital Marketing & Communications

We've seen 1s and 0s you wouldn't believe

Topic: Tools

The brief history of the content transition board

📥  Beta, Communication, Tools

We recently returned to a content sprint that was started a year ago. Digging up the old Trello board made us feel a bit nostalgic and quite pleased with ourselves - we’d come a long way since then.

When we began the content transition project, we didn’t really have a clear idea of how to organise the process. Our first attempt at a transition Trello board reflects that.


Our first iteration of the transition Trello board.

The idea was to build it around the top-level stages of the process. Each card represented a stage, such as content analysis or completing a specific set of training.

Comparing that board to what we have now made us chuckle.


The transition Trello board in its current form.

So what’s happened in between those two versions of the board?

We realised we needed to build the boards around actual content

In the next iteration, we created a card for each new content type and copied everything identified as that type in the inventory spreadsheets into a checklist on the card.


We trialled posting all the content from the inventories onto cards based on content type.

We would then make an individual card for each item on the checklist and tick it off the list once that had been done, so we could track progress.

For a reason none of us can remember, we put those cards in a column called “Epics”. Obviously, they are not epic user stories, and this caused confusion - amongst ourselves as well as with the publishers we were working with.

We quickly realised it was very difficult to avoid duplication or prioritise content.

We realised we needed some actual epics to organise content in a meaningful way

The next iteration was already a big improvement. We organised the content into cards so that each covered, more or less, an epic user story.

We then pasted the relevant content from the inventories into the checklists again, but this time we used the full URLs, making it easy to check the content without having to copy, paste and type in the domain manually.

Why on earth we weren’t doing this from the start, none of us can tell you. Sometimes even the most obvious things escape you when you’re immersed in a process.


Categorising content based on user stories worked better.

We realised we needed to take a step back from transitioning individual pieces of content

Although we were trying to structure to-be-migrated content around user stories, we quickly ran into problems again.

Working through heaps of content as a team at a fast pace meant we ended up working on user stories that were very similar. As a result, we created duplicate content without realising it.

It was time to take a step back. We went back to the principles and started from user needs. This time, we were careful to keep existing content out of sight at the user story planning sessions. This helped us stay focused on what the content should be rather than what it had been.

Having individual old pages on a checklist must have, subconsciously, made us think we needed to transition each page as an individual page. So we stopped doing that, and started including the old pages on the card as a reference instead.

We also improved our housekeeping discipline around individual cards. Each card now needs to have:

  • a user story
  • links to all the relevant existing content
  • links to the draft in the new Content Publisher (backend) and the preview
  • someone identified as the content subject expert

Each card has to have enough information for any one of us to be able to pick it up.

We have also tweaked the stages each card goes through. “Doing, review, done” has morphed into “Substantive edit, 1st review, more edits, fact-checking, final proof and ready for live”.

The process takes a long time but we have made peace with it for now. If we’ve learned anything in the process of transitioning content so far, it’s that you can’t rush good content.

There’s still room for improvement - there always is - but for now, this is working for us.

 

Visual regression testing

📥  Communication, Design, Style, content and design, Tools

Our new site consists of 15 different layout templates. Each one of these is further broken down into numerous different design patterns for consistently displaying content. The rules that govern the presentation of these patterns (or elements, if you are familiar with atomic design) are generated from a combination of the Zurb Foundation framework and our own 'Origins' framework - all in all, over 2,000 rules spanning almost 6,500 lines of CSS.

With this level of complexity, it is extremely hard work to track the effect of any change: there is almost certainly an unexpected knock-on effect from changing, re-specifying or removing a rule.

Up until now, we have relied on our in-depth knowledge of the site to know where we expect changes to appear, and used BrowserStack to quickly check a representative sample of our layout templates.

However, this requires a fair whack of time to run, and also needs a person to sit and look at each snapshot that's generated. And they need to know what to look for.

None of that is optimal, so we needed a way to automate the process. Enter visual regression testing.

We followed an online guide by Kevin Lamping to set up a prototype visual regression testing framework for Origins and the site templates.

Our prototype is in a repository here: https://github.bath.ac.uk/digital/visual-regression-testing/ (you will need a University of Bath login to view this). It contains a README with instructions on how to get it up and running.

Essentially, what it does is use some clever existing technology (WebDriverIO, WebDriverCSS, graphicsMagick, Selenium standalone and Node) to make a browser load a specific page, take a screenshot of a defined element, and then compare it to a baseline image. If there are any visual changes, it will then create a 'diff' file showing the change - and also alert us by throwing an error.

Snapshot of the website header

First we create a baseline image.

Snapshot of an updated website header

Then, when a change happens, another image is generated and compared against the first.

A diff file of changes to the header

If there are any visual changes, a third file is generated showing these changes.
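
Under the hood, the comparison is just an image diff. To illustrate the idea in Ruby - this is a conceptual sketch shelling out to GraphicsMagick directly, not our actual WebDriverCSS setup, and the file names and error threshold are placeholders:

require 'fileutils'

def check_against_baseline(current, baseline, diff)
  unless File.exist?(baseline)
    # First run: promote the current screenshot to be the baseline
    FileUtils.cp(current, baseline)
    puts "Baseline created: #{baseline}"
    return
  end

  # 'gm compare' writes a visual diff image to the -file target and
  # exits non-zero when the error metric exceeds -maximum-error
  passed = system('gm', 'compare', '-metric', 'MSE',
                  '-maximum-error', '0.001',
                  '-file', diff, baseline, current)
  passed ? puts('No visual change') : abort("Visual change detected - see #{diff}")
end

check_against_baseline('header-current.png', 'header-baseline.png', 'header-diff.png')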

Currently we are running these tests manually, but ultimately we will integrate them into our continuous delivery framework so that the tests run automatically whenever a new build of the CSS is pushed to our staging server.

Pretty neat.

 

Get told immediately when your tests pass or fail with Guard and terminal-notifier-guard

📥  Communication, Development, Howto, Tools


Guard in action

Got your feedback loop between writing code and executing tests down to the shortest time possible? Of course you have. After all, you know your tried-and-tested shortcut keys.

But there may be something better than your years-old habit.

Just run Guard

Guard will watch your code for changes. When it detects one, it'll fire up your tests for you in the background. Sounds pretty useful, right?

Get notified

With Guard on its own, you'd still have to Cmd + Tab to your terminal to see the test results.

But Guard also supports plugins to send out notifications. The one I use for my MacBook is terminal-notifier-guard, but there are a bunch of others you could use for your platform.

The advantage of a desktop notification, as opposed to keeping your terminal window visible, is that you can keep your editor fullscreen. Even better, you can be looking at something else entirely but still see when your tests have finished. This is great for when your tests take a non-trivial amount of time.

Setup

It's as simple as adding this to your Gemfile:

group :development do
  gem 'minitest-reporters'
  gem 'guard'
  gem 'guard-minitest'
  gem 'terminal-notifier-guard'
end

In our case, as well as Guard itself and the notifier, we've also added the minitest-reporters and guard-minitest gems.

Then follow the README for Guard on how to initialise and install Guard.
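
Running "guard init minitest" then generates a Guardfile, which tells Guard which files to watch and which tests to run in response. A minimal Guardfile along the lines of the examples in the guard-minitest README looks something like this:

# Guardfile
guard :minitest do
  # Rerun a test file whenever it changes
  watch(%r{^test/(.*)_test\.rb$})

  # When an app file changes, run its corresponding test
  watch(%r{^app/models/(.+)\.rb$})      { |m| "test/models/#{m[1]}_test.rb" }
  watch(%r{^app/controllers/(.+)\.rb$}) { |m| "test/controllers/#{m[1]}_test.rb" }

  # When the test helper changes, rerun the whole suite
  watch(%r{^test/test_helper\.rb$}) { 'test' }
end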

Small savings add up

It might seem very tiny, but the amount of thinking required to remember your key combos and execute them through your keyboard is actually valuable brain power you could be applying to writing code instead. And if you run tests frequently, these savings add up to an even greater amount.

In his talk at Bath Ruby 2015, Ben Orenstein recommended running your tests from within your code editor, which he demonstrated in Vim. Guard is just a step on from that.

I've also started using the guard-livereload gem recently. Used with the LiveReload browser extension, it automatically refreshes my browser whenever I change a view, showing me the changes immediately. Super useful.
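
It slots into the same Guardfile. The stock patterns from the guard-livereload README look something like this:

guard :livereload do
  # Refresh the browser when views, helpers, static assets or locales change
  watch(%r{app/views/.+\.(erb|haml|slim)$})
  watch(%r{app/helpers/.+\.rb})
  watch(%r{public/.+\.(css|js|html)})
  watch(%r{config/locales/.+\.yml})
end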

Happy coding!

 

Getting ready to go live

📥  CMS, Style, content and design, Tools

As you might have noticed from our latest sprint notes, we're getting pretty close to shipping some of our first full sections of the beta site.

We've had a handful of individual pages live for months now, all for new content. But soon it'll be the first time we replace an existing section with new content from our new CMS.

Before we ship anything, there are a few things we have to do to get these sections – and the beta site as a whole – ready to meet the world.

Our review process

All content in transition goes through a five-stage process:

  1. Substantive edit and review
  2. Copy edit
  3. Fact-checking
  4. Final proofread
  5. Sign-off and go live

We use Trello to manage this process. Every section has a board, every stage has a column, and every piece of content has a card. As the content goes through the stages, it moves across the board, and eventually into the final column: 'Complete'!

(more…)

What tech we use to test our CMS: Minitest and Capybara

📥  Development, Tools


The road less travelled

For the Editor side of our new CMS, we have made a couple of choices in testing that are a little bit off the beaten track.

For starters we are using Minitest instead of RSpec. Most Rails devs will pick RSpec, the de facto community standard, even though Minitest is what actually ships with Rails. And Rails devs who have been around for a few years are already familiar with RSpec.


You say shoulda, we say assert

We favoured Minitest because of the low barrier to entry it provided us. As a dev house that had mostly written apps in Java in the past, we were already comfortable with unit tests, and already familiar with the assertion style. We felt that, as a new DSL to learn, it was a much smaller and easier set to absorb than RSpec.
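
To give a flavour of the assertion style, here's a small, made-up example - the Page model is a stand-in, not our actual CMS code:

require 'minitest/autorun'

# A stand-in model so the example runs on its own; in the real app
# this would be something like an ActiveRecord model
Page = Struct.new(:title) do
  def valid?
    !title.nil? && !title.empty?
  end
end

# A plain Ruby class with assert/refute methods - no spec DSL to learn
class PageTest < Minitest::Test
  def test_page_has_a_title
    page = Page.new('Visual regression testing')
    assert_equal 'Visual regression testing', page.title
  end

  def test_page_without_a_title_is_invalid
    refute Page.new(nil).valid?
  end
end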

On top of that we're using Capybara to run functional tests. This is yet another DSL to learn - we could've overwhelmed ourselves with DSLs! Our Capybara tests are again written with Minitest assertions.
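
A Capybara functional test in this style might look like the following. It assumes a Rails test environment, and the route, form label and flash message are invented for illustration:

require 'test_helper'
require 'capybara/rails'

# Capybara drives the page-level interactions; the final checks are
# still ordinary Minitest assertions
class CreatePageTest < ActionDispatch::IntegrationTest
  include Capybara::DSL

  def test_editor_can_create_a_page
    visit new_page_path
    fill_in 'Title', with: 'Our first beta section'
    click_button 'Save'
    assert page.has_content?('Page created')
  end
end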

These choices are not without downsides.

With RSpec and specs being the most common approach, examples and documentation are almost all written in that way, so we still have to negotiate the nuances of spec tests when reading examples. And because we're still unfamiliar with spec tests, they're slightly impenetrable. On balance, though, we've lowered the barrier to adoption and got to shipping code quicker. We were already on a learning curve with Rails itself, so the early win was worth it.

The wider picture

With our stack we have two discrete services, glued together by Redis. Our current problem is that we only have tests for the CMS Editor and not the end-to-end publishing process. We do have some unit tests on the CMS Publisher middle layer - the bit that receives data from the CMS Editor (via Redis) on publish (or preview) and calls Hugo to render out our pages as static HTML. What we want is a way to test the functionality from the point of creating a new piece of content all the way to rendering it out as a final piece of HTML. Currently, the impact on the front end of changes we make in the CMS Editor (for example) is obscured from us.

What we'd like to do is make use of something like Docker and auto-deploy all the apps via our CI server, Bamboo. We'd then have an end-to-end containerised system we could run full functional tests against. We'd also end up with the front-end outputs, which we could run regression tests on.

This is all stuff we plan for the future. Hopefully the next time I post we'll have achieved some of that!

The design function Christmas wishlist 2015

📥  Design, Tools

After a busy year, the onset of Christmas always provides a brief moment to step back and reflect upon everything that has happened over the last 12 months.

2015 has been our year for honing workflows, trying out new tools and technologies, and shipping, shipping, shipping.

This intensive regimen has left us shouting "This is great!" almost as much as "Why can't you just work, dammit!" Here are the top five things the Digital team design function would really like to find under our Xmas tree to make us feel all happy and fuzzy.

5. A GUI Git client with interactive rebasing

42 un-squashed commits in that PR? You have fun with that…

My chosen Git tool - Tower - lacks the functionality to perform interactive rebases. This means I end up having to drop into the command line to squash commits for my PRs, and that’s just asking for trouble.

Please Santa, can we have a well-designed, feature-rich Git GUI client with interactive rebasing for Xmas?

4. sudo vagrant ‘please just work’

Vagrant has been a real boon, allowing us to run up an exact copy of our publisher and editor apps on our local machines. However, it can be a bit of a nightmare to manage if you don’t lean towards the technical side of things.

Please Santa, can we have a better way of managing and configuring Vagrant boxes that doesn’t mean I have to know what my $PATH is?

3. Web fonts that work everywhere

As a design team we’ve really gone hell for leather with our implementation of web fonts this year. Although we’re very proud of what we’ve achieved, it has been hard work tackling cross-browser rendering issues, OpenType hassles, caching and pre-loading, font verification and much, much more.

Please Santa, can we have a standardised web font format that works perfectly on EVERY browser and platform?

2. A version of Sketch with the line-height bug fixed

We’ve pushed almost all of our preliminary design work through Bohemian Coding’s Sketch over the last 12 months. I've been working with Sketch since the heady days of version 1.0 but the software was completely new to Liam. Although he could see the benefits of the stripped-back interface and laser focus on interface design, he was constantly infuriated by Sketch’s little ‘eccentricities’ and general bugginess.

Please Santa, can we have a version of Sketch that’s stable, reliable and fixes that awful line-height bug once and for all?

1. Project Comet to hurry the heck up

Adobe’s last great hope? Maybe that’s a bit dramatic, but Project Comet is looking like a genuinely interesting contender in the race to grab the interface design crown.

Please Santa, Comet looks better than anything Adobe has made in ages. Can you get them to speed it up a bit? (Or at least get us on the Beta.)

Have a very merry Christmas and a happy new year! May all your workflow wishes come true!

 

bath.ac.uk content by the numbers and the next steps

📥  Beta, Development, Style, content and design, Tools

In March, the Digital team set out on an ambitious project to inventory bath.ac.uk.

Our purpose was to learn more about the content we create and the publishers who write it. Gathering this knowledge with a thorough inventory process is something that I have wanted to do ever since I joined Bath in 2011.

This is what we found, how we found it and what our findings mean for how we plan, govern and build better content.
(more…)

 

Deploying Rails applications using Mina and Bamboo

📥  Development, Tools

We use Mina to deploy our Ruby on Rails projects. With our deployment scripts written and packaged into a gem, we wanted to make use of our continuous integration server to build and deploy automatically to the production and staging environments.

Our continuous integration server is Bamboo, which needed a little configuring to play nicely with Ruby. The packages containing the Ruby binaries were installed on the server and recognised by Bamboo (via auto-detection on the Server Capabilities admin screen), then we added the free Bamboo Ruby Plugin to give us Bundler tasks. This kept the deployment plans easy to understand by avoiding, as much as possible, the need for manual scripting.

The important tasks for us were "Bundler Install" (obvious what that does) and "Bundler CLI", which lets us run our Mina deployment commands in the Bundler environment with "bundle exec". A bit of messing around with SSH keys and it all works beautifully.
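
For context, a Mina deploy script is just Ruby. A minimal config/deploy.rb based on Mina's standard Rails template looks something like this - the domain, paths, repository and restart step below are placeholders, not our actual configuration:

require 'mina/bundler'
require 'mina/rails'
require 'mina/git'

set :domain,     'staging.example.ac.uk'
set :deploy_to,  '/var/www/app'
set :repository, 'git@github.example.com:digital/app.git'
set :branch,     'master'

task :deploy => :environment do
  deploy do
    # Each invoke queues up commands to run on the server, in order
    invoke :'git:clone'
    invoke :'deploy:link_shared_paths'
    invoke :'bundle:install'
    invoke :'rails:db_migrate'
    invoke :'rails:assets_precompile'

    to :launch do
      # Restart the app server - the exact command depends on your stack
      queue "touch #{deploy_to}/tmp/restart.txt"
    end
  end
end

With a script like that in place, the "Bundler CLI" task boils down to running something like "bundle exec mina deploy".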

The final setup is:

  • A Bamboo build plan pulls the code from GitHub and makes it available as an artifact (tests are run here)
  • A Bamboo deployment plan takes the artifact, runs "bundle install" to get the code required for deployment then runs the mina tasks to push it to the server

Bamboo allows us to trigger each of these steps automatically, so we can deploy a new version of our application just by merging code into the appropriate branch in our repositories. We deploy to both our staging and production environments in this way, which makes for a simple workflow, all in GitHub. The results of the build are sent to us via email and appear in our Slack channel. Bamboo has also let us schedule a monthly rebuild and redeploy of our staging environment, so we will be alerted if a piece of infrastructure has changed and caused our tests to start failing.

 

Tracking the progress of the Beta: one whiteboard, 300 spreadsheets

📥  Beta, Tools

Very early into the Beta, we started to face a big question: how do we track the progress of the Beta programme?

We use Trello and Pivotal within the team, but we didn't just want to track what we were up to - the Beta involves dozens of people, working on hundreds of sections and almost a million assets. We needed to be able to see the progress of individual sections, whole organisations, and the entire programme overall.

We went with one of our favourite solutions: stick things on the wall.

In praise of the wall

Our office walls do more than hold up the occasional Zelda poster. We use them as sprint boards, team schedules, annual leave calendars and reminders of our delivery principles.

The biggest benefit of sticking something on the wall is that it's very visible. You can quickly glance at it while you're working, or discuss a project without crowding around one person's desk - which is great when we have office visitors.

So we stuck our tracker on the wall, and named it the Totally Awesome Content Trackatron.

Rich "smizing" in front of the Trackatron

Rich does his best smize in front of the Trackatron. We added a lot more cards after this.

(more…)

 

How we prioritise our requirements for apps

📥  Tools

One of our ongoing programmes of work is improving the way we handle events. Events are managed and promoted in a broad range of ways across the University, using everything from our online store to Google forms and spreadsheets. This can be frustrating for organisers to manage and confusing for users to access - we think we can improve things for both the people who organise our events and the people who attend them.

In November, we spent three week-long sprints developing an events listing app. This was created entirely in-house, but when it came to incorporating a booking system, we decided to look at third parties. In February we set aside another three weeks to evaluate event booking apps and do some research with event organisers around the University (more on that later).

Creating our list of requirements

We started with a wishlist of what we wanted this app to be able to do. We ended up with 20 different requirements covering features to help event organisers, event attendees and us.

We also identified a few requirements (like payment) which were out of scope for this sprint, but worth evaluating now for possible future work.

(more…)