Digital Marketing & Communications

We've seen 1s and 0s you wouldn't believe

Tagged: data

Early review of research ethics content performance


Categories: Beta

In October, I worked on improving the information we provide about research integrity and ethics. To deliver the new section, I worked with subject matter experts in the Vice-Chancellor's Office and the Office of the University Secretary.

When we started, the content was essentially a single page with multiple tabs, a large number of links, and subheadings that were generic or duplicated. Together, we set out on a quest to make the process simpler and important tasks easier to spot and complete.

We shipped the new content in mid-November, so it's still early days. But we can already see from the analytics that we've made a huge improvement.

Making things simpler

Our biggest aim was to make it easier for users to understand what they need to do to conduct ethically responsible research. For this, we reworked parts of the original content into a plain English guide.

Plain English is about writing in a straightforward manner so that the content can be more easily understood by a wider audience.

Looking at the analytics, this guide is by far the most visited individual content item, with 517 pageviews compared to the next item with 344. It has an average time on page of 2:29. These stats suggest that it's being both found and read.

On that note, the fact that we can even start to draw comparisons between individual items of content is a huge improvement in itself. With the old page, all we had was an overall visitor count for the whole page, and our best guess.

Coherent user journey

One of the biggest changes is that the new Collection is making it easier for people to move on to the next step in completing their task. The bounce rate has gone down from 64% to 30%, which suggests that people are finding the content relevant.

The main purpose of both the old page and the new Collection is to point users to the relevant information. The change in the average time on page (old 4:32, new 1:15) suggests that it's a lot easier for people to find what they're looking for. This is supported by the fact that 80% of people are moving on to another item of content, compared to 63% on the old page.
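As a rough sketch of the comparison above, the changes can be computed from the figures quoted in this post (the helper functions here are hypothetical, not part of our actual reporting setup):

```javascript
// Sketch: compare old vs new page metrics using the numbers quoted above.
// The helpers are hypothetical illustrations, not our reporting code.
function percentagePointChange(oldPct, newPct) {
  return newPct - oldPct;
}

function secondsFromMinSec(minSec) {
  // "4:32" -> 272 seconds
  const [min, sec] = minSec.split(":").map(Number);
  return min * 60 + sec;
}

const metrics = {
  bounceRate: percentagePointChange(64, 30),          // -34 points
  continuedToNextItem: percentagePointChange(63, 80), // +17 points
  avgTimeOnPageChange:
    secondsFromMinSec("1:15") - secondsFromMinSec("4:32"), // -197 seconds
};
```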

There is also a big difference in what content they're moving on to. From the new Collection, users go to the main items of content in the section - the top three items are two guides and our statement on ethics and integrity. From the old page, users were moving on to a more random set of content scattered across the website, with no clear indication of any shared top tasks.

Celebrating success

So yes, it's still very early days. But nearly three months after going live, the analytics suggest that users are finding it easier to navigate and read the content we've created.

In a transition project of this size, these little successes are worth noticing and shouting about. By sharing them, we not only keep our stakeholders informed but also provide a useful example for colleagues in the wider Higher Education community.

I compared data on the new content, from the launch date to the present, with data on the old content from the same period one year earlier.
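That comparison window can be sketched in code. Assuming a mid-November launch (the exact dates here are illustrative, not the real report configuration), shifting the same span back one year looks like:

```javascript
// Sketch: build a year-over-year comparison window.
// Dates are illustrative, not the exact report configuration.
function shiftBackOneYear(date) {
  const d = new Date(date.getTime());
  d.setFullYear(d.getFullYear() - 1);
  return d;
}

const launch = new Date(2012, 10, 15); // 15 November (months are 0-based)
const today = new Date(2013, 1, 8);    // the "present" for the report

const newRange = { start: launch, end: today };
const oldRange = {
  start: shiftBackOneYear(launch),
  end: shiftBackOneYear(today),
};
```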


Review of browser usage stats


Categories: Development, Tools

I recently thought it would be worth looking at our browser stats to see which browsers people are actually using when they visit our site. When I saw the recent numbers I was a bit surprised and thought it would be worth reviewing the changes over the past few years to find out:

  • Where have we come from?
  • What has changed in the past 3 years?
  • Where are we now?

I also wanted to produce some graphs to display the information visually rather than having lists of numbers in tables.

Show me the graphs

I'm going to jump straight into the results now; the details of where the data came from and how I produced these charts are covered later for those who are interested.

What our users were using in July 2008:

Browser Share / % Visits - July 2008

What our users now use (March 2012):

Browser Share / % Visits - March 2012

These 2 charts are massively different! They show how much has changed in the browser market, and consequently in web development, in less than 4 years.

No Chrome, no Opera, no Android to speak of in July 2008 (see disclaimer at the bottom) and Internet Explorer's share has more than halved.

How did we get here?

The change over time from July 2008 to March 2012:

Browser Share / % Visits per month

The same data as above, in a line chart:

Browser Share / % Visits per Month

So we see the steady decline of IE, the rapid growth of Chrome and (to a lesser extent) Safari and the squeeze of Firefox. Opera is the constant tiny fraction and in recent months we see the Android browser appear in the stats.

So what?

I believe it's important to have an understanding of the browsers that people visiting our site are actually using. We can then develop the site to work best for them. That may mean ensuring pages render well in older browsers, or being able to use some new technology because our users' browsers support it - either way, we can make things better for people visiting the site.

What about IE?

There is one last graph which I find interesting and should help to answer the recurrent question "Which versions of IE should we support?":

Internet Explorer Version Share / % Visits per Month

There appears to be a strong argument to ignore IE6 (finally!) and focus more on IE8 and 9. Unfortunately IE7 doesn't look like it's going away any time soon.

What's next?

There is of course a lot that we can do with the Google Analytics data. I'd like to look at the different versions of all the browsers, filter by country/language/OS and also compare our stats with other websites.

If you have any interesting browser stats for comparison, let me know in the comments.

Getting the data

I wanted browser stats for the last few years so I went straight to Google Analytics (GA), found our profile for external traffic and delved into the numbers.  However, it soon became clear that gathering data for the past few years from the web site would take ages (as I'd need to export data for 45 months individually).
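The 45 months mentioned above can be enumerated programmatically, which is exactly the kind of chore the API removes. A minimal sketch (the month-range helper is hypothetical, not part of the actual solution):

```javascript
// Sketch: list every month from July 2008 to March 2012 inclusive.
// This helper is a hypothetical illustration of the 45-month range.
function monthsBetween(startYear, startMonth, endYear, endMonth) {
  const months = [];
  let year = startYear;
  let month = startMonth; // 1-based
  while (year < endYear || (year === endYear && month <= endMonth)) {
    months.push(`${year}-${String(month).padStart(2, "0")}`);
    month += 1;
    if (month > 12) {
      month = 1;
      year += 1;
    }
  }
  return months;
}

const months = monthsBetween(2008, 7, 2012, 3);
// months.length === 45, running from "2008-07" to "2012-03"
```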

So, I decided to get the data from GA using the API.  After looking around for existing code to help me, I settled on a pure JavaScript solution as a starting point.  This not only grabbed the data from GA but also generated graphs from it using Google Chart Tools.  It seemed perfect!
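The raw data comes back as visit counts per browser, which need turning into the percentage shares that the charts plot. A sketch of that transformation (the input row shape here is an assumption, not the actual API response format):

```javascript
// Sketch: turn per-browser visit counts into percentage shares.
// The input shape is an assumption, not the real GA response format.
function browserShares(rows) {
  const total = rows.reduce((sum, row) => sum + row.visits, 0);
  return rows.map((row) => ({
    browser: row.browser,
    share: (100 * row.visits) / total,
  }));
}

const sample = [
  { browser: "Internet Explorer", visits: 400 },
  { browser: "Firefox", visits: 300 },
  { browser: "Chrome", visits: 200 },
  { browser: "Safari", visits: 100 },
];
// browserShares(sample) -> IE 40%, Firefox 30%, Chrome 20%, Safari 10%
```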

Or so I thought.  But, as so often is the case, I needed to modify the original code a lot more than I first expected, to get it doing what I wanted.  There were a few issues to overcome, including 414 errors (which were new to me) from the Google Chart API (documented in the FAQs) and the limited dimensions of Google Charts.  A really helpful tool for checking the data coming from GA was the Data Feed Query Explorer.
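The 414 errors happen because the Chart API encodes all of the data in the request URL, which can easily get too long. A simple guard against that (the 2,048-character limit is the commonly cited GET limit for chart URLs; treat it as an assumption, and use POST for bigger payloads):

```javascript
// Sketch: guard against over-long chart URLs before requesting.
// 2048 is the commonly cited GET limit for chart URLs; treat it
// as an assumption, and switch to POST for longer payloads.
const MAX_CHART_URL_LENGTH = 2048;

function buildChartUrl(baseUrl, params) {
  const query = Object.entries(params)
    .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
    .join("&");
  return `${baseUrl}?${query}`;
}

function isChartUrlSafe(url) {
  return url.length <= MAX_CHART_URL_LENGTH;
}

const url = buildChartUrl("https://chart.googleapis.com/chart", {
  cht: "lc",          // line chart
  chs: "600x300",     // chart size
  chd: "t:64,30,17",  // data values (illustrative)
});
// isChartUrlSafe(url) -> true for this short example
```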

Once I had tweaked the code, understood the data formats and chosen my chart colours, I was able to get useful output and generate the graphs I've included in this post.

All data sourced from Google Analytics via the Data API. Values less than 0.5% have been ignored. All data is from external visits to the web site. Graphs were drawn using Google Chart tools and are accurate to 2px. Use caution when interpreting the graphs. Past performance is no guarantee of future success. Lines can go down as well as up.