Atomic Manipulation @ Bath

The Sloan Group: webpage and blog

Topic: Teaching


📥  Ramblings, Teaching

Today was the third running of the (somewhat) annual event "Pizza+Posters+Staff+Students". I had managed to get 12 members of staff to produce posters and present their work. The Head of the Department bought pizzas and we had a great turnout of hungry and inquisitive students. Thanks to all for making this event a success.


Twin Paradox*

📥  Teaching, Uncategorized

Here I wish to play a bit with the "Twin Paradox" of Special Relativity. [*note, it ain't a paradox]. For this you'll need to be happy with (1) time-dilation, (2) length-contraction, (3) Lorentz velocity transforms and (4) straight lines. I like to draw things, so in the video I construct 3 space-time diagrams for the various reference frames of the problem. A pdf version of the 3 space-time diagrams can be found here: Sloan_Twin_SPACETIME so you can see the final versions.

The subtext is that this is really a chance to put into practice some of the fundamental properties of Special Relativity outside their usual undergraduate setting of simple problems. The logic I follow is, I hope, step-by-step simple, but the picture we build up will be complicated. At the end the paradox will melt away; in fact, it was never there in the first place.



My approach and philosophy to teaching

📥  Teaching

I had to write a small piece on my approach and philosophy to teaching; here it is:

Engaged learning through impediment free and active teaching

Peter A. Sloan FHEA PhD MChemPhys

I have taught at Bath for just over 5 years across all years of the Physics programmes: lecture courses, laboratories, tutorials and research projects. I have also been an active member of the Department SSLC for the past two years. My objective is to get my students actively engaged with their learning. My role is to be a catalyst for student learning, primarily by acting as a facilitator who provides a well-defined, encouraging and challenging environment for them to work in. I achieve this through impediment free teaching and active teaching strategies.

Impediment free teaching


Too often I have heard my tutees complain about the complexity of a lecture course in relation to its presentation rather than its content: too many pdf files, too many handouts, overall an inconsistent presentation. Although low-level, this annoyance prevents engagement with the course material. To prevent such impediments, I use in all my lectured units a single complete handout (paper and pdf) containing: all the important text, equations and diagrams with key parts “blanked out”; a detailed table of contents for each section so students can see where we are going [and can read ahead if they wish]; and problems at the end of each section.

  • My lectures are fill-in-the-blanks sessions which are akin to “chalk-‘n’-talk” lectures. This “all-in-one” approach receives constant high praise from the students on SAMIS:
  • The lecture notes were given to us were absolutely brilliant!!! It demonstrates clearly that Sloan fully understands and appreciates the need of the students.
  • Format of lecture notes is far superior to any other lecturer’s notes.
  • The course booklet was fantastic … having all my notes and problems in one place was really useful and it would be great to see these used across the physics programs.
  • One of the few lecturers that provides easy access information.

What if a student is ill, or missed a crucial detail in the lecture? By capturing the lectures in Panopto video format I have a resource that allows students to catch up on or revisit a lecture. The lack of presentation distraction, both for the students and myself, allows me to focus on the physical concepts during the lecture sessions.

  • In no other module have I gone away from lectures with such a good level of understanding.
  • I am amazed at the amount of understanding I have gained.
  • Dr Sloan always seems to go the extra step to ensure that his students have an interesting and efficient learning experience.
  • The content is delivered in such a way that means the whole lecture room is listening intently during the whole lecture.

SAMIS Scores (out of 5) for “The lecturer communicated the subject clearly and enthusiastically during lectures” for past years

  • PH10004:  4.98, 4.68, 4.83
  • PH20013:  4.92, 4.78
  • PH30087:  4.45, 4.81, 4.78

SAMIS Scores “Please rate the quality of the teaching” for past years

  • PH10004:  4.63, 4.61, 4.7
  • PH20013:  4.83, 4.72
  • PH30087:  4.36, 4.63, 4.56

Active teaching

The second strategy to produce engaged learning is to create unit-specific activities that complement the lecture material. In my 1st year course on Relativity I use Moodle-based quizzes. In my 3rd year course on Fluid Dynamics, I use hands-on demonstrations (such as hurricane tubes) and YouTube clips (for example, how smoking a pipe demonstrates Bernoulli's principle). This reminds the students that they are learning about the world in which we live and not just a set of equations.

The most mature and evolved active teaching activity is for my 2nd year course Quantum and Atomic Physics. This course relies on complicated mathematics, a subject not well suited to exam questions. To allow the students to tackle such questions I introduced formative coursework in 2014. At the press of a key, my Matlab-LaTeX program produces for each student a bespoke question sheet, mark sheet and solution sheet. Each student gets different wavefunctions (starting points), so they can work in teams to discuss the physics but can't copy line-by-line answers. I have several short Panopto films to aid students with their mathematics; last year each was watched a few hundred times.

  • Coursework is great.
  • Clear understanding of the taught material is quickly obtained.
  • Feedback was given in the space of 2 days … Dr Sloan is a superhuman.
  • it made me think a lot about the course content in a way I wouldn't have if I was learning it to pass and exam.
  • Really gave me a concept boost.

I have showcased this work at a recent Faculty eLearning event and have also written several blog posts about this coursework. Over two years and approximately 400 students, I have found that the effect of engaging with the coursework is an 8 ± 2 % boost in performance in their final exam compared to the students’ average mark across all their units. This clearly demonstrates that engaged students make better students.

Finally, I aim to encourage engagement by the undergraduate students with the broader research work within the Department. I initiated and run the highly successful annual “Physics Xmas Lecture” series, which is now in its 5th year, and the “Staff+Posters+Pizza” event, where members of staff present their research, which is about to run for the third time.




Gender biased coursework?

📥  Athena SWAN, Teaching

Is my 2nd year coursework sexist?

See here and here and here for the background, 2014 analysis and the 2014+2015 analysis.

As part of a drive to remove the gender imbalance in the sciences, the Department of Physics at Bath is involved with the AthenaSWAN program. One item of note from the analysis of degree outcomes (sorry, I can't find a link for that) is a slight (and perhaps statistically insignificant) imbalance between degree outcome and gender. So the question I wish to address here is: was the coursework I set sexist? It is heavily based on the maths of quantum physics, and is so abstract as to have, to my mind, no gender implications.

The data:

Here are three sets of graphs for the combined 2014 + 2015 cohort, for males, females and all students.



What we see:

  • Our women students are more likely to submit the coursework.
  • The exam mark boost attained by completing the coursework was, to a large degree, consistent for women and for men.


  • The coursework is indeed gender neutral


Automated coursework analysis (part 2)

📥  Atomic Fun, Teaching

In two previous blog posts (here and here) I outlined the rationale behind and presented an analysis of the first year's worth of data for my "Automated and anonymous coursework". Here I will add in the second year's worth of data and see what's what.

What is it?

A second year course in Quantum and Atomic Physics, run in the first semester of the academic year with just under 200 students: mostly physics students, but some Natural Scientists, some Maths and Physics students, and some straight maths students. The course is challenging and mostly new physics. How do I get the students to play with the mathematics at the core of the course, which is not really suitable for exam questions? Answer: coursework. The questions are designed to aid a deeper understanding of the physics. The coursework is formative; that is, it does not count towards the final unit mark. I get about half the students submitting their work, and the feedback I get about the coursework is positive.

Did it have an effect?

What to measure? I can measure the mark the students got for my part of the Quantum and Atomic exam and the overall average mark the students got for the semester, both for those that did the coursework and those that did not.

Click on the graphs to see them at a more useful size. These charts show two years' worth of data.

  1. The likelihood of handing in the coursework rises monotonically with a student's overall average mark for the semester, although students below an average of 40 % didn't do the coursework at all (bar 2 students).
  2. For students with an average mark above ~80 % the coursework makes little difference.
  3. Between 40 % and 80 % there is a reasonably consistent trend: those that did the coursework gained a higher atomic exam mark than equivalent students (same average semester mark) that did not. The mean difference between hand-in and no-hand-in is (8 ± 2) %. Quite a bump up the mark scale.
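For the curious, the hand-in versus no-hand-in comparison can be sketched in a few lines of Python. The records and the 10 %-wide bands below are invented for illustration; the real analysis used the full cohort's marks.

```python
# Sketch of the hand-in vs no-hand-in comparison.
# The data and band width here are illustrative, not the real cohort.

def mean(xs):
    return sum(xs) / len(xs)

def boost_by_band(records, band_width=10):
    """Group (semester_avg, exam_mark, handed_in) records into bands of
    semester average and return the mean exam-mark difference
    (hand-in minus no-hand-in) per band."""
    bands = {}
    for avg, exam, handed_in in records:
        band = int(avg // band_width) * band_width
        bands.setdefault(band, {True: [], False: []})[handed_in].append(exam)
    diffs = {}
    for band, groups in sorted(bands.items()):
        if groups[True] and groups[False]:  # need both cohorts in the band
            diffs[band] = mean(groups[True]) - mean(groups[False])
    return diffs

# Illustrative records: (semester average %, atomic exam %, handed in?)
records = [
    (45, 52, True), (46, 44, False), (55, 63, True), (56, 55, False),
    (65, 74, True), (64, 66, False), (75, 82, True), (76, 74, False),
]
print(boost_by_band(records))  # {40: 8.0, 50: 8.0, 60: 8.0, 70: 8.0}
```

Binning by semester average is what lets like be compared with like: within a band, submitters and non-submitters have roughly the same overall ability, so the residual exam-mark gap is attributable to the coursework.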

Perhaps if we could similarly analyse the various problem sheets as well, we could see the bit-by-bit gain in marks that engaging with the course and its problems gives.


The coursework is effective (to a degree) in providing a deeper understanding of the underlying physics.*



Many thanks to Kristina Rusimova for organizing the data.

*Assuming that is what the exam tests**.

** Another story.


What effect did my new coursework have on exam outcome?


📥  Teaching


Those that handed in the coursework had a higher mark in my part of the exam (76%) than those that did not do the coursework (57%). This uplift was true for all students no matter what their overall grade for the year was. The students who benefited the most were those with an overall year mark in the range 40 - 70 %. If we assume that exam mark is related to understanding (which I try to test in my exam questions), then the coursework succeeded in its purpose.

Let’s start with a quote or four:

“The unmarked coursework section was strangely enjoyable…”

“When I got n=3, I did a little dance!”

“The coursework was a fantastic idea …as a mathematician it gave me a concept boost…”

“I can now differentiate anything”

Last summer I decided to jazz up my 2nd year Atomic Physics course. This is taken by Physics, Maths and Physics, Natural Scientists and a few others, in total about 160 students. The coursework involved extensive calculation and mathematics (which there is no time to test in an exam). See a previous blog post on the whys and wherefores of the coursework.

Here I want to explore and present some measurable outcomes of this formative assessment.

Did anyone do it?

Out of a total of 154 students, 85 did the coursework, so just over 55% submitted. Considering this is not a summative assessment (it does not count towards the final mark) I was quite surprised at the high submission rate. In the week preceding the coursework deadline, the Department work-spaces were full of students frantically working their way through lengthy (but relatively straightforward) mathematics. I should also admit that I had told the students that there would be one exam question nearly identical to one of the questions on the coursework sheet.

After the submission deadline, I surveyed the students asking why they did or didn't do the coursework. Of the 69 who didn't do it, 7 responded to the survey, all citing that they were busy with other things that did count (e.g., placement preparation, lab reports) and that they would use the coursework as exam revision once they had the solutions to guide them.

What were the raw results?

As with all coursework, one can get a bit carried away chasing full marks. That may have been the case here, with an average of 72 % and a spread (std) of 21 % - see graph.


We also see that many who engaged with the work scored high marks, which must have taken quite some time and tenacity. One student even managed to get Maple to do most of the work for them.

How does this compare with the exam mark?

Here I examine only the portion of the relevant exam that I set (32 marks out of a 60-mark exam). The following graph shows the distribution of exam marks for those that did the coursework and those that didn't.


A fairly stark difference. Those that did the coursework had a mean mark of 76 ± 15 % and those that didn't had a mean exam mark of 57 ± 22 %. Both good, but significantly higher for those that did the coursework.

Why the exam difference?

To work out what's going on in an experiment I usually run all sorts of tests and background checks, but with this student data I'm a bit limited.

(1)    Was it all down to the questions from the coursework that appeared in the exam?

The question was worth 7 marks out of the 32 marks for the exam. Those that did the coursework got 1.5 marks more for this question than those that didn’t. Not enough to explain the overall difference in the exam. In fact, those that did the coursework scored consistently higher on all the exam questions.

(2)    Were the students who handed-in just overall the better students?

It may be that the set of students who bothered with the coursework were in effect self-selecting, and only the strong students submitted. To get a decent picture of what's what, I have looked at the difference between the exam mark each student got and that student's overall exam average for semester 1; in effect, whether they did better in my exam than they do on average. I do this for both the cohort who did the coursework and those that did not, and plot it as a function of the overall mean mark for semester 1.


So we see that the best students, with high mean marks, had a 4 % boost in my exam if they did the coursework, and were at their average if they didn't. Overall we see a boost at all levels for those who did the coursework. This is especially true in the 40-70 % range. So it seems that the coursework really helped the “weaker” students, but all students benefited from doing it.

 What did the students think?

Out of the 85 who submitted the coursework 32 responded to a survey.

  • They overwhelmingly thought it was a useful exercise.

  • They agreed that it was about the right length.

  • They were mixed as to whether it should change from formative to summative.

The official end-of-unit survey gathered four text responses, mostly suggesting the coursework was a bit long and clashed with lab reports and the like. I should note that the teaching scores I received for this course were the highest for any physics course (all years) run during semester 1 of the 2014/2015 academic year.

What’s next?

For next year I'll run the whole exercise again, and again it'll be formative. But I will shorten the activity by rearranging the wavefunctions (the starting points) for the questions. If I show the exam-marks graphs, perhaps a few more students will be inspired to do the work, but then that'll skew my data for next year's analysis! Oh well, back to paper writing now.



@andydolman suggested a nicer way of exploring the results. So here are the Atomic exam marks against mean semester 1 marks. The results are binned into 4 %-wide semester 1 mark bins to get a mean and standard deviation of the mean for each bin.



Again we see that the students with lower overall semester 1 marks are aided by the coursework, but that many of them didn't attempt the coursework at all.


Randomised course-work.


📥  Teaching

I have taught half of the 2nd year course "Quantum and Atomic Physics" (PH20013/60 if you care) for three years. My section, 5 weeks long with fifteen 1-hour teaching sessions, takes the students from solving the Schrodinger equation for a hydrogen atom, through spin-orbit interactions, all the way to term symbols for multi-electron atoms. What I have felt is missing, and what I have failed to find time for, is demonstrating that the large and unwieldy 3D operators really do work and really do return the energy, or the angular momentum, or whatever it may be. Therefore I have designed a new piece of coursework for this coming year: a question sheet with answers to be handed in and marked.

Why is this worthy of a blog post? Well, for a class of 160 2nd year students the point is to get each one to sit down and do the questions. None will be too hard, though many will involve fairly tortuous mathematics. So the aim is not to get marks, but to try the problems. Since this is hand-in coursework there may be a temptation to simply regurgitate a friend's answers. Hence I have introduced a random element into the mix.

The sheet consists of three main questions, each with an array of sub-questions. For each student I wanted each main question to be randomly assigned a wave-function (i.e., a starting point) out of a catalogue of 8 wave-functions. No two (within reason) problem sheets will be the same. Students will therefore find it hard to copy from friends, but will be able to apply the methods to their own version of the problem sheet. It also means they will be exposed to three different wave-functions and get a feel for what they are.
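The assignment idea can be sketched in a few lines; my actual tool is Matlab+LaTeX, so the Python below, and the trick of seeding the random draw with the student's exam code so the same sheet can be regenerated later, are illustrative assumptions rather than the real implementation.

```python
import random

# Catalogue of 8 starting points; psi_1..psi_8 are placeholder names.
WAVEFUNCTIONS = [f"psi_{k}" for k in range(1, 9)]

def assign_wavefunctions(exam_code, n_questions=3):
    """Assign each of the main questions a wave-function from the catalogue.
    Seeding with the student's exam code makes the draw reproducible, so
    the matching solution sheet can be regenerated from the code alone."""
    rng = random.Random(exam_code)
    return [rng.choice(WAVEFUNCTIONS) for _ in range(n_questions)]

sheet = assign_wavefunctions("XYZ12345")
print(sheet)  # same three wave-functions every run for this exam code
```

With 8 choices per question, two students' sheets coincide on all three questions only about 1 time in 512, which is what makes line-by-line copying impractical while leaving the methods shared.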

But we can do more. Using some cunning Matlab and LaTeX coding, each question sheet will have printed on it the unique exam code each student has. I have no access to the database that holds the key to translate code into name. This coursework will therefore be as anonymous as the exams are, removing any potential subconscious bias. This is especially important as, after this year's trial, I hope to make this coursework count towards the mark for the course.

The practicalities are that I cannot hand out individual sheets; it would take too much time. Instead the Matlab/LaTeX code generates a pdf for each student comprising the question sheet and a cover sheet, which is uploaded to our online teaching forum for this course. The cover sheet has a marking grid and the student's exam number, so they can simply staple it to their handwritten answers. At the click of another button I can generate bespoke solution sheets for each student, again uploaded to the online teaching resource.

One downside is that there are 13 questions and 8 possible starting points, so I require 104 individual solutions. That is what PG demonstrators are good for! And I will have a bit of marking to do.



Some of my thoughts on Exams.

📥  Teaching

I have just walked up the hill to the University of Bath listening to my radio. On the Today programme there was an interview with the head of Ofqual explaining the new GCSE (or whatever they are) exams. Instead of the 8 grades we presently have, they will have 9, and so there will be more “differentiation” between candidates. I am due to write half an exam paper today for my part of Quantum and Atomic Physics (PH20013/60), and that got me thinking.

Many moons ago Universities judged students by asking them questions in one-to-one interviews, or vivas as we call them. But these are time consuming and open to the whim of the examiner. So written exams were introduced. We still have them. I have sat countless exam papers during my school and undergraduate life. Apart from my very last exam as an UG, I always did well, hence why I'm now a lecturer. I have to say I always enjoyed them: the 1-2-1 combat of pitting myself against the cunning and sly lecturer.

But exams are not really about the top end showing off. They are about giving a grade to a student that can be entered into a spreadsheet, so that upon graduation a student can be given the correct piece of paper which tells the world something about them. But what does it tell? What are these elusive marks that are at the core of the game we play at University? These are grand ideas and questions that will have little bearing on how I in practice compile my exam paper.

Here’s roughly how I do it:

(1)   Read through the lecture notes and the unit outline to get a feel for what I said the students should be able to do.

(2)   Split the total number of marks roughly equally between the main sections of the course.

(3)   Write out questions for 40 % of my marks that allow conscientious and studious students (who will know more about the subject than those who didn't sit through the lectures), even if they are really not very bright, to pass. These will typically be “state the principle of…” style questions showing a bit of knowledge, and perhaps a few repeats of questions already covered in problem sheets.

(4)   Write a few more questions for say 20 % on number crunching.

(5)   The next 20 % go on more extended questions on the physics of the course, which gets at understanding and may well include unseen questions.

(6)   Finally the last 20 % bit. These will include physics from outside the course, typically from an earlier year, to see if the candidate can see the bigger picture and how things fit together.

OK, but the candidate has a set time for this and in this time we try and ensure we ask questions on as much of the course as possible, and we try to see if the candidate can think. This is hard.

I look through past papers, the coursework problems, books and online sources for good ideas for questions. I bung them together. I then sit the exam myself, and if I can't physically write out all the answers in half the allotted time, the exam is too long. I then revise the questions and, if I am feeling really diligent, I sit the exam again. Phew.

Marking? Well, I write out a mark scheme for each of my questions. I divvy the marks up based on the length of the question and its degree of toughness, which usually reflects the number of “physics” steps required to get the answers.

I’m finished! I hand in my exam and answers to the University and relax. How do we ensure my questions are fair? Peer review. One of my colleagues reads my paper and makes comments. Then we send it off to an external colleague and they also make comments. Once all these QA processes have been ticked off I have it: a document that will tell me if you are worth 58 %, or 61 %, or perhaps 36 %, or even 97 % (well done). But wait, my exam is out of 60 marks, so the smallest % quantum is 1.7 %. But wait, the University explicitly bans certain % totals: the examiner is asked to review certain percentages and think again. But only for these cases, and with the understanding (or at least my understanding) that I’ll just bump the % up out of these forbidden zones. But we don’t do this for all marks, so the % scale is not only quantised but also non-linear. These are the obvious sources of uncertainty on a final mark; what about the other sources?

What if a question or exam paper is just too easy or too hard? What then? Now we get into the world of normalization, mark scaling and other shenanigans so that the average mark looks OK. Last year I set an exciting and interesting exam. It was excellent. It really tested the understanding of the students and allowed many to shine and really show that they are excellent physicists and scientists with a broad grasp of the subject and a good understanding of how it all fits together. Well done me (and them).

However, what also happened was that the middling students, who are conscientious and work hard, were shafted. I had slightly changed the rules on them. I had somehow abandoned my scheme outlined above and tried to get at understanding a bit more. Or it could have been a slightly badly worded question, or slightly too much in the paper, or a bit that I taught badly and the students just didn’t get. The upshot was a lot of post-processing to make the outcome fair for the students; fair, that is, in terms of the overall game that we play at University.

There is a lot of room for error, there is a lot of subjectivity, there is a lot of cross-correlation, there is a lot of reliance on the setter (me!) knowing the difference between a 56 % student and a 58 % student. These grades are important, but perhaps should come with a standard deviation.

Will the new GCSE exams, with their 9 grades, really have a standard deviation of less than 2 %?