Saturday, February 27, 2021

Avoiding Zoom fatigue and finding new ways to collaborate online

Photo by Surface on Unsplash

Many of us spend several hours a day in online meetings, and the term Zoom fatigue has become a popular topic of discussion. Our extreme dependence on video meeting platforms is of course due to the exceptional times we live in, and when we are able to meet face-to-face again the number of online meetings will no doubt decrease, though they will remain much more frequent than before the pandemic. But what is missing in these online sessions? We can certainly see each other, smile, laugh, gesticulate and interact, so what makes it all so tiring and rather repetitive?

A new study by Jeremy N. Bailenson of Stanford University, Nonverbal Overload: A Theoretical Argument for the Causes of Zoom Fatigue, has investigated the factors behind Zoom fatigue and offers some remedies (see also a summary of the report in an article in New Atlas, Stanford study into “Zoom Fatigue” explains why video chats are so tiring). Bailenson highlights four main issues:

  • Eye-gaze at a close distance. We see a gallery of faces, many complete strangers, looking at us all the time. This would never happen in a physical meeting and can be rather disconcerting. Are they really looking at me and what are they thinking? Do I look all right or do they notice that my hair is a bit untidy or there's a pile of junk on the desk behind me?
  • Cognitive load. We often exaggerate our non-verbal signals to make them clear, like nodding very deliberately or looking straight into the camera. It's impossible to turn and smile at a colleague or exchange a knowing glance when a common issue is mentioned. We also have chat messages and other communication tools to deal with.
  • All-day mirror. We spend a lot of time looking at ourselves and being very aware of our appearance. This is a distraction and something we never have to deal with in physical meetings. One remedy is to turn off the self-view option in the platform.
  • Reduced mobility. In physical meetings we sometimes stand up or stretch our legs for a few minutes but in online meetings we generally sit still for long periods, gazing at faces or slideshows. We also tend to sit very close to the computer camera and this adds to the strain. Maybe if we sometimes switched to audio-only we could all move around while we discuss and thus take a break from the gallery gaze.

But with Zoom, all people get the front-on views of all other people nonstop. This is similar to being in a crowded subway car while being forced to stare at the person you are standing very close to, instead of looking down or at your phone. On top of this, it is as if everyone in the subway car rotated their bodies such that their faces were oriented toward your eyes. And then, instead of being scattered around your peripheral vision, somehow all those people were crowded into your fovea where stimuli are particularly arousing (Reeves et al., 1999). For many Zoom users, this happens for hours consecutively.

We also need to realise that everyone requires some recovery time before entering the next meeting. Although you can feel super-efficient by stacking online meetings one after the other, you will be much more effective if you schedule at least 10 minutes between them to simulate the time you would spend walking from one physical meeting to the next. Get up, go outside for a few minutes and mentally tune in to the next task.

So what's the alternative to the talking heads format? Maybe we need to stop gazing at each other and do things together. A post by David White, Spatial collaboration: how to escape the webcam, introduces the concept of spatial collaboration, where we focus not on our faces on the screen but on interacting in a shared space such as a whiteboard, collaborative documents like Google Drive or a storyboard like Padlet, Mural or Miro. We can all write or draw in these spaces, and the focus shifts from our faces to the activity and the collaboration. One simple but effective method he describes is to structure a discussion with the help of a simple drawing of a table with names around it.

My suggestion was to draw a very simple diagram of a table (just a square) and place each of the participants’ names around it for each group. We then shared this simple ‘map’ into the non-space of the platform and asked the groups to go clockwise around the table. This was an easy way to establish the order the discussion should go in, but I also noticed that there was suddenly a greater sense of togetherness and place. You could imagine who you were sat next to or opposite, and while this didn’t change the functionality of the technology it did change the psychology of it. It didn’t take much to help people imagine themselves into a shared location.
This spatial collaboration does not have to be noisy to be effective. You can have very rewarding silent collaboration (see my earlier post on this), at least for a few minutes, thus providing variation and reducing fatigue.  

Of course it's great to see each other, but maybe not as much as we do now. 

Wednesday, February 10, 2021

Keynotes on demand


I have been invited to speak at many online conferences and webinars over the past year and although I always try to adapt my content to the audience, I often feel I'm just playing variations on a theme. I can imagine how it feels for the major speakers in the field and how many invitations they get each week to speak on much the same theme each time. In the past this meant extensive travel to international conferences, but today you can tour the world from the comfort of your own home.

One of the experts in demand just now is Tony Bates, and he has announced an innovative approach to keynoting. He has an impressive track record in digital learning and, despite "retiring" a few years ago, he has maintained a remarkable level of production with a very valuable blog, reports, lectures and one of the best books on online education, Teaching in a Digital Age (available as an open access book, practising what he preaches). When you are in such demand it becomes hard to say no, even if such appearances sometimes require very uncomfortable working hours. You have to draw the line somewhere, but how do you do that without disappointing people?

So his innovative solution, described in a post, Five free keynotes on online learning for streaming into virtual conferences, is to record five different keynote speeches and offer them to any conference organisers under a Creative Commons license (CC BY-SA). They can be accessed from the Commonwealth of Learning's online institutional repository for learning resources and publications, OAsis, and can even be downloaded. This means a conference can include a major speaker as keynote without any live link, though he can make himself available for a personal Q&A session afterwards, either synchronously or, if the time is inconvenient, asynchronously.

Could this become more common in the future? Of course there are advantages to appearing live, but since the purpose of a keynote is to inspire, it is feasible to use recorded lectures and focus the live sessions on discussion and interaction. An on-site conference offers high-profile keynote speakers as a major incentive to attend, with the attraction of possibly meeting that person in a mingle session later on. In an online conference that advantage is largely lost, so maybe we'll see more virtual keynotes that act as catalysts for more active discussion instead.

Thursday, February 4, 2021

You are being watched - the rise of educational surveillance


The use of artificial intelligence in education seems increasingly to be about surveillance and control rather than fostering learning. During the past year there has been considerable concern about the ethics of remote proctoring, where students were subject to constant surveillance by webcam and microphone and to analysis of keystrokes and mouse clicks during online exams; see for example an article in Inside E-learning, Learning or surveillance? What students say about edtech and covid-19. Any unexpected activity in the student's behaviour (e.g. "abnormal eye movement") or movement in the room is detected and may lead to disqualification. Not only are such methods highly intrusive, it is also unclear what happens to all the data collected by the companies responsible. There must be better ways of assessing a student's ability than this.

If remote proctoring raises concerns, then the next example sends shivers down the spine. An article on the site Rest of World, China is home to a growing market for dubious “emotion recognition” technology, describes how classrooms are equipped with face recognition technology and each student's facial reactions are constantly monitored.
Every second, the surveillance cameras installed in each classroom at Niulanshan First Secondary School in Beijing snap a photo. The images are then fed into the Classroom Care System, an “emotion recognition” program developed by Hanwang Technology. It identifies each student’s face and analyzes their behavior: a student rifling through their desk might be labeled “distracted,” while another looking at the board would be labeled “focused.” Other behavioral categories include answering questions, interacting with other students, writing, and sleeping. Teachers and parents receive a weekly report through a mobile app, which can be unsparing: In one, a student who had answered just a single question in his English class was called out for low participation — despite the app recording him as “focused” 94% of the time.

Imagine then coupling this with analysis of every mouse click and keystroke and we have total control, or rather the illusion of control, since the conclusions drawn by AI may be based on built-in biases (like the student called out for low participation in the example above). The idea that algorithms can accurately assess a student's emotions by analysing facial expressions is ridiculous. We all know how hard it is to read another person's face. But once you get AI making decisions it becomes almost impossible to question them, since we tend to see computers as impartial and infallible. No matter how questionable such technologies may be, there's big money to be made: the article claims that the emotion recognition market may be worth more than $33 billion by 2023. Money talks. For further serious concerns about facial recognition, see an article in Mashable, 9 scary revelations from 40 years of facial recognition research.

There are, of course, positive applications of AI in education, but the key questions in every case are who owns the data, on what terms, and whether students have the right to be forgotten. We all need to be very cautious about letting these technologies into the classroom.