Research Director at the Inria Saclay center, Wendy Mackay heads the Ex-Situ human-computer interaction (HCI) research group, run jointly with the Laboratoire de recherche en informatique (LRI - Université Paris-Saclay, CNRS). Internationally recognized in the discipline, she is a member of the ACM SIGCHI Academy (Association for Computing Machinery - Special Interest Group on Computer-Human Interaction).
For the 2021-2022 academic year, she has been invited to hold the Computer Science and Digital Sciences Chair, created in partnership with Inria.
The study of human-machine interaction is crucial at a time when technology is ubiquitous. What should it be based on?
Wendy Mackay: It's interesting to study the machine on the one hand and the human on the other, but it's not enough. What we do in our field is try to understand how these two parts interact. As human beings, we are undeniably influenced by the technologies we use every day; they can affect our behavior and change the way we think, but we can also adapt them to our needs by making them our own. It's important to understand that all human capabilities are part of the human-machine interaction equation. This means taking into account notions such as sensation, perception, motor skills and memory: in short, the whole of human psychology. For example, one of our research topics involves observing the phenomenon of deskilling, i.e. the process by which a person's skills are rendered obsolete, or even lost, when a piece of technology can perform the task for them. This decline in human capabilities has long been observed wherever tasks are delegated to machines. However, we could go in the opposite direction and look for ways to enhance human skills and capabilities through this interaction.
How could we rethink this interaction to stimulate the user's abilities?
We're considering several strategies to achieve this. What is very encouraging today is that some researchers in artificial intelligence are starting to work with us, the specialists in human-machine interaction. Take the example of a decision that a user has to make with the help of software or a machine. If users have a system that tells them directly, "choose A and not B," and they know that this system is generally right, they will simply do what the machine says, without engaging their own reflection: their capacities are no longer stimulated, and they risk weakening.
If instead we design a system that, rather than giving the answer directly, has to react to a suggestion from the user, then we create a situation that stimulates human skills through reflection and interaction. It's a form of mutual learning, no more and no less, between artificial intelligence and human intelligence. However, different interactions can have a wide variety of long-term effects on a user, so this is quite complicated to set up.
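To make the contrast concrete, here is a minimal sketch in Python of the two interaction styles described above: a prescriptive aid that simply announces its choice, and a reactive one that waits for the user's suggestion and answers with a question. All names, the scoring model and the wording of the prompts are invented for illustration; they do not describe any real system.

```python
# Toy decision aid contrasting two interaction styles.
# The scoring model and all names are illustrative assumptions,
# not a description of any real system.

SCORES = {"A": 0.9, "B": 0.6}  # the model's confidence in each option

def prescriptive(options):
    """Directly announces the machine's choice; the user just complies."""
    best = max(options, key=SCORES.get)
    return f"Choose {best}."

def reactive(user_suggestion, options):
    """Reacts to the user's own proposal, prompting reflection
    instead of replacing it."""
    best = max(options, key=SCORES.get)
    if user_suggestion == best:
        return (f"{user_suggestion} looks strong; "
                "what made you rule out the others?")
    return (f"You suggested {user_suggestion}; the model rates {best} higher "
            f"({SCORES[best]:.1f} vs {SCORES[user_suggestion]:.1f}). "
            "Do you know something it doesn't?")

print(prescriptive(["A", "B"]))   # "Choose A."
print(reactive("B", ["A", "B"]))  # invites the user to reconsider, not obey
```

The point of the second style is that the machine's knowledge is delivered as a question rather than an instruction, keeping the user's own reasoning in the loop.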
So the disciplines of artificial intelligence and human-machine interaction need to move forward hand in hand?
Yes, it's very important. In artificial intelligence, the quality of the research is measured by the quality of the algorithm. If I have a faster, more efficient algorithm that needs less data to produce satisfactory answers, then I have good results, ready for publication. In human-machine interaction, on the other hand, we measure the impact of technology on human beings. Working with real users, we measure not only their performance but also the influence on their ability to innovate; the approach is both quantitative and qualitative. We're more interested in the user's perspective than the designer's. So, for optimal design and to ensure a positive interaction experience, it's crucial that the two fields work and communicate together.
Our relationship with a machine is determined by the interface that connects us to it. Yet most of today's interfaces have been based on the same model since the 1970s, that of graphical user interfaces. Do we need to rethink the foundations of this model to take the user experience to the next level?
Absolutely, and that's what we're doing! In the 1970s and 1980s, work at Xerox led to the Star, one of the very first personal computers. It was at this time that the graphical user interface was conceived: the first human-machine dialogue device in which objects and functions were represented as small pictograms. However, this interface was developed specifically for executive secretarial work, with folders, files and cut-and-paste tools - and it's the same interface we still use today! It's a good model in many ways, but it remains rather limited when so many other options exist.
At the start of my career, we were much more open than we are now, because everything today is focused on the windowed style at the heart of systems like Windows or macOS. When I tell my students that we used to have windowless systems, they don't believe me. For them it's hardly conceivable, yet it's perfectly possible to combine the direct manipulation of graphical interfaces with programming capabilities to create more powerful systems. We could also use hand gestures, or even the whole body, to give commands and create more expressive tools geared more towards creativity than content consumption. Of course, the most widespread system currently in use works: everyone knows and uses computers. It has become such an integral part of our daily lives that it has forged our digital way of thinking and conditioned our interaction with machines. But I'm convinced we can do better.
Many tools are used for creative work. How can we optimize them so that they become vectors of creativity rather than mere instruments?
If you take the creative suites we know, such as Adobe's or Microsoft's, you'll notice that each of their components is built around the idea of a precise task to be carried out. The user knows what that task is and how to do it. Everything to do with exploring and deciding on the nature of the problem falls by the wayside. Human beings, however, have a unique ability to use tools with defined functions to explore new ideas. Creative innovation will come from systems that give users enough room to explore their own creativity, supported by adaptable, customizable and flexible tools.
Why has interface design been so restrictive until now?
Because such systems are simply easier to produce. Human beings are very diverse, and if I'm designing a system I want to sell, I'd rather my users be able to do fewer things, but do them better. It's a clever strategy: it's easy to guess what users will want to do if we encourage them to do it in the first place. It's easier and more profitable, with a well-defined audience and less complex tools to design. Historically, there's also a dose of Taylorism, i.e. the idea that people can be treated like machines: we analyze human productivity and optimize the tools used to organize work. This division of human activity into tasks and skills dates from long before the democratization of personal computers. That's why all systems were conceived on this model, which has been maintained ever since.
For this reason, research in human-computer interaction is sometimes as interesting as it is frustrating: if we can't show big business how a paradigm shift could make them money, there's little chance of seeing it implemented in industry. In addition to user comfort and efficiency, there's a marketing dimension to consider. In the history of technology there are sometimes obvious successes, such as the cell phone, whose concept is very easy to grasp and requires little adaptation for anyone already familiar with the telephone. Then came the smartphone, with all the capabilities of a computer in the hand and an interface that's simple to understand and use: a veritable revolution made possible by the science of human-machine interaction. But before this kind of epiphany comes a long process of research.
To interact with a system, you need to know and speak its "language" to a certain extent. Is it possible to imagine a universal interface?
Rather than a universal interface, what we're trying to design is a universal toolbox. We'd like to offer users, especially experts, the means not only to learn, but also to create their own tools or customize the ones at their disposal. The idea is to let users appropriate the medium and exploit the skills they've acquired, without having to relearn them when they change systems. Unfortunately, large companies have no commercial interest in this. Take e-mail and Facebook, for example. With the former, I only need one address to write to anyone, from any system - Outlook, Gmail and so many others. With Facebook, you have to become a Facebook member, and you can only communicate with other Facebook members. Since the pandemic, we've seen the same thing with videoconferencing applications: Skype, Zoom, Teams, Discord... For each one, you have to learn how the system works, have an account, and get all participants to use the same system. To make these systems open, interoperable and easier for users to adopt, one solution is to create tools that are not tied to any particular application. It would be even better if these systems were co-adaptive.
What is a co-adaptive system?
It's a term that comes from evolutionary biology. In nature, we find organisms in constant interaction that, as a result, have evolved hand in hand: this is long-term co-evolution. In the short term, we see it in symbiotic animals, such as cleaner fish and the fish they clean. When we look at how human beings use technology, we find this same principle of co-adaptation. If I want to use Adobe Premiere Pro to edit a video, I'll have to learn how it works and adapt my behavior, but I'll also make the software my own by using it in my own way. A good example is the evolution of spreadsheets. Originally, they were simply for adding up columns of figures to check a budget.
Then users began using them to explore alternatives, in what we call "what-if budgeting": you change the input data to see how the output reacts, and vice versa. This was a user innovation, and spreadsheet designers then adapted their software to support these new uses. When this happens, it's important for designers to listen to what users have to say; if they do, they can only improve their system. This phenomenon happens everywhere, all the time, in the world of technology, yet it is often ignored: we design a system with one function in mind, but in practice its users push its limits. Humans have been doing this for as long as they've been human; just look at a child using a pencil - designed for writing - as a ruler to draw a straight line. Appropriation and adaptation are the strengths of human technical reasoning. By combining them with artificial intelligence, we arrive at reciprocal co-adaptation: the system reacts to the user's behavior, learns, adapts and influences the actions of the human, who in turn adapts, innovates and influences the system through new uses.
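As an illustration only, here is a tiny Python sketch of the "what-if" pattern described above: a toy budget model recomputed forward from changed inputs, plus a simple reverse search (a goal seek, solved here by bisection) that finds the input needed to hit a target output. The model and all figures are invented for the example.

```python
# Toy "what-if budgeting": recompute outputs when inputs change,
# and search inputs to hit a target output (goal seek).
# The budget model and every figure below are invented for illustration.

def net_balance(revenue, fixed_costs, unit_cost, units):
    """Output cell: net balance as a function of the input cells."""
    return revenue - fixed_costs - unit_cost * units

# Forward what-if: change an input cell, observe the output cell.
base = net_balance(revenue=10_000, fixed_costs=3_000, unit_cost=12.5, units=400)
what_if = net_balance(revenue=10_000, fixed_costs=3_000, unit_cost=12.5, units=500)
print(base, what_if)  # 2000.0 750.0

# Reverse what-if (goal seek): which revenue yields a zero balance?
# Bisection works because net_balance is monotonic in revenue.
def goal_seek(target, lo, hi, tol=0.01):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_balance(mid, 3_000, 12.5, 400) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(goal_seek(0.0, 0.0, 20_000.0)))  # ~8000: break-even revenue
```

The forward direction is what every spreadsheet recalculation does; the reverse direction mirrors features like the Goal Seek tool found in modern spreadsheets.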
Interview by William Rowe-Pirra