AUMI interviews: Eric Lewis

Interviewee: Eric Lewis (EL) is a professor of philosophy at McGill University. He coordinates and oversees AUMI research activities at McGill, including the ongoing technical development of the AUMI desktop application, interdisciplinary research with the School of Physical and Occupational Therapy, and an ongoing pilot program with the Mackay Centre School in Montreal, Quebec.

Interviewer: John Sullivan (JS) is a music technology researcher and PhD candidate at McGill University. He served as the developer for the AUMI desktop application from 2016 to 2019.

The interview was conducted in person at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University on September 11th, 2019.


JS: Good, so just to start, we’re here at CIRMMT. It’s Wednesday, Sept 11th. I’m here with Eric Lewis, and we’re going to talk about AUMI for a little bit. So Eric, thanks for being here. Some big general questions to start out: what is your involvement with the AUMI?

EL: Well, you know a little bit about my involvement with IICSI and previously ICASP. So I am the site coordinator at McGill for IICSI and what was previously ICASP.

JS: And what are those?

EL: The International Institute for Critical Studies in Improvisation, and in its prior form it was a project called Improvisation, Community and Social Practice. These are both Canadian-based research teams funded by SSHRC (the Social Sciences and Humanities Research Council, the Canadian federal research-funding agency that promotes and supports post-secondary research and training in the humanities and social sciences), based out of the University of Guelph. McGill, in both of these projects, has been the second site, so there are a number of faculty who have been associated with the project, and a number of graduate students, like yourself, who have been associated with the project, and we’ve had a number of activities based at McGill. One of those, under my tutelage so to speak, was to establish McGill as one of the primary sites for working on improvements and modifications, on the tech side, to the AUMI project.

The AUMI project emerged out of a research axis of ICASP. It was called, if I remember correctly, Improvisation, Gender and the Body, which included a number of researchers who are still part of the AUMI group, and also Pauline Oliveros, the kind of fore-mother, so to speak, of AUMI. To be completely honest, I really thought that the kind of research that was emerging from this research axis was exciting, and high quality, and I really loved the people that were in it. And they were also doing stuff. This was the early bits of AUMI’s involvement in these projects, and I just asked them, “Hey, can I be a member of this research axis? Cause I would love to work with you all!” Cause they were doing really cool stuff. As has happened in the discipline of feminist theory, broadly construed, there was increased interest in intersectional identities, and disabilities, and disability studies. So this research group started working in this area, and working on, and working with, AUMI. Which, you know, was originally the brainchild of Pauline Oliveros, and early iterations of it were designed by folks who worked closely with her.

Since that time, I’ve a) overseen graduate students out of CIRMMT, the Centre for Interdisciplinary Research in Music Media and Technology here at McGill – people like yourself – to work on refining the software/hardware itself. I’ve also established a laboratory site at the Mackay Centre School here in Montreal, which is the school for children with physical disabilities that is part of the English school board, though many of the children have a basket of assorted disabilities. This was done partially through the enthusiastic response of the then principal of that school, when I gave a demonstration of AUMI there. Very soon thereafter a professor in the School of Physical and Occupational Therapy here at McGill, Keiko Shikako-Thomas, also came on board. Her primary research site was also the Mackay Centre School. And from then on, she and I together were running programs using AUMI at the school, and then using feedback from the children themselves, the teachers, their caregivers, and the therapists to refine AUMI.

As you know, given the nature of AUMI as a device, it is really crucial that what it does is a function of what the people who are going to use it want it to be. And there have always been two complementary, but at times contradictory, functionalities of AUMI. On the one hand – going back to Pauline’s leadership – AUMI was viewed primarily as a way to break down barriers of ability, so that all people, all children, can enter collective, creative play around sound, right? So in that sense, it was a device to facilitate community building across barriers of difference.

On the other hand, AUMI has serious therapeutic potentialities, which we’ve also been pursuing, refining the system itself and thinking about ways to use it therapeutically. And these two go hand in hand, but don’t always go hand in hand. So I’ve always viewed my job as being sort of a conduit of information between the folks using the device and folks like yourself who are refining the device, and my function was kind of to translate, so to speak, the desires of the users into technical requirements.

They may say something amorphous like, “It’d be great if we could find a way for this person to play with that person!” and, you know, “This is the kind of movements they have or don’t have,” and I’d have to think about it for a little bit and I’d come back and ask you something like: “Could we make the box change in the vertical dimension rather than just the horizontal dimension, cause this person has trouble moving left-right but can move up-down?” Something like that. Right? This required me to learn enough about the kind of technical foundations of AUMI that I would know, kind of, what really is possible, what would be easy to do, what do I suspect could be done but would maybe take until the next version, and what really to save for some indefinite future. We have any number of examples where major and important refinements were made, which allowed a whole new group of individuals to use the device, which went from a comment from a parent, or a teacher, or a kid themselves to me, and me passing that on to the developers, and two weeks later, a new device. The best example of that – I don’t know if you want me to go on?

JS: Yes, please.

EL: A great example of that was when I gave one of the early demonstrations at the Mackay Centre School, and a whole lot of the teachers were there. Afterwards, a teacher who worked with a class of kids with visual disabilities said, “You know, my class is really interested in music, they’d love to find a way to engage in musicking, but they can’t see the little dot, they can’t see the little square, some of them are color blind, they can’t make out the differences between the foreground and background. You know, what can you do for us?” I walked away and said, you know, that’s a drag, right? And we came up with this plan. I believe at the time the developer was – I think he’s at MIT now?

JS: Ian (Hattwick).

EL: Ian, I think it was under Ian’s watch. So I sat down with Ian, and he came up – or we came up, I can’t remember, probably he came up {laughter} – with the idea: what if we can change the size of the dot? What if we can change the size of the boundaries of the, you know, the grids? And what if we can change their colors, so that if you are sensitive to perceiving certain kinds of colors you could alter that? This ended up being something really easy and quick to do, and just a couple weeks later I went back to this teacher and said, “Try it now.” She couldn’t believe it! A) it became functional for the class. But I think almost as important, she wasn’t used to asking for something {laughter} and actually getting it done, if you know what I mean. {laughter} So, my role has been to facilitate those kinds of interactions, while working closely with the staff there to think about the innovative ways we can actually use AUMI – overseeing the staff on the technical side, and the staff we had hired to actually work with the kids, so we had graduate students in occupational and physical therapy who were working in the classroom there.

So that’s been my main involvement with AUMI, while on the side thinking about it theoretically, in terms of my own less practical research. And of course reporting back to the AUMI research team about what we’re doing, liaising with the other developers and the other theorists and therapists that are part of the team, and just trying to think in general about what directions we should be pursuing, what we could do to make the device more accessible and better for a wider group of individuals, and getting the word out.

JS: Great. Let’s see, wondering if I have any immediate follow-ups. The program with the Mackay school – has it ended, or is it in hiatus for now, or…?

EL: Well, I like to think it’s in hiatus.

JS: And this is as much for my own information as the interview.

EL: Sure, no, of course. I’d like to think it’s in hiatus. Again, something that I’m reminded of, unfortunately, far too often in a lot of this sort of community-based work that I do is how precarious a lot of communities are. The Mackay Centre School, a year ago, moved into a whole new building, and this was a huge project, and a huge burden that the then principal had to oversee. She just literally had no time to think about integrating or dealing with us. She has since retired, and Keiko and I have been trying to find the time, literally, to organize and publish the research data we did collect from when we were at the Mackay Centre School. So we don’t want to just go back to them with “Let’s just carry on”; we’d like to think of what we can do that would be more valuable for them and produce new data that might help us refine AUMI and make it more useful. So I think the relationship at the moment is merely in hibernation, so to speak.

Obviously it’s always contingent on funding – these are not cheap projects to run. When you work with children in general, and when you work with children with disabilities, there are added expenses, added staff that are necessary to have on hand. I’ve learned the hard way that every time you meet and talk to a teacher, that’s a teacher who’s not in their classroom; someone has to take their place. Someone has to pay for that. When you want to have a lunchtime meeting, you need to buy the lunch, you know?

JS: Right, right…

EL: So, you know, it takes a lot of planning to do these kinds of interventions and projects within the school system. The school system comes itself with a whole distinct layer of ethics and ethics protocols, of course.

JS: Sure

EL: So you have the university ones, which are rightly very rigorous when you are dealing with children who can’t even give passive consent, because maybe they have serious communication disabilities, right? So, it’s not a project you get up and running in a long weekend, you know. {laughter}

JS: Right, right, sure. Great. So, you mentioned one change, being able to change the size of the dot. Over the course of… so backing up – around when, how long has it been since the development came over to McGill’s side? And made that jump from, I guess it was, RPI (Rensselaer Polytechnic Institute) to McGill for the developers? Was Ian the first developer here? Or were there a couple before?

EL: No no, there were developers before Ian. Again, there always have been, and continue to be, some independent developers, of course. And since we started working on both desktop versions and iOS and Android versions, there has been kind of a bifurcation of development teams. Of course they need to talk to each other.

JS: Right.

EL: I think it’s about ten years now here at McGill. I’m terrible with names, I wish I could remember everyone. I think Ian may have been the third actually.

JS: There was… who was it, Aaron Krajeski? (mispronounced)

EL: Yes…

JS: And I forget the other name. Oh, I have it actually on my list. I don’t know these guys personally: Aaron Krajeski…

EL: Yes, I remember him.

JS: …and Chuck Bronson are the names I have written down.

EL: Yeah, Chuck Bronson…

JS: I don’t know if they were McGill guys or not?

EL: Aaron I unquestionably remember was a McGill guy. Chuck rings a smaller bell, I’m afraid. {laughter}

JS: So, yeah, it’s been some time. So my follow-up question was going to be: what sort of changes have you seen that actual technical development go through, or what are some of the milestones that you saw that were meaningful – not only in the development of the technology, but that also matched or reflected its usage in the field, through the Mackay project or other places?

EL: Well, from a pure design perspective…

Let’s back up for a second. Motion-to-music technology is now quite old, right? We didn’t invent some cutting-edge device by going from motion to music, right? Our technical challenges are not at the level of “How do you do it?”, but are at a) the level of how to do it on the kinds of devices that get donated to centers for kids with disabilities, like old Windows ’97 machines – how to do it without using motion tracking cameras, and doing it on the probably dollar-fifty – if it’s that much – camera that is built into your computer; and how to do it so that we don’t need me, let alone someone like you, there when it’s being used. So really thinking of the end user.

Now the ultimate end user – and we still have not solved this problem – is the kid in a wheelchair who might not have use of their arms. They can’t power the device on; they can’t use it! But the problem we have worked on refining is the challenge of how you build in as many functionalities as the community you’re working with would like to have, like changing the color. Not everyone wants to change the color; most of the kids don’t have a visual impairment and don’t need to do that. So there’s a whole control issue – you don’t want to complicate things for them by having to, like, “First choose NO change of color!” or something like that.

So how do you build all these functionalities in but keep the interface simple enough that ye olde parent, ye olde teacher, who might not be particularly computer savvy, can use it? We really got to the point where we had these conversations – you were in on some of them – Will they know what a drop-down menu is? Can we actually have nested, multiple drop-down menus? Will they ever discover that there’s this functionality? Will it be too complicated?

I’ve thought a lot about – you know, AUMI is free, right? It’s designed primarily by researchers and a university consortium. That it remains free is crucial to me. And I’ve always asked myself, what would you get if it was designed by a company? And you paid 50 bucks for it, say. It’s not clear you’d get a better program or device, or app, but you’d probably get a phone number that you could call 24-7 and say, “I can’t remember how to get to the presets that I saved for the 3 kids I’m working with, right? Walk me through it.”

So, when you don’t have that, it puts an even greater premium on the device itself being really self-explanatory. So one thing I think we’ve worked hard on (when I say we, I should say developers like yourself have worked hard on) is navigating between that rock and a hard place, which is “Keep It Simple, Stupid!” but “Oh, there’s this new thing we want it to do!”

That’s hard! And I think with respect to many particular problems we’ve dealt with them, but it continues always to be a problem. Every time we get a new version, it’s complicated by the following fact. We have any number of therapists working in the field. Leaf Miller would be a perfect example; no one knows more about how to use this device with kids than she does. She’s been there from ground zero. I’m in AWE watching her use this device with these kids, right? What this means is she knows inside and out the version she’s been working with, right? Then you give her a new version! Suddenly, it’s not working the way she’s used to, certain parameters are not where she thought they were, it’s reacting slightly differently. So there’s always this trade-off – in principle a new version is better, right? But maybe it’s not better if you’ve worked for five thousand hours with the last one and you really know how it works! Right! {laughter} You know. So that’s a challenge.

JS: Sure. Well, it’s like any instrument: if you have your old guitar, you know all its imperfections and that’s what makes it what it is. And so if you get something else, you know, yeah.

EL: I mean, I remember the very first meeting I had with a bunch of graduate students; they weren’t all tech graduate students, but some were. And I demonstrated AUMI to them. Look, I was new too. I was thinking my way through these issues, as I continue to, but I was thinking through these issues for the very first time. So I demoed it, and I said, next week come back with a list of cool things you think it could do. When I say “cool” I don’t mean, like, cool in that sense – we had discussed that it’s for use with kids with disabilities. What would be functionalities that you think would be useful to build in, which, given your knowledge, whatever that might be, aren’t fantasy, are in the realm of the possible, right?

And everyone came up with these lists and the lists were really interesting. They were fatally flawed, {laughter} in lots of ways! As were many of my ideas at the time.

In what ways? A lot of them focused on turning it into a cool, creative, musical kind of instrument. They were thinking like musicians – they were thinking like able-bodied musicians. So all these functionalities – I said, this is a kid who doesn’t understand cause and effect, and now you want to give them, like, effects pedals so they can play Hendrix? No no! Or, this might induce seizures in a kid, or… So, not all of them, but the vast majority of them, would make it a much more interesting device for the person that came up with the descriptions. And look, we can’t put ourselves in the subjectivity of the end users, which is why we need them to power the changes and development to the degree that is possible. It’s very hard. That’s why we have to work really closely with caretakers and parents and teachers.

I remember, two early experiences of AUMI with me and a child with disabilities stand out. One was this: I was working with a young girl, and I was having trouble finding sounds that seemed to attract her attention. AUMI allows you to have different sounds triggered by the device, right? It could be instrumental sounds, but it could be anything! She didn’t seem to like instrumental sounds, so finally I was like, okay, I’m going to try the sound of a dog barking, because we have those in there. So I did that and turned the device back to her, and she started moving and producing the barking, and she started letting out what I thought were hysterical giggles. I thought she was really happy. I was thinking to myself – I was well chuffed – “Figured it out, Eric, there’s the sound for her!” Suddenly, one of her caretakers runs over and says, “Turn that off, right away!” It turned out that she’s terrorized by dogs! She had been attacked by a dog. But her disabilities manifested themselves such that when she made sounds that we coded as laughter, she was really upset and scared. Who would have known that? Of course her caretakers knew that; her teachers knew that. So that was one kind of mistake, where it was a mistake for me to read into her subjectivity that she’s happy now.

Another was when I was working in Greece for three days with occupational and physical therapists and special education teachers there, demonstrating AUMI. And I was working with one particular young teen in a wheelchair. I had worked with him for two days, for hours each day, trying to get him to literally trigger a sound via his limited range of movements. I just wasn’t getting anywhere, so I was like, okay, it’s not going to work with this kid. So I stopped. And his teacher came up and said, “Give it a little more time, give it a little more time.” I’m thinking to myself, “This is day two. I’ve been doing this for 8 hours with this kid. Nothing’s happening!” Right? About an hour later, the kid moved his head just a little bit – I can’t remember what sound, but like, “click clock!” whatever – and the kid smiled. And the teacher said, “There it is.” And I said, “There’s what?” And she said, “That’s how long it takes him to realize he’s caused a change in the world.” His sense of cause and effect and his own efficacy is so stunted, so to speak, because of his inability to change much in his environment, due to his lack of movement, that that’s how long it takes. Again, if it was you or I, you would’ve given up!

You know, you give AUMI to any one of your friends, and they know immediately; you may not even need to explain it to them! They may not know how it works, but they know they did that! You do this and you know: “Wow! I moved my nose and the sound changed!” That’s something we have immediate access to, due to the kind of agency we enjoy. But a lot of people don’t. So there are a lot of these kinds of lessons. And they crucially affect the development stage.

Because how the device is going to work for someone who doesn’t recognize that they just did that – right? – is going to be really different {laughter}, you know, than if it’s going to be an Atari game controller, you know. So, those are some of the things that we worked on getting better at, and have.

JS: Yeah, yeah, definitely. What about… AUMI is presumably filling a need for a musical instrument, or an interactive instrument, that can be used by people with all sorts of disabilities, or abilities. So how do you think, in the course of development, that has been approached? You know, right now you use this camera-based system, and one of the things I think is really neat about it is that the basic, fundamental design of it has largely stayed the same since it was first invented, in whatever year it was. It has sort of stayed this model of: you have a camera that detects motion, which moves a cursor across the screen and triggers sounds. How, in the course of development, have the developers – have we, collectively – considered or negotiated all sorts of different disabilities: not just movement disabilities, but cognitive, etc.?

EL: Well, I think in a couple of ways. One is simply by dividing it up. Some people are more interested in working with one kind of sub-community, others with others. And then you talk to each other. Someone like Jesse Stewart (Carleton University) uses the device in a whole bunch of creative ways with cross-ability groups, right? And so he’s thought a lot about that, and does really cool things with that. At the Mackay Centre School, we’ve primarily worked with kids with physical disabilities, and we’ve thought a lot about that. Some of it is just dividing up these concerns, and then you talk to each other and see what might be transferrable.

Another way is, of course, at the Mackay school – and this was a brilliant idea by the principal. At first she said, “You know? Let’s not just use it therapeutically; let’s not just use it with the kids with really limited motor control.” She said, “You want AUMI to really be a community-forming device? Let’s build the use of it into our curriculum, by all the kids.” She thought of it like this: storytelling time. When the story says, you know, “She JUMPED on her PONY and RODE across the field! Everyone make the sound of a pony!” – they’d use AUMI to make the pony sounds! Whether they were triggering it with their nose or their arms, or controlling it with their fingers because they have good use of their hands. So she was thinking about ways it could be used regardless of ability or disability, by all the kids. And that was really her recognition that that would be a way of AUMI unifying the classroom, and putting all the kids on equal footing.

To me, in a curious way, the main obstacle to all this – at least, the main obstacle I’ve faced in “selling” AUMI, so to speak – is the perception – I mean that literally and metaphorically – that the tracking doesn’t work. You know, you put the dot on the nose, and someone starts moving, and suddenly the dot goes off to their ear, or floats around the screen for a while. When you first describe AUMI to someone, and they see the dot not remaining rigorously affixed, they think it doesn’t work. Right? It’s a piece of junk, right? And then people don’t want to use it. They think, oh, these people are just beta testing some new university project; they can’t even get the damn cursor to stay, you know. When of course we know – now, there are some real issues there, of course – but we also know it kinda doesn’t matter. The cursor is sort of a stand-in for what is really going on, in terms of detecting motion within a certain field.

But it looks bad when we tell people, “We’ve got this sort of device! Put a dot on any part of the body, move that and it makes sounds,” and you put it on the elbow and suddenly it’s, it’s gone! So from a development side, I’ve often thought we need to spend more time on that, even though it’s kind of a false problem, you know {laughter}.
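[The point Lewis makes here – that the drawn dot is only a stand-in, and what actually drives the sound is motion detected within regions of the camera field – can be illustrated with a minimal frame-differencing sketch. This is an editorial illustration only, not AUMI’s actual implementation (the desktop version is a Max patch); the function and names are hypothetical.]

```python
# Illustrative sketch: per-cell frame differencing. Frames are plain 2D
# lists of grayscale values. The column with the most changed pixels is
# what would trigger a sound, wherever a drawn cursor dot appears to sit.

def motion_per_cell(prev_frame, curr_frame, grid_cols, threshold=30):
    """Count changed pixels in each vertical grid column.

    Returns a list of per-column motion counts for the current frame
    compared against the previous one.
    """
    height = len(curr_frame)
    width = len(curr_frame[0])
    cell_width = width // grid_cols
    counts = [0] * grid_cols
    for y in range(height):
        for x in range(width):
            if abs(curr_frame[y][x] - prev_frame[y][x]) > threshold:
                col = min(x // cell_width, grid_cols - 1)
                counts[col] += 1
    return counts

# Tiny worked example: motion only on the right half of a 4x4 frame.
prev = [[0] * 4 for _ in range(4)]
curr = [[0, 0, 200, 200] for _ in range(4)]
print(motion_per_cell(prev, curr, grid_cols=2))  # prints [0, 8]
```

Even when a drawn dot drifts, aggregate per-region differencing like this still registers where the motion happened, which is why the floating cursor is, as Lewis says, “kind of a false problem.”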

JS: Sure, right.

EL: And it’s a variable false problem. Another problem with AUMI is that there is a huge differential in the ability of those who use it in their practice to really use it well. Again I’ll use the example of someone like Leaf. Leaf walks in, it’s set up immediately, and boom! She knows the background, the colors that will work, the lighting, [finger snap] it just works!

I’ve been working with AUMI for 12 years and I’m still sometimes like, oh, oh, I think there isn’t enough contrast with this person’s shirt and the background, you know. It doesn’t work as well!

And I think there’s not a little variability there; there’s actually a lot of variability. And again, that affects even the ability to sell the device, so to speak, to folks who you think it would be useful for. If I wanted to walk into a new school and say, “I think this is great for you all, let me show you how it works,” I’d want Leaf or someone like that to demonstrate, not me! I’ll do an “okay” job – I’m not going to do the job of someone who has never seen it – but the effect would be different, actually. And that degree of variability, which I think is partially a function of having it work on cheap and old devices, is partially a function of the kind of video tracking software and hardware that you can use in devices like this, plus other kinds of variables. I suspect that different folks have very different experiences working with it. Then it’s hard: you hire some grad students who are occupational therapists, and you give them two weeks of training with AUMI, and then you say, okay, you’re now in this grade 2 class in the Mackay Centre School. You’ve got a 35-minute slot to get 6 kids in wheelchairs and get the devices hooked up and have some fun with them. It’s not an easy thing to do!

JS: Yeah, absolutely. The environmental variables are a real hard one to get past. One of the more recent things we did was the preset system for the desktop version, which is great, so now you can have a classroom of 12 people, save everything…

EL: You’ve figured out what works…

JS: …and send it back in. But if you go to the classroom across the hall…

EL: …or if it’s raining that day instead of sunny, or the kid comes in in polka-dots instead of a solid…

JS: …everything will be completely different. Yeah.

EL: And of course, I’ve always had this ongoing worry or question, which is: we’ve been working on refining it for a number of years. It’s more refined, unquestionably. But it’s sort of like with cars. You have a model, and it gets better every year, and it gets better every year, and you have new versions of it, but at some point, it’s done. And you come up with a new model car.

So when are we at the point – as I often ask you – should we still be doing this as a big fat old Max patch? Should we, you know, should we be web-serving it? Should it be something else? I wonder, at some point there must be diminishing returns to refinements. So in the back of my mind I’m always asking myself the design question, which would be: if you wanted to make something which does what AUMI does, and you wanted to start from zero, tomorrow, would you be writing it in Max? Would we be using the built-in camera? Who knows what all the questions would be, but…

JS: One thing is, and I’m really interested to interview Ian because wasn’t he, at a certain point, working on actual physical controllers, or some sort of module that had… I seem to remember him casting things in resin to have some sort of physical form that someone could move around.

EL: Well, yes, and we were worried about two questions at that time. The one which, to my memory, finally did get realized was one which triggered lights for kids who had auditory disabilities. So the movements would – he made a little device that would literally hang over the edge of the iPad or the top of the computer, with colored lights that would be signaled, so it was motion to light as opposed to motion to music.

But what he was thinking about – this would be the gold standard: I would love to have enough grant money, or a company or something. You’ve got this one-size-fits-all generic device called AUMI, and all its problems are to some degree due to its one-size-fits-all-ness.

[interrupted by phone call]

The ideal device would be: okay, you know, you’ve been working with these four kids; they have different kinds of disabilities, different degrees of severity; they’re into music; you’ve been working with them with AUMI. But now, I’m going to make an interface for this kid, right, who normally can only use head switches. I’m going to make a different interface for this kid, right, who can use 2 fingers on one hand. I’m going to make a different interface for this kid, which has to be totally eye-tracking, you know. So I think that was the sort of idea. I think if we fast-forward some number of years into the future, with 3D printing and things like that, we’ll be in a position to make really custom interfaces, where maybe the software guts are interchangeable but the interface itself is quite variable.

That would be an amazing thing for kids and their parents. Imagine if you just filled out a form – this is the kind of motion that my kid can make and likes making, this is what they’re interested in – you mailed it off, and you got a 3D-printed AUMI interface in the mail two weeks later that just plugged in, you know?

JS: Yeah, yeah.

EL: Some of the AUMI difficulties we face are because it is a one-device-fits-all kind of – you know – and kids with auditory disabilities vs. kids with visual disabilities vs. kids who move too much vs. kids who move too little, these are really widely different kinds of challenges for a physical interface that is going from movement to sound. And I think it’s amazing it works as well as it does across so many modalities of ability!

JS: Yeah, and it always is sort of that balance of keeping it absolutely simple so anyone can use it, it’s easy to set up, and it runs on anything. Because as you’re talking I’m thinking, you know, shit, we have three 3D printers within half a block of this place. So we could do all these custom interfaces – and that’s great – until you get to the classroom with twelve kids and start handing them out and plugging them all into whatever, and three of them work, six of them need to be reconfigured, two of them just don’t work at all, so…

EL: You know, these are all real problems, the sad thing about them is, they are all real easy to solve, in one sense of “easy”. If I had eight grad students and a quarter million dollar printer, and enough grant money, we literally could do that tomorrow. We’d be able to outfit every kid in that school with an interface, and it would be amazing!

And you know, it wouldn’t cost that much money, right? But you know, real money, but it wouldn’t be like “Oh, this would be impossible, this would never happen” amounts.

And it would be, I mean in my world, we’d be doing a much more valuable service than I suspect a lot of projects that eat up a lot of money {laughter} end up producing. I don’t mean to diss other people’s projects, but you know what I mean; the social good would be pretty immediate.

JS: Yeah yeah, sure. And that is one nice thing that I think Ivan took the reins on with the new version that he did, was to really – from the ground up – start with this modular idea and then, some of the work I did.

I don’t think I ever gave you the actual functioning version of the haptics demo that we did for it, but it was basically the new version where, instead of the camera tracker, there was this haptics module, used along with the mid-air infrared haptics array that we were demoing at IDMIL (the Input Devices and Music Interaction Laboratory at McGill University) for a while. So just a simple different type of interface that tracks your hand over a haptic array and gives you some feedback when you trigger things. But it came from this idea that you have different modules that, in theory, you should be able to swap out based on whatever you need for an input and what you want to output.
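[Editor’s note: the swappable-module idea described here can be sketched in code. This is a hypothetical illustration only – the class names (`InputModule`, `CameraTracker`, `HapticTracker`, `SoundMapper`) are invented for the example and do not come from the actual AUMI codebase, which this sketch does not represent.]

```python
# Hypothetical sketch of a modular design: any input module that reports a
# normalized (x, y) position can be swapped in front of a common sound mapper.

from abc import ABC, abstractmethod

class InputModule(ABC):
    """Any tracker that reports a normalized (x, y) position in [0, 1]."""
    @abstractmethod
    def read_position(self) -> tuple[float, float]:
        ...

class CameraTracker(InputModule):
    """Stand-in for camera-based movement tracking."""
    def __init__(self, position=(0.5, 0.5)):
        self._position = position
    def read_position(self):
        return self._position

class HapticTracker(InputModule):
    """Stand-in for a mid-air haptic array tracking a hand."""
    def __init__(self, position=(0.9, 0.1)):
        self._position = position
    def read_position(self):
        return self._position

class SoundMapper:
    """Maps the horizontal position from any input module onto one of N sound zones."""
    def __init__(self, n_zones: int):
        self.n_zones = n_zones
    def zone_for(self, source: InputModule) -> int:
        x, _ = source.read_position()
        return min(int(x * self.n_zones), self.n_zones - 1)

mapper = SoundMapper(n_zones=4)
print(mapper.zone_for(CameraTracker()))   # x = 0.5 -> zone 2
print(mapper.zone_for(HapticTracker()))   # x = 0.9 -> zone 3
```

The point of the sketch is that the mapper never knows which tracker it is reading from, so a new input modality only has to implement `read_position`.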

EL: I mean, in my mind it would be amazing if labs like the one you’re associated with and Marcelo Wanderley (director of the IDMIL), who is a part of IICSI and has been instrumental in helping us, not just recruit students but use resources and expertise – it would be amazing if every project thought, for some amount of time, about: “Hmmm, who could use this interface? And what could I do, that might not be that hard, to broaden the community of people who can use this interface? And is there some way, with some modification, perhaps serious modification, but generically the same interface, {laughter} it could become a highly adaptive interface?” We’ve talked about this, you know: if the T-stick (a musical controller developed at the IDMIL) was in the shape of a U it could sit in front of a wheelchair and rub against it. I think a lot of the digital interfaces that come out of your lab, and I suspect similar labs worldwide – were they even to spend a little bit of time. You have to spend it early on, right, because you’ve often discovered, as soon as you’ve made something, “Oh! If we had only done this, it would be real easy now to modify it in a particular way, but we didn’t.” If people thought early on, even if it was – “We’re not going to do that now, but we’re designing it so that in the future, when we have the time, the resources, the will, or the money to do it, we can do it” – we’d have a lot of potential adaptive interfaces out there.

At the same time you’re trying to adapt a traditional instrument so someone can play it with one hand and control a laptop – a control surface or some sort of signal processing – with the other, you’re now designing an interface that someone with one hand can use, you know?

But if you’ve thought about, other {laughter} …you know. I don’t think it would be that hard, {laughter} if you know what I mean? But it takes even remembering to do it, very early on, you know.

JS: Yeah, and this, we had mentioned, as far as an upcoming project in my lab with a couple of my colleagues. We’re going to be designing some new interfaces for – what was it, a mobile workshop for interactive music making – and so what you’re saying is totally fitting: if we sat down early and said, okay, is some of this adaptable, or could it be thought of towards accessible music making, at least we’d put in those little, whatever it was, those little hooks that we could go back to later and say, okay, yes, we left the door open.

EL: Every instrument is an adaptive instrument. The problem is, historically too many of them are designed to a normalized sense of what a body is. But they’re all adaptive! So if you just think in those terms! Of course you’re designing an adaptive instrument! I’m designing an instrument that is adaptive to someone who controls both of their hands, say – but that’s an adaptation! I think if you just think like that to begin with, it already opens up conceptual space for thinking about what you’re designing in different ways, you know. A chair is adaptive. If you were 7 foot 6 this would be a terrible chair. If you were 2 foot 6 this would be a terrible chair. And we talk about this, we know this in industry. There’s this notion of the “standard body” and that’s what you design things for, cause that’s how capitalism works, cause that’s what you’re going to sell! {laughter}

Sorry, that’s a digression! {laughter}

JS: {laughter} Let’s push towards the end here, we’re already going on towards an hour. But just to kind of wrap it up – and we’ve already talked a little bit about it – what do you see as both the immediate and long term goals and directions for the future of not only the technical development but the AUMI project overall? Where do you see it going, where do you want to continue to drive it towards?

EL: I’m not quite sure. I’ve been thinking a couple things, though. I think, parallel to continuing to refine AUMI the way we are, we now collectively have a lot of experience – maybe as much experience as anyone working in this area in the world – on adaptive musical interfaces and their role in community formation, collective play, therapy, and so on. And we’ve learned so much about that from working on AUMI that we should take this knowledge and design other kinds of interfaces – design, you know, sponge balls that trigger sounds when you bounce them, you know, just other kinds of interfaces! One thing that I noticed when I worked at the Mackay Centre School, when I was there a lot, was how often something would come up that would make a really cool interface.

I’ll give you one example. They work really hard to get kids to use their walkers and wheelchairs on their own. That’s an important bit of autonomy. If you need a walker or wheelchair and you can use it yourself, you can get around in the world! Crucial for them to learn to do that! For a lot of kids, given their disabilities, this is a hard thing to do! So I was like, what if we sonified it? What if every time the wheelchair turned, it would make a really cool sound effect? And every time their walker goes a certain distance it gives them an encouraging message? So, the kind of technologies we employed in AUMI would make a lot of those things really easy to do. So there’s a whole range of ways in which, you know, are those musical interfaces? In a sense, yes! So I would like to see us broaden the basket, so to speak, of the kinds of interfaces, that we develop.
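[Editor’s note: the sonification idea Eric sketches here – turns triggering sound effects, distance triggering encouragement – can be illustrated in a few lines of code. This is purely an illustrative sketch, not an existing AUMI feature; the class name `MobilitySonifier`, the thresholds, and the event strings are all invented for the example.]

```python
# Illustrative sketch of sonifying wheelchair/walker movement: a turn past
# some threshold triggers a sound effect, and each metre travelled triggers
# an encouraging message. Real audio output is mocked as event strings.

class MobilitySonifier:
    def __init__(self, turn_threshold_deg=30.0, distance_step_m=1.0):
        self.turn_threshold = turn_threshold_deg
        self.distance_step = distance_step_m
        self._heading = 0.0    # last heading that triggered a sound
        self._distance = 0.0   # last distance that triggered a message
        self.events = []

    def update(self, heading_deg: float, distance_m: float):
        # A big enough change in heading counts as a turn.
        if abs(heading_deg - self._heading) >= self.turn_threshold:
            self.events.append("play: turn sound effect")
            self._heading = heading_deg
        # Every metre of progress earns an encouraging message.
        if distance_m - self._distance >= self.distance_step:
            self.events.append("say: great job, keep going!")
            self._distance = distance_m
        return self.events

s = MobilitySonifier()
s.update(heading_deg=45.0, distance_m=0.4)   # big turn -> sound effect
s.update(heading_deg=45.0, distance_m=1.2)   # a metre travelled -> message
print(s.events)
```

In a real deployment the `update` call would be fed by whatever sensors the chair or walker carries, and the event strings would be replaced by actual audio playback.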

Look, AUMI is not for everyone! You need to have an iPad, you need to have a, you know it’s… there are ways in which it’s cumbersome if you don’t have an environment where you can set it up easily, where you have enough time, and all sorts of things. Okay, what else can we design? What can we design that’s quick and dirty, say, but kinda fun, that facilitates collective play? So that, I think is something I’d like to see the group do down the road.

JS: That’s a fun direction to take it in, just from the design standpoint!

EL: {laughter} Yeah, right.

JS: Yeah, that’s great. Alright then, let’s call it good!

EL: Works for me!

JS: Thanks Eric! We’ll call it a wrap!