I spent the last few days in Sheffield, at a children’s media conference entitled ShowCommotion. This was my first engagement with a new industry, and it featured just the same amount of excruciatingly awkward networking practised by solo delegates at conferences throughout time and space. *sigh*.
It was a really good little package, all in all, with the chance to meet up with some people I hadn’t seen in about a decade, and introduce myself, whilst curling my own toes, to some new ones. I also got to hear some smart people say stuff. Now that I’m back in the cocoon of my office I can commence with the cogitation.
A chap called Paul Tyler produced a great session entitled ‘The Cross Media Comfort Zone’. It was basically about new technologies and how these hold potential for ‘us’ (as in children’s media producers) to create new things for kids to play with. He and his panel examined a variety of the latest technologies – a big area of interest was ‘Augmented Reality’ (frankly all a bit ‘meh’ at the minute, but with loads of potential) – and several examples of soft tech.
One that has stayed with me was the work done by some German students, where an incompetent-looking robot asked people to point it – literally, by pointing – in the right direction: it used image recognition to read the body shapes of the people it was asking as they stood before it, and pointed the way.
Another standout thought from this session was something put forward by Dom Mason. He made an interesting point about how gestural interfaces will mean we no longer have to learn a series of difficult, obtuse thoughts and commands to engage with our computers (a word which will itself come to seem quaint and unnecessary).
“Ok, so… go to File, choose Open, select File… oh, where’s the file? Hang on, I didn’t put it there… it’s on the E drive… that means I have to go back here…”
This all seems easy enough, but then we’ve learnt what those words mean in the context they’re being used. When we have gesturally aware computing – Microsoft Natal, the dumb-looking robot – available to us, we are also removing the user’s engagement with the structural principles of computing, using soft, clever, tactile technology to soften the blunt edges (like forgetting which drive the file we want is on).
This got to me, and on the train back home, I figured out why.
We are paying deference to the user’s inability and building technologies and interfaces which will magnanimously ‘take the blame’ for our inability to locate what we want.
Is this such a good thing? I’m not sure it is. Doesn’t being wrong provide us with a learning experience? (Even if it’s only remembering where we usually keep our stuff.)
What happens when we never have to be wrong again?