Friday 24 July 2015

The sound of silence: Using technology to recover sound from inanimate objects.


If a tree falls in the woods and no-one is around to hear it, does it make a sound? It is an age-old conundrum which at its heart asks: is a sound the vibrations that travel through the air, or is it only a sound once those vibrations are interpreted by humans or animals? The remarkable work of Abe Davis and his colleagues has produced a technology which may confuse the matter even further, yet may prove useful in forensic science, surveillance and history, to name but a few fields.

Going back to the tree problem: when the tree falls, it creates vibrations which travel through the air to our ears. The vibrations are channelled through the ear canal to the eardrum, making it physically move, and the inner ear passes this information to the brain, which interprets it as sound. Even if we are not in the wood, those vibrations are still present, and when they come into contact with other objects they cause them to vibrate and move on a micro scale, so small that the human eye cannot identify the movement.

Davis et al., using a high-speed camera running at thousands of frames per second, have managed to pick up movement at the scale of a micrometre (a ten-thousandth of a centimetre) from objects which have been exposed to sound. Even though the movements are far smaller than can be represented by a single pixel, Davis's software looks for minute changes which occur across the whole image and translates these movements back into sound. In his TED talk, he demonstrates this by playing 'Mary Had a Little Lamb' through a loudspeaker at a plant, then playing back the sound recovered from the footage. It is far from the crystal clarity we expect from modern gadgetry, but it is clear enough to hear the original tune without straining.
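
Out of interest, here is a toy Python sketch of the general idea, not Davis's actual algorithm: his method measures sub-pixel phase shifts with complex steerable pyramids, whereas this crude stand-in simply treats the mean frame-to-frame intensity change as a one-dimensional audio signal. The filename and parameters are illustrative only.

```python
# Toy "visual microphone": turn tiny global changes in high-speed video into
# a 1-D audio signal. A crude proxy for Davis et al.'s method, which uses
# complex steerable pyramids to measure sub-pixel motion.
# Requires: opencv-python, numpy, scipy.
import cv2
import numpy as np
from scipy.io import wavfile

def recover_signal(video_path, out_wav="recovered.wav"):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)  # a real capture would be thousands of fps
    prev, samples = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Signed mean difference: a stand-in for global micro-motion
            samples.append(np.mean(gray - prev))
        prev = gray
    cap.release()
    if not samples:
        raise RuntimeError("no frames read from " + video_path)
    sig = np.array(samples)
    sig -= sig.mean()                  # remove the DC offset
    sig /= (np.abs(sig).max() + 1e-9)  # normalise to [-1, 1]
    wavfile.write(out_wav, int(fps), (sig * 32767).astype(np.int16))

recover_signal("plant_highspeed.mp4")  # hypothetical file
```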

Although the technique is in its infancy, Davis has demonstrated its effectiveness using cameras at range and through obstacles such as glass, and has even adapted the technology to run on a shop-bought camera. Albeit less refined, the sound from the shop-bought camera is still clear enough to identify the tune being played. With further refinement of this method, and perhaps enhancement of old silent film, we may one day hear voices that were thought lost, recover sound from a crime scene using CCTV footage, or use ordinary cameras as surveillance devices.

So, in answer to the original riddle: a tree falling in the woods does make a sound when no-one is there to hear it (as we knew it would). Maybe the question should be changed to "If a tree falls in the woods and no-one is there to hear it, or to have software analyse the video footage of the event, does it make a sound?"





Thursday 9 July 2015

Piercing perception, part 2: The plug and play brain.

Where is that damn installation disc?

I remember a time, in the not-too-distant past, when every peripheral you bought came with an installation disc containing the vital drivers required to allow the computer to make use of the device. Without them, your gadget was no more useful than a rock tethered to the computer by string (or, if the device was wireless... just a rock). Each new computer required the user to scrabble about in old boxes, disc spindles or CD wallets to find the right disc. Failing that, a trawl through the manufacturer's support pages was required to track down the specific software that told the computer how to interpret and make use of the electronic signals being sent from the input device. Nowadays, thankfully, you plug a mouse into the USB port and the computer is already installing it, ready to use in seconds. Marvellous.

If we were to consider our eyes, nose, ears, tongue and nerves as input devices, and our brain as the computer, then we already know that we have the right drivers to make sense of the signals we receive from them. However, if the brain can, as the computer does, find its own drivers, can we make use of prosthetic peripherals to enable us to sense the world in ways which have never been achieved before?

Sense and Sensing ability.

In the last post, Piercing perception, part 1: A mole new world, I discussed how we perceive only a fraction of what the universe has to offer, due to the restrictions in our ability to process sensory information, as well as our inability to interact with the happenings of the universe on a macro or micro scale (in a meaningful way) without the backing of a well-stocked science lab. I also discussed the super senses of animals and how technology could potentially harness their abilities, hypothetically allowing us to experience the world in new and exciting ways: expanding our umwelt (the world as it is experienced by a particular organism).

Artificial Inference



In the TED talk Can we create new senses in humans?, Eagleman (2015) explains that we already know our brains are capable of adapting to receive information from electronic devices, stating that many thousands of people can hear thanks to cochlear implants, or see thanks to retinal implants. In a cochlear implant, a microphone picks up sound and turns it into a digital signal, which is relayed to the inner ear. Likewise, with a retinal implant, a camera captures images and turns them into digital signals which are directed to the optic nerve. The brain can adapt to these new devices, finding its own drivers to make sense of the new signals.
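
To make the 'microphone in, electrode signals out' chain concrete, here is a minimal Python sketch under stated assumptions: real implant processors are far more sophisticated, and the band count and frequency range below are illustrative, not any manufacturer's design. The core idea, splitting sound into frequency bands whose envelopes would drive individual electrodes, is broadly how such devices work.

```python
# Minimal sketch of cochlear-implant-style processing: split audio into
# frequency bands; each band's amplitude envelope would drive one electrode.
# Requires: numpy, scipy.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def band_envelopes(audio, rate, n_bands=8, lo=100.0, hi=8000.0):
    """Split audio into log-spaced bands and return each band's envelope."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    envelopes = []
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=rate, output="sos")
        band = sosfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))  # amplitude envelope
    return np.stack(envelopes)                   # shape: (n_bands, n_samples)

# Illustrative usage: a 440 Hz tone standing in for microphone input.
rate = 22050
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 440 * t)
electrode_signals = band_envelopes(audio, rate)  # one row per electrode
```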

But how does the brain speak digital? How does it translate these signals into something more familiar? The answer is that when the brain, for example, sees an object, it does not really see anything at all: it simply receives electrochemical signals from the eyes. Likewise, when you hear something, the brain hears nothing but receives electrochemical signals. The brain sorts out these signals and makes meaning from them. It does not discriminate between kinds of data; it takes in everything and then figures out what to do with it, which Eagleman explains provides an evolutionary advantage, allowing "...Mother Nature to tinker around with different types of input channels". So perhaps if we had the heat pits of a snake, the electrosensors of the ghost knifefish, or the magnetite which helps some birds navigate, our brains would adapt to what they pick up, allowing us to perceive our world in new ways.
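
As a thought experiment in code, here is a hedged sketch of 'sensory substitution', the principle behind devices like Eagleman's vibrating vest: a signal the body cannot naturally sense (here a compass heading, standing in for the birds' magnetite) is mapped onto a channel it can sense (an imaginary belt of vibration motors). Everything here is hypothetical and purely illustrative.

```python
# Sensory-substitution sketch: map a compass heading onto an imaginary belt
# of vibration motors, so the wearer could 'feel' north. Purely illustrative.
import math

N_MOTORS = 8  # hypothetical belt of 8 motors spaced evenly around the waist

def heading_to_motor_levels(heading_degrees):
    """Return per-motor vibration intensities (0..1); the motor pointing
    closest to magnetic north vibrates strongest."""
    levels = []
    for i in range(N_MOTORS):
        motor_angle = i * 360.0 / N_MOTORS           # angle from the wearer's front
        world_angle = heading_degrees + motor_angle  # the direction this motor faces
        levels.append(max(0.0, math.cos(math.radians(world_angle))))
    return levels

print(heading_to_motor_levels(45.0))  # wearer facing north-east
```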

In summary.

Our brains do not see, hear, taste, smell or touch, any more than a computer can see the digital feed from a camera. The computer just takes patterns of electronic signals and sorts, uses and stores them in such a way as to derive meaning. Our brains do the same. They are not fussy about what type of information they receive; they just find a way of using it. This versatility allows those who have sensory impairments to experience the world as they would without their impairment, imitating the peripherals which time and evolution have bestowed upon us. The question now is not whether our brains are capable of receiving digital input, but "How adaptable are our brains?", "Can our brains cope with new sensors, which evolution has denied us?" and "How will this alter our perception of the world?".



Images

MRC (2012), Cochlear Implant, viewed 9th July 2015, <http://www.mrc-cbu.cam.ac.uk/improving-health-and-wellbeing/cochlear-implant/>

Shutterstock (2012), altered by Novis, J. (2015) Plug and play brain, viewed 9th July 2015, <http://images.gizmag.com/hero/brainpolymer-1.jpg>








Friday 3 July 2015

Piercing perception, part 1: A mole new world.

Introducing the umwelt.

You could be highly poised, caffeinated and keen; the type of person who remembers even the smallest detail that others overlook, and still, by the very nature of being human, experience only a sliver of what the universe has to offer. Our capacity to understand our surroundings is limited by our five senses (arguably, there are more), our size, and how we manipulate science and technology. This is known as the umwelt: the world as it is experienced by a particular organism.


Our World.

Our senses of touch, taste, smell, hearing and sight are extremely efficient at providing us with the information necessary to exist. However, we know that surrounding us are microwaves, x-rays and radio waves (to name but a few) which bounce off or pass through our bodies constantly, and we have no natural way of being attuned to their presence. Likewise, our size restricts what we can perceive. As we attend to our daily duties we are no more aware of the plight of a bacterium than we are of a star dying in a distant galaxy. We are creatures who live in a world of a certain size, who can operate successfully without consideration for happenings on a micro or macro scale.

Despite our size and senses, we have evolved to deftly wield the tools of science and technology to expand our understanding of our universe. We can detect and utilise x-rays for medical reasons, microwaves for the convenience of cooking and look upon the stars and microbial matter with ease. However, even though we can artificially expand our umwelt, this leads to an intriguing question: What exists beyond it?


The world of the star-nosed mole.

Consider the star-nosed mole, found in the wetlands of Canada and the north-eastern United States...


...it is roughly the size of a rat, lives underground and in water, and is blind. Initially, we might think this creature has a very limited umwelt. It lives in the dark and the cold and cannot see: a pitiful existence, especially if you imagine how you would live, as a human, in these conditions. However, the star-nosed mole is an incredible creature that has the most sensitive touch organ of any mammal: its star-shaped nose. Each digit-like appendage (ray) has approximately 25,000 sensory neurons; in contrast, the human hand has only 17,000 (Smithsonian.com). By puffing out air underwater, the mole can use its star to sense where prey is; it can move each ray with speed, touch up to 12 different objects per second and detect electrical fields (Epic Wildlife).


A mole new world.

Great. Good for the mole. But what about us? Imagine if, through the use of technology, we could harness the sensitivity of the mole's nose. Imagine that we could sense electrical fields, or experience touch many times more keenly than we already do. This has an obvious benefit for those who are blind: the technology could allow an individual to map their surroundings using their own breathing (much as the mole uses its puffing) and the feedback from it to sense objects and people. Beyond this initial application, what if we could all sense our surroundings this way, layering the information we receive from the technology over that which we receive with our eyes? What if, on top of that, we could perceive electrical fields? Our perception of our world would be transformed completely, and so too would our interaction with it.

The star-nosed mole is just one example from the animal kingdom of a creature with fantastic abilities. Think how our world would alter if we had the infrared capabilities of the pit viper or the powerful nose of the bloodhound. With these superpowers we would be able to sense things that were there all along but had previously been inaccessible to us, thus expanding our umwelt. Such new insights could potentially yield breakthroughs in the natural sciences, medicine and technology.

Ahem! Excuse me... we already have the technology. What is your point?

We already have night-vision goggles, Geiger counters, thermometers, powerful telescopes, electron microscopes and so on with which to view our world. I agree. But we do not carry them about our person all of the time. The technology is limited to certain scenarios, allowing us momentarily to expand our perception. These devices are (and I do not mean to trivialise their worth) accessories to who we are and what we are capable of. The technology I speak of would be worn most of the time, as one would wear regular glasses, transforming our abilities instantly at our command. This could mean wearing something akin to Google Glass and saying "infrared vision" to see infrared, or saying "microscopic vision" to instantly zoom in and focus on minuscule objects. Or perhaps one eye could see the changed view while the other stays normal, permitting the user to create an augmented view, as in the sketch below. Having the power to experience the world at will, without the constraints of a science lab, would be as close as humankind could get to having extra senses without invasive alterations or the slow workings of evolution.
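
A hypothetical command-to-mode mapping for such a wearable might look like the following; the device, commands and modes are all invented for illustration, not any real product's API.

```python
# Imaginary headset: voice commands switch the overlay on one eye only, so
# the other eye keeps a normal view and the result is augmented, not replaced.
VISION_MODES = {
    "infrared vision": "ir",
    "microscopic vision": "microscope",
    "normal vision": "none",
}

def handle_voice_command(command, state):
    """Switch the (imaginary) headset's left-eye overlay mode."""
    mode = VISION_MODES.get(command.lower())
    if mode is not None:
        state["left_eye_overlay"] = mode
    return state

state = {"left_eye_overlay": "none"}
state = handle_voice_command("Infrared vision", state)
print(state)  # {'left_eye_overlay': 'ir'}
```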

Final thought.

In summary, there is enormous potential in using technology to reveal the aspects of our world which are currently hidden from us. That we may be able to toggle 'on' or 'off' powers which have, thus far, been confined to select species or the realms of fiction is not only exciting, but could enable rapid advances and discoveries in science and technology. An equally thrilling prospect is what else exists in the universe, exceeding what we can tap into with technology and lying beyond our collective knowledge. The nature of this mystery means that there are potentially limitless discoveries to be made from revealing our unseen world.



In my next post, Piercing perception, part 2: The plug and play brain (due 4th-6th July), I shall discuss whether our brains are capable of adapting to receive sensory information from new technology.


Images

Epic Wildlife (2014) Star-nosed mole, viewed 2nd July 2015, <https://www.youtube.com/watch?v=Egz2f5_Ip3U>

Novis, J. (2015) Perception infographic, viewed 2nd July 2015.

USSLC (2014) Bubble burst, viewed 2nd July 2015, <http://usstudentloancenter.org/student-loan-bubble-not-going-to-burst/>



Monday 29 June 2015

Digital Empathy: How technology will read emotions.



As humans we have evolved to convey our emotions to each other, both to improve our social connections and to aid our survival. To this day we consciously and unconsciously contort, skew and manipulate the muscles under our skin to provide others with clues to our emotions. If we can read these visual cues, and tease from them an insight into the well-being of others, then can software read faces accurately enough to comprehend how the user is feeling?

Consider for a moment how much face time technology has access to during the course of a day: how many facial expressions you make in front of the television, your computer, games console or mobile phone; how many times you smile at a text message, grow frustrated with a tricky level of Candy Crush Saga or laugh whilst watching comedy. Now consider which technologies allow the device to 'see' the user's face: mobile phones, tablets and some consoles already have cameras attached. Potentially, our gadgets already have the tools to do this; they just need the means to understand what they see and to recognise which emotions are being displayed.

In a TED talk, Connected, but alone?, Sherry Turkle explains how software has been created to enable technology to achieve accurate facial recognition of emotion. Turkle describes how facial expressions are broken up into 45 segments called action units: for example, the corner of one side of the mouth is one action unit. When a device reads these action units, it combines them into a snapshot of an emotion called a data point, which is compared, through a complex algorithm, to a databank of over 12 million data points obtained from people of different ages, genders and ethnicities. By comparing what it can see to what it knows, it can tell the difference between even the subtlest of emotions (e.g. the difference between a smile and a smirk) with an amazing level of accuracy, and it adds each new encounter to its databank to enhance future learning. The software mimics, in a rudimentary way, how humans learn.
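
As a rough illustration of that pipeline, here is a toy Python sketch, not the actual software from the talk: action-unit measurements form a data-point vector, which is matched against a labelled databank and then added back to it. Real systems train classifiers over millions of points; the labels, sizes and random data here are invented.

```python
# Toy emotion classifier: compare a new "data point" (a vector of action-unit
# measurements) against a labelled databank, nearest-neighbour style, then
# learn from the new encounter. All data here is random and illustrative.
import numpy as np

N_ACTION_UNITS = 45  # e.g. one unit per mouth corner, brow, eyelid, ...

databank = {  # hypothetical stored data points, one array per emotion
    "smile": np.random.rand(100, N_ACTION_UNITS),
    "smirk": np.random.rand(100, N_ACTION_UNITS),
    "confusion": np.random.rand(100, N_ACTION_UNITS),
}

def classify(data_point):
    """Label a new data point by its nearest stored example."""
    best_label, best_dist = None, np.inf
    for label, points in databank.items():
        dist = np.min(np.linalg.norm(points - data_point, axis=1))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def learn(data_point, label):
    """Add a confirmed encounter to the databank, as the talk describes."""
    databank[label] = np.vstack([databank[label], data_point])

print(classify(np.random.rand(N_ACTION_UNITS)))  # e.g. "smirk"
```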

The practicalities of being able to read emotions would be life-changing in the field of special educational needs and disability (SEND). Someone with autism who struggles to recognise facial expressions could perhaps wear a technology such as Google Glass, which would not only read the facial expression of the person the user is looking at, but tell the user, in the display, what that person is feeling. Likewise, it could help visually impaired or blind people understand the emotions of others through audible feedback. Combined with the tone and delivery of speech, this would give a more accurate understanding of the speaker's emotion.
Additionally, emotion recognition could be used in education. If a student reading or interacting with a website or piece of software became confused, the technology could adapt itself to aid understanding by slowing the pace of the learning, repeating a part, or explaining the content in a different way, as in the sketch below. Alternatively, it could be used to build a profile of how best to teach a child, logging which programs provoke happier responses, free from expressions of puzzlement.
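
A minimal sketch of that adaptive loop might look like the following; it assumes an emotion classifier like the toy one above, and every state, threshold and action is invented for illustration.

```python
# Illustrative adaptive-lesson loop: choose a teaching action from the most
# recently classified facial expressions. Thresholds are invented.
from enum import Enum

class Action(Enum):
    CONTINUE = "continue at the current pace"
    SLOW_DOWN = "slow the pace and repeat the last section"
    REPHRASE = "explain the same content a different way"

def next_action(recent_emotions):
    """Pick a teaching action from the last few classified expressions."""
    share = recent_emotions.count("confusion") / max(len(recent_emotions), 1)
    if share > 0.6:
        return Action.REPHRASE    # persistently confused: try another angle
    if share > 0.3:
        return Action.SLOW_DOWN   # somewhat confused: ease off the pace
    return Action.CONTINUE        # engaged: carry on

print(next_action(["happy", "confusion", "confusion", "neutral"]))
```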

However, do we really want this invasion of our intimate feelings? All of us, to some degree, use technology to create a different version of ourselves when communicating or existing in virtual representation. We sometimes send a text message because we do not want someone else to know our true emotion, going further in some situations by adding emoji or emoticons to disguise or exaggerate our true feelings. We post our happiest, best photos to social media because this is how we wish to be thought of, even though we experience low points and struggles in our lives. We want control over what others perceive of us, beyond the control we can exercise in a one-to-one situation with another being.

Additionally, our reactions can be recorded in conjunction with what we are viewing, and the potential for using this data in marketing is massive. What if you received, instead of regular spam or junk mail (which you delete), focused emails and adverts optimised to provoke certain emotions within you? What if an advert for a product could be tailor-made to make you, an individual, happy? Does it mean that our ability to make conscious choices will erode (beyond that which it already has) under the powerful new psychological tools available to advertisers? Many of us wilfully sign away our rights to privacy by agreeing to the terms and conditions of social media websites, apps, or even loan applications, with the benefit being the free use of the service. But if you have to trade something for it, then the service is not free. The question we will have to face in the future is: will the services available to us be worth the information we provide to companies, regardless of their altruistic or profiteering tendencies? Or, more simply put: how much are our emotions worth?

Images

Image of digital eye from wallpapertube.com

Tuesday 23 June 2015

Safety using AR (Augmented Reality) and VR (Virtual Reality) in school.


In view of the growth of virtual reality and augmented reality, consideration needs to be given to the safety of their use with children, especially within the school environment.

However, before you decide to brand me a 'Luddite' and form a mob to obliterate every piece of electrical gadgetry I own, let me profess that I love technology: the freedoms it allows, the potential for learning, how it can inspire, and how it can allow us to shape our world according to our needs and desires. Despite its power to liberate an individual, it also has the ability to confuse, disorientate and, unsupervised, give access to worryingly inappropriate material. I am amazed by the rapid development of AR and VR through competing technologies such as the Microsoft Hololens, Oculus Rift and Samsung Gear VR, and I genuinely believe they will have a place in revolutionising the way children are educated in the near future.

Palmer Luckey (creator of the Oculus Rift) said in an article by Johnson (2015) that VR would predominantly be used in gaming for the next two years, and would then expand into the mainstream. In a previous post, I indicated the potential of this technology as a powerful tool for learning within schools; I still do. If Luckey is to be believed, there is a fair amount of time before sophisticated AR and VR enters our schools, and before it reaches the classroom, consideration should be given to its effect on the individual user.

Piaget (1929) believed that very young children find it difficult to distinguish between fantasy and reality, claiming that they do not have this distinction firmly in place until the age of 12. Morison and Gardner (1978), Flavell, Green and Flavell (1986) and Sharon and Woolley (2004) also agree that:

 "children often err in mistaking non-reality, such as fantasy, appearance, and illusion, for reality" (Woolley & Ghossainy, 2013)
If this is the case, then it could be argued that children between the ages of beginning school and starting secondary education may have difficulties in their semantic understanding of AR and VR. If a child cannot understand the difference between reality and alternative realities, then is the use of this technology appropriate? The media in recent years has been quick to judge (often without evidence) the negative effect that violent computer games and films have on the impressionable; however, this technology offers its own set of complications. Films and computer games, although they can draw the user in, are not designed to alter the perception of reality: a person playing a computer game is still aware that they are in their own room, whereas VR and AR are designed to immerse the user in a world which does not exist, or to skew reality. A logical assumption would be that as the user is more intimate with the virtual experience than with any other medium, the problems caused by the content (if any) would be greater. Although this theory is flimsy, I am sure it will be tested rigorously by psychologists as the technology becomes more readily available.

In addition, consideration needs to be given to how children with Special Educational Needs and Disabilities (SEND) interact with AR and VR. For example, those on the autistic spectrum may not only have difficulty understanding what they perceive to be true, but may also be distressed by overwhelming tactile and sensory information, from the mere presence of a headset or having their eyes covered, to sounds that are too loud or are layered over those from the surrounding environment. Equally, the technology should be assessed to see whether those with epilepsy can use the equipment safely, without an increased risk of seizure.


Despite all the concerns surrounding AR and VR, I believe we should not worry to the extent that Jeff Goldblum's character does in Jurassic Park, prompting thoughts such as:


"Your scientists were so preoccupied with whether they could, that they didn't stop to think if they should."

Woolley and Ghossainy (2013) believe that a "disproportionate amount of time" has been spent underestimating children's ability to distinguish between the real and the unreal. They believe that children from the age of 3 can recognise the difference between pretend actions and real ones, e.g. pretending to cook using a toy kitchen, or nursing a baby doll. Harris, Pasquini, Duke, Asscher and Pons (2006) also argue that most children understand not only that physical objects such as a chair or a ball are real, but that things they cannot see, e.g. germs, are also real. If this is the case, then the context in which a child learns about their world, and the guidance given by adults, is key to their interpretation and cognition. With regards to teaching, educators will require training in how to teach with this new technology and how best to tackle misconceptions arising from it.

Whereas earlier I noted potential problems in the use of AR and VR with children with SEND, this technology can also allow someone with a physical disability to be represented in a virtual world by an avatar with the same physical capabilities as any other player, regardless of their real-world condition. Furthermore, combined with the right software, it could help children who would benefit from sensory integration therapy, as it can let the user experience lights, sounds and different visual imagery.

In summary, I believe that AR and VR will be, as Luckey states, integrated into the mainstream, and thus will find their way into education. When they do, as long as they are supported by genuinely inspiring software which motivates students to learn through exploration and creation as well as demonstration, they will be a powerful teaching tool. This article was designed to briefly highlight some unanswered questions regarding children's use of AR and VR; in time, as the technology develops, we will be better placed to analyse and assess how children react to its use in school.

We exist in a world of technological marvels, where almost anything is possible. We just need to consider how this technology can best support and enrich the life of the user, finding the best way to respond to their needs.

References

Flavell, J.H., Green, F.L. & Flavell, E.R. (1986) Development of knowledge about the appearance-reality distinction. Monographs of the Society for Research in Child Development, 51(1, Serial No. 212).

Harris, P.L., Pasquini, E.S., Duke, S., Asscher, J.J. & Pons, F. (2006) Germs and angels: The role of testimony in young children's ontology. Developmental Science, 9, 76-96.

Johnson, E. (2015) Oculus Rift Inventor Palmer Luckey: Virtual Reality Will Make Distance Irrelevant (Q&A), re/code, [online] Available: <URL:http://recode.net/2015/06/19/oculus-rift-inventor-palmer-luckey-virtual-reality-will-make-distance-irrelevant-qa/> [Access Date: 23rd June 2015].

Morison, P. & Gardner, H. (1978) Dragons and dinosaurs: The child's capacity to differentiate fantasy from reality. Child Development, 49, 642-648.

Piaget, J. (1929) The child's conception of the world. London: Routledge & Kegan Paul.

Sharon, T. & Woolley, J.D. (2004) Do monsters dream? Young children's understanding of the fantasy/reality distinction. British Journal of Developmental Psychology, 22, 293-310.

Woolley, J.D. & Ghossainy, M. (2013) Revisiting the Fantasy-Reality Distinction: Children as Naive Skeptics. Child Development, 84(5), 1496-1510. doi: 10.1111/cdev.12081

Images

Geeky (2015) vr-classroom, image, viewed 23rd June 2015, <https://geeky.io/2015/06/08/vr-education.html>

Special thanks

Thank you to psychologist Irana Tarling for her help with this post.

Saturday 20 June 2015

Do students learn from computer games, outside of school?


Computer games have been a staple part of my life since I was the size of Mario (before he'd eaten a mushroom), beginning when my father purchased a ZX Spectrum computer. The power supply made it overheat constantly, the rubber keys lacked the satisfying tactile feedback of a decent click, and loading programs from cassette tape was an assault on the senses, complete with seizure-inducing horizontal loading lines and a screech like the death throes of a digital banshee. However, it was worth it to pilot a spaceship and blast aliens in Moon Cresta, or to navigate a boxing-glove-wearing egg man (Dizzy) through a fantasy world. Equally exciting was purchasing Spectrum magazines, which contained reviews, a demo tape of new games, and sections showing you how to create your own, inspiring children and adults alike to understand how computer programs work and encouraging them to experiment with code.


Nearly 30 years later, approximately 33.5 million people in Great Britain play computer games regularly, averaging 14 hours per week across computers, smartphones, games consoles and tablets (IAB, 2014). Research from Von Ahn and Dabbish (2008) shows that the average gamer spends approximately 10,000 hours playing computer games before the age of 21. McGonigal (2010) offers an equally phenomenal statistic: since Blizzard released World of Warcraft, gamers have collectively invested 5.93 million years solving problems in the virtual world of the game, roughly the time it has taken the human species, since first walking upright, to reach this point in history. Collectively, we are evolving digitally more rapidly than we ever have as a species.
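
As a rough sanity check of how those figures hang together (my arithmetic, not the researchers'): at the average of 14 hours per week, a player accumulates about 14 × 52 ≈ 730 hours a year, so 10,000 hours corresponds to roughly 13-14 years of play, entirely plausible for someone who starts gaming in early childhood and is counted up to the age of 21.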

But why have we invested so much of our time in the pursuit of pixel pleasure? According to Squire (2010), the reasons for playing computer games are to find enjoyment in being challenged, to exercise curiosity and to escape the real world, to some degree, in fantasy (Malone, 1981). Bruckman (2011) recognises that having the power to create, or be creative, within a game leads to an enjoyable experience. McGonigal (2010) agrees with Squire, Bruckman and Malone, saying that gamers have a greater image of themselves and what they can achieve in virtual worlds: they feel equal amongst their peers; their goals are purposeful and pitched at the right level of difficulty for them to better their skills; instant feedback on success is given regularly through points, tokens or levelling up a character; and players work together, forming friendships as they do so. In summary, if a player experiences challenge, curiosity, escapism, better self-esteem, equality, belonging, a strong sense of purpose, praise, recognition for hard work and a greater belief in what they can accomplish, it is easy to see why so many play for so long. Feeling good about yourself and what you do is pretty addictive... the real question might be 'Why would you want to stop?' (To be clear, I am in no way condoning non-stop gaming; sunshine, relationships with people and the big wide world are pretty darn wonderful.) However, understanding why people enjoy gaming so much may indicate why some (children especially) are reluctant to step back into the real world.

Interestingly, the benefits mentioned by these theorists are similar to the responsibilities of teachers under the Teachers' Standards (DfE, 2013).


This table indicates a link between the expectations of a learning environment within schools, the responsibilities of the educator, and the reasons why people play computer games. Such a similarity suggests that computer games could not only meet the aims of the DfE but also engage children enthusiastically in education through software, drawing on the expertise they have built up outside of school. However, it must be acknowledged that not all children play computer games, and that the gap in ability between a gamer (who plays 14 hours per week) and a non-gamer may be extensive. It must also be recognised that not all children like computer games, and individual preference may make this medium a less successful tool for learning, much as a wordy textbook is for a reluctant reader. Additionally, at the other end of the scale, there may be individuals who have genuine difficulty prying their nimble fingers away from keyboard, gamepad or mouse, reacting badly to being jettisoned back to reality; in such cases computer game usage will need to be monitored and adjusted to support the emotional needs of the student as well as their learning requirements. Despite these considerations, the positives of using computer games as a teaching tool are many and, if harnessed correctly, could motivate students towards enjoyable and rewarding educational experiences.

References

Bruckman, A. (2011) Can Educational be Fun?, Georgia Institute of Technology. [online] Available: <URL:http://www.cc.gatech.edu/~asb/papers/conference/bruckman-gdc99.pdf> [Access Date: 4th June 2015].

Department for Education (2013) Teachers' Standards, London: Department for Education.

IAB (2014) Gaming Revolution. [online] Available: <URL:http://ukie.org.uk/sites/default/files/cms/Infographic%20IAB.png> [Access Date: 4th June 2015].

Malone, T.W. (1981) Toward a theory of intrinsically motivating instruction. Cognitive Science, (4), 333-369.

McGonigal, J. (2010) Gaming can make a better world, TED. [online] Available: <URL:http://www.ted.com/talks/jane_mcgonigal_gaming_can_make_a_better_world?language=en#t-552334> [Access Date: 4th June 2015].

Squire, K. (2010) Video games in education, MIT. [online] Available: <URL:https://webertube.com/media/document_source/4681.pdf> [Access Date: 4th June 2015].

Von Ahn, L. & Dabbish, L. (2008) Designing Games With a Purpose. Communications of the ACM. [online] Vol. 51, (8). Available: <URL:https://www.cs.cmu.edu/~biglou/GWAP_CACM.pdf> [Access Date: 4th June 2015].
Images
HonoredShadow (2012) Dizzy, image, viewed 20th June 2015, <URL:http://forums.guru3d.com/showthread.php?t=371412>
Novis, J. (2015) Teacher Standards Theorist Table, image, viewed 20th June 2015, <http://www.jamesnovis.com/research/minecraft/reluctantwriters/HTML/3_computer_games_outside_of_school.html>
Walmsley, K. (2012) ZX Spectrum, image, viewed 20th June 2015, <URL:http://through-the-interface.typepad.com/through_the_interface/2012/10/my-new-zx-spectrum.html> 

Wednesday 17 June 2015

The exciting potential of the Xbox One, Hololens and Minecraft in teaching. Part 1.


I remember when I saw Star Wars (Episode IV: A New Hope) for the first time. I was wide-eyed, young and mesmerised by George Lucas' world of bizarre creatures, Jedi Knights and X-wings. Amongst this iconic cinematography was a scene which was fundamental in shaping my passion for technology and taught a valuable life lesson. It was not the shooting of Greedo the bounty hunter, nor the exploding Death Star, but the holochess game played between the meticulously groomed and conditioned Chewbacca and the bleeping, whistling, childlike droid R2-D2. The game was played on a circular chessboard, the pieces represented by holograms of animated aliens which brawled with each other when a piece was taken. I thought this technology was amazing and could only be achieved in a future far, far away. Little did I know, I was witnessing for the first time what we now refer to as 'augmented reality': a technology which superimposes a computer-generated image on a user's view of the real world, creating a new composite view. However, overshadowing the magic of the special effects was the warning from Han Solo, which rings as true as it ever did: 'Let the Wookiee win'.

Today I experienced the same rush of inspiration and wonder that Star Wars had first conjured, when I viewed YouTube footage of a particular presentation from E3, the most prestigious gaming expo of the year.

The console: Xbox One.
   The game: Minecraft.
      The new hardware: The Hololens.

After Lydia Winters' enthusiastic introduction, the film shows the user wearing the Hololens headset and playing Minecraft on the Xbox One. Instead of playing the game on a traditional monitor, the player, when wearing the device, experiences the game on a virtual screen which appears to be projected onto thin air in front of them. Impressive? Certainly. Yet, as genuinely exciting as this is, it was hard to see beyond the gimmick. Why, when we have been conditioned over years to own televisions, would anyone favour costly cyber specs over the technology they already own?

Arthur C. Clarke once said: 'Any sufficiently advanced technology is indistinguishable from magic.' This magic occurred shortly afterwards, when the player stood in front of a square table and cast his spell by saying 'Create World'. Instantly the surface of the table was covered in sand blocks from within the game, with a fence running around the perimeter. The sand then fell away into a virtual hole, revealing a 3D Minecraft world which rose up out of the tabletop, appearing as though it were a physical model. But this model was animated: lava flowed, minecarts whizzed around the rails, and the player's avatar could be seen jumping and running through the cubic landscape. Equally impressive was the ability to see through the walls of built structures, as though the user were a giant peeking through a window, or to see the subterranean chasms underneath the green hills.

Augmented reality, which has been exhibited through silver-screen trickery for years (Minority Report, Who Framed Roger Rabbit, Iron Man, Avatar, etc.), is finally emerging and becoming available to the masses as a way to manipulate and explore games. It is my opinion that this technology could change the way future generations of pupils access learning, offering exciting possibilities for engaging with subject matter: handling virtual artefacts from museums to aid history lessons, exploring a tabletop rainforest to learn about ecosystems, or understanding geometry by experimenting within a 3D augmented-reality space. Despite the opportunities this technology offers, it is not without its problems. Although it has been demonstrated at E3, I expect that, like other devices (Microsoft Kinect, Wii controllers and PlayStation's EyeToy), it is not as responsive or as polished as we are led to believe. Equally, as Wheeler (2015) implies in his book Learning with 'e's, it would be foolish to suggest that we can educate with these new tools without adapting the method and practice of teaching. As he states, students are:
'demanding - and expecting - new approaches to learning, approaches that incorporate technology' (p. 5)
Another consideration is that teachers and children alike will require training in how to use augmented- and virtual-reality equipment. This will take a considerable amount of time, which overworked and overburdened teachers, and over-assessed pupils facing a tightly packed curriculum, do not currently have within the United Kingdom. The final drawback is the cost of both the hardware and the software needed to implement these changes. Luckily, heavy competition among devices similar to the Hololens could drive prices down, but thus far there has been no indication of how much they will sell for. Traditionally, educational software has often been expensive and, in my opinion, of wildly varying quality and learning potential; only time will tell whether developers emerge who can create meaningful learning opportunities for students which are also value for money.

The video from the E3 show is available at: https://www.youtube.com/watch?v=xgakdcEzVwg

In my next blog, I shall discuss how the Hololens works in more detail, and in subsequent posts explore and critically analyse the potential for using virtual reality and augmented reality within the classroom.

References
Kotaku (2015) Minecraft Hololens demo at E3 2015 (amazing!), YouTube. [online] Available: <URL: https://www.youtube.com/watch?v=0MnRkPvIjKE> [Access Date: 17th June 2015].

Wheeler, S. (2015) Learning with 'e's, Wales: Crown House Publishing.

Images
Taken from a screenshot of footage from the above YouTube clip, from Kotaku.