Well, the Horizon Report helps flesh it out a little -- here's their definition:
The concept of blending (augmenting) virtual data -- information, rich media, and even live action -- with what we see in the real world, for the purpose of enhancing the information we can perceive with our senses... (p. 21)

And then they provide a nice link to this example:
So now I'm thinking I might have a better handle on it... it involves web cameras, phone cameras, or those awesome glasses the guy in the video is wearing (but does it require a bright red blazer? I hope so). The software matches the image the camera sees against a database of images and then layers that image with added data? (I think)
Does this sound right? If it does, then it sure sounds intriguing.

The report also states, "This kind of augmented experience especially lends itself to training for specific tasks" (p. 22). A perfect fit for much of what we teach!
So now time for questions...
1. Does anyone have a concrete idea of how we could use this emerging technology in library instruction? Is anyone currently using it?
2. Is our library (staff workstations, user computers, instruction labs) equipped to use this technology efficiently?
3. Have I completely misunderstood this technology (a distinct possibility!)? If so I'd love to be corrected!
Come to the Information Literacy Collaborative to discuss this and other questions raised by the 2010 Horizon Report on Tuesday, March 30, as part of the Current Issues Coffee Club series.