Earlier in the module we looked at virtual/immersive environments like CaveUT. We’ve also looked at networking (social and otherwise) – but where is it all going, and how will it benefit us (if at all)?
Applications for immersive environment technologies have been hard to identify in the past, apart from gaming and simulation. But it is becoming clear that RETAIL is the next arena where the reactive and immersive technology wars will be waged. IBM business partners recently predicted that next-generation stores will be:
Sense and respond environments that morph themselves to meet the temporal demands of customers’ immediate shopping objectives.
What does that mean? The IBM report talks about the immersive retailing experience being delivered via microenvironments that narrow the focus of shoppers’ experiences to “I” and “me.” How very egocentric this encourages us to be!
It will be a dynamic aggregation of flexible technology, real-time sales, and rich customer data … driven by the ability to evaluate inventory and business conditions, along with specific customer preferences, on the fly. An array of technologies will combine to create these environments, revolutionizing the shopping experience in stores and online.
Retailers will use integrated digital media technologies including:
- radio frequency identification (RFID)
- electronic shelf labels
- shopping cart companions
And these will be seamlessly connected to the consumer’s own personal device. Soon films like Minority Report might not be the stuff of science fiction, and we will enter (to a certain extent we’re already there) the age of ubiquitous computing.
This is science fiction, right? Last year the BBC reported that Minority Report-style ads would be hitting the UK this year. It hasn’t happened yet – and Wired magazine says that despite 2.5 quintillion bytes of data being created worldwide EVERY SINGLE DAY, the main issue advertisers and advertising agencies face is harnessing the data and targeting it at the people who would realistically buy their products.
Nevertheless, ubiquitous computing is upon us… and soon we’ll have Google Glass.
Ubiquitous computing is a post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities. As opposed to the desktop paradigm, in which a single user consciously engages a single device for a specialized purpose, someone “using” ubiquitous computing engages many computational devices and systems simultaneously, in the course of ordinary activities, and may not necessarily even be aware that they are doing so.
Mark Weiser, who coined the phrase “ubiquitous computing” in 1988, believed that the technology would have a calming influence: it would make our lives easier, less stressful. What do you think? Does technology achieve this on the whole?
An interesting techie development – combining the ubiquitous and immersive computing we’ve discussed earlier in the module – is Augmented Reality (AR). AR is the buzz technology, and the kind of reach shown in the next video (although fictionalised) could soon be available, courtesy of Google Glass (among other companies developing this technology) and bionic developments:
THE SEMANTIC WEB
The Internet as we know it is only a few years old, yet we already take the amazing stuff for granted: our ability to plug into the databases of the world, a never-ending flow of information coming into our homes. What’s more, most of what we access is FREE. Look at Wikipedia – a model that shouldn’t be possible, but it is: a vast crowdsourced repository of information.
This thing that we’re making (and we are making it together, by feeding the databases of the world) is accessed by computers, handhelds, mobiles, laptops and servers – and what we’re getting out of all these connections is ONE machine. If there is only one global machine, then our handhelds and devices are windows into that machine.
This machine is the most reliable machine human beings have ever made. It has run uninterrupted since it began. What are the dimensions of this machine?
- 120 billion clicks per day.
- 65 trillion links between all the Web pages of the world.
- 2.5 million emails per second
- 170 quadrillion transistors
- 246 exabytes of storage.
- 10 terabytes per second total traffic (the Library of Congress is about twenty terabytes. So every second, half the Library of Congress is swooshing around the world)
- It uses five percent of the planet’s electricity.
In last week’s lecture we made the analogy between the brain and the Internet.
- 65 trillion links – roughly the same as the number of synapses in a brain.
- 170 quadrillion transistors – compared, in the analogy, to the brain’s neurons.
The size and complexity of this machine is the size and complexity of your brain. And your brain works in a similar way to the web (remember what we discussed re: networks and learning). But your brain isn’t doubling every two years. The web is. If this machine, right now, is equivalent to one HB – one human brain – with the rate it’s increasing, 30 years from now, it’ll be six billion HBs.
By the year 2040, the total processing power of this machine will exceed the total processing power of humanity.
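Taken literally, the doubling claim is easy to sanity-check (a toy calculation; the one-HB baseline and the two-year doubling period are the figures quoted above – note that strict two-year doubling would need roughly 65 years, not 30, to reach six billion HB, so the prediction implies an accelerating growth rate):

```python
import math

def brains(years, doubling_period=2.0, start_hb=1.0):
    """Human-brain equivalents after `years` of exponential doubling."""
    return start_hb * 2 ** (years / doubling_period)

print(brains(30))                      # 2**15 = 32768.0 HB after 30 years
print(round(2 * math.log2(6e9)))       # ~65 years of biennial doubling to reach six billion HB
```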
The AR apps you saw in the last video will rely on new kinds of connections within the machine. Humans use the Internet to carry out basic tasks – book tickets, check the time of a gig, and so on. A computer can’t perform the same tasks without human direction, because web pages are designed to be read by people, not machines. The drive for a semantic web has come out of the need to structure the huge amounts of data we are coping with every day. We’re no longer enjoying the process of accessing information – we need a way to filter it effectively that doesn’t involve directing our computers ourselves. We want the Internet to know what we want. Right?
We’re busy programming this machine right now (or at least creating ontologies that allow it to make connections through machine-readable pages). Ontologies deal with what exists – objects and people – and with how those things can be grouped, related hierarchically, and subdivided according to similarities and differences – Tuneglue, for instance, or the Visual Thesaurus.
So – the semantic web is about publishing pages designed to be understood by computers so that they can perform more of the tedious work involved in finding, sharing, and combining information on the web by reading these ontological connections.
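The machine-readable connections behind all this boil down to subject-predicate-object triples – the data shape RDF standardises. A minimal sketch (the band and album names, and the predicates, are invented for illustration):

```python
# A toy "ontology" as subject-predicate-object triples, the shape RDF uses.
# All names here are invented examples, not real published data.
triples = {
    ("Radiohead", "is_a", "Band"),
    ("OK Computer", "is_a", "Album"),
    ("OK Computer", "made_by", "Radiohead"),
    ("Band", "subclass_of", "MusicalGroup"),
}

def objects(subject, predicate):
    """Everything related to `subject` via `predicate` – a one-hop query."""
    return {o for s, p, o in triples if s == subject and p == predicate}

print(objects("OK Computer", "made_by"))  # {'Radiohead'}
```

A real semantic web agent does essentially this, but across triples harvested from millions of pages rather than a hand-written set.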
The languages specifically designed for data are now pervasive – Resource Description Framework (RDF) and Extensible Markup Language (XML) are already embedded in many web pages. In fact, they’re embedded in the CMS and gallery systems you’re all using to deliver your content on this module.
Where HTML describes documents and the links between them, RDF, OWL and XML can describe arbitrary things: people, meetings, organisations or events. This means the computer can DECIDE what you want based on your previous choices and activity.
Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph. The semantic web is already with us and – as we saw in last week’s lecture – we are continually feeding the machine with more and more links and information about ourselves and our aspirations. Every time you tag something you are creating more data. Every time you name a file, or use an alt tag or change your status – or choose from a drop-down – you are creating data.
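Every tag, filename or drop-down choice really is a small piece of machine-readable data. A hedged sketch of how a machine might “decide what you want” from it (the users, the tags and the naive most-frequent-tag heuristic are all invented for illustration):

```python
from collections import Counter

# Invented tagging data: (user, item, tag) records created by ordinary activity.
tags = [
    ("alice", "photo1", "sunset"),
    ("alice", "photo2", "sunset"),
    ("alice", "photo3", "beach"),
    ("bob",   "photo4", "sunset"),
]

def top_interest(user):
    """Naive profile: the tag a user has applied most often."""
    counts = Counter(tag for u, _item, tag in tags if u == user)
    return counts.most_common(1)[0][0]

print(top_interest("alice"))  # 'sunset'
```

Real recommender systems are far more sophisticated, but the principle is the same: your incidental data feeds a profile, and the profile drives what the machine offers you next.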
Where will this all lead?
There’s a lot of paranoia about semantic technology (transparency is encouraged by the likes of Tim Berners-Lee and Kevin Kelly but frowned upon by whole tribes of academics anxious about the loss of control this openness might bring).
There are also a lot of high-profile dissenters who believe that AI enthusiasts, instead of making machines that think like human beings, are now striving to describe the world in terms that machines are good at thinking about.
So we end up with the age-old conundrum: Does the world make sense, or do we make sense of the world?
Watch from 3:39 to 8:30.
While I’m happy to be reminded about friends’ birthdays, or offered suggestions about what to buy them, I don’t want my computer to do it for me. An interesting upshot of all this targeting is the Filter Bubble: your search results become so targeted that you can end up in an unrealistic bubble.
Will the Giant Global Graph be capable of detecting feelings or the nuances of human behaviour and experience? Perhaps we are creatures of habit. Perhaps the technology will be able to second guess our desires and needs. Perhaps it will be able to book the right holiday for us or choose the next album we want to buy.
My online profile (Amazon choices, flight and holiday searches, the sites I visit) probably does allow for a certain amount of pigeonholing. And with the advent of advertising technologies like Phorm and other deep packet inspection (DPI) software, our digital lives are, more and more, under threat of surveillance.
Here’s what the enthusiasts say:
There’s only one machine, and the Web is its OS. All screens look into the One. No bits will live outside the Web. To share is to gain. Let the One read it. It’s going to be machine-readable. You want to make something that the machine can read. And the One is us. We are in the One. – Kevin Kelly, executive editor, Wired.
Plenty of writers and filmmakers are speculating about the catastrophic future that this technology could bring. Black Mirror and Utopia are just two of the current dramas dealing with this. But is this what the future will really be like?