
Saturday, 9 February 2013

Being Negroponte

'Learning when there is no school'
In 1995 I read a little black paperback book that changed my view of the world. The title of the book was 'Being Digital' and the author was Nicholas Negroponte. Several key elements of Negroponte's book stood out for me and challenged my thinking. Firstly, he talks of a time when all media will be transformed from atoms into bits. This premise, written in the mid-90s, looked forward to a time when newspapers, movies, music, television, photography, and a host of other media would reside exclusively within the digital domain. The repercussions would be that large businesses that relied on shipping 'atoms' would go out of business, whilst those who sent bits would thrive. Negroponte is a gentleman and doesn't have the hubris to declare 'I told you so', but a quick look around at the world of business will tell you that he was right. Large photographic companies, the music industry, book and newspaper publishers, high street chain stores and even the mighty Hollywood film industry are struggling to adapt, survive or maintain their pre-eminence in a world where everyone has a mobile phone with a camera, downloads of e-books exceed print-based sales, iTunes is the preferred way to purchase music, movies can be streamed online, and people are migrating en masse to online stores such as Amazon. Negroponte's vision was prescient indeed, and we ignore his ideas at our peril.

Secondly, Being Digital featured further predictions about touch screen computers, artificial intelligence, and the convergence of technologies such as TVs and computers. The entire book is crammed with such predictions, and it is not hard to see why it had such a huge impact on me and many others like me almost 20 years ago.

It was a delight and a privilege to be invited to meet Nicholas Negroponte over dinner in the run up to the Learning Technologies Conference. I sat and chatted with him for more than two hours as he regaled me and my co-diners with story after story of his many exploits. Negroponte established the now legendary MIT Media Lab, and was also one of the early backers of Wired magazine. I first became aware of his work by reading his then regular column. He is well connected too. Close friend and LOGO inventor Seymour Papert married author and cyberspace researcher Sherry Turkle in the living room of Negroponte's home. Negroponte and his then wife met with Alan Turing's mother and brother, and were given all his 'baby photographs'. He worked alongside legends such as artificial intelligence pioneer Marvin Minsky and in so doing, became something of a legend himself. In his opening keynote speech at Learning Technologies, Negroponte stalked across the stage, reminding his audience that it is a big mistake to assume that knowing is synonymous with learning. 'We know that a vast recall of facts is not a measure of understanding,' he declared, 'and yet we subject kids in school to constant memorising to pass tests.' His answer? What we need to do in schools, he said, is to find ways to measure curiosity, creativity, imagination and passion, as well as the ability to view things from multiple perspectives.

Negroponte is now celebrated for his high impact initiative to give children in poor countries access to learning through laptop computers. His One Laptop Per Child (OLPC) project has given children from Ramallah to Rio access to learning they previously had little hope of gaining. The total number of laptop computers distributed through the OLPC project now exceeds 2.5 million in 40 countries, and there are many heartwarming stories to be told. Children are now teaching their own parents how to read, using the laptops as tools. In Ethiopia, over 5000 children are learning to write computer programs using Squeak. Plans to begin distribution of touch screen tablets are well underway, and it won't be long before we are talking about One Tablet Per Child. All of this is run on a charity basis, and is philanthropic to the core, with supporters including the Bill and Melinda Gates Foundation and Salman Khan's Academy.

If we have learnt one thing from the OLPC project, says Negroponte, it is that children learn a great deal on their own, with little or no help from others. This echoes the work of pioneers such as Sugata Mitra, whose 'minimally invasive education' was demonstrated by the 'Hole in the Wall' experiments. Negroponte said that Mitra is now working with him and others at MIT - they have joined forces to advance these projects further. Children have a natural curiosity, Negroponte is at pains to point out, and discovering, making and sharing things is second nature to them. We should nurture these characteristics, he warns, rather than stifle them in rigid school systems.

Photo by Steve Wheeler

Creative Commons License
Being Negroponte by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Friday, 8 February 2013

Three things

There are three things we need to know about learning for this generation. The first is that learning needs to be personalised. As I argued in a previous post, learning must be differentiated, because one size does not fit all, and standardised curricula and testing are not fit for purpose in the 21st Century. Personal learning is unique to each learner. The tools and devices students choose, and the pathways they decide to take, are in many ways beginning to challenge the synchronised and homogenised approaches we still practise in schools, universities and organisations.

Secondly, learning needs to be social. Much of what we learn comes from contact and communication with others. Increasingly, such contact and communication is mediated through technology, and social media tools are ideal for this purpose. The celebrated Russian psychologist Lev Vygotsky proposed the idea that learning is extended when children are mentored by a more knowledgeable other. His Zone of Proximal Development (ZPD) theory has been central to our understanding of how we learn in social contexts. Yet in recent years, with the proliferation and equalisation of knowledge and the strengthening of social connections through digital media, new theories such as connectivism and paragogy have emerged to challenge the central place of the ZPD in contemporary pedagogical theory. We need to ask whether we still need knowledgeable others such as subject experts to help us extend our learning when so much knowledge is at our fingertips. Many learners are now exploiting the power of social media to build personal learning networks and engage with their peers as equals.

Thirdly, learning needs to be globalised. As we develop personal expertise and begin to practise it in applied contexts, we need to connect with global communities. Students who share their content online can reach a worldwide audience that can act as a peer network, providing constructive feedback. Teachers can crowdsource ideas and share their content in professional forums and global learning collectives, or harness the power of social media to access thought leaders in their particular field of expertise. Scholars who are not connected to the global community are increasingly isolated, and will in time be left behind as the world of education advances ever onward.

Photo by Steve Wheeler

Creative Commons License
Three things by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Wednesday, 6 February 2013

You can't walk where I walk

Someone once told me that life is like a fast moving stream. You can put your foot into it, and even let it flow over you for a while, but you can never put your foot into the same river twice. That's quite profound, but there is something even more profound. It is this: you can't walk where I walk. In other words, you can't experience what I experience. We may sit watching the same movie or TV programme. We may read the same book, participate in the same conversation, or sit in the same lecture. But your experience will be different from my experience. We may come away with similar messages or impressions of what we have observed, but because we are unique individuals, we are by nature different from each other, and our perceptions will also be different. That is one very important reason why, in schools, standardised testing, homogenised curricula and batch processing by age need to give way to more personalised approaches to education.

It's all down to individual perception - what psychologists call the 'representation of reality'. My reality is slightly different from yours, and yours from mine. It has little to do with you and me viewing the same thing from slightly different angles, although sometimes that can be a factor in creating different perceptions. No, it's not about different angles, it's about different perspectives. A number of variables cause each of us to view life uniquely and to represent reality from different perspectives, including our age, gender, culture, background, health, preferences and personal beliefs - in fact just about everything that wires our brains uniquely and makes us individuals. When teachers attempt to differentiate learning, they generally focus on aptitude and ability, or in some cases, whether a student has a disability. Some teachers are sidetracked into considering 'learning styles', but that is a big mistake, as I have previously discussed. Carl Rogers advocated 'unconditional positive regard', a philosophy that plays out when every student is considered to be of equal worth in the classroom, regardless of their previous 'form'.

What teachers should be focused upon is the whole child, and how they perceive life and represent reality differently to everyone else in the room. Differentiation should encourage diversity, not simply make provision for it. It should celebrate the fact that we are all different, and include every single voice in the classroom, giving each an equal weight. That's hard to achieve, but with some forethought and practice, and a great deal of patience, teachers can encourage each student to participate fully and play to their individual strengths. We are not that different from each other really. We all have the same needs: to be respected, to feel we belong to the group, and to have a voice. Each of us is the same, but in uniquely different ways. If you can understand that, then you will understand why you can't walk where I walk.

Photo by Steve Wheeler

Creative Commons License
You can't walk where I walk by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday, 5 February 2013

Changing the world

It's not often you get to talk with someone who has changed the world. That's exactly what I did this week in a glittering lounge in the Ritz-Carlton Hotel, when I sat down with Steve Wozniak, co-founder (with Steve Jobs) of Apple. Wozniak designed the first Apple computer and, together with Jobs, set in motion a company that continues to this day to mould our use of digital technology. If you use an iPad, iPod or iPhone, or if you have an Apple Mac computer or laptop of any sort, you undoubtedly have Steve Wozniak to thank. Apple and its co-founder Wozniak have shaped our desires and crystallised our dreams with innovation after innovation. Steve Jobs may no longer be with us, but Steve Wozniak - 'Woz' - lives on, larger than life, and as effusive and buoyant as ever about the future of technology and its role in education.

This week, Woz and I were both invited speakers at the 3rd International Conference on eLearning and Distance Education in Riyadh, Saudi Arabia. He was already sitting in the speaker's lounge, ready to present his opening keynote, when I wandered in, unaware that he was there. There was no-one else in the room. I walked over. We shook hands. We sat down. Then we talked.

The world according to Woz is one of sustained wonder at the many ways technology can be made to do our bidding. As a young boy growing up in the 50s and 60s, he told his father that he would one day own a computer. His father laughed and told him a computer would cost more than a house to buy. Computers in the 50s and 60s were indeed expensive. They were also almost the size of houses. But Woz's dream of one day owning a computer was realised when he began work for Hewlett-Packard. Within a short time he was taking computers apart to see how they worked, and soon he had drawn up the plans to construct his very own computer - the Apple I. He met Steve Jobs, who said 'we can sell this', and the rest, as they say, is history.

Now aged 62, and with a lifetime of achievements behind him, Woz has a great deal to say about schools and education. He even became a school teacher for a few years after he had made his fortune and had put Apple behind him. He believes that computers and digital technology are now our prime scientific and academic tools, but balances this with the view that regardless of the impact of technology on society, we still need rich personal and social interaction for effective education to take place. Hence, he says, teachers will always be needed. He is determined to reinforce the idea that children learn best when they are interested. When you have the desire to learn, he says, no-one can take that away from you. And yet, he argues, school is the one environment that currently teaches children that taking a test determines how 'intelligent' they are - but cramming for that test is certainly not learning. Are schools sending out the wrong message to children, he asks, when we ask them to study for test after test? Children are born curious, he says, and all of us - teachers, parents, society - should keep it that way.

On computers and design, Woz is adamant - he is only interested in designing devices that are interactive. 'They need to respond when I use them', he said, 'otherwise I lose interest'. On the nature of knowledge, he told me that all of us need to gain some 'fact' based knowledge, but this is only the starting point, as we gain skills that will enable each of us to take our place in society. The man is insightful, inspirational and iconic. Yes, it's not often you get to speak to someone who has actually changed the world.

NB: The above content is taken from my conversation with Steve Wozniak, and also from his keynote speech in Riyadh on 5 February 2013.

Photo image courtesy of Steve Wheeler

Creative Commons License
Changing the world by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday, 21 January 2013

Telling your story

Blogs are not simply about text. They can also encompass hyperlinks, sounds, videos, and images. Blogging is also about telling your story. Today I was involved in teaching a session for a BA group on the use of digital photography and communication. Specifically, the session focused on images as narrative, and all of the group managed to create some impressive and in some cases stunning image sequences. I used images from a trip with my students to the Gambia in 2009 to present my own example of a narrative at the beginning of the session. I thought I would share it with you here on my blog. I hope you find it interesting.


This image is of a man looking out over the sea, in a coastal village in the Gambia. Poverty is commonplace here, given that the Gambia is one of the poorest countries in Africa. One of the few jobs available to most young Gambian boys is fishing. It's a dangerous, low paid job, and this image depicts some of the boats they use to launch themselves out to sea.


This image is of children collecting firewood for the compound cooking fires. There is no electricity or gas in most parts of the Gambia, so open fires are the most common means of cooking. Children also fetch water, sometimes from several kilometres away from their villages, and because of the necessity for this work, they often miss school. As a visiting group, my colleagues and I, along with our students, saw the need and raised money for a new well to be sunk in the village. The children no longer have to walk four kilometres each time they need to fetch a pail of water. Now they can go to school.


I took this image of a young girl sitting in a village compound. I couldn't resist capturing the photo, because it was so iconic and representative of the children in this part of the world, and it conveyed innocence and hope. I showed her the image on my digital camera, and she was shocked but delighted. She clearly recognised herself, but I don't think she had seen a camera before, and she had probably never seen an image of herself other than her own reflection.


I decided to use a reworked version of the picture of the young girl in a blog post called 'What Price Education?' to hammer home the message that every child deserves a good education. In the Gambia, children can only go to school until they are 11 years old, because the state only funds primary education, and it's very basic. There are few secondary schools, and children can only attend them if their parents can afford the fees. Very few can. As a result, Gambian children are some of the most disadvantaged children in the world. I couldn't think of a better way to use the image than in a manipulated front cover of the National Geographic magazine. It was very easy to do. Using PowerPoint, I created a yellow background, and a smaller blue background for the frame, and then placed the image on top. Finally, I chose appropriately coloured fonts to mimic the familiar National Geographic livery. I saved the image as a .jpeg file and then uploaded it to the blog like any other image. I hope you like the images and that in some way, they speak to you.
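For anyone who would rather script this kind of composite than build it by hand in PowerPoint, the same layered approach (yellow background, blue frame, photograph, masthead text) takes only a few lines of Python with the Pillow imaging library. This is just an illustrative sketch of the idea, not the exact method described above: the file names, dimensions, colours and font are my own assumptions.

```python
# A minimal sketch of a mock magazine cover built in layers with Pillow.
# File names, sizes, colours and the font are illustrative assumptions.
from PIL import Image, ImageDraw, ImageFont

W, H = 600, 800                                   # overall cover size in pixels
cover = Image.new("RGB", (W, H), "gold")          # yellow background layer
draw = ImageDraw.Draw(cover)

# A smaller blue rectangle acts as the inner frame
margin = 30
draw.rectangle([margin, margin, W - margin, H - margin], fill="navy")

# Paste the photograph on top of the frame (hypothetical file name)
photo = Image.open("girl.jpg").resize((W - 4 * margin, H - 12 * margin))
cover.paste(photo, (2 * margin, 7 * margin))

# Masthead text in a livery-like colour (any available .ttf file will do)
font = ImageFont.truetype("DejaVuSerif-Bold.ttf", 44)
draw.text((2 * margin, 2 * margin), "WHAT PRICE EDUCATION?", fill="white", font=font)

cover.save("mock_cover.jpg", "JPEG")              # ready to upload to the blog
```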

Photos by Steve Wheeler

Creative Commons License
Telling your story by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Saturday, 19 January 2013

We need a rethink

There's a very useful and refreshing article by Tom Barrett in this week's TES Magazine entitled 'Education needs to plug into Web 2.0'. Never before have I read an article with which I agree so completely. Those of us who are immersed in a world where the use of social media is so sustained, embedded and familiar forget that many schools still ban the use of Web 2.0 tools in their classrooms. Tom has some advice for schools that are in this category, and I quote:

"Perhaps one of the biggest barriers to engaging with the social web in schools is the perceived issue of safety: many teachers say they are left feeling helpless when pupils' work is available on the World Wide Web. I have been blogging with classes for eight years and these common-sense guidelines always work:

1) Be open to parents and allow them to share any concerns.
2) Moderate all comments before they are posted online.
3) Have a clear and robust e-safety policy.
4) Work within the school policy on images of children on blogs.
5) Publish a set of blogging guidelines on your site and share them with parents.
6) Make sure the whole school community is aware of your work."

Common sense indeed, but I would also add that schools should encourage and permit children to help teachers co-create the e-safety and social media policies. They use these tools outside of school on a daily basis, and often have a sophisticated grasp of how social media work. Who better to inform schools than the users themselves?

I once spoke at an event where a school leader remarked that his school had banned access to blogging, YouTube and all other social media because 'they are dangerous'. I countered by asking him whether we should also stop teaching children how to cross the road, because traffic is dangerous too. I think he got the message. Where better to teach children about the dangers and risks of using the Internet than in school? I think a rethink is very much overdue.

Whether this blog post, or Tom's article, or any number of other good pieces of advice will have an impact on the impasse many schools find themselves in over social media remains to be seen. But just a few moments spent weighing the risks against the clear benefits social media bring to schools that do allow them should convince most school leaders that adopting social media in the classroom really is the best way forward.

Image source

Creative Commons License
We need a rethink by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday, 15 January 2013

Pelecon flies higher

Those of you who have ever attended a Plymouth Enhanced Learning Conference, or even followed from afar via the social media channels, will know that Pelecon is an extraordinary event. Since attaining international conference status and extending its programme to three days, Pelecon has become one of the must-attend European learning technology conferences on the calendar. The event attracts learning technologists, lecturers, researchers, teachers, learning professionals, health and medical staff, private trainers, and just about anyone who is interested in the very latest in digital pedagogies, literacies and technologies. In previous years delegates have enjoyed listening to high profile and diverse keynote speakers such as Stephen Heppell, Keri Facer, Gilly Salmon, Graham Attwell, Shelly Terrell, Jane Hart, Josie Fraser and Alec Couros.

This year the conference takes place from 10 to 12 April. For 2013, we have lined up a veritable feast of leading speakers, all of whom are featured on the Pelecon conference website, and this year promises to push the boundaries even further than before.

#pelc13 is set in Plymouth, a delightful coastal city in the South West of England. The surrounding Devon countryside is stunning as it unfolds in springtime, the towns and villages are steeped in history (the Mayflower Steps and Plymouth Hoe are a must for all visitors) and the culture is rich. The conference social events, including a Wednesday evening Teachmeet, are guaranteed to be fun, entertaining and engaging.

This year the Pelecon Conference dinner will return once again to the visually stunning surroundings of the National Marine Aquarium. Located in Sutton Harbour in the historic Plymouth Barbican area, the Aquarium is the largest in the UK, and is one of the premier tourist attractions in the South West of England. Delegates who enjoyed the conference dinner at previous Plymouth eLearning Conference events in 2009 and 2010 were unanimous in expressing their praise for the evening.
The dinner starts on Thursday evening, 11 April, at 7.30 pm with welcome drinks and an exclusive tour of the entire aquarium by official guides. Delegates can watch as the sun sets over Plymouth while fishing boats and other marine vessels arrive and depart from nearby Sutton Harbour. The three course dinner will be served in the Atlantic Reef area of the Aquarium, where diners can watch the sharks and other large fish swimming in one of the largest glass tanks in Europe whilst they enjoy their meals. The company will be great, the food will be excellent, and the live music will be splendid. The price for the evening isn't bad either, at only £40.00 per head. The bar will be open until 11 pm, and afterwards the nearby Barbican and Coxside watering holes will be sure to offer a warm welcome to any delegates who wish to linger and explore Plymouth's nightlife a little more. The Conference Dinner has just one drawback: there will only be about 110 dining places available at the Aquarium, so please book your place for this exclusive event soon to avoid disappointment!

The Pelecon Conference organising committee look forward to joining delegates at the Conference Dinner at the National Marine Aquarium. We hope you can attend the conference.

Photo by Jose Luis Garcia
Creative Commons License
Pelecon flies higher by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Friday, 11 January 2013

The foresight saga

Vuzix M100 Smart Glasses
This is part 8 in the series on the future of learning and technology. At the start of each year everyone, it seems, goes into the prediction business. The first week of 2013 saw many articles appearing on what we can expect to see this year. A large number of the articles were about new technology trends, and there was much speculation about how certain technologies might transform our mundane little lives. With the massive Consumer Electronics Show (CES 2013) opening its doors last week in Las Vegas, technology news was making prime time TV all across the globe too. The stars of CES 2013 were the Vuzix M100 augmented reality smart glasses (pictured), Samsung's new ultra-thin bendy phone screen and the 4K ultra-high-resolution television screen. These are not future technologies. They are technologies for today, 2013. 4K resolution is not enough, it seems. Already there are articles predicting beyond 4K into the exotic TV world of the future, where transparent televisions (what the...?) and even 'choose your own size' projected wall TVs will roam majestically across the prairies. Entertainment will literally go to the wall.

But what of the future? What are the tech gurus saying we should look out for this year? The BBC's New Year's Eve article 'Who will call it right in 2013?' seemed to hold a competition amongst the illuminated ones, the technology soothsayers of our age. Peering into their digital chicken guts, each gave it their best shot (without sticking their necks out too far, thus avoiding any potential damage to their stellar reputations), predicting what we can all expect to bump into as we turn that chronological corner. The article should perhaps be retitled 'Who will call it at all in 2013?', because many of the so-called 'predictions' were banal to say the least.

Robert Scoble (the celebrated blogger) stayed safe and on piste, predicting that 2013 would be contextual. He talked of heads-up displays (Google Glass and the Vuzix M100 are already gearing up for mainstream release) that we could use when we all go skiing (yes, we can all afford alpine holidays in today's burgeoning economy. I'm just nipping off to Gstaad), to brag to our friends, through the gift of video evidence, just how high we climbed before falling drunk from the ski-lift, and how long our 'hang' time was on the latest jump. That's if we have any friends left. How's that for context?

Dave Coplin, chief envisioning officer at Microsoft (every company should have one), was even safer in his predictions, suggesting that 2013 will be about mobiles, data and trust. More and more, he suggests, data are (he says is) going to be the lifeblood of all our activities. And mobile devices will offer personalisation and will become the first point of contact for everything we do. Well, who knew?

Mark Cook, chief executive of Getronics UK and Ireland (yep, a household name), takes the prophet's mantle for the safest prediction of the year. He reckons that many companies will move away from BYOD (Bring Your Own Device) to CYOD (Choose Your ... etc). Interesting, as many companies don't even have a BYOD policy yet. Cook thinks that CYOD will place the initiative back in the hands of the organisation, offering employees a device of its own choosing. That's novel. Now why didn't I think of that? I guess you will be able to choose any colour you like, as long as it's black.

So the future is much the same as the present then. I think I'll stick to CES in the future.

Image source

Creative Commons License
The foresight saga by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Wednesday, 9 January 2013

Global learning collectives

This is part 7 in a series of posts on the future of learning and technology. I spent the last two years of my school life at AFCENT* International School, in Brunssum, Holland. There was one word to describe AFCENT School - diversity. I remember how culturally rich the experience was, because children from all of the NATO** countries attended, and I often sat alongside American, German, Canadian, French, Norwegian and Swedish classmates.

I discovered that this school's education was far more than just the three Rs. We learnt phrases from each other's languages (slang and swear words were particularly good fun to practise), heard about unfamiliar customs and practices, and sampled strange and wonderful food and drink from other countries. I should point out that in the 1970s Britain was far less multi-cultural than it is today. This was the age of the Cold War, and our parents were serving in the military. We took part in multi-national games that went on all day, where we played the roles of politicians and generals as we tried to avert a nuclear war. We produced and performed in musicals such as Godspell, Jesus Christ Superstar and Fiddler on the Roof in the school assembly hall. We learnt to play the games of other countries. It turned out that baseball and American football were less of a mystery for us Brits than cricket was for our American counterparts. Who knew?

We learnt traditional songs and stories we would never otherwise have encountered, because each child could not avoid bringing their own personal stories, history and culture into the classroom. German Christmas, Canadian Bring 'n' Buy sales, and American cheerleaders were not something I had encountered in any English school. Believe me, if we'd had American cheerleaders at school in England, I would never have missed a lesson. At AFCENT School we had the best of both worlds by attending an international school. Not many school students are so privileged.

Some years ago, I saw several schools try to replicate this cultural richness through the use of video links connecting two (or more) classrooms across distance. It was a great step beyond the pen pal letters we used to write when I was in secondary school in the 1960s. Then we had to wait for days or weeks for a reply. Now whole groups can meet and converse with each other in real time without travel. Language learning, cultural exchanges, personal stories, preparation for overseas school exchange visits and a whole host of other benefits can be realised when children collaborate and share their learning across language and cultural divides. The excitement of connecting with children in schools in other countries was tangible. Some schools that connected using videoconferencing managed to project the live video images onto big screens so that large groups could participate, and the kids loved it.

Video conferencing was just the start. We now have several alternative technologies that allow schools to connect cheaply and easily with school children in other countries. One of the futures of education will be greater connectivity between schools around the world. Through the use of social media meeting tools such as Google Hangouts, video calling tools such as Skype, and even massive open online games, students around the world already enjoy better chances to learn from each other and with each other, regardless of their geographical location. How will this develop? I foresee the emergence of global learning collectives, where children will learn together across schools and time zones, collaborating on projects and other joint activities, and where technology will help us once and for all to bridge the great divides of geography, culture, creed and ethnicity.

*AFCENT = Allied Forces Central Europe. **NATO = North Atlantic Treaty Organisation

Image source

Creative Commons License
Global learning collectives by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday, 8 January 2013

Where AR we now?

This is part 6 in a series of posts on the future of learning and technology. Technology is great for many things, but perhaps its most useful application is enabling us to do things better, faster, smarter. Augmented Reality (AR) is one such tool. It has a lot of potential to enhance our senses, but to date has seen poor uptake and little real-life application in the world of learning. AR typically provides the user with information beyond what can be obtained naturally. It takes live views of the real world around you and augments them with computer generated sensory information such as graphics, data, video or sound.

Examples include smartphone applications such as Layar, which use the GPS and video camera tools to position the user in an information sphere, and feed them contextual information related to that specific geographical location. This can include information about the local environment, navigation of complex transport systems (see the embedded video below featuring Acrossair's New York subway app), weather, news and amenities, as well as cultural, historical and even social information. You might, for example, wish to discover who else in your location is using Twitter or another social media tool. The opportunities to use such applications in education are fairly obvious, but not everyone has access to the technology, and it can be quite difficult to use effectively even if you do gain access. Part of the problem is the inconvenience of having to hold your phone up whenever you wish to interrogate your environment. A better, more intuitive application of AR is the use of large screens (see the image above, taken in a Westfield shopping centre, London). Better vision, and a more natural means of interrogating one's surroundings, can be achieved using this technology, and objects can be rendered in 3D using simple marker technologies (see this BBC video for a vivid demonstration of some upcoming AR features and uses).
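Stripped of the camera overlay, the location-aware core of an app like Layar is conceptually simple: take the device's GPS fix, measure the distance to each point of interest in a database, and surface the ones within range. The Python sketch below illustrates just that selection step; the place names, coordinates and radius are invented for the example, and a real AR browser would of course add compass bearing, rendering and live data feeds on top.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Invented sample data: the points of interest a 'layer' might annotate.
POINTS_OF_INTEREST = [
    {"name": "Museum",       "lat": 51.5194, "lon": -0.1270},
    {"name": "Station",      "lat": 51.5203, "lon": -0.1053},
    {"name": "Coffee house", "lat": 51.5155, "lon": -0.1419},
]

def nearby(user_lat, user_lon, radius_km=1.0):
    """Return names of points of interest within radius_km of the user,
    nearest first - the filtering an AR browser does before drawing overlays."""
    tagged = [(haversine_km(user_lat, user_lon, p["lat"], p["lon"]), p["name"])
              for p in POINTS_OF_INTEREST]
    return [name for dist, name in sorted(tagged) if dist <= radius_km]

print(nearby(51.5180, -0.1200))   # e.g. ['Museum']
```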



Perhaps the most promising and intuitively easy to use version of AR is the wearable (or eyewear) application seen most recently in Google Glass. A simple heads-up display (HUD) is located in the upper right quadrant of one lens of a reasonably normal looking pair of spectacles, and users can control what they see with their mobile phone. Eventually, natural gesture control (such as a head tilt or the wink of an eye) or voice control will be developed to enable even more natural and unobtrusive AR use. It has had its problems and suffered a few teething difficulties, but I believe that AR is on its way to a learning environment near you, and it will catch on more quickly than we expect. Our desire to learn more, and to learn while on the move at any time and in any context, will ensure that wearable AR devices become available at an affordable price very soon. What educators do with them next is really down to each individual's creativity and imagination.

Photo by Steve Wheeler

Creative Commons License
Where AR we now? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Saturday, 5 January 2013

Digital classrooms

This is Part 5 in my series of posts on the future of learning and technology. A few years ago Peter John and I wrote a book entitled 'The Digital Classroom'. It was published by Routledge in 2008 and is now also available in a Kindle version. It wasn't the first book published under that title, and it probably won't be the last. The idea of a 'classroom' (regardless of how anachronistic that may sound) is appealing when it is 'digitised'. It's the old, comfortably familiar territory embellished with the new. Everyone in the world of education, it seems, has an interest in how technology is going to influence what we do in the classroom. The book was well received, and attracted some positive comments and feedback. Although it is probably a little dated now, with technology advancing at a rapid pace, it still set a benchmark for some of the things we could expect to see in the coming years. We talked for instance about how technology would streamline assessment, and how the curriculum might be affected by new technologies. There were sections on digital literacies and mobile learning, both of which we considered to be important for the success of education and learning in the future. Blogs, wikis and other social media made an appearance, even though at the time they were still fairly nascent in compulsory education. We even mentioned the Semantic Web (or Web 3.0) as a potential horizon technology for learning. We spent a lot of time talking about digital cameras and interactive whiteboards, both of which have had dubious success in the school classroom.

Ultimately though, we could not have predicted the new tools and technologies that have since become very much a part of normal school life. We did not foresee touch tablets and their rapid success in schools, nor did we predict the rapid rise of smartphones and apps, or the potential of augmented reality. The non-touch, motion-sensing gestural interfaces now emerging (for example the Xbox 360 Kinect) and voice activation applications were still just a gleam in the eye for many of us. Perhaps we should not have titled the book The Digital Classroom, but simply Digital Classrooms, because now we know that there are many possibilities, and that classrooms with digital capabilities are many and varied. If I were to take a risk and suggest possibilities for the next five years of development, I might be right on some of my predictions, and hopelessly wrong on others, but here we go...

The signs are there that in the coming years, more gestural interface technology will be available for learners, and that advances in manufacture and design will enable the installation of screens on walls, desktops, in fact on any flat surface. The screens will be resilient and high resolution, but as thin as a sheet of card. The mouse, and keyboards such as the one in the image above, may disappear completely in favour of voice and gesture activated tools. For students with mobility issues in particular, this may turn out to be an important leveller. Smart touch devices will continue to develop too, so that every student will have the means to access all their learning resources right there in their hand, wherever they are, and whenever they need them.

Much more learning will be done outside of the classroom. Digital classrooms will become the place where learning is performed, celebrated and assessed - on large wall screens for all to enjoy. For many teachers, learner analytics will become an indispensable tool for tracking student progress and intervening when necessary. Many governments will probably insist on it, and legislate accordingly, when they realise just how much data can be mined from personal activities across the web. Eye tracking and attention tracking will also emerge as useful behaviour management tools for teachers in the next few years. Gamification and games-based learning will establish a stronger foothold in classrooms as teachers realise just how powerful self-paced, self-assessed, task-oriented and problem-based learning can be.

Probably the most important development I foresee, though, is the emergence of student developed applications. As technology increasingly takes hold in the school classroom, so students will become increasingly adept at coding. There is more scope than ever for children to experiment with computers. The Raspberry Pi is just the first of many tools to support this. The result will be the creation of a vast array of student games, mobile apps and eventually new forms of hardware (see this TED talk by 12-year-old app developer Thomas Suarez). Many of the new apps and games will be made commercially available. Schools working in partnership with commercial companies will ensure it happens. We may even see some children achieve millionaire status before they leave school, and it will become commonplace for young people to be entrepreneurs before they reach higher education age. Now there's an incentive.

A lot of learning comes from doing, making and problem solving. One of the most important contributions technology has made to education over the last decade can be found in its provisionality - with digital media, nothing is necessarily graven in stone: anything can be changed, upgraded, edited, revised or deleted. Learning in digital classrooms will be much more exciting, because under the right conditions, learning through failure and experimentation engages learners thoroughly.

Finally, a word of warning. We don't know how long these developments will take, nor do we know for sure whether they will materialise, because it is very hard to predict the future accurately, and schools are conservative places where change can be very difficult to achieve. What we do know is that the future will be very different from anything we can imagine right now. As ever, your comments and views on this article are very welcome.

Image source

Creative Commons License
Digital classrooms by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Thursday, 3 January 2013

The future of classrooms

This is Part 4 in my series on the future of learning and technology. What will be the future of school classrooms? Those who study the future of education often suggest that the demise of traditional classrooms is not only inevitable, but imminent, due to the rapid proliferation of mobile technology, the disintermediation of traditional teacher and student roles, new trends such as MOOCs, and the upsurge of user generated content on social media sites - all of which take learning away from previously familiar territory. The argument that these tools and trends are removing the need for classrooms and 'schools' in specific geographical locations is a strong one, but it also has flaws, and I think it unlikely that we will see the demise of the classroom in the next decade.

In a recent article, Larry Cuban attempts to gaze 10 years into the future, and makes the case that classrooms will stay very much the same during this period. Firstly, he argues, teachers tend to use new technology in much the same way they used old technology, and as a result very little has changed in terms of pedagogy. Secondly, he suggests that technology is overhyped and is not future-proofed, especially against 'major unplanned events', although he fails to elaborate on what these might be. Anyone who is familiar with Cuban's work will think 'well, he would say that, wouldn't he?', but is he right?

One of the future developments he is optimistic about, however, is the lightening of students' backpacks. Cuban believes that the digitisation of texts (books, encyclopedias and other paper based knowledge) will take hold and become an important trend. He predicts the obsolescence of the hard bound book, at least in the hands of school children. Automated assessment of learning through computer adaptive testing is another trend he predicts, where students are given grades based on their performance on multiple choice questions. Implicit within this scenario is learner analytics, where data mined from student scores, attendance levels, social media postings and discussion group contributions can be analysed to provide teachers with an overview of where each student is, and whether any intervention is required. Also implicit within this prediction is the need for teachers to adopt new roles, change their professional practice, and move from instructors to facilitators and moderators. It also means that teachers would need to revisit their concepts of knowledge and learning, and begin to accept that learning often occurs without their direct input, both inside and outside the classroom. Many teachers would welcome such a shift in practice, whilst many others might feel very threatened by such a seismic change in the profession.
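Cuban does not spell out how such a system would work, but the kernel of computer adaptive testing is easy to sketch: the software chooses the difficulty of the next question according to how the student answered the last one. The toy Python fragment below (the question bank, level range and scoring rule are all invented for illustration) uses a simple staircase rule; real systems replace this with item response theory models, but the adaptive principle is the same.

```python
import random

# Invented toy question bank: difficulty levels 1 (easy) to 5 (hard).
QUESTION_BANK = {
    1: [("2 + 2 = ?", "4")],
    2: [("12 x 12 = ?", "144")],
    3: [("What is 15% of 60?", "9")],
    4: [("Solve for x: 3x - 7 = 11", "6")],
    5: [("What is the derivative of x^2?", "2x")],
}

def adaptive_test(num_questions=10):
    """Run a toy adaptive test with a staircase rule: a correct answer
    raises the difficulty of the next question, a wrong one lowers it."""
    difficulty, history = 3, []          # start in the middle of the range
    for _ in range(num_questions):
        question, answer = random.choice(QUESTION_BANK[difficulty])
        response = input(f"[level {difficulty}] {question} ")
        correct = response.strip() == answer
        history.append((difficulty, correct))
        # Staircase adjustment, clamped to the 1-5 range
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
    # A crude 'grade': the mean difficulty of the correctly answered items
    solved = [level for level, ok in history if ok]
    return sum(solved) / len(solved) if solved else 0.0

if __name__ == "__main__":
    print("Estimated ability level:", adaptive_test())
```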

Cuban is very sceptical of online courses, and presumably his scepticism also extends to MOOCs. He believes that online learning has repeatedly failed to deliver on its promise. His argument here stems from the human need to socialise, to gather together face to face, and to learn firsthand the cultural, moral and civic values we hold so important in today's society. Online courses, he argues, fall very short of delivering this richness.

Cuban sees a place for technology in schools, but does not see it radically changing the face of the 'place for education', and says:

'...by 2023, uses of technologies will change some aspects of teaching and learning but schools and classrooms will be clearly recognizable to students’ parents and grandparents.'

Is he right? Will we see no radical change in schools in the next 10 years? Will it take longer for us to witness transformational changes in our education institutions, or are the changes above sufficient to revolutionise pedagogy? Are schools too conservative and resistant to change to be impacted by new technology? Is technology the only catalyst for change, or should we look elsewhere? As ever, your comments on this blog are welcome.

Photo by Paul Shreeve

Creative Commons License
The future of classrooms by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Wednesday, 2 January 2013

The future of intelligence

This is Part 3 in a series of blog posts on the future of learning and technology. In my previous blog post I examined the debate about whether we are becoming more intelligent or less intelligent as a result of our prolonged and habituated uses of technology. I believe that if we are to fully apprehend the many issues and nuances of our relationship with future technologies, we first need to begin to appreciate the complexity of human intelligence(s) and the problems associated with trying to model these digitally.

Many commentators express concern about the negative impact technology may have on our ability to think critically, construct knowledge and read or research more deeply. The argument is that we are becoming increasingly dependent on search engines and other tools that trivialise knowledge and simplify what we learn. A secondary argument is that a large amount of content on the web is spurious, deceiving or inaccurate, and that user generated sites such as Wikipedia and blogs undermine the authority of professionals and academics.

Futurologist Ray Kurzweil's argument looks beyond these issues, holding that the tools available to us as a result of networked social media and personal devices actually enable us to increase our cognitive abilities. He argues that we are becoming more creative, and have the potential for endless cognitive gain, as a result of increased access to these technologies. His position is reminiscent of the work of American cognitive psychologist David Jonassen (1999) and his colleagues, who proposed that computers were mind tools, and that our cognitive abilities could be extended if we invested our memories in them. Others, such as George Siemens and Karen Stephenson, hold that we store our knowledge with our friends, and that connected peer networks are where learning occurs in the digital age. British computer scientist and philosopher Andy Clark is of the opinion that we are all naturally aligned to using technology. In his seminal work, Natural Born Cyborgs (2003), Clark sees a future that combines the best features of human and machine, where we literally wear or physically internalise our technologies.

There are examples of how such a cyborg existence might come about. Recent demonstrations of Google Glass, eyewear that connects you via augmented reality software and gestural control to information beyond your normal visual experience, and Muse, a brain-wave sensing headband, have veered us in the direction of cyborg experience. I predict that other devices, wearable, natural gesture based, and sensor rich, will appear in the next few years, and these will be affordable to many. And yet, as science fiction writer William Gibson intoned, the future may be here already, but it's just not evenly distributed. He is right. A persistent digital divide exists between the industrialised world and emerging countries. Mobile phones may be proliferating rapidly, but divides are also evident within western digital society, where some eagerly invest in new technology while others show a whole spectrum of responses, from mild enthusiasm to outright rejection. There are even divides between those who can use the technologies and those who can't. Technology remains unevenly distributed, and will be for some time to come. But the digital divide will not stop the march of technology. What might wearables and non-touch interfaces achieve for us?

It is debatable whether wearable and invasive technologies will increase our intelligence. What such tools might be able to do, though, is free us up physically, enhancing our visual capabilities and enabling us to control devices hands free. They will also free up cognitive resources by distributing our thinking and memory, allowing us to focus on important things such as creativity, intuitive thinking, critical reflection and conducting personal relationships, while the wearable computer navigates, searches, discovers, stores, retrieves, organises and connects for us. Technology will not make us smarter, but it will enable us to behave smarter, work smarter and learn smarter. That's if we accept that ultimately, the success or failure of such tools is really down to us and us alone.

References
Clark, A. J. (2003) Natural Born Cyborgs: Minds, Technologies and the Future of Human Intelligence. New York: Oxford University Press.
Jonassen, D. H., Peck, K. and Wilson, B. (1999) Learning with Technology: A Constructivist Perspective. Upper Saddle River, NJ: Merrill Prentice Hall.

Photo by Jussi Mononen

Creative Commons License
The future of intelligence by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday, 1 January 2013

Is technology making us smarter?

This is part 2 of the series on the future of learning and technology. When discussing the future, especially the future of technology, there are some writers who almost always seem to be quoted. Near the top of the list is the futurologist Ray Kurzweil, who has much to say about our technological future, and also about the growth in human intelligence. His views are quite optimistic, especially around computers and the nature of knowledge. Kurzweil popularised the concept of 'the Singularity', but it was science fiction writer Vernor Vinge who originally coined the term. In a nutshell, the Singularity describes a tipping point in technological development when computers exceed total human capability. This will occur, Kurzweil argues, due to the rapid advance of technology and the proliferation of human and machine intelligence. Whether we shall see the Singularity is one question. Whether it will have such a profound effect on our society and our humanity as Kurzweil and others predict is an even bigger question. We simply don't know if computers can or will surpass human thought, or what the implications might be if they eventually do. Such questions have for years been the focus of the Strong vs Weak AI (Artificial Intelligence) debate.

In Kurzweil's view, technology and the human mind are symbiotic, reliant upon each other for their mutual development. His vision of the future requires humanity to become increasingly intelligent, made smarter because of increased opportunities to connect, create and find knowledge across the network. James Flynn (2012) of the University of Otago in New Zealand reveals that over the last century, IQ scores have been steadily rising from generation to generation. Whether this occurs as a direct result of access to technology and greater opportunities for networking is yet to be established. But intuitively this seems a reasonable proposition.

There are those who argue the exact opposite: that humans are becoming less intelligent and more dependent upon technology. This perspective is championed by Nicholas Carr (2011), who provocatively argues that habituated use of search tools such as Google is 'making us stupid'. Carr's essential thesis is that we are bombarded with content on the Internet, and cope with this by reducing our depth of study whilst increasing our breadth of study. In other words, he argues, we tend to skim read and miss out on the richness of meaning we would have absorbed pre-internet. In his original publication, Andrew Keen (2007) was adamant that the Internet is undermining the authority of academics and is a threat to our culture and society. In his most recent edition, Keen turns his ire specifically onto user generated media such as blogs and YouTube (Keen, 2010). Tara Brabazon (2007) appears equally cynical about the impact the Web is having on this generation of learners, but provides a more measured response. She suggests that it is an error for universities to invest more in technology than in teacher development, and in so doing, opens a debate on the future of education in the digital age.

So the future of technology supported learning is uncertain and contested. Are we being made less intelligent by our habituated uses of technology, or are we becoming smarter because we have more opportunities to create our own content, and to think more deeply about it? Does our collective increase in intelligence owe itself to better connections with experts and peers, or should we simply put the growth of knowledge down to a natural, progressive evolution of the human mind? Is technology actually a threat to good learning, creating a generation of superficial learners, or do interactive tools such as social media and search engines provide us with unprecedented access to knowledge?

Such questions are exactly what the study of the future is all about.  

References
Brabazon, T. (2007) The University of Google: Education in the post-information age. Burlington, VT: Ashgate.
Carr, N. (2011) The Shallows: What the Internet is doing to our brains. London: W. W. Norton and Company.
Flynn, J. R. (2012) Are we getting smarter? Rising IQ in the 21st Century. Cambridge: Cambridge University Press.
Keen, A. (2007) The Cult of the Amateur: How Today's Internet is killing our culture and assaulting our economy. London: Nicholas Brealey.
Keen, A. (2010) The Cult of the Amateur: How blogs, Myspace, YouTube and the rest of today's user generated media are killing our culture and economy. London: Nicholas Brealey.

Image source

Creative Commons License
Is technology making us smarter? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Sunday, 30 December 2012

Facing the future

At the end of each year many of us tend to focus on the future, wondering what it will bring. We wish each other a happy New Year, and hope that life will treat us kindly. We try to shape our own futures by making New Year resolutions, many of which fall by the wayside after a week or two. Much of our future is not ours to shape. But still we persist in trying to predict the future.

Many of our predictions about the future are based on speculation or wishful thinking. Remember the personalised jetpacks we were all going to use, and the Moon colonies many thought would be established in the 1970s? No matter what we think we 'know' about the future, we are unable to predict it with one hundred percent confidence. Casinos and bookmakers make a fortune out of our desire to guess what will happen next. On 21 December 2012, many people held their collective breath because of a well studied, but poorly understood, 'prophecy' about the ending of an age. Some sold their houses, or gave up their jobs in preparation for the 'end of the world', and were relieved and disappointed in equal measure when nothing happened. The Mayan Apocalypse did not happen. Many of us didn't believe it would. We have seen it all before, several times. Down through the ages, self-appointed religious cult leaders have predicted the return of Christ, or the start of Armageddon, or some global catastrophe, largely based on their own personal interpretations of texts or 'signs'. This always spreads fear and uncertainty. All the modern-day prophets have failed, but they have ruined the lives of many gullible and impressionable people in the process.

What about teachers and schools? If we try to predict what will happen to education in the next year, we will probably have reasonable success, especially if we work within the teaching profession. Those of us who are engaged as learning professionals tend to see the trends first, and understand the nuances and vagaries of education better than the average 'man in the street'. This is why practising teachers are better placed than politicians to offer ideas for improving education. The caveat is that if we try to predict what will happen in education over a longer time scale, say three to five years, we become less accurate, because random events, changes in policy, variations in the world economy, new technologies, and other unknown variables can change the terrain.

And yet, you and I have a sneaking suspicion that if we do not try to anticipate the future, and make ready to respond to changes as they occur, we will be caught off guard. And we would be right. Anticipating change is a natural part of our survival strategies, and should be encouraged. So we have a conundrum. Do we try to predict the future and risk being badly wrong, or do we just let the future roll over us and try to adapt to it? If we decide on the latter, then we will be at the mercy of change, and not only will education suffer; more importantly, the children and young people in our care will be affected. If we decide on the former, then at least we have made a choice to try to anticipate the future, and we have an outside chance of being right. The shorter the timescale we try to predict, the more chance we have of being right. The farther we try to gaze down the corridor of the future, the greater the risk of being wrong, because there will be more opportunities for unpredictable things to occur.

Over the next few blog posts I intend to examine some of the predictions that have been made about the future of education, with specific reference to technology and the role it will undoubtedly play. Some of the predictions will be fairly inevitable, others will be wildly speculative, and many will sit somewhere in between, as possibilities that may or may not become reality. If we are prepared for change, then we will be less likely to be taken by surprise. We can at least prepare for a successful new year of teaching and learning based on what we believe is just around the corner. But we still need to live and work in the present.

I wish you a happy and successful New Year.

"Learn from the past, prepare for the future, live in the present." - Thomas S. Monson

Other posts in this series
Is technology making us smarter?
The future of intelligence
The future of classrooms
Digital classrooms
AR we there yet?
Global learning collectives
The foresight saga
Touch and go

Creative Commons License
Facing the future by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Saturday, 29 December 2012

Communication and learning in a digital age

The latest issue of the open online journal eLearn Centre Research Paper Series has just been published. Issue 5 considers Communication and Learning in a Digital Age, and features papers from a number of scholars in the field, including my own paper on current research perspectives on digital literacies. The papers originate from a conference held in Barcelona in the summer of 2012. Here is the introduction, written by Sandra Sanz and Amalia Creus (Open University of Catalonia):

Experience of time and technology also has an important impact on learning. The drastic reduction on lifetime of knowledge, connected with the overflow of information and fragmentation of sources, are just some of the features that are changing the way we learn. This situation challenges us to think more creatively about the interaction between communication technologies and learning, and to explore how our educational models are being impacted by the processes of social change that come with digitalization, the emergence of social media and the Web 2.0. 

Since February 2011 the group ECO (Education and Communication), driven by teachers of Information and Communication Studies at UOC, has been providing a forum for researching communication and learning, and for sharing teaching innovation through e-learning environments based on collaboration, creativity, entertainment and audiovisual technologies. 

The five articles in this edition of eLC Research Paper Series reflect the short but intense trajectory of the group. Some of them are a selection of papers presented at the International Conference BCN Meeting 2012, organized by ECO. The other articles were written specially for this issue by members of the group and give a picture of the themes and questions we are now exploring. 

For those who may experience problems downloading my Digital Literacies paper from the site (it doesn't work well on Macs), below is a downloadable .pdf version.



Creative Commons License
Communication and learning in a digital age by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday, 17 December 2012

Headline Muse

The Muse Headband
They have finally done it. Someone has come up with a way to control computers using mind power, and the device is non-invasive. At least, that is what InteraXon, the company that designed the Muse 'brain sensing' headband, hopes to achieve. 'It lets you control things with your mind' runs the sensational strapline for the Muse headband promotional video. Mind control? This will sound quite sinister to many, and others will be far from convinced. Having recently read Michio Kaku's authoritative but still controversial book Physics of the Future, I have a more open 'mind' on the matter. I don't doubt the claims InteraXon make, or at least I won't once I have seen Muse demonstrated with my own eyes. But I think 'brain sensing' is an unfortunate tag line. Could it imply that there is no brain there to sense? Are they perchance anticipating that brainless people will buy the device? To me, 'brain sensing' implies detecting whether a brain is present, rather than the more spectacular functionality the device can potentially offer. Perhaps InteraXon ought to revise their tag line so it more accurately represents the capability of the device. You see, Muse is actually a wearable electroencephalograph (EEG), with four sensors positioned strategically around the Alice-band style headgear that you wear.
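
So what would software on a laptop actually do with those four sensors? Below is a minimal, purely illustrative Python sketch of the kind of analysis a 'brain training' app might run: slicing the raw signal into windows and estimating power in the classic EEG frequency bands. The sampling rate, window length, band definitions and function names are my assumptions for illustration, not details of InteraXon's actual software.

```python
# Illustrative only: estimate EEG band power from a window of raw samples.
# The sampling rate and constants are assumptions, not InteraXon's specs.
import numpy as np

SAMPLE_RATE_HZ = 220      # assumed nominal sampling rate
WINDOW_SECONDS = 2        # analyse two seconds of signal at a time
N_CHANNELS = 4            # Muse carries four sensors around the headband

def band_power(signal, low_hz, high_hz, rate=SAMPLE_RATE_HZ):
    """Average spectral power between low_hz and high_hz for one channel."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    mask = (freqs >= low_hz) & (freqs < high_hz)
    return spectrum[mask].mean()

def analyse(window):
    """Summarise one multi-channel window as classic EEG frequency bands."""
    bands = {'alpha (relaxation)': (8, 12), 'beta (concentration)': (13, 30)}
    return {name: np.mean([band_power(window[ch], lo, hi)
                           for ch in range(N_CHANNELS)])
            for name, (lo, hi) in bands.items()}

# Simulated input: four channels of noise standing in for raw EEG voltages.
fake_window = np.random.randn(N_CHANNELS, SAMPLE_RATE_HZ * WINDOW_SECONDS)
print(analyse(fake_window))
```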

If you can get past the irritatingly repetitive and slightly-louder-than-it-should-be background music on the video, and ignore the embarrassing geekiness that exudes from some of the presenters (I think it's really cool!), the Muse headband does look like it has the potential to be a breakthrough technology. The last true technological breakthrough of this magnitude came just two years ago, when Microsoft released Kinect for the Xbox 360. Kinect was truly revolutionary because it opened up all sorts of possibilities around non-touch, voice-activated, natural gesture computing, at an affordable price. The simple juxtaposition of an ordinary camera with a depth sensor made all the difference. All you had to do was think creatively, and hack the system to get that Tom Cruise, Minority Report (the future can be seen!) action going. Will Muse have a similar impact to Kinect? Will it launch us into a new era of control technology? Time will tell, because at present Muse is still at an early stage of development, and InteraXon themselves are still speculating on its potential to bring advances in the non-touch, thought-based control of devices.

At present, InteraXon are offering advance devices for a mere US$165, on the understanding that you test the system for them. What is currently on offer works in one direction only: the Muse headband will be configured to measure your 'brain activity' and transfer an analysis to your laptop or iPad. The device will measure areas of your brain as they activate while you play a 'brain training game'. The manufacturers claim that it will enable you to exercise your memory, measure your attention span and practise relaxation techniques. But is Muse more than simply a measuring device? Later, InteraXon promise, the data they collect will make it possible for next-generation Muse headbands to control computers and other devices by mind power alone.
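
And how might a measuring device later become a control device? Here is a hypothetical sketch of that step, assuming software can already estimate band power as in the sketch above: sustained activity in one band is treated as intent and mapped to a command. The threshold, hold time and the toggle_lamp() action are all invented for illustration.

```python
# Hypothetical only: map a sustained 'relaxed' EEG reading to a command.
import time

ALPHA_THRESHOLD = 5.0   # assumed level above which the user counts as 'relaxed'
HOLD_SECONDS = 3.0      # the state must be sustained to count as deliberate intent

def toggle_lamp():
    print("Command sent: toggle the lamp")

def control_loop(read_alpha_power):
    """Fire a command when strong alpha activity is held for HOLD_SECONDS."""
    held_since = None
    while True:
        if read_alpha_power() > ALPHA_THRESHOLD:
            held_since = held_since or time.time()
            if time.time() - held_since >= HOLD_SECONDS:
                toggle_lamp()
                held_since = None   # reset, so one sustained 'thought' sends one command
        else:
            held_since = None
        time.sleep(0.1)

# Example: control_loop(lambda: 6.0) would toggle the lamp every three seconds.
```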

The future has a habit of creeping up on us from behind, and it does so more quickly than we sometimes imagine it can. We once thought voice control was science fiction. Enhancing our senses was fine for vision, hearing, even speech; we have prosthetics for all of those. But we have carefully steered away from any mind enhancement. We didn't have the technology, so we left that kind of thing to Star Wars, magic and folklore. Now, it seems, we have the technology, and at the moment mind control sits right at the edge of our imagination of what technology can possibly offer. From motion sensing to mind sensing in just two short years? Who would have thought it? How soon before thought-controlled computing becomes a reality for us all? And what then will we need to do (or to become) to adjust to the brave new world that will be upon us?

Images by InteraXon

Creative Commons License
Headline Muse by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday, 10 December 2012

Things ain't what they used to be

Not so long ago, objects were simply objects. They only came alive in Disney cartoons, or after a heavy drinking session. Most of the time, objects were simply there to perform a task the user required. Now that is all about to change, as we advance into the next phase of Web evolution. We are about to see the emergence of what Kevin Ashton called 'the Internet of Things'. In a recent blog post, Jamillah Knowles wrote that a revolution is about to begin in which the objects in our homes and workplaces will become smarter and more context-aware, and will be able to interpret the data fed to them before taking action. As physicist Michio Kaku wrote recently, 'now we can say to Siri, move my meeting back an hour from 3 to 4; soon we will be able to say to Siri, mow the lawn.' The difference is that at present we use our devices to interact directly with virtual space, but with smart, context-aware objects surrounding us, we will be able to act on the real world through virtual tools.
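
To make 'interpret data before taking action' concrete, here is a small, purely illustrative Python sketch of a context-aware object: a lawn sprinkler that accumulates readings and only acts when its context justifies it. The sensor names, thresholds and the watering action are invented for the example.

```python
# Illustrative only: a 'smart' object that interprets the data fed to it
# before deciding whether to act. Names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class SmartSprinkler:
    context: dict = field(default_factory=dict)

    def feed(self, reading_type, value):
        """Objects in the Internet of Things receive data, not just commands."""
        self.context[reading_type] = value
        self.decide()

    def decide(self):
        """Interpret the accumulated context, then act or refrain."""
        dry = self.context.get('soil_moisture_pct', 100) < 20
        no_rain_due = self.context.get('rain_forecast_pct', 0) < 30
        if dry and no_rain_due:
            self.act()

    def act(self):
        print("Watering the lawn for 10 minutes")

# Readings might arrive from an embedded sensor, a weather feed, or a phone.
sprinkler = SmartSprinkler()
sprinkler.feed('rain_forecast_pct', 10)   # context only: not enough to act
sprinkler.feed('soil_moisture_pct', 12)   # dry *and* no rain due, so it acts
```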

Already we have QR codes and RFID tags embedded in objects. These are very effective, but they are superficial compared with what comes next. The next stage, according to this generation of Internet gurus, is to embed smart chip technology so that objects can hold a conversation with our devices. Not only does that have promising implications for health care, engineering, architecture, business and entertainment, it also promises a bright future for ambient learning. Imagine a group of children visiting a museum, each equipped with a smartphone. An app on their phones interacts with all of the exhibits in the museum. If they stand in front of a statue, or a model of a dinosaur, and hold their phone up, the object will send information to the phone. The longer they stand in front of the exhibit, the more information it will feed them. When they return to their classrooms or homes later, they have a complete archive of all of the objects they have seen that day. They can use this information for projects, essays, blogs and podcasts, showing what they have learnt in the form of text, images, sounds and video. The real learning happens when the kids begin to integrate their experiences, the information they have captured, and their interaction with it, into creating, organising and sharing their own content.
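
Here is an equally illustrative sketch of how such an exhibit might behave, with the content tiers, timings and archive structure all invented for the example: the phone reports how long a child has lingered, and the exhibit releases progressively richer material.

```python
# Illustrative only: dwell-time-based content delivery for a museum exhibit.
EXHIBIT_CONTENT = {
    'Tyrannosaurus model': [
        (0,  "Tyrannosaurus rex lived around 66 million years ago."),
        (30, "Its bite was among the strongest of any land animal."),
        (90, "Audio guide and 3D model unlocked: take them home with you."),
    ],
}

def content_for(exhibit, dwell_seconds):
    """The longer a child stands in front of an exhibit, the more it feeds them."""
    return [text for threshold, text in EXHIBIT_CONTENT[exhibit]
            if dwell_seconds >= threshold]

# The phone-side archive the children take back to the classroom.
archive = {}
archive['Tyrannosaurus model'] = content_for('Tyrannosaurus model', dwell_seconds=45)
print(archive)   # 45 seconds of dwell time unlocks two of the three tiers
```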

All of this has been made possible because of the disaggregation of computer and microchip technology. In 2011, the number of smart objects connected to the Internet surpassed the number of people on the planet. This trend will accelerate exponentially in the next few years to the point where we see ubiquitous computing. No longer do we need to carry computers around with us to be able to interact with digital media. Using the smart device in our pockets, and the ubiquitous computing power that is being embedded in objects all around us, we will soon be able to learn from those objects, invest our memories inside them, and even get them to do our bidding.

Things ain't what they used to be. Things are about to get a whole lot smarter.

Photo by Rod Senna

Creative Commons License
Things ain't what they used to be by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.