Sunday 30 December 2012

Facing the future

At the end of each year many of us tend to focus on the future, wondering what it will bring. We wish each other a happy New Year, and hope that life will treat us kindly. We try to shape our own futures by making New Year resolutions, many of which fall by the wayside after a week or two. Much of our future is not ours to shape. But still we persist in trying to predict the future.

Many of our predictions about the future are based on speculation or wishful thinking. Remember the personalised jetpacks we were all going to use, and the Moon colonies many thought would be established in the 1970s? No matter what we think we 'know' about the future, we are unable to predict it with one hundred percent confidence. Casinos and bookmakers make a fortune out of our desire to guess what will happen next. On 21 December 2012, many people held their collective breath because of a well studied but poorly understood 'prophecy' about the ending of an age. Some sold their houses or gave up their jobs in preparation for the 'end of the world', and were relieved and disappointed in equal measure when nothing happened. The Mayan Apocalypse did not happen. Many of us didn't believe it would. We have seen it all before, several times. Down through the ages, self-appointed religious cult leaders have predicted the return of Christ, the start of Armageddon, or some other global catastrophe, largely based on their own personal interpretations of texts or 'signs'. Such predictions invariably spread fear and uncertainty. All these modern-day prophets have failed, but have ruined the lives of many gullible and impressionable people in the process.

What about teachers and schools? If we try to predict what will happen to education in the next year, we will probably have reasonable success, especially if we work within the teaching profession. Those of us who are engaged as learning professionals tend to see the trends first, and understand the nuances and vagaries of education better than the average 'man in the street'. This is why practising teachers are better placed than politicians to offer ideas for improving education. The caveat is that if we try to predict what will happen in education over a longer timescale, say three to five years, we become less accurate, because random events, changes in policy, variations in the world economy, new technologies, or other unknown variables can change the terrain.

And yet, you and I have a sneaking suspicion that if we do not try to anticipate the future, and make ready to respond to changes as they occur, we will be caught off guard. And we would be right. Anticipating change is a natural part of our survival strategies, and should be encouraged. So we have a conundrum. Do we try to predict the future and risk being badly wrong, or do we just let the future roll over us and try to adapt to it? If we decide on the latter, then we will be at the mercy of change, and not only will education suffer; more importantly, so will the children and young people in our care. If we decide on the former, then at least we have made a choice to try to anticipate the future, and we have an outside chance of being right. The shorter the timescale we try to predict, the better our chance of being right. The farther we try to gaze down the corridor of the future, the greater the risk of being wrong, because there will be more opportunities for unpredictable things to occur.

Over the next few blog posts I intend to examine some of the predictions that have been made on the future of education, with specific reference to technology and the role it will undoubtedly play.  Some of the predictions will be fairly inevitable, others will be wildly speculative, and many will sit somewhere in between, as possibilities that may or may not become reality. If we are prepared for change, then we will be less likely to be taken by surprise. We can at least prepare for a successful new year of teaching and learning based on what we believe is just around the corner.  But we still need to live and work in the present.

I wish you a happy and successful New Year.

"Learn from the past, prepare for the future, live in the present." - Thomas S. Monson

Other posts in this series
Is technology making us smarter?
The future of intelligence
The future of classrooms
Digital classrooms
AR we there yet?
Global learning collectives
The foresight saga
Touch and go

Image source

Creative Commons License
Facing the future by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Saturday 29 December 2012

Communication and learning in a digital age

The latest issue of the online open journal eLearn Centre Research Paper Series has just been published. Issue 5 considers Communication and Learning in a Digital Age, and features papers from a number of scholars in the field, including my own paper on current research perspectives on digital literacies. The papers originate from a conference held in Barcelona in the Summer of 2012. Here is the introduction, written by Sandra Sanz and Amalia Creus (Open University of Catalonia):

Experience of time and technology also has an important impact on learning. The drastic reduction in the lifetime of knowledge, the overflow of information and the fragmentation of sources are just some of the features that are changing the way we learn. This situation challenges us to think more creatively about the interaction between communication technologies and learning, and to explore how our educational models are being impacted by the processes of social change that come with digitalization, the emergence of social media and Web 2.0. 

Since February 2011 the group ECO (Education and Communication), driven by teachers of Information and Communication Studies at UOC, has been providing a forum for researching communication and learning, and for sharing teaching innovation through e-learning environments based on collaboration, creativity, entertainment and audiovisual technologies. 

The five articles in this edition of eLC Research Paper Series reflect the short but intense trajectory of the group. Some of them are a selection of papers presented at the International Conference BCN Meeting 2012, organized by ECO. The other articles were written specially for this issue by members of the group and give a picture of the themes and questions we are now exploring. 

For those who may experience problems downloading my Digital Literacies paper from the site (it doesn't work well on Macs), below is a downloadable .pdf version.



Image source

Creative Commons License
Communication and learning in a digital age by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Saturday 22 December 2012

I'm dreaming of a White laptop...

Me with Keith in 2009
... or any colour really. It doesn't matter that much, as long as it does the job. My old laptop Keith (named after a Rolling Stone - he's very old, has seen better days and is falling to pieces, but is still just about functioning) is about to pop his clogs, I fear. I'm not sure if Keith will even make it to the conclusion of this blog post; he's making ominous chugging and whining noises (which I suppose is also a little like his namesake). You see, Keith is almost 8 years old. In computer years that is way beyond geriatric. His CD-ROM drive packed in ages ago, some of his USB ports have ceased to care, and all of his appendages, his rubber feet and other accoutrements, have long departed. Even his volume control has shuffled off this mortal coil. He is, quite frankly, in a sorry state. But still he soldiers bravely on. If I take Keith out to use with my students, there are often remarks like 'Wow, is that a museum piece?' or 'OMG, that laptop must be almost as old as you!' Cheeky beggars. They lose a few grade points for that.

In fairness, Keith does come from another era. He is chunky and thick, his battery has hardly any life in it (the only way I can operate him is to plug him into the mains), and he is slowing down noticeably. If I don't remove him from my lap after 30 minutes I risk sustaining scorch marks to my legs, because he heats up to the point of shutdown. He takes an eternity to boot up every time I switch him on. He takes ages to shut down. He finds it difficult to do simple tasks, like opening a new browser window. Did I mention he is very, very .... slow? He suffers from the laptop version of arthritis, I guess. As we get older, we all suffer from some form of mobility issue, but for Keith it has become part of his core personality. If he ever did anything fast, I think I would run out of the room in shock.

He is crashing out on a regular basis these days. A self-induced coma. Keith is asleep more than he is awake, and several times I have thought I had lost him forever, given some of the error messages I see on the screen. Once or twice he has refused to get out of bed at all, but after a few days of black screen, he mysteriously resurrects himself. It's as if he is struggling to escape his inevitable eternal dark void. But the best thing about Keith is that he never suffers from a loss of memory. Not since I invested in an external hard drive. Now Keith never loses any data, because it's all offloaded onto an external medium, which is kept separate from him, in case he ever suffers from the computer equivalent of incontinence or something worse.

I still take care of Keith. I have not dropped him since that notorious incident at a conference in 2007. He survived, but for several glasses of wine and the tablecloth, it was a terminal experience. These days Keith doesn't travel with me to far-off destinations. You won't see him at conferences, weddings or Bar Mitzvahs anymore. He is too old for air travel now. He resides at home where he is comfortable, chugging slowly along, performing his tasks in his own time. I wouldn't want to bury him in some far-off foreign field.

So it is time for a new laptop. Christmas is nearly upon us, and I will be disappointed if I receive any more gifts of socks, frankincense (Brut aftershave) or myrrh. Gold I will cope with. But this year, at the risk of offending my anthropomorphised little digital companion, and hastening his sad demise, I want a new, fast, graphically rich and very streamlined laptop. I want a device I can take with me everywhere, use any time, quickly and without too much fuss, and certainly without attracting any snide comments from my students. And yet, whatever Santa brings me, whatever shape and form my new laptop takes, I will always think fondly of Keith, my faithful laptop, from which all my blogs, slideshows and articles have emanated over these last 8 years.

And in the future, if he is still able, I will occasionally fire him up just to say hello. And I will remember.

Photo by James Clay

Creative Commons License
I'm dreaming of a White laptop... by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday 17 December 2012

Headline Muse

The Muse Headband
They have finally done it. Someone has come up with a way to control computers using mind power, and the device is non-invasive. At least, that's what InteraXon, the company that designed the Muse 'brain sensing' headband, wishes to achieve. 'It lets you control things with your mind' runs the sensational strapline for the Muse headband promotional video. Mind control? This will sound quite sinister to many, and others will be far from convinced. After recently reading Michio Kaku's authoritative but still controversial book Physics of the Future, I have a more open 'mind' on the matter. I don't doubt the claims InteraXon make, or at least I won't when I see Muse demonstrated with my own eyes. But I think 'brain sensing' is an unfortunate tag line. Could it imply that there is no brain there to sense? Are they perchance anticipating that brainless people will buy the device? To me, 'brain sensing' implies that it is detecting whether a brain is present, rather than the more spectacular functionality it can potentially offer. Perhaps InteraXon ought to revise their tag line so it more accurately represents the capability of the device. You see, Muse is actually a wearable electroencephalograph (EEG), with four sensors positioned strategically around the Alice-band style headgear that you wear.

If you can get past the irritatingly repetitive and slightly-louder-than-it-should-be background music on the video, and ignore the embarrassing geekiness that exudes from some of the presenters (I think it's really cool!), the Muse headband does look like it has the potential to be a breakthrough technology. The last time we had a true technological breakthrough of any magnitude was two years ago, when Microsoft released the Kinect for the Xbox 360. Kinect was truly revolutionary because it pointed up all sorts of possibilities around non-touch, voice-activated, natural gesture computing, at an affordable price. The simple juxtaposition of two cameras made all the difference. All you had to do was think creatively, and hack the system to get that Tom Cruise, Minority Report (The future can be seen!) action going. Will Muse have a similar impact to Kinect? Will it launch us into a new era of control technology? Time will tell, because at present Muse is still at an early stage of development, and InteraXon are themselves speculating on its potential to bring advances in the non-touch, thought control of devices.

At present, InteraXon are offering advance devices for a mere US$165, on the understanding that you test out the system for them. What is currently on offer goes in one direction only: the Muse headband will be configured to measure your 'brain activity' and transfer an analysis to your laptop or iPad. The device will measure areas of your brain as they activate while you play a 'brain training game'. The manufacturers claim that it will enable you to exercise your memory, measure your attention span and practise relaxation techniques. But is Muse more than simply a measuring device? Later, InteraXon promise, the data they collect will open up the possibility of next-generation Muse headbands controlling computers and other devices by mind power alone.

The future has a habit of creeping up on us from behind, and it does so quicker than we sometimes imagine it can. We once thought voice control was science fiction. Enhancing our senses was fine for vision, hearing, even speech; we have prosthetics for all of those. But we have carefully steered away from any mind enhancement. We didn't have the technology. We left that kind of thing to Star Wars, magic and folklore. Now, it seems, we have the technology, and at the moment mind control is right at the edge of our imagination of what technology can possibly offer. From motion sensing to mind sensing in just two short years? Who would have thought it? How soon before thought-controlled computing becomes a reality for us all? And what then will we need to do (or to become) to adjust to the brave new world that will be upon us?

Images by InteraXon

Creative Commons License
Headline Muse by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday 10 December 2012

Things ain't what they used to be

Not so long ago, objects were simply objects. They only came alive in Disney cartoons, or after a heavy drinking session. Most of the time, objects were simply there to be used to perform a task the user required. Now that is all about to change, as we advance into the next phase of Web evolution. We are about to see the emergence of what Kevin Ashton called 'the Internet of Things'. In a recent blog post, Jamillah Knowles wrote that a revolution is about to begin in which the objects in our homes and workplaces will become smarter and more context aware, and will be able to interpret data fed to them before taking action. As physicist Michio Kaku wrote recently, 'now we can say to Siri, move my meeting back an hour from 3 to 4; soon we will be able to say to Siri, mow the lawn.' The difference is that at present we can use our devices to interact directly with virtual space, but with smart, context-aware objects surrounding us, we will be able to act through virtual tools on the real world.

Already we have QR codes and RFID embedded into objects. These are very effective, but they are superficial compared to what comes next. The next stage, according to this generation of Internet gurus, is to embed smart chip technology, so that objects can have a conversation with our devices. Not only does that have promising implications for health care, engineering, architecture, business and entertainment, it also promises a bright future for ambient learning. Imagine a group of children going on a visit to a museum, each equipped with a smartphone. An app on their phones interacts with all of the exhibits in the museum. If they stand in front of a statue, or a model of a dinosaur, and hold their phone up, the object will send information to the phone. The longer they stand in front of the exhibit, the more information it will feed them. When they return to their classrooms or homes later, they have a complete archive of all of the objects they have seen that day. They can use this information for projects, essays, blogs and podcasts, and incorporate it into whatever content they create to show what they have learnt, in the form of text, images, sounds and video. The real learning happens when the kids begin to integrate their experiences, the information they have captured and their interaction with it into creating, organising and sharing their own content.
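The museum scenario above can be sketched as a simple interaction model, in which an exhibit releases progressively richer information the longer a visitor's phone lingers nearby. This is an illustrative toy, not any real museum API; the class names, the 30-second tiers and the exhibit facts are all invented.

```python
# A toy model of an 'ambient learning' museum visit: each smart exhibit
# holds tiers of information, and releases one extra tier for every
# 30 seconds a visitor's phone remains in range.

class SmartExhibit:
    def __init__(self, name, info_tiers):
        self.name = name
        self.info_tiers = info_tiers  # ordered from basic to detailed

    def info_for_dwell(self, seconds):
        """One tier immediately, plus one per full 30 seconds of dwell."""
        tiers = min(len(self.info_tiers), 1 + seconds // 30)
        return self.info_tiers[:tiers]

class VisitorPhone:
    def __init__(self):
        self.archive = {}  # everything captured during the visit

    def visit(self, exhibit, seconds):
        self.archive[exhibit.name] = exhibit.info_for_dwell(seconds)

trex = SmartExhibit("T. rex", [
    "Tyrannosaurus rex lived around 68-66 million years ago.",
    "Adults grew to roughly 12 metres in length.",
    "Its bite was among the strongest of any land animal.",
])

phone = VisitorPhone()
phone.visit(trex, seconds=65)        # lingered for just over a minute
print(len(phone.archive["T. rex"]))  # prints 3: all tiers unlocked
```

The archive the children take home is simply the accumulated dictionary; in a fuller sketch it would also hold their photos, audio and notes for later remixing.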

All of this has been made possible because of the disaggregation of computer and microchip technology. In 2011, the number of smart objects connected to the Internet surpassed the number of people on the planet. This trend will accelerate exponentially in the next few years to the point where we see ubiquitous computing. No longer do we need to carry computers around with us to be able to interact with digital media. Using the smart device in our pockets, and the ubiquitous computing power that is being embedded in objects all around us, we will soon be able to learn from those objects, invest our memories inside them, and even get them to do our bidding.

Things ain't what they used to be. Things are about to get a whole lot smarter.

Photo by Rod Senna

Creative Commons License
Things ain't what they used to be by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Thursday 6 December 2012

Tracking sentiments

Last year I flew via Amsterdam into Cologne, Germany, to give a keynote speech at a large international conference. I arrived at the airport and made my way through passport control into the baggage collection area. Along with my fellow passengers I dutifully stood waiting at the carousel, watching as bags and cases of all sizes, shapes and colours processed slowly by. Passengers began to collect their luggage and leave. I continued to wait. Decreasing numbers of passengers waited with me as, one by one, they spotted their bags, grabbed them and made off to find their transport. Soon I began to get the sneaking suspicion that my bag wasn't going to arrive. This is what I sometimes refer to as 'baggonising', and it's something I am becoming increasingly familiar with. After I had been left standing alone like a spare lemon at a cocktail party for a while, I admitted defeat, and walked over to the KLM desk to ask why my bag had not appeared.

The woman behind the desk checked, and then with a straight face informed me that there had been 'technical difficulties' at Amsterdam Schiphol, and that my bag would not now arrive until the following evening. She asked for the name of my hotel and told me it would be delivered directly to my room. All fine and good, but there I was, standing in my jeans and sneakers, while my best suit and shirt were in my luggage. In Amsterdam. Worse, my keynote was scheduled for the following morning, which left me in something of a dilemma. To say I was furious with the airline would be an understatement. The plane I disembarked from at Amsterdam was exactly the same plane I boarded again to fly onward to Cologne. I recognised the crew. I got off the plane, trotted a mile or more across Schiphol Airport and then got back onto exactly the same plane, but in the meantime my bag had been removed and left who knew where.

And so I arrived at my hotel, checked into my room and then proceeded to tweet my problem to anyone on Twitter who cared to read it. I named and shamed KLM, and then went off to find something to eat. An hour later, to my surprise, KLM responded to me on Twitter, apologising for the mix-up and advising me to go and purchase whatever I needed; they would foot the bill. Wonderful. Clutching my credit card, I went off and bought a new pair of shoes, two new shirts, underwear, socks, a shaving kit and toiletries. I stopped short of purchasing an expensive new suit. I was wearing a serviceable jacket and anyway, KLM would probably only increase their airfares to compensate if I blew another 1000 Euros on a Ted Baker original.

The keynote went well and my luggage duly arrived the following evening. But how did KLM know to respond to my tweet? Answer - they were scanning for mentions of KLM on Twitter and other social media. This is known as sentiment tracking, a method that may well come in useful in education in the future. I'll give you some examples of how it's used now and how it works...

The Twitter example above is a very primitive form of sentiment tracking and analysis (also known as opinion mining). It simply involves a KLM staff member regularly scanning the popular social media channels in order to intervene if there is any bad publicity or complaint, before it blows up into something unmanageable. Several tools are available for sentiment tracking on Twitter and other social media channels, and the practice is becoming much more sophisticated. Many large businesses do this now, because they want to know what is being said about their brand. They know that a complaint in a public forum can have a highly negative impact on their business if it's not dealt with quickly. But sentiment tracking can also be harnessed positively by businesses. Recently I wanted to buy some black, handmade Italian slip-on shoes. I visited one or two online stores, and then, without purchasing, I went off to do other things. An hour later, I searched on Google for some e-learning blogs, and landed on my first page. There at the bottom of the Blogger website this advert was staring back at me:


How did the system know how to target me? The online store (Amazon) had logged my IP address, and my interest in that specific product, and the fact that I had not purchased. It had probably sent a cookie. It assumed from this that I must still be interested. At the next available opportunity, Amazon targeted me with an advert through Google Ads via Blogger. The same applies when you mention something on Facebook, or simply let slip your date of birth, location or other personal information such as hobbies and interests. Before you know it, Facebook is pushing targeted advertising to your page, and it's highly effective. Facebook logs dozens of different items of personal data from your actions every time you visit, tag a photo, post a new status update or 'like' someone's comment.
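The retargeting mechanism just described (log a view, set a cookie, advertise later) can be sketched as a simple rule over a visitor's browsing record. The cookie layout and the rule itself are hypothetical simplifications for illustration, not Amazon's or Google's actual logic.

```python
# Toy retargeting rule: if a cookie shows a product was viewed but not
# purchased, serve an advert for it on the next page visited.
# The cookie structure here is a hypothetical simplification.

def choose_advert(cookie):
    viewed = set(cookie.get("viewed", []))
    bought = set(cookie.get("purchased", []))
    abandoned = viewed - bought          # interest shown, sale not closed
    return f"Ad: {sorted(abandoned)[0]}" if abandoned else None

cookie = {"viewed": ["Italian slip-on shoes"], "purchased": []}
print(choose_advert(cookie))  # prints: Ad: Italian slip-on shoes
```

Once the purchase is recorded in the cookie, the same rule returns nothing, which is why the adverts usually stop following you around after you finally buy.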

I noticed the following three adverts on my Facebook page just now: You will notice that Facebook knows I am in the UK. It knows a lot more about me than that though. The last advert is because Facebook knows I am a Manchester United fan - that little detail is there in my profile somewhere. The middle advert is because it knows I am a guitarist, again from information in my personal profile. The first advert? I'm not sure why the first is there, because I have never let it be known that I wish to illuminate something 200 metres away from me. Perhaps someone else can shed some light on this. It's not in my profile that I like to bother pilots as they land their jet airliners, or that I have aspirations to be a covert operative for MI6. Sometimes sentiment tracking gets it wrong, and sometimes it just takes a wild punt and hopes for the best, a bit like playing Internet Battleships. But it could be a lot worse. Facebook might decide to send me links to a mature women dating site, or a wholesale Viagra dealer, just for a laugh. That would be hard to explain. Sentiment tracking is usually quite accurate though, picking up on your emotional statements, likes and dislikes, conversations, as well as links you have previously clicked. Sometimes it seems to take a random guess, as with the torch. But sometimes that guess can be disturbingly accurate.


How does sentiment tracking work? At the simplest level, the system uses Natural Language Processing (NLP) techniques to mine the words you type into your status updates or query boxes. At a deeper level, artificial intelligence applications capture the NLP data and process them into clusters that have collective meaning. A lot of modelling can be done with those kinds of data. Essentially, sentiment tracking makes sense of what you do on the web, and then transforms it into recommendations, actions or, in this case, advertising. There are many problems with this kind of computation, including questions over how machines can differentiate between various emotional intensities, distinguish between polarities of opinion, or detect subjectivity in a statement. However, refinements in these systems will continue to improve their accuracy.
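At its simplest, the word-mining step described above can be sketched as a lexicon lookup: score each word against a table of sentiment weights and sum the result. The lexicon below is invented for illustration; real opinion-mining systems use large curated lexicons and machine-learned models that handle negation, intensity and subjectivity.

```python
# A minimal lexicon-based sentiment scorer, the simplest form of the
# NLP 'opinion mining' described above. Word weights are invented for
# illustration only.

SENTIMENT_LEXICON = {
    "furious": -3, "lost": -2, "delayed": -1,
    "apologising": 1, "thanks": 2, "wonderful": 3,
}

def sentiment_score(text):
    """Sum the sentiment weights of known words; unknown words score 0."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return sum(SENTIMENT_LEXICON.get(w, 0) for w in words)

tweet = "Furious with the airline, my bag is lost and delayed"
print(sentiment_score(tweet))  # prints -6: a complaint worth intervening on
```

A brand-monitoring tool would run a scorer like this over a stream of mentions and alert a human when the score drops below some threshold, which is roughly what happened with my KLM tweet.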

When it comes to group behaviour, sentiment tracking can be quite accurate. As we have demonstrated in our previous research into Technosocial Predictive Analytics (TPA), using a mashup of NLP, AI, GPS and geomapping, events such as flu epidemics and social movements can be tracked, and even predicted quite accurately, across geographical location and time. Have you ever shopped for a book on Amazon? You select your book and then Amazon displays a message saying something like '76 people who bought this book also bought...', and you suddenly realise that there's another book you didn't know about on a similar subject to your own purchase, and now you want that book too! It's a very effective marketing ploy, but there is also enormous educational potential. Amazon is using a form of crowdsourcing for its sentiment tracking, and is selling you a book you didn't know you wanted, based on the tacit approval of a cluster of people who are similar in their tastes, profiles or backgrounds to you. In effect, the individual acts of buying books combine to create a desire line - a slime trail of social enzymes, if you will - that can be mapped and recommended to future purchasers of similar products.
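The 'people who bought this book also bought...' mechanism can be sketched as co-occurrence counting over purchase histories. The book titles and purchase data below are invented; Amazon's actual item-to-item collaborative filtering is considerably more refined.

```python
from collections import Counter

# Crowd-sourced recommendation by co-occurrence: count how often other
# titles appear in the purchase histories that contain the target title,
# and rank them. The purchase data is invented for illustration.

purchases = [
    {"Digital Literacies", "E-learning Futures", "Web 2.0 in Schools"},
    {"Digital Literacies", "E-learning Futures"},
    {"Digital Literacies", "Classroom Robotics"},
    {"Web 2.0 in Schools", "Classroom Robotics"},
]

def also_bought(item, histories):
    counts = Counter()
    for basket in histories:
        if item in basket:
            counts.update(basket - {item})  # never recommend the item itself
    return [title for title, _ in counts.most_common()]

print(also_bought("Digital Literacies", purchases))
# 'E-learning Futures' ranks first: it appears in 2 of the 3 matching baskets
```

The same ranking, applied to learning resources rather than books, is one obvious way the 'desire line' idea could be carried into education.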

Clearly there are opportunities to harness the power of these methods in education. Imagine students being directed to new and highly useful content they were previously unaware of. Imagine new content being created automatically on the basis of the actions of like-minded scholars in dispersed locations. Imagine content being changed and updated automatically, based on the activities of a global community of practice. Finally, imagine being able to track the actions, content creation and decision making of your groups of learners, and mapping these onto information graphics to follow their collective and individual progress, knowing when to intervene and when to leave them alone. This kind of learner analytics (or educational data mining) will emerge from the collective intelligence of crowdsourcing and the sentiment tracking of individual actions and behaviour. The technology already exists. We now have to decide whether we want this capability in education, and if we do, we next have to ask what the ethical, pedagogical and social implications will be.
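As a hypothetical illustration of that last scenario, a learner-analytics tool might flag learners whose recent activity falls well below the group norm, as a cue for when to intervene. The names, activity counts and the 50% threshold are all invented for this sketch.

```python
from statistics import mean

# Toy learner analytics: flag learners whose weekly activity count has
# fallen well below the group average, as a cue for intervention.
# Names, counts and the 50% threshold are invented for illustration.

activity = {"Alice": 42, "Ben": 38, "Carla": 9, "Dev": 35}

def needs_intervention(counts, threshold=0.5):
    """Return learners whose activity is under threshold x group mean."""
    avg = mean(counts.values())
    return sorted(name for name, n in counts.items() if n < avg * threshold)

print(needs_intervention(activity))  # prints ['Carla']: 9 is under half the mean
```

Even a crude rule like this raises exactly the ethical questions posed above: who sees the flag, and what happens to the learner once they are flagged?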

In the next blog post: How Google is refining your web search

Photo by David Sky
Other images by Steve Wheeler

Creative Commons License
Tracking sentiments by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday 4 December 2012

The Smart eXtended Web

Will the Web recurse infinitely?
Many of us are obsessed with the future, and are constantly wondering what new technologies, trends or events will change our lives forever. The Horizon Report is eagerly anticipated by educators each year, because it peers down the corridor of time and attempts to predict what we can expect to see in our classrooms in the next year, in two to three years, and in five years' time. People spend huge amounts of money each year gambling on the future. The average person bets on horse races or the lottery, whilst high-powered executives buy and sell stocks and shares. Some put their faith in clairvoyants, who for a price will attempt to predict your personal future for you. In the world of learning, we are obsessed with questions about where education is heading next, how work-based learning will be enhanced, and how learners can be engaged more effectively. Many educators have invested their trust in the use of new and emerging technologies for the future success of learning. Others have been more reticent, preferring instead to rely on the old, tried and tested methods of education and training.

Regardless of personal perspectives, our society is advancing rapidly into a technological future in which just about everything will change. Nothing short of a global disaster will stop it. We have seen the trends. Over the last 20 years, mobile phone texting has taken a significant hold on the communication habits of billions of citizens. New computer interfaces are being introduced that will supplant the ubiquitous keyboard and mouse. Soon we will control our computers using voice, gesture, even facial expressions and changes of mood.

We have never been so connected as we are today. Global telecommunications mean that anyone connected can link with anyone else, hear and see them in real time, and send and receive documents at the speed of light. We carry our offices in our pockets. We increasingly do more of our shopping online, and we spend significant proportions of our working days dealing in bits rather than in atoms. We generate enough media content every day to dwarf anything previous societies could create in an entire year.

In the last decade, we have seen the liberation of the microchip from the computer. Now processing power can be embedded into any object, allowing it to be connected to the global network. This is significant, because it heralds a new kind of network made not only of knowledge and people, but a network of smart objects, an Internet of Things. Not only will our personal possessions become connected and smarter, so will our homes, our classrooms, our communities, and ultimately our cities. Yet these rapid technological changes could also be our Achilles heel. We are now so reliant on our computing power and telecommunication capability that if it were suddenly removed or disrupted, much of our familiar world would grind slowly to a halt.

The Web has changed, evolving through a number of iterations to become increasingly prescient, not only about what we wish to search for, but also about the context in which we are searching. Semantic search also takes our previous behaviour into account. Now the Web is about to get even smarter. Where Web 1.0 was about connecting content, and Web 2.0 (the social web) was about connecting people, Web 3.0 (the semantic web) will be about connecting collective intelligence. It will be the global network of distributed cognition. But just what will this emerging hive mind look like, and what will we be able to do with it?

I wrote about Web 3.0 in an earlier post and speculated that the 'Smart eXtended Web' would be characterised by a number of features that included intelligent collaborative filtering of content, 3D visualisation and interaction and extended smart mobile interfaces. Now several new developments will bring these ideals to fruition, and it will happen sooner than we expected, because change is not linear, it's exponential.

Paul Groth talks about Web 3.0 in terms of what it will be able to do for us. In his paper The Rise of the Verb he explains his vision of how the web will evolve beyond the representation of knowledge in static data sets to the point where it can turn our commands into actions. Already, he writes, we can say to Siri: 'Move my meeting from 3 to 4'. In the future we will be able to say to Siri: 'Mow the lawn' and it will be done. The difference, he suggests, is that at present we can command our tools to action in the virtual world, but in the near future, with the advance of the Internet of Things and an emerging capability of the Web to interpret verbs as calls for action, we will be able to command operations in the real world too. He argues that in the next ten years we will see a web that is not only grounded in mathematical functions and definitions, but one that is also able to operate through the smart objects around us, extending its usefulness into the physical world. Ten years? I think it will be sooner.
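Groth's idea of the web treating verbs as calls for action can be caricatured in a few lines. The command grammar, handlers and 'robot mower' below are entirely hypothetical, a sketch of the principle rather than of any real assistant's API: a verb is recognised, and dispatch decides whether it acts on a virtual service (a calendar) or, in principle, on a connected physical device.

```python
import re

# Toy sketch: parse a spoken command into a verb plus arguments, then
# dispatch it. Both handlers and patterns are invented for illustration.

def move_meeting(args):
    # A 'virtual world' action: update a calendar entry.
    return f"meeting moved from {args[0]} to {args[1]}"

def mow_lawn(args):
    # A 'real world' action: pass the verb on to a smart object.
    return "instruction sent to robot mower"

HANDLERS = {
    "move": (r"move my meeting from (\d+) to (\d+)", move_meeting),
    "mow":  (r"mow the lawn", mow_lawn),
}

def interpret(command):
    command = command.lower()
    for verb, (pattern, handler) in HANDLERS.items():
        match = re.fullmatch(pattern, command)
        if match:
            return handler(match.groups())
    return "verb not understood"

print(interpret("Move my meeting from 3 to 4"))  # meeting moved from 3 to 4
print(interpret("Mow the lawn"))                 # instruction sent to robot mower
```

The hard part, of course, is everything this sketch leaves out: understanding arbitrary phrasings, and grounding the verb in a device that can actually carry it out.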

In the next blog post: Sentiment tracking

Image source Fotopedia

Creative Commons License
The Smart eXtended Web by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday 3 December 2012

Recycling learning

"...making good use of the things that they find, things that the everyday folk leave behind..."

Yep, that's a blast from the past for those who grew up watching the children's programme The Wombles on television. Essentially, the Wombles were furry creatures who lived on Wimbledon Common and tidied up all the litter left behind by the 'everyday folk'. Not only did they tidy up, they also recycled the objects they found, into something useful. We could do with a few Wombles down our street, I can tell you.  

How does this fit into education? I hear you asking.... well, read on. 

A useful concept to aid the understanding of current web based learning practices is Bricolage (Levi-Strauss, 1996). Art students will recognise it as the technique of creating an image from a variety of materials that just happen to be available. In architecture, bricolage can refer to the seemingly chaotic proximity of buildings from various periods and styles. For Levi-Strauss, bricolage described any spontaneous action, especially those that are steeped in personal meaning. The principal meaning of bricolage, however, evokes a 'do it yourself' ethos, where each individual creates personal meaning through seemingly haphazard actions that draw together disparate objects to form new wholes.

In the UK punk movement of the late 1970s, chains, safety pins and dog collars were all appropriated as fashion items, eventually assuming additional meaning as statements of personal identity. In the context of learning, bricolage is a useful analytical lens. It was applied by Seymour Papert (1993) to explain a particular style of problem solving. He suggests that bricoleurs reject traditional, systematic analyses of problem spaces in favour of play, risk taking and testing out. Younger users of technology tend to rely less on formal instruction or user manuals when they encounter new tools. Instead, they launch into an exploration of the device, to see what it can do. They learn to use it by testing it out, and also observing their peers. These sentiments are echoed by Sherry Turkle (1995) who argues that those working in digital spaces, such as programmers, often work in a bricoleur style, working through a 'step-by-step growth and re-evaluation process', regularly spending time standing back from their work to reflect.

Many of the above traits are desirable, transferable skills for 21st Century working, and can be witnessed in the daily activities of learning on the Web. As students develop their ideas, they create content, often drawn together through a variety of search and research methods that can be disparate and seemingly unconnected. Learners draw on a wide range of content, not only from the web, but also from other media and non-media sources as they construct personal meaning. Their personal learning environments (PLEs) tend to be a bricolage of free tools, handheld devices and a personal network of friends, family and peers. Haphazard their learning might appear, but over a period of time, the various sources of their content crystallise together into accessible, meaningful and personalised learning.

In essence, today's digital learners are finding content, recycling and repurposing it, organising and sharing it. They are creating their own spaces, developing and using their own tools and apps, and generally 'making good use of the things they find'. In so doing, I believe that this current generation of learners are developing into one of the most innovative, literate and knowledgeable generations this planet has ever seen.

References
Levi-Strauss, C. (1996) The Savage Mind. London: Orion Publishing Group
Papert, S. (1993)  Mindstorms: Children, Computers and Powerful Ideas. New York: Basic Books.
Turkle, S. (1995) Life on the Screen: Identity in the Age of the Internet. New York: Touchstone.

Photo by David Radcliffe

Creative Commons License
Recycling learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Thursday 29 November 2012

10 characteristics of authentic learning

I argued yesterday that authentic learning is a vital part of education in the 21st Century. The need to create learning opportunities that are grounded in reality, and form a concrete basis for real world transferable knowledge and skills, has never been more important. We also need authentic assessment for learning. Too often in school classrooms around the world the delivery of content is abstract, disconnected and decontextualised. Students are then regularly tested on their recall of what they have 'learnt' and graded as successes or failures. But what exactly constitutes their success or failure? And what does this process of assessment teach students about the school system? Part of the problem is that content is delivered with little opportunity for students to make personal sense of that content. Another problem is that students are then expected to replicate that 'knowledge' in a form that is recognisable as the original. Students are therefore learning exactly what is already known, rather than exploring new knowledge and gaining fresh insight on the world.

Some have previously argued that students at this stage in their education require some knowledge that they can build on. True, but how long should this priming of initial knowledge be allowed to go on? When do we begin to develop independent, autonomous lifelong learners? Authentic learning (and authentic assessment) are related not only to the knowledge students receive, but also to the knowledge production they can themselves achieve. Such learning is not instant, nor can it be achieved over a brief time period. But it can be nurtured early. Complex and iterative learning of this kind takes a lifetime of study, and is always grounded in real world experience. Reeves et al. (2002) have much to say about the characteristics of authentic learning, including an emphasis on personalised learning that can be achieved through ill-structured, problem-based learning, where meaning is negotiated within collaborative learning environments, and learning can be situated within multiple contexts and perspectives. Their list of 10 characteristics below is a very useful toolkit for any teacher who wishes to ensure that authentic learning is supported in their classroom:
  1. Real-world relevance: Activities match as nearly as possible the real-world tasks of professionals in practice rather than decontextualized or classroom-based tasks.
  2. Ill-defined: Activities require students to define the tasks and sub-tasks needed to complete the activity. 
  3. Complex, sustained tasks: Activities are completed in days, weeks, and months rather than minutes or hours. They require significant investment of time and intellectual resources. 
  4. Multiple perspectives: Provides the opportunity for students to examine the task from different perspectives using a variety of resources, and separate relevant from irrelevant information. 
  5. Collaborative: Collaboration is integral and required for task completion. 
  6. Value laden: Provide the opportunity to reflect and involve students’ beliefs and values.
  7. Interdisciplinary: Activities encourage interdisciplinary perspectives and enable learners to play diverse roles and build expertise that is applicable beyond a single well-defined field or domain. 
  8. Authentically assessed: Assessment is seamlessly integrated with learning in a manner that reflects how quality is judged in the real world.
  9. Authentic products: Authentic activities create polished products valuable in their own right rather than as preparation for something else. 
  10. Multiple possible outcomes: Activities allow a range and diversity of outcomes open to multiple solutions of an original nature, rather than a single correct response obtained by the application of predefined rules and procedures.
How much of this is currently being achieved in our schools? What would it take for schools to adopt some or all of these approaches?

'In real life, I assure you, there is no such thing as algebra.' - Fran Lebowitz.

References 
Reeves, T. C., Herrington, J., & Oliver, R. (2002). Authentic activity as a model for web-based learning. 2002 Annual Meeting of the American Educational Research Association, New Orleans, LA, USA.

Web source
Photo by Dana Bateman

Creative Commons License
10 characteristics of authentic learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Wednesday 28 November 2012

Authentic learning

In his 1970 book Deschooling Society, the radical philosopher Ivan Illich wrote: 'Most learning is not the result of instruction. It is rather the result of unhampered participation in a meaningful setting. Most people learn best by being "with it," yet school makes them identify their personal, cognitive growth with elaborate planning and manipulation.' 

This is a real challenge to many schools. Some of the most effective learning methods involve students doing and making, problem solving, and playing games, all of which comply with the notion of being in a meaningful setting. This kind of situated learning is powerful because it immerses students in contexts that are authentic. Medical students learn through problem based learning, often a complex situated form of education that places them in the role of decision maker. Pilots do a lot of their training in simulators, where 'real life' problems and challenges can be presented to them, and to which they must respond. This kind of learning, according to Jean Lave (1988), is powerful because it is rooted in context, and avoids much of the abstract nature of content that is delivered traditionally. Brown, Collins and Duguid (1989) agree, believing that authentic learning contexts are vitally important if students are to acquire and develop cognitive skills that are transferable to real world living. 

So how do we bring these powerful ideas into school classrooms? Often, we see children bored or demotivated because they are presented with content that is abstract and meaningless, or without a specific context or 'situatedness'. It's not all bad news though. There is evidence that some schools are beginning to adopt authentic learning methods. Saltash.net, a school near my home, managed to get around this issue by placing children in situations where they had to use tools and techniques to solve real life problems. On their small working farm located within the grounds of the school, they kept chickens, pigs and goats. The children took turns managing the farm, and were often required to purchase food for the animals, or sell eggs at the market. To do this they needed to know how a market operates, and had to understand concepts such as supply and demand, profit and loss, sell by dates, and so on. Teaching them how to use an Excel spreadsheet would have been dull and boring if it had been kept within the four walls of a classroom or ICT suite. Taking this skill outside, and putting them in a position where they had to learn by applying spreadsheets to buying corn, selling eggs at a good price and maintaining records, placed their learning within a meaningful setting. There are endless examples of situated learning in a school near you.

In one American school I visited, teachers chose two students each day who were tasked to edit and present the following day's morning news programme on school radio. All of the children took it in turns to be the morning DJs and news presenters, and their responsibility was to make sure their school was kept up to date on current affairs through their research, editing, filtering and presentation. Many schools in the UK are adopting the School Radio approach too, and children are relishing the challenge of informing their classmates and teachers, deciding on music playlists, reporting on weather and sport, while acquiring authentic critical, organisational and reflective skills. This is learning by stealth, and it is incredibly powerful.

Ultimately, it is the teacher's role to create learning contexts that support authentic learning. If teachers can situate learning in meaningful contexts and real life (or realistic) settings, not only will students become more motivated, they will also acquire authentic transferable skills that they can call upon for the rest of their lives.

References
Brown, J.S., Collins, A. and Duguid, P. (1989) Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.
Illich, I. (1970) Deschooling Society. London: Marion Boyars Ltd.
Lave, J. (1988) Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. Cambridge: Cambridge University Press.

Photo by Cobalt123

Creative Commons License
Authentic learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Monday 26 November 2012

Parabolic learning

Reflection and Amplification
Now that I have some time, I can sit down and reflect on an extraordinary two hour session with my BA Education Studies students this morning. They are only a small group of a dozen students, but over the last few months, my elearning module group has created a very large amount of content, including blogs, wiki pages and videos. The group wiki is here if anyone wishes to view some of their content. We have previously explored a number of learning theories, new learning technologies, concepts around crowdsourcing, wisdom of crowds, folksonomies and user generated content, Web 2.0, mobile learning and a whole host of other themes during the course.

Today was different, because normally I prepare thoroughly for the sessions. Today, I took the risk of going into the room with just a germ of an idea to see how it would develop. That germ of an idea evolved over the course of the two hour session into something beyond anything I could ever have planned. It proves to me that sometimes spontaneity can pay dividends. The incorporation of a number of social media tools into the mix proved to be an amazing platform from which the students and I could reflect on the process of learning, and amplify our ideas to each other and the world.

I started the session with the aim of encouraging the group to learn deeply and critically about a particular topic - MOOCs (Massive Open Online Courses). I asked them to prepare for a debate next week, and put up the slide: 'This house believes that MOOCs will signal the demise of campus based higher education'.  I then divided the students randomly into two teams, one arguing for the motion, and the other arguing against. I asked the members of the two teams to research their arguments, with supporting evidence, and blog their ideas in preparation for next week's debate.

As an additional strategy, I asked two students to act as content curators. Their task would be to create a new wiki page, and begin to populate it with resources related to MOOCs. This would act as baseline reference material for the two sides to incorporate into their arguments, but it would also mean that the two students would need to investigate both sides of the argument and post content related to the discourse around MOOCs.

I then tweeted (and encouraged the students to do the same) a few messages to the online educator community to ask them their views on the question of whether MOOCs would eventually replace traditional forms of education. This kind of crowdsourcing activity is always a risk and quite unpredictable, because you never know who will respond (if anyone) or what they will say. I added the hashtag #moocplym for good measure so we could track the conversation across the community. Next, I projected Twitterfall and VisibleTweets live backchannel feeds of responses on the large screen at the front of the classroom. Another task then came the way of the curator team. Their next challenge was to create an archive of all the tweets, blogs, and other content related to the hashtag #moocplym and maintain a chronological record throughout the week using Storify or some similar curation tool.
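At its core, the curators' archiving task is a filter-and-sort operation: keep the messages carrying the hashtag, and order them chronologically. A minimal sketch, using invented example tweets rather than the live #moocplym feed:

```python
from datetime import datetime

# Invented example data - a simplified stand-in for a real tweet stream.
tweets = [
    {"user": "alice", "text": "MOOCs will replace lectures #moocplym",
     "time": datetime(2012, 11, 26, 10, 15)},
    {"user": "bob", "text": "Campuses aren't going anywhere #moocplym",
     "time": datetime(2012, 11, 26, 9, 40)},
    {"user": "carol", "text": "Unrelated chatter",
     "time": datetime(2012, 11, 26, 9, 55)},
]

def archive(tweets, hashtag):
    # Keep only tagged messages, then order them as a chronological record.
    tagged = [t for t in tweets if hashtag in t["text"].lower()]
    return sorted(tagged, key=lambda t: t["time"])

for t in archive(tweets, "#moocplym"):
    print(t["time"].strftime("%H:%M"), t["user"], "-", t["text"])
```

Tools like Storify wrap this basic operation in an editorial interface, letting the curators annotate and narrate the timeline as well as collect it.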

Over the coming week, the two teams (with the curation team in attendance) will therefore explore the history, culture, technology and pedagogy of MOOCs, a topic they are not particularly familiar with. They will critically analyse the discourse surrounding MOOCs, create and share content on their learning, and reflect on it. Their ideas, and their associated content will be presented and amplified through the social media channels, and the ultimate act will be the debate, followed by a discussion of the entire process from start to finish. There will be a lot to talk about if it all goes according to schedule. Oh, and why did I title this post parabolic learning? Because a parabolic reflector collects energy, focuses and transforms it, and then reflects it back with greater intensity. That's exactly what I want my students to do.

First image source
Second image by Steve Wheeler

Creative Commons License
Parabolic learning: reflection and amplification by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Sunday 25 November 2012

Making a difference

Many times I've heard it said that there is no evidence that technology improves learning. This is a vacuous claim, based on ignorance of the research literature, and possibly borne out of a fear or dislike of technology in general. My usual retort to such a claim is that children with special educational needs are a classic example of technology improving learning. For children with special needs, especially those with physical disabilities such as deafness or vision impairment, technology not only improves learning, it actually enables learning. Without adaptive technology, many disabled children could not access certain types of education. But there is a mass of evidence to show that technology is not only making the difference for all learners, it is actually creating new and previously unattainable opportunities for learning. Technology does make a difference.

A recent research study at Durham University in the North East of England suggests that multi-touch, multi-user surfaces can improve the learning of mathematics. Four hundred children were involved in the study, which demonstrated that 'smart tables' enabled better collaboration and problem solving during maths lessons. Class teachers receive a live feed of output from the children's interactions on the surface, and can intervene when necessary. The research has shown that the touch surfaces enable children to discover a range of alternative solutions to maths problems, simply through interacting with each other in new ways.

Image source

Creative Commons License
Making a difference by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Friday 23 November 2012

Are QR codes redundant?

It seems only a short while since we first became aware of Quick Response (QR) codes. In fact, they have been around since 1994, and were originally created to enable the Japanese car manufacturing giant Toyota to track its vehicles during the manufacturing process. Now QR tags are just about everywhere you look, including advertising hoardings, buses and trains, magazines and even coins. They are essentially two dimensional bar codes that you can scan using your mobile device. The beauty (if you can call it that) of the QR tag is that it will quickly take your mobile device browser to a web site with no other effort than a button click. But as many users will tell you, scanning a QR code can be a little hit and miss.

QR codes have polarised the education community over their usefulness. Some argue that they have no real use beyond faddishness and 'wow' factor, whilst other educators are forging ahead, developing ideas for their pedagogical use. Slowly over the last few years, educational uses have begun to emerge, with some pedagogical applications already being tried out in authentic contexts. And yet, even while QR codes in education are still in their emergent state, questions are being asked about their future, and whether they have already become redundant.

Enter Blippar, an augmented reality tool that is hailed as the QR killer. Apparently it can do everything QR codes can do, but a whole lot more too. I first heard of Blippar when I picked up the November 2012 issue of the ShortList magazine, currently the most widely circulated free men's lifestyle magazine in the UK. The banner headline read 'Special Interactive Gaming Issue', which immediately piqued my interest. From cover to cover, the magazine's features, articles, adverts and editorial are all marked with a small yellow 'Scan this page for more' symbol. Using the downloadable app from Blippar, readers can capture the image of the page, which takes them to an interactive website or gaming application. Blippar's managing director Jessica Butcher is fairly triumphant about what she naturally considers to be the advantages of Blippar over QR tags, declaring 'Rather than adding an ugly black and white pixellated box to an ad creative, Blippar can take the creative itself (the whole poster, a logo, the product itself) as the trigger for an interactive engagement.'

She has a point. We certainly wouldn't wish to ruin the aesthetics of adverts, would we? Seriously, I have always thought QR tags to be a little ugly in their appearance. The Blippar app is designed to recognise an image from almost any angle, at a distance, and even in poor light conditions, depending on the quality of your mobile device camera. This makes it a whole lot more reliable than scanning a QR tag, in my experience at least. Just like QR codes, Blippar can also recognise where the user is geographically through the GPS system on the mobile device they are using. For advertisers this is a distinct advantage, but I can also see many educational uses for these features.

Ultimately, those who are speculating on the future of paper based resources might like to consider Blippar and other similar data capture augmented reality tools. The future is likely to see a combination of paper based and e-books, or more likely a hybrid of paper based and AR enabled products, designed to function together with the user's mobile device, working in concert to provide students with interactive learning experiences wherever they are. Paper is not dead yet. It's just become enhanced.

Read the full article here: Can Blippar make QR codes redundant?

Image by Steve Wheeler

Creative Commons License
Are QR codes redundant? by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Wednesday 21 November 2012

Teaching artistry

I taught my first art lesson today. Ever. Passing colleagues were a little surprised to see me teaching in the art room, completely out of context. Normally I'm found teaching a session on educational theory or psychology, or information and communication technologies. Teaching an art lesson is therefore a little outside my comfort zone. And yet, earlier today, I found myself surrounded by students with easels, wielding pencils, as we conducted a drawing class.

The drawing session was part of our BA degree in Education Studies, delivered within the module we were teaching - 'Creativity in Education' - which encourages students to explore through embodied practice the theoretical and practical relationships between education and creativity. Throughout the year we will be exploring creativity through a range of activities, including dance, photography, video, music, and art. During the module the students will be asked to keep a reflective blog or video diary. At the end of the module they will present their work as a creative portfolio, and the final session will see a public performance of their work. Many of the sessions will involve some aspect of learning by making, a powerful pedagogical method also known as constructionism.

I say the drawing session was outside my normal comfort zone, because it is quite a departure from my normal teaching topics. And yet those who know me will recall that when I was younger I studied fine art and graphic design for a couple of years at Hereford College of Art. I have never stopped being an artist. Whether painting a water colour landscape (my favourite medium) or making a new slideshow for a talk, I always try to portray my ideas creatively, in a manner that is pleasing to the eye. Although I have never given an art lesson before, it seemed fairly natural to me to do so now. With the students we explored a range of drawing activities, from conventional still life drawing, through to speed drawing, where the objects were constantly changing. Of particular interest to me, as always, was the conversation I had with the students as we were working. Many also admitted to being outside their comfort zones as they participated in the drawing exercises, because they professed no skill or expertise in art. Their willingness to engage spoke volumes, because ultimately, the session was not about learning how to draw, but about learning to appreciate how creativity can be applied to classroom layout, curriculum design and teaching. One aim of the module is to encourage students to think creatively about education, using their imagination, and exploring a variety of perspectives on how creativity can be unleashed in the current school systems.

Most of us would acknowledge that teaching is an art as well as a science. There is a certain artistry that educators need to acquire and practise if they want success in the classroom. Teaching is a performance, and those who are creative are constantly able to reinvent lessons, resources and spaces. Creative teachers tend not to worry too much about barriers or constraints, but are constantly seeking solutions and new ways to do things, to improve and enhance learning. Too often, teachers and learners are constrained by their environment, time, school culture, legislation or simply not having access to appropriate resources. Probably the worst barrier to good teaching and learning though, turns out to be lack of imagination.

"Anything can make you look, but only art can make you see."

Image source

Creative Commons License
Teaching artistry  by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Friday 16 November 2012

Next generation learning

In my previous blog post, the architecture of learning, I outlined some of the key characteristics of learning in a digital age, and started to identify some of the main differences between Learning 1.0 (before social media) and Learning 2.0. In the summary of the article, I suggested that the distinct differences between the two types of learning are mostly based on how learners are changing the ways they interact, and their increased ability to create, share and organise their own learning. Learning 2.0 is socially much richer and more participatory, and relies more on interaction with other learners than any previous learning approach. This change has been realised through access to inexpensive internet tools that offer easy ways to connect with others with similar interests. There is a growing understanding that it's not so much what you know anymore, but who you know. No longer is the computer your only mind tool and extension of your memory - now you can also call on everyone else in the world. Social media are enabling learners everywhere to connect and work together with each other, forming convenient communities and networks of shared interest. The full power of the Learning 2.0 approach has yet to be realised, but already we are seeing radical shifts in the way learning is conducted. I also argued that if we view sequenced versions of the Web, based on the way learners use it, we will inevitably have to think of Learning 3.0, and beyond. This led me to think about what we might see in the future of learning, based on present trends, and our anticipation of what new technologies and approaches we think are on the horizon. So here we go - Learning 3.0...

Learning 3.0, if we are to believe all the hype, will be located within a semantic based architecture of webs - a 'meta-web'. I see it arising partly from what is happening on the web right now, but also as a result of new intelligent filtering tools. Increasingly, as users contribute to the content, links and pathways of the social web, it will become more 'intelligent', and will recommend to its users the best ways to find what they are looking for. It will also recommend things that users don't know they need yet, predicting their 'needs' based on their previous behaviour and choices. Learning 3.0 will see learners using sophisticated new web tools that are intricately connected to each other, are context aware, and are accessed through intuitive and natural interfaces. Here we begin to think not only of voice-activated, gesture-controlled interfaces, but we also need to start considering biometric recognition systems such as retinal scanning, facial recognition and even directly implanted devices that allow us to control our devices merely by thinking (see the table below). Where Learning 1.0 was organised around taxonomies and content was largely expert generated, Learning 2.0 has seen a shift toward user generated content, and the emergent property of folksonomies. We have known for some time that people learn better when they are actively engaged in making things, solving problems and engaging with others. Social media have provided the tools to achieve this on a global level.
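The kind of behaviour-based recommendation described above, suggesting things users don't know they need yet, can be illustrated with a deliberately naive co-occurrence sketch. Real semantic filtering is far more sophisticated; the learner histories below are invented for illustration only.

```python
from collections import Counter

# Invented example: sets of topics that other learners have explored.
histories = [
    {"blogging", "wikis", "podcasts"},
    {"blogging", "wikis", "screencasts"},
    {"wikis", "podcasts", "screencasts"},
]

def recommend(seen, histories):
    # Score each unseen topic by how often it co-occurs with the
    # learner's own past choices in other learners' histories.
    scores = Counter()
    for history in histories:
        if seen & history:              # this learner shares interests with us
            for item in history - seen: # count their topics we haven't tried
                scores[item] += 1
    return [item for item, _ in scores.most_common()]

print(recommend({"blogging"}, histories))  # 'wikis' ranks first
```

Even this toy version shows the key property: the more people use the system, the more histories it can draw on, and the better its suggestions become.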

Learning 3.0 will be user and machine generated, and will in all respects be represented in what I will call 'rhizonomies'. Rhizonomic organisation will emerge from the chaotic, multi-dimensional and multi-nodal arrangement of content, giving rise to an infinite number of possibilities and choices for learners. As learners choose their own self determined routes through the content, so context will change and new nodes and connections will be created in what will become a massive, dynamic, synthetic 'hive mind'. Here I do not refer to any strong artificial intelligence model of computation, but rather a description of the manner in which networked, intelligent systems respond to the needs of individual learners within vast, ever expanding communities of practice. Each learner will become a nexus of knowledge, and a node of content production. Extending the rhizome metaphor further, learners will act as the reproduction mechanisms that sustain the growth of the semantic web, but will also in turn be nurtured by it. Learning 3.0 will be a facet of an ongoing, limitless symbiotic relationship between human and machine.

Whatever Learning 3.0 is or will become, we can be assured it will be completely different to what has preceded it. We will witness new modes of learning, new ways of interacting and new ways of representing knowledge that will be both robust and mutable, personally contextualised, but without boundaries. I believe the future of learning is going to be very exciting indeed.

Postscript: My thinking in this blogpost is embryonic and is, as ever, open to challenge. I may be hopelessly off target here, because this is uncharted territory for me. But I am taking the risk of airing my views in public about this topic just to see what feedback I will receive from my professional learning network. I therefore value any dialogue (on this blog and elsewhere), corrections, advice and suggestions as I attempt to navigate my way through the thinking process about what kinds of learning we might see in the future.

Image source

Creative Commons License
Next generation learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

Tuesday 13 November 2012

The architecture of learning

One of the characteristics of Web 2.0, according to the man who coined the phrase, is to be found in its architecture. As far as Tim O'Reilly is concerned, Web 2.0 tools are configured in such a way that they 'get smarter the more people use them.' This facet was explained very clearly in Michael Wesch's excellent video Web 2.0 ... The Machine is Us/ing Us, which shows how web tools work better the more people use them. Social tagging, for example, becomes stronger as people populate it with content and links. Blogs rely not only on content, but on users, and ultimately on the dialogue that ensues between all those who read the content. In his famous Wired article, Kevin Kelly predicted this by suggesting that Web 2.0 was about leveraging collective intelligence. Web 2.0 has marked a shift in emphasis from the personal computer to the web, and the services it conveys. Web 2.0 is qualitatively different to what preceded it. Essentially, where Web 1.0 was about pushed content, and a 'sticky internet' where users could change very little, the evolution of the web into Web 2.0 has been viewed as epitomising the power of participation, and arguably, it is also about the democratisation of the internet.

So how does Learning 2.0 fit into this landscape? In order to deconstruct Learning 2.0 - Stephen Downes was the first to coin the phrase eLearning 2.0 - we first need to decide what we mean by Learning 1.0. For me, Learning 1.0 (if there ever was such a thing and it can be equated to Web 1.0) represents a relatively passive individual learning mode where expert generated content is pushed at the learner. It represents a top-down, hierarchical delivery of content (and content really is king in this mode), which ideally demands specific (observable) behaviours from the learner that can be measured and assessed objectively. Behaviourism and Cognitivism are theories that could comfortably be applied to describe the activities seen within a Learning 1.0 scenario. Bloom's taxonomy is also a framework that might be applied to underpin and explain the levels of activity that would ensue from Learning 1.0 type activities. It is reminiscent of the 1980s Computer Assisted Learning model, where learners sat at a computer, received linear sequences of content, responded to it by answering multiple choice questions, and were presented with remedial loops or 'relearning' when they failed to reach the required standard of understanding.
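
That Computer Assisted Learning model is simple enough to sketch as code: linear content, a multiple-choice check, and a remedial loop on failure. The lesson content and learner answers below are invented for illustration:

```python
# A minimal sketch of the 1980s CAL loop described above: the learner
# answers a multiple-choice question and is 'relearned' (looped back)
# until the expected response is produced. Invented content only.

lesson = {
    "content":  "The web links documents via hyperlinks.",
    "question": "What links documents on the web?",
    "options":  ["hyperlinks", "folders", "printers"],
    "answer":   "hyperlinks",
}

def run_lesson(lesson, responses):
    """Check the learner's successive answers; remediate (loop) until
    correct. `responses` is an iterator of the learner's answers.
    Returns the number of attempts taken."""
    attempts = 0
    for answer in responses:
        attempts += 1
        if answer == lesson["answer"]:
            break  # required standard reached
    return attempts

# A learner who fails once, repeats the loop, then succeeds:
print(run_lesson(lesson, iter(["folders", "hyperlinks"])))  # 2
```

The contrast with what follows is the point: here the machine holds all the content and the only variable is whether the learner reproduces the expected behaviour.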

By contrast, Learning 2.0 is recognised by more active and participatory modes of learning, and these are rarely isolated learning activities. As Web 2.0 has evolved, we have seen an increasing amount of interactive content becoming available. This content is generated not only by the experts, but also increasingly by the learners themselves, and tends to be organised by the community rather than by the experts. It is not a hierarchy and it does not obey top-down rules, but is more likely to be a heterarchy. The emergent properties of content organisation are folksonomies, the product of loose organisation that is bottom-up rather than top-down. One of the best theories to describe how learning is organised in Web 2.0 environments is social constructivism, because learners increasingly rely on social interaction, and appropriate tools to mediate dialogue. Collaborative, shared online learning spaces such as wikis and discussion forums are characteristic meeting places where content can be created and shared, and the community also organises and moderates this content using specialised services such as aggregation, curation and tagging tools.
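
The bottom-up emergence of a folksonomy can be illustrated very simply: nobody designs the categories in advance; they emerge as individual, uncoordinated tagging acts accumulate. The learners, resource and tags below are invented examples:

```python
from collections import Counter

# Individual, uncoordinated tagging acts: (user, resource, tag)
tagging_acts = [
    ("learner1", "photo_of_louvre", "paris"),
    ("learner2", "photo_of_louvre", "museum"),
    ("learner3", "photo_of_louvre", "paris"),
    ("learner4", "photo_of_louvre", "art"),
    ("learner5", "photo_of_louvre", "paris"),
]

def folksonomy(acts):
    """Aggregate per-resource tag counts: the community's implicit
    classification, built with no predefined taxonomy."""
    index = {}
    for _user, resource, tag in acts:
        index.setdefault(resource, Counter())[tag] += 1
    return index

print(folksonomy(tagging_acts)["photo_of_louvre"].most_common())
# [('paris', 3), ('museum', 1), ('art', 1)]
```

Contrast this with a taxonomy: there, an expert decides in advance that the photo belongs under 'European art museums'; here, 'paris' simply rises to the top because that is what most of the community said.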

When we talk about web versions, we inevitably travel down a road where significant step changes in the evolution of the web mark new ways of using it. If there really is a Web 1.0 and a Web 2.0, then we can expect eventually to see a Web 3.0, and can expect to see new forms of learning and social interaction advancing as a result. In my next blog post, I will try to describe what we can expect from Learning 3.0 using a similar explanatory framework.

Photo by Steve Wheeler

Creative Commons License
The architecture of learning by Steve Wheeler is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.