mLearning is the next big thing, and it keeps getting bigger. The mobile revolution swarms us with devices from every direction, and they truly are a medium for transferring content in a whole new way. We carry them around 24/7 and, as much as I hate to admit it, obsess about them to a certain degree. Delivering for such a medium requires a design overhaul: new screens and resolutions, a plethora of sensors, a shorter attention span driven by context, you name it. Admittedly, there is potential for revolutionizing the way people learn. There is, however, one not-so-obvious aspect of mobile devices: they have the precious capability of feeding back all kinds of information that can help us design better mLearning.
Knowing more about the people we design for will let us teach in ways the static PC never could. Mobile devices’ sensors can capture far more interesting data. For example:
- Location from where I’m performing a certain task
- Current weather conditions
- Time of day/How bright or dark it is
- Am I moving or not
- Is there an audio/video interaction
- Am I viewing content in landscape or portrait mode
- Am I using GPS or WiFi
- How am I handling the device, gesture-wise/How many fingers am I using
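To make this concrete, here is a minimal sketch, in Python for brevity, of what a logged sensor snapshot might look like once a mobile app has collected these readings. Every field name, and the lux threshold, is an illustrative assumption of mine, not part of any real mobile SDK:

```python
from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    """Hypothetical bundle of sensor readings a mobile learning app could log."""
    latitude: float
    longitude: float
    lux: float              # ambient light level
    moving: bool            # derived from the accelerometer
    orientation: str        # "portrait" or "landscape"
    touch_points: int       # fingers currently on the screen

    def is_dark(self, threshold_lux: float = 50.0) -> bool:
        # Rough check for a dim environment (e.g. to switch to a dark theme);
        # 50 lux is a guessed cutoff, not a calibrated value.
        return self.lux < threshold_lux

# A learner reading one-handed in a dim room:
snap = ContextSnapshot(42.7, 23.3, lux=12.0, moving=True,
                       orientation="portrait", touch_points=1)
```

A design layer could then branch on snapshots like this one, rather than on raw sensor streams.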
All in all, a far richer set of trackable data than the standard laptop or desktop offers. Add to that the possible interaction between two or more devices, say a trainer and a group of trainees, and you start bursting with ideas. So what practical designs can we come up with? What are the real-life applications? Why not:
- Track body language for presentation-skills training. Track tone of voice and feed back suggestions like: “You sound monotonous – tap here to listen to a couple of great speakers, then compare their voice graph to yours.”
- Track body posture for an office safety training.
- Track pitch, volume, and other audio data to help call-center employees analyze and improve their performance.
- Track reaction time and abruptness for people taking driving lessons. Combine motion and weather data to suggest behavior like: “Slow down – you are over the speed limit on a wet and slippery road.”
- Track ‘point and shoot’ data for augmented reality simulations.
- Track impact for sports.
- Track interaction for performing arts. Even pair multiple devices to choreograph space/time/location variables.
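The “you sound monotonous” idea above can be prototyped with very little code once pitch samples are available. A minimal sketch, assuming a hypothetical upstream pitch tracker has already turned microphone audio into a list of pitch values in Hz; the 10 Hz threshold is a made-up number, not a calibrated one:

```python
from statistics import pstdev

def monotony_score(pitch_hz: list[float]) -> float:
    """Population standard deviation of pitch samples.

    Low variation suggests a flat, monotonous delivery. The samples would
    come from the phone's microphone via a pitch-tracking library; here we
    only analyze the numbers.
    """
    return pstdev(pitch_hz)

def sounds_monotonous(pitch_hz: list[float], threshold: float = 10.0) -> bool:
    # Illustrative cutoff: below ~10 Hz of pitch variation, nudge the speaker.
    return monotony_score(pitch_hz) < threshold
```

A training app could run this over each minute of a rehearsal and trigger the “tap here to compare” prompt only when the score stays low.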
- Track current location:
  - Are you at the office, at home, in the park, on a bus? This can hint at your attention span and let the training adjust dynamically: “We are detecting too much noise and would recommend that you take this part of the training later, in a more appropriate environment. Would you like to go to another module?”
  - You are in a fire safety drill – the training offers to guide you to the closest escape route.
  - You are about to meet a client and open the product training beforehand. Based on your location and the time, it suggests a quick refresher of the top highlights.
  - You are looking at your induction training – it includes a map of points of interest around the office: places to eat, shop, and chill out. The device can lead you to the pizza place your colleagues visit most often, or direct you to the restaurant where the food is healthier and fellow employees enjoy a 20% discount.
  - Devices have tracked that 15 of your employees go to the health club around the corner, so it makes sense to negotiate a discount for anyone from your company.
- Track if you are alone or with a colleague and initiate an interaction.
- Track proximity and warn if you are looking at the device from too close, or adjust the font size dynamically.
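Several of the location-and-noise ideas above boil down to a simple decision rule. Here is a hedged sketch of one such rule; the dB thresholds and the returned action labels are entirely illustrative choices of mine, not research-backed values:

```python
def recommend_action(noise_db: float, at_office: bool) -> str:
    """Pick a training adaptation from ambient noise level and location type.

    Returns one of three illustrative action labels a learning app might
    map to actual UI behavior.
    """
    if noise_db > 70.0:
        return "postpone"           # too loud to focus anywhere
    if noise_db > 55.0 and not at_office:
        return "suggest-audio-off"  # noisy public place: offer a text-only mode
    return "continue"               # quiet enough to proceed as designed
```

The same pattern extends naturally to more inputs: add brightness, motion, or time of day as parameters and grow the rule table.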
I could go on and on…
mLearning should not be all about sowing; it should be about reaping too. Reporting, tracking, and comparing data in an LMS could happen on a whole new level and yield valuable insight. Mobile devices should be used to channel additional analytics back into the design process and help it leapfrog forward. It is a brave new world of sharing. Why not ride the wave?
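One concrete way to “reap” is to report sensor-enriched events to the LMS as xAPI (Tin Can) statements. A minimal sketch of building such a statement follows; the extension URI and activity ID are placeholders I made up, and a real deployment would define its own vocabulary and actually POST the statement to a Learning Record Store:

```python
def build_statement(email: str, activity_id: str, noise_db: float) -> dict:
    """Build a minimal xAPI statement carrying a sensor reading.

    The actor/verb/object triple follows the xAPI statement structure;
    the ambient-noise context extension is a hypothetical addition.
    """
    return {
        "actor": {"mbox": f"mailto:{email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/experienced",
            "display": {"en-US": "experienced"},
        },
        "object": {"id": activity_id},
        "context": {
            "extensions": {
                # Made-up extension URI for this sketch:
                "http://example.com/xapi/ambient-noise-db": noise_db
            }
        },
    }

stmt = build_statement("learner@example.com",
                       "http://example.com/training/fire-safety", 48.5)
```

Because the sensor reading travels inside the statement, the LMS can later correlate completion rates with context, exactly the kind of feedback loop that improves the next design.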
What applications would you come up with?