Larry MacPhee: e-Learning

06.24.2013: The Technology Adoption Curve

The concept of a technology adoption curve was first described in a study of farmers' willingness to try new agricultural methods, but it applies quite well to technology in general. I like to try new things, but I won't promote a new technology just because it's cool. It has to fill an unmet need and be easy enough to use that most people can manage it. Therefore I generally try to stay just on the right-hand side of the "chasm." That's not always the case, though. Although I have followed their development with great interest, I only recently got a smartphone. With 56% of Americans now owning smartphones (I'm sure that number skews young), that puts me squarely in the majority, and it illustrates that being an innovator with one technology doesn't mean you can't be more cautious with another. That's ok. Until recently, I had no need to own a smartphone and, now that I own one, I'd still characterize "need" as a stretch ;-)

Where are you?

Are you a technology innovator, do you "go with the flow", or do you take a "wait and see" approach?

Back in March, I wrote about the hype cycle. Although I didn't think about it at the time, can you see how the chasm in the upper graph is connected to the trough in the hype cycle? Overlay these two graphs and the connection is clear. While a technology can show great promise and generate excitement among early enthusiasts, it may never catch on with the general public. Often this is because of some limitation in capability or ease of use that the enthusiasts don't mind but that the general public would not tolerate. That's the chasm. If the chasm isn't crossed, the technology never reaches that eventual productivity plateau; instead it just dies out or remains a hobby for a small group of technophiles. Linux is a great example of a promising technology that hasn't crossed the chasm. Enthusiasts are, however, the key to the spread of new technologies. Seth Godin makes this point well when he talks about marketing to the people who care. So let's think about where we are, because it's useful to know ourselves. If I say the word "Blackboard" or "iPad" or "clicker" or "3-D printer" or "Arduino" or "smartphone" or "Twitter" or "Facebook," where do you fall on the curve? Now think about your colleagues and where they are relative to you. Are you always in the same part of the curve or do you jump around? Have you learned something about yourself through this little exercise? If you decide not to adopt some new technology that everyone is talking about, does that make you a laggard? Not necessarily. It could be that the technology in question is heavily hyped right now but will not last. There is an implied slur in calling people "laggards" that I don't like. Not every new technology is a good thing, nor will it last. If it's not, or if it doesn't--something we'll only know in retrospect--then you were right not to jump on the bandwagon. Nobody talks much about Second Life anymore and, if you missed it, you didn't miss much.
If you never bought a Palm or Windows Mobile PDA, good for you! You saved a bunch of money on a near-worthless device. And what if you're an innovator? Do you stick with a technology once everyone is using it? Or does that take all the fun out of it? If you were on Facebook when nobody had heard of it, are you still there today? There's a saying that "good pioneers make bad settlers." Pioneers don't like crowds, and they are always moving to the new frontier. I'm not one, but I appreciate them. They work hard and explore a lot of places that don't lead anywhere useful. But when they make a real discovery, the rest of us get to enjoy it without all the effort ;-)

The "hype cycle." Overlay this graph on the tech adoption curve above.

06.17.2013: Time to change my password? Oh, just shoot me now!
03.27.2013: Technology is the answer. What was the question?
The Hype Cycle

The Hype Cycle: Map your favorite educational technology.

How many times have you heard that some emerging technology is going to solve all of education's woes? In my experience, a technical innovation may allow the job to be done faster, cheaper, or better than before, but rarely, if ever, all three. If you're lucky, you get to pick two! If you're thinking about implementing some new technology that everyone is talking about, it's important to step back and consider its position on the "hype cycle" graph. Google Glass, for example, is just past the trigger point, and visibility is still increasing. MOOCs are at the peak of inflated expectations right now. But does anyone remember Second Life? Once heralded as "the next big thing," it has slid into the trough of disillusionment. Take Second Life out of your resumé, people. It's not doing you any favors. Speech recognition, long ridiculed, is finally climbing out of the trough and up the slope towards a more realistic "plateau of productivity." While still not practical for most uses, it fills a niche for users with repetitive stress injuries that make using the mouse and keyboard painful. Used as intended, with a realistic appreciation for what it can and can't do, technology can be highly effective. But misapplied, technology can make a real mess of things. As the old saying goes, "To err is human. To really screw up, you need a computer." One of the debates that rages in my office relates to what to teach people about a new technology. We want them to get excited about new technologies, as we are, and to be adventurous in their teaching. Often, however, people with inflated expectations come to us only wanting to know how some new technology will make their job easier, and they get frustrated when we ask them why they want to use it (what problem are they trying to solve?) or try to explain that there are limitations. They don't want to hear that it won't re-energize their lectures or that it might require just as much effort as what they are doing now.
Let's look at a few examples of useful technologies misapplied, and you'll see what I mean.

Technology: Video
Misuse: Instructor shows a full-length movie to class in order to take a day off from lecture, catch up on grading, etc.
Proper use: Instructor shows a series of relevant video clips, each followed up with insightful questions and guided discussion to engage the class in critical thinking.

Technology: PowerPoint (two ways to wreck a presentation)
Misuse #1: PowerPoint presentation is viewed in absence of the presenter, but the bullet points are vague or meaningless without the emphasis and interpretation of the speaker. (Did they think the presenter had nothing of value to add?)
Misuse #2: Speaker, facing away from the audience, reads paragraphs of text from each projected PowerPoint slide, adding nothing of relevance. (Did they think the audience can't read?)
Proper use: Presenter uses prompts on the slides to make key points to the audience, to jog the memory, and to engage the audience in a lively and only loosely scripted discussion.

Technology: SafeAssign or TurnItIn
Misuse: Instructor uses tool to fail students for unintentional plagiarism.
Proper use: Instructor uses tool to show students how to properly reference the source materials they cite.

Technology: Clickers
Misuse: Rather than make the teaching more engaging, instructor uses clickers to enforce mandatory attendance policy.
Proper use: Instructor uses tool to assess comprehension, engage students, and deepen their understanding with challenging questions and analysis of why they think what they do.

Your assignment: Expand my table with more examples. Begin with the LMS, Facebook, eBooks, MOOCs, and iPads. All great tools. But are they being used as they should?

03.06.2013: After bad stuff happens :-(
02.28.2013: Latest Reports from the LMS Battleground
10.10.2012 The wave of change that's about to hit higher education

Higher Ed Goes Digital

Big changes are coming to the hallowed halls of higher education. The cost of a four-year degree continues to rise for reasons that might surprise you and, because state funding for education continues to decline, the consumer is left paying an increasing share of the bill. Administrators, who feel pinched to keep doing more with less and to keep a lid on costs, are pushing for increased class sizes, for more classes taught by part-time instructors, for more online classes, and for the adoption of technologies that automate instruction or reduce the teaching effort per instructor, allowing each one to do more. If we step back and look at the big picture, where is all this headed? As a result of these coming changes, the tenure-track faculty member who teaches for a living is, by my reading of the situation, an endangered species, and the state-funded, primarily undergraduate university isn't much better off. Don't believe me? Ask any department chair at any public undergraduate institution what happens when a tenured professor (one whose primary responsibility is teaching, not research) retires. While enrollment is growing like crazy (because "college is for everyone"), experienced full-time faculty are being replaced, if at all, by much cheaper and often less qualified part-time instructors. It's happening because technology has been identified as a method for regularizing and further automating undergraduate instruction. Undergraduate university teaching is the delivery of specialized, but fairly standard, information to a large market of adults, for a high price. (K-12 is safe for the moment because teachers not only impart knowledge but also serve as workday babysitters for their young charges.) Sure, experts are still necessary to develop the standardized lessons and content for higher education but, once that's done, it can all be deployed on a massive scale and managed by less qualified people.
(Well, that's the argument I hear from upper administration anyway. Whether a less qualified instructor can as effectively grasp and deliver that content is another question, but it's a tradeoff administrators seem able to live with.)

Since the market is large and the price is high, there will be lots of competition for students. With instruction going online, students will no longer be place-bound, and course capacities will no longer be dictated by the size of the classroom. In the very near future, students will be able to get an online degree in most subjects from anywhere they choose. Some universities are even racing to grant degrees in personalized learning programs where students can shorten their course of study by "testing out" of classes in which they have "life experience!" (I hope the testing is rigorous and occurs in a proctored environment with ID checks!) When future students are choosing where to go for their online degree, why would they choose your institution? If you don't have a good answer, you'll be in trouble. This change will be highly disruptive. Ask yourself this. What happened to the local video rental stores like Blockbuster when Netflix came along? What happened to the local music shops after iTunes? What happened to the local newspapers after Craigslist became the place for classified ads? What happened to all the independent used bookstores and even the big chain bookstores like Barnes and Noble now that Amazon sells more digital books than paper ones? All of these digital information delivery services replaced their analog counterparts in a very short period of time. With high quality content and lessons coming from the big publishers, written by pedagogical and subject area experts and tailored for the web by skilled graphic designers, the courses developed independently by most professors don't compare favorably. Brick and mortar universities teaching traditionally will be like the small quirky independent bookshops competing against Amazon's vastly greater selection of cheaper content. Most of them will fold. What will happen to all those beautiful campuses and the college towns that depended on them?
When the undergrad degree goes digital, there will be only a few winners and they will win big. There will also be many losers, as venerable local institutions see in-person enrollment decline and poorly implemented online programs fail to attract and/or retain students. Universities that conduct research and have graduate programs will be less affected, and the private ivy league institutions will continue to do fine by offering an expensive top-notch traditional education to a niche market, but the community colleges and primarily undergraduate institutions competing on price and who can't differentiate themselves will mostly go the way of the Blockbuster Video stores.

Which organization that you haven't heard of yet will be the Amazon or the iTunes of higher education? Will it be a big publisher like Pearson, or a for-profit online institution like University of Phoenix or Capella? Will it be a currently free option like Coursera, EdX, Udacity, or the Khan Academy? Will it be a highly regarded traditional institution like Stanford or MIT? Or will it be a small regional university like NAU, already accredited and experienced in online delivery to its rural population, that gets it right? It's too early to tell. But there are ways to prosper in this new era. Courses from the for-profits are still generally pretty bad, and the selection from the free services is limited, so there's a window of opportunity for some new leaders to emerge. And while Massive Open Online Courses (MOOCs) are currently getting a lot of attention, they require a level of self-motivation and organization rarely found in our undergraduates. Build better service into our online programs, with better instructors, more courses of study, content that beats the standard "canned" material, and more personal touch, and we can beat the competition, create more value for the dollar, grow enrollment, and enhance our reputation as a quality online degree granting institution. That will take time and hard work, and it will take a new kind of instructor who knows technology and pedagogy as well as the subject area. And it won't be any cheaper, to the chagrin of those who think that waving some technology pixie dust over the problem will make it all better. But change is coming and academia, steeped in tradition and rife with bureaucracy, is not very good at change, so it's going to be a shock. Are you preparing for the giant wave of change that's about to crash on traditional higher education? Because you can just sit there and get crushed by it, or you can start paddling for your life and ride it into the future!

04.03.2012 Blackboard Embraces Open Source...like a Boa constrictor
04.01.2012 What Google and Facebook have in common
03.25.2012 Message to the eContent providers
03.20.2012 Textbooks of the Near Future
01.06.2012 Are we putting the technology cart before the instructional horse?
01.03.2012 Unintended Consequences.
10.17.2011 Quality Matters?

Recently NAU was approached by an organization called "Quality Matters" and invited to become a member. While they are a non-profit, that does not mean they are free. Annual membership dues are required, and the implication is pretty clear. If you say you're not interested, you must not care about quality, right? People pay to be trained as reviewers. People also pay to have their courses reviewed, and they pay to receive the QM seal of approval. Based on the success of this operation, QM could easily spin off some other ventures such as "Motherhood and Apple Pie Matter" or "Patriotism Matters." Their heart is, to be fair, in the right place. The purpose of this organization is to identify things that make for a quality online course, and to use a faculty peer review process to evaluate and certify these courses. This movement wouldn't even exist if there weren't some valid questions about the quality of online courses nationally, and if schools weren't feeling a little defensive about their online programs. I do, however, have some issues with their approach. My first issue is that the focus is on courses delivered online. Their scope does not include courses taught in a traditional manner, and I think we can all agree that some of those must be equally bad or worse! While I'd like to level the playing field and look at all courses, it's maybe a bit unfair to criticize QM for what they don't review. So let's look at what they do review. We will leave aside for now whether NAU should cede its authority over the evaluation of course quality to a body outside the university, and over which we have no control, because the question of who's watching the watchers could be the subject of an entirely different discussion. My biggest remaining issue with the "QM Program" is that online courses can be, arguably, broken down into three major components, and QM deals with only one.
A better name for Quality Matters might be "Let's Focus on One of Three Things that Matter!" In case you're inclined to disagree with me, here are my three components of quality in an online course: 1) Course Design: this is the way the course is structured, how it displays to the user in the online environment, and the instructional methods used, including the identification and measurement of learning outcomes. 2) Course Content: this includes the selection of appropriate materials and the accuracy and depth of those materials. 3) Course Delivery: this includes all of the interactions between instructor and student, and among students. The QM program deals only with Course Design. I'm not saying that design doesn't matter. I'm pretty convinced that it does. Without good design, it's going to be difficult to get out of the starting blocks. But I think I'd like more than one of the three reviewers of my online course to be a "subject matter expert," and I don't think it makes much sense to slap a seal of approval on a course unless the content and delivery have also been reviewed thoroughly. I have seen the disastrous results that occur when you give great materials to a poor instructor. I have also seen the tragic consequences when you combine a dynamic and motivating instructor with materials that are inappropriate for the students, either because the materials are not challenging enough, are out of date or otherwise inaccurate, or are too challenging because the students do not have the necessary background preparation. What I'd really like to see is a peer review program that looks at all of the aspects of course quality described above, one owned by our own faculty rather than by an outside organization. But I think I see the writing on the wall. If we don't start policing ourselves, it may not be too long before someone else is doing it for us.

09.29.2011 The "do-over" mentality in undergraduate education

True story. I have a faculty colleague who had a formal complaint filed against him by one of his students for "discriminating against me on the basis of my intelligence." (The "discrimination" was giving the student a lower grade than some of his classmates, based on the student's relatively poor performance on various assessments.) When the professor agreed that this was true, the student became even more convinced that he had a case! I think this raises an interesting point, because the professor in question was using an "old" way of thinking, while the student was using a more modern construction.

When I was in college back in the '80s, I'm not sure there was such a thing as dropping a class. At least, if there was, I never did, and I never knew anyone who did, so it was neither common practice nor a well-advertised option. It just never occurred to me that one could do that. The concept of re-taking a class a second or third time to replace the original bad grade was also completely foreign. When I got the occasional grade that I was unhappy with, I owned it, and there was nothing I could do about it. It was there on my transcript for all to see, like a tenacious piece of gum on the bottom of my shoe. Today, most students would just throw away the shoes and buy a new pair. In my job at the university, we care about student success and we want everyone to get a good grade. We go to greater lengths every year to accomplish this goal, giving students more choice and more flexibility, and we intervene more than ever before to work with students who are struggling. All of this is good, I think. But we rarely think about why this is the goal. Not trying to be cynical here, but let's just step back for a minute and ask ourselves: "Isn't the point of grading students, in large part, to identify (optimistically) which ones have learned or, (pragmatically) which ones have successfully completed the assignments, or (cynically) which ones have successfully jumped through the hoops?"

Question: Is our goal to get everyone over the bar, no matter what it takes, or just to provide everyone an equal opportunity to get over the bar and then report the results? The bar I refer to here, of course, is "learning," even if measuring that intangible substance requires cruder instruments like tests and other assessments. If everyone gets unlimited chances to get an A (assuming here that letter grade correlates with learning achieved, so you can substitute A with "learned a lot" and F with "didn't learn a thing") by the process of do-overs, remedial work, tutoring sessions, interventions, etc., then aren't we artificially leveling the playing field? Aren't we devaluing the A earned with hard work and without extra credit? Would you rather be seen by the doctor who got an A in Biology the first time through without any outside help, or the one who was failing the course and dropped it, took it again and got a D, found an easier instructor and took the course a third time, got a B- and, with a bunch of intervention, tutoring, and extra credit, got the B- rounded up to an A, which replaced the D on the transcript? I suppose that student has perseverance at least! Of course, there's an old joke: What do you call the medical student who graduated at the absolute bottom of his class? Doctor! Hah :-)

Why has it come to this, how has it come to this, and is this where we want to be? And, if not, how do we get someplace else? I think part of the reason we have arrived at this point is that so many more kids are going to college. College really is the new high school. Michael Wesch, whom I admire and mostly agree with, says "College is for everyone." True. Certainly part of the problem, though, is that if everyone is being admitted, more students are arriving unprepared. More students are here not because they want to be, but because they feel compelled to be so that they will be competitive for a job at the other end. This also explains the impatience of many of our students, who don't really love to learn or want to broaden their minds. They "just want a job, ok, and could you please show me the fastest way out of here?" I'm sympathetic. Who wants to spend $40,000 (minimum) for a bachelor's degree that still can't guarantee them a job? And certainly part of the problem is that universities love all the extra money that's coming in, but feel a twinge of guilt when those students who aren't prepared don't succeed. Legislators and administrators, who hear from the howling parents who pay the bills of these mediocre students, put pressure on faculty to do better. By "better," they mean graduate more students faster with better grades and with less funding. If we rule out the easy way (just lowering standards), and take the challenge to "do better" seriously, what's left?

Solutions: 1) Placement. Students should not be admitted to the university if they are not capable of succeeding, and students should not be allowed into courses for which they have a high probability of failure. We can pretty accurately predict success with placement tests, and we need to do this more. 2) Remediation. If students arrive without the skills but it is possible to teach them those skills, they need bridging courses to get them there. 3) Academic probation and dismissal. Students who are not succeeding, and who are not likely to turn it around, should not be strung along. 4) Monitoring. Technology can be used to monitor student progress so that intervention occurs quickly, before students spiral downward. We do all of these things now. We just need to do them more, and better. But the following are not generally addressed at all. 5) Instruction. Most faculty arrive with good content area knowledge but limited teaching experience or know-how. This can be addressed, but it would take a mind shift for the university to accept that this is a problem. 6) Compensation. Little attention is paid to the quality of instruction. Typically, only instructors with high D/F/W (drop, fail, withdraw) rates get any attention from administration, and this negative attention can easily be avoided by lowering standards and giving lots of As. But standardized tests, with all their flaws, can measure what students know coming in and going out, and be used to reward instructors who show the gains. Will this lead to "teaching to the test?" Possibly. But if the test is good, that's not the worst problem to have. 7) Peer review. Research faculty know all about peer review. It's how they get articles published in good journals. But in the classroom, instruction is siloed. Nobody watches anyone else teach or gives them any tips on how to do it better. Sure, there's muttering in the hallways about which instructors are too easy, or just plain bad, but nothing gets done about it.
This could be fixed if there were the will to do it, but again, it would require a major shift in faculty culture. 8) Reporting. Something I've never heard mentioned anywhere is that universities really ought to report not just the grade a student receives, but how long it took the student to get there, and by what path. We have this data. We could put some of the rigor back into transcripts that are packed with As by reporting the information employers want to know: How much external time and effort was expended to get this student over the bar? 9) Tracks. I know it's sacrilege but, while college is for everyone, the liberal studies degree is not. Universities need to rethink degree granting with an eye towards certificates and diplomas that lead directly to a career path. Want to be a salesman, a dental hygienist or an x-ray technician, or a database programmer, a forest ranger or a cop? Sure, a bachelor's would be helpful, but it's probably not something you "need." Want to be an astrophysicist, a historian, or a philosopher? Ok, get the bachelor's. But here's something else we should tell incoming freshmen and rarely do. If you get the bachelor's, you probably don't need to come back to school when you change careers, as most of us do these days. With the certificates, you probably do.


12.12.2010 Why going "TSA" on web classes just won't work
06.14.2010 What Google should do next
06.11.2010 Why NAU's Mobile Computing Policy needs rethinking
05.05.2010 College is for Everyone, so Attendance is Mandatory!
04.20.2010 LMS Decisions
04.12.2010 The Hacker-Hipster Manifesto
02.19.2010 What is up with Google lately?
01.22.2010 Working and Learning through Snow Days, Swine Flu and Other Disasters
01.04.2010 Clickers: Treating the symptoms or the disease?
12.20.2009 Spreading the FUD
10.14.2009 NAU adopts MS Exchange; increase in productivity negligible
10.02.2009 How to get attention in Academia
10.01.2009 Universal Design
09.30.2009 Should NAU site license the MacOS as well as Windows?
09.01.2009 Marketshare change among LMSes over time
05.26.2009 Mac Growth in Higher Ed
05.21.2009 Microsoft on the move?
04.15.2009 Free and Open Source Software in the Enterprise
Why Computing Monopolies are Bad
How fast is your network connection?
Data Visualization
Mossberg puts his finger on it, and his foot in it.
Why can't Microsoft get it right?
The truth about telecommuting
Blackboard's Scholar
Learning Spaces
Podcasting with iTunesU
Gaming on Campus
©2007-2013 Larry MacPhee | IM: lmac@mac.com | Skype: larryrmacphee | google: larry.macphee | 928-523-9406