04/3/14

Using SNAPP in OSX

In recent months, many folks (myself included) have been frustrated by a sudden incompatibility between SNAPP and the latest versions of popular browsers and Java. SNAPP (Social Networks Adapting Pedagogical Practice) is a highly useful tool for visualizing discussion board activity in Blackboard, Moodle, and Sakai. I have wanted to use it, and to recommend it to others. With this workaround, I can do both.

I should note up front that I have only successfully implemented this workaround on my own desktop environment, and have not tested it on other platforms or with other versions. If it works for you under other conditions, or if you find more versatile workarounds, please post to the comments.

The environment in which I have been able to successfully launch SNAPP is as follows:

Under these conditions, getting SNAPP to work is actually…well, a snap!

1. In the Java Control Panel (System Preferences –> Java), go to the Security tab and edit the Exception Site List to include both the code source and the base URL from which you expect to launch the SNAPP applet; the resulting entries are sketched below, after step 3. (If you point to http://www.snappvis.org, then Java will yell at you for not using https. You’ll have to accept, or else host the code in some other more secure location. So it goes.)

[Screenshot: SNAPP Java fix]

2. Click OK to commit the changes.

3. Hurray!!!
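
For reference, here is roughly what the two Exception Site List entries look like once added. Java keeps them, as far as I can tell, in a plain-text exception.sites file under its deployment security folder, though the exact location will vary with your Java version; the Moodle address below is a placeholder for your own LMS base URL.

    http://www.snappvis.org
    https://moodle.example.edu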

In addition to the primary website for the SNAPP tool (http://www.snappvis.org), Sandeep Jayaprakash from Marist College has provided some excellent documentation on installing SNAPP to work on a local machine. It is well worth checking out: https://confluence.sakaiproject.org/pages/viewpage.action?pageId=84902193

10/30/13

Predictive Analytics as ‘Haute Cuisine’

Sander Latour recently wrote a brief article in which he reflected upon his experience at EDUCAUSE 2013 and, in particular, on the ways in which learning analytics was represented at the conference.[1] In this article, he argues that the ways in which predictive analytics are currently employed in higher ed share several characteristics with fast food: cheap and readily digestible, but perhaps of little long-term nutritive value. While I agree with Latour’s conclusion that “Learning Analytics should be about learning, and learning is what we should aim for,” I would like to both temper and strengthen his argument by clarifying the distinction between learning and predictive analytics.

I have several significant reservations about the definition of learning analytics posited by SoLAR, but it is frequently cited as authoritative and will do for the purpose of this discussion:

“SoLAR defines learning analytics as the measurement, collection, analysis and reporting of data about learners and their contexts, for the purposes of understanding and optimizing learning and the environments in which it occurs” [2]

Learning analytics, then, is a general term referring to any use of data that seeks to facilitate learning (I disapprove of the language of ‘optimization’ here, which presumes a very narrow conception of learning as something that can be evaluated with respect to a known standard. Although this may be possible in the case of learning conceived in terms of information transfer, or perhaps skill mastery, it fails to appreciate that there are other types of learning that are not so easily measured or evaluated . . . but I digress). Predictive analytics, on the other hand, makes use of inferential statistics and/or machine learning methods in order to make probabilistic determinations about future performance on the basis of past behavior and other dispositional characteristics. Because of the generality of the term ‘learning analytics,’ there will definitely be some situations in which predictive and learning analytics overlap. If we conceive of learning as skill mastery (perhaps operationalized as something like having answered a specific number of questions correctly on a set number of consecutive quizzes), then it may be possible to strongly correlate something like LMS activity with learning rate. There are, however, applications of predictive analytics that are not, strictly speaking, concerned with learning, but rather are better described as academic analytics (i.e. business intelligence specifically applied to educational contexts).

My favorite example of predictive analytics comes from Georgia State University, where Timothy M. Renick discovered that high-performing students with even small unpaid bills the day before the payment deadline were at an increased risk of attrition. Consequently, GSU offered nearly 200 mini ‘Panther Grants’ to students who were dropped for nonpayment. The grants successfully nudged these students over the payment hump, thereby increasing the school’s retention rates and generating more than $660,000 in tuition and fee revenue that would otherwise have been lost. In speaking with Dr. Renick in person about the project, I was pleased to learn that their predictive analytics initiative also included giving every academic advisor a second monitor, so that at-risk students had an opportunity to review their behaviors and collaborate with advisors in determining a plan of action that would increase their chances of success at the institution (i.e. retention, minimum grade point achievement, vocational aspirations, and satisfaction).[3]

Predictive analytics, then, amounts to arriving at probabilistic expectations about future performance on the basis of past behavior (this, of course, raises the question of whether learning is a behavior, or whether behaviors merely function as an indirect way of quantifying learning, a question that is probably intractable). Is Purdue’s Course Signals (or any of the similar products, like Blackboard’s new Retention Center) a predictive analytics product? Latour treats Course Signals as exemplary in his criticism of predictive analytics. This is not, however, an accurate characterization, despite the fact that it and other similar products are frequently treated as if they had some kind of predictive power (such products, in fact, encourage this kind of mischaracterization through the use of labels like ‘on track’ and ‘at risk’). These kinds of dashboards are not predictive, because they are not probabilistic. They employ neither inferential statistics nor machine learning methods. Rather, they are dumb indicators that merely report interaction frequencies (login attempts, access to materials, grade performance, etc.) and produce alerts if a particular student’s behavior deviates beyond a pre-determined (and usually arbitrary) percentage from the class average (note that these dashboards do not even employ measures of dispersion to check whether a particular student’s performance differs significantly from the mean). These products are easy to produce, and so relatively inexpensive, but I agree with Latour that they are not particularly interesting. I would add, however, that they are also potentially dangerous, since they make implicit claims to predictive power that are illegitimate, but may nevertheless be taken seriously by instructors and students alike.
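
To make the contrast concrete, here is a minimal sketch of the kind of threshold rule these dashboards rely on, set beside a flag that at least accounts for dispersion. The login counts and the 25% threshold are invented for illustration, and this is not any vendor’s actual logic; note, too, that even the second rule is still descriptive rather than predictive, since neither says anything probabilistic about a future outcome.

    from statistics import mean, stdev

    # Invented weekly LMS login counts for a (widely dispersed) class.
    class_logins = [60, 10, 45, 20, 55, 15, 35, 40]
    student = 20                               # the student being evaluated

    avg = mean(class_logins)                   # 35.0
    sd = stdev(class_logins)                   # ~18.5

    # Typical dashboard rule: alert if more than 25% below the class average.
    dashboard_alert = student < avg * 0.75     # True: 20 < 26.25

    # Dispersion-aware rule: alert only if more than two standard deviations below the mean.
    z = (student - avg) / sd                   # ~ -0.81
    statistical_alert = z < -2                 # False: unremarkable in this class

    print(dashboard_alert, statistical_alert)  # True False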

In a recent meeting with representatives of a large LMS firm, it was mentioned that a review of data from one institution revealed no significant difference in performance (grade point or retention) among students entering with high grades, regardless of their level of engagement. In other words, among students entering a course with high grades, their level of engagement within the online learning environment had no significant impact on their final performance in the course. With ‘fast-food’ analytics products, high-performing students with low levels of engagement may quite possibly be flagged as at risk, at the same time as their low levels of engagement lower the class average in a way that makes unengaged low-performers more difficult to detect. A truly predictive model (predictive analytics as ‘haute cuisine,’ so to speak) would easily deal with these differences.
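
The masking effect is just as easy to illustrate with a toy example, using the same invented ‘percent below the class average’ rule as in the sketch above: add a few disengaged high-performers to the roster and the rule starts flagging them (false positives), while the genuinely struggling student slips below its cutoff.

    from statistics import mean

    def flagged(count, counts, threshold=0.25):
        """Alert if a login count is more than 25% below the class average."""
        return count < mean(counts) * (1 - threshold)

    struggling_student = 18                              # low engagement AND low grades

    engaged_class = [40, 45, 50, 38, 18]                 # mean 38.2, cutoff ~28.7
    print(flagged(struggling_student, engaged_class))    # True: correctly flagged

    # Add five high-performing but disengaged students (few logins, strong grades).
    mixed_class = engaged_class + [6, 8, 5, 10, 7]       # mean 22.7, cutoff ~17.0
    print(flagged(struggling_student, mixed_class))      # False: now invisible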

Predictive analytics are not all ‘fast food.’ In order for our predictive analytics to be valuable, however, we need to ensure that claims are actually predictive (rather than description masquerading in predictive clothing), and that we are clear about what exactly is being predicted (i.e. a measurable behavior). In the absence of these considerations, our analytics will be fast food, indeed.


10/28/13

‘Real Time’ versus ‘Student Time’


In a recent article for IBM Data Magazine, Tom Deutch made an important distinction between ‘Real Time’ and ‘Customer Time’. ‘Real time’ analytics, on the one hand, are updated moment by moment, responding within milliseconds of a trackable user behavior. What Deutch observes, however, is that real time analytics come at a cost that may be difficult to justify given a particular use case. I would add that real time analytics may, in many situations, decrease one’s ability to discern the signal from the noise, and so lead to insights that are not actually reflective of what the data are saying as a whole.

‘Customer time,’ on the other hand, gets at the fact that what is ACTUALLY important is not minimizing latency, but rather matching latency to a particular use case.  In other words, customer time recognizes that real time is in most cases unnecessary, and that what is actually important is the ability to produce insights in time to beat users to their next action or decision. Thinking in terms of customer time allows us to make smart investments from both the perspective of infrastructure and that of the skill-sets of personnel.

As always keep in mind a few last points of pragmatic guidance: keep things as simple as possible and be driven by fit-for-purpose principles. Do not try to jump into multiple new technologies at the same time, make sure to fully leverage those technologies you already own, and minimize data movement by instead bringing the compute tasks to where the data is.[1]

What, then, is ‘customer time’ within a learning environment? In other words, what is ‘student time’? In order to determine ‘student time,’ we must first decide what constitutes a meaningful activity (i.e. an activity that is significantly related to a student’s success). From the perspective of the instructor, is a meaningful student activity a successful login attempt? Access to an electronic resource? Class attendance? The completion of an assignment? How quickly must an instructor receive feedback about student performance in order to produce maximally beneficial interventions? The answers to these questions will likely differ from institution to institution, from discipline to discipline, and even from instructor to instructor. If individual instructors have the power to build out their analytic capabilities according to their individual needs, then reflecting on student time becomes an important pedagogical decision, related to what actions instructors themselves consider significant with respect to their quasi-idiosyncratic conceptions of student success. From the perspective of developing enterprise-level learning analytics capacity (through LMS-integrated tools like Blackboard Analytics for Learn, Desire2Learn Insights, etc.), however, decisions about latency must be made that appeal to the most common use case. Of course, the amount of time an instructor wishes to let pass between checks of their analytics is up to them, and may range anywhere from hourly to semesterly. But what is the minimum amount of latency that an institution of higher education should be prepared to support?

I am interested in what you think about the notion of ‘student time,’ and of the factors that lead to its determination, both pedagogically and institutionally. Let me know what you think in the comments below.


[1] Deutch, Tom. “Real Time Versus Customer Time: For big data, how fast is fast enough?” IBM Data Magazine. 16 Aug 2013

10/25/13

“Educational Data Mining and Learning Analytics”

This week, Ryan Baker posted a link to a piece, co-written with George Siemens, that is meant to function as an introduction to the fields of Educational Data Mining (EDM) and Learning Analytics (LA). “Educational Data Mining and Learning Analytics” is a book chapter primarily concerned with methods and tools, and it does an excellent job of summarizing some of the key similarities and differences between the two fields in this regard. However, in spite of the fact that the authors make a point of explicitly stating that EDM and LA are distinctly marked by an emphasis on making connections to educational theory and philosophy, the theoretical content of the piece is unfortunately quite sparse.

The tone of this work actually brings up some concerns that I have about EDM/LA as a whole. The authors observe that EDM and LA have been made possible, and have in fact been fueled, by (1) increases in technological capacity and (2) advances in business analytics that are readily adaptable to educational environments.

“The use of analytics in education has grown in recent years for four primary reasons: a substantial increase in data quantity, improved data formats, advances in computing, and increased sophistication of tools available for analytics”

The authors also make a point of highlighting the centrality of theory and philosophy in informing methods and interpretation.

“Both EDM and LA have a strong emphasis on connection to theory in the learning sciences and education philosophy…The theory-oriented perspective marks a departure of EDM and LA from technical approaches that use data as their sole guiding point”

My fear, however, which seems justified in light of the imbalance between theory and method in this chapter (a work meant to introduce, summarize, and so represent the two fields), is that the tools and methods that the fields have adopted, along with the technological- and business-oriented assumptions (and language) that those methods imply, have actually had a tendency to drive their educational philosophy.  From their past work, I get the sense that Baker and Siemens would both agree that the educational / learning space differs markedly from the kind of spaces we encounter in IT and business more generally. If this is the case, I would like to see more reflection on the nature of those differences, and then to see various statistical and machine learning methods evaluated in terms of their relevance to educational environments as educational environments.

As a set of tools for “understanding and optimizing learning and the environments in which it occurs” (solaresearch.org), learning analytics should be driven, first and foremost, by an interest in learning. This means that each EDM/LA project should begin with a strong conception of what learning is, and of the types of learning that it wants to ‘optimize’ (a term that is, itself, imported from technical and business environments into the education/learning space, and which is not at all neutral). To my mind, however, basic ideas like ‘learning’ and ‘education’ have not been sufficiently theorized or conceptualized by the field. In the absence of such critical reflection on the nature of education, and on the extent to which learning can in fact be measured, it is impossible to say exactly what it is that EDM/LA are taking as their object. How can we measure something if we do not know what it is? How can we optimize something unless we know what it is for? In the absence of critical reflection, and of maintaining a constant eye on our object, it becomes all too easy to consider our object as if its contours are the same as the limits of our methods, when in actual fact we need to be vigilant in our appreciation of just how much of the learning space our methods leave untouched.

If it is true that the field of learning analytics has emerged as a result of, and is driven by, advancements in machine learning methods, computing power, and business intelligence, then I worry about the risk of mistaking the cart for the horse and, in so doing, becoming blind to the possibility that our horse might actually be a mule—an infertile combination of business and education, which is also neither.

09/4/13

Four (Bad) Questions about Big Data

A colleague recently sent me an email that included four questions that he suggested were the most concerning to both data management companies and customers: *

  • Big Data Tools – What’s working today? What’s next?
  • Big Data Storage – Do organizations have a manageable and scalable storage strategy?
  • Big Data Analytics – How are organizations using analytics to manage their large volume of data and put it to use?
  • Big Data Accessibility – How are organizations leveraging this data and making it more accessible?

These are bad questions.

I should be clear that the questions are not bad on account of the general concerns they are meant to address. Questions about tools, scalable storage, the ways in which data are analyzed (and visualized), and the availability of information are central to an organization’s long-term information strategy. Each of these four questions addresses a central concern that has very significant consequences for the extent to which available data can be leveraged to meet not only current informational requirements but also future capacity. These concerns are good and important. The questions, however, are still bad.

The reason these questions are bad (okay, maybe they’re not bad…maybe I just don’t like them) is that they are unclear about their terms and definitions. In the first place, they imply that there is a separation between something called ‘Big Data’ and the tools, storage, analytics (here used very loosely), and accessibility necessary to manage it. In actual fact, however, there is no such ‘thing’ as Big Data in the absence of each of those four things. Transactional systems (in the most general sense, which also includes sensors) produce a wide variety of data, and it is an interest in identifying patterns in this data that has always motivated empirical scientific research. In other words, it is data, and not ‘Big Data’ that is our primary concern.

The problem with data as objects is that, until recently, we have been radically limited in our ability to capture and store them. A transactional system may produce data, but how much can we capture? How much can we store? For how long? Until recently, technological constraints have meant working with samples and using inferential statistics to make probable judgements about a population. In the era of Big Data, these technological limitations are rapidly disappearing. As we increase our capacity to capture and store data, we increasingly have access to entire populations. A radical increase in available data, however, is not yet ‘Big Data.’ It doesn’t matter how much data you can store if you don’t also have the capacity to access it. Without massive processing power, sophisticated statistical techniques, and visualization aids, all of the data we collect is for naught, pure potentiality in need of actualization. It is only once we make population data meaningful in its entirety (not sampling from our population data) through the application of statistical techniques and sound judgement that we have something that can legitimately be called ‘Big Data.’ A datum is a thing given to experience. The collection and visualization of a population of data produces another thing given to experience, a meta-datum, perhaps.

In light of these brief reflections, I would like to propose the following (VERY) provisional definition of Big Data (which resonates strongly, I think, with much of the other literature I have read):

Big Data is the set of capabilities (capture, storage, analysis) necessary to make meaningful judgements about populations of data.

By way of closing, I think it is also important to distinguish between ‘Big Data’ on the one hand, and ‘Analytics’ on the other. Although the two are often used in conjunction with each other, it is important to note that using Big Data is not the same as doing analytics. Just as the defining characteristic of Big Data above is increased access (access to data populations instead of samples), so too is increased access the defining characteristic of analytics. In the past, the ability to make data-driven judgements meant either having some level of sophisticated statistical knowledge oneself, or else (more commonly) relying upon a small number of ‘data gurus,’ hired expressly because of their statistical expertise. In contrast to more traditional approaches to institutional intelligence, which involve data collection, cleaning, analysis, and reporting (all of which took time), analytics toolkits perform these operations in real time, and make use of visual dashboards that allow stakeholders to make timely and informed decisions without also having the skills and expertise necessary to generate these insights ‘from scratch.’

Where Big Data gives individuals access to all the data, Analytics makes Big Data available to all.

Big Data is REALLY REALLY exciting. Of course, there are some significant ethical issues that need to be addressed in this area, particularly as the data collected are coming from human actors, but from a methodological point of view, having direct access to populations of data is something akin to a holy grail. From a social scientific perspective, the ability to track and analyze actual behavior instead of relying on self-reporting about behavior on surveys can give us insight into human interactions that, until now, was completely impossible. Analytics, on the other hand, is something about which I am a little more ambivalent. There is definitely something to be said for encouraging data-driven decision-making, even by those with limited statistical expertise. But when decision-makers are confronted by pretty dashboards that are primarily (if not exclusively) descriptive, lack the statistical knowledge to ask even basic questions about significance (just because there appears to be a big difference between populations on a graph, it doesn’t necessarily mean that there is one), and know nothing about the ways in which data are being extracted, transformed, and loaded into proprietary data warehousing solutions, I wonder about the extent to which analytics do not, at least sometimes, just offer the possibility of a new kind of anecdotal evidence justified by appeal to the authority of data. Insights generated in this way are akin to undergraduate research papers that lean heavily upon Wikipedia because, if it’s on the internet, it’s got to be true.

If it’s data-driven, it’s got to be true.
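
To put the point about significance concretely, here is a quick sketch (invented scores, assuming SciPy is available) of the kind of question a purely descriptive dashboard never asks: two course sections look different on a bar chart, but is the difference statistically meaningful?

    from statistics import mean
    from scipy.stats import ttest_ind

    # Invented final scores for two course sections.
    section_a = [78, 85, 62, 90, 71, 88, 66, 95]
    section_b = [70, 92, 60, 74, 81, 65, 89, 58]

    print(mean(section_a) - mean(section_b))   # ~5.8 points: looks like a real gap on a chart
    t_stat, p_value = ttest_ind(section_a, section_b)
    print(round(p_value, 2))                   # ~0.37, well above 0.05: the 'gap' may be noise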

Analytics Four Square Diagram

I’m not really happy with this diagram. It is definitely a work in progress, but hopefully it captures the gist of what I’m trying to sort out here.


* The source of these questions is an event that was recently put on by the POTOMAC Officer’s Club entitled “Big Data Analytics – Critical Support for the Agency Mission”, featuring Ely Kahn, Todd Myers, and Raymond Hensberger.
07/31/13

Learning Analytics as Teaching Practice

Too often, it seems, conversations about learning analytics focus too much on means, and not enough on ends. Learning analytics initiatives are always justified by the promise of using data as a way of identifying students at risk, in order to develop interventions that would increase their chances of success. In spite of the fact that the literature almost always holds such intervention out as a promise, a surprising lack of attention is paid to what these interventions might look like. A recent paper presented by Wise, Zhao, and Hausknecht at the 2013 Conference on Learning Analytics and Knowledge (LAK’13) goes a long way in putting learning analytics in perspective, taking some crucial first steps in the direction of a model of learning analytics as a pedagogical practice.


Analytics ABOUT Learning

Like so many, I often find myself being sucked into the trap of thinking of learning analytics as a set of tools for evaluating learning, as if learning and analytics inform one another as processes that are complementary, but nonetheless distinct. In other words, it is easy for me to think of learning analytics as analytics ABOUT learning. What this group of researchers from Simon Fraser University show, however, is that it is possible to think of learning analytics as a robust pedagogical practice in its own right. From analytics ABOUT learning, Wise, Zhao, and Hausknecht encourage us to think about analytics AS learning.


Analytics AS Learning

The paper is ostensibly interested in analytics for online discussions, and is insightful in its emphasis on dialogical factors, like the extent to which students not only actively contribute their own thoughts and ideas, but also engage in ‘listening’-type behaviors (i.e. thoughtful reading) that engender engagement in the community and a deeper level of discussion. More generally, however, two observations struck me as broadly applicable to thinking of learning analytics as a pedagogical practice.

1. Embedded Analytics are also Interventions

Wise et al make a distinction between embedded analytics, which are “embedded in the discussion interface and can be used by learners in real-time to guide their participation,” and extracted analytics, which involve the collection of traces from learning activity in order to interpret them apart from the learning activity itself. Now, the fact that student-facing activity dashboards are actually also (if not primarily) intervention strategies is perhaps fairly obvious, but I have never thought about them in this way before. #mindblown

2. Analytics are Valued, through and through

By now we all know that, whatever its form, research of any kind always involves values, no matter how much we might seek to be value neutral. The valued nature of learning analytics, however, is particularly salient as we blur the line between analysis (which concerns itself with objects) and learning (which concerns itself with subjects). Regardless of the extent to which we realize how our use of analytics reinforces values and behaviors beyond those explicitly articulated in a curriculum, THAT we are using analytics and HOW we are using them DO have an impact. Thinking carefully about this latent curriculum, and actively identifying the core values and behaviors that we would like our teaching practices to reinforce, allows us to ensure consistency across our practices and with the larger pedagogical aims that we are interested in pursuing.

Wise, Zhao, and Hausknecht identify six principles (with which I am generally sympathetic) that guide their use of analytics as, and for the sake of, pedagogical intervention:

  1. Integration – in order for analytics to be effectively used by students, the instructor must present those analytics as meaningfully connected to larger purposes and expectations for the course or activity. It is incumbent upon the ‘data-driven instructor’ to ensure that data are not presented merely as a set of numbers, but rather as meaningful information of immediate relevance to the context of learning.
  2. Diversity (of metrics) – if students are presented with too few sources of data, it becomes very easy for them to fixate upon optimizing those few data points to the exclusion of others. Sensitive also to the opposite extreme, which would be to overload students with too much data, it is important to present data in such a way as to encourage a holistic approach to learning and learning aims.
  3. Agency – students should be encouraged to use the analytics to set personal goals, and to use analytics as a way of monitoring their progress relative to these. Analytics should be used to cultivate autonomy and a strong sense of personal responsibility. The instructor must be careful to mitigate against a ‘big-brother’ approach to analytics that would measure all students against a common and rigid set of instructor-driven standards. The instructor must also act to mitigate against the impression that this is what is going on, which has the same effect.
  4. Reflection – encouraging agency involves cultivating habits of self-reflection. The instructor should, therefore, provide explicit time and space for reflection on analytics. The authors, for example, use an online reflective journal that is shared between students and instructor.
  5. Parity – activities should be designed to avoid an imbalance of power in which the instructor simply collects data on the students, and instead use data as a reflective and dialogic tool between the instructor and students. In other words, data should not be used for purposes of evaluation or ranking, but rather should be used as a critical tool for the purpose of identifying and correcting faults or reinforcing excellences.
  6. Dialogue – just as analytics are used as an occasion for students to cultivate agency through active reflection on their behavior, the instructor should “expose themselves to the same vulnerability as the students.” Not only should instructors attend to and reflect upon their own analytics, but do so in full view of the class and in such a way as to allow students to criticize him/her in the same way as s/he does them.


07/22/13

How Big Data Is Taking Teachers Out of the Lecturing Business

A Summary and Response

In his Scientific American article, “How Big Data Is Taking Teachers Out of the Lecturing Business,” Seth Fletcher describes the power of data-driven adaptive learning for increasing the efficacy of education while also cutting the costs associated with hiring teachers. Looking specifically at the case of Arizona State University, where computer-assisted learning has been adopted as an efficient way to facilitate the completion of general education requirements (math in particular), Fletcher describes a situation in which students’ scores increase, teacher satisfaction improves (as teachers shift from lecturing to mediating), and profit is to be made by teams of data scientists for hire.

There are, of course, concerns about computer-assisted adaptive learning, including those surrounding issues of privacy and the question of whether such a data-driven approach to education doesn’t tacitly favor STEM (training in which can be easily tested and performance quantified) over the humanities (which demands an artfulness not easily captured by even the most elaborate of algorithms). In spite of these concerns, however, Fletcher concludes with the claim that “sufficiently advanced testing is indistinguishable from instruction.” This may very well be the case, but his conception of ‘instruction’ needs to be clarified here. If by instruction Fletcher means to say teaching in general, then the implication of his statement is that teachers are becoming passé, and will at some point become entirely unnecessary. If, on the other hand, instruction refers only to a subset of activities that take place under the broader rubric of education, then there remains an unquantifiable space for teachers to practice pedagogy as an art, the space of criticism and imagination…the space of the humanities, perhaps?

As the title of Fletcher’s piece suggests, Big Data may very well be taking teachers out of the lecturing business, but it is not taking teachers out of the teaching business. In fact, one could argue that lecturing has NEVER been the business of teaching. In illustrating the aspects of traditional teaching that CAN be taken over by machines, big data initiatives are providing us with the impetus to return to questions about what teaching is, to clarify the space of teaching as distinct from instruction, and with respect to which instruction is of a lower-order even as it is necessary. Once a competence has been acquired and demonstrated, the next step is not only to put that competency to use in messy, real-world situations–situations in which it is WE who must swiftly adapt–but also to take a step back in order to criticize the assumptions of our training. Provisionally (ALWAYS provisionally), I would like to argue that it is here, where technê ends and phronesis begins, that the art of teaching begins as well.

07/22/13

Using Data You Already Have

This is a rich (and at times quite dense) article by authors from the University of Central Florida that effectively demonstrates some of the potential for developing predictive models of student (non-)success, but also some of the dangers. It emphasizes the fact that the data do not speak for themselves, but require interpretation at every level. Interpretation guides not only the questions researchers ask and the ways in which certain insights become actionable, but also their interventions.

Dziuban, Moskal, Cavanagh & Watts (June 2012) “Analytics that Inform the University: Using Data You Already Have”

An interesting example from the article is the observation that, when teaching modalities are compared (e.g. blended, online, face-to-face, lecture capture), the blended approach is found to produce greater success (defined as a grade of C or higher) and fewer withdrawals. Lecture capture, on the other hand, sees the least success and the most withdrawals, comparatively. This is a striking observation (especially as institutions invest more and more in products like Echo360, and as MOOC companies like Coursera begin to move into the business of providing lecture-capture technology). When modality is included in a logistic regression alongside other variables (e.g. cumulative GPA, high school GPA), however, it is found to have nearly no predictive power. The lesson here is that our predictive models need to be carefully assessed, and interventions carefully crafted, so that we are actually identifying students at risk, and so that our well-meaning, but somewhat mechanically generated, interventions do not have unexpected and negative consequences (i.e. What is the likelihood that identifying a PARTICULAR student as ‘at risk’ may in fact have the effect of DECREASING their chances of success?)
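
The confounding pattern described here is easy to reproduce on synthetic data. The sketch below uses entirely invented numbers (not UCF’s data, and assuming statsmodels and NumPy are installed) to build a toy world in which cumulative GPA drives both success and the choice of a blended modality; modality then looks predictive on its own, but its coefficient collapses toward zero once GPA enters the model.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic cumulative GPA, clipped to the 0-4 scale.
    gpa = np.clip(rng.normal(3.0, 0.5, n), 0, 4)

    # In this toy world, higher-GPA students are more likely to choose the blended modality.
    blended = (rng.random(n) < 1 / (1 + np.exp(-2 * (gpa - 3.0)))).astype(float)

    # Success (a grade of C or better) depends on GPA only, not on modality.
    success = (rng.random(n) < 1 / (1 + np.exp(-2 * (gpa - 2.5)))).astype(float)

    # Model 1: modality alone appears to 'predict' success.
    m1 = sm.Logit(success, sm.add_constant(blended)).fit(disp=0)

    # Model 2: add GPA and the modality coefficient shrinks toward zero.
    m2 = sm.Logit(success, sm.add_constant(np.column_stack([blended, gpa]))).fit(disp=0)

    print(m1.params[1], m2.params[1])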

06/25/13

Personal Activity Monitors and the Future of Learning Analytics

In Spring 2013, while discussing the details of his final project, a gifted student of mine revealed that he was prone to insomnia. In an effort to understand and take control of his sleeping habits, he had begun wearing a device called a ‘Jawbone UP.’ I recently started wearing the device myself, and have found it an exciting (and fun) technology for increasing behavioral awareness, identifying activity patterns (both positive and negative), and motivating self-improvement. Part of the movement toward a quantification of self, this wearable technology not only exemplifies best practice in mobile dashboard design, but it also opens up exciting possibilities for the future of learning analytics.

Essentially, the UP is a bracelet that houses a precision motion sensor capable of recording physical activity during waking hours, and tracking sleep habits during the night. The wearable device syncs to a stunning app that presents the user with a longitudinal display of their activity and makes use of an ‘insight engine’ that identifies patterns and makes suggestions for positive behavioral improvements. The UP is made even more powerful by encouraging the user to record their mood, the specifics of deliberate exercise, and diet. The motto of the UP is “Know Yourself, Live Better.” In the age of ‘big data,’ an age in which it has become possible to record and analyze actual behavioral patterns in their entirety rather than simply relying upon samples of anecdotal accounts, and in which our mobile devices are powerful enough to effortlessly identify patterns of which we, ourselves, would otherwise be quite ignorant, the UP (and its main competitor, the Fitbit Flex) are exemplary personal monitoring tools, and represent exciting possibilities for the future of learning analytics.

Personal activity monitors like the UP effectively combine three of the six “technologies to watch,” as identified in the 2013 Higher Education Edition of the NMC Horizon Report: Wearable Technology, Learning Analytics, and Games and Gamification.

Wearable Technology. As a bracelet, the UP is obviously a wearable technology. This kind of device, however, is strikingly absent from the technologies listed in the report, which tend to have a prosthetic quality, extending the user’s ability to access and process information from their surroundings. The most interesting of these, of course, is Google’s augmented-reality-enabled glasses, Project Glass. In contrast to wearable technologies that aim at augmenting reality, motivated by a post-human ambition to generate a cyborg culture, the UP has an interestingly humanistic quality. Rather than aiming at extending consciousness, it aims at facilitating self-consciousness and promoting physical and mental well-being by revealing lived patterns of experience that we might otherwise fail to recognize. The technology is still in its infancy and is currently only capable of motion sensing, but it is conceivable that, in the future, such devices might be able to automatically record various other bodily activities as well (like heart rate and geo-location, for example).

Learning Analytics. Learning analytics is variously defined, but it essentially refers to the reporting of insights from learner behavior data in order to generate interventions that increase the chances of student success. Learning analytics takes many forms, but one of the most exciting is the development of student dashboards that identify student behaviors (typically in relation to a learning management system like Blackboard) and make relevant recommendations to increase academic performance. Acknowledging the powerful effect of social facilitation (the social-psychological insight that people often perform better in the presence of others than they do alone), such dashboards often also present students with anonymized information about class performance as a baseline for comparison. To the extent that the UP and the Fitbit monitor activity for the purpose of generating actionable insights that facilitate the achievement of personal goals, they function in the same way as student dashboards that monitor student performance. Each of these systems is also designed as an application platform, and the manufacturers strongly encourage the development of third-party apps that would make use of and integrate with their respective devices. Unsurprisingly, most of the third-party apps that have been built to date are concerned with fitness, but there is no reason why an app could not be developed that integrated personal activity data with information about academic behaviors and outcomes as well.

Games and Gamification. The ability to see one’s performance at a glance, to have access to relevant recommendations for improvement according to personal goals, and to have an objective sense of one’s performance relative to a group of like individuals can be a powerful motivator, and it is exactly this kind of dashboarding that the UP does exceptionally well. Although aimed not at academic success but at physical and mental well-being, the UP (bracelet and app) functions in the same way as learning analytics dashboards, but better. To my mind, the main difference between the UP and learning analytics dashboards–and the main area in which learning analytics can learn from consumer products such as this–is that it is fun. The interface is user-friendly, appealing, and engaging. It is intentionally whimsical, like a video game, and so encourages frequent interaction and a strong desire to keep the graphs within desired thresholds. The desire to check in frequently is further increased by the social networking function, which allows friends to compare progress and encourage each other to be successful. Lastly, the fact that the primary UP interface takes the form of a mobile app (available for both major mobile platforms) is reflective of the increasing push toward mobile devices in higher education. Learning analytics and student dashboarding can only promote student success if students actually use them. More attention must be placed, then, on developing applications and interfaces that students WANT to use.

Screen shot of the iOS dashboard from the Jawbone UP app

Personal activity monitors like the UP should be exciting to, and closely examined by, educators. As a wearable technology that entices users to self-improvement by making performance analytics into a game, the UP does exactly what we are trying to do in the form of student activity dashboards, but does it better. In this, the UP app should serve as an exemplar as we look forward to the development of reporting tools that are user-focused, promoting ease, access, and fun.

Looking ahead, however, what is even more exciting (to me at least) is the prospect that wearable devices like the UP might provide students with the ability to extend the kinds of data that we typically (and most easily) correlate with student success. We have LMS information, and more elaborate analytics programs are making effective use of more dispositional factors. Using the UP as a platform, I would like to see someone develop an app that draws upon the motion, mood, and nutrition tracking power of the UP and that allows students to relate this information to academic performance and study habits. Not only would such an application give students (I would hesitate to give personal data like this to instructors and/or administrators) a more holistic vision of the factors contributing to or detracting from academic success, but it would also help to cultivate healthy habits that would contribute to student success in a way that extends beyond the walls of the university and into long-term relationships at work, with family, and with friends as well.

03/18/13

At the End of Reason…is PowerPoint

PowerPoint doesn't kill people

[Image Creative Commons licensed / Flickr user cogdogblog]

Somehow, PowerPoint has become, for many educators and researchers, the sine qua non of academic presentation. As though knowledge can only be communicated in a standardized digital format. As if rich concepts require this, in order to become intelligible. As though hollow half-thoughts will take life in this magical medium. The truth, in my observation, is far to the contrary.

I don’t know if my experience is atypical (though I’ve no reason to think it would be), but I’ve seen a lot of bad PowerPoint presentations. Heck, I’ve probably given a lot of bad PowerPoint presentations. But, despite my conviction that PowerPoint often detracts from good content, and further muddles questionable work, I continue to toe the line and articulate my work in this format. The problem is that I often can’t tell if my presentation is working or not. This is my primary concern about the medium, which I’ll return to below. First, a bit of history is in order.

A couple of years ago I attended an interdisciplinary academic retreat for postgraduate students. And amongst the monotonous onslaught of ppt after ppt after ppt, one of my close friends, a philosophical mind of the first order (and something of a self-avowed Luddite), took the stage and just spoke. Without the visual aid, or intrusion, of a PowerPoint. Now, I’m very familiar with the man’s work, which is, in a word, brilliant; his presentation on that occasion was no exception. Yet the audience greeted him as an oddity, offering him half-hearted applause, limited intellectual engagement, and the kind of contemptuously lukewarm questions that make an academic lose sleep. Now far be it from me to claim a transcendent knowledge of others’ motivations, but, given the widely varied topics of the day, the only standout feature of my colleague’s presentation was the format. My hunch: the audience couldn’t reconcile themselves to the anomaly of a PowerPoint-free presentation. This anecdote lies at the heart of my conviction that PowerPoint has become a symptom of academics’ and researchers’ collective madness.

PowerPoint has been with us for quite some time now (if you consider a couple of decades to be ‘quite some time’). Interestingly, this quintessential Microsoft product, and mainstay of the Office suite, initially had nothing to do with Microsoft, or the Windows OS. In fact, it was originally called “Presenter” and was developed in the 1980s by a company called Forethought Inc. for use on Macs. It wasn’t until 1990, when the first edition of Microsoft Office was released, that PowerPoint was introduced on the Windows platform. Since then, it has been developed as a staple of the Windows ecosystem and has become synonymous with the Windows experience. Never one to concede market share willingly, Microsoft has also been diligent in developing their Office suite for the Mac OS, making it the de facto standard for digital presentations across platforms (though it’s not so much the idea of a specific software product, as the pedagogical implications of general digital incompetence, that concerns me here).

PowerPoint has rightly become the subject of significant critique, and some degree of scholarly scrutiny. Tufte has provocatively declared PowerPoint to be “evil,” and offered a thoughtful and rigorous treatise on “The Cognitive Style of PowerPoint” (the interested reader can access the Coles Notes version at the Wired magazine archive: http://www.wired.com/wired/archive/11.09/ppt2.html). Additionally, an interesting body of research suggests that while postsecondary students may show a preference for PowerPoint, and benefit from slide presentations used effectively (Apperson, Laws & Scepansky, 2008), poorly developed presentations unsurprisingly correlate with worse learning outcomes (Bartsch & Cobern, 2003). Even the scientists at NASA have weighed in, suggesting that ineffective PowerPoint use may have been a contributing factor in the loss of the space shuttle Columbia. Evidently, there is some substance to concerns that ‘PowerPoint is evil’, that ‘PowerPoint Makes You Dumb,’ and that as academics and educators we are dying a ‘death by PowerPoint’…

There are a lot of manifest problems with the way PowerPoint presentations are often delivered: PowerPoint content so dislocated from the presenter’s spoken content that it becomes unintelligible, or at the other end of the spectrum presenters simply reading slides verbatim (calling into question the need for a presenter in the first place). Or, slides that are hopelessly packed with an overabundance of hierarchically indistinguishable information (NASA’s concern), or slides with fragmented bullet points that just don’t make sense. My own main pet peeve is any disjuncture between spoken content and slide content such that one is forced to choose—either try to follow the content of the slides, or the presenter’s speech. Ultimately, the audience’s understanding of both is bound to suffer.

So, What is to be Done?

Now I’m not trying to vilify the PowerPoint program itself. Quite the contrary. Over the years it’s become a remarkably powerful, feature-rich tool. But the learning curve of its users appears not to have matched the developmental trajectory of the software itself. The idea seems to be that a bad PowerPoint is better than no PowerPoint at all, which, to me, is crazy (as stated above, I don’t consider myself exempt from this irrationality). In its lifetime PowerPoint has gone through numerous versions, with ever-escalating functionality and, arguably, technical complexity. The latest, greatest version, PowerPoint 2013, is a dizzying cornucopia of features. But I’m hard-pressed to see how it offers any significant advantages to the average user, relative to, say, PowerPoint 2010 or 2007. And it seems that the average user was largely unable to take full advantage of the features in previous versions anyway.

My intent, however, is not merely to complain about a perceived fault in our academic habitus. In my view, this is a straightforward problem with a straightforward solution. The underlying challenge, it seems, is this: as digital technologies grow in complexity, the infrastructure of our education needs to follow suit. Presently, I believe, our training is often simply lacking. I believe this certainly extends beyond PowerPoint to a significant range of technologies that might enhance our effectiveness as researchers, educators, and professionals. And I think the solution is educational reform.

In fact, I think the educational reform we require is already taking place at the grade-school level, where computers, and training in computer skills, are increasingly ubiquitous elements of the curricula. My hope is that this will produce a shift in the way we structure post-secondary learning (though at the very least, we’re bound to get crops of students who are more and more tech savvy when they reach the undergraduate level).

My basic vision is this: compulsory technology education for students in all faculties, at minimum through the first two years of their undergraduate work. I’m not talking about some diluted ‘modular’ training, which offers a superficial introduction to PowerPoint/Word/Refworks. Rather, I’m advocating full-scale coursework designed to provide a range of technical expertise, including:

  • productivity software, like the Office Suite, but also
  • creative software, including graphic design and video editing,
  • operating systems, including Mac OS, Windows, and the dominant mobile platforms, like Android and iOS,
  • skills for making effective use of social media (Facebook, Twitter, and YouTube come most readily to mind), and scholarly media (Moodle, Blackboard, etc.), and
  • a foundational introduction to creating websites—how to register a domain, secure webhosting, and set up a basic WP site, etc.

I’m talking about training for meaningful competencies in the relevant technologies of a digital world. Ideally, my pedagogical utopia would see this implemented throughout undergraduate and graduate programs, though I recognize that the increased specialization of the upper-undergraduate years and graduate study may be the foundation for a counter-argument. One might, reasonably, query: “what has graphic design got to do with a degree in the physical sciences?” However, I believe the argument is validated by a wider view of education’s purposes. It’s not merely a romantic ideal to say that education should enhance and improve one’s life. It should. To this end, sophisticated digital competencies are of clear utility. Additionally, the wearisome questions about the value of “arts degrees” might be turned on their head if we were producing graduates who could translate their creativity into viable digital media and, thereby, make themselves competitive in a global market that prioritizes these skills. We live in a day and age of immeasurably powerful and increasingly complex technology. Standardizing competency in these core technologies seems more and more like a basic necessity. Most importantly though, such a reform might mean we’d no longer need to endure so many terrible PowerPoints.


References:
Adams, C. (2006). PowerPoint, habits of mind, and classroom culture. Journal of Curriculum Studies, 38 (4): 389-411.

Apperson, J.M., Laws, E.L. & J.A. Scepansky. (2008). An assessment of student preferences for PowerPoint presentation structure in undergraduate courses. Computers and Education, 50 (1): 148-153.

Bartsch, R.A. & K.M. Cobern. (2003). Effectiveness of PowerPoint presentations in lectures. Computers and Education, 41 (1): 77-86.