Reading, Writing, and Digital Age Assessment


I recently attended a conference hosted by a company that produces a high-cost learning management system (LMS). In one seminar at the conference, a business law professor collaborated with a sales representative from a textbook company (that is to say, a corporate producer of digital learning content) to present the affordances of a seamless interface between the LMS and the textbook company’s digital content. The instructor could easily assign digital readings and prefabricated online multiple-choice and true/false quizzes, the scores of which could be imported directly into her digital gradebook. Students could prepare for these quizzes through adaptive, technology-driven review questions that re-present yet-to-be-mastered material to students. No writing or independent review required on the students’ parts; no reading on the instructor’s.

I attended the seminar because I’m always looking for ways that the digital tools my students and I use can support and extend student agency, exploration, and collaboration beyond what non-connected, analog approaches can offer.

But does the ease with which institutions and teachers adopt learning technologies encourage us to consider, or to overlook, what we know about how students’ brains and learning dispositions develop? In particular, as the emphasis in educational assessment moves toward the new metrics of data analytics, are our choices about how to apply such measures sufficiently focused on learning, rather than on outcomes and scorecards?

For example, software “intelligent tutors” such as that used by the business law instructor’s textbook vendor can analyze student online behavior to create individualized tutoring strategies for certain types of learning (Yan et al., 2013). Yet the designs of such software “often neglect theories and empirical findings in learning science that explain how students learn” (Marzouk et al., 2016, p. 1).
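To make that mechanism concrete, here is a deliberately minimal sketch (in Python, and emphatically not any vendor’s actual algorithm) of the adaptive-review idea described above: track which items a student has yet to master and keep re-presenting them until a mastery criterion is met. The function name, the two-correct-in-a-row criterion, and the ask() callback are all invented for illustration.

```python
# Toy illustration of adaptive review (not any vendor's actual system):
# missed items are re-queued until they are answered correctly twice in a row.
import random


def adaptive_review(items, ask, mastery_streak=2):
    """Re-present each item until the student answers it correctly
    `mastery_streak` times in a row; a miss resets the streak."""
    streaks = {item: 0 for item in items}
    queue = list(items)
    while queue:
        random.shuffle(queue)          # vary the order of re-presented items
        item = queue.pop()
        if ask(item):                  # ask() returns True for a correct answer
            streaks[item] += 1
        else:
            streaks[item] = 0          # yet-to-be-mastered: start the streak over
        if streaks[item] < mastery_streak:
            queue.append(item)         # the item comes back around


# Example: "quiz" the student by flipping a weighted coin for each item.
if __name__ == "__main__":
    adaptive_review(["consideration", "offer", "acceptance"],
                    ask=lambda item: random.random() < 0.7)
```

Even this toy version makes visible the design question Marzouk et al. raise: the loop optimizes recall of discrete items, and nothing in it reflects a theory of how understanding develops.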

This post considers potential blind spots in data-driven assessment practices in higher education and suggests that more work needs to be done, perhaps using technology-based biometric and analytic tools, on the question of how digital reading and writing affect learning.

 

Digital Age Assessment

Current trends in higher education assessment include a movement toward a culture of accountability, similar to that of K-12, built on standardization and testing that may not reflect the abilities or learning needs of students (Addison, 2015), as well as an emphasis on big data and “learning analytics,” the application of statistical analysis and explanatory and predictive models “to enhance or improve student success” (Arroway, Morgan, O’Keefe, & Yanosky, 2016, p. 7). While such “next-generation assessment strategies” may “hold the potential to measure a range of cognitive skills, social, emotional development, and deeper learning” (Adams Becker et al., 2017, p. 14), the measurement goals emphasized by higher education institutions in the age of learning analytics are often predictive or score-focused, and this preoccupation can trickle down to the classroom, where assessment based on empirical observation is paramount for translating learning theory into learning.
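For readers unfamiliar with what such models look like under the hood, here is a minimal, hypothetical sketch of the predictive side of learning analytics: a logistic regression that estimates a student’s probability of completing a course from LMS engagement data. The feature names and numbers are invented for illustration; institutional models are far more elaborate, but the logic is similar.

```python
# Hypothetical sketch of a "learning analytics" predictive model: estimating
# course completion from LMS engagement features. All data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-student features: logins/week, pages viewed, quiz attempts, avg. days late
X = np.array([
    [5, 120, 10, 0.2],
    [1,  15,  2, 3.5],
    [4,  90,  8, 0.8],
    [0,   5,  1, 5.0],
    [6, 140, 12, 0.0],
    [2,  30,  3, 2.1],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = completed the course

model = LogisticRegression().fit(X, y)

# A "risk score" for a new student: the predicted probability of completion.
new_student = [[2, 25, 4, 1.5]]
print(model.predict_proba(new_student)[0, 1])
```

What the sketch makes plain is how little of this bears on learning itself: the model is optimized to predict an outcome (a score, a completion flag), not to explain how a student is constructing understanding, which is precisely the trickle-down concern noted above.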

Yet learning technologies themselves now take their place alongside medical technologies such as functional magnetic resonance imaging (fMRI) of brain activity as a source of data about how learners learn. For example, video cameras and other tracking devices can measure visual attention, inattention, and other biometrics, and adaptive learning technologies can measure student progress (Adams Becker et al., 2017).

As virtual and physical classrooms move toward increasingly paperless reading and writing formats, are the affordances of these metrics being tasked to measure the impact of medium on reading comprehension and writing quality?

 

Digital Reading and Writing

In the two sections of a first-year research writing course I taught this semester at my community college, I blended process pedagogy; writing in the disciplines (WID) and genre pedagogies; a cognitivist emphasis on students structuring and applying content and skills for themselves; a constructionist emphasis on peer support and feedback as students developed authentic research skills within classroom “communities of practice”; and an increased emphasis on connectivist pedagogies, including opportunities for students to explore various digital content development and curation tools. Following guided and collaborative practice in each of these skill areas, students embarked on a capstone eight-week independent research project. At the halfway point in the independent project, I administered a self-assessment/survey about students’ curation and knowledge construction practices.

I asked students to describe and rate the perceived effectiveness of their methods for organizing materials, generating bibliographical information, taking and organizing notes, and developing and updating a tentative research question, thesis, and outline. I also asked students to describe any barriers to establishing and keeping an organized routine for making progress on their projects. While this survey helps me provide just-in-time support to individual students and shows me where to re-tool instruction for the next time around, it also gives students an opportunity to support and learn from each other as they discuss barriers and share solutions.

In our post-survey discussion, students shared their use of approaches such as visual organizing tools, digital note-taking apps, and recursive refining of a topic and thesis based on new source information. I also learned that 40% of the respondents stated that, although they had developed digital reading and writing practices during the course that they felt were personally effective, they had moved back to hand-written note-taking for their independent research projects because they felt it better supported their learning and writing.

While students and I alike acknowledged that digital note-taking as we’ve practiced it can provide meta-organization for complex thinking about a topic, many students stated that for their independent projects they actually understood their notes better and developed a clearer big-picture understanding of their topic when they used hand-written notes. Revealing comments collected from students working at both higher and lower levels of reading and writing proficiency included:

  • “While my computer organization method turned out well, having to physically write out my notes helps me understand my notes quicker, if that makes sense.”
  • “I have been using my annotated bibliography and the Cornell Notes template, but I have decided that I just need to buy a notebook to take my notes by hand so I am more organized and thorough about my notes.”
  • “Even just feeling the paper with my fingers, I absorb better what I’m reading. It takes longer to print it out and read but I remember more.”

Are these students opting out of a paperless approach to learning because their brains are telling them to?

These students’ reflective and deliberate choice to use hand-written notes for a major cognitive undertaking, even after reporting satisfaction with learning and using digital note-taking methods, has prompted me to ask what empirical evidence is available (or what gaps are evident) in the literature of the learning sciences and of reading and composition studies on the effectiveness of technology-based reading and writing modes and pedagogies.

 

The Literature on Assessing Reading Comprehension by Mode

Educators can now choose from an overflowing palette of assistive and universal reading, writing, and conferencing technologies when designing virtual or face-to-face instruction. But teachers need not only to consider how these technologies correspond to contemporary learning theories, instructional (or business) models, or the results of small qualitative studies, but also to look for and establish empirical verification of the effectiveness of technology-based learning tools, as called for by ISTE Educator Indicators 1c and 2c.

As college writing courses and writing centers continue to develop pedagogies and technologies to meet the learning needs of a diverse student population, as high-stakes tests are increasingly administered in digital formats, and as e-texts increasingly replace print texts in all disciplines, we need to understand how digital and print modes of reading and writing affect comprehension.

The meticulousness with which Singer and Alexander (2017) developed a meta-analysis of this question as it appears in the research literature of the past 25 years demonstrates some of the considerations, such as reading definitions, processes, assessments, and complexity, that educators should weigh when using digital texts and coaching students in their use.

The authors undertook to “describe the state of the research encompassing print and digital reading and to better ascertain how the affordances of print or digital mediums relate to what students understand from those textual encounters” over a 25-year period between 1992 and 2017, and concluded that, while several major models of text comprehension developed during this timeframe have been used to understand text processing or guide the creation of multimedia materials, the question of the impact of digital vs. print text on reading comprehension is “underinvestigated” (p. 1008). A similar meta-analysis conducted at Johns Hopkins University of empirical studies comparing technology-enhanced reading instruction to control groups also reported a lack of experimental research on new technology applications (Cheung & Slavin, 2012).

Singer and Alexander (2017) interrogated how reading comprehension has been defined and assessed since an important early literature review in 1992 by reviewing empirical studies in which participants read both print and digital texts, including only studies in which evidence collection involved more objective means than participant self-reporting. Only 36 studies met these criteria. The authors were “unpleasantly surprised by the paucity of either explicit or implicit definitions” for either digital or print reading, as well as by the “lack of details regarding reliability and validity” in the studies that fit their criteria (pp. 1016, 1020). The authors assembled a definition of reading that included “both conceptual (‘what is’) and operational (‘how’) elements” from the work of Mayes, Sims, and Koonce (2001; qtd. in Singer & Alexander): “Reading [relies on working memory] [to retain words to allow the reader to link words together in a meaningful process]. Reading also includes [word recognition] and other [processes that occur automatically]” (p. 1017). Similarly, Singer and Alexander did not find a definition of digital reading that reflected the sense that digital reading may involve not just a different context but also different processes.

For example, reading in digital modes may or may not be accompanied by various reading tools (such as searching, highlighting, and hyperlinked navigation), and can change reading strategies such as pre-reading, skimming, and re-reading (de Lima Lopez, 2013).

Singer and Alexander call on educators to consider the “unique processing demands that come with processing in an online environment,” drawing a distinction between “reading digitally,” which is simply the reading of unenhanced digital text, and “digital reading,” in which additional cognitive processing skills such as web navigation are also involved (Singer & Alexander, 2017, p. 1031). The authors found that relatively more empirical studies took place in the last third of their 25-year timeframe, as researchers sought to measure how the many features now available on digital reading devices, such as text manipulation and scrolling, affect reading comprehension.

Overall, the authors found that while various factors beyond medium (such as individual differences and assessment factors) influence reading comprehension, medium can have a significant impact on reading comprehension in specific contexts. For example, a study by “Lenhard et al. (2017) [qtd. in Singer and Alexander, 2017] concluded that although participants read more quickly in [a] digital medium, it led to a shallower processing of the text” (p. 1034). In other studies, varying text lengths and assessment types also revealed differences in reading comprehension by medium. A clear takeaway from Singer and Alexander’s study is that educators need to consider students’ reading ability, broader digital literacy backgrounds, and the level of text difficulty and length when choosing reading media.

In another meta-analysis, Cheung and Slavin (2012) found that while studies comparing reading achievement by mode were few, the small pool of studies available suggested that “technology-assisted reading instruction” (a term distinct from “digital reading”) may have disparate impact on students, in some cases with greater gains in reading outcomes shown by students of low and middle ability and low and middle socio-economic status.

Just as reading has been transformed and arguably redefined by the Digital Age, while research on the learning implications of technological choices in reading instruction has yet to catch up to the speed with which those choices are implemented, so too has writing been transformed and redefined.

 

Digital Composing and Assessment

At the college level, digital modes of writing and associated technologies can “facilitate the teaching of long-held, research-based principles within composition pedagogy such as collaborative learning and writing, development of academic discourse, and evaluation of student texts” (Marlow, 2011, p. iii). Not only do electronic composing environments impact writing processes and the teaching of writing (for example, by replacing traditional notions of the writing process, from planning to drafting to revising, with a recursive, ongoing revision process), but the connectivity of writing environments (in which, for example, sources and compositions exist in the same digital composing space) and the mobility of such devices provide new ways of making meaning and new affordances for teaching (Herron, 2017).

So, perhaps even more than in the case of reading comprehension, what it means to write in a digital age is much more than a question of what it means to use a computer when writing: the nature of writing itself has been radically impacted by the ubiquity, genres, devices, and other affordances of digital composing. This complicates questions of how to meaningfully evaluate writing instruction or writing quality by mode.

One area where the impact of digital versus handwritten composing on performance clearly matters is high-stakes assessment. Horkay, Bennett, Allen, Kaplan, and Yan (2006) found in a comparative study of high-stakes writing tests by mode that individuals typically write better in one mode or the other, so that computer-based assessments may underestimate students’ writing ability, whether because of unfamiliarity with basic computer literacy skills or because of being tested in a non-preferred writing mode.

In addition, the example from the beginning of this post suggests a number of ethical considerations for how teachers, both those who teach writing and those who teach in other content areas, may be influenced by the availability and the providers of digital reading and composing technologies.

First, digital technologies can facilitate an increased corporatization of education in multiple ways. To begin with, the business law professor, pushed by her institution to teach more students and generate revenue through online instruction, responds with the expedient pedagogical choice to eliminate constructed (written) responses from her instruction. Next, in addition to consulting disciplinary and pedagogical practices in her decisions about how to teach, she is responding to, and now participating in, the marketing of educational software. As Addison (2015) points out, it is not only through standardization that the teaching of writing can be influenced by corporate interests, but also through the increased role that textbook companies seek to take in individual classrooms through the marketing of digital education products as the market for traditional textbooks shrinks. A further, and critical, consideration raised by this example is what corporatized digital learning environments imply for assessment: writing is no longer part of the assessment for this content area, in part because of corporate influence.

Second, these pressures are likely to result, as in this example, in a decreased use of writing as a mode of learning in the disciplines. Writing instruction thus becomes more and more the province of the English department and the general education course battery, and may, as a result, be viewed by faculty of other disciplines and by students as irrelevant to students’ career fields or major educational goals.

As Marlow (2011) and Herron (2017) acknowledge, not only is digital-age writing a new form of literacy, but writing instruction is changing to reflect the digital nature of that literacy.

Yet even within the field of technology education, empirical studies of the impact of technology on learning (and certainly on literacy learning) can be de-emphasized relative to other areas of focus, such as defining standards and frameworks, implementing technologies, and designing lessons. In 2004, Zuga called for more research on whether and how technology education frameworks relate to what is known about cognition. In 2015, Katsioloudis analyzed K-12 technology learning standards from the U.S., England, and Australia and concluded that they were not aligned with Epstein’s Brain Growth Model or Jean Piaget’s Stages of Intellectual Development.

It may be up to the literacy disciplines and to individual teacher-scholars to enact an increased focus on empirical studies of the impact of digital reading on reading comprehension and of digital composing on writing. For example, Davis, Orr, Kong, and Lin (2015) compared the achievement levels of secondary students using tablets to those of students using laptops and found little difference in outcomes, with the caveats that the written assignments were relatively short (350-450 words) and the assessment focused on surface features more than on higher-order writing concerns. On a larger and more applied scale, education scholar Suzanne Miller (2013) took up the challenge of analyzing the relationship between multimodal composition practices and student learning through a meta-analysis of qualitative studies, related the results to what is known from neuroscience about “embodied learning,” and developed an evidence-based pedagogical framework for multimodal composing.

I close this reflection on the need for empirical assessment of how digital reading and writing affect learning, and on the need for thoughtful consideration of whether we are best applying the affordances of digital metrics to measuring and improving learning, with the final lines from Addison’s (2015) appraisal of how standardization and high-stakes testing are influencing the teaching of writing. She calls faculty, and those who support them, to become leaders in assessment: “[W]e need to place teachers’ professional judgment at the center of education and help establish them as leaders in assessment. Doing so requires re-envisioning formative and summative assessment as a process of inquiry at the heart of the work of teachers.”

 

References:

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Hall Giesinger, C., & Ananthanarayanan, V. (2017). NMC Horizon report: 2017 higher education edition. Austin, TX: The New Media Consortium.

Addison, J. (2015). Shifting the locus of control: Why the common core state standards and emerging standardized tests may reshape college writing classrooms. The Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=82

Arroway, P., Morgan, G., O’Keefe, M., & Yanosky, R. (2016, March). Learning analytics in higher education [Research report]. Louisville, CO: ECAR.

Cheung, A., & Slavin, R.E. (2012, April). The effectiveness of educational technology applications for enhancing reading achievement in K-12 classrooms: A meta-analysis. Baltimore, MD: Johns Hopkins University, Center for Research and Reform in Education.

Davis, L.L., Orr, A., Kong, X., & Lin, C. (2015). Assessing student writing on tablets. Educational Assessment, 20(3), 180-198. doi:10.1080/10627197.2015.1061426

de Lima Lopez, R.E. (2013). Some considerations on digital reading. In Sampson, D.G., Spector, J.M., Ifenthaler, D., & Isaias, P. (Eds.), Proceedings of the IADIS International Conference on Cognition and Exploring Learning in the Digital Age (CELDA 2013) (pp. 419-421). IADIS Press.

Herron, J.P. (2017). Moving composition: Writing in a mobile world (Doctoral dissertation). Retrieved from ProQuest. (10607617)

Horkay, N., Bennett, R.E., Allen, N., Kaplan, B., & Yan, F. (2006). Does it matter if I take my writing test on computer? An empirical study of mode effects in NAEP. Journal of Technology, Learning, and Assessment, 5(2). Retrieved from http://www.jtla.org

Katsioloudis, P. (2015, May-July). Aligning technology education teaching with brain development. Journal of STEM Education: Innovations and Research, 16(2), 6-10. Retrieved from http://www.jstem.org/index.php/JSTEM/article/view/1658

Marlow, J.M. (2011). Remediating composition: Landmark pedagogies meet new media practices (Doctoral dissertation). Retrieved from ProQuest. (3454518)

Marzouk, Z., Rakovic, M., Liaqat, A., Vytasek, J., Samadi, D., Stewart-Alonso, J., … Nesbit, J.C. (2016). What if learning analytics were based on learning science? Australasian Journal of Educational Technology, 32(6), 1-18. Retrieved from https://ajet.org.au/index.php/AJET/article/view/3058

McNabb, M.L. (2005). Raising the bar on technology research in English language arts. Journal of Research on Technology in Education, 38(1), 113-119. Retrieved from https://files.eric.ed.gov/fulltext/EJ719940.pdf

Miller, S. (2013). A research metasynthesis on digital video composing in classrooms: An evidence-based framework toward a pedagogy for embodied learning. Journal of Literacy Research, 45(4), 386-430. doi:10.1177/1086296X13504867

Singer, L.M., & Alexander, P.A. (2017). Reading on paper and digitally: What the past decades of empirical research reveal. Review of Educational Research, 87(6), 1007-1041. doi:10.3102/0034654317722961

Yan, P., Slator, B.M., Vender, B., Jin, W., Kariluoma, M., Borchert, O., … Marry, A. (2013). Intelligent tutors in immersive virtual environments. In Sampson, D.G., Spector, J.M., Ifenthaler, D., & Isaias, P. (Eds.), Proceedings of the IADIS International Conference on Cognition and Exploring Learning in the Digital Age (CELDA 2013) (pp. 109-116). IADIS Press.

Zuga, K.F. (2004). Improving technology education research on cognition. International Journal of Technology & Design Education, 14(1), 79-87.

2 Replies to “Reading, Writing, and Digital Age Assessment”

  1. Hi Stephanie-

    Thank you for the thought-provoking post. Have you heard of Actively Learn? It’s a website similar to the system mentioned in the beginning of your post. You can direct students to a story, novel, or article and they can interact with the reading through short responses or multiple choice questions. Data is then returned to the teacher and can be integrated with Google Classroom. I used the free version a bit with my 8th graders, but like you mentioned, a significant portion preferred paper. I can’t say I disagree. For fiction or nonfiction that I’m reading for fun, I like using Kindle. But when I am reading a content-related text for teaching, I prefer to get a paper copy. As your students mentioned, I think there is a cognitive change that occurs when reading and taking notes on paper. However, I see a value in utilizing both and providing students with choice.

    Lauren

  2. Stephanie, Thanks again for asking such an important question. As we weigh the affordances of new technology for education, will we stop to consider the medium of delivery? As a teacher with a 2:1 student-to-device ratio in class and as a learner, I wonder how our reading and writing is affected by the fact that most of what I read every day is on screens. I can relate to your students (thanks for sharing those artifacts from your survey!) when they talk about the feel of paper. I miss it, too. Thanks for this investigation and bringing to light some of the early findings in this area!
