Stitz & Zeager Open Algebra 3rd Edition

Can OER improve learning outcomes?

The June issue of the International Review of Research in Open and Distributed Learning (IRRODL) is dedicated to Open Educational Resources (OER). Several of the studies focus specifically on OER and student learning outcomes.

In “Exploring Open Educational Resources for College Algebra,” Marcela Chiorescu of Georgia College describes a case study in which the instructor of a hybrid college algebra course switched from a commercial textbook and software after the first two semesters to an open text with supplemental low-cost software for one semester, and then back again to the commercial content.

For the semester using OER, Chiorescu estimates students collectively saved over $13,500, spending approximately 75% less than students using the commercial text and software. Certainly one of the main advantages of adopting OER is lowering costs to students.
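Taken together, the two figures imply rough totals for each semester. This is back-of-the-envelope arithmetic only; the article itself reports just the savings and the percentage:

```python
# Rough arithmetic implied by the reported figures (illustrative only):
# if $13,500 in collective savings represents a 75% reduction in spending,
# we can back out the implied totals for each semester.
savings = 13_500      # collective savings reported for the OER semester
reduction = 0.75      # students spent ~75% less than with commercial materials

commercial_total = savings / reduction    # implied commercial-semester spend
oer_total = commercial_total - savings    # implied OER-semester spend

print(f"Implied commercial-semester spend: ${commercial_total:,.0f}")  # ~$18,000
print(f"Implied OER-semester spend: ${oer_total:,.0f}")                # ~$4,500
```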

However, there may be other advantages more closely tied to student success. Analyzing the grade distribution over the four semesters, the percentage of students earning a grade of C or better was significantly higher (84.3%) for students enrolled during the OER semester than in the previous or subsequent semesters. The percentage of students earning an A in the course was also higher for those using OER. In addition, the OER sections reported significantly fewer withdrawals.

The decision to return to commercial content after one semester was related to technical issues with the low-cost software (quizzes locking up and slow download speeds). Chiorescu was also concerned that the low-cost software was not comparable to the commercial version, due to a “lack of resources.” To compensate for the deficiency of course materials, she developed a LibGuide to accompany the course, including supplemental videos and tutorials.

In another study, “The Impact of Enrollment in an OER Course on Student Learning Outcomes,” Kim Grewe and Preston Davis of Northern Virginia Community College compared learning outcomes for students enrolled in OER sections of an online history course with those of students in a similar number of non-OER sections over two semesters. The study took into account students’ cumulative GPA and, as expected, found a correlation between prior academic achievement and course achievement. However, an even stronger correlation was found between student achievement and enrollment in an OER section.

Both studies build upon previous research on the efficacy of OER and student achievement (Grewe & Davis), finding that students enrolled in courses using OER perform as well as, if not better than, students enrolled in non-OER courses. In addition, OER-supported courses are more affordable, and their students are more likely to enroll in a higher number of credit hours per semester, thereby achieving their academic goals in a more timely manner.

Another subtle, but important, takeaway was the use of LibGuides to supplement OER textbooks. One of the challenges of adopting OER is that the open textbook may not include all of the supplemental materials that oftentimes accompany the commercial texts. LibGuides offer the opportunity to engage the academic librarian in the course design process and potentially improve the overall quality of OER course offerings.

References:

Stitz, C., & Zeager, J. (2013). College Algebra (3rd ed.). Available at http://www.stitz-zeager.com/

Chiorescu, M. (2017). Exploring Open Educational Resources for College Algebra. The International Review of Research in Open and Distributed Learning, 18(4). doi:10.19173/irrodl.v18i4.3003. Available at: http://www.irrodl.org/index.php/irrodl/article/view/3003/4223

Grewe, K., & Davis, W. P. (2017). The Impact of Enrollment in an OER Course on Student Learning Outcomes. The International Review of Research in Open and Distributed Learning, 18(4). doi:10.19173/irrodl.v18i4.2986. Available at: http://www.irrodl.org/index.php/irrodl/article/view/2986/4211

Keeping score at relay race for Melrose Running Club

Measuring Student Success

For many years, community colleges have focused on providing access to everyone seeking a post-secondary education. More recently, the narrative has shifted to completion and how long it takes students to complete. The literature tells us the longer a student is in college, the less likely she is to complete. Completion rates for first-time, full-time degree-seeking students are low, especially at community colleges. Generally speaking, when we measure student success, what we are really talking about is completion rates.

The fact is that many, if not most, of our students are not first-time, full-time degree-seeking students. Many have already completed a college education, some having attained bachelor’s and master’s degrees, before coming back to start over in a new career. More and more of our students are enrolling part-time. Others are not seeking a degree at all but are instead looking to update their job skills by taking a course or two. How do we measure student success for these learners?

California Community Colleges studied these non-completers over a two-year period. Referred to as Skills Builders, these students enrolled in a limited number of courses for the express purpose of enhancing their job skills or moving up the ladder within their careers. The study found that the average Skills Builder improved their salary by 13.6%, an average of $4,300 per year. Clearly these students were successful in meeting their educational goals.
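Those two figures also let us back out what they imply about the typical Skills Builder’s starting pay. Again, this is back-of-the-envelope arithmetic only; the study itself reports just the percentage and the dollar amount:

```python
# If a 13.6% salary improvement averages $4,300/year, the implied
# average baseline salary is roughly $31,600 (illustrative arithmetic only).
raise_pct = 0.136
avg_raise = 4_300
baseline = avg_raise / raise_pct
print(f"Implied average baseline salary: ${baseline:,.0f}")  # ~$31,618
```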

EDUCAUSE Review recently published a collection of short essays entitled Student Success: Mission Critical. In the introduction, John O’Brien says, “If students don’t succeed, colleges and universities don’t succeed. Our full attention must be concentrated on the mission-critical goal of helping students define – and meet – their educational goals.”

To my way of thinking, this is what we should be talking about when we talk about student success: “helping students define – and meet – their educational goals.”

Regardless of whether students come to college to take a few courses to improve their employability, complete a program of study, or transfer to a university, achieving their educational goals requires persistence. We often hear the terms “persistence” and “retention” used synonymously. The National Center for Education Statistics (NCES), however, differentiates between the terms: “retention” is an institutional measure and “persistence” a student measure. In other words, institutions retain and students persist.

This is an important distinction when it comes to measuring student success. As an institution, we can measure retention: Did the student drop or complete? Has the student continued their program of study? Whether the learner persists, however, is really up to the student. What we can, and should, do is create a learning environment that encourages student persistence.

“Tinto posits that students are more likely to remain enrolled in an institution if they become connected to the social and academic life of that institution” – Community College Research Center (CCRC).

The CCRC study found that community college students who make academic connections “create a sense of belonging and attachment that seems to encourage persistence.”

In Student Success: Mission Critical, George Mehaffy reminds us not to avoid “the academic heart of the enterprise”: “the core of the enterprise is the curriculum and particularly the classroom. Some people avoid tackling that area because it is likely the most difficult. However, substantive change in student success outcomes must include attention to what happens to students in classrooms and in their academic journeys.”

We can measure student success in different ways: retention is measured by the quantity of students who continue and complete, but persistence is measured by the quality of the student’s experience – whether they belong here and how much we care.

References:

http://www.aacc.nche.edu/AboutCC/Trends/Pages/completion_report.aspx

https://www.insidehighered.com/news/2016/03/04/california-community-colleges-find-new-way-measure-success-noncompleters

http://er.educause.edu/articles/2017/5/student-success-mission-critical

http://ccrc.tc.columbia.edu/media/k2/attachments/exploration-tintos-integration-framework.pdf

Teen on Smartphone

Smartphone Learning

For the past several years, the Horizon Report has listed mobile learning, in one form or another, as an emerging educational technology (e.g., mobile computing, mobile apps, social media, BYOD, mobile learning). Mobile technologies have changed over the years: from the early PDAs, BlackBerrys, and feature phones with texting capability and cameras, to tablets and eReaders, to the ubiquitous smartphones of today. According to the ECAR 2016 Study of Undergraduate Students and Information Technology, 96% of undergraduate students now own a smartphone. Smartphones have clearly emerged as the mobile technology of choice, while tablet, eReader, and wearable technology ownership has dropped off.

Chart: Undergraduate Smartphone Ownership

Despite near-universal device ownership, students have yet to fully embrace the smartphone as a tool for learning. The ECAR study indicates that most students (approximately 80%) use their smartphones for one or more classes, while only 46% consider them “essential for coursework,” compared to 93% for laptops. This is understandable considering that many online courses tend to be reading and writing intensive. The size of the screen and the necessity of “typing” on a virtual keyboard can make reading and writing on a smartphone a laborious task.

The top three ways students report using academic technology are accessing coursework more easily (72%), communicating with other students (65%), and communicating with their instructors (60%): in other words, student-to-content, student-to-student, and student-to-instructor interactivity. Anderson’s Interaction Equivalency Theorem states that “deep and meaningful formal learning is supported as long as one (or more) of the three forms of interaction is at a high level.”

What if we were to design the course with the smartphone learner in mind? Not necessarily that the course must be taken using a smartphone, but that the learner who uses a smartphone as their primary technology would not be disadvantaged. What would we then need to do differently?

Considering student-to-content interaction, rather than delivering content primarily as text, video and audio formats might prove more mobile-friendly. Smartphones are great for watching short videos or listening to music. Video and audio files (podcasts) can be easily created using various mobile apps or web-conferencing solutions (e.g., Voice Recorder, Zoom.us, Skype). By using Google Drive or Archive.org, media can be made available for students to download for offline use, for times when they may be without a WiFi connection.

The ability to capture and share images, audio, and video via the smartphone camera can be a powerful tool for both student-to-content and student-to-student interactivity. By sharing or attaching photos, screenshots, video, or audio files, learners can create authentic artifacts. Such media can be submitted to an e-portfolio or blog (e.g., Tumblr) for peer review or assessment of learning.

Most social media technologies (SMT) are designed to work with the smartphone as well as with desktop browsers. By replacing LMS threaded discussions with SMT (e.g., GroupMe), messaging, group discussion, and the sharing of news, scholarly articles, video, and more become simple and familiar processes.

Scheduling virtual office hours using Skype, Zoom, or Hangouts can increase student-to-instructor interactivity and improve student satisfaction. Skype can also be used for asynchronous video and audio communication, supporting teaching presence and instructor immediacy.

Despite the pervasiveness of smartphone ownership among today’s undergraduate students, their use of the technology for academic purposes has not kept up with the rate of adoption. One reason students may not leverage their mobile devices for formal learning is that educators have yet to fully “harness” the affordances of the technology for teaching and learning.

References:

Brooks, D.C. (2016). ECAR Study of Undergraduate Students and Information Technology 2016. EDUCAUSE.

Anderson, T. (2003). Getting the Mix Right Again: An Updated and Theoretical Rationale for Interaction. IRRODL.

Cochrane, T., & Bateman, R. (2010). Smartphones give you wings: Pedagogical affordances of mobile Web 2.0. Australasian Journal of Educational Technology.

Can Transactional Distance Theory inform instructional design for CBE?

For the past several years, online learning in higher education has focused on delivering a highly structured learning environment. Courses are typically designed to deliver content sequentially (week one, week two, etc.) with required reading assignments, discussion forums, quizzes, and so on. This approach has helped new-to-online learners navigate the online course, learn the most common learning management system (LMS) tools, and interact with their classmates in ways that model the traditional classroom. However, with emerging delivery models such as Competency-Based Education (CBE) and Massive Open Online Courses (MOOCs) gaining traction in higher education, it may be time to consider design approaches that better support the autonomous learner.

Billiards Table, CC-BY by Oliver Clark on Flickr

Transactional distance doesn’t refer to the distance between instructor and student in terms of space or time, but rather to the distance in terms of transactions, or interactions, between the learner, instructor, and content. According to Transactional Distance Theory, the “degree of structure and dialogue [required] in a course depends on the students’ ability to learn autonomously.” Students who lack the necessary skills to self-regulate their learning may require a more structured learning environment, whereas autonomous learners are better positioned to succeed in a less structured environment (Koslow & Piña, 2015).

The inverse relationship between learner autonomy and course structure is important to keep in mind as we design for emerging distance learning models. In the CBE environment students enter a program of study at different times and progress at different rates. In order to support the flexibility required by CBE we may need to abandon the more traditional lock-step approach used over the past several years for designing online learning environments in higher ed.

Koslow and Piña (2015) suggest that in order for autonomous learners to be successful, they must possess self-regulated learning (SRL) strategies. The self-regulated learner has the ability to plan their own approach to learning, as well as to review and evaluate their own understanding.

Online courses designed with SRL in mind might begin by offering pre-assessment opportunities for students to discover what they do or do not know on a given subject. Assessment feedback could include contextual links to additional resources / material for students to review. The use of practice quizzes along with digital badging systems and other formative assessment tools can assist students in measuring their own progress, as well as providing motivational support. Journals, blogs, and e-portfolios could replace the discussion forum commonly used in the “traditional” online course – providing an autonomous learning tool to assist with learner reflection.

As more colleges explore delivery models offering less structure and fewer opportunities for dialogue, we need to consider instructional design approaches that can support student success in environments with greater learner autonomy.

References:

Koslow, A., & Piña, A. (2015). Using Transactional Distance Theory to inform online instructional design. International Journal of Instructional Technology and Distance Learning, 12(10). http://www.itdl.org/Journal/Oct_15/Oct15.pdf#page=67&zoom=auto,87,688

Weimer, M. (2010). What it means to be a self-regulated learner. Faculty Focus, Magna Publications. http://www.facultyfocus.com/articles/teaching-and-learning/what-it-means-to-be-a-self-regulated-learner/

Using LMS Data to Improve Self-regulated Learning

A recent study examined learning management system (LMS) log files to look at course interactions for 530 online students and found that students’ self-regulated study habits significantly influenced course achievement. The study focused on interactions related to self-regulated learning, including habits such as maintaining a regular study schedule, submitting assignments on time, logging in to the course frequently, and demonstrating that course content had been read (You, 2016).

It may seem obvious that students who possess good study habits are more likely to succeed in the online learning environment (or any learning environment, for that matter). For me, however, the important takeaway from the study is the potential for leveraging the data collection systems and early-alert functionality within our LMS to reinforce self-regulated learning habits in those students who may be at risk of dropping or failing their online course.
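To make this concrete, indicators like the ones You (2016) examined could be computed from a raw LMS event export along the following lines. This is a minimal sketch in Python: the column names (user_id, timestamp, event_type, due_date) are hypothetical, as every LMS has its own export schema.

```python
# Minimal sketch: summarizing self-regulated-learning indicators from an
# LMS event log. Column names are hypothetical, not a real LMS schema.
import pandas as pd

def srl_indicators(log: pd.DataFrame) -> pd.DataFrame:
    """Summarize per-student study habits from raw LMS events."""
    log = log.copy()
    log["timestamp"] = pd.to_datetime(log["timestamp"])

    rows = []
    for user, events in log.groupby("user_id"):
        logins = events.loc[events["event_type"] == "login", "timestamp"].sort_values()
        gaps = logins.diff().dt.days.dropna()

        submissions = events[events["event_type"] == "submission"]
        late = (submissions["timestamp"]
                > pd.to_datetime(submissions["due_date"])).sum()

        rows.append({
            "user_id": user,
            "login_count": len(logins),      # frequency of course logins
            "login_gap_std": gaps.std(),     # regularity of study schedule
            "late_submissions": int(late),   # timeliness of assignments
            "content_views": int((events["event_type"] == "content_view").sum()),
        })
    return pd.DataFrame(rows)
```

An early-alert process could then sort or threshold these columns to surface students whose habits predict trouble, in the spirit of the Retention Center rules described below.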

The Retention Center is an early-alert feature included in our campus LMS, Blackboard Learn. The Retention Center comes set up with four default rules: Course Access, Activity, Grades, and Missed Deadlines. Although these “rules” align fairly well with the study habits mentioned in the study, we are able to customize them, as well as design new rules that can be added to the Risk Table to further support and reinforce study habits.

Screenshot: Blackboard Retention Center – Rules Customization

The Course Access and Activity rules align with the frequency-of-logins habit. The default rule is five days since last access, but in light of the research, I suggest shortening the number to two or three days. The User Activity default rule is set to twenty percent below average for a week; again, I would suggest changing it to three or four days. The timely submission of assignments and proof of reading course content will depend on setting up corresponding columns in the grade book. By developing assignments that require students to review material (possibly video content) and then complete a related assessment, grade alerts can be triggered in the corresponding grade book column. By tweaking the default rules or adding new rules, the instructor can quickly identify students who display poor study habits and immediately reach out to reinforce good habits that support student success.
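The Retention Center itself is configured through Blackboard’s web interface, not code, but the rule logic described above amounts to a few simple thresholds. Here is a rough sketch of that logic, with hypothetical field names standing in for data an instructor might pull from LMS reports:

```python
# Illustrative only: a sketch of the alerting thresholds described above.
# Field names are hypothetical; Blackboard's Retention Center is configured
# through its web interface, not an API like this.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class StudentActivity:
    name: str
    last_access: datetime       # when the student last entered the course
    activity_pct: float         # this week's activity as % of the course average
    missed_deadlines: int       # graded items submitted late or not at all

def triggered_alerts(s: StudentActivity, now: datetime,
                     max_days_inactive: int = 3,   # tightened from the 5-day default
                     min_activity_pct: float = 80.0) -> list[str]:
    """Return the Retention Center-style alerts this student would trigger."""
    alerts = []
    if now - s.last_access > timedelta(days=max_days_inactive):
        alerts.append("course access")
    if s.activity_pct < min_activity_pct:   # i.e., 20% or more below average
        alerts.append("activity")
    if s.missed_deadlines > 0:
        alerts.append("missed deadlines")
    return alerts
```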

In addition to the Retention Center, the Performance Dashboard tool can be used to view the content items a student has accessed, along with the number and length of posts the student has submitted to course discussion forums. By requiring students to review content and submit substantive posts within a discussion forum, the instructor can encourage the reading of course content, another study habit predictive of student achievement. Encouraging students to subscribe to the discussion forums can further support regular and substantive interaction with classmates.
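The same kind of quick pass works for the discussion metrics the Performance Dashboard surfaces. As a sketch (with made-up post records and thresholds, not Blackboard’s actual export format), flagging thin participation might look like this:

```python
# Sketch: flag students whose forum participation is thin.
# The post records and thresholds here are hypothetical examples.
from statistics import mean

posts = [
    {"user": "alice", "words": 180},
    {"user": "alice", "words": 95},
    {"user": "bob", "words": 12},
]

def thin_participation(posts, min_posts=2, min_avg_words=50):
    """Return users below the post-count or average-length thresholds."""
    by_user = {}
    for p in posts:
        by_user.setdefault(p["user"], []).append(p["words"])
    return [user for user, lengths in by_user.items()
            if len(lengths) < min_posts or mean(lengths) < min_avg_words]

print(thin_participation(posts))  # ['bob']
```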

In a survey of unsuccessful online students at Monroe Community College, the number one challenge students reported experiencing in their online courses was: “I got behind and it was too hard to catch up” (Fetzner, 2013). By designing online courses that leverage LMS analytics features to identify and support at-risk students within the first few weeks, new-to-online students can develop the skills and habits required to be successful in the online environment.

References:

You, J. W. (2016). Identifying significant indicators using LMS data to predict course achievement in online learning. The Internet and Higher Education, 29, 23-30.

Fetzner, M. (2013). What do unsuccessful online students want us to know? Journal of Asynchronous Learning Networks, 17(1), 13-27.

The Ninety-day Action Plan

A recent article in Forbes encourages business leaders to consider putting together 90-day action plans, to be reviewed and updated every 90 days, in an effort to improve organizational agility.

Falguni Desai: “Strategic plans that cast a five year look into the future provide a sense of calm. They coax management teams into thinking that there is still time to put the plan into action.” – The Digital Era is Crippling the Five-year Strategic Plan

The author points out that in this digital age the role of the strategist is changing; if we are to keep up with change, we need to become “futurists.” Keeping up with current trends, we should consider shorter planning cycles, allowing us to be better prepared to change course if necessary.

Speaking of looking toward the future, one of the publications I look forward to reading each year is the Horizon Report, put out by the New Media Consortium (NMC). The Horizon Report is put together by a panel of experts offering insights into what is emerging in educational technology on the college campus. The report identifies key trends and significant challenges in technology adoption, as well as emerging technologies for the immediate future (one year or less), mid-term (two to three years), and longer term (four to five years) in higher education.

Horizon Report 2016 Trends, Challenges, and Technologies for Higher Ed, CC-BY New Media Consortium

Bring Your Own Device (BYOD), Learning Analytics, and Adaptive Learning are listed in the current report as this year’s emerging technologies. Augmented and Virtual Reality and Makerspaces have a time-to-adoption of two to three years. In the four-to-five-year horizon, Affective Computing and Robotics are on the list.

The Horizon Report is not meant to replace the strategic plan, but it can offer some useful insight into current and emerging trends in higher ed. In fact, our campus has implemented a number of these same strategies: in response to the BYOD trend we have implemented responsive web design and adopted mobile apps for our LMS. We are also planning to include a Makerspace as part of the Health Technologies capital project over the next two years.

Six years ago, I would have agreed with the 2010 Horizon Report that Open Educational Resources were set to take off; yet today, most campuses have still not fully embraced OER, and it is no longer listed in the report. Five years ago, Competency-Based Education (CBE) was not listed as one of the trends, but with today’s focus on the campus completion agenda, getting students to degree faster has become a priority, and that means considering new online delivery models.

The five-year strategic plan may no longer make sense for ed tech. Perhaps by employing such practices as the twelve-to-twenty-four-month strategic plan and ninety-day action planning cycles, we can improve institutional agility and realize our efforts to embrace change and foster innovation on the college campus.

Advancing a Culture of Innovation

According to the 2017 Horizon Report, “Advancing Cultures of Change and Innovation” is one of the long-term trends to watch in Higher Ed over the next five years.

“It will require visionary leadership to build higher education environments that are equipped to quickly change processes and strategies as startups do. If these organizational models are designed well, universities can experience more efficient implementation of new practices and pedagogies.” – 2016 Horizon Report

Changing Direction by Kapapuka Argazkiak on Flickr, CC-BY-NC-SA

The report references Eric Ries’s book The Lean Startup (2011) as an example of an approach educators might employ to advance cultures of change on the college campus. The process is a business model for entrepreneurs to rapidly design and develop new ventures; it involves a cycle of deploying “lean” (less than fully developed) prototypes, then collecting feedback from consumers, which in turn informs the next step in the development and design process. Fully developed products and services may have undergone several iterations, oftentimes resulting in a final product that has changed significantly from the original prototype but has proven more attractive to the consumer.

This idea of rapidly cycling through numerous iterations is also used in DevOps, an approach to application development that brings together software developers (coders) and information technology (operations) around the goals of improving quality and shortening time to market. An important characteristic of the DevOps approach is its focus on cultural change.

The word “culture” comes from the Latin cultura, meaning to cultivate or prepare for growth. It seems to me this serves as an excellent metaphor for fostering change in the organization.

“When a college is undertaking a broader reform effort, a culture of inquiry can be used to define a framework for action, cultivate the engagement of a broad range of practitioners and identify discrete action steps at various levels of the institution.”

The Research and Planning Group (RPgroup) of California Community Colleges published a paper, Building a Culture of Inquiry: Using a Cycle of Exploring Research and Data to Improve Student Success (2010). The project was funded by Completion by Design and describes the use of an Applied Inquiry Framework: a cycle of evidence-based improvement consisting of five stages:

  1. Defining a focus of inquiry
  2. Gathering relevant and meaningful evidence
  3. Engaging a broad range of practitioners and exploring the evidence
  4. Translating collective insight into action
  5. Measuring the impact of action

This cycle of evidence can certainly be applied to advancing a campus culture of innovation. As an example, consider how the adoption of Open Educational Resources (OER) may impact online student success. This would serve as stage one: defining our focus of inquiry. In stage two, we gather research about OER and student success (e.g., the multi-institutional study of the impact of open textbook adoption on the learning outcomes of post-secondary students by Fischer, Hilton, and Wiley, 2015).

The third stage, where we bring together a broad range of practitioners to explore the evidence, is critical in advancing a culture of innovation. It is at this stage that we share insights and explore and challenge our collective beliefs and assumptions, in an effort to get to the fourth stage, where we translate this collective insight into action.

In our scenario, we would invite faculty who use OER as well as those who are reluctant to adopt open texts, for whatever reason. Instructional designers, librarians, and others would be invited to the table as well to engage in discourse and inquiry. Unfortunately, in higher education we often work in isolation. Even our classrooms, both virtual and physical, are essentially closed environments. However, they could potentially become environments of inquiry and experimentation, where not only students learn, but the faculty and the campus community learn as well.

Stage four is where we test our assumptions and collect feedback and data. If we already have instructors using OERs, what do students think of the course and materials? Is there any data on student outcomes that we can compare to similar courses where the materials are not used? Such feedback need not be especially burdensome. The “lean” approach is used to test our assumptions and evaluate the feedback. A simple survey or focus group may provide enough information for the next stage.

The fifth stage, measuring the impact of our action, is not actually the final stage. In a culture of inquiry and innovation, the feedback we collect is used to inform subsequent iterations of our innovation. We may find that students appreciate access to the free digital text but may, in fact, be printing out each chapter as the course progresses. How does this inform our next iteration? Should we consider offering a low-cost print alternative?

It seems to me the Applied Inquiry Framework is similar in many respects to Design-Based Research (DBR), a qualitative research approach used in authentic educational settings. The goal of DBR is to learn about learning in real-world settings, which are often complex and unique environments. The virtual classroom is such a setting, and improving learning in the online environment is an iterative process. If our goal is to advance a culture of change and innovation, we will need to shift to an approach that fosters experimentation, and to share with others what we are learning even as we are learning it.

Virtual Office Hours

In their paper “Using Virtual Office Hours to Enhance Student-Faculty Interaction,” Lei Li and Jennifer Pitts (2009) from Colorado State University found that students enrolled in courses that offered virtual office hours experienced a higher level of satisfaction than students enrolled in traditional on-campus courses. Apparently, just making virtual office hours an option, regardless of whether students availed themselves of the opportunity, was enough to increase their comfort level with the instructor and the online course.

Skype & Coffee by I. Keller on Flickr, CC-BY-NC-SA

Li, L., & Pitts, J. (2009). Does it really matter? Using virtual office hours to enhance student-faculty interaction. Journal of Information Systems Education, 20(2). http://jise.org/Volume20/20-2/Pdf/V20N2P175-abs.pdf

What can we learn from the unsuccessful online student?

We have quite a bit of information on what it takes to be a successful online student, but we may also be able to learn a few things from a couple of studies focusing on the unsuccessful online student.

Great Expectations by Greg Myers on Flickr, CC-BY-NC-SA

Monroe Community College in Rochester, NY, surveyed 438 unsuccessful online students over a ten-year span about their online learning experience. “Unsuccessful students” were defined as those who either failed or withdrew from the online course (Fetzner, 2013).

Online course retention rates are, on average, between 5 and 10% lower than for on-campus courses. Findings from the study show that the group of students least likely to complete is the first-time, full-time students, with a difference in the success rate of 32%. The top three reasons given by the unsuccessful students for dropping or failing their courses were “falling behind in coursework,” “personal problems,” and conflicts with work and/or family responsibilities (Fetzner, 2013).

In another study examining student engagement and online persistence, researchers from the University of Georgia collected data on 423 students enrolled in thirteen online course sections over three semesters. The withdrawal rate for the asynchronous online courses was very high at 32%. Of those students who stayed enrolled, only 75% successfully completed the course (Morris, Finnegan, & Wu, 2005).

Findings from the University of Georgia study indicated that unsuccessful completers (those finishing the course with a D or F grade) were much less likely to participate or engage meaningfully in course discussions and postings.

It is apparent from both studies that students may enter the online learning environment with very unrealistic expectations of what it takes to succeed. I have heard anecdotally, on numerous occasions, of students who enrolled in an online course for the first time because they thought it would be “easier.”

Our campus is developing an “Introduction to Distance Learning” module in our LMS for students enrolling in an online or blended course for the first time at our college. At the moment they enroll in the course, they will be automatically enrolled in the module, and an email message will alert them that they need to complete it before their online course begins. The purpose of the module is to help students better understand what they can expect in taking an online course in regard to organizational skills, time-on-task, the amount of reading and writing required, and their access to, and comfort with, technology.

I would be interested to hear what other colleges have done to help mitigate the unrealistic expectations of the first-time online student.

References:

Fetzner, M. (2013). What do unsuccessful online students want us to know? Journal of Asynchronous Learning Networks, 17(1), 13-27.

Morris, L., Finnegan, C., & Wu, S. (2005). Tracking student behavior, persistence, and achievement in online courses. The Internet and Higher Education, 8, 221-231.

The value of Student Evaluations of Teaching

A few years back, a small group of faculty at our college was charged with redesigning a student survey of instruction for our online and hybrid courses. The resulting evaluation was a significant departure from the previous version, with much more focus on student effort and expectations.

Teacher Evaluation Form CC-BY-NC-SA by Kevin Lim on Flickr

Instead of asking students questions along the lines of “Was your professor prepared for class?” or “How knowledgeable was your instructor about the course material?” (questions designed to evaluate instructor performance), the new and improved version took into consideration student investment: What grade do you expect to earn? How much effort did you put into the assignments? How much time did you set aside for course study each week? These kinds of questions help students consider their own expectations and whether they align with their actual effort.

When we are asked about the quality of a thing (whether a product, service, or experience), we are actually being asked whether it met our expectations, and those expectations may not be very realistic or accurate, depending on past experience. When I talk with a group of faculty about innovating instruction in their courses, I suggest they tell students up front that they are trying something new and ask for their cooperation. I also suggest they warn their deans and department chairs that student evaluations may be negatively affected; students don’t always appreciate having to work at their own learning.

Unfortunately, student satisfaction surveys are all too often used as part of the faculty evaluation process. This is not only unfortunate but unfair, as there is evidence that such assessments are not very useful or accurate for this purpose. A recent Inside Higher Ed article, “Bias Against Female Instructors,” reported on a study showing that male instructors tended to be rated higher than female instructors regardless of the gender of the student.

In my opinion, student evaluations can be useful for helping instructors, departments, and programs improve course design and delivery, but they need to focus not on student satisfaction but on instructional feedback. And they need to be administered not at the end of the semester but much earlier in the process. By asking students to provide feedback on course delivery and design earlier in the semester, faculty have the opportunity to improve instruction and, at the same time, get a sense of how their students are doing in regard to learning the material.