Lessons Learned: Graduate Student Assessment

This article was published earlier:

Page, L., & Cherry, M. (2017). Lessons learned: Graduate student assessment. Assessment Update: Progress, Trends, and Practices in Higher Education, 29(6), 1-15.

 

Similar to many other graduate programs, the Master of Arts in Organizational Leadership (MAOL) program at Lewis University had more questions than answers when we began our assessment efforts in 2014. As assessment was a relatively new buzzword around campus, we understood the need to be purposeful and intentional about student learning in our graduate program, but we did not know how to begin measuring or assessing what our students were learning as a result of the curriculum.

In addition, most of the dialog around campus was focused on undergraduate assessment. This left us wondering, “What, if anything, should be different about assessment at a graduate level?” To date, we still find this question difficult to answer, as the vast majority of published research and rubrics related to student assessment focus on the undergraduate learner (e.g., the AAC&U VALUE Rubrics).

For context, the MAOL program is staffed by six full-time faculty members with assistance from several adjunct faculty. As of Spring 2016, the program supported 234 graduate students in both online and face-to-face formats. Both formats use the same student learning outcomes (SLOs), textbooks, course materials, and assignments, which aided our efforts as we planned for assessment. MAOL student demographics include 192 females and 42 males with an average age of 37; nearly two-thirds of students take courses online rather than in the face-to-face format.

Not knowing exactly where to begin our journey, we heeded some well-meaning advice from fellow faculty members at the University: “Just start somewhere.” Fortunately, we had already established many of the core components of a successful assessment program (Miller and Leskes 2005): program goals and a statement of mission, seven fairly well-written SLOs, and a mapping of the learning outcomes from each master’s-level course in the program (of which there are 23) to the seven program SLOs. By most standards, we were ahead of the game at the University and doing quite well conceptualizing the notion of assessment on campus.

Nevertheless, we had quite a bit of learning ahead of us. The department had not decided which of the seven SLOs would be assessed or what artifact or assignment we would use to assess student learning, nor had we collected any data. Following the faculty advice we were given, we “just started somewhere.”

This led to our first attempt at assessment, in which we chose to measure two of the seven SLOs for the program. Faculty decided that the final exam from our first course in the program, Leadership Theories, would be an ideal assignment, as students had to demonstrate their mastery of leadership theories and apply concepts such as conflict, teamwork, and change throughout their responses. See Figure 1 for a listing of the seven SLOs originally used by the program.

Figure 1: Original MAOL Student Learning Outcomes.

Lesson 1: Choose Your Assignment Carefully. Can you imagine the limitations of choosing a final exam from an entry-level course in the program? Can students truly show mastery of their learning after just one graduate-level course? For us, the answer was no.

While our first round of assessment revealed some difficulty in rating student learning, we benefited greatly from the open dialog about grading, the calibration of rating expectations, and the refinement of the instructions and expectations for the final exam assignment in our Leadership Theories course. For us, the richest part of the assessment process was the dialog among full-time faculty about continuous process improvement.

After another failed attempt at assessing two different SLOs (through another poorly chosen assignment), we decided that graduate students could best demonstrate their mastery of the program’s outcomes in the Capstone paper they complete in their final course, so we shifted our assessment to that assignment. In fact, we found that others had used Capstone projects successfully in this way as well (Gray, Boasson, Carson, and Chakraborty 2014; van Acker and Bailey 2011). Therefore, beginning in Spring 2015, we assessed all seven SLOs using students’ Capstone papers. To ensure our assessment was thorough and fair, we assigned two raters (both full-time faculty) per paper and averaged the two ratings for each SLO.
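
To make the two-rater averaging concrete, here is a minimal sketch in Python; the rubric scores, SLO labels, and data layout are illustrative assumptions rather than the department’s actual records or tooling.

    # Minimal sketch: average two faculty raters' rubric scores per SLO for one Capstone paper.
    # All scores below are invented placeholders, not real assessment data.
    rater_1 = {"SLO1": 3, "SLO2": 4, "SLO3": 2, "SLO4": 3, "SLO5": 4, "SLO6": 3, "SLO7": 3}
    rater_2 = {"SLO1": 4, "SLO2": 3, "SLO3": 3, "SLO4": 3, "SLO5": 4, "SLO6": 2, "SLO7": 3}

    # Average the two ratings for each SLO, as described above.
    averaged = {slo: (rater_1[slo] + rater_2[slo]) / 2 for slo in rater_1}
    print(averaged)  # {'SLO1': 3.5, 'SLO2': 3.5, 'SLO3': 2.5, ...}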

Using the Capstone assignment for assessment provided our department with additional insight and raised a few important considerations. First, we found that some of the SLOs were consistently more challenging to rate, which resulted in poor inter-rater reliability. Second, with two new faculty members in the department, we had wider discrepancies in grading than previously observed, so more training and grade calibration exercises were in order.
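
One simple way to quantify this kind of inter-rater consistency is an agreement rate between the two raters for each SLO. The Python sketch below uses hypothetical paired ratings and exact/adjacent agreement; it illustrates the idea and is not the department’s actual reliability analysis.

    # Sketch: agreement between two raters for one SLO across several Capstone papers.
    # The paired ratings are hypothetical; a stricter analysis might use Cohen's kappa instead.
    pairs = [(3, 3), (4, 3), (2, 2), (3, 4), (4, 4), (3, 3), (2, 3)]  # (rater 1, rater 2)

    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)

    print(f"Exact agreement: {exact:.0%}")        # identical ratings
    print(f"Adjacent agreement: {adjacent:.0%}")  # within one point on the 4-point rubric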

Before assessing students again, we made some significant changes to both our assessment process as well as the instructions provided in the Capstone Course. Specifically:

  • For the Capstone Course, we enhanced the clarity of our assignment instructions and expectations. We also added a video of full-time faculty describing the Capstone project and its purpose.
  • From a grading perspective, we, as a full-time faculty, held several calibration sessions discussing in depth how each SLO is defined and how it might “look” in a student’s paper.
  • Finally, informed by those calibration discussions, we revised the wording of our SLOs to be more direct and concrete. Below is an example of the changes we made to one student learning outcome.
      ◦ Original: Explore the foundational basis of ethical leadership with a focus on Lasallian and Servant Leadership.
      ◦ Updated: Explore the role of ethics in leadership.

Lesson 2: Choose Your Words Carefully. The wording of our SLOs, originally complex and “scholarly” (based upon Bloom’s taxonomy), detracted from our ability to consistently and accurately rate students’ learning. As a result, we updated our SLOs, as shown in Figure 2.

Figure 2: Updated MAOL Student Learning Outcomes.

Table 1 reveals the historical pattern of assessment data collected by the MAOL program, beginning with our earliest assessment using the Capstone paper in Spring 2015 and ending with the most recent collection of data in Spring 2016. Our target was for 80% or more of students to be rated a 3 or higher on the 4-point assessment rubric. We found that the calibration sessions held within the department helped us significantly in terms of increased inter-rater reliability and consistency of the ratings.
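
As a worked illustration of that 80% target, the short Python sketch below counts how many students earned an averaged rating of 3 or higher on one SLO; the ratings are invented for the example and are not the program’s data from Table 1.

    # Sketch: check one SLO's ratings against the 80% target (3 or higher on the 4-point rubric).
    # The ratings are invented for illustration only.
    averaged_ratings = [3.5, 3.0, 2.5, 4.0, 3.0, 3.5, 2.0, 3.0, 3.5, 4.0]

    met = sum(1 for r in averaged_ratings if r >= 3)
    share = met / len(averaged_ratings)

    print(f"{met} of {len(averaged_ratings)} students rated 3 or higher ({share:.0%})")
    print("Target met" if share >= 0.80 else "Target not met")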

Lesson 3: Talk about Ratings and Foster a Dialog about Student Learning. The Department of Organizational Leadership found that the more we talked, the more we learned about how each faculty member graded, interpreted the rubric, and defined the SLOs. This rich dialog resulted in greater collaboration to revise the wording of our SLOs, greater appreciation for what our students experience in the program, and greater understanding of when strong student performance was demonstrated. We considered this a “win” for our department’s goal to increase student focus.

Table 1: MAOL Assessment Data by Student Learning Outcome Spring 2015-Spring 2016.

As seen in the table above, we had our most successful assessment of SLOs in Spring 2016. So, what changed between Fall 2015 and Spring 2016? The Department of Organizational Leadership:

  • Updated the wording of our SLOs (and eliminated one, resulting in a total of six current SLOs).
  • Used a simplified rubric (see Figure 3).
  • Continued calibration sessions to discuss ratings and increase inter-rater reliability.
  • Added clarity to assignment requirements and expectations.

Figure 3: Assessment rubric.

As we look ahead to the assessment efforts we are planning for Spring 2017, we will reflect on the following lessons learned. First, many students in the program succeeded in demonstrating the program’s learning outcomes, but not all. This means we still have room for improvement in making sure assignments are clear and expectations are well-defined. Second, the best “win” for assessment comes from ensuring student success; for our department, frequent discussions and rating calibration sessions helped make this happen. We all benefited from the assessment process, which continues to drive change and improvement across the department.

 

References

Gray, D. M., V. Boasson, M. Carson, & D. Chakraborty. 2014. Anatomy of an MBA Program Capstone Project Assessment Measure for AACSB Accreditation. International Journal of Business Administration, 6(1).

Miller, R., & A. Leskes. 2005. Levels of Assessment: From the Student to the Institution. A Greater Expectations publication. Washington, DC: AAC&U.

van Acker, L., & J. Bailey. 2011. Embedding Graduate Skills in Capstone Courses. Asian Social Science, 7(4).
