- 1) Does Missing Classes Decelerate Student Exam Performance Progress? Empirical Evidence and Policy Implications
- The study followed 389 undergraduate business students to understand whether missing classes affects not just exam scores, but students’ improvement over time. Instead of looking at one test score, the researchers examined performance progress—how much students improved between exams. The results showed that students who missed more classes improved less across exams. Even high-performing students (those earning A and B grades) experienced slower progress when they skipped classes. This challenges the common belief that “smart students can easily catch up.” Interestingly, the study found that gender did not significantly change the impact of absenteeism—both male and female students were negatively affected. The research also highlights financial pressure as a key reason for absenteeism, since many students work long hours to afford college. The study recommends attendance policies, daily quizzes, financial aid improvements, and more campus jobs to help students attend classes regularly and improve their academic growth over time. [reference no. 1 – Braun, K. W., & Sellers, R. D. (2012)]
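One way to read the “performance progress” outcome, offered here only as a schematic and not as the paper’s actual specification, is a between-exam gain regressed on absences:

$$\text{progress}_i = \text{Exam}^{(2)}_i - \text{Exam}^{(1)}_i, \qquad \text{progress}_i = \alpha + \beta\,\text{absences}_i + \varepsilon_i,$$

where the reported finding corresponds to $\hat{\beta} < 0$: more absences, smaller gains between exams.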
- 2) The comprehensive business exam: Usefulness for assessing instructional and student performance outcomes
- The study examines how the Comprehensive Business Exam (CBE) can be used to better assess what graduating business students have actually learned. The CBE is presented as an alternative to the widely used Major Field Test for Bachelor’s Degree in Business (MFTB), with the advantage of providing more detailed, course-level feedback rather than just scaled comparison scores. The researchers analyzed data from 48 senior business students and found a strong relationship between CBE scores, ACT scores, and GPA. In fact, ACT score was the strongest predictor of CBE performance, followed by GPA and gender. Male students scored slightly higher than female students. Together, these variables explained 82% of the variation in CBE scores (a schematic version of this regression appears after this summary). The study also found that accounting majors outperformed some other majors, which suggests that student ability levels differ across programs. Importantly, CBE results closely matched course grades in subjects like accounting and economics, but not in others. Instructor teaching methods and grading policies also influenced how well course grades reflected actual learning.
- Overall, the study shows that the CBE can be a powerful tool for evaluating student learning at both program and instructor levels. [reference no. 2 – Bagamery, B. D., Lasik, J. J., & Nixon, D. R. (2005)]
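A schematic of the regression implied by the summary, with the variable coding assumed here rather than taken from the paper:

$$\text{CBE}_i = \beta_0 + \beta_1\,\text{ACT}_i + \beta_2\,\text{GPA}_i + \beta_3\,\text{Male}_i + \varepsilon_i, \qquad R^2 \approx 0.82,$$

where the reported ordering of predictors corresponds to ACT contributing the most explanatory power, followed by GPA and then gender.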
- 3) How Much Is That Exam Grade Really Worth? An Estimation of Student Risk Aversion to Their Unknown Final College Course Grades
- The study introduces a creative classroom experiment to help students truly understand the concept of risk aversion using something that matters to them—exam grades. Instead of using money-based experiments, the researchers designed activities where students had to choose between accepting a “safe” guaranteed grade and taking a “risky” exam grade that could be higher or lower. Because some decisions actually affected their real grades, the experiment felt meaningful and realistic. First, students’ risk preferences were measured using a quiz-point lottery. Most students were found to be risk-averse, meaning they preferred safer outcomes over gambles. Then, across three exams (two hypothetical and one real), students chose their minimum acceptable “certain” grade instead of their unknown exam score.
- The results showed that risk aversion only strongly influenced decisions when the situation was real and affected actual grades. More risk-averse students were willing to accept lower guaranteed grades, while risk-loving students preferred to gamble. Overall, the experiment successfully made the abstract idea of risk aversion practical, engaging, and easier for students to understand. [reference no. 3 – Dave, C., Eckel, C., Johnson, C., & Rojas, C. (2010)]
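In standard expected-utility terms, the “minimum acceptable certain grade” elicited here is a certainty equivalent. A textbook statement of the idea, in our notation rather than the paper’s:

$$u(c^*_i) = \mathbb{E}\!\left[u(G_i)\right], \qquad \text{risk aversion} \iff c^*_i < \mathbb{E}[G_i],$$

where $G_i$ is the student’s unknown exam grade and $c^*_i$ the guaranteed grade they would just accept; the more risk-averse the student, the lower the $c^*_i$ they settle for, matching the reported behavior.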
- 4) Capability matters: Relating student achievement on the comprehensive business exam to skill and effort.
- The study explores how students’ natural ability and personal effort work together to shape academic success. Using ACT (or converted SAT) scores as a measure of entry-level skill and the Comprehensive Business Exam (CBE) as a measure of achievement, the researchers found a strong connection between the two. ACT scores were the strongest predictor of CBE performance, showing that students who enter college with stronger academic skills tend to perform better later. However, ability alone does not guarantee success.
- Drawing on Duckworth’s idea that achievement equals skill multiplied by effort, the study shows that students who combine strong preparation with consistent hard work perform at or above expectations. Some lower-ACT students outperformed peers because of dedication and persistence (grit), while some higher-ACT students underperformed due to lower effort. The findings suggest colleges should provide extra academic support to lower-skill students and create engaging learning environments that encourage stronger study habits, helping more students turn potential into real achievement. [reference no. 4 – Chowdhury, M. I., & Wheeling, B. (2013)]
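Duckworth’s relation, paraphrased in the summary above, is usually written as a pair of multiplicative equations; they are reproduced here only to make the logic explicit:

$$\text{skill} = \text{talent} \times \text{effort}, \qquad \text{achievement} = \text{skill} \times \text{effort} = \text{talent} \times \text{effort}^2,$$

so effort enters the product twice, which is why a persistent lower-ACT student can overtake a higher-ACT peer who puts in less work.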
- 5) The Effect of Curriculum-Based External Exit Exam Systems on Student Achievement
- The article explores whether curriculum-based external exit examination systems (CBEEES) actually improve student learning. These systems are built around clear content standards and subject-based exams that carry real consequences for students. Unlike minimum competency tests or aptitude exams like the SAT, CBEEES are tied directly to what is taught in class, measure multiple levels of achievement (not just pass/fail), and apply to nearly all students. The idea is simple: when exams matter and reflect clear standards, students, teachers, and schools have stronger incentives to focus on real learning. Looking at international data from studies like TIMSS and IAEP, countries with these exam systems generally perform better in math and science, even after accounting for economic differences. Within the United States, New York’s long-standing Regents exam system shows similar patterns—students post higher SAT scores than those in demographically similar states.
- CBEEES also appear to influence school policies by encouraging stronger curricula, more instructional time, better teacher preparation, and higher investment in K–12 education. While not the only factor affecting achievement, the evidence suggests that well-designed, standards-based external exams can meaningfully raise academic performance and improve educational quality. [reference no. 5 – Costrell, R. (1994)]
- 6) Classroom instruction results in better exam performance than online instruction in a hybrid course.
- The experiment examined whether students learn better from classroom instruction or online instruction when everything else is kept almost identical. In a hybrid psychology course, the same students experienced half their lessons in class and half online, using narrated PowerPoint slides and structured question sets. Pre-lesson, post-lesson, and exam questions were carefully designed to test the same content. Because the design compared each student’s own performance across formats, differences could not be explained by ability or motivation (a paired-comparison sketch appears after this summary).
- Results showed that students performed better on post-lesson questions taught in class than on those taught online. More importantly, they also scored significantly higher on exam questions based on classroom lessons compared to online lessons. The findings suggest that live, face-to-face social interaction enhances understanding and long-term retention. While online learning was only modestly less effective, classroom discussion and immediate interaction appear to strengthen learning. However, hybrid courses still provide meaningful learning opportunities, especially for students who cannot attend fully in-person classes. [reference no. 6 – Campbell, M., Gibson, W., Hall, A., Richards, D., & Callery, P. (2008)]
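Because each student contributes both a classroom score and an online score, the natural analysis is a paired comparison. A minimal sketch in Python, using synthetic data and an invented effect size rather than the study’s actual numbers:

```python
# Paired (within-subject) comparison: each student is their own control,
# so we test the mean of the per-student classroom-minus-online difference.
# All numbers below are synthetic illustrations, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60                                     # hypothetical class size
classroom = rng.normal(78, 8, n)           # exam scores on in-class lessons
online = classroom - rng.normal(3, 5, n)   # same students, online lessons

t_stat, p_value = stats.ttest_rel(classroom, online)  # paired t-test
print(f"mean difference = {np.mean(classroom - online):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the comparison is within students, stable traits such as ability and motivation cancel out of the per-student difference, which is exactly the point the summary makes about the design.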
- 7) Integrating digital pedagogies into a typical student learning lifecycle and its effect on exam performance
- The study explored whether a fully integrated digital teaching approach could improve accounting students’ exam performance. The research was conducted in two phases. In Phase 1, the researchers worked closely with students to design a custom digital pedagogy using tools within the Google Classroom suite. Students were trained through detailed user guides tailored to different devices (Windows, iOS, Android), and their feedback was collected through questionnaires and open-ended responses. Based on this feedback, certain tools were refined while others—such as Hangouts and real-time assessments—were removed due to issues like poor Wi-Fi access and limited resources. This iterative, student-informed process helped create a practical and structured digital learning model. In Phase 2, the refined pedagogy was implemented with a new cohort of students and tested over six months. Using hierarchical multivariate regression analysis (a sketch of this two-step model appears after this summary), the researchers controlled for factors such as gender, age, prior math and accounting background, and socio-economic school background. The results showed that students who experienced the digital pedagogy performed significantly better in their final exams compared to those in the control group.
- Overall, the study highlights that thoughtfully integrating multiple digital tools across the entire learning journey—while actively involving students in the design process—can meaningfully improve academic performance. It also shows that structured, inclusive, and well-supported technology use can enhance both teaching effectiveness and student success.
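A minimal sketch of the two-step (hierarchical) regression structure described in Phase 2, with hypothetical variable names and synthetic data standing in for the study’s; the point is only that controls enter first and the digital-pedagogy indicator second, so its incremental contribution shows up as a change in R²:

```python
# Hierarchical regression sketch: step 1 fits controls only; step 2 adds the
# treatment indicator. The gain in R^2 between steps is the incremental
# variance attributed to the digital pedagogy. Data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 25, n),
    "prior_math": rng.normal(60, 10, n),
    "prior_accounting": rng.normal(55, 12, n),
    "low_ses_school": rng.integers(0, 2, n),
    "digital_pedagogy": rng.integers(0, 2, n),   # 1 = experienced the pedagogy
})
df["final_exam"] = (50 + 0.3 * df["prior_math"] + 0.2 * df["prior_accounting"]
                    + 5 * df["digital_pedagogy"] + rng.normal(0, 8, n))

controls = "female + age + prior_math + prior_accounting + low_ses_school"
step1 = smf.ols(f"final_exam ~ {controls}", data=df).fit()
step2 = smf.ols(f"final_exam ~ {controls} + digital_pedagogy", data=df).fit()
print(f"R2 controls only: {step1.rsquared:.3f}")
print(f"R2 with pedagogy: {step2.rsquared:.3f}  "
      f"(coefficient = {step2.params['digital_pedagogy']:.2f})")
```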
- 8) Exam Scheduling and Student Performance
- The study looks at what happened when a UK university changed the timing of its final exams. Instead of holding finals at the end of each semester, the university moved all final exams to the end of the academic year. While this may sound like a small administrative adjustment, the researchers wanted to see whether it actually affected student performance. To measure the impact, they compared students’ final exam scores with their mid-term scores before and after the change. Mid-terms stayed at the same time during the semester, so they worked as a useful comparison point. Using a difference-in-differences approach, the study found that after the reform, final exam scores dropped by about 3 to 4 marks on average. In other words, students performed worse when finals were delayed to the end of the year.
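The difference-in-differences contrast described above can be written schematically, with finals as the assessment affected by the reform and mid-terms as the comparison (the notation is ours, not the paper’s):

$$\widehat{\Delta} = \left(\bar{F}_{\text{post}} - \bar{F}_{\text{pre}}\right) - \left(\bar{M}_{\text{post}} - \bar{M}_{\text{pre}}\right) \approx -3 \text{ to } -4 \text{ marks},$$

where $\bar{F}$ and $\bar{M}$ are average final-exam and mid-term marks before and after the scheduling change; because mid-term timing was unchanged, the mid-term trend absorbs cohort-level differences.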
- The researchers explored several possible reasons. Having many exams concentrated in a short period may increase stress and mental fatigue. There is also a longer gap between teaching and assessment for first-semester subjects, which may lead to forgotten material or reduced motivation. Although the reform was partly introduced for financial and administrative reasons, the findings suggest it had unintended academic consequences.
- Overall, the paper highlights an important lesson: changes in exam scheduling, even if logistically convenient, can meaningfully affect student learning and achievement. [reference no. 7 – Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2006)]
- 9) Measuring business school faculty perceptions of student cheating
- This study explored how business school faculty perceive student cheating and whether those perceptions are linked to understanding and reporting academic dishonesty. Using survey responses from 233 faculty members at a Tier 1 Association to Advance Collegiate Schools of Business (AACSB)–accredited university, the researchers examined different types of cheating and faculty reporting behavior. The findings revealed three distinct types of cheating: paper-based (e.g., plagiarism, submitting others’ work), internet-based (e.g., using online resources or sites like Course Hero during exams), and direct exam cheating (e.g., copying answers, stealing exams). Faculty generally perceived moderate levels of cheating. Importantly, those who had formally reported a student for cheating perceived significantly higher levels of all three types, as well as a greater overall cheating problem. About 31% of faculty admitted they did not fully understand the reporting process. Those who lacked understanding were more likely to view reporting as a hassle. The study suggests that clearer communication, better training, and simplified reporting systems could encourage faculty to address cheating more formally, ultimately strengthening academic integrity within business schools.[reference no. 9 – Detert, J. R., Trevino, L. K., & Sweitzer, V. L. (2008)]
- 10) Inter-topical sequencing of multiple-choice questions: Effect on exam performance and testing time
- The study looked at whether changing the order of multiple-choice questions affects students’ exam performance or the time they take to finish. In intermediate accounting courses, 127 students were given six exams over two semesters. For each exam, some students received questions arranged in the same order as the material was taught (forward order), while others received the same questions in a scrambled order. Students had to complete and submit the multiple-choice section before moving on to the problem section, allowing researchers to better track completion time. Overall, the results were reassuring. For five of the six exams, there was no significant difference in scores between students who received ordered versus scrambled versions. Only one exam showed a small difference. When all exams were combined, item order had no meaningful impact on performance. There was also no strong evidence that scrambled exams made students take longer to finish.
- In simple terms, the study suggests that rearranging multiple-choice questions to reduce cheating does not generally harm students’ scores or slow them down.[reference no. 10 – Heck, J. L., & Stout, D. E. (1991)]
- 11) Conclusion:
- Across these ten studies, one clear message emerges: student performance is influenced by much more than intelligence alone. Attendance, effort, exam design, teaching methods, digital tools, scheduling policies, and institutional systems all play meaningful roles in shaping academic outcomes. Regular class attendance supports steady academic progress, even for high-performing students. Natural ability, measured through ACT scores or prior achievement, certainly matters—but consistent effort and engagement are just as important. Students who combine skill with persistence perform the best, reinforcing the idea that success is built, not just inherited. Assessment structure also matters. Research shows that rearranging multiple-choice questions to reduce cheating does not generally harm performance or increase completion time. However, larger structural decisions—such as delaying final exams to the end of the academic year—can negatively impact results. Similarly, classroom-based instruction often leads to stronger learning outcomes compared to purely online delivery, highlighting the value of face-to-face interaction. At a broader level, well-designed evaluation systems like comprehensive exams and curriculum-based exit exams help institutions better measure learning and encourage higher standards. Digital pedagogies, when thoughtfully integrated and student-centered, can also significantly enhance performance. Meanwhile, issues like academic dishonesty remind institutions that clear communication and supportive reporting systems are essential.
- Overall, these studies collectively show that academic success is not driven by one single factor. It is shaped by a combination of student responsibility, instructional quality, institutional design, and policy decisions. When educators and institutions thoughtfully align these elements, they create an environment where students are not only tested—but truly able to grow, improve, and succeed.
- 12) References
- Braun, K. W., & Sellers, R. D. (2012). Using a “Daily Motivational Quiz” to increase student preparation, attendance, and participation. Issues in Accounting Education, 27, 267–279.
- Bagamery, B. D., Lasik, J. J., & Nixon, D. R. (2005). Determinants of success on the ETS Business Major Field Exam for students in an undergraduate multisite regional university business program. Journal of Education for Business, 81, 55–63. http://dx.doi.org/10.3200/JOEB.81.1.55-64
Dave, C., Eckel, C., Johnson, C., & Rojas, C. (2010). Eliciting risk preferences: When is simple better? Journal of Risk and Uncertainty, 41(3), 219–243.
Chowdhury, M. I., & Wheeling, B. (2013). Determinants of Major Field Test (MFT) score for graduating seniors of a business school in a small mid-western university. Academy of Educational Leadership Journal, 17(1), 59–71.
Costrell, R. (1994). A simple model of educational standards. American Economic Review, 84, 956–971.
Campbell, M., Gibson, W., Hall, A., Richards, D., & Callery, P. (2008). Online vs. face-to-face discussion in a Web-based research methods course for postgraduate nursing students: A quasi-experimental study. International Journal of Nursing Studies, 45(5), 750–759. doi:10.1016/j.ijnurstu.2006.12.011
Cameron, A. C., Gelbach, J. B., & Miller, D. L. (2006). Robust inference with multi-way clustering (NBER Technical Working Paper No. 327). National Bureau of Economic Research.
Detert, J. R., Trevino, L. K., & Sweitzer, V. L. (2008). Moral disengagement in ethical decision-making: A study of antecedents and outcomes. Journal of Applied Psychology, 93, 374–391.
Heck, J. L., & Stout, D. E. (1991). Initial empirical evidence on the relationship between finance test-question sequencing and student performance scores. Financial Practice and Education, 1, 41–48.