The State of Academic Program Review Guidelines and Instrumentation in Public Institutions

While academic program review (APR) is a best practice for ensuring academic quality, institutions are making the process more robust by refining guidelines and instructions so that results can serve more strategic purposes, such as demonstrating impact. Backlund et al. (2011) note that the American history of academic program review is grounded in the assessment and accountability movements of the 1970s and 1980s, when state legislatures began requiring assessment-centered models to demonstrate effectiveness. The Council of Graduate Schools (2005) identifies the purposes of academic program review as “quality assurance, quality improvement, accountability, identification of strategies for improvement, and information collection and synthesis to facilitate prioritization of resources”.

Multi-step models are evolving that describe a robust self-study process as drawing on internal and external expertise to collect and examine evidence of practices, processes, policies, and effectiveness against guidelines or standards such as the Council for Advancement of Standards in Higher Education’s (n.d.) seven-step model. Typically, in the most progressive models, post-analysis actions and strategic planning allow the academic unit to track changes and demonstrate how it will further its goals. Field-specific models, such as those of the Accreditation Council for Business Schools and Programs (ACBSP, 2017), the National Communication Association (NCA, 2009), and the National League for Nursing Accrediting Commission (NLNAC, 2016), are increasingly used to standardize guidelines, recognize excellence, convey expectations for continuous improvement, and advance teaching, learning, and assessment within the field. For academic programs not accredited by a field organization, a robust process is expected by regional accreditation bodies such as the Higher Learning Commission (HLC, 2018), the Middle States Commission on Higher Education (MSCHE, 2015), and the Southern Association of Colleges and Schools Commission on Colleges (SACS, 2018), among others.

While individual data elements of a program review may be benchmarked to gauge comparative progress, program review data as a whole are difficult to compare across institutions because of the variety of data types and purposes for gathering them; the National Higher Education Benchmarking Institute (n.d.) recommends a mixture of over 150 quantitative and qualitative data types to capture program quality and accurately assess needs and priorities. Given institutional and state system pressures, such as the need to justify program continuance, and influences that make each institutional program review process unique, overall measures may also fail to provide an accurate reflection of program quality because of time, population, and local factors.

The objective of this study was to analyze differences among academic program review materials posted on institutional websites in order to describe the current state of practice. Overall analysis, as well as analysis by institutional type, produced frequency percentages supporting each element’s inclusion as an implementation best practice. A systematic literature review examines aspects of program review in terms of their recognition as important components and identifies gaps in program review research that have prevented empirical study from advancing effective evaluation and identification of best practices.

Literature Review

A cyclic academic program review is a focused method for gathering evidence of programmatic practices and policies, including inputs, outputs, processes, and the mapping between them, within a continuous improvement framework. For some institutions and academic field organizations, program review processes are still being formalized, usually to promote accountability, legitimacy, and effective change. Increasingly, field literature advocates strong connections between program review and strategic planning in order to improve institutional efficiency. In addition, stakeholders increasingly advocate alignment between accreditation standards and academic program reviews to foster resource efficiency and directed action (Bowker, 2017). Perhaps the strongest argument for deploying academic program review is made by those who have conducted empirical studies linking academic program review activity to policies and practices such as curricular improvements, staffing realignment, support for an outcomes-based assessment model, and unit restructuring, as well as institutional impacts such as increased transparency, accountability, and productivity.

Benchmarking efforts as part of the program review process are historically low, with Deming and Figlio noting stakeholder difficulty in identifying benchmark categories and scope. Current communications from regional accreditation bodies still provide instruction on basic benchmarking standards for use in academic programs or allow institutions to explore benchmarking as a quality improvement initiative, so the development of a rich set of metrics is clearly still ongoing for a significant portion of institutions. The number of empirical studies exploring the effectiveness of APR is low; only 15 directly related recent studies were found to support the current study, although particular elements of APR, such as outcomes assessment, are better studied, and literature offering guidance is widely available, with most field accrediting bodies now publishing guidelines or standards and offering training to assist programs in their accreditation efforts. Still, the selection and weighting of evaluation indicators is difficult; Praslova used this contention as a justification for adapting Kirkpatrick’s organizational training evaluation framework.

An important predecessor to this work is Ewell, Paulson, and Kinzie’s comprehensive study of program-level assessment practices in two- and four-year institutions; findings indicated that for both accredited and non-accredited programs, assessment results were used for program review activity more than for any other purpose, including improving curriculum and instruction. Conclusions from that study revealed the influence that field assessment expectations and internal improvement interests have on assessment practice, including the range and frequency of assessment practices related to program reviews.

Program review as a process

Consistent with APR purposes, numerous stakeholders indicate that an APR process representing best practice should be objective (Dee & Heineman, 2016; Small, 2015), rigorous or systematic (Lopez et al., 2016; Shambaugh, 2017), action-based (Coppard et al., 2015; Henry et al., 2014), and reflective of qualitative and quantitative data (Henry et al., 2014). Field literature and institutional portrayals generally agree that the process involves the institution setting a purpose, stating requirements, and holding stakeholders to a schedule and framework, such as collecting data and gathering input for a self-study (Cornell University, n.d.; Hanover Research, 2012; University of Texas at Dallas, 2016). Reported outcomes are expected to reflect analysis, with Ryan (2015) suggesting that commonly assessed items include learning design, content, and pedagogy in a feedback-heavy framework with emphasis on faculty-student interactions. Finally, the field seems to recognize that recommendations, strengths, weaknesses, and opportunities should be identified, with internal review to foster stakeholder engagement and external review to validate findings. In fact, the data-driven aspect of APR may be one of its most important features, especially given faculty concerns about their voice in the decision-making process (Schoepp & Tezcan-Unal, 2016) and the charge of implementing actions based on data (Schoepp & Benson, 2015).

Field literature suggests that process-related elements of program review still lack best-practice models and recommendations, with findings from Wickham (2017) that ownership and alternative use of data are issues in some sample populations and from Nicholson (2011) that learning or educational theory is infrequently linked to improvement processes.

Stakeholder involvement

Bresciani (2006) notes that higher education’s adoption of continuous quality improvement practices may have had the unintended effect of isolating faculty from these processes despite accrediting body efforts to expect or require their participation. With the shift in focus to an expectation that student outcomes are central to academic assessment (Council of Regional Accrediting Commissions [CRAC], 2003), stakeholder involvement has moved from a best practice to an expected demonstration of nuanced involvement at all process stages (CRAC, 2004), one that is increasingly expected to address intellectual, interpersonal, and psychological domains (Young, 2018) for broader ranges of stakeholders (Robinson & Demeter, 2017).

Program evaluation criteria

Several field theorists have put forward criteria for evaluating academic programs. In an update to his 1999 seminal work, Dickeson (2010) suggests ten evaluation criteria, which are represented in some form in this study’s coding form, as discussed in the methodology section. Wells and Wells’s (2011) ten criteria are also well represented in this study, with the exception of two specific criteria that were considered but excluded due to lack of findings: per-student costs were not typically presented, and while graduation rates were presented, many institutions did not clearly report them in the commonly noted four-, five-, and six-year time frames. In addition, performance funding formulas are affecting the mechanisms that higher education uses to address budget allocation (Dougherty & Reddy, 2013), so field discussion indicates that impact indicators will have an outsized effect on determinations.

In conclusion, the reviewed literature makes clear that, given the complexity of analyzing and benchmarking quantitative and qualitative data sets, academic program review effectiveness is best examined through the effectiveness of its elements, particularly those that reflect outcomes assessment and data-driven decision making.

Methodology

Content analysis was deployed in this research as a robust qualitative methodology for its flexibility in discovering patterns in data manifestations (Stemler, 2015); it has been widely used in research on the presentation of educational settings (Leach & Ramachandran, 2018). In particular, we collected data about 53 independent program review processes representing a diversity of regional settings across small, medium, and large community colleges, state colleges, and universities in the American public education system. Very large public institutions were excluded from the study because pilot findings and field literature (Audretsch, 2014; Geiger, 2017; Johnson et al., 2016) suggest that the independent structure of their individual colleges, with their inherent research structure and robustness of funding including commercial and technological partnerships, means that APR reflects strategic development of the college unit rather than of the institution.

Therefore, to maintain focus on institutional strategic development, the convenience sample consisted of public institutions from six states representing United States regional categories, with the listing culled from the appropriate state department of education and a convenience selection of three institutions from each institutional category: associate-degree granting, bachelor-degree granting, and master’s-degree granting. Inclusion in the sample required at least one relevant finding; therefore, findings are presented in terms of prevalence. All materials collected were from public areas of institutional websites in order to inform discussion of the transparency of the process.

As Volkwein (2010) confirms, most higher education institutions have an APR system in place; however, the materials guiding that process may not be public, hence the need for a convenience sampling method. Although search terms included “program review” and “self-study” along with extensive searches of Academic Affairs pages, the scope of this investigation also included degree and major reviews, as long as it was clear that the review was a comprehensive study that included outcomes assessment and considered program viability, not just an evaluation of departmental effectiveness. Consensus between raters confirmed that each evaluated artifact was within the defined scope of the study. Each coder was assigned to independently code each site; interrater reliabilities were calculated with an average of κ = .70 (Stemler, 2004). Inclusion of the prevalence findings presented in the next section was contingent on the level of agreement between coders.
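To make the agreement statistic concrete, the sketch below computes Cohen’s kappa for two coders’ dichotomous ratings on a single coding-form item; it is a minimal illustration using the standard two-rater formula, and the function name and example rating lists are hypothetical rather than drawn from the study’s data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters coding the same items (dichotomous or categorical codes)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)

    # Observed agreement: proportion of items where both coders assigned the same code.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Expected agreement: chance that both coders assign the same code, given their marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))

    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical dichotomous codes for one item ("required" = 1, "not required" = 0).
coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.6 for this toy example
```

In practice, kappa would be computed per coding-form item and averaged, which is one way to arrive at a summary value such as the κ = .70 reported above.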

The coding instrument developed through three iterations: the first version was developed in response to the literature review findings, and the second was adapted following a pilot study of six program review processes culled from institutions in the reviewer’s home state. The pilot study yielded the final instrument, which reflects active findings of the field. Items on the coding form included categories about the purpose of the program review, designated roles and responsibilities of stakeholders, related timeline information, provision of a program or departmental vision or mission, and information regarding review of past recommendations.

The form also sought information about program student learning outcomes, program description and history, program goals, outcomes, and curriculum mapping. Other items sought program demographics such as enrollment, student diversity, and faculty profiles, as well as information about faculty research and advising interests. A category of effectiveness indicators relating to budget, benchmarks, resource allocation, and facility, staffing, and technology adequacy was also cataloged. Finally, program strategic and action planning, overall functioning, leadership and policy development, and use of indirect measures formed categories. A meta-information section sought information about internal and external reviews, scheduling, sponsorship, and portfolio requirements. Most ratings were dichotomous (required or not required), with significance calculated from the frequency. Two coders, a research professional and a student trained in how to use the coding instrument, completed the evaluation of program review information.
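As an illustration of how dichotomous codings translate into the prevalence percentages reported in the next section, the sketch below tallies how often each coding-form element is marked as required, overall and by institutional type; the element names, institution records, and tally function are hypothetical and greatly simplified relative to the full instrument.

```python
from collections import defaultdict

# Hypothetical dichotomous codings: 1 = element required in the posted APR materials, 0 = not required.
institutions = [
    {"type": "associate", "codes": {"timeline": 1, "external_review": 0, "curriculum_map": 1}},
    {"type": "bachelor",  "codes": {"timeline": 1, "external_review": 1, "curriculum_map": 0}},
    {"type": "masters",   "codes": {"timeline": 1, "external_review": 1, "curriculum_map": 1}},
]

def prevalence(records):
    """Percentage of institutions requiring each coding-form element."""
    counts, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        for element, required in rec["codes"].items():
            counts[element] += required
            totals[element] += 1
    return {element: 100 * counts[element] / totals[element] for element in counts}

# Overall prevalence across the sample.
print(prevalence(institutions))

# Prevalence broken out by institutional type.
by_type = defaultdict(list)
for rec in institutions:
    by_type[rec["type"]].append(rec)
print({t: prevalence(records) for t, records in by_type.items()})
```

Frequencies of this kind, computed over the 53 coded processes, are what underlie the prevalence-based presentation of findings described above.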
