Office of Institutional Research and Effectiveness

Institutional Effectiveness Committee

MSU established the Institutional Effectiveness (IE) Committee, a standing committee that reports to the Provost and Executive Vice President. This committee provides oversight for the annual self-assessment process. It reviews all IE Reports, provides feedback to units, and coordinates with the SACSCOC reaffirmation process. Membership consists of elected and appointed administrators, faculty, and staff.

Review Process

The IE Committee review process begins in October or November of each year and concludes in December or January. Each fall, the Chair of the committee convenes a meeting to review the Institutional Effectiveness process. Members are paired with a partner, and all of the university's IE reports are allocated among the teams (each team usually has between 10 and 15 reports to review). No member of the IE Committee may review a program within his or her own college or unit. The committee has designated two rubrics for evaluating IE reports: one for academic units and one for non-academic units. Using these rubrics, members review the IE reports individually and then meet with their partners to discuss their scores. Each team then settles on a validated score, which is logged into the university's ClassClimate system. The validated scores are compiled into feedback reports, which each unit receives during the spring semester as it begins planning the next year's assessment cycle. Based on the results of the feedback reports, OIRE staff may take actions such as providing additional training to the personnel responsible for completing the annual IE reports.

Improvement in the Institutional Effectiveness Process

Mississippi State's Institutional Effectiveness process has improved over the years thanks to the peer review process of the IE Committee and the work of staff within OIRE. Scoring IE reports against the IE Committee rubric shows marked improvement from 2010 to 2016. In particular, the number and appropriateness of outcomes and assessment procedures have improved dramatically over this period.

Rubric Criterion                              2010    2016
3-5 outcomes are reasonable                   58.7%   93.4%
3-5 outcomes can be assessed                  50.8%   79.8%
Instructional units have learning outcomes    54.1%   89.1%
Assessment procedures measure outcomes        34.4%   69.8%
Assessment procedures are appropriate         34.1%   88.5%
Adequate number of procedures                 41.6%   91.3%
Provides sufficient data for results          42.8%   66.7%
Action has been taken for improvement         31.4%   40.0%

Although scores for results and use of results have increased, more work is needed in these areas. OIRE hypothesizes that adjusting assessment procedures to move from quality assurance to excellence may help improve the overall scores for results and use of results in future assessment cycles. During spring 2016, staff within institutional effectiveness created an internal rubric to distinguish aspects of quality assurance from excellence. This work concentrates on improving the assessment procedures, which in turn could lead to more meaningful and actionable results.

Direct Assessment Measures

Excellence: The criterion is based on a scale with a benchmark that is not set at a level the program can readily achieve. Excellence can be distinguished from quality assurance by the criterion itself, whether it was met, and/or the responses in the use of results; even if the program met the criterion, a response that highlights an area of weakness indicates excellence rather than quality assurance.

Quality Assurance: The criterion is based on a scale, but the benchmark is set at a level the program can easily achieve or has met many times over. These types of assessments cannot improve a program, but they help maintain one.

Indirect Assessment Measures

Excellence: The criterion is based on a survey (or other form of self-report) scale with a benchmark that is not necessarily set at a level the program can readily meet.

Quality Assurance: The criterion is based on a survey (or other form of self-report) scale with a benchmark that can easily be achieved or that the program has met for many years.


Contact Information

269-A Allen Hall
Mailstop: 9712

Phone: 662.325.3920
Fax: 662.325.3514