
Assessment of Academic Advising: A Summary of the Process

 

Authored By: Rich Robbins and Kathy M. Zarges 

While assessment is often viewed as a means to accountability, it is intended to be a positive, ongoing process focused on continuous feedback about, and improvement of, services to students (Aiken-Wisneiwski, Campbell, Nutt, Robbins, Kirk-Kuwaye, & Higa, 2010; Campbell, Nutt, Robbins, Kirk-Kuwaye, & Higa, 2005).  Assessment of all facets of higher education has become a major focus of various external entities, but the real purpose of assessment should be to determine whether programmatic goals are being achieved and students’ needs and learning goals are being met. Student learning is an individualized and complex process (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005), and traditional techniques of understanding student learning through simple summative evaluation cannot address this complexity (Robbins, 2009a, 2011).
 
In academic advising, as in most areas of higher education, one of the primary objectives is student learning and success.  Because student learning and success are closely tied to the effectiveness of both individual academic advisors and the academic advising program as a whole, it is important to note that the true purpose of assessment is continuous and consistent improvement.

 

Evaluation versus Assessment
 
Although the terms are often used interchangeably (for example, Creamer & Scott, 2000; Cuseo, 2008; Lynch, 2000; Troxel, 2008), there are important distinctions between evaluation and assessment in higher education (Robbins, 2009a, 2011).  Evaluation centers on the performance of the individual academic advisor, while assessment is concerned with the academic advising program and services overall, primarily the achievement of student learning outcomes (SLOs). Evaluation of individual advisor performance may be part of an assessment process (Robbins, 2009a, 2011), but evaluation tends to be episodic and individually focused, while assessment is a holistic, continuous process conducted at the programmatic level.

The Assessment Cycle

The assessment cycle begins with the identification of the institution’s values, vision, and mission for academic advising, and ends when the information gathered is acted upon and a new assessment cycle starts.  In between, the assessment cycle includes the development and identification of programmatic goals and objectives related to the mission; the development and identification of measurable outcomes; the identification of multiple outcome measures for each; the establishment of satisfactory criteria for each measure employed for each desired outcome; the gathering of data; and the reporting and sharing of data.  These steps lead up to the implementation of change based on the evidence gathered, and the start of another cycle of assessment (Robbins, 2009a, 2011).  Maki (2002, 2004) illustrates the basic processes of the assessment cycle in figure 1, while Darling (2010) elaborates on the assessment cycle with more detailed descriptions of each stage in the form of a flowchart in figure 2.

 
A third visual representation of the assessment cycle is the Assessment Matrices (Robbins, 2009a, 2011) in figure 3. While these representations of the assessment cycle provide an overview of the general process, it is important to note that assessment of academic advising can be performed at the department, program, college, and school level, based on the structure of an institution, and the specific purposes of that assessment.

Stakeholders: A Key Component


Assessment of academic advising is a continuous and collective process (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005).  Therefore, two of its primary components are the identification and inclusion of stakeholders and the creation of the assessment team.

Stakeholders are the key individuals who will be kept informed and updated throughout the entire assessment process.  Their buy-in is critical to the success of an assessment plan, as their support can reach far throughout the university community.  Stakeholders include representatives from cohorts affected by academic advising, as well as those who can exert influence over the advising program.  The specific stakeholders depend on the programmatic mission, goals, and desired outcomes for the particular academic advising program, and may change over the course of the assessment.  Colleagues, faculty, staff, administrators, institutional researchers, students, and others within the institutional community may be included, while stakeholders outside the institution may include employers, internship site supervisors, alumni, parents, and members of governing agencies.  While not all stakeholders will be members of the working assessment team, collaboration with stakeholders during the assessment process is critical to promote shared trust, motivation, and terminology; agreement on advising goals; and support for, ownership of, and belief in the assessment process (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005).
 
The assessment team, in contrast, includes those stakeholders who are responsible for conducting the assessment.  They identify the methods to be utilized and how they will be implemented, the timing of measurements, the parties responsible for collecting and analyzing the data, how results will be reported, and the actions to be taken based upon the resulting data (Robbins, 2009a, 2011).  The assessment team is responsible for working with the stakeholders to promote the assessment process and to build a culture of assessment of academic advising.  The assessment team should be selected with skills, resources, expertise, and the political considerations of the institution in mind.

Outcomes of Academic Advising
 
The specific phenomena being assessed are the process/delivery outcomes (P/DOs) and/or the student learning outcomes (SLOs) of academic advising.  P/DOs are statements that articulate expectations regarding how academic advising is delivered and what information should be delivered during the academic advising experience (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005).  These outcomes are anchored in the academic advising interaction, are concerned with what occurs and what information is exchanged during that interaction, and are what is typically measured through student satisfaction surveys (Robbins, 2009a, 2011).  SLOs are statements that articulate what students are expected to know (cognitive learning), do (behavioral learning), and value (affective learning) as a result of their involvement in the academic advising experience (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005; Robbins, 2009a, 2011).
 
The significance of P/DOs and SLOs in the assessment of academic advising is that the two sets of outcomes complement one another (Robbins, 2009a, 2011).  For example, the degree to which any form of student learning occurs is due, in part, to the processes involved in the delivery of academic advising.

Identifying Measurable Outcomes

 
Assessment of advising may occur at the program, department, college, or institution level, depending on the structure of the institution and the needs and purpose of the assessment.  Ideally, the development of desired outcomes starts with the development or review of the mission of academic advising for the particular unit involved in the assessment, looking back to the institutional mission and forward to the goals for academic advising for that unit (Robbins, 2009a, 2011).  Having a clearly delineated mission statement and specified programmatic goals allows assessment to be performed more effectively (Campbell, 2008; White, 2000).  See Abelman, Atkin, Dalessandro, Snyder-Suhy, and Janstova (2007), Abelman and Molina (2006), and Campbell (2008) for information on developing a mission statement for academic advising.
 
Goals are derived from the local mission statement and identify exactly what the program should achieve.  Objectives follow from the goals and articulate how academic advising is to be delivered.  It is from these objectives that specific P/DOs and SLOs are derived (Robbins, 2009a, 2011).  However, even in the absence of a formal mission, goals, and/or objectives for an advising unit, there are likely desired outcomes for an academic advising program that can be identified.  The assessment process can therefore begin with these, with the mission, goals, and objectives developed while the assessment process is underway.  Other resources can also assist in the identification of P/DOs and SLOs for academic advising: the CAS Standards for Academic Advising (Council for the Advancement of Standards in Higher Education, 2005) include specific outcomes for academic advising, the NACADA Core Values (NACADA, 2005) are statements of advisor values and expectations in the academic advising relationship, and the NACADA Concept of Advising (NACADA, 2006) provides a general description of and examples of SLOs for academic advising.

Mapping

Mapping is the process of determining when, where, and through what experiences the desired outcomes for academic advising will be achieved (Aiken-Wisneiwski et al., 2010; Campbell et al., 2005).  Both P/DOs and SLOs can be mapped across the advising experience, providing a visual representation of both when the advising office delivers information and when the student is expected to learn it.  When mapping P/DOs, each outcome is evaluated to determine what opportunities exist for it to be achieved and when those opportunities are offered during the students’ academic careers.  Mapping SLOs involves evaluating the opportunities provided for students to achieve each desired outcome and the stage at which an advising unit may expect students to have achieved it.
Mapping both P/DOs and SLOs allows advising units to identify discrepancies between advisor and student expectations and potential gaps in the delivery of academic advising.
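For readers who track outcomes in a spreadsheet or script, the mapping idea above can be sketched as a simple grid of outcomes against advising touchpoints. This is only an illustrative sketch; the outcome statements, touchpoint names, and coverage flags below are invented, not drawn from any NACADA instrument.

```python
# Hypothetical outcome-mapping grid: each row is a desired outcome,
# each column a point in the advising experience where it could be
# addressed. 1 = addressed at that touchpoint, 0 = not addressed.
touchpoints = ["Orientation", "First-year advising",
               "Major declaration", "Pre-graduation check"]

outcome_map = {
    "Student can locate degree requirements": [1, 1, 1, 0],
    "Student can build a semester schedule":  [0, 1, 1, 0],
    "Student values advisor as a partner":    [0, 0, 0, 0],  # never addressed
}

def find_gaps(outcome_map):
    """Return outcomes that are never addressed at any touchpoint."""
    return [o for o, row in outcome_map.items() if not any(row)]

# Show where each outcome is covered, flagging delivery gaps.
for outcome, row in outcome_map.items():
    covered = [tp for tp, flag in zip(touchpoints, row) if flag]
    print(f"{outcome}: {', '.join(covered) or 'NOT ADDRESSED (gap)'}")
```

A grid like this makes the gaps described in the text visible at a glance: any all-zero row is an outcome the unit expects but never delivers an opportunity to achieve.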


Types of Measurement and Data

Outcome measurement involves identifying how data will be gathered in order to determine whether desired outcomes have been met.  A simple, one-dimensional survey is not the only way, or even the best way, to gather outcome data, as not all P/DOs or SLOs involved in the academic advising process can be fully measured or understood through a simple survey (Robbins, 2009a, 2011).  Multiple methods of measurement are needed and may include qualitative, quantitative, direct, and indirect measurements and data.
 
Qualitative measurement is exploratory, with data described in words, such as responses to open-ended questions about the academic advising experience.  Information emerges from the process in the form of rich, in-depth responses to questions (Robbins, 2009a, 2011), which are generally categorized into themes.  The interpretation of qualitative data is subjective and inductive.  Qualitative methods are often utilized when little is known about the topic being evaluated or assessed, and/or when closed-ended items that would yield data in numerical form cannot yet be determined.  Qualitative methods may include focus groups, case studies, and naturalistic observation; the key is that the emerging information results from responses to open-ended inquiry.
 
Quantitative measurement is descriptive and structured, with resulting data in the form of numbers or statistical measures in response to inquiries about the academic advising process (Robbins, 2009a, 2011).  The resulting data is interpreted objectively and deductively.  Quantitative methods are utilized when students (or other target cohorts) are not available for extensive interactions or observations, when time and funds are limited, and/or when your audience requires “hard numbers.”  Quantitative methods may include surveys and questionnaires, with the key being that the responses to the items are forced-choice (for example, multiple choice, rating scale, true-false) rather than open-ended (or qualitative).
 
Both direct and indirect measures may be qualitative or quantitative.  Direct measures involve empirical or “first-hand” observation of or access to the process and the resulting data, while indirect measures report data that has already been gathered or recall events that have already occurred.  Whether direct and indirect measures are qualitative or quantitative depends upon the types of questions posed (open-ended or forced-choice); whether qualitative and quantitative measures are direct or indirect depends upon how the data is collected (Robbins, 2009a, 2011).
 
Once the multiple measures have been determined, the minimum criteria for success (what the data must demonstrate in order to verify that a given desired outcome has been met) must be identified. Further, the minimum acceptable criteria must be achieved across multiple measures for a given desired outcome in order to consider that outcome met.  If baseline data exist, the minimum criteria for a given outcome may be determined from that information.  If assessment of academic advising is being conducted for the first time, it may be beneficial to frame the initial process as gathering baseline or benchmarking data and to not initially set minimum criteria (Robbins, 2009a, 2011).
 
There likely exist institutional data such as retention rates, grade point averages, and other student tracking data that may be utilized as part of the multiple measures for any given desired outcome (Robbins, 2009a, 2010, 2011).  Similarly, there may be institutional benchmarking data, national benchmarking data, or peer institutional data that may be used as baseline or comparison data, or to inform minimum criteria for success.
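The decision rule described above, that an outcome counts as met only when its minimum criterion is reached on every measure used for it, can be sketched in a few lines. The measure names, results, and thresholds below are invented for illustration; an actual assessment team would substitute its own measures and criteria.

```python
# Hypothetical results for one desired outcome, each paired with the
# minimum criterion the assessment team set for that measure.
measures = {
    "survey_agreement_pct":   {"result": 87.0, "minimum": 80.0},  # met
    "focus_group_theme_hits": {"result": 4,    "minimum": 3},     # met
    "degree_audit_accuracy":  {"result": 0.78, "minimum": 0.85},  # not met
}

def outcome_met(measures):
    """The outcome is met only if every measure reaches its minimum."""
    return all(m["result"] >= m["minimum"] for m in measures.values())

print("Outcome met" if outcome_met(measures) else "Outcome not yet met")
```

Because the third measure falls short of its criterion, the outcome as a whole is not yet considered met, which mirrors the across-multiple-measures requirement in the text.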


Conclusion

Assessment is neither quick nor easy. Perhaps the most critical point to remember is to start small and build some successes.  Identify one or two desired outcomes to start.  Do not attempt too much initially, or the result will likely be a lack of support and a perception of being overwhelmed by the assessment process.  After all, assessment often has negative connotations, and part of this process is to demonstrate the utility of assessment and the ways it can become part of the everyday activities of the academic advising program.



Rich Robbins
Associate Dean, College of Arts and Sciences
Bucknell University

Kathy M. Zarges 
Director Undergraduate Advising
Kent State University



References

Aiken-Wisneiwski, S. (Ed.). (2010). Guide to assessment in academic advising (2nd ed.) [Monograph No. 23]. Manhattan, KS: National Academic Advising Association.

 

Abelman, R., Atkin, D., Dalessandro, A., Snyder-Suhy, S., & Janstova, P. (2007). The trickle-down effect of institutional vision: Vision statements and academic advising. NACADA Journal, 27(1), 4-21.

 

Abelman, R., & Molina, A. D. (2006). Institutional vision and academic advising. NACADA Journal, 26(2), 5-12.

 

Campbell, S. M. (2008). Vision, mission, goals, and program objectives for academic advising programs. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed.). San Francisco: Jossey-Bass.

 

Campbell, S., Nutt, C., Robbins, R., Kirk-Kuwaye, M., & Higa, L. (2005). NACADA guide to assessment in academic advising.  Manhattan, KS: National Academic Advising Association.

 

Creamer, E. G., & Scott, D. W. (2000). Assessing individual advisor effectiveness. In V. N. Gordon & W. R. Habley (Eds.), Academic advising: A comprehensive handbook (pp. 339-348). San Francisco: Jossey-Bass.

 

Council for the Advancement of Standards in Higher Education (CAS). (2005). Academic Advising Programs: CAS Standards and Guidelines. Retrieved from http://www.cas.edu/getpdf.cfm?PDF=E864D2C4-D655-8F74-2E647CDECD29B7D0

 

Cuseo, J. (2008). Assessing advisor effectiveness. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed., pp. 369-385). San Francisco: Jossey-Bass.

 

Darling, R. (2010). Flowchart of assessment in academic advising. In S. Aiken-Wisneiwski (Ed.), Guide to assessment in academic advising (2nd ed.) [Monograph No. 23]. Manhattan, KS: National Academic Advising Association.

 

Lynch, M. L. (2000). Assessing the effectiveness of the advising program. In V. N. Gordon & W. R. Habley (Eds.), Academic advising: A comprehensive handbook (pp. 324-338). San Francisco: Jossey-Bass.

 

Maki, P. L. (2002). Developing an assessment plan to learn about student learning. Journal of Academic Librarianship, 28(1-2), 8-13.

 

Maki, P. L.  (2004). Assessing for learning: Building a sustainable commitment across the institution.  Sterling, VA: Stylus Publishing.

NACADA. (2005). NACADA statement of core values of academic advising. Retrieved from the NACADA Clearinghouse of Academic Advising Resources: http://www.nacada.ksu.edu/Clearinghouse/AdvisingIssues/Core-Values.htm

NACADA. (2006). NACADA concept of academic advising. Retrieved from http://www.nacada.ksu.edu/Clearinghouse/AdvisingIssues/Concept-Advising.htm

 

Robbins, R. (2009a). Evaluation and assessment of career advising. In K. Hughey, D. Burton Nelson, J. Damminger, & B. McCalla-Wriggins (Eds.), Handbook of career advising (chapter 12). San Francisco: Jossey-Bass.

 

Robbins, R. (2009b). Utilizing institutional research in the assessment of academic advising. NACADA Clearinghouse of Academic Advising Resources series Academic Advising as a Comprehensive Campus Process. Retrieved from www.nacada.ksu.edu/Clearinghouse/MO2/assess.htm

Robbins, R. (2011). Assessment and accountability. In J. Joslin & N. Markee (Eds.), Academic advising administration: Essential knowledge and skills for the 21st century [Monograph No. 22]. Manhattan, KS: National Academic Advising Association.

Troxel, W. G. (2008). Assessing the effectiveness of the advising program. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed., pp. 386-395). San Francisco: Jossey-Bass.

 

White, E. R. (2000). Developing mission, goals, and objectives for the advising program. In V. N. Gordon & W. R. Habley (Eds.), Academic advising: A comprehensive handbook (pp. 180-191). San Francisco: Jossey-Bass.


 


Cite this resource using APA style as:

Robbins, R., & Zarges, K. M. (2011). Assessment of academic advising: A summary of the process. Retrieved from the NACADA Clearinghouse of Academic Advising Resources Web site: http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Assessment-of-academic-advising.aspx