Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction

Editorial Type: Articles
Article Category: Research Article
Online Publication Date: 01 Dec 2013
Page Range: 4 – 15
DOI: 10.12930/NACADA-12-132

When students leave their advising appointments, how do they feel? Excited? Disappointed? If advisors and students do not share expectations and goals, the student may harbor negative feelings about the advising experience, which can lead to dissatisfaction and withdrawal. We surveyed students at a large midwestern university to learn how they feel about their past and recent advising experiences. Overall, students reported satisfaction with their advising involvement, as average rating scores were high and positive. The measurement scale created to evaluate student satisfaction with advising was analyzed using exploratory and confirmatory factor analyses. These analyses yielded two reliable subscales, advising functions and outreach functions, which may be used in the future to evaluate advising programs.

An important facet of higher education, student retention inspires university leadership to investigate the extent to which their students feel connected to campus and related resources. Students utilize academic advising to make these important linkages to their institution, trusting the advisor as they transition from high school to college. Furthermore, advisor presence and support could make the difference between a frustrated withdrawal and a determined effort to graduate with honors (Drake, 2011).

When investigating various factors related to student retention, Kuh (2008) pointed to the quality of advising on a college campus as among the most powerful predictors of overall campus satisfaction. Metzner (1989) found that lower attrition rates were linked to high-quality rather than lower-quality advising, but students who received some advising persisted to a greater extent than those who received no advising. McLaughlin and Starr (1982) cited numerous studies that have connected high-quality academic advising to retention and persistence as well as low-quality or no academic advising to dropped courses and attrition.

Because advising forms an integral part of a successful educational institution, stakeholders at colleges and universities concerned with student retention must continuously monitor, develop, evaluate, and assess advising services for consistency and high quality. One of the most popular ways to indirectly measure the success of an academic advising program involves use of a standardized scale. However, previous publications on evaluation efforts, based on a few well-known instruments, do not show the statistical properties of those scales. For example, Alexitch (2002) and Hale, Graham, and Johnson (2009) used the Academic Advising Inventory (AAI) by Winston and Sandor (1984). The AAI is a four-part evaluation instrument that determines the levels of prescriptive and developmental advising that students are receiving, the frequency of various topics discussed, student satisfaction levels, and demographic information. Others have utilized institution-specific scales (e.g., Creeden, 1990; Ford, 1985; Grites, 1981; Habley, 1994) not tested for analytic fit, reliability, or validity.

Some developers of evaluation initiatives have introduced new quantitative instruments comparing students' advising preferences with advising as practiced (Dickson & McMahon, 1991; Fielstein, 1989; Fielstein & Lammers, 1992; Fielstein, Scoles, & Webb, 1992), evaluating the differences between student and faculty perceptions (Creeden, 1990; Grites, 1981; Saving & Keim, 1998; Severy, Lee, Carodine, Powers, & Mason, 1994), and measuring overall satisfaction with advising (Bitz, 2010; Kelley & Lynch, 1991; Lynch, 2004; Reinarz & Ehrlich, 2002; Smith & Allen, 2006; Zimmerman & Mokma, 2004). Additionally, Lynch (2004) investigated differences between advisor types (general, departmental, and faculty advisors), and Fielstein et al. (1992) evaluated satisfaction differences between traditional- and nontraditional-aged students.

Furthermore, based on qualitative methods, findings from interviews (Beasley-Fielstein, 1986; Fielstein, 1987; Fielstein & Lammers, 1992) and focus groups (Kramer, 1992; Smith, 2002) have contributed to the literature. Other qualitative studies focused on the relationship between graduate students and their advisors (Bloom, Propst Cuevas, Hall, & Evans, 2007; Schlosser, Knox, Moskovitz, & Hill, 2003). Srebnik (1988) and the National Academic Advising Association (2012) listed numerous institutions that have created qualitative or quantitative evaluation instruments, each relevant to its own culture and needs.

Although these previous initiatives have expanded the research literature, the overall evaluation and assessment processes used in academic advising to date have been inconsistent (Allen & Smith, 2008). Likewise, few studies have been grounded in statistical analyses and scale development. Reliable and valid instruments are needed to measure complex processes such as academic advising (Banta, Hansen, Black, & Jackson, 2002), but many of the existing informal assessments neglect these traditionally necessary scale properties. Additionally, some publications lack details regarding the scale development process, and others offer vague descriptions of their scale creation, declaring acceptable reliability and validity without statistical information to confirm these claims. In other words, more statistically valid measuring tools are needed to fully assess the impact and quality of academic advising. For this reason, we turn to the basics of academic advising literature to determine the items that should be measured.

O'Banion (1972/1994/2009) listed the crucial functions of academic advising in five dimensions: exploration of life goals, exploration of vocational goals, program choice, course choice, and scheduling courses. Advisors carry out these functions using two main practitioner styles: prescriptive and developmental advising. Prescriptive advising involves an authoritarian relationship between the advisor and the advisee in which the advisor simply tells the student what to do. Crookston (1972/1994/2009) compared the relationship between a prescriptive advisor and the advisee to one of a doctor and patient in which the patient assumes no responsibility for any poor outcomes. Despite the negative connotations associated with it, prescriptive advising functions prove essential to student success because they include discussions of graduation requirements, course selection, and registration procedures (Fielstein, 1994).

Developmental advising characterizes an equal and deeper relationship between advisor and advisee in which the student receives advising as a whole person. Developmental advising “goes beyond simply giving information or signing a form” (King, 2005, para. 2). To be effective at enhancing student development, advisors must be educated on student development theories and ways to properly utilize them in their practice. Williams (2007) and Creamer and Creamer (1994) identified theories often embraced in developmental advising, including those on the psychosocial and cognitive aspects as well as career development. Developmental advising should be a team effort in which the advisor guides the student in developing skills and self-awareness that will lead to a rewarding college career (O'Banion, 1972/1994/2009). Examples of developmental advising outcomes include strengthening communication and problem solving skills, identifying values and life goals, and broadening interests (Creamer & Creamer, 1994).

Although a significant amount of literature on advising has been devoted to determining whether prescriptive advising or developmental advising is superior, both methods of advising should be utilized at certain times throughout a student's college career in a comprehensive approach. Fielstein (1994) noted that much like Maslow's hierarchy of needs, a student's basic needs should be met using prescriptive advising before higher level needs can be met by developmental advising. Brown and Rivas (1994) agreed and stated that advising should be more of a continuum in which the relationship begins through prescriptive advising and slowly transitions into a developmental mode.

The literature shows that students are positively inclined toward prescriptive advising. In fact, students from other cultures may feel more comfortable with an authority figure directing their path (Brown & Rivas, 1994; Cornett-Devito & Reeves, 1999). Research also shows that some students may only want prescriptive functions from their advisors rather than a relationship and rank these services higher than developmental services (Fielstein, 1994).

Regardless of an advisor's good intentions, students may be dissatisfied with the advising services received. This dissatisfaction may reflect a disconnect between an advisor's and student's expectations and values of advising (Allen & Smith, 2008). Therefore, program administrators need to know student expectations of advisors as well as practices in advising sessions that lead to desirable outcomes. When beginning this study, we created an evaluative tool both to examine current students' feelings about their advising experiences and to contribute to the advising literature. While the former satisfied our desire to better understand students at our institution, we quickly realized that the latter was much more relevant for the field of academic advising.

In this study, students responded to questions about their advising experience at a large midwestern university. They indicated where they have received advising services as well as their contentment level with the advising received. Questions were originally designed to measure satisfaction with prescriptive functions (e.g., class scheduling and graduation requirements), developmental functions (e.g., developing career goals), and overall advisor traits (e.g., personality, professionalism).

Experiment 1

Method

Participants

We recruited 155 participants from the university undergraduate research subject pool. Volunteers received course credit for their involvement. Three participants submitted incomplete surveys and so their data were deleted, leaving 152 completed questionnaires for the analyses. Table 1 contains the demographic data for all three experiments we conducted.

Table 1. Descriptive statistics for all versions of the advising scale


Materials and procedure

Drawing on the literature on academic advising noted above, we created items for a new questionnaire (created using Qualtrics, 2012, survey software) in the spirit of previous evaluative scales (Cuseo, 2003; Winston & Sandor, 1984). These questions match specific university goals and academic advising mission statements, such as the public affairs mission. Additionally, numerous aspects of academic advising were investigated, including advisor traits (e.g., patience and trustworthiness), activities relating to prescriptive advising (e.g., schedule planning and graduation requirements), and activities relating to developmental advising (e.g., campus/community involvement and overall student development). The complete scale is shown in Table 3. The item order was randomized for each participant so that each saw a uniquely arranged scale.

Table 2. Fit indices for all survey versions


After indicating consent, participants completed the questionnaire. They rated statements describing different characteristics of an academic advising session using a 7-point Likert-type scale (1 indicated strongly disagree, 4 indicated neutral, and 7 indicated strongly agree). For example, participants indicated the extent to which they trust their advisor. Basic demographic information, such as gender, postsecondary year (freshman, sophomore, etc.), major, transfer status, and ethnicity, was collected. After completing the survey, participants were thanked and granted participation credit.

Data analytic approach

We used exploratory factor analysis (EFA) to analyze the underlying factor structure of the advising scale presented to participants. We followed guidelines established by Preacher and MacCallum (2003), including the selection of EFA over a principal components analysis. We originally hypothesized that ratings on our scale were based on an underlying understanding of prescriptive functions, developmental functions, and advisor traits, such that items would be grouped together based on participants' conceptualization of their feelings about their advisor and the services they were receiving (developmental and prescriptive functions). When factors are thought to cause ratings, factor analysis is an appropriate exploration of the data. Furthermore, because we believed these factors would be correlated, we used oblique rotations (direct oblimin) when more than one factor was selected. To select the number of factors, we considered both a scree plot and parallel analysis, which was calculated using the FACTOR program (freely available; Lorenzo-Seva & Ferrando, 2006).
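A minimal sketch of this approach in Python, using the open-source factor_analyzer package rather than the FACTOR program and SPSS used for the analyses reported here, might look like the following. The data file and item column names are hypothetical placeholders, and the parallel-analysis routine is a simplified illustration of the general technique rather than the exact procedure of Lorenzo-Seva and Ferrando (2006).

```python
# Illustrative sketch (not the code used in this study): parallel analysis to pick
# the number of factors, then a maximum likelihood EFA with direct oblimin rotation.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def observed_eigenvalues(data):
    fa = FactorAnalyzer(rotation=None)
    fa.fit(data)
    eigenvalues, _ = fa.get_eigenvalues()
    return eigenvalues

def parallel_analysis(data, n_iter=100, seed=1):
    """Retain factors whose observed eigenvalue exceeds the 95th percentile
    of eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = data.shape
    observed = observed_eigenvalues(data)
    random_eigs = np.array(
        [observed_eigenvalues(rng.normal(size=(n_obs, n_items))) for _ in range(n_iter)]
    )
    threshold = np.percentile(random_eigs, 95, axis=0)
    return int(np.sum(observed > threshold))

ratings = pd.read_csv("advising_ratings.csv")   # hypothetical file: one column per item, 1-7 ratings
n_factors = parallel_analysis(ratings.to_numpy(dtype=float))

# An oblique (direct oblimin) rotation is applied only when more than one factor is retained.
rotation = "oblimin" if n_factors > 1 else None
efa = FactorAnalyzer(n_factors=max(n_factors, 1), rotation=rotation, method="ml")
efa.fit(ratings)
loadings = pd.DataFrame(efa.loadings_, index=ratings.columns)
print(loadings.round(3))
```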

We chose maximum likelihood estimation to calculate question loadings for each analysis. Following Preacher and MacCallum's (2003) standards, we considered an item to load on a factor if its loading exceeded .300. Additionally, we required questions to load on one and only one factor; therefore, we discarded questions that loaded on more than one factor (in analyses with more than one factor) as well as those that did not load on any factor.
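This retention rule, a salient loading greater than .300 on exactly one factor, reduces to a short filter over the loading matrix. The sketch below assumes the hypothetical loadings data frame from the previous example and is illustrative only.

```python
# Keep an item only if its absolute loading exceeds .300 on exactly one factor.
# Assumes `loadings` is a DataFrame of items (rows) by factors (columns).
salient = loadings.abs() > 0.300                 # True where a loading is salient
n_salient = salient.sum(axis=1)                  # number of factors each item loads on
kept_items = loadings.index[n_salient == 1]      # loads on one and only one factor
dropped_items = loadings.index[n_salient != 1]   # cross-loading or non-loading items
```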

The following fit indices were used to assess model fit: (a) root mean square error of approximation (RMSEA) (Steiger, 1990), (b) standardized root mean square residual (SRMR) (Jöreskog & Sörbom, 1981), (c) Tucker-Lewis non-normed fit index (NNFI) (Bentler & Bonett, 1980), and (d) the comparative fit index (CFI) (Bentler, 1990). The RMSEA and SRMR are scaled so that low values show good model fit (< .06 excellent, < .10 moderate fit) (Browne & Cudeck, 1993), and the NNFI and CFI are scaled such that high values (> .90) reflect good model fit (Bryant & Yarnold, 1995; Thompson, 2004).
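These cutoffs can be collected into a small helper for reporting. The function below simply applies the thresholds cited above and is an illustrative convenience, not part of the study's analysis code; the example values passed to it are made up.

```python
# Classify fit indices against the cutoffs cited above (Browne & Cudeck, 1993;
# Bryant & Yarnold, 1995; Thompson, 2004). Illustrative only.
def assess_fit(rmsea: float, srmr: float, nnfi: float, cfi: float) -> dict:
    def residual_band(value: float) -> str:
        if value < 0.06:
            return "excellent"
        if value < 0.10:
            return "moderate"
        return "poor"
    return {
        "RMSEA": residual_band(rmsea),
        "SRMR": residual_band(srmr),
        "NNFI": "good" if nnfi > 0.90 else "poor",
        "CFI": "good" if cfi > 0.90 else "poor",
    }

# Hypothetical example values, not results from this study:
print(assess_fit(rmsea=0.05, srmr=0.04, nnfi=0.93, cfi=0.95))
```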

Results

The data were first screened for missing information, multivariate assumptions, and outliers. Six data points were missing primarily due to participants skipping a question in the online survey. These missing data were replaced with linear-trend-at-point calculations through SPSS 20. Eight multivariate outliers were found using Mahalanobis distance as a criterion, but were included in analyses because the results did not change when they were excluded (Tabachnick & Fidell, 2012). All other assumptions were satisfactory.
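Outlier screening of this kind is typically done by comparing each participant's squared Mahalanobis distance to a chi-square cutoff. The sketch below is a Python illustration of that general procedure, not the SPSS steps used here; the p < .001 criterion is the conventional choice (e.g., Tabachnick & Fidell, 2012) and is assumed rather than taken from this study's records.

```python
# Flag multivariate outliers by squared Mahalanobis distance against a chi-square
# cutoff. Illustrative sketch; the data file and cutoff choice are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

def mahalanobis_outliers(df: pd.DataFrame, alpha: float = 0.001) -> pd.Series:
    X = df.to_numpy(dtype=float)
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d_squared = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    cutoff = stats.chi2.ppf(1 - alpha, df=X.shape[1])
    return pd.Series(d_squared > cutoff, index=df.index, name="outlier")

# Usage (hypothetical data file):
# ratings = pd.read_csv("advising_ratings.csv")
# ratings_clean = ratings[~mahalanobis_outliers(ratings)]
```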

We designed the advising scale to examine prescriptive, developmental, and advisor functions; accordingly, we expected three factors. However, scree plots and parallel analyses indicated that a one-factor model would be more appropriate. Therefore, we examined one-, two-, and three-factor models for fit indices and factor loadings. Table 2 contains the fit indices for all experiments, and Table 3 shows the final factor loadings for our first draft of the scale. All items, as they appeared in the instrument, are shown in Table 3.

Table 3. Factor loadings for Version 1 of advising scale


After examining both the factor loadings and fit indices for each model, we selected the one-factor model as the best-fitting combination. Fit indices improve with additional factors, as seen in Table 2. Although the fit indices seem to indicate that the three-factor model was better than the other models, the factor loadings for both the two- and three-factor models were unsatisfactory. Many items split loadings between multiple factors, and when we removed them from the scale, Factors 2 and 3 were eliminated as well. The factor loadings seen in Table 3 show that all questions load strongly on one overall advising factor. These results appeared to indicate that when students rate advising, they refer to their general feelings about advisors. The reliability of the one-factor model was .98 per Cronbach's α, and the average score on the survey was M = 5.32 (SD = 1.22), indicating that student ratings are above the neutral rating of 4 on the Likert scale: t(151) = 13.30, p < .001, Cohen's d = 1.08.
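The reliability and midpoint comparison reported above, Cronbach's α for the single factor and a one-sample t test of mean scores against the neutral rating of 4 with Cohen's d, can be computed in a few lines. The following is a hedged sketch with a hypothetical data layout, not the SPSS syntax used for these results.

```python
# Sketch: Cronbach's alpha for a set of items and a one-sample t test of
# per-participant scale means against the neutral rating of 4, with Cohen's d.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def compare_to_neutral(scale_means: pd.Series, neutral: float = 4.0):
    t_stat, p_value = stats.ttest_1samp(scale_means, popmean=neutral)
    cohens_d = (scale_means.mean() - neutral) / scale_means.std(ddof=1)
    return t_stat, p_value, cohens_d

# Usage (hypothetical item-level data):
# ratings = pd.read_csv("advising_ratings.csv")
# alpha = cronbach_alpha(ratings)
# t_stat, p_value, cohens_d = compare_to_neutral(ratings.mean(axis=1))
```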

However, fit indices for the one-factor model were fairly poor overall. The RMSEA, CFI, and NNFI were outside acceptable ranges: low values for RMSEA (< .10 at minimum) and high values for CFI/NNFI (> .90) are desirable. The SRMR indicated acceptable model fit (0.07) but could also improve with modifications to the scale.

A further examination of our items indicated some problems with scale design. Several items were compound sentences (e.g., “My advisor encourages me to speak freely and listens to what I have to say”) such that students needed to consider multiple parts of an item at once. Therefore, we reworded several items for clarity and retested them in Experiment 2 to examine the factor structure of the second draft of the advising survey (see Table 4).

Table 4. Factor loadings for Version 2 of the advising scale


Experiment 2

Method

Participants

Another set of participants (N = 181) was recruited from the university undergraduate research subject pool and received course credit for their involvement. Four participants were excluded for submitting incomplete surveys, leaving 177 surveys for the analyses. Furthermore, data from 20 participants were excluded from analyses because responses were multivariate outliers as per Mahalanobis distance scores. Table 1 contains the demographic data for all experiments.

Materials

After considering the results of the EFA from Experiment 1, we revised the survey; the revised version contained 30 items. Compound sentences (e.g., “My advisor acts in a professional and ethical manner”) were separated into different items (e.g., “My advisor acts in a professional manner” and “My advisor is ethical”) and reworded for enhanced clarity. Items are listed in Table 4.

Procedure

The procedure for Experiment 2 was exactly the same as that for Experiment 1.

Results

Data were screened for multivariate assumptions and outliers. Missing data points (21 across all surveys) were replaced with linear trend at point and appeared to be missing at random. Data from 20 participants were removed as multivariate outliers, leaving data from 157 participants for EFA examination. We applied the same analysis described for Experiment 1 to the data set of Experiment 2.

Parallel analyses and scree plot examination indicated that one- or two-factor models would be the most appropriate for our new set of advisor-related survey items. Therefore, we analyzed both one- and two-factor models on the 30-item version. For the one-factor model, fit indices were poor, consistent with Experiment 1, with a high RMSEA (0.13) value and low CFI (0.80) and NNFI (0.79) values. The two-factor model showed improved fit indices, with lower RMSEA (0.10) and SRMR (0.04) values and higher CFI (0.90) and NNFI (0.89) values. These fit indices, while not excellent, showed improved fit and were generally in acceptable ranges.

Furthermore, factor loadings for the two-factor model also appeared suitable. Many questions loaded cleanly (with a loading > .30 on only one factor) onto Factor 1, while several questions loaded onto both Factors 1 and 2. These items are shown at the bottom of Table 4, but without factor loadings. Five items on the revised version cross-loaded onto both factors and were removed from further analyses. These items concerned advisor activity outside the scheduled meeting time (grade inquiries, adjustment to college life, and availability) as well as campus resources and the advisor relationship.

We ran an EFA on the 25-item scale to see if removal of these cross-loading items would improve model fit. As seen in Table 2, fit indices were improved or unchanged for the 25-item version of the advising scale. Inspection of the new factor loadings showed that one item still loaded onto both factors, and it was removed before the final analysis. Finally, we examined a 24-item questionnaire with EFA, which yielded good fit indices and appropriate factor loadings for each item. Table 4 shows that 20 items loaded onto a general advising subscale with very strong loadings. These items range from questions about the advising appointment to the relationship between advisor and advisee. The reliability for this factor was high (Cronbach's α = .99). The second factor appears to concern items related to advisor connection and student outreach, specifically about public affairs and student organizations. The factors are correlated (r = .62, p < .01), but the second factor is a reliable subscale with a Cronbach's α of .92, which is high for a 4-item subscale. The mean score for advising functions was 5.72 (SD = 1.30), while the average score for outreach functions was significantly lower: M = 4.58 (SD = 1.42), t(176) = 13.95, p < .001, Cohen's d = 1.06.

Experiment 3

Method

Participants

We recruited 184 participants from the university general human subject participant pool. Demographic data are presented in Table 1. Participants received course credit for taking the survey. Seventeen participants were excluded from further analyses because their scale responses were multivariate outliers; therefore, data from 167 participants were used in the analyses. Fifty-nine participants took the survey twice (once for Experiment 2 and once for Experiment 3), and their responses comprise the test–retest reliability measure.

Materials

We adjusted the 30-item advising scale from Experiment 2 by removing 6 items, which had loaded on multiple factors. The final 24 items can be found in Table 4.

Procedure

The procedure for Experiment 3 was exactly as described for Experiment 1. Participants could retake the questionnaire through the online system, but could not see their original answers. Several weeks elapsed between the first posting of sign-ups for Experiment 2 and the posting of sign-ups for Experiment 3 for undergraduate participants.

Results

Because the factor structure in previous analyses showed good fit with adequate indices and excellent final factor loadings, we tested the advisor scale with confirmatory factor analysis (CFA). In CFA, items are constrained to load only onto their expected factor, and the model is deemed replicable if it fits well under those constraints. Fit indices are similar to those for EFA, with the addition of the ratio of chi-square to degrees of freedom (χ2/df) (Bryant & Yarnold, 1995; Hoelter, 1983) and the Tucker-Lewis index (TLI) (Tucker & Lewis, 1973) in place of the NNFI. The CFA model was programmed into SPSS AMOS 18.0 (Arbuckle, 2006) using maximum likelihood estimation. Low RMSEA and SRMR values indicate good fit (< .06), while CFI and TLI values should exceed .90 to indicate good fit. The χ2/df ratio is used to minimize the effect of sample size on chi-square, and χ2/df values below 3 indicate well-fitting models (Bollen, 1989; Bryant & Yarnold, 1995).
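For readers without access to AMOS, a two-factor measurement model of this form can be specified in open-source software. The sketch below uses the semopy package for Python and hypothetical item names (adv01–adv20 for the advising items, out01–out04 for the outreach items); it illustrates the structure of such a model under those assumptions rather than reproducing the AMOS setup used here.

```python
# Illustrative two-factor CFA specification in semopy (not the AMOS model file).
# Item names are hypothetical placeholders for the 24 scale items.
import pandas as pd
import semopy

advising_items = " + ".join(f"adv{i:02d}" for i in range(1, 21))
outreach_items = " + ".join(f"out{i:02d}" for i in range(1, 5))
model_description = f"""
advising =~ {advising_items}
outreach =~ {outreach_items}
advising ~~ outreach
"""

# Usage with a hypothetical 24-column data set of item ratings:
# ratings = pd.read_csv("advising_ratings_24.csv")
# model = semopy.Model(model_description)
# model.fit(ratings)                                   # maximum likelihood estimation
# print(semopy.calc_stats(model)[["DoF", "chi2", "CFI", "TLI", "RMSEA"]])
```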

The CFA of the two-factor, 24-item scale presented in Table 5 showed good fit values: RMSEA (0.09), SRMR (0.04), CFI (0.94), TLI (0.94), and χ2/df (2.26). All items loaded highly onto their factors, as shown in Table 5. The correlation between factors was still high (r = .72, p < .01), but we found high reliability coefficients for both factors: Factor 1 α = .98, Factor 2 α = .88. The advising factor showed a higher subscale average, M = 5.74 (SD = 1.26), than the outreach functions, M = 4.76 (SD = 4.76), t(166) = 11.63, p < .001, Cohen's d = .87. Test–retest reliability was high for both subscale averages: advising functions (r = .92) and outreach functions (r = .85) showed good reliability across test times.
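Test–retest reliability of this kind is simply the Pearson correlation between subscale scores from the two administrations for participants who completed both. A minimal sketch, assuming two hypothetical data frames of subscale means indexed by participant, follows.

```python
# Sketch: test-retest reliability as the correlation between two administrations
# for participants who completed both. Data frames and column names are hypothetical.
import pandas as pd

def test_retest_r(time1: pd.DataFrame, time2: pd.DataFrame, column: str) -> float:
    both = time1[[column]].join(time2[[column]], lsuffix="_t1", rsuffix="_t2").dropna()
    return both[f"{column}_t1"].corr(both[f"{column}_t2"])

# Usage (hypothetical subscale scores indexed by participant ID):
# r_advising = test_retest_r(scores_exp2, scores_exp3, "advising")
# r_outreach = test_retest_r(scores_exp2, scores_exp3, "outreach")
```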

Table 5. Factor loadings for confirmatory factor analysis (CFA) of advising scale


Discussion

We present another tool for evaluating perceptions of advising through a standardized advising scale. We tested the scale with three samples to determine the best items and scale structure. We reworked items for clarity or eliminated them when they weakened model fit. We included the best combination of items in a final 24-item scale. Even though the original research indicated that three subscales of perceptions (developmental, prescriptive, advisor traits) would aptly measure advising, undergraduates apparently lump many of these facets of advising together. Only two factors emerged: general advising concerns and outreach functions. The outreach subscale may indicate that many students know that extracurricular activities reflect positively on applications in the job or graduate school market. Although our university emphasizes the public affairs mission, many of the freshmen we surveyed may be unaware of the opportunities for nonacademic development, which would lead them to group those items together in a nonspecific category.

When we examined factor subtotals, both groups (from Experiments 2 and 3) showed lower subscale averages for the outreach factor, indicating that either advisors do not cover this material in their sessions or students are not particularly satisfied with the discussions of outreach. This finding may provide an interesting avenue of research, as freshmen may comprise the appropriate target for discussion of these opportunities because their engagement in university life is important early in their careers.

These results may also indicate that the common understanding of student perceptions about advising sessions needs to be retooled. Questions were developed to measure the differences in prescriptive and developmental advising (Creamer & Creamer, 1994; Crookston, 1972/1994/2009; Williams, 2007), but these designations did not emerge during analysis. Students may comprehend advising to be a one-stop shop for scheduling, registration, and graduation questions, but clearly advisors of all types have the opportunity to further engage students in university life. These advisee connections to campus could potentially lead to higher retention of students who otherwise would withdraw or transfer to a university with more appealing extracurricular options.

To further assess reliability, we asked a subset of participants to take the instrument twice over several weeks. The correlations between factor subtotals were quite high, indicating reliability for answers across testing. We calculated Cronbach's alpha for the second and third administration of the final 24-item scale, and the values indicated high reliability as well, which is especially important for scales with few items. Therefore, we believe that the scale presented will be useful in evaluating advising at other universities where assessors wish to understand student perceptions of their advising services. Further, this scale could be paired with other evaluation tools, such as structured interviews (Demetriou, 2005; Hunter & White, 2004) to get a well-rounded view of current programs.

Copyright: 2013

Contributor Notes

Marilee L. Teasley is in her second year in the psychology master's program at Missouri State University (MSU), with an emphasis in experimental psychology. A proud member of NACADA, she works as a career counselor/advisor and teaches major/career exploration courses at the MSU Career Center as her graduate assistantship. Additionally, she is two classes away from completing the Graduate Certificate in Academic Advising from Kansas State University. She received the NACADA Graduate Student Annual Conference Scholarship in 2012. Upon graduation in May, she plans to move to Southern California to pursue a career in academic advising. You can find her on Twitter (@thatgradstudent) tweeting about higher education, advising, and technology.

Erin M. Buchanan is an assistant professor of Psychology at Missouri State University. She has an undergraduate degree in psychology from Texas A&M University, and her master's degree and PhD from Texas Tech University. Her research specialties include applied statistics with a focus on scale development and validation, as well as research on new statistical procedures and their implementation in the social sciences. She mainly teaches undergraduate and graduate statistics courses that cover the whole range of types of statistics, including structural equation modeling. Finally, she also is interested in understanding the underlying structure of our language systems and how those systems interact with our ability to make judgments about the relationships between words.
