This article was originally published in the 2011 edition of the Continuing Higher Education Review.

Introduction

Distance education permeates the field of professional and continuing education to such an extent that quality assurance (QA) is a topic no distance educator or administrator should avoid. Quality assurance is an issue not just for continuing education but also for higher education generally. As former UPCEA Executive Director Kay Kohl (2010) noted, distance technologies have helped reach working adults and grow the institutions that serve their continuing education needs, and in the process have “disrupted institutional structures in areas such as financial aid, quality assessment, and criteria for earning degrees” (p. 15). Given the disruptive impact of distance education and technology on understandings of educational quality, this article will provide an overview of quality-assurance issues in distance education. In that spirit, we will examine topics across four general themes: the external and internal dimensions of quality assurance; major initiatives that have affected distance-education quality assurance; the most promising tools and techniques in the field; and the trends likely to reshape the quality-assurance landscape going forward.

 

The External and Internal Dimensions of QA and the Inputs-Outcomes Debate

The regulatory environment and institutional mission shape how one views quality assurance in the context of distance education, as does the debate about inputs and outcomes as measures of institutional quality. The interplay among these forces is complex and often not aligned.

External


Institutions must comply with external accountability mandates and regulatory requirements, providing transparent, accessible, and meaningful data to stakeholders that include state and federal agencies. Although these agencies are primarily motivated by a need to protect students as consumers, the current system is a patchwork of antiquated and inconsistent regulations created for state-based brick-and-mortar institutions, and it can hinder institutions offering distance and online learning. One fortunate aspect of the recent federal regulation enforcing state authorization for distance education is that efforts are underway to develop a common set of standards or a common application process, such as the Multi-State Reciprocity in Postsecondary Approval and Regulation Project, which is led by the Presidents’ Forum and funded by the Lumina Foundation. In spite of these efforts, however, federal regulation may still generate obstacles to innovation because the regulations focus on instrumental measures such as seat time. Current Title IV requirements that mandate regular and substantive interaction between faculty and students could stand in the way of learning technologies in which learners access content directly, without faculty as mediator or facilitator.

There are also less formal external forces in play, such as Transparency by Design (TbD), a voluntary cooperative in which member institutions provide program-level outcomes data, student demographics, and program information, as well as results of nationally normed assessments such as the Priorities Survey for Online Learners and the National Survey of Student Engagement. WCET/WICHE manages the TbD data from institutions and reports them on a public website, College Choices for Adults (www.collegechoicesforadults.org).

Internal


A second frame—and to our minds a much more meaningful one—from which to view quality assurance is that of internal continuous improvement. The ultimate measure of quality is the degree to which students can demonstrate learning outcomes at the level deemed appropriate for the course and the degree, regardless of delivery mode. Assessment of distance-education course and program outcomes on multiple levels, through valid instruments and methods, is the kind of quality-assurance effort that should receive most of our attention as educators. Technology-enabled courses (or even hybrid courses) have an advantage over conventional face-to-face courses because of the tools that can be incorporated for students to demonstrate learning and the data that can be collected, not only about individual student learning but also for subsequent course and program improvement. Fortunately, in terms of outcomes assessment as the ultimate indicator of institutional effectiveness and quality, both regional and programmatic accrediting bodies are driving all programs—including distance education—in the right direction.

Inputs


The traditional quality indicators most emphasized by the external environment have been input-driven, and the debate over inputs versus outcomes as measures of institutional quality remains prevalent in higher education. The long-dominant input approach is steeped in a view of institutional and programmatic quality that assumes that putting appropriate people and resources into an institution—qualified faculty, qualified and motivated students, solid academic resources, student services, and sufficient funding—leads to the desired outcomes in the form of educated citizens, research to improve lives, and service to communities. Ranking systems encourage institutions to focus on inputs in order to make themselves more attractive to students, passing off institutional prestige as a measure of quality. Our experience in higher education has demonstrated the fallacies of these assumptions, as questions arise about how well our graduates have been educated and about the return on the vast state, national, and private investment.

Outcomes


An encouraging development is that both external and internal stakeholders are increasingly demanding outcomes data to assess institutional quality. The US Department of Education made headlines when it cast a broad net to capture the impact of distance-education providers, especially the for-profit sector. Less publicized are the efforts of institutions to examine student trajectories throughout their college years through instruments such as the National Survey of Student Engagement (NSSE). More than 1,300 colleges and universities have utilized this self-assessment of student learning and students’ personal development, another measurable component of educational quality. NSSE has since announced an update to its survey because “higher education is constantly changing, with increasing demands for diagnostic and actionable data, and rapid adoption of new technologies for … distance learning programs” (NSSE, 2011). The Collegiate Learning Assessment (CLA) was developed to assess “core outcomes espoused by all of higher education – critical thinking, analytical reasoning, problem solving and writing” (Hersch, 2007, p. 6). Among the most comprehensive efforts to measure the actual learning of our students, the CLA “promotes a culture of evidence-based assessment in higher education” (Grigsby, 2009, pp. 58-59). With the release of Academically Adrift (Arum & Roksa, 2011), the Council for Aid to Education applauded the message that adherence to high expectations of quality and academic rigor does matter, even while acknowledging that the book’s “overall portrait of the quality of undergraduate education is deeply disturbing” (Benjamin, 2011).

We are encouraged that external stakeholders are increasingly demanding outcomes data as much as inputs, so one would hope that the two perspectives and needs for quality assurance can merge in frameworks that serve both external accountability requirements and internal needs for continuous improvement. Indeed, some scholars have suggested that rather than focusing on how these external and internal assurance measures differ and sometimes conflict, administrators should think about combining internal and external perspectives to create a lasting and holistic culture of quality for the entire institution (Ehlers, 2009). For example, external accountability requirements can provide “a reason to explore ways to improve the institution…[and] an opportunity to design an inquiry to address institutional needs” (Behr & Walker, 2009, p. 2).

 

Significant QA Initiatives

A variety of quality assurance initiatives promise to influence the design of distance teaching and learning, including best-practice frameworks, quality course rubrics and checklists, and regular discussion forums on teaching.

Best-practice frameworks


The WICHE Cooperative for Educational Telecommunications (WCET), the University of Texas TeleCampus, and the Instructional Technology Council have put together one of the most notable lists of best practices, “Best Practice Strategies to Promote Academic Integrity in Online Education” (2009). The Sloan Consortium (http://sloanconsortium.org/quality_scoreboard_online_program) has recently published the “Quality Scorecard for the Administration of Online Programs,” which contains 70 quality indicators that can aid in the design and evaluation of online programs and help demonstrate programmatic quality to university administration, governance boards, and accrediting bodies. Best practices in online learning have matured from segregated “instructional technology” teaching methods, or standards developed by vendors such as Blackboard’s Greenhouse Awards Program, to integrated best practices based on sound pedagogical principles that transcend modes of delivery.

Quality course rubrics and checklists


The University of Northern Colorado (UNC) has adopted a program that reaches deeper into the practices of faculty by diagnosing rather than evaluating. In the online PhD and Doctor of Nursing Practice degree programs of UNC’s College of Nursing, faculty have embraced peer review, and the institution offers grant support to its faculty for presentations and publications (Dougherty and Roehrs, 2009). Current research by the nursing faculty and the Center for the Enhancement of Teaching and Learning (CETL) focuses on the effectiveness of peer review in the QM-facilitated degree-planning process, with the faculty themselves leading the effort. At UNC’s Monfort College of Business, the dean and faculty support a culture of assessing quality from within, using self-checks against standards, an approach that earned the college the Malcolm Baldrige National Quality Award.

Discussion forums


The University of Colorado (CU) at Boulder has initiated the Collaborative Preparing Future Faculty Network (COPFFN) through its long-established Graduate Teacher Program. The network serves as a check for quality among graduate teaching assistants and prepares doctoral students for the professoriate through professional development in teaching and research. The 14th Annual COPFFN Forum featured multiple sessions on “Assessment Issues for Teachers in Colleges and Universities,” with directors, administrators, faculty, and other specialists from CU and neighboring institutions (COPFFN, 2011). Why did the COPFFN deem assessment at the classroom, program, and institutional levels a key issue for the forum? The answer may be that doctoral-granting institutions (particularly teacher colleges) are obligated not only to prepare future faculty with new strategies for implementing quality in a web-based teaching environment but also to inform them of the institutional culture and climate of assessment and accountability awaiting them as full-time professionals in higher education.

 

Notable QA Tools

Learning analytics comprises methodologies for capturing data aimed at improving the conditions for teaching and learning, drawing on a variety of technological tools, analytical models, and add-on features to course-management systems. Learning analytics has most often been used to identify students at risk so that pedagogical or other interventions can be applied (e.g., the MAP-Works [Making Achievement Possible] program). However, the promise of learning analytics could well extend to identifying student learning styles and abilities, which can lead to customized teaching methods, materials, and curricula for individual students or small groups of students (Johnson, Smith, Willis, Levine, and Haywood, 2011).
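
To make the at-risk identification idea concrete, the following minimal sketch flags students whose recent course activity falls below simple thresholds. The field names and cutoffs are invented for illustration and are not drawn from MAP-Works or any particular learning management system; real early-alert systems weigh many more signals.

```python
# Minimal early-alert rule over hypothetical LMS activity records. Field names
# and thresholds are illustrative assumptions, not any vendor's data model.
from dataclasses import dataclass

@dataclass
class ActivityRecord:
    student_id: str
    logins_last_week: int   # engagement signal
    avg_quiz_score: float   # performance signal, 0-100

def flag_at_risk(records, min_logins=2, min_score=70.0):
    """Return IDs of students falling below the activity or score thresholds."""
    return [r.student_id for r in records
            if r.logins_last_week < min_logins or r.avg_quiz_score < min_score]

records = [
    ActivityRecord("s001", logins_last_week=5, avg_quiz_score=88.0),
    ActivityRecord("s002", logins_last_week=1, avg_quiz_score=74.0),
    ActivityRecord("s003", logins_last_week=4, avg_quiz_score=61.5),
]
print(flag_at_risk(records))  # ['s002', 's003'] -> candidates for intervention
```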

Perhaps the leading factor driving the development and popularity of learning analytics is that the e-learning environment provides an unprecedented opportunity to collect and analyze data to support quality-improvement efforts in student learning and retention. Higher education is increasingly applying the data mining and analytics long used by industry. By analyzing massive data sets, institutions can develop predictive models that estimate whether a student will be successful in an online course and that target intervention strategies; they are often aided in these efforts by software solutions developed by private industry, or they develop proprietary applications. Data analytics can support quality assurance by providing data-driven information for improvements at the course, program, and student-support levels, and the best applications of data mining can deliver actionable reports to faculty, students, and staff, who can then develop targeted strategies and assess the effectiveness and quality of interventions and improvements. Data mining from most learning management systems (LMSs) can show where students spend their time and how this behavior relates to student success, as measured by course completion and learning-outcomes attainment. Data analytics thus enable institutions to drive quality improvements through valid data, not merely anecdotal information or intuition. Examples of data mining and data analytics include projects from Starfish Retention Solutions (see www.starfishsolutions.com), Purdue University’s “Signals” program (see Arnold, 2010), the Iowa Community College Online Consortium (ICCOC), and the Bill and Melinda Gates Foundation.
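
As a sketch of the predictive-modeling approach such projects take, the example below fits a logistic-regression model to synthetic engagement data and uses the predicted completion probability to trigger outreach. The features, data, and intervention cutoff are assumptions made for illustration; projects such as Signals or the PAR framework draw on far richer institutional data sets.

```python
# Sketch of the predictive-modeling idea: fit a logistic-regression model on
# historical engagement data, then use predicted completion probability to
# target interventions. All data here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per student: weekly logins, assignments submitted,
# discussion posts, for 500 historical students.
X = rng.poisson(lam=[5.0, 8.0, 3.0], size=(500, 3)).astype(float)

# Synthetic ground truth: more engagement correlates with course completion.
y = (X.sum(axis=1) + rng.normal(0.0, 3.0, size=500) > 14).astype(int)

model = LogisticRegression().fit(X, y)

# Score a current, low-engagement student and flag for outreach if needed.
current_student = np.array([[1.0, 2.0, 0.0]])
p_complete = model.predict_proba(current_student)[0, 1]
if p_complete < 0.5:  # illustrative intervention cutoff
    print(f"Flag for outreach: estimated completion probability {p_complete:.2f}")
```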

The ICCOC project utilizes Pearson/eCollege’s reporting software, “Enterprise Reporting,” which allows customized reports on various types of student activity and the continuous collection of data related to that activity. ICCOC leaders used the learning-analytics reporting in conjunction with a program of student and academic support interventions, and from fall 2005 to fall 2009 course-completion rates rose from 77 to 85 percent (Leavy and Rheinschmidt, 2010). The Bill and Melinda Gates Foundation has awarded WCET a grant to validate the Predictive Analytics Reporting (PAR) framework. This project will aggregate more than 400,000 student records from six WCET member institutions to conduct large-scale analyses of federated data sets within postsecondary institutions, to better inform loss prevention, and to identify drivers of student progression and completion. A critical area of focus for this project will be identifying factors that affect loss, progression, and completion for the 26-and-under demographic in the United States.

Perhaps as important as developing appropriate instruments to assess programmatic and instructional quality is finding tools that help distance-education leaders administer, manage, and analyze assessments effectively. EvaluationKIT is software that provides a systematic approach to these functions; it can be used in conjunction with a web portal or learning management system, or it can stand alone as an assessment system. The system provides “dashboards” that serve as a single point of contact for course and program assessment information for administrators, instructors, and students. Users can customize reports, and a user hierarchy makes it possible to control the level and scope of data each role can see. Web hosting and storage are an important part of the package. The key to understanding the power of EvaluationKIT is that it is merely a web-based framework for effectively administering assessment instruments; appropriate administrative and academic leaders still determine the content of assessments and the analysis of results (EvaluationKIT, 2011).
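
The role-scoped reporting idea is easy to picture in code. The sketch below models it generically: each role sees assessment results only at its own level of aggregation. The roles, record fields, and scopes are invented for illustration; this is a conceptual sketch, not EvaluationKIT’s actual data model or API.

```python
# Generic model of role-scoped assessment reporting: each role sees results
# only at its own level of aggregation. All names here are hypothetical.
from statistics import mean

# role -> the slice of results that role is entitled to see
SCOPES = {"student": "own", "instructor": "course", "chair": "program", "dean": "college"}

def report(role, results, user_id=None, course=None, program=None):
    """Return only the assessment data the given role may view."""
    scope = SCOPES[role]
    if scope == "own":        # a student sees only their own scores
        return [r for r in results if r["student"] == user_id]
    if scope == "course":     # an instructor sees one course
        return [r for r in results if r["course"] == course]
    if scope == "program":    # a chair sees program-level aggregates
        scores = [r["score"] for r in results if r["program"] == program]
        return {"program": program, "mean_score": mean(scores)}
    return {"college_mean": mean(r["score"] for r in results)}  # dean: college-wide

results = [
    {"student": "s1", "course": "NURS701", "program": "DNP", "score": 92},
    {"student": "s2", "course": "NURS701", "program": "DNP", "score": 81},
    {"student": "s3", "course": "NURS810", "program": "PhD", "score": 88},
]
print(report("chair", results, program="DNP"))  # chair view: DNP mean 86.5
print(report("dean", results))                  # dean view: college mean 87
```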

The University of Northern Colorado adopted EvaluationKIT to evaluate its online courses and programs. Most attractive were the quality checks and separate reporting mechanisms at multiple levels: student, instructor, chair, program director, and dean. The CETL, whose mission is to improve quality in the classroom, maintains a strong faculty-centered approach and stresses that the reports the tool generates are faculty generated and shared only with the faculty. The integrity of the center depends on this non-punitive approach, and the tool provides summative feedback for improvement. At another level, assessments may be modified and redirected for the department chair or program director, with a design aimed at course or program review; the dean of the college may likewise build another assessment for college-level purposes.

 

Leading Influences on Future QA Decisionmaking

This article offers a number of perspectives on the issues that seem most relevant in 2011. Leading QA initiatives, technology and data-mining tools, and the interrelationships between external and internal accountability are significant parts of the current landscape. A number of considerations are likely to reshape that landscape over time; those trends include the globalization of higher education, the growing reliance on contingent faculty, and the increasing regulation of higher education.

Demand for higher education globally is predicted to increase: the number of students in higher education worldwide will grow from 120 million to 150 million by 2025 (Wiley, 2010). There is already growing recognition of the circulation of academic talent—both students and faculty—across national borders (Staley & Trinkle, 2011). Some of this takes the form of joint degree programs, international branch campuses, and distance-education programs in which students stay in their home countries and earn a foreign degree. Transnational education (TNE) is part of this globalization phenomenon and has created challenges for quality-assurance efforts, as even many signatories to the Bologna Accord do not have quality standards for TNE-related programs. Many internationally recognized organizations are stepping forward to guide accreditation and quality-assurance agencies, including UNESCO/OECD (the United Nations Educational, Scientific, and Cultural Organization and the Organisation for Economic Co-operation and Development), INQAAHE (International Network for Quality Assurance Agencies in Higher Education), and ENQA (European Association for Quality Assurance in Higher Education). As TNE programs create increasing numbers of multilateral academic relationships, US distance-education leaders will need to gain and maintain awareness of new and developing guidelines (Bennett, 2010).

An issue often left unspoken in higher education discussions is the steady increase in faculty contingency. Most distance-education administrators have utilized part-time and non-tenured faculty; though they do not come without costs, these faculty provide instructional staffing options that are often more flexible and affordable than a permanent corps of faculty. It is worth noting that faculty contingency is not only a distance-education phenomenon but also a growing trend in higher education generally. A recent article noted that in 1975, approximately 43 percent of all postsecondary faculty were on contingent appointments; by 2010, that number had risen to approximately 75 percent (Maisto & Street, 2011). The implications of a shrinking proportion of permanent faculty are significant. For one, faculty are likely to play a reduced role in many higher-education functions, ranging from instructional decision making to quality assurance. This can result in different views on what factors determine quality and on who assures that quality.

The current regulatory environment for distance educators and administrators is complex and difficult. Regulations that the US Department of Education issued in October 2010 cover topics such as state authorization to deliver distance-education programs out of state and confirmation that non-degree programs lead to gainful employment (see US Department of Education, 2011). These regulations arose mainly from concerns regarding the financial and operational practices of some for-profit providers, but they have ensnared public, non-profit, and for-profit providers alike (WICHE Cooperative for Educational Telecommunications, 2011).

Given the complexity, expense, and staff time required for compliance, it is difficult to discern any silver lining in the cloud of regulation. One positive aspect is that the regulations are causing us to think about foundational questions regarding distance education and administration. For example, when state authorization regulations cause us to look deeply into who and where our students are, we may come out knowing more about our students and their needs. Similarly, gainful-employment requirements can make us connect the programs we develop to jobs, grounding distance educators and administrators more firmly in the work of preparing graduates for an increasingly challenging work environment.

Judging by the content of the last issue of the Continuing Higher Education Review, distance education features prominently as a concern of continuing education, and its role is becoming so great that eventually it may be difficult to discern whether distance education is part of continuing education, or continuing education is part of distance education. No matter what its role, distance education must inevitably include quality assurance as a key issue.

 

References

Arum, R. and Roksa, J. (2011). Academically Adrift: Limited Learning on College Campuses. Chicago: The University of Chicago Press.

Arnold, K. (2010). “Signals: Applying Academic Analytics.” EDUCAUSE Quarterly, 33(1).

Behr, M. and Walker, I. (2009). “Getting Past Accountability.” Inside Higher Ed, June 2, 2009. Retrieved from http://www.insidehighered.com/layout/set/popup/views/2009/06/02/behr.

Benjamin, R. (2011). “CAE Applauds the Publication of Academically Adrift: Forms Advisory Committee to Consider Additional Uses of CLA Data.” Retrieved from http://www.collegiatelearningassessment.org/pressrelease11811.html.

Bennett, P., Bergan, S., and Cassar, D. (2010). “Quality Assurance in Transnational Higher Education: ENQA Workshop Report 11.” (ED512347).

Cheng, M. (2011). “Transforming the Learner versus Passing the Exam: Understanding the Gap between Academic and Student Definitions of Quality.” Quality in Higher Education, 17(1), 3-17.

COPFFN. (2011). Retrieved from http://gtp.colorado.edu/events/colloquia/copffn_forum_current.

Dougherty, J. and Roehrs, C. (2009). “The Application of Best Practice Standards in Internet-based Education.” Paper presented at EduLearn09, International Conference on Education and New Learning Technologies, Barcelona, Spain. Retrieved from http://www.iated.org/edulearn09/ and http://www.iated.org/edulearn10/publications.

Ehlers, U. (2009). “Understanding Quality Culture.” Quality Assurance in Education, 17(4), 343-363.

EvaluationKIT. (2011). Retrieved from http://www.evaluationkit.com/.

Grigsby, M. (2009). College Life through the Eyes of Students. Albany: State University of New York Press.

Hersch, R. (2007). “Going Naked.” AAC&U Peer Review, 9(6), 1-4.

Johnson, L., Smith, R., Willis, H., Levine, A., and Haywood, K. (2011). The 2011 Horizon Report. Austin, TX: The New Media Consortium.

Kohl, K. (2010). “Coping with Change and Fostering Innovation: An Agenda for Professional and Continuing Education.” Continuing Higher Education Review, 74, 9-22.

Leavy, M. and Rheinschmidt, S. (2010). “How the ICCOC Uses Analytics to Increase Student Success.” EDUCAUSE Quarterly, 33(4).

Maisto, M. and Street, S. (2011). “Faculty Equity and the Goals of Academic Democracy.” Liberal Education, 97(1), 6-13.

Malcolm Baldrige National Quality Award. Retrieved from http://mcb.unco.edu/About/Baldrige/.

NSSE. (2011). “National Survey of Student Engagement.” Retrieved from http://nsse.iub.edu/nsse2013/.

Staley, D. and Trinkle, D. (2011). “The Changing Landscape of Higher Education.” EDUCAUSE Review, 46(1), 1-6.

US Department of Education “Dear Colleague” letters GEN 11-10, GEN 11-05, and GEN 11-11. (2011). Retrieved from http://www.ifap.ed.gov/dpcletters/GEN1110.html and http://www.ifap.ed.gov/dpcletters/attachments/GEN1111.pdf.

Western Cooperative for Educational Telecommunications, University of Texas TeleCampus, and the Instructional Technology Council. (2009). “Best Practice Strategies to Promote Academic Integrity in Online Education.” Retrieved from http://wcet.wiche.edu/wcet/docs/cigs/studentauthentication/BestPractices.pdf.

WICHE Cooperative for Educational Technologies. (2011). Retrieved from http://wcet.wiche.edu/advance/financial-aid-and-distance-education.

Wiley, D. (2010). “The Open Future: Openness as a Catalyst for an Educational Reformation.” EDUCAUSE Review, 45(4), 15-20.

 

(c) 2011 W. Reed Scull, Director, Outreach Credit Programs and Associate Dean, The Outreach School, University of Wyoming, Laramie, WY; David Kendrick, Director, Center for the Enhancement of Teaching and Learning, University of Northern Colorado, Greeley, CO; Rick Shearer, Director, World Campus Learning Design, Pennsylvania State University, State College, PA; Dana Offerman, Provost and Chief Academic Officer, Excelsior College, Albany, NY
