Developing an Evaluation Model for a Virtual Learning Environment

Computer Based Learning and Training

Hilary Page-Bucci

(April 2002)



Table of Contents



Managed and Virtual Learning Environments

A Selection of VLE Packages Available

Managed Learning Environment Content

Virtual Learning Environment Content

Traditional Evaluation Methodologies

Expert Evaluation

Heuristic Evaluation

Cognitive Analysis - 'Walk Through'

Established Evaluation Guidelines/Rules

Conclusion: Own Model







Significant changes are taking place within Education; in some sectors of FE and HE, the growth and development of computer-based technology has given rise to increasing pressure for the implementation of on-line course delivery. This paper summarises some of the many different approaches to evaluation; subsequently, through research and discussion, the author will develop an evaluation model for a virtual learning environment for the delivery of vocational training courses.




It is suggested that the quality of instructional delivery within a single learning environment is increasingly enhanced by the ability to offer the student access to teaching in the form of graphics, animation, audio and video as well as the usual text-based materials. If this is the case, how do we go about identifying the most appropriate software or learning environment to suit our training and delivery needs? To facilitate the process and allow an informed decision, we need to explore the various options for evaluation and the types of learning environment available. As this paper is mainly concerned with the delivery of vocational training, the investigation will lead to the development of a set of principles for evaluation that will guide the choice of virtual learning environment for our purpose.


Managed and Virtual Learning Environments


Clarification of the terms 'Managed Learning Environment' (MLE) and 'Virtual Learning Environment' (VLE) is required, as they are often used interchangeably. The following definitions have been suggested:

"Virtual Learning Environment (VLE) refers to the components in which learners and tutors participate in on-line interactions of various kinds, including on-line learning."

"Managed Learning Environments (MLE) include the whole range of information system and processes of the educational institution (including the VLE if it has one) that contribute directly or indirectly to learning and learning management" (Becta, 2000)

Figure 1 Managed Learning Environment (Source: Becta 2001)

Although the thesis of this paper concerns the evaluation of virtual learning, the investigation and eventual choice of package will be a 'Managed Learning Environment', because an MLE would be purchased as a 'complete package'. It is a computer-based system that provides the infrastructure for a 'virtual' university; its nucleus, the 'Virtual Learning Environment', is the section that this paper discusses.


A Selection of VLE Packages available


  • First Class
  • Learning Environment
  • Virtual Campus


Managed Learning Environment Content


Figure 1 shows the suggested general content incorporated in an MLE. It could provide all or part of course delivery, and would provide various tools to enhance teaching and learning. These could include online courses that simulate the traditional classroom environment by using schemes of work, lesson plans, course notes, assignments and online discussion rooms. In addition, students are provided with the opportunity to communicate with the instructor or other students by e-mail, bulletin boards and live chat rooms. Capabilities such as online assessment, simulations, multimedia, course delivery and access to external resources provide potential advantages over lecture-only classes, and tutors are able to track the progress and access times of students.

The delivery of the courses would be via a common user interface, in most instances, a web browser.

"Evaluation is any activity that throughout planning and delivery of innovative programmes enables those involved to learn and make judgements about the starting assumptions, implementation processes and outcomes of the innovation concerned" (Stern, 1998 cited by Jackson, 1998)


Jackson (1998) suggests that, by this definition, the evaluation of learning technologies provides the designer or user with sufficient evidence to make confident judgements regarding their effectiveness. The author suggests that the main purposes of evaluation are to improve teaching, ensure quality and demonstrate accountability.

Some overlap between Human Computer Interaction (HCI) and the design of the courseware is inevitable when evaluating educational software. Although this paper considers evaluation from the perspective of an educational setting, a brief outline of HCI and design will be given.

It has been suggested that instructional design principles should be applied to develop pedagogically effective learning materials. Ritchie and Hoffman (1997) suggest that well-designed courses include elements that motivate the learner, specify what is to be learned, prompt the learner to recall and apply previous knowledge, provide new information, offer guidance and feedback, test comprehension, and supply enrichment or remediation. Web Based Instruction should be designed to accommodate individual learning styles; this does not mean using all available technologies, but rather using those technology mechanisms that will directly enhance learning.

To encourage dynamic involvement, Uber-Grosse et al (1999) suggest VLE curriculum developers and instructors design assignments that promote student interaction. Cothrel and Williams (1999) argue that a VLE has met its intent when members are found to be extending their relationships beyond the online discussion space. The question here would be: what is the intent? Surely the intent of the VLE is to promote learning; would student interaction alone promote learning? Vygotsky (1986) emphasises social dialogue and interaction as an essential part of the learning process; even twenty-five hundred years ago, Socrates was using interactive learning with students, asking questions to promote active thinking on their part. From these summations it is a reasonable assumption that any adopted VLE should have some form of interactive and communicative involvement.

It has already been established that educational software needs to be evaluated in order to make decisions on its purchase and use. Further questions might be: Who will carry out the evaluation? What will be the focus of the evaluation? Should it be the teachers who evaluate the learning outcomes? Should it be the students who evaluate usability? Is it necessary to involve the learners in the evaluation? On an important issue, Perkins (1997) suggests that any evaluation of software requires a thorough knowledge of the company, its strategic performance plans, its data and information requirements, and its practices and processes. In other words, we should know exactly what we wish the software to do. Typically, software tools are evaluated on the basis of the features they provide (Britain et al, 2000). From these conjectures we will analyse the 'nucleus' (Figure 2) of the managed learning environment and ascertain whether this structure is sufficient to inform us of the required features.


Figure 2 Virtual Learning Environment (Source: Becta 2001)


Virtual Learning Environment Content


Curriculum Mapping

The information presented here would be in accordance with examining body requirements and would possibly include course information and structure, deadline dates for assignments and details of progression. The software needs to be sufficiently versatile to show different levels of coursework using a web-type hierarchy.


Learning Resources

Learning Resources are included within the MLE, but not within the 'nucleus'; the author suggests that they should surely be part of the 'delivery system'.

Learner Control and Student Engagement: arguably, these components could be categorised with motivation. If the learner is able to adapt the learning environment to make it more personal and visually appealing, it is likely to be more conducive to learning; this was also suggested in a report by Keister and Gallaway (1983) initiated by the NCR Corporation at Dayton, Ohio.

Motivation: This is a primary factor in instructional models (Carroll, 1963). Reeves (1997) suggests that any new approach 'promises to be more motivating than any that have come before'. From this, we could assume that, as 'virtual learning' is a fairly recent innovation, the 'complete package' should be motivating in itself. Arguably, the initial design will still have an impact; according to Keller (1987), motivational aspects must be consciously designed into multimedia.

Instructional tools to support learning activities, as suggested by Ginsburg (1998), will give learners opportunities to develop technology skills and experience; the skills gained, particularly around the meaningful use of everyday technology applications, can be transferred to other settings such as the workplace. These tools should accommodate individual differences and include a variety of skills.


Assessment

We need to consider assignments and grading in accordance with examining body requirements. How will assignments be presented to the tutor - by file upload? Students need to track their progress, perhaps with a diary or chart. Formative and summative assessment and feedback facilities would be required to support students in studying effectively and to help in identifying student needs (Race, 1994).

Tutor Support

This is encompassed within each of the sectors, as tutor support is an intrinsic part of every course: to facilitate learning, guide, advise and inform. How will tutorials take place? Confidentiality is a prime factor for both student and tutor, requiring secure access and personal login codes.

Communication and Interaction

This is an important aspect of learning, as suggested by Vygotsky (1986). We need to think about communication as a three-way process (Figure 3):

Figure 3 Communication Cycle (HDPB 2002)

This communication/interaction could take place via e-mail, chat rooms, discussion boards, interactive whiteboard, and/or video conferencing.

There might also be a shared file area. Ginsburg (1998) suggests:

'The most powerful and engaging educational activities that use technology are those that are complex, realistic, and may have more than one reasonable methodology and answer';

To this end the software should be able to accommodate different types of files, add multimedia and be able to access material on CD-ROM.


Tracking

This appertains to funding initiatives, quality issues, retention and achievement. With regard to creating auditable evidence for the Learning and Skills Council, Ofsted and the ALI, students will need to be tracked for attendance/access on the course. The evidence also needs to be saved, printed, and perhaps exported to another piece of software and linked to a database. This means that the software package will need an inbuilt tracking system.

According to Laurillard (2002), the complete list of events to be carried out by the tutor is:

  • Activating motivation
  • Informing learner of the objective
  • Directing attention
  • Stimulating recall
  • Providing learner guidance
  • Enhancing retention
  • Promoting transfer of learning
  • Eliciting performance
  • Providing feedback

Therefore, from the above dissection of the virtual learning environment, it appears that most of the suggested components are 'part and parcel' of the learning experience. The author concludes that these will shape our emerging set of principles.


Traditional Evaluation Methodologies


One approach to evaluation is cognitive analysis, also called predictive evaluation or the cognitive walkthrough; the aim of this kind of evaluation is to predict the kinds of problems that users will encounter and to identify strategies to help them.

There are several methods used for predictive evaluation. Most of them do not involve user testing, which makes them cost-effective, and as no specialist equipment is required they are relatively quick. Three of the better-known methods, discussed in the following sections, are expert evaluation, heuristic evaluation and the cognitive walkthrough.

Squires et al (1994) suggest the focus of attention should be the predictive evaluation of educational software, 'i.e. the evaluation of software prior to its use, which typically occurs when teachers are either planning lessons or making purchasing decisions'. They argue that thinking of learning and usability as independent issues leads to superficial evaluations of educational software, and that many teachers are not trained to consider usability. This argument is supported by the checklists developed by the National Council for Educational Technology for evaluating CD-ROMs and by the American Microsoft Evaluators Guide, which reveal a lack of attention to usability. Preece et al (1999), however, argue that using checklists to evaluate predictively is a flawed process: 'formal predictive evaluations, typically based on a checklist, fail to take account of the widely accepted view of learning as a socio-constructivist activity' (Preece et al, 1999). They seem to be of the opinion that heuristic evaluation (described later) 'prevents evaluators inadvertently forgetting to cover part of the evaluation'. The writers have conflicting views; although we may assume the writers are of reasonable repute and educational status, we need to draw our own conclusions.


Expert Evaluation


This methodology is often used in the commercial world. It is an evaluation of a system or a web site by a person skilled in usability and user interface design. The expert role-plays less expert users in order to identify potential usability problems, based on their knowledge of design principles, standards and ergonomics. The evaluation tends to be 'formative': the tester, an expert in the particular subject area, works through the program. Arguably, it could be difficult to classify the rationale for design changes; Mulholland et al (1998) also suggest this type of evaluation could be problematic because the tester is not usually a 'real-life' user, which means that the problems a 'typical' user might encounter may not be found. When analysed, this process is similar to predictive evaluation in trying to find the problems a user might meet.


Heuristic Evaluation


Another important issue deals with evaluation from the user's point of view. How will we know if the software is conducive to learning? Do we actually need to know at this point of the evaluation?

The heuristic approach (originally proposed by Nielsen and Molich, 1990) is a quick, cheap and easy method generally used for evaluating the HCI, although the validity of Nielsen's guidelines has been questioned and alternative guidelines have been suggested by other writers such as Tognazzini (1996).

Nielsen (1994) refined the original set of heuristics into ten principles:

  • Visibility of system status
  • Match between the system and the real world
  • User control and freedom
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Aesthetic and minimalist design
  • Help users recognise, diagnose and recover from errors
  • Help and documentation

The process requires that a group of testers (often 'experts') examine the interface and judge its compliance with recognised usability principles (called 'heuristics'). The intended outcome is the identification of any usability issues that need to be addressed as part of the design process, although Mayes and Fowler argue that educational software design requires the design of effective tasks rather than interfaces. Arguably, heuristic evaluation is popular in Web development circles because it requires few resources in terms of money, time or expertise.

One or more evaluators systematically inspect a user interface and rate it against a set of recognised principles. In an individual evaluation, each evaluator reviews the interface independently and reports the problems found.
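The pooling of individual reports described above can be sketched in a few lines of code. This is an illustrative sketch only: the severity scale (0-4, after Nielsen) and heuristic names are standard, but the evaluators and their findings are invented for the example.

```python
# Sketch: pooling individual heuristic-evaluation findings and ranking
# problems by mean severity. The evaluator data is hypothetical.
from collections import defaultdict
from statistics import mean

# Each evaluator independently reports (heuristic violated, severity 0-4).
findings = {
    "evaluator_1": [("Consistency and standards", 3), ("Error prevention", 2)],
    "evaluator_2": [("Consistency and standards", 4), ("Help and documentation", 1)],
    "evaluator_3": [("Error prevention", 3)],
}

def aggregate(findings):
    """Pool the individual reports and rank problems by mean severity."""
    by_heuristic = defaultdict(list)
    for reports in findings.values():
        for heuristic, severity in reports:
            by_heuristic[heuristic].append(severity)
    ranked = sorted(
        ((mean(sevs), heuristic) for heuristic, sevs in by_heuristic.items()),
        reverse=True,
    )
    return [(heuristic, round(avg, 1)) for avg, heuristic in ranked]

for heuristic, severity in aggregate(findings):
    print(f"{severity}  {heuristic}")
```

Only after the independent inspections are the reports combined, which is what keeps the evaluations from biasing one another.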


Cognitive Analysis - 'Walk Through'


The cognitive walkthrough was developed as an additional tool in usability engineering, to give design teams a chance to evaluate early mock-ups of designs quickly. The process requires a detailed review of a sequence of actions.

In the cognitive walkthrough, the sequence of actions refers to the steps that an interface requires a user to perform in order to accomplish some task. The evaluators then step through that action sequence to check it for potential usability problems. Usually, the main focus of the cognitive walkthrough is to establish how easy a system is to learn; more specifically, the focus is on learning through exploration. Experience shows that many users prefer to learn a system by exploring its functionality hands-on, rather than after sufficient training or examination of a user's manual, so the checks made during the walkthrough ask questions that address this exploratory kind of learning. To do this, the evaluators go through each step in the task and provide a story about why that step is or is not good for a new user.

During a walkthrough, the evaluators model the user as repeatedly cycling through four steps:

  1. The user sets a goal to be accomplished with the system (for example, "check spelling of this document").
  2. The user searches the interface for currently available actions (menu items, buttons, command-line inputs, etc.).
  3. The user selects the action that seems likely to make progress toward the goal.
  4. The user performs the selected action and evaluates the system's feedback for evidence that progress is being made toward the current goal.

Woodward (1998) argues that it may not be appropriate for designing a completely new system, but it has been suggested that walkthroughs reveal a high proportion of the serious interface problems experienced by users.
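The four-step user model above can be recorded as a simple checklist per action. This is an illustrative sketch, not a tool the literature describes: the questions paraphrase the four steps, and the example task, actions and answers are invented.

```python
# Sketch: recording a cognitive walkthrough of one task.
# One yes/no question per step of the four-step user model.
QUESTIONS = [
    "Will the user form the right goal at this point?",
    "Is the needed action visible in the interface?",
    "Will the user connect the action with their goal?",
    "Does the feedback show progress toward the goal?",
]

def walk_through(task, steps):
    """steps: list of (action, [bool, bool, bool, bool]) answers.
    Returns the failure stories the evaluators must then explain."""
    problems = []
    for action, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append(f"{task} / {action}: {question}")
    return problems

issues = walk_through(
    "check spelling of this document",
    [
        ("open Tools menu", [True, True, True, True]),
        ("choose 'Spelling'", [True, False, True, True]),  # item hidden in a submenu
    ],
)
for issue in issues:
    print(issue)
```

Each "no" answer becomes a failure story, which is exactly the narrative output the walkthrough method asks evaluators to produce.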


Established Evaluation Guidelines/Rules


Writers have produced various sets of guidelines, which reflect different contexts and purposes; some focus on pedagogic values, some on student learning and some concentrate on the design of the software and HCI. It is also suggested that learning materials must display a number of particular elements. Vaille (1999) argues that six categories form the basis of evaluation for effective online learning:

  1. Learning design
  2. Curriculum and standards alignment
  3. Educational content
  4. Learner support resources
  5. Teacher support resources
  6. Site design

Ginsburg (1998) presents a helpful way to think about integrating technology into adult learning that is also applicable to vocational education by proposing four basic approaches:

  1. Technology as curriculum
  2. Delivery mechanism
  3. Complement to instruction
  4. Instructional tool

Each approach has its benefits and limitations, but the last, technology as instructional tool, is suggested to be superior to the others, and it is the one we are most concerned with. In this approach the primary instructional goals remain the same, with technology being used to enrich and extend them; the approach moves technology beyond being an end in itself to being a tool that is integral to learning (Sulla, 1999). However, as some adult and vocational education programmes provide instruction about the technology itself and the skills to use it, and the concept within vocational training is about proving one's performance in using skills, technology as curriculum may be the most appropriate approach in such settings.

As an educational institution we must also take into account the constraints placed upon us in making a decision.

Inevitably, the consequences of these constraints could put considerable pressure on the choice of software, so the evaluation becomes mainly concerned with which package will give the better overall operability. At the University of Maryland a table of principles (Figure 4) was formed for evaluating and selecting Web Course Management tools; their final recommendation was for WebCT.

'This was a product of committee discussions, online evaluation, vendor demonstration, review of literature, individual contacts, testimonials from existing users and institutions, scalability, integration with current infrastructure, and ratings comparison.' (Hazari, 1998)

  • Annotation
  • Browser support
  • Bulletin Board
  • Calendar
  • Chat
  • E-mail
  • File uploads
  • Graded Assessment
  • HTML Links
  • Import/Export Capabilities
  • Instructor Customisation
  • Listserv support
  • Login Security
  • Multimedia
  • Multiple Security Levels
  • Online grading
  • Online help
  • Progress Tracking
  • Self Assessment
  • Set-up wizards
  • Student Groups
  • User Interface
  • Whiteboarding
  • CGI Scripts
  • Course Archive/Backup
  • Database Access
  • Development Platform (OS, Web)
  • EXE file support
  • Java
  • Logging
  • Security
  • Server Type (Unix, NT)
  • SSL Compliance
  • Student data batch input


Figure 4 Evaluation Criteria for VLE (Source: Hazari, 1998)
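A feature-matrix comparison of the kind shown in Figure 4 is often reduced to a weighted score. The sketch below illustrates the arithmetic only: the criteria are drawn from the Figure 4 list, but the weights and the candidate packages' feature sets are invented for the example, not taken from the Maryland evaluation.

```python
# Sketch: a weighted feature-checklist comparison of candidate VLEs.
# Criteria come from the Figure 4 list; weights and packages are hypothetical.

criteria = {              # criterion: weight (3 = essential, 1 = desirable)
    "Bulletin Board": 3,
    "Chat": 2,
    "File uploads": 3,
    "Graded Assessment": 3,
    "Login Security": 3,
    "Progress Tracking": 3,
    "Whiteboarding": 1,
}

candidates = {            # which criteria each hypothetical package satisfies
    "Package A": {"Bulletin Board", "Chat", "File uploads", "Login Security"},
    "Package B": {"Bulletin Board", "File uploads", "Graded Assessment",
                  "Login Security", "Progress Tracking"},
}

def score(features):
    """Sum the weights of the criteria a package satisfies."""
    return sum(w for criterion, w in criteria.items() if criterion in features)

ranking = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
for name in ranking:
    print(name, score(candidates[name]))
```

As Britain et al (2000) caution, a score like this captures only the presence of features, not how well they are integrated to facilitate learning, so it can inform but not replace the committee-style judgement the Maryland group describes.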

The choices made about which technologies to use, as well as how to use them, will "reflect whatever values the educator holds - consciously or subconsciously - about her/his relationships with learners, and their use will invariably bring advantages and disadvantages" (Burge and Roberts, 1993). Evaluation should have a specific focus. The more traditional methodologies veer towards measuring learning outcomes and are still very much 'teacher'-centred, whereas the newer trends in courseware evaluation are more concerned with changes in focus and with issues of validity and applicability (Mulholland et al, 1998).


Conclusion: Own Model


As the focus of this paper is to formulate a set of rules for deciding the applicability of courseware, the author suggests a mixed-method approach. As Reeves (1991) also suggests, the 'phenomena involved in learning are so complex and so difficult to measure that multifaceted evaluation methods are required to obtain meaningful information'.

A Virtual Learning Environment should have the capabilities to improve student learning according to the requirements of the faculty curriculum. It should also be a tool for providing access to a distributed learning experience that should be (Stiles, 2000):

  • Learner Centred
  • Active
  • Social
  • Supportive
  • Holistic
  • Synthetic
  • Effective

After taking into account the constraints and by analysing and synthesising the documented research the following set of principles for evaluation was developed to guide the choice of virtual learning environment.

  • Pedagogical effectiveness
  • Variety of learning experiences
  • Interactive subject matter
  • Ability to show all course details/qualification aims
  • Measurable assessment methods (formative and summative)
      • Test & quiz applications/question/answer/feedback
  • Ability to allow students different types of access according to needs
  • Interactive involvement & communications facilities
      • Discussion groups
      • Chat room
      • Bulletin board
      • E-mail functions
      • Conferencing facilities
  • Learner control
      • Be able to personalise interface
      • Track & record own progress
      • Visually attractive interface
      • Secure access and log-ins
      • Personal diary/calendar
  • Teacher control
      • Track progress
      • Record grades
      • Retention & achievement
      • Adaptability/personalisation
      • Secure access
      • Be able to modify student/course details
      • Is it attached to a database?
      • Are results printable/exportable?
      • Ability to change the 'course' interface?
  • Resource availability: instructional tools (Ginsburg, 1998)
      • Send and receive files
      • Different types of files & multimedia accepted and supported
      • How easy is it to upload files?
      • Ability to add links to the web?
  • Technical support
      • Training and support in using the VLE
      • Accessible help within package

Although it is suggested that 'Institutions are unlikely to find any single VLE perfect for all purposes' (Stiles, 2000), the author is of the opinion that it is not so much the choice of software as the ability to use it to its full potential that matters. Sorge et al (1993) similarly suggest 'there is no one clearly superior software product, website or piece of hardware - any piece of software can be effective if embedded in an appropriate curriculum and surrounded with support materials'. This suggestion is reinforced by Britain et al (2000), who report:

'The answer…. lies not solely in the features of a system, but in how they are integrated to facilitate learning and administration and what metaphors are constructed to guide the way the system is used'.

Once the VLE has been chosen, it is suggested that the package be piloted for a period of time in order to compile qualitative and quantitative data pertaining to a more pedagogic evaluation. It should also be emphasised that evaluation is a continuous process; in the light of this, any problems, misrepresentations or omissions can be addressed.



References

Becta: British Educational Communications & Technology Agency (2000) Evaluating Software and Web Pages. Available online: [25.3.02]

Bloom, B. S (1981) All Our Children Learning. New York: McGraw-Hill.

Britain, S & Liber, O (2000) A Framework for Pedagogical Evaluation of Virtual Learning Environments. [9.3.02]

Burge, E J and Roberts, J M (1993) Classrooms with a Difference: A Practical Guide to the Use of Conferencing Technologies. Toronto: Distance Learning Office, Ontario Institute for Studies in Education, (ED 364 206)

Calder, J & McCollum,A (1998) Open and Flexible Learning in Vocational Education and Training. London: Kogan Page

Carroll, J (1991) Designing Interaction - Psychology at the Human-Computer Interface. Cambridge: Cambridge University Press

Christian-Carter, J (2001) Mastering Instructional Design on Technology-Based Training. London: CIPD

Cothrel, J & Williams, R (1999) On-line Communities: Helping Them Form and Grow. Journal of Knowledge Management, 3 (1)

Dix, A; Finlay, J; Abowd, G & Beale, R (1993) Human-Computer Interaction. London: Prentice Hall

Ellington, H; Percival, F & Race, P (1995) Handbook of Educational Technology. London: Kogan Page

Gardiner, M & Christie, B (1987) Applying Cognitive Psychology to User-interface Design. New York: Wiley

Ginsburg, L (1998) "Integrating Technology into Adult Learning." In Technology, Basic Skills, and Adult Education: Getting Ready and Moving Forward, Information Series no. 372, edited by C. Hopey. Columbus: ERIC Clearinghouse on Adult, Career, and Vocational Education, Center on Education and Training for Employment, College of Education, the Ohio State University. Available online: [12.4.02]

Hazari, S (1998) Evaluation and Selection of Web Course Management Tools. Available online: [1.4.02]

Hiltz, S. R. (1986). The "Virtual Classroom": Using Computer Mediated Communication for University Teaching, Journal of Communication, 36 [2]

Hodgson, V. E; Mann, S. J. & Snell, R (1987) Beyond Distance Teaching - Towards Open Learning, Milton Keynes: Open University Press

Jackson, B (1998) Evaluation Studies. Available online: [28.2.02]

Joyes, G (2000) An Evaluation Model for Supporting Higher Education Lecturers in the Integration of New Learning Technologies. Available online: [9.3.02]

Latchem, C & Hanna, D (2002) Leadership for 21st Century Learning. London: Kogan Page

Laurillard, D (2002) Rethinking University Teaching. London: Routledge

Lguide (2001) A Comparative Analysis and Industry Directory, Available online: [16.2.02]

Maier, P & Warren, A (2000) Integr@ting Technology in Learning and Teaching. London: Kogan Page

Mulholland, C; Wing, A & White, B (1998) Courseware Evaluation Methodologies - strengths, weaknesses and future directions. Available Online: [22.3.02]

Nielsen, J (1994) Heuristic Evaluation. In Nielsen, J & Mack, R (Eds.) Usability Inspection Methods. New York: Wiley and Sons

Newman, W & Lamming, M (1995) Interactive System Design. England: Addison-Wesley

Preece, J & Keller, L (1990) Human-Computer Interaction: Selected Readings. Herts,UK: Prentice Hall

Preece, J; Rogers, R; Sharp, H; Benyon, D; Holland, S & Carey, T (1994) Human-Computer Interaction. England: Addison-Wesley

Porterfield, S (2001) Towards The Development of Successful Virtual Learning Communities Available online: [24.2.02]

Race, P (1994) (2nd Edition) The Open Learning Handbook, London: Kogan Page

Reeves, T (1997) Evaluating What Really Matters in Computer-Based Education. Available online: [1.2.02]

Reeves, T (1986) Research and Evaluation Models for the Study of Interactive Video. Journal of Computer-Based Instruction (Autumn) 13 [4]

Ritchie, D & Hoffman, B (1997) Incorporating Instructional Design Principles with the World Wide Web. In B. Khan (Ed.), Web Based Instruction NJ: Educational Technology Publications.

Salmon, G (2000) E-Moderating, The Key to Teaching and Learning. Online. London: Kogan Page

Salvendy, G & Smith, M (1993) Human-Computer Interaction: Software and Hardware Interfaces - Proceedings of the 5th International Conference on Human-Computer Interaction, Orlando, Florida. Volume 2. New York: Elsevier

Schank, R (2002) Designing World-Class E-Learning. New York: McGraw-Hill

Spearman, R (2000) Evaluating, Selecting, and Implementing an Online Course Management System. Available online: [2.2.02]

Squires, D & McDougall, A (1994) Choosing and Using Educational Software. London: Falmer Press

Stiles, M (2000) Pedagogy and Virtual Learning Environment Evaluation and Selection. Available online: [2.2.02]

Sulla, N. (1999) Technology: To Use or Infuse. The Technology Source: Commentary. Available online: [16.12.01]

Syverson, M (1995) Evaluating Learning in Virtual Environments. Available online: [28.2.02]

Tannenbaum, R .S (1998) Theoretical Foundations of Multimedia. Basingstoke: Computer Science Press.

Tognazzini, B (1996) Tog on Software Design. Reading MA: Addison-Wesley

Uber-Grosse, C & Leto, L (1999) Virtual Communities and Networking in Distance Education. Available online: [04/01/01]

Vaille, A (1999) Nine Keys to Quality K-12 Online Learning Experiences. Available online: [11.9.01]

Van der Veen, J (2000) W3LS: Evaluation framework for World Wide Web learning. Available online: [9.2.02]

Vygotsky, L (1986) Thought and Language. Cambridge: MIT Press.

Woodward, B (1998) Evaluation Methods in Usability Testing. Available online: [9.4.02]
