AU Leadership Portfolio: 3e Evaluation and Assessment

Portfolio Menu

Leadership and Learning Plan

1a Philosophical Foundations

1b Ethics, Values, and Spirituality

1c Learning and Human Development

2a Effective Communication

2b Mentor/Coach

2c Social Responsibility

3a Resource Development: Human and Financial

3b Legal and Policy Issues

3c Organizational Behavior, Development, and Culture

3d Implementing Change

3e Evaluation and Assessment

4a Reading and Evaluating Research

4b Conducting Research

4c Reporting and Implementing Research

5 Servant Leadership in Technology Facilitation and Collaboration

Synthesis Paper

Introduction

Evaluation/Assessment: Finding quality in diversity

Leadership uses appropriate evaluation and assessment tools to make decisions about programs and plans.

*Satisfactory Competency Level

The materials for this competency are organized around my work: the annual evaluation of my videoconferencing program, the evaluation of a recently completed grant, the improved evaluation of Read Around the Planet, and finally an analysis of e-assessment in my online classes. The table below matches the organization in my approved IDP.

Activities | Description | Documentation

A. Videoconference program annual evaluation reports

Overview
I run the videoconferencing program for Berrien RESA, and as part of that work I compile annual reports on the use of videoconferencing across our service area and within individual schools and districts.

Competency Connection
These reports show that I can compile program evaluation data into a useful, readable format for local educators.

Artifact Descriptions
i. The annual reports include summary data for the whole service area, as well as statistics for specific schools and districts.

ii. The 2007 report reflections put into writing the thinking I do every year based on the annual report. In 2007 I posted my reflections and plans on my videoconferencing blog.

iii. This evaluation comment from my supervisor describes how the annual reports are used in the organization.

Some files unlinked for privacy reasons.

i. Annual reports

ii. 2007 Report Reflections

iii. Usefulness of annual reports by my supervisor, Dennis Lundgren

B. USDA RUS DLT Grant Evaluation

Overview
The United States Department of Agriculture Rural Utilities Service Distance Learning and Telemedicine Grant (2006-2009) provided 35 videoconference units to schools in Berrien and Cass counties.

Competency Connection
The evaluation of this grant is another example of program evaluation.

Artifact Descriptions
i. I completed this evaluation by comparing our grant goals with actual results.

ii. This evaluation of the same grant was conducted by an outside evaluator.

i. Grant Evaluation to be submitted to the USDA.

ii. Evaluation of the grant implementation by an outside evaluator, Lakehouse Evaluation.

C. Improve Read Around the Planet evaluation.

Overview
Read Around the Planet, formerly known as Read Across America, began with 200 participating classes and expanded to 1,750 classes in 2009. The evaluation process for this project began with a survey, but for a few years no evaluation was done.

Competency Connection
This project shows the early evaluation and the most recent evaluations, which I have updated over the last two years.

Artifact Descriptions
The evaluation of RAP had fallen by the wayside as the program grew so large.
i. a. This is the first evaluation of Read Across America.

i. b. In 2008, when renewing the evaluation procedures, I created two evaluations: one for teachers and one for tech coordinators.

i. c. In 2009 I updated it again, consolidating everything into a single evaluation form to make the data easier to review.

i. d. After writing my reflection paper, I updated the evaluation again for 2010 to focus more clearly on results and expected benefits.

Some files unlinked for privacy reasons.

i. Past and improved evaluation tools and results of Read Around the Planet.

   a. Early Evaluation

   b. Evaluation 2008 for Teachers and Tech Coordinators

   c. Evaluation 2009

   d. Evaluation 2010

D. Publish an article.

Overview
This article is based on the evaluation data from Read Around the Planet 2009, with some data from 2008. I hope to have it published soon.

Competency Connection
The article shows my ability to write a report from evaluation data.

Some files unlinked for intellectual property reasons.

i. Article ready for publication

E. LEAD756

Overview
The activities in this section were completed as part of the class Advanced Studies: Evaluation and Assessment.

Competency Connection
These activities are the beginning of my reflection on my current practice in evaluation and assessment and contribute to my thinking for my reflection paper in the next section.

Artifact Descriptions
i. This file is my reflection on current practices before beginning my reading.

ii. These two files are a brief organization of the knowledge base reviewed for my work. They are by no means exhaustive of the complete knowledge base in assessment and evaluation, but they provide a review of my work in this area. I shared these with my regional group as a contribution to their learning.

iii. I used a recent full journal issue on e-assessment to consider my assessment practices in my online classes.

iv. This qualitative data was collected to understand the experience of the teachers in my schools who use videoconferencing the most. It shows another way to evaluate my program.

v. Comparing my current evaluation practices to the General Evaluation Model raised the question of how to measure effectiveness in my curriculum videoconferencing program. That question remains unanswered in my reflection paper. This post on my blog brings the question to my "personal learning network" for further discussion.

i. Pre-reading review of my evaluation and assessment practices

ii. Regional Group contribution: Assessment and Evaluation Menu & Book Reviews

iii. Read British Journal of Educational Technology E-Assessment issue and reflect on my current online courses in light of the articles.

iv. Top Teachers Using VC: Program reflections using data from my twenty top teachers using VC

v. Reflection on measuring effectiveness of curriculum videoconferencing

Reflection Paper

This reflection paper uses the knowledge base on assessment and evaluation to critically reflect on my practice.

References

Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set. British Journal of Educational Technology, 40(2), 255-272.

Baehr, M. (2009). Distinctions between assessment and evaluation. Retrieved from http://matcmadison.edu/cetl/resources/archive/efgb/4/4_1_2.htm

Baker, E. L., & Herman, J. L. (2003). A distributed evaluation model. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning. New York, NY: Teachers College Press.

Barbera, E. (2009). Mutual feedback in e-portfolio assessment: An approach to the netfolio system. British Journal of Educational Technology, 40(2), 342-357.

Chang, C.-C., & Tseng, K.-H. (2009). Use and performances of Web-based portfolio assessment. British Journal of Educational Technology, 40(2), 358-370.

Cole, P. (1992). Constructivism revisited: A search for common ground. Educational Technology, 32(2), 27-34.

Cook, T. D. (2003). Why have educational evaluators chosen not to do randomized experiments? The ANNALS of the American Academy of Political and Social Science, 589(1), 114-149.

Creighton, T. B. (2007). Schools and data: The educator's guide for using data to improve decision making. Thousand Oaks, CA: Corwin Press.

Dermo, J. (2009). e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. British Journal of Educational Technology, 40(2), 203-214.

Draper, S. W. (2009a). Catalytic assessment: Understanding how MCQs and EVS can foster deep learning. British Journal of Educational Technology, 40(2), 285-293.

Draper, S. W. (2009b). What are learners actually regulating when given feedback? British Journal of Educational Technology, 40(2), 306-315.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.

Freedman, R. L. H. (1998). Constructivist assessment practices. Retrieved from http://www.eric.ed.gov/

Haertel, G. D., & Means, B. (2003). Evaluating educational technology: Effective research designs for improving learning. New York, NY: Teachers College Press.

Heineke, W. F., & Blasi, L. (2001). Methods of evaluating educational technology. Greenwich, CT: Information Age Pub.

Heinze, A., & Heinze, B. (2009). Blended e-learning skeleton of conversation: Improving formative assessment in undergraduate dissertation supervision. British Journal of Educational Technology, 40(2), 294-305.

Hense, J., Kriz, W. C., & Wolfe, J. (2009). Putting theory-oriented evaluation into practice: A logic model approach for evaluating SIMGAME. Simulation Gaming, 40(1), 110-133.

Janesick, V. J. (2006). Authentic assessment. New York, NY: Peter Lang Publishing.

Jonassen, D. (1991). Objectivism vs constructivism: Do we need a new philosophical paradigm? Educational Technology, Research and Development, 39(3), 5-13.

Jordan, S., & Mitchell, T. (2009). e-Assessment for learning? The potential of short-answer free-text questions with tailored feedback. British Journal of Educational Technology, 40(2), 371-385.

Loddington, S., Pond, K., Wilkinson, N., & Willmot, P. (2009). A case study of the development of WebPA: An online peer-moderated marking tool. British Journal of Educational Technology, 40(2), 329-341. doi:10.1111/j.1467-8535.2008.00922.x

Love, A. J. (1991). Internal evaluation: Building organizations from within. New York, NY: Sage Publications.

Marriott, P. (2009). Students' evaluation of the use of online summative assessment on an undergraduate financial accounting module. British Journal of Educational Technology, 40(2), 237-254.

Matusevich, M. N. (1995). School reform: What role can technology play in a constructivist setting? Retrieved from http://delta.cs.vt.edu/edu/fis/techcons.html

McNeil, K. A., Newman, I., & Kelly, F. J. (1996). Testing research hypotheses with the general linear model. Carbondale, IL: Southern Illinois University Press.

McNeil, K. A., Newman, I., & Steinhauser, J. (2005). How to be involved in program evaluation: What every administrator needs to know. Lanham, MD: ScarecrowEducation.

NCREL (2004). Assessment in a constructivist classroom. Retrieved from http://www.ncrel.org/sdrs/areas/issues/methods/assment/as7const.htm

Ng'ambi, D., & Brown, I. (2009). Intended and unintended consequences of student use of an online questioning environment. British Journal of Educational Technology, 40(2), 316-328.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. New York, NY: Sage Publications.

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.

Rumberger, R. W. (2003). The advantages of longitudinal design. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning. New York, NY: Teachers College Press.

Shephard, K. (2009). e is for exploration: Assessing hard-to-measure learning outcomes. British Journal of Educational Technology, 40(2), 386-398.

Stufflebeam, D. L., & Wingate, L. A. (2005). A self-assessment procedure for use in evaluation training. American Journal of Evaluation, 26(4), 544-561.

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass.

Whitelock, D. (2009). Editorial: E-assessment: Developing new dialogues for the digital age. British Journal of Educational Technology, 40(2), 199-202.

Williams, J. B., & Wong, A. (2009). The efficacy of final examinations: A comparative study of closed-book, invigilated exams and open-book, open-web exams. British Journal of Educational Technology, 40(2), 227-236.

Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines (2nd ed.). New York, NY: Longman.

Contact: janine@andrews.edu
Last Updated September 20, 2011

©All Rights Reserved, Janine Lim, unless content previously licensed.