
Post-conference workshop program: Thursday 7 September 2017

>>> DOWNLOAD a printable conference workshop program

View Pre-conference (Sunday) workshop program here.


WORKSHOP DETAILS

Engaging children and youth in evaluation

presented by Sandra Mathison, Professor of Education, University of British Columbia, Vancouver, Canada

HALF DAY (AM) – BEGINNER/INTERMEDIATE

The workshop will focus on strategies for including children and youth in evaluations, especially evaluations of programs presumed to be for their benefit. It will consider why, and how, the least powerful stakeholders should be enabled to make authentic and substantive contributions to evaluations, and will outline strategies for engaging children and youth across a range of evaluation activities, including planning, data collection and reporting. If evaluators and the commissioners of evaluations are to engage children and youth, they need to make space for participation and to craft age-appropriate ways of communicating.

Using case examples and mini role-playing scenarios, the workshop will build evaluators’

  • awareness of, and strategies for, engaging children and youth in the evaluation process (for example, working with youth to understand evaluation and capitalising on their nascent evaluation skills), and
  • skills in collecting evaluation data with and from children and youth (for example, using modes of interaction and communication relevant to children and youth).

Participants will take away an awareness of principles for engaging children and youth, and a repertoire of strategies.

The workshop is grounded in the assumption that children and youth are authentic stakeholders who can engage productively with adults in evaluation practice. While stakeholder participation in evaluation is standard operating procedure, the details of that participation cannot be taken for granted. The workshop will challenge adultism that privileges adult knowledge, expertise, modes of interaction, and control. Including less powerful groups in evaluation contributes to democratic ideals and better decision making.

About the presenter
One of the conference keynote speakers, Sandra Mathison is Professor of Education at the University of British Columbia. She is an educational evaluator whose research focuses on government-mandated testing of teachers and students, and she is also an expert on researching children’s experiences. She is editor of the Encyclopedia of Evaluation and Editor-in-Chief of New Directions for Evaluation.



Evaluation ethics for evaluation practitioners, evaluation commissioners and evaluation reviewers

presented by Emma Williams, Northern Institute, Charles Darwin University, Darwin; and David Turner, evaluation practitioner, Wellington, NZ

HALF DAY (PM) – BEGINNER TO ADVANCED

Evaluation practitioners are expected to conform to ethical and professional guidelines, which appear straightforward but may be complex in application. AES members need to know the society’s Code of Ethics (2013) and Guidelines for the Ethical Conduct of Evaluations (2013), as well as standards of good evaluation practice and practitioner competencies.

This workshop will cover the different types of guidance on ethics, and then involve participants in considering a set of scenarios. The scenarios have been designed for four groups: practitioners who are relatively new to evaluation, experienced practitioners, evaluation commissioners, and members of Human Research Ethics Committees (or equivalent groups) who review evaluations. The session is designed to be interactive, and participants with any level of experience will be able to take part, with scenarios appropriate to their role and degree of experience. Learnings from small-group discussions will be shared with the larger group, in keeping with the ‘Learn from Practice’ category of the 2017 conference.

By the end of the workshop, participants will be aware of the contents of the AES Code of Ethics and the AES Guidelines for the Ethical Conduct of Evaluations, and will also recognise any other codes that may apply to their particular evaluation (e.g. if it involves Indigenous participants or a randomised control group, or if it will influence a government policy decision). Participants will be aware of how ethics processes can be used to inform and improve evaluation designs, how field practices can affect the original ethical design, and what to do when unforeseen issues arise during the course of an evaluation.

About the presenters
Emma Williams is a Credentialed Evaluator who has conducted evaluations on four continents. Currently Principal Scientist of the Evaluation and Knowledge Impact unit at the Northern Institute, Charles Darwin University, Emma was one of the team that updated the AES Guidelines for the Ethical Conduct of Evaluations and AES Code of Ethics. She has published and presented on multiple aspects of evaluation ethics, and has recently been investigating a realist view of ethics, i.e. ‘ethical to/for whom, in what circumstances, and how?’.

David Turner is an independent evaluation practitioner based in Wellington, New Zealand. He has worked in or for the NZ public sector for 20 years, in a variety of policy areas including labour, immigration, employment relations, justice and housing. Before coming to New Zealand he worked in the US public sector, also in research and evaluation. David has chaired the AES Ethics Committee, has lectured on evaluation at the graduate level and retains an interest in ethics and other issues of professional practice.



Rapid Impact Evaluation

presented by Andy Rowe, Evaluation and economics consultant, British Columbia, Canada

FULL DAY – INTERMEDIATE/ADVANCED

Rapid Impact Evaluation (RIE) is a new evaluation approach for settings where assessing impacts is challenging. RIE can be used to forecast expected impact, as well as to evaluate impact after implementation. It is nimble and low cost. Using RIE as part of a mixed-methods evaluation enhances evaluators’ ability to quickly and cheaply assess the direction and magnitude of impacts, including those of complex initiatives. Pilots in Canada, the US and SE Asia have shown RIE to be fit for purpose and have provided valuable insights into how to tailor the method to different settings.

Built on a use-seeking framework, RIE introduces three new methods: the scenario-based counterfactual; a simplified metric for scalar measurement of impacts; and an interest-based approach to using program stakeholders as experts. RIE triangulates the judgements of three distinct groups of experts, who bring knowledge of the intervention and of the relevant science to the assessment of impacts. This workshop will introduce RIE and the new methods developed to enable rapid evaluation of impacts. All modules in the workshop include exercises applying RIE concepts. In addition to introducing these methods and building capacity to use them, the workshop will demonstrate how use-seeking approaches can be infused into the DNA of an evaluation.

Learning outcomes for participants are:

  • to understand the RIE approach and options for adapting RIE to different settings
  • to learn the new methods developed for RIE (scenario-based counterfactuals, metrics to assess impacts, and interest-based approaches to evaluation) and how each can be used as part of a mixed-methods evaluation
  • to gain awareness of the features of a use-seeking evaluation approach, learn lessons from applying such an approach, and understand how to systematically build use-seeking approaches into all elements of an evaluation.

The workshop best suits intermediate and more advanced evaluators with some experience of designing and conducting evaluations and of the challenges of evaluating impacts.

About the presenter
One of the conference keynote speakers, Andy Rowe is an evaluation and economics consultant and a former President of the Canadian Evaluation Society. An innovator, he has developed new approaches to evaluation that have been adopted in Canada and the United States and by the World Bank. His Rapid Impact Evaluation is a flexible, low-cost, mixed-methods approach to evaluating impacts. He has also developed systems for evaluating conflict resolution and sustainable development.



How to build evaluation systems

presented by Dugan Fraser, Program Director, RAITH Foundation, Johannesburg, South Africa

FULL DAY – MASTER CLASS

This master class will explore different ways of designing and implementing large-scale evaluation systems and frameworks. It will draw on the facilitator’s experience of working on large whole-of-government and multi-country evaluation systems. Special attention will be paid to conditions for success: developing evaluation capacity; building organisational cultures of evaluative thinking; and building supportive institutional coalitions. Culture, tone, approach and style will be addressed as major considerations in the design and implementation of large-scale systems.

Learning outcomes are:

  • An understanding of why governments should build large-scale evaluation systems
  • An introduction to principles that can guide the development of large-scale systems
  • An understanding of the various kinds of large-scale systems and frameworks; participants will consider which type of system best suits their needs and reflect on whether planning to graduate from one type of system to another is appropriate or necessary
  • Options for working across a number of different entities, what these entities could be, and how to go about building the necessary relationships and shared platforms
  • Capacity-building strategies to support the operations of large-scale systems, and
  • Insights into the pitfalls likely to be encountered and how they can be avoided.

The session will provide many opportunities for participants to share knowledge and experiences, and to explore how lessons from international experiences can be adapted to suit participants’ particular contexts. It is not an introductory or beginner’s workshop.

This master class is for experienced evaluation practitioners and public servants. It is targeted at:

  • experienced evaluation practitioners working with government, who are already actively involved in the evaluation community and who are contemplating developing a large-scale evaluation system
  • public servants who are considering implementing such a system and who will be responsible for commissioning, overseeing and using the prospective system.

About the presenter
One of the conference keynote speakers, Dugan Fraser is Program Director of the RAITH Foundation, a philanthropic organisation dedicated to empowering civil society to overcome injustice and unfairness in South Africa. He is also Chairperson of the South African Monitoring and Evaluation Association. A former government official, he helped develop South Africa’s Government-Wide Monitoring and Evaluation System and has developed systems for evaluating land reform, social development and public service governance.



Introduction to evaluation

presented by Ian Patrick, Ian Patrick and Associates, Alphington (VIC)

FULL DAY – BEGINNER

This workshop is designed for those who are reasonably new to evaluation, and aims to equip them with a core set of knowledge and skills for evaluation practice. These will be valuable for anyone who needs to manage or undertake evaluations, or to participate in evaluation activities. Participants will develop an understanding of why and how organisations and programs undertake evaluation, including the approaches and methods commonly used. Using case studies and practical exercises, they will have the opportunity to apply this knowledge to situations and tasks common to the evaluation process.

Core areas covered in the workshop include:

Role of evaluation and approaches to its use

  • Understanding the purpose and use of evaluation
  • Identifying who does evaluation, and how different stakeholders can participate
  • Clarifying the complementary nature of the relationship between monitoring and evaluation
  • Understanding the different stages of an evaluation
  • Using common areas of focus for evaluation such as effectiveness and efficiency

Planning an evaluation

  • How to plan an evaluation
  • Role of program logic to identify expected outcomes and impacts
  • Using evaluation questions
  • Using indicators and targets

Conducting an evaluation

  • Appreciating different approaches to evaluation, including theory-based, participatory, developmental, and quantitative and qualitative approaches
  • Common methods to collect data
  • Data analysis and synthesis
  • Drawing evaluation conclusions and recommendations
  • Reporting and dissemination

Capacity for evaluation

  • Building capacity for evaluation and learning in organisations

About the presenter
Ian Patrick is an independent consultant and Director of Ian Patrick & Associates. His career as an evaluator extends over 20 years and covers both Australia and the Asia-Pacific region. He has broad experience in designing and implementing evaluations across different social sectors such as health, education, law and justice, community development, and human rights and Indigenous issues. Ian has worked with a range of government and non-government organisations and programs in developing monitoring and evaluation systems, and conducting evaluation-related training. He has previously delivered workshops for the Australasian Evaluation Society and the American Evaluation Association. He is joint author of Developing Monitoring and Evaluation Frameworks (SAGE, 2016). Ian is an Honorary Senior Fellow in the School of Social and Political Sciences at the University of Melbourne, where he contributes to evaluation-focused teaching programs.



Foundations of survey design

presented by Sarah Mason, Centre for Program Evaluation, The University of Melbourne, Melbourne

FULL DAY – BEGINNER

At a time when evaluators are pressed for time and limited in budget, survey data often forms the foundation of evaluation work. Since surveys are the primary data source in many evaluations, it is imperative that evaluations are grounded in valid, reliable data. Yet high-quality surveys are hard to create. Designing a credible survey requires understanding not only question wording, but also the cognitive and motivational aspects of participant responses. For example, even with the ‘right’ question wording, inappropriate question ordering can fundamentally affect the nature of the responses we receive.

This day-long workshop provides a hands-on experience in creating and evaluating reliable survey instruments. Working through a combination of brief lectures and practice exercises, participants will explore:

  1. the motivational approach to survey design
  2. principles for wording survey items
  3. the cognitive approach to survey creation
  4. foundational strategies for assessing and improving survey quality.

Across each domain, attendees will be given the opportunity to apply their newly gained knowledge by critiquing established surveys, and working in groups to create surveys of their own.

About the presenter
Sarah Mason is a Research Fellow and Lecturer based at the Centre for Program Evaluation, The University of Melbourne. Sarah specialises in the design and implementation of high-quality monitoring, reporting and evaluation products in complex and challenging environments. Over the past 10 years, she has conducted research and evaluation projects across a wide range of contexts, including Australia, the United States, Afghanistan, East Timor, Myanmar, Thailand and Cambodia. She recently led the design and implementation of an international survey of more than 1,000 schools across the globe.

She has post-graduate training in experimental, quasi-experimental and non-experimental research designs, qualitative and quantitative data analysis, program theory-driven evaluation and evaluation theory.

Sarah has an MA in Evaluation and Applied Research Methods from Claremont Graduate University (CGU) and is currently pursuing a Ph.D. in the same field. She also has an MA in Development Studies from the University of New South Wales, a Graduate Diploma in Early Years Education and a Bachelor of Arts in Political Science and International Relations from the University of Queensland. She regularly presents at international conferences and was recently awarded a Faster Forward Fund scholarship for innovative ideas in evaluation.



Using logic models to develop evaluation criteria and make value judgements

presented by Pauline Dickinson, SHORE & Whariki Research Centre, Massey University, Auckland

FULL DAY – BEGINNER/INTERMEDIATE

What makes evaluation unique is its focus on systematically determining the quality, value or importance of an evaluand in order to take action. Logic models can help clarify the key interventions and outcomes the evaluand is seeking to achieve, while evaluation criteria and performance standards (rubrics) help determine merit.

This workshop is designed to suit beginning evaluators and others who want to conduct clearly defined and useful evaluations that move beyond merely describing the evaluand. Participants will use logic models as a framework for an evaluation and develop a robust way of determining the quality, value and importance of an evaluand. The workshop is practical, hands-on and interactive, and uses adult learning strategies. It addresses the domains of evaluation theory and evaluation activities.

In this workshop, participants will:

  • gain an understanding of the use of logic models in evaluation
  • develop a logic model for an evaluation project
  • gain an understanding of how to develop evaluation criteria and performance standards (rubrics).

About the presenter
Pauline Dickinson has over 20 years’ evaluation experience. She manages and delivers a national evaluation capacity-building contract for public health workers, and conducts a wide range of evaluations addressing health and social issues using both mixed and qualitative methods. Her approach to evaluation is programme-theory driven, utilisation focused and participatory, and the evaluations she conducts are developmental, formative, and process and outcome focused. Pauline has presented many times at national and international conferences, including AES, AEA and EES, and has received very positive feedback and generated interesting discussion among attendees. During her time at Massey University, Pauline has successfully bid for over $4 million in contracts.
