
Post-conference workshop program: Thursday 7 September 2017

>>> DOWNLOAD a printable conference workshop program

View the Pre-conference (Sunday) workshop program here.


WORKSHOP DETAILS

Engaging children and youth in evaluation

presented by Sandra Mathison, Professor of Education, University of British Columbia, Vancouver, Canada

HALF DAY (AM) – BEGINNER/INTERMEDIATE

The workshop will focus on strategies for including children and youth in evaluations, especially evaluations of programs presumed to be for their benefit. It will consider why, and how, the least powerful stakeholders should be enabled to make authentic and substantive contributions to evaluations, and will outline strategies to engage children and youth across a range of evaluation activities, including planning, data collection and reporting. If evaluators and the commissioners of evaluations are to engage children and youth, they need to allow space for participation and to craft age-appropriate ways of communicating.

Using case examples and mini role-playing scenarios, the workshop will build evaluators’

  • awareness of and strategies for engaging children and youth in the evaluation process (for example, working with youth to understand evaluation and capitalizing on their nascent evaluation skills), and
  • skills in collecting evaluation data with and from children and youth (for example, using modes of interaction and communication relevant to children and youth).

Participants will take away an awareness of principles for engaging children and youth, and a repertoire of strategies.

The workshop is grounded in the assumption that children and youth are authentic stakeholders who can engage productively with adults in evaluation practice. While stakeholder participation in evaluation is standard operating procedure, the details of that participation cannot be taken for granted. The workshop will challenge adultism that privileges adult knowledge, expertise, modes of interaction, and control. Including less powerful groups in evaluation contributes to democratic ideals and better decision making.

About the presenter
One of the conference keynote speakers, Sandra Mathison is Professor of Education at the University of British Columbia. She is an educational evaluator. Her research focuses on government-mandated testing of teachers and students, and she is also an expert on researching children’s experiences. She is editor of the Encyclopedia of Evaluation and Editor-in-Chief of New Directions for Evaluation.



Commissioning better evaluations

presented by Vanessa Hood, Rooftop Social, Melbourne; Duncan Rintoul, Rooftop Social and NSW Department of Education, Sydney

HALF DAY (AM) – EVALUATION COMMISSIONERS

If you work in government, the community sector or business and have a role in planning, commissioning or managing external evaluations, this is the workshop for you.

This workshop focuses on:

  • effective strategies for stakeholder engagement in the evaluation process
  • elements that make up a good evaluation brief/approach to market
  • techniques for developing and prioritising your evaluation questions
  • factors that influence the scale, budget and timeframe of an evaluation
  • what to look for in an external evaluation team
  • assessment of evaluation proposals and the procurement process
  • management of evaluation consultancies
  • ethical conduct, governance and risk management in evaluation.

This half-day workshop assumes that participants are already familiar with the basics of evaluation, including:

  • different types of evaluation (e.g. process, outcome, economic)
  • typical steps in collecting and analysing data for an evaluation
  • logic modelling, as a tool for getting clarity on the intended pathway of cause and effect in a project.

The training is interactive and hands-on, with lots of practical examples and group activities throughout the day to keep the blood pumping and the brain ticking. It will provide you with tools that you can start using immediately.

About the presenters
Vanessa Hood blends the worlds of facilitation and evaluation and has over 15 years' experience in a range of settings, including behaviour change for sustainability. Vanessa is passionate about working with people and uses a range of creative facilitation techniques to help participants engage deeply with technical content and, importantly, with each other. In her current role as Associate Director with Rooftop Social she works with a range of NGOs and government organisations across Australia. She regularly delivers structured training, coaching and mentoring in facilitation and evaluative thinking. Prior to this, Vanessa was the Evaluation Lead at Sustainability Victoria, where she was responsible for numerous internal evaluations at project and strategic organisational levels. She is an active member of the AES, including membership of the newly formed Design and Evaluation Special Interest Group. Vanessa has presented at many large forums, with very positive feedback.

Duncan Rintoul is the Director of Rooftop Social. He also holds a role at the NSW Department of Education as Principal Project Officer – Evaluation Capacity Building at the Centre for Education Statistics and Evaluation.



Evaluation ethics for evaluation practitioners, evaluation commissioners and evaluation reviewers

presented by Emma Williams, Northern Institute, Charles Darwin University, Darwin; David Turner, Evaluation practitioner, Wellington, NZ

HALF DAY (PM) – BEGINNER TO ADVANCED

Evaluation practitioners are expected to conform to ethical and professional guidelines, which appear straightforward but may be complex in application. AES members need to know the society’s Code of Ethics (2013) and Guidelines for the Ethical Conduct of Evaluations (2013), as well as standards of good evaluation practice and practitioner competencies.

This workshop will cover the different types of guidance on ethics, and then involve participants in considering a set of scenarios. The scenarios have been designed for four groups: practitioners who are relatively new to evaluation, experienced practitioners, evaluation commissioners, and members of Human Research Ethics Committees (or equivalent groups) who review evaluations. The session is designed to be interactive, and participants at any level of experience will be able to take part, with scenarios appropriate to their role and degree of experience. Learnings from the small group discussions will be shared with the larger group, in keeping with the ‘Learn from Practice’ category of the 2017 conference.

By the end of the workshop, participants will be aware of the contents of the AES Code of Ethics and the AES Guidelines for the Ethical Conduct of Evaluations, and will recognise any other codes that may apply to their particular evaluation (e.g. if it involves Indigenous participants or a randomised control group, or if it will influence a government policy decision). Participants will also be aware of how ethics processes can be used to inform and improve evaluation designs, how field practices can affect the original ethical design, and what to do when unforeseen issues arise during the course of an evaluation.

About the presenters
Emma Williams is a Credentialed Evaluator who has conducted evaluations on four continents. Currently Principal Scientist of the Evaluation and Knowledge Impact unit at the Northern Institute, Charles Darwin University, Emma was one of the team who updated the AES Guidelines for the Ethical Conduct of Evaluations and AES Code of Ethics. She has published and presented on multiple aspects of evaluation ethics, and has recently been investigating a realist view of ethics, i.e. ‘ethical to/for whom in what circumstances, and how?’. 

David Turner is an independent evaluation practitioner based in Wellington, New Zealand. He has worked in or for the NZ public sector for 20 years, in a variety of policy areas including labour, immigration, employment relations, justice and housing. Before coming to New Zealand he worked in the U.S. public sector, also in research and evaluation. David has chaired the AES Ethics Committee, has lectured on evaluation at graduate level and retains an interest in ethics and other issues of professional practice.



Designing improved performance measures

presented by Graham Smith, Numerical Advantage, Farrer (ACT)

HALF DAY (PM) – INTERMEDIATE

The purpose of the workshop is to enable participants to improve performance measures for their organisation or program. Effective performance measures are increasingly important for public sector agencies, as shown by recent changes to Commonwealth legislation, and also for non-government bodies. That need is also evident in the findings of several Auditors-General that the state of performance measurement is less than adequate.

The workshop will touch on the background to performance measurement in the public sector, before briefly outlining what makes a sound performance measure and giving practical examples of poor and good measures. It will address how to work with current measures, and within organisational constraints such as resource and data limitations, when seeking to improve performance measures.

At the end of the workshop, participants should be better equipped to understand the legislative and practical requirements for performance measures and to develop improved measures for their organisation. The workshop will include lecture-style presentation of information, question and answer sessions, and some small group work on the development of performance measures, ideally based on case studies provided by group members. Materials will include examples of current organisational performance measures, extracts from formal government guidance, and academic references.

The workshop is targeted at those who have, or are expected to have, some responsibility for developing or managing performance measures. An intermediate level of knowledge will be assumed: some familiarity with performance measures will be of assistance, and participants who bring specific performance measurement issues with them for discussion will add to the relevance and focus of the workshop.

About the presenter
Graham Smith has more than 20 years’ experience in evaluation and related fields, including performance audit and the development of performance measures. His reviews have covered many fields of government, including defence, Indigenous affairs, education, health, environment, heritage and infrastructure. One major evaluation of the Natural Heritage Trust, on which he was a team member, won an Australia Day public service award.

He has advised on performance measurement in organisations as diverse as Environment, the Australian Tax Office, the Australian Communications and Media Authority and the ACT Audit Office. A workshop proposal in this field was accepted for the Wellington AES conference in 2010, and he was subsequently invited by the AES to develop that half-day workshop into a series of full-day workshops, delivered in Melbourne, Canberra, Wellington and Brisbane in 2011. A further iteration was delivered (with a co-presenter) to IPAA in Canberra in 2014, and a mini workshop was presented at the Darwin AES conference that year. In 2012 he commenced a part-time Ph.D. in performance measurement in government, now almost complete, and he has presented the findings of his research at several international and Australasian conferences.



Rapid Impact Evaluation

presented by Andy Rowe, Evaluation and economics consultant, British Columbia, Canada

FULL DAY – INTERMEDIATE/ADVANCED

Rapid Impact Evaluation (RIE) is a new evaluation approach for use in settings where it is challenging to assess impacts. RIE can be used to forecast expected impact, as well as to evaluate impact after implementation. It is nimble and low cost. Using RIE as part of a mixed methods evaluation enhances evaluators’ ability to assess the direction and magnitude of impacts quickly and cheaply, including for complex initiatives. Pilots in Canada, the US and SE Asia have shown RIE to be fit for purpose and have provided valuable insights into how to tailor the method to different settings.

Built on a use-seeking framework, RIE introduces three new methods: the scenario-based counterfactual; a simplified metric for scalar measurement of impacts; and an interest-based approach to using program stakeholders as experts. RIE triangulates the judgments of three distinct groups of experts, who bring knowledge of the intervention and of the science involved to the assessment of impacts. This workshop will introduce RIE and the new methods developed to enable rapid evaluation of impacts. All modules in the workshop include exercises to apply RIE concepts. In addition to introducing these methods and building capacity to use them, the workshop will demonstrate how use-seeking approaches can be infused into the DNA of an evaluation.

Learning outcomes of the workshop are for participants:

  • to understand the RIE approach and options for adapting RIE to different settings
  • to learn new methods developed for RIE and how they can each be used as part of a mixed methods evaluation approach (scenario-based counterfactuals, metrics to assess impacts and applying interest-based approaches to evaluation)
  • to gain awareness of the features of a use-seeking evaluation approach, learn lessons from applying such an approach, and understand how to build use-seeking approaches systematically into all elements of an evaluation.

The workshop best suits intermediate and more advanced evaluators with some experience in designing and conducting evaluations and some familiarity with the challenges of evaluating impacts.

About the presenter
One of the conference keynote speakers, Andy Rowe is an evaluation and economics consultant and a former President of the Canadian Evaluation Society. An innovator, he has developed new approaches to evaluation that have been adopted in Canada and the United States and by the World Bank. His Rapid Impact Evaluation is a flexible, low cost, mixed methods approach to evaluating impacts. He has also developed systems to evaluate conflict resolution and sustainable development.



How to build big-scale evaluation systems

presented by Dugan Fraser, Program Director, RAITH Foundation, Johannesburg, South Africa

FULL DAY – MASTER CLASS

This master class will explore different ways of designing and implementing big-scale evaluation systems and frameworks. It will draw on the facilitator’s experience of working on big whole-of-government and multi-country evaluation systems. Special attention will be paid to conditions for success: developing evaluation capacity; building organisational cultures of evaluative thinking; and building supportive institutional coalitions. Culture, tone, approach and style will be addressed as major considerations in the design and implementation of big-scale systems.

Learning outcomes are:

  • An appreciation of why governments should build big-scale evaluation systems
  • An introduction to principles that can be used to guide the development of large-scale systems
  • An understanding of the various kinds of big-scale systems and frameworks. Participants will consider which type of system best suits their needs and reflect on whether planning to graduate from one type of system to another is appropriate or necessary
  • Options for working across a number of different entities, what these entities could be, and how to go about building the necessary relationships and shared platforms
  • Capacity building strategies to support the operations of big-scale systems
  • Insights into the pitfalls likely to be encountered and how they can be avoided.

The session will provide many opportunities for participants to share knowledge and experiences, and to explore how lessons from international experiences can be adapted to suit participants’ particular contexts. It is not an introductory or beginner’s workshop.

This master class is for experienced evaluation practitioners and public servants. It is targeted at:

  • experienced evaluation practitioners working with government, who are already actively involved in the evaluation community and who are contemplating developing a big evaluation system
  • public servants who are considering implementing such a system and who will be responsible for commissioning, overseeing and using the prospective system.

About the presenter
One of the conference keynote speakers, Dugan Fraser is Program Director of the RAITH Foundation. He leads a philanthropic organisation dedicated to empowering civil society to overcome injustice and unfairness in South Africa. He is also Chairperson of the South African Monitoring and Evaluation Association. A former government official, he helped develop South Africa’s Government-Wide Monitoring and Evaluation system and developed systems to evaluate land reform, social development and public service governance.



Introduction to evaluation

presented by Ian Patrick, Ian Patrick and Associates, Alphington (VIC)

FULL DAY – BEGINNER

This workshop is designed for those who are reasonably new to evaluation, and aims to equip them with a core set of evaluation knowledge and skills. These will be valuable for anyone who needs to manage or undertake evaluations, or to participate in evaluation activities. Workshop participants will develop an understanding of why and how organisations and programs undertake evaluation, including the approaches and methods commonly used. Using case studies and practical exercises, participants will have the opportunity to apply knowledge and skills to situations and tasks common to the evaluation process.

Core areas covered in the workshop include:

Role of evaluation and approaches to its use

  • Understanding the purpose and use of evaluation
  • Identifying who does evaluation, and how different stakeholders can participate
  • Clarifying the complementary nature of the relationship between monitoring and evaluation
  • Understanding the different stages of an evaluation
  • Using common areas of focus for evaluation such as effectiveness and efficiency

Planning an evaluation

  • How to plan an evaluation
  • Role of program logic to identify expected outcomes and impacts
  • Using evaluation questions
  • Using indicators and targets

Conducting an evaluation

  • Appreciating different approaches to evaluation, including theory-based, participatory, developmental, and quantitative and qualitative
  • Common methods to collect data
  • Data analysis and synthesis
  • Drawing evaluation conclusions and recommendations
  • Reporting and dissemination

Capacity for evaluation

  • Building capacity for evaluation and learning in organisations

About the presenter
Ian Patrick is an independent consultant and Director of Ian Patrick & Associates. His career as an evaluator extends over 20 years and covers both Australia and the Asia Pacific region. He has broad experience in designing and implementing evaluations across different social sectors such as health, education, law and justice, community development, and human rights and Indigenous issues. Ian has worked with a range of government and non-government organisations and programs in developing monitoring and evaluation systems, and conducting evaluation-related training. He has previously delivered workshops for the Australasian Evaluation Society and the American Evaluation Association. He is joint author of Developing Monitoring and Evaluation Frameworks (SAGE, 2016). Ian is an Honorary Senior Fellow in the School of Social and Political Sciences at the University of Melbourne, where he contributes to evaluation-focused teaching programs.



Foundations of survey design

presented by Sarah Mason, Centre for Program Evaluation, The University of Melbourne, Melbourne

FULL DAY – BEGINNER

At a time when evaluators are pressed for time and limited in budget, survey data often forms a foundation for evaluation work. As surveys are the primary source of data in many evaluations, it is imperative that they yield valid, reliable data. Yet high quality surveys are hard to create. Designing a credible survey requires understanding not only question wording, but also the cognitive and motivational aspects of participant responses. For example, even with the ‘right’ question wording, inappropriate question ordering can fundamentally affect the nature of the responses we receive.

This day-long workshop provides a hands-on experience in creating and evaluating reliable survey instruments. Working through a combination of brief lectures and practice exercises, participants will explore:

  1. the motivational approach to survey design
  2. principles for wording survey items
  3. the cognitive approach to survey creation
  4. foundational strategies for assessing and improving survey quality.

Across each domain, attendees will be given the opportunity to apply their newly gained knowledge by critiquing established surveys, and working in groups to create surveys of their own.

About the presenter
Sarah Mason is a Research Fellow and Lecturer based at the Centre for Program Evaluation, The University of Melbourne. Sarah specialises in the design and implementation of high quality monitoring, reporting and evaluation products in complex and challenging environments. Over the past 10 years, she has conducted research and evaluation projects across a wide range of contexts, including Australia, the United States, Afghanistan, East Timor, Myanmar, Thailand and Cambodia. She recently led the design and implementation of an international survey of more than 1,000 schools across the globe.

She has post-graduate training in experimental, quasi-experimental and non-experimental research designs, qualitative and quantitative data analysis, program theory-driven evaluation and evaluation theory.

Sarah has an MA in Evaluation and Applied Research Methods from Claremont Graduate University (CGU) and is currently pursuing a Ph.D. in the same field. She also has an MA in Development Studies from the University of New South Wales, a Graduate Diploma in Early Years Education and a Bachelor of Arts in Political Science and International Relations from the University of Queensland. She regularly presents at international conferences and was recently awarded a Faster Forward Fund scholarship for innovative ideas in evaluation.



Using logic models to develop evaluation criteria and make value judgements

presented by Pauline Dickinson, SHORE & Whariki Research Centre, Massey University, Auckland

FULL DAY – BEGINNER/INTERMEDIATE

What makes evaluation unique is its focus on the systematic determination of the quality, value or importance of an evaluand in order to take action. Logic models can help focus attention on the key interventions and outcomes the evaluand is seeking to achieve, and the development of evaluation criteria and performance standards (rubrics) helps determine merit.

This workshop is designed to suit beginning evaluators and others who want to conduct clearly defined and useful evaluations that move beyond merely describing the evaluand. Participants will use logic models as a framework for an evaluation and develop a robust way of determining the quality, value and importance of an evaluand. The workshop is very practical, hands-on and interactive, and uses adult learning strategies. It addresses the domains of evaluation theory and evaluation activities.

In this workshop, participants will:

  • gain an understanding of the use of logic models in evaluation
  • develop a logic model for an evaluation project
  • gain an understanding of how to develop evaluation criteria and performance standards (rubrics).

About the presenter
Pauline Dickinson has over 20 years’ evaluation experience. She manages and delivers a national evaluation capacity building contract for public health workers, and conducts a wide range of evaluations addressing health and social issues using both mixed and qualitative methods. Her approach to evaluation is programme theory-driven, utilisation-focused and participatory, and the evaluations she conducts are developmental, formative, and process and outcome focused. Pauline has presented many times at national and international conferences, including AES, AEA and EES, receiving very positive feedback and generating interesting discussion among attendees. During her time at Massey University Pauline has successfully bid for over $4 million in contracts.
