Guidance on Student Growth in T-TESS

Overview

Beginning with the 2017-2018 school year, student growth will become a required component of teacher appraisal for any district to which Texas Education Code Secs. 21.351 and 21.352 apply.

This non-regulatory guidance seeks to support districts as they make determinations about student growth in teacher appraisal.  This guidance draws on the T-TESS philosophy and the purpose of teacher appraisal that T-TESS espouses when discussing suggested practices for each student growth measure.  Districts are encouraged to consider their local contexts and their priorities for teacher appraisal when shaping their approach to student growth.

Districts are free to adopt and implement any student growth measure and model they choose.  No single measure or model is required.  Each district, with its unique context and needs, will need to determine which student growth measures and models best fit its approach to teacher appraisal.

Purpose

T-TESS was developed as an appraisal process that engages teachers in a cycle of continuous improvement.  T-TESS seeks to:

  • Create a shared understanding and common language across a campus and district to describe effective pedagogy
  • Increase the frequency and quality of collaborative and coaching conversations between teachers and their appraisers
  • Strengthen habits of reflection, self-assessment, and adjustment on the part of teachers
  • Strategically sequence development opportunities for teachers

When considering student growth within T-TESS, the intended purpose is the same.  Like the rubric, student growth seeks to provide teachers and their appraisers with feedback that captures where teachers are in their practice, pinpointing strengths and areas for development.

For student growth to be a valuable tool in continuous improvement, educators should keep the following in mind:

  • Although it’s called student growth, it is really about teacher growth
  • Student growth is not the end in itself – the key to a meaningful experience with student growth measures is the ability to translate the student growth outcomes into feedback on pedagogical practices
  • In contrast to an observation, which captures impact in a snapshot of time, student growth captures impact over an entire course
  • Honest assessment of pedagogy, sincere reflection on the approach to planning, and a commitment to adjustment are the best ways to improve student growth
  • Ratings are less important than the process of professional growth

Components of T-TESS

With the inclusion of student growth, T-TESS is composed of two different measures – the rubric and student growth – that together determine where a teacher is in his or her practice and pinpoint areas of reinforcement and refinement.  Evidence for those measures is generally captured through:

  • The observation process
  • Progress on the Goal-Setting and Professional Development Plan
  • The student growth process
  • Cumulative data

These components all come together during the end-of-year conference, where appraisers and teachers reflect on what teachers learned about their practice throughout the year and through the various processes.  Teachers and appraisers synthesize that information (what worked and what didn’t work as revealed through observations, student growth, goal attainment, and cumulative data) into the next year’s Goal-Setting and Professional Development Plan.  The Goal-Setting and Professional Development Plan then becomes the component that links one appraisal year to another, keeping the process cyclical and recursive.

Ratings

T-TESS districts have multiple options when determining end-of-year appraisal ratings for teachers.  Districts can keep the ratings disaggregated and provide teachers individual ratings for each of the sixteen dimensions on the T-TESS rubric.  For districts that adopt this method for summative ratings, student growth acts like the seventeenth dimension and is not weighted, as weighting does not apply to disaggregated ratings.

For T-TESS districts that decide to provide teachers a single overall summative rating, student growth must count for at least 20% of the overall summative rating.  In that sense, student growth acts like the fifth domain, with the four rubric domains accounting for the other 80% of the teacher’s overall summative rating.
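
As a concrete illustration of the single overall rating option, the sketch below assumes a district that weights student growth at exactly the 20% minimum and splits the remaining 80% evenly across the four rubric domains.  The 1-5 scale and the even split are illustrative assumptions, not requirements.

```python
# A minimal sketch, assuming a 1-5 rating scale, a 20% student growth weight
# (the minimum), and an even 20% weight for each of the four rubric domains.
# Districts may weight the components differently as long as student growth
# counts for at least 20% of the overall summative rating.

DOMAIN_WEIGHT = 0.80 / 4   # four rubric domains share the remaining 80%
GROWTH_WEIGHT = 0.20       # student growth counts for at least 20%

def overall_summative_rating(domain_scores, growth_score):
    """Combine four domain scores and one growth score into a single rating."""
    if len(domain_scores) != 4:
        raise ValueError("expected one score per T-TESS rubric domain")
    weighted_domains = sum(score * DOMAIN_WEIGHT for score in domain_scores)
    return weighted_domains + growth_score * GROWTH_WEIGHT

# Example: strong rubric performance (4, 4, 3, 4) with moderate growth (3).
print(overall_summative_rating([4, 4, 3, 4], 3))  # -> 3.6
```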

District Considerations

Just like appraisal in general, student growth will mirror the culture of the district and campus.  If teachers feel like appraisal is about ratings and labels and not about improvement, then student growth will likely be viewed negatively.  If a continuous improvement culture has been established, student growth should fit positively into that culture. 

When making procedural decisions about student growth measures, keep the goal of teacher appraisal in mind.  Does the procedural decision improve the feedback and growth opportunities for teachers, or does it sacrifice value for ease of implementation?  For example, if a student enters a class later in the school year, how should the teacher include that student in the student growth process?

  • If including the student will not improve the quality and accuracy of the feedback the teacher receives from the process, then the district may decide not to include the student in the process
  • If student mobility is a common concern for the district and/or campus, then the student may need to be included to determine how effectively the teacher brings about student growth for students who enter mid-stream

Growth vs. Proficiency

Student growth measures how much a student progresses academically during his or her time with a particular teacher.  It takes into consideration a student’s entering skill level when measuring how much the student grew over time.  As opposed to measuring student proficiency on an assessment, student growth isn’t concerned with whether or not a student passes a particular test or reaches a predetermined and uniform benchmark.  It considers equally students who enter behind grade level, on grade level, and beyond grade level, tailoring growth expectations to each student’s context.

Although feedback from both proficiency measures and growth measures has significant value in driving instruction, growth provides better feedback for driving improvements to instructional practices.  Proficiency lets teachers know where a student is relative to a fixed expectation, revealing what gaps in learning exist.  However, when it comes to measuring the effect of instructional practices in teacher appraisal, measuring only proficiency can lead to inaccurate conclusions.

For example, if a teacher has a student who is three years behind grade level entering class, and the teacher can help that student gain two and a half years of learning in a single year, the student still may not pass a grade-level proficiency assessment.  In a proficiency-based student performance measure used for appraisal, the feedback to the teacher would be that his or her pedagogy did not work for that student and needs to be adjusted, when in fact the teacher had a significant impact on student learning and should seek to reinforce the practices used with that student.
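
To make the arithmetic of this example explicit, here is a toy sketch with invented numbers; the grade-level benchmark and the one-year growth expectation are assumptions for illustration only.

```python
# A toy illustration of why proficiency-only feedback can mislead appraisal.
# Skill levels are expressed in grade-level years; all numbers are invented.

course_grade_level = 5.0                   # hypothetical grade-5 course
entering_level = course_grade_level - 3.0  # student enters three years behind
exit_level = entering_level + 2.5          # teacher produces 2.5 years of growth

passed_proficiency = exit_level >= course_grade_level     # fixed benchmark
met_growth_target = (exit_level - entering_level) >= 1.0  # assumed yearly target

print(passed_proficiency)  # False -> proficiency alone says the pedagogy failed
print(met_growth_target)   # True  -> growth reveals a significant impact
```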

By measuring growth, a teacher develops a better understanding of the academic impact of his or her instructional choices to solidify strengths and identify development opportunities.

General Guidance on All Student Growth Measures

Student growth has the following characteristics:

  • Measures academic progress over time
  • Establishes a baseline for each student covered in the measure that captures what learning the student brought into class
  • Tailors the expectation for growth to the student (reflects rigorous instruction and high expectations, but is not a uniform expectation for all students)

Narrowed Focus 

Trying to measure growth on all TEKS is practically impossible – an assessment that attempted to capture student learning for each skill and content expectation would be unreasonably long.

The purpose of a student growth measure is to capture the impact of pedagogical decisions throughout the year, and effective teachers spend much of the year recursively teaching the foundational skills that students must develop in a given course.  Student growth measures therefore tend to be strongest when they concentrate on the knowledge and skills that persist throughout the course and that have transferability – the knowledge and skills that not only lead to success in the current course but that also have lifelong application.

Rating Rubrics

Measuring teacher performance related to student growth requires the development of a rating rubric.  If a district chooses to pursue any measure outside of the state’s SLO process, then it will need to take the time to either build or select a rubric to determine teachers’ proficiency levels in bringing about student growth.  This is a crucial step in the student growth process and will likely require input from multiple stakeholder groups as well as time to build, gather feedback, and pilot.

In creating the SLO rating rubric, TEA and its stakeholders made the following decisions:

  • The rubric allows for a holistic rating, asking the appraiser to weigh the preponderance of evidence when determining ratings, as opposed to a more checklist-style approach
  • The rubric matches the number of and labels for performance levels on the T-TESS rubric
  • The rubric mixes teacher practices and student outcomes for the following reasons:
    • It highlights those pedagogical approaches that lead to strong student growth (setting targets with high expectations for students, creating a high-quality SLO, and adjusting practice throughout the year based on evidence of student progress)
    • It reflects the understanding that determining growth expectations for students will be new for most educators early on, so basing ratings entirely on whether or not students met those targets can be problematic

Process-based Student Growth Measures

The student learning objective (SLO) and portfolio processes follow very similar paths.  They are designed to engage teachers in deliberate thinking as they answer fundamental questions about their instructional planning and delivery:

  1. What are the most important skills that I develop in students through this course?
  2. Where do I think my students will be with these skills upon entering my class?
  3. Where are my students actually with these skills upon entering my class?
  4. Based on where my students are with these skills, where should they be at the end of the course if I provide effective instruction?

With these questions answered, the teacher then monitors student progress throughout the course to ensure that the instructional plans for those students effectively move them to their targeted skill level, making adjustments to pedagogy when students aren’t progressing as expected.

Accuracy vs. Precision  

Process-based student growth measures embrace the idea that determining where a student is with a given skill at a given time is an estimation.  Rather than trying to build assessments that precisely separate a 79 from an 80, these measures focus on descriptions of student skill that anchor categories, and they rely on the synthesis of multiple data points to reach a more accurate and thorough understanding of where a student is at a given moment.
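
As a hypothetical sketch of this idea, the example below synthesizes several imprecise data points for one skill (each already mapped to a category on a 1-4 scale) into a single placement by taking their median.  The category labels, the 1-4 scale, and the use of a median are all assumptions for illustration.

```python
# A sketch of accuracy over precision: rather than a single cut score,
# several data points for one skill (a quiz, a work sample, an observation)
# are synthesized into one category placement. All labels are hypothetical.

from statistics import median

SKILL_CATEGORIES = {1: "beginning", 2: "developing", 3: "proficient", 4: "advanced"}

def place_student(data_points):
    """Synthesize multiple 1-4 category ratings into one placement."""
    return SKILL_CATEGORIES[round(median(data_points))]

# Three data points collected near the same time for the same skill.
print(place_student([2, 3, 3]))  # -> "proficient"
```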

Process-based student growth measures are less about cut scores and single assessments and more about the “teaching loop”, where teachers engage in evidence-based instruction that leads to consistent adjustments and improvements to practice and planning.

SLOs:

For more information on the state’s SLO process, please visit: texasslo.org

It is recommended that districts considering the state’s SLO process view the Process Overview first, and then view the Teacher Guide and the Principal Guide as follow-up supports after attending the one-day SLO training.

As expressed in the state’s SLO training, SLOs are:

  • A means to teacher growth (reflect, assess, adjust, and develop over time)
  • A concentrated look at instructional impact through the lens of the most important skill(s)
  • A part of a teacher’s cycle of development (results could feed into Goal-Setting and Professional Development Plans)
  • Evidence-based

SLOs are not:

  • A second gradebook
  • Mathematical or mathematically precise
  • Focused on traditional testing
  • Standardized across a campus or district (as in, all students must reach a predetermined level, or all teachers will focus on reading)
  • The place to supersede teacher development needs with campus or district improvement plan needs

Districts can follow or adopt any SLO model that best fits their needs. The state’s SLO process has been designed to align with T-TESS, but it is not the required or only SLO model.

Portfolios:

Portfolios are very similar to SLOs in that they:

  • Work best with a focus narrowed to foundational skills
  • Determine the baseline for each student covered in the measure upon entering the course
  • Determine the expectation for each student covered in the measure at the end of the course
  • Determine what the demonstration of performance will be throughout or at the end of the course

Portfolios could differ from SLOs in that they can either accumulate student work over time, capturing incremental steps in student progress with given content or skills, or capture a range of end-of-course demonstrations of performance, showing student skill in a multitude of forms (although SLOs can also do this).

Portfolios can also be valuable for teachers who work with students in smaller increments of time or responsibility, such as teachers on alternative placement campuses or teachers who work with students in content mastery one or two days a week. In those instances, the teacher would capture evidence of student progress during his or her time with the student to show the impact of the pedagogy.

It is recommended that districts considering portfolios with T-TESS attend the one-day SLO training. From there, districts can determine ways to tweak the SLO process to fit the needs of teachers who would benefit more from a portfolio process.

Assessment-based Student Growth Measures

District pre- and post-tests and value-added measures based on state assessments (VAM) focus on uniform assessments as the basis for determining students’ levels of learning upon entering and exiting a course.  They are designed to numerically quantify an amount of growth at the student level before determining the teacher’s impact on that growth.
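
As a simplified sketch of that quantification, the example below computes a gain score for each student and averages those gains for a teacher.  The scale scores, the simple gain-score formula, and the mean aggregation are illustrative assumptions; actual VAM models use far more elaborate statistical machinery.

```python
# A simplified sketch of an assessment-based growth measure: quantify growth
# per student, then aggregate to the teacher. All numbers are invented, and
# real VAM models are considerably more sophisticated than a mean of gains.

from statistics import mean

def student_gain(pre_score, post_score):
    """Gain score: change in scale score between pre-test and post-test."""
    return post_score - pre_score

def teacher_mean_gain(score_pairs):
    """Aggregate student-level gains into one teacher-level number."""
    return mean(student_gain(pre, post) for pre, post in score_pairs)

# (pre-test, post-test) scale scores for one teacher's roster.
roster = [(410, 470), (455, 500), (390, 480)]
print(teacher_mean_gain(roster))  # -> 65.0
```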

VAM uses standardized state assessments to determine whether or not a student progressed as much as expected.  How that expectation is created can vary depending on the VAM model used.

District pre- and post-tests determine entering and exiting levels of learning using district-level assessments rather than state assessments.

VAM and district pre- and post-tests allow for greater comparability across campuses because students in the same grades and subjects take the same assessments and the interpretation of assessment results is generally objective and standardized.  If the district’s primary concerns for teacher appraisal are uniformity, comparability, and standardization, then VAM and district pre- and post-tests could be the measures that best align with those objectives.

Value-added Measures (VAM) based on state assessments:

When considering the use of VAM, please note that multiple models could be used to calculate VAM, and, depending on the entity using the model, similar models can take different names.

Research will capture both pros and cons for any given model of VAM a district could pursue, so districts are encouraged to weigh the relative importance of certain considerations when choosing a model.  Some of those considerations could be:

  • The feedback the data produces.  Does it signal to teachers potential growth areas based on the entering achievement levels of students (how well low-achieving students progress in the teacher’s class, for example) or based on demographic data (how well male students progress in the teacher’s class, for example)?
  • The amount of information the model takes into consideration, such as prior testing data
  • The ease of calculation or explanation
  • The ability to calculate VAM for certain tests (e.g., 4th grade, EOC, science, social studies)

In addition, districts will need to make a host of procedural decisions related to processing data and producing a VAM measure, such as:

  • How many performance levels will the district use to capture teachers’ results?
  • Will the district combine results for teachers who teach multiple tested grades and subjects? If so, how?
  • How will the district handle shared responsibility for teaching students (e.g., the primary teacher working with content mastery, or co-teaching situations)?
  • What is the minimum number of test takers that would be able to produce a VAM result for a teacher?
  • Are there certain conditions that would cause a student to be dropped from the data, such as several absences or an enrollment date late in the school year?
  • How will the district handle teachers who are out for an extended time, such as those on FMLA leave?
  • Will student-teacher linkages be binary or dynamic? For example, if a student is enrolled with a teacher on a given date, will the teacher be responsible for 100% of the student’s results, or will the responsibility be weighted based on the percentage of time the student is enrolled in the teacher’s course? (A minimal sketch of this choice follows the list.)
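
As referenced in the last bullet above, here is a minimal sketch of the binary vs. dynamic linkage choice.  The snapshot-date rule and the enrollment fraction are invented for illustration; districts would define their own attribution rules.

```python
# A minimal sketch of binary vs. dynamic student-teacher linkages.
# Both rules below are hypothetical examples, not prescribed methods.

def binary_weight(enrolled_on_snapshot_date):
    """Binary linkage: the teacher carries 100% responsibility for a student
    enrolled on the snapshot date, and 0% otherwise."""
    return 1.0 if enrolled_on_snapshot_date else 0.0

def dynamic_weight(fraction_of_course_enrolled):
    """Dynamic linkage: responsibility proportional to enrolled time."""
    return fraction_of_course_enrolled

# A student who joined mid-year and was enrolled for 60% of the course:
print(binary_weight(True))   # -> 1.0 (all of the student's results count)
print(dynamic_weight(0.6))   # -> 0.6 (results weighted by enrolled time)
```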

Due to the numerous considerations associated with using VAM, it is strongly encouraged that districts work with external partners with expertise in producing VAM results.

District Pre- and Post-Tests:

Districts pursuing district pre- and post-tests should consider the following:

  • Access to assessments for the grades and subjects in question (whether district-created or third-party-provided)
  • The scope of those assessments and their ability to accurately capture levels of learning for specific standards.  Do they focus on certain knowledge and skills or broadly cover many TEKS?
  • The length of the assessments.  Do they ask enough questions to accurately determine levels of learning for all standards tested?  How long will it take a student to complete the assessment?
  • Whether the pre- and post-tests are the same assessment or different assessments testing the same standards.  If the same assessment, does a higher post-test score reflect improvement in learning or familiarity with the questions?  If different assessments, how certain is the district that both tests assess the same content at the same levels?
  • The feedback the tests generate for teachers.  Do the tests merely quantify how much growth a student made, or do they also indicate pedagogical strengths and areas for improvement?

Closing

Improvement of practice is the ultimate goal in T-TESS, and the student growth component should be structured so that teachers have a better sense of what’s working, what’s not working, and what they can do to improve their practice moving forward.  In that sense, measuring student growth should be a means of supporting teacher growth.

Please reach out to your local education service center (ESC) for support in determining which student growth measure(s) are best for your district and with implementation of student growth.

Still need help? Contact Us