Here’s a little-reported story about the state assessment task force’s process. To evaluate the possible assessments, the task force created a rubric, asked vendors to respond to a Request for Information, and then scored the responses against that rubric. What happened, though, surprised them: No vendor submitted the Smarter Balanced Assessments for review. Of the proposals that were submitted, the Next Generation Iowa Assessments received by far the highest score. The other proposals scored low enough that the task force eliminated them from consideration.
At that point, the Next Generation Iowa Assessments became the only proposal under consideration. From the point of view of the task force, that was a problem that had to be solved. At the time, Iowa was still a member of the Smarter Balanced Assessment Consortium; in becoming a member, the state had agreed to adopt the Smarter Balanced Assessments.
The task force decided to issue another Request for Information and “to reach out to specific vendors to ask them to submit the Smarter Balanced Assessments for our review.” (Details here.) Lo and behold, a vendor submitted the Smarter Balanced Assessments for review.
Soon afterward, the state decided to withdraw from the Smarter Balanced Assessment Consortium, as a way of “respecting the Assessment Task Force’s independence and ensuring an impartial process.” A few months later, the task force recommended that the state adopt the Smarter Balanced tests.
If it had been the Iowa Testing Programs that had failed to submit a proposal in response to the Request for Information, would the task force have issued a second request? Would it have “reached out” to ask for a submission of the Next Generation Iowa Assessments for review? Or was the task force determined from the outset to recommend Smarter Balanced?