Thursday, April 25, 2024

Procedures and Communications Responsive to the Needs and Concerns of Districts and Schools - ebookschoice.com

The standards and responsibilities in this section describe activities necessary to administer psychometrically and legally defensible high-stakes tests efficiently and to a high standard of quality. These activities are typically assigned to the vendor, but responsibility may be shared with the agency. If the agency decides to retain responsibility for an activity, the agency may seek advice from the vendor but should clearly indicate that expectation in the RFP (Request for Proposal) and resulting contract.

 

For each activity or portion of an activity assigned to a vendor, the RFP and resulting contract should describe in detail what is expected of the vendor, any special conditions or limitations, and the compensation to be paid. If a state requests changes or delegates additional responsibilities to the vendor after the contract has been signed, the state may have to renegotiate the price.

 

Where the state has delegated such responsibility to the vendor, a plan for developing and maintaining a database of student and school testing information shall be created. The plan should provide mechanisms for tracking student movement, tracking retests, collecting demographic information needed for data analyses and reporting, ensuring the confidentiality of individually identifiable student data, correcting student identification numbers as needed, and updating files when errors are uncovered.
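
As an illustration only, here is a minimal sketch of the kind of student record such a plan might track; the field names and the "correct_student_id" helper are hypothetical, not drawn from any particular state system:

```python
from dataclasses import dataclass, field

@dataclass
class StudentTestRecord:
    """One student's testing history; all fields are illustrative."""
    student_id: str                                    # state-assigned identifier
    school_code: str                                   # updated when the student moves
    demographics: dict = field(default_factory=dict)   # for mandated analyses
    attempts: list = field(default_factory=list)       # (form, date, score) tuples

    def record_attempt(self, form: str, date: str, score: float) -> None:
        """Append a new attempt so retests are kept rather than overwritten."""
        self.attempts.append((form, date, score))

    def correct_student_id(self, new_id: str) -> None:
        """Fix an erroneous identifier while preserving the testing history."""
        self.student_id = new_id

# A retest is stored alongside the original attempt, preserving history.
rec = StudentTestRecord(student_id="000123", school_code="0456")
rec.record_attempt("ALG1-FormA", "2024-04-10", 498.0)
rec.record_attempt("ALG1-FormB", "2024-07-15", 512.0)   # summer retest
```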

 

With multiple subjects, multiple grades, and retests, it is essential that test data be organized in a format that is accessible and accurate, provides all data needed for state- and federally mandated analyses, and tracks the testing history of students, items, and test forms. Because most of the data collected will involve confidential or secure information, detailed policies for protecting the confidentiality of data collected and retained must be developed.

 

The RFP and resulting contract should clearly specify vendor expectations in this area. Creation and maintenance of electronic databases are expensive, and the cost may be prohibitive for some small testing programs. If the state chooses to maintain or collect its own data, the contract should clearly specify the form and content of data files the vendor is expected to provide to the agency.

 

The state has the responsibility to collect and report useful data to a variety of constituencies, including satisfying federal requirements. Where permitted by state law, a database of student and school information can be highly useful. The state is ultimately responsible for ensuring that such a database is maintained properly; where it has elected to delegate this responsibility to the vendor, the state is responsible for monitoring the work. States choosing not to use a state-level database have other means of carrying out this function that a vendor does not, such as requiring school districts to provide the data.

 

When a state chooses not to contract with a vendor to maintain a state database, the state must assume the responsibility for collecting and maintaining assessment data in a form that will produce usable information for various constituencies and that satisfies applicable law. Appropriate procedures must be implemented to satisfy confidentiality requirements and to ensure proper use of, and access to, all data. While a state has options other than creation of a statewide database, such options limit the usefulness of the available data.

 

The vendor proposal and resulting contract shall specify procedures for determining quantities of materials to be sent to districts (or schools), tracking test materials that have been sent, and resolving any discrepancies. A mechanism shall be developed for ensuring the accuracy of enrollment data supplied to the vendor and for updating school requests for additional or replacement materials. Instructions for handling test materials and for test administration (e.g., Administrator’s Manuals) shall be shipped to districts at least one month prior to testing to allow time for planning and staff training.
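
As a rough sketch of the reconciliation such procedures imply, the following compares reported enrollment against the quantities actually packed; the 5% overage allowance and all figures are assumptions for illustration:

```python
# Illustrative only: compute per-district shipment quantities from reported
# enrollment and flag discrepancies against what was actually packed.
OVERAGE = 1.05   # assumed 5% overage allowance, not a prescribed value

enrollment = {"District A": 1200, "District B": 340}   # reported test-takers
packed     = {"District A": 1260, "District B": 300}   # booklets actually shipped

for district, n in enrollment.items():
    needed = round(n * OVERAGE)                  # enrollment plus overage
    shortfall = needed - packed.get(district, 0)
    if shortfall > 0:
        print(f"{district}: short {shortfall} booklets; ship replacements")
```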

 

Valid and fair test results require adherence to all standard test administration conditions and security procedures by all test administrators. Test administrators are best prepared for this task when sufficient quantities of materials are received prior to testing and training has been provided using the actual instructions to be employed during testing. For districts to receive sufficient quantities of materials, accurate and timely enrollment information must be supplied to the vendor, and a mechanism must be established for efficiently responding to requests for additional or replacement materials. Administrator’s manuals and instructions for handling test materials are important communications for district planning and test administrator training and should be available for study prior to the receipt of test materials. Making such procedures and communications responsive to the needs and concerns of districts and schools should produce greater cooperation.

 

Timelines and procedures for receipt and return of test booklets and answer sheets shall be consistent with an agreed-upon test security policy and specifications in the RFP and resulting contract. Generally, test materials should arrive in sealed containers no earlier than one week prior to testing, should remain in a secure, locked storage area while in districts/schools, and should be repackaged and picked up within two days after test administration has been completed.

 

The security of test materials and the accuracy of state test data depend on the timely receipt and return of test materials by schools. The RFP and resulting contract should provide detailed descriptions of all security procedures to be followed by the vendor, including procedures for distributing, tracking, and returning test materials.

 

The state may wish to delegate to the vendor the responsibility for training school and district personnel.

States must develop and implement a policy for ensuring that schools and districts comply with the policies enumerated in the RFP and contract. When non-compliance is an issue, the state must be able to impose sanctions or otherwise compel action on the part of the local education agency. In addition, the state is responsible for the training of school and district personnel in the security policies.

 

The state retains responsibility for training local education agencies in established test security procedures and for monitoring and investigating their compliance. Administrative rules or statute should enumerate educators’ responsibilities, proscribed activities, and sanctions for violators. The state also has a duty to monitor contractor activities and to assist in the resolution of unforeseen circumstances (e.g., schools closing during test week due to a major flood or storm damage).

 

Reliability for any high-stakes exam should be held to the highest standards. Where open-ended response items or essays are included in an assessment, two raters shall score each response, with at least 70% agreement on initial scoring. When raters disagree on initial scoring, resolution (re-scoring) by a senior or supervisory rater is required.

 

Tests that are to be used for high stakes for either educators or students should attain high standards of reliability, as may be exemplified by an overall internal consistency coefficient of at least 0.85 to 0.90 on a 0-1 scale. Such overall reliability will not be attained unless hand-scored items, typically essays or other open-ended items, also attain adequate levels of inter-rater reliability. Trained raters who use detailed scoring rubrics and are periodically rechecked for accuracy should be able to score responses with a high degree of agreement. When two raters disagree and the test is being used for high-stakes decisions about individual students, fairness dictates that an experienced third rater resolve the discrepancy. (In cases of items with a large number of score points, "agreement" may consist of adjacent scores.) Alternative procedures for computerized scoring of open-response items can include one trained rater serving as the second rater, with similar procedures for resolving discrepancies. For assessments that do not include high stakes for students, a single rater may be sufficient as long as proper procedures are in place for checking samples for rater drift.
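
To make the agreement arithmetic concrete, here is a minimal sketch, with invented ratings, of how exact and adjacent agreement might be computed and how initial disagreements are flagged for a senior rater:

```python
def agreement_rate(r1, r2, adjacent_ok=False):
    """Fraction of responses on which the two raters agree."""
    hits = sum(1 for a, b in zip(r1, r2)
               if a == b or (adjacent_ok and abs(a - b) == 1))
    return hits / len(r1)

rater1 = [4, 3, 2, 4, 1, 3]
rater2 = [4, 2, 2, 3, 1, 3]

print(f"exact agreement: {agreement_rate(rater1, rater2):.0%}")                    # 67%
print(f"with adjacent:   {agreement_rate(rater1, rater2, adjacent_ok=True):.0%}")  # 100%

# Disagreements on initial scoring are routed to a senior rater for resolution.
to_resolve = [i for i, (a, b) in enumerate(zip(rater1, rater2)) if a != b]
print("responses needing resolution:", to_resolve)   # [1, 3]
```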

 

Quality control procedures for checking the accuracy of all item information, student scores and identification, and summary data produced by the testing program shall be developed and implemented. The standard for the error rate of data reports provided by a vendor to an agency for review is zero.

 

The vendor has a duty to formulate and implement quality control procedures for data generation that have as their goal the production of error-free reports and summary data. All data operations should be subject to multiple checks for accuracy before being released to the state. The vendor should document its quality control procedures for state review and create detailed logs that trace the application of those procedures to the state data reports.

 

Data reports released by state agencies must also be error-free. The state must develop its own quality assurance policy to monitor the work of the vendor. Data reports should be examined before general release. Effective techniques prior to release include: running score and summary reports on "dummy" data to ensure that the output is correct; closely examining a sample of the reports; sending preliminary data to select schools or districts for review; or having the state TAC (Technical Advisory Committee) or an outside consultant examine a sample of the reports.
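
The "dummy data" technique can be as simple as scoring fabricated answer strings against a known key and asserting that the resulting summary matches hand-computed values; everything in this sketch is hypothetical:

```python
KEY = "BDACA"   # invented answer key

dummy_answers = {
    "stu1": "BDACA",   # 5 correct by hand
    "stu2": "BDACC",   # 4 correct by hand
    "stu3": "AAAAA",   # 2 correct by hand
}

def raw_score(answers: str) -> int:
    """Number of responses matching the key."""
    return sum(a == k for a, k in zip(answers, KEY))

scores = {sid: raw_score(ans) for sid, ans in dummy_answers.items()}
assert scores == {"stu1": 5, "stu2": 4, "stu3": 2}, f"scoring error: {scores}"
assert round(sum(scores.values()) / len(scores), 2) == 3.67   # expected mean
print("dummy-data check passed")
```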

 

When erroneous data is released publicly, the testing program loses credibility and incorrect decisions may be made. It is imperative that all reasonable procedures be used to check the accuracy of all testing program data before report distribution or public release. The vendor has primary responsibility to find and correct errors, with agency staff acting as a final check. The expectation of zero errors is contingent upon the state providing all necessary information. Nontrivial vendor errors may trigger financial penalties in states that include such provisions in their contracts.

 

When an item error, scoring error, or reporting error is discovered, the vendor shall notify state staff immediately. Vendor staff should then work closely with agency staff, and technical advisory committee members or outside consultants where appropriate, to develop a comprehensive plan for correcting the error. The plan should include the provision of timely and truthful information to the affected stakeholders.

 

The way in which an error becomes public and the actions taken to correct it can have a major impact on public perceptions. Straightforward communication of information as it becomes available and immediate corrective action can help restore public confidence in the vendor and the state testing program. Error does not include reasonable differences of opinion.

 

Testing report forms shall be received by the district or other responsible entity (e.g., charter school) no later than the end of the semester in which testing occurred. Individual student reports for multiple-choice tests should be received within 2 weeks of the date on which answer documents were received by the vendor. School, district, and state reports should be produced within 2 weeks of the cutoff date for return of answer documents. For tests containing open-ended items or essays requiring ratings, individual student reports should be received within 6 weeks of the date on which answer documents were received by the vendor. School, district, and state reports should be produced within 6 weeks of the cutoff date for return of answer documents. Where an assessment is composed entirely, or almost entirely, of essays or other open-ended items, more time is likely to be necessary for scoring. The contract should specify any antecedent conditions that must be met by the agency for reports to be delivered on time.
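
As a sketch of these reporting windows, assuming for illustration that the window simply opens on the date answer documents are received (the contract must define the start precisely):

```python
from datetime import date, timedelta

def report_due(answer_docs_received: date, has_open_ended: bool) -> date:
    """2 weeks for machine-scored tests, 6 weeks when human rating is needed."""
    return answer_docs_received + timedelta(weeks=6 if has_open_ended else 2)

received = date(2024, 4, 19)   # cutoff for return of answer documents
print(report_due(received, has_open_ended=False))   # 2024-05-03
print(report_due(received, has_open_ended=True))    # 2024-05-31
```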

 

For data to be useful for instructional improvement and for making decisions about enrollment in remedial classes or summer school, it must be received prior to the beginning of the next instructional semester following the date of testing. Turnaround time will vary depending on program complexity but should be kept as short as possible while maintaining accuracy. If state staff with expertise believe that these timelines do not reflect their needs, they can elect to deviate from them; however, a rationale should be provided. It is understood that there are tradeoffs inherent in the timeline process, and state policymakers should be able to explain their reasoning for allowing vendors to go beyond these timelines, if they elect to do so.

 

Plans should include rules for the scoring of late-arriving papers, particularly with regard to calculating summary statistics. (For example, how long should one school be allowed to hold up the state summary statistics?) Clear guidelines in this area are especially important for tests that include open-response items; in such cases, a contractor will typically have only a limited window of time in which to complete the human scoring. The beginning date of the 2-week or 6-week scoring window should be clearly defined in the contract. Further, the contractor’s scoring timeline should be defined to include all activities the contractor must perform, including those required to ensure the integrity of the data, not just the scoring itself.

When the RFP and resulting contract provide reasonable timelines for scoring and reporting, and the agency has met its obligations, states may wish to include contractually agreed-upon incentives for performance by the vendor. Incentives may include a bonus for early completion or a penalty for late performance or errors. Administration activity timelines may well extend beyond a single annual state appropriation; states may benefit from multi-year funding plans and contracts across fiscal years (which may be cancelled if the budget must be reduced or the program is eliminated). States must, of course, stay within statutory constraints imposed by their respective legislatures.

 

The RFP and resulting contract should contain workable timelines that allow sufficient time for scoring and quality control. When delays occur, timely communication is vital for resolving the problem expeditiously and dealing effectively with those affected. If bonus or penalty clauses are included in contracts, timelines for agency staff to complete prerequisite tasks should also be specified. States may want to consider contract payment schedules to vendors based upon the delivery of specified products and services rather than on the basis of calendar dates alone.

 

The majority of state testing programs choose a spring test administration, which results in demands on vendors to produce reports for multiple programs during the same narrow time frame at the end of the school year. States able to schedule scoring during nonpeak periods may have greater flexibility in turnaround time and may gain cost savings. Programs with bonus or penalty contract provisions are likely to be given priority in such circumstances (though other considerations are also likely to come into play). The contract should contain the same scoring deadlines contained in the RFP. States may wish to attach to these deadlines specific liquidated damages for each day of non-delivery. In such cases, the contract should include provision for performance bonds against which the agency can claim the damages.

 

Funding is not a simple issue of obtaining annual appropriations. Activities for any given assessment administration from start to finish require approximately 18 months. This means that the typical fiscal year of 12 months and the assessment "year" of 18 months will conflict unless special provisions are made in the funding. One would not want to be in the position of having to write a contract for the first 12 months of activities and then another contract for the last 6 months of work. Furthermore, there is the likelihood that the fiscal year will not coincide with the RFP/contract/implementation cycle. The solution is to create multiyear funding plans and permit the agency to contract across fiscal years. Contracts can be cancelled if budgets must be reduced or the program is eliminated. Contracts should allow for necessary audits if required by the state comptroller.

 

When a delay is likely, the vendor should notify agency staff immediately and provide a good faith estimate of its extent.

 

Immediate notification of the state when a delay is likely is always best practice for the vendor. Quick notification allows all parties involved to assess the scope of the problem, its impact, and any necessary actions.

 

Generally, use of the data is the responsibility of the state and the LEA (local education agency). Some of these activities might be delegated to vendors, however. It is important that the RFP and the resulting contract make it clear what is expected of the vendor. If the state requests changes or delegates additional responsibilities to the vendor after the contract has been signed, the state may have to renegotiate the price.

 

Clear and understandable reports must be developed for communicating test results to educators, students, parents, and the general public.

 

Clear communication and guidelines for interpretation are essential to appropriate use of test data. Interpretive guidelines should be provided for both individual- and school-level reports. Cautions against over-interpretation, such as using tests for diagnostic purposes for which they have not been validated, should be made clear.

 

The state is responsible for communicating the test results to educators, students, parents, and the general public. An important part of this responsibility is the design of reports of test data. The state might choose to do this itself, or delegate it to the vendor. If the state delegates the design of reports to the vendor, the state shall be responsible for clearly sharing with the vendor its expectations about the audience for the reports, the purpose of the testing program and the uses to which the data will be put. The state shall also make clear, in writing, its requirements for the languages of reports to parents and the community and whether the reports should be graphic, numerical or narrative. The state shall be responsible for approving report formats in a timely manner as described in the contract.

The state is in the best position to determine how the test results will be used and what data will best communicate relevant and important information to the various audiences. It is also the prerogative of the state to determine report formats, types of scores to be reported and appropriate narrative information to accompany each report. Final report formats should be approved by the state before actual reports are printed. The state may also choose to provide access to data on a website designed by the state or its vendor.

 

If specific responsibility for monitoring the use of the test data is a part of the vendor’s contract, the vendor shall develop detailed policies and procedures for promoting and monitoring the proper interpretation of test data and implement those plans. Regardless of delegation of responsibility in this area, the vendor shall have a system for compiling any information of which it becomes aware regarding the improper and/or incorrect uses of data and relaying that information to the state.

 

The vendor, just like the state, bears responsibility for supporting and encouraging the ethical and proper implementation of the assessment system. Where the vendor has become aware of inappropriate practices in the course of its work on the assessment system, these should be reported to the state.

 

The state shall determine how the test data are to be used, and develop detailed policies and procedures for the proper use of the data. The state shall use the resources of the vendor or other qualified individuals (such as the Technical Advisory Committee) as needed to ensure the proper use of the test data for the purposes for which the test is intended, and make all reasonable attempts to prevent the improper use and interpretation of the data.

 

The purpose of the testing program is to provide data that meets the goals of the program. Improper interpretation and use of the data negate all of the activities that led to the creation of that data, wasting money and time and perhaps doing a serious disservice to students in the state. Since the vendor knows the test well and often has the capability to assist in interpretation and dissemination, the state may want to include in the contract the use of the vendor’s resources in conducting workshops around the state for teachers and administrators, joining and assisting state personnel in presenting the data to stakeholders such as legislative committees and the press, or assisting in the dissemination of the data. The state should use its greater knowledge of the schools and districts in the state and their needs to help the vendor in these functions. The complementary expertise of the vendor and the state should be utilized to ensure that the data is used in an appropriate manner.

 

 

Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.

 

Source: https://ebookschoice.com/procedures-and-communications-responsive-to-the-needs-and-concerns-of-districts-and-schools/

Tuesday, March 26, 2024

State Reforms To Develop K-12 Academic Standards - ebookschoice.com

State reforms to develop K-12 academic standards and to assess the performance of all students on these standards have resulted in a substantial increase in the number and scope of contracts with testing companies for statewide assessment programs. Many of these assessments are high-stakes for students (e.g., graduation or grade promotion tests) and/or educators (e.g., accountability programs). With high school diplomas, monetary awards or federal funding for schools and school systems dependent on test results, it is imperative that state assessments be of high quality, meet professional standards for best practice, be delivered in a timely manner, and be scored accurately. With increasingly tight budgets, it is similarly imperative that assessment programs be developed and implemented in an efficient and cost-effective manner without sacrificing quality.

 

Creating a high-quality state testing program requires both cooperation and accountability. It recalls the arms control motto, "trust but verify." The main participants in this relationship are state agency staff and test vendor staff. To support these efforts, the states participating in developing these Model Contractor Standards and State Responsibilities for State Testing Programs seek to communicate more clearly today's expectations for the development and administration of high-quality, efficient, and defensible high-stakes state testing programs.

 

For vendors, commitment to following the "vendor standards" described herein can be cited as evidence of self-regulation and adherence to best practices. Of course, such standards are also designed for use by states in designing contractual relationships with vendors and in managing and overseeing those relationships. For states, the outlined "state responsibilities" are intended to provide a model for what is necessary to create a high-quality testing program and to serve as guidelines for policymakers enacting reforms in state testing programs. Some of the state responsibilities also describe important requirements for legal defensibility of high-stakes components within a state testing program.

 

The Preplanning Standards address antecedent activities and decisions critical to the production of a comprehensive Request for Proposal (RFP) describing the products and services the state wants a vendor to supply to its testing program. Assuming the criteria specified in the Preplanning responsibilities have been met, the Development and Administration sections provide guidelines for the specification of contract activities and execution of the contracted work. The Uses of Data section deals with activities subsequent to the administration of the test but directly connected with the interpretation of test data and score reports. Each section begins with a brief introduction that provides background and explanatory information.

 

Many of the standards herein have both a vendor and a state corollary. The purpose of this document is to clarify ideal roles and obligations in a typical relationship between a state and a vendor, with respect to either test development or scoring and administration. The "typical" relationship is assumed to be that the state, in response to action by its legislature, has developed academic standards and is contracting with one vendor to purchase a custom-built assessment based on its own academic standards and with another vendor to handle test administration and scoring.

 

Assumptions of some kind are clearly necessary for the design of a "model" document of this type. Of course, there are few "typical" states that precisely, or even nearly, mirror all of the arrangements assumed here. Among the many possible variations are:

- A state agency buys access to a commercially developed and published test.

 

- A state agency hires a vendor to develop a test that is then owned by the agency. The state requires the vendor to provide materials, scoring services, and reporting services.

 

- A state agency hires a vendor to develop tests that will be owned by the state and then hires a second vendor to do the test administration, scoring, and reporting activities.

 

- A state agency buys access to a test published by one vendor but then hires another vendor to administer, score, and report the results.

 

- A state agency develops the test with the assistance of local districts and state universities. It then hires a vendor to administer, score, and report the results.

 

Evaluating the acceptability of a process or contract between a state and vendor should not rely on the literal satisfaction of every standard or responsibility in this document, nor can acceptability be determined through the use of a checklist. Further, while retaining decision-making authority, a state may benefit from seeking the advice of the vendor regarding alternative methods for satisfying a particular guideline. Similarly, a responsible vendor will seek the state's advice or feedback at every point along the way where important decisions must be made. Regardless of how roles are defined and tasks are delegated, states retain ultimate responsibility and authority for state testing programs.

 

Throughout this document, the terms "testing companies" and "industry" apply generically to refer to all providers of test content, printing, scoring, validation, and other testing related services, whether for-profit or not-for-profit, public or private. The term "state" applies to the educational enterprise of the fifty states, territories, and other appropriate jurisdictions and includes state education agencies (SEAs), state boards of education, and other official educational entities.

 

For a state testing program to follow standards of best practice, several preconditions should be met before an RFP is developed, a contract is signed with a vendor, and test development begins. These preconditions are important in enabling a vendor to produce a quality test that will satisfy the state's expectations. These preconditions include actions to be taken by the state legislature (or other responsible entity) as well as the state agency with authority to implement the testing program. The purpose of these preconditions is to support production of an RFP that specifies in detail the services and products to be provided by the vendor given reasonable timelines and resources, to ensure that the state has knowledgeable and adequately trained staff competent to assign all required activities to either itself or the vendor, to ensure adequate planning and funding to produce a quality testing program, and to ensure that staff is in place to competently supervise the vendor relationship going forward.

 

The state legislature should enact reasonable timelines and provide adequate funding for the testing program. The legislation should also include: statements of purpose; designation of authority for important tasks (e.g., standard setting); responsibilities of the state agency and the local districts; types of reports of results; uses of the data (e.g., school or district accountability); and contracting authority.

 

In general, the lead time for developing a new, high-stakes assessment is a minimum of 38 months:

- 6 months: planning, preparation, and development of the test blueprint

- 6 months: item writing and editing

- 6 months: item tryouts and analyses

- 6 months: preparation of field test forms and supporting materials

- 6 months: field testing, research studies (e.g., opportunity-to-learn surveys in the case of high-stakes tests), and analyses

- 6 months: development of final test forms; editing of supplementary materials; setting of passing standards; finalizing security, accommodations, and reporting policies

- 2 months: administering final tests; scoring, equating, and reporting results

This timeline begins with the signing of a contract with a vendor and assumes that state content standards for the subjects being tested have already been adopted.
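
Expressed as simple arithmetic, the phases above sum to the stated minimum:

```python
# Durations in months, taken directly from the list above.
phases = {
    "planning and test blueprint": 6,
    "item writing and editing": 6,
    "item tryouts and analyses": 6,
    "field test forms and materials": 6,
    "field testing and analyses": 6,
    "final forms, standards, policies": 6,
    "administer, score, equate, report": 2,
}
total = sum(phases.values())
print(f"minimum lead time: {total} months (~{total / 12:.1f} years)")   # 38 months
```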

 

Preplanning activities leading to the development of a comprehensive RFP, vendor bid time, proposal evaluation, and negotiations for the award of a final contract will often add at least another 6 months. Unanticipated complications that often accompany implementation of a new testing program will also add additional time. Thus, legislation that creates a new testing program should allow approximately 4 years from the time of passage until the first live tests are administered. States with well-established testing programs and experienced staffs in place may be able to reduce the time required to develop additional tests (i.e., development limited to certain grades not previously covered).

 

Three to four years of lead time is also consistent with legal requirements for an adequate notice period and opportunity to learn (OTL) for tests with high stakes for individual students. OTL requires sufficient time for implementation of curricula and instruction that provides an adequate opportunity for students to have been taught the tested content before their initial attempt to pass high-stakes tests. If a state has developed sufficiently clear academic standards, notice requirements for OTL may be triggered by the publication date of the standards if a strong communication effort is undertaken and the state can demonstrate that schools have aligned instruction with the standards. When offered opportunities for input, potential vendors should alert states to unreasonable timelines and propose alternatives reasonably calculated to meet professional standards for best practice.

 

In addition to providing sufficient development time, legislation for a new testing program must provide adequate funding for agency preplanning activities, test development, test administration, and other program activities necessary to produce a quality test for each of the mandated grades and subjects. The required activities may be conducted by the state or an outside vendor, but funding must be sufficient so that no important steps are left out.

 

Development costs are largely fixed; they do not depend on the number of students to be tested. Therefore, a small testing program with limited funds and a relatively small number of students over which to spread the cost may be able to develop tests for fewer grades and/or subjects than larger testing programs. Options for state cooperation to improve efficiency and lower costs are described in the accompanying innovation priorities document.
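
A toy computation shows why: spreading a fixed development cost (the figure below is invented) over very different numbers of students changes the per-student burden dramatically:

```python
DEV_COST = 2_000_000   # assumed fixed cost to develop one test, illustrative only

for students in (10_000, 500_000):
    print(f"{students:>7,} students -> ${DEV_COST / students:,.2f} per student")
# output:  10,000 students -> $200.00 per student
#         500,000 students -> $4.00 per student
```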

 

Where custom tests are to be designed on the basis of state academic standards, special care should be taken to develop high quality standards that are rigorous, clear and specific, consistent with sound research on curriculum and instruction, and well-organized to ensure that the lower levels serve as a sound foundation for the upper levels.

 

While sound standards-based tests must be well aligned with state standards, the standards should be designed so that they serve that purpose well. Consensus-building within a state is important in order to develop support for broad implementation, but consensus does not always or necessarily lead to quality. Where, for example, consensus is reached as a result of agreement on broad or vague standards statements, schools may focus excessively on the test itself for guidance on what to teach; tests are typically not designed to carry such a heavy load, leading to criticisms that teachers are "narrowing the curriculum" to what is being tested. More detailed criteria and models for high-quality standards exist and have been identified by other organizations.

 

The state agency responsible for testing should include staff with adequate knowledge and training in psychometrics, curriculum, communication, policy, special populations, and contracting to develop a comprehensive RFP, to complete all necessary activities not assigned to the vendor, and—especially—to monitor vendor performance throughout the contract period.

 

Agency staff must play an active role in the development of a quality testing program. In order to decide which services and products should be included in an RFP, agency staff must thoroughly understand the test development process and the requirements for a psychometrically and legally defensible test. Where knowledge and training are inadequate or lacking, staff should seek training and assistance from outside experts.

 

Agency staff must be prepared to complete all required steps and activities not specifically delegated by the state to a vendor and to competently monitor all contract activities. It is possible for a state agency to outsource some of the steps and activities required prior to the selection of the main test development vendor, though adequate expertise should always exist on staff to monitor the performance of all such additional vendors.

 

The staffing needs of the state agency to support a statewide assessment program are significant. This is one of the more serious tasks confronting states. At the minimalist end of the continuum, a state could theoretically have one person serve as the assessment coordinator and simply allow the contractor to do everything. At the other end, a state agency can hire a sufficient number of staff members to coordinate the work of multiple contractors, assume primary responsibility for quality control work, and provide data analysis and dissemination/training activities within the state.

 

State legislatures do not have to create many permanent positions in the state bureaucracy if the agency assembles an office with a critical mass of permanent employees and outsources tasks requiring more personnel (e.g., test development, editing, and test form assembly).

 

The state agency responsible for testing should develop a comprehensive RFP that describes clearly and in detail both the end products to be provided by the vendor and the development process.

 

The RFP is the roadmap for the creation of a quality testing program. It should contain detailed specifications for all key areas, including, but not necessarily limited to, the totality of services and products to be provided, timelines for performance, quality criteria, responsiveness criteria, mid-course correction opportunities, and the process for evaluating proposals. When developing the RFP, agency staff should be aware of all required activities for a defensible testing program and should specifically assign responsibility for each activity to itself or to the vendor. It is occasionally the case that the state is not sure how to accomplish a certain goal of the testing program and wants vendors to propose a solution. In this case, the RFP should clearly separate those requirements that are firm, those that are aspirational, and those that are simply unknown. It should be very clear to vendors whether they are responding to specific requirements or proposals for implementing general requirements. The state should also clearly spell out timelines in the development process, both for test development and for scoring/administration.

 

To provide a fair comparison of proposals received in response to an RFP, states should require all vendors to either: (a) provide costs for a fixed set of services and products specified in detail in the RFP; or (b) specify in detail what services and products they could provide for fixed incremental costs. The method should be chosen in advance by the state and clearly specified in the RFP.

 

When vendors bid on different combinations of services and products, their proposals are not comparable and it is difficult for the state to evaluate cost effectiveness. The lowest bid may have to be accepted even when essential activities are missing or incomplete. When all vendors use the same method, a fairer evaluation of their proposals is possible. States with precise knowledge of the test products and services they will need are more likely to benefit from option (a), while states with less precise knowledge, or whose plans may change, are more likely to benefit from option (b).

 

Clearly, this document will apply differently in each of these scenarios. It is hoped, however, that the model is sufficiently clear and well-defined that a state using a different arrangement would be able to determine the necessary adjustments in those sections that require adjustment. Even in cases where the relationship or tasks in a given state appear to fit perfectly with the model described here, this document is only intended to provide a framework to ensure that relevant issues are addressed. Important issues need to be resolved in a way that is consistent with a state's political process.

 

 

Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.

 

Source: https://ebookschoice.com/state-reforms-to-develop-k-12-academic-standards/