Tuesday, March 26, 2024

State Reforms To Develop K-12 Academic Standards - ebookschoice.com

State reforms to develop K-12 academic standards and to assess the performance of all students on these standards have resulted in a substantial increase in the number and scope of contracts with testing companies for statewide assessment programs. Many of these assessments are high-stakes for students (e.g., graduation or grade promotion tests) and/or educators (e.g., accountability programs). With high school diplomas, monetary awards or federal funding for schools and school systems dependent on test results, it is imperative that state assessments be of high quality, meet professional standards for best practice, be delivered in a timely manner, and be scored accurately. With increasingly tight budgets, it is similarly imperative that assessment programs be developed and implemented in an efficient and cost-effective manner without sacrificing quality.

 

Creating a high-quality state testing program requires both cooperation and accountability. It recalls the arms control motto, "trust, but verify." The main participants in this relationship are state agency staff and test vendor staff. To support these efforts, the states participating in developing these Model Contractor Standards and State Responsibilities for State Testing Programs seek to communicate more clearly today's expectations for the development and administration of high-quality, efficient, and defensible high-stakes state testing programs.

 

For vendors, commitment to following the "vendor standards" described herein can be cited as evidence of self-regulation and adherence to best practices. Of course, such standards are also designed for use by states in designing contractual relationships with vendors and in managing and overseeing those relationships. For states, the outlined "state responsibilities" are intended to provide a model for what is necessary to create a high-quality testing program and to serve as guidelines for policymakers enacting reforms in state testing programs. Some of the state responsibilities also describe important requirements for legal defensibility of high-stakes components within a state testing program.

 

The Preplanning Standards address antecedent activities and decisions critical to the production of a comprehensive Request for Proposal (RFP) describing the products and services the state wants a vendor to supply to its testing program. Assuming the criteria specified in the Preplanning responsibilities have been met, the Development and Administration sections provide guidelines for the specification of contract activities and execution of the contracted work. The Uses of Data section deals with activities subsequent to the administration of the test but directly connected with the interpretation of test data and score reports. Each section begins with a brief introduction that provides background and explanatory information.

 

Many of the standards herein have both a vendor and state corollary. The purpose of this document is to clarify ideal roles and obligations in a typical relationship between a state and vendor, either with respect to test development or scoring and administration. The "typical" relationship is assumed to be that the state, in response to action by its legislature, has developed academic standards and is contracting with one vendor to purchase a custom-built assessment based on its own academic standards and with another vendor to handle test administration and scoring.

 

Assumptions of some kind are clearly necessary for the design of a "model" document of this type. Of course, there are few "typical" states that precisely, or even nearly, mirror all of the arrangements assumed here. Among the many possible variations are:

- A state agency buys access to a commercially developed and published test.

 

- A state agency hires a vendor to develop a test that is then owned by the agency. The state requires the vendor to provide materials, scoring services, and reporting services.

 

- A state agency hires a vendor to develop tests that will be owned by the state and then hires a second vendor to do the test administration, scoring, and reporting activities.

 

- A state agency buys access to a test published by one vendor but then hires another vendor to administer, score, and report the results.

 

- A state agency develops the test with the assistance of local districts and state universities. It then hires a vendor to administer, score, and report the results.

 

Evaluating the acceptability of a process or contract between a state and vendor should not rely on the literal satisfaction of every standard or responsibility in this document, nor can acceptability be determined through the use of a checklist. Further, while retaining decision-making authority, a state may benefit from seeking the advice of the vendor regarding alternative methods for satisfying a particular guideline. Similarly, a responsible vendor will seek the state's advice or feedback at every point along the way where important decisions must be made. Regardless of how roles are defined and tasks are delegated, states retain ultimate responsibility and authority for state testing programs.

 

Throughout this document, the terms "testing companies" and "industry" apply generically to refer to all providers of test content, printing, scoring, validation, and other testing related services, whether for-profit or not-for-profit, public or private. The term "state" applies to the educational enterprise of the fifty states, territories, and other appropriate jurisdictions and includes state education agencies (SEAs), state boards of education, and other official educational entities.

 

For a state testing program to follow standards of best practice, several preconditions should be met before an RFP is developed, a contract is signed with a vendor, and test development begins. These preconditions are important in enabling a vendor to produce a quality test that will satisfy the state's expectations. These preconditions include actions to be taken by the state legislature (or other responsible entity) as well as the state agency with authority to implement the testing program. The purpose of these preconditions is to support production of an RFP that specifies in detail the services and products to be provided by the vendor given reasonable timelines and resources, to ensure that the state has knowledgeable and adequately trained staff competent to assign all required activities to either itself or the vendor, to ensure adequate planning and funding to produce a quality testing program, and to ensure that staff is in place to competently supervise the vendor relationship going forward.

 

The state legislature should enact reasonable timelines and provide adequate funding for the testing program. The legislation should also include: statements of purpose; designation of authority for important tasks (e.g., standard setting); responsibilities of the state agency and the local districts; types of reports of results; uses of the data (e.g., school or district accountability); contracting authority.

 

In general, the lead time for developing a new, high-stakes assessment includes a minimum of 38 months: 6 months planning, preparation, & development of test blueprint; 6 months item writing & editing; 6 months item tryouts & analyses; 6 months preparation of field test forms & supporting materials; 6 months field testing, research studies (e.g., opportunity to learn surveys in the case of high-stakes tests) & analyses; 6 months development of final test forms, edit supplementary materials, set passing standards, finalize security, accommodations, & reporting policies; 2 months administer final tests, score, equate, and report results. This timeline begins with the signing of a contract with a vendor and assumes that state content standards for the subjects being tested have already been adopted.
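The phase durations above can be tallied to confirm the 38-month minimum. The sketch below is purely illustrative; the phase labels are shortened paraphrases of the list in the text.

```python
# Illustrative tally of the minimum development timeline described above.
# Durations are in months; labels paraphrase the phases in the text.
phases = {
    "planning, preparation & test blueprint": 6,
    "item writing & editing": 6,
    "item tryouts & analyses": 6,
    "field test forms & supporting materials": 6,
    "field testing, research studies & analyses": 6,
    "final forms, supplements, standard setting & policies": 6,
    "administer final tests, score, equate & report": 2,
}

total_months = sum(phases.values())
print(total_months)  # 38
```

Note that this clock starts at contract signing; as the next paragraph explains, RFP development, bidding, and contract negotiation add at least another 6 months on top of this total.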

 

Preplanning activities leading to the development of a comprehensive RFP, vendor bid time, proposal evaluation, and negotiations for the award of a final contract will often add at least another 6 months. Unanticipated complications that often accompany implementation of a new testing program will also add additional time. Thus, legislation that creates a new testing program should allow approximately 4 years from the time of passage until the first live tests are administered. States with well-established testing programs and experienced staffs in place may be able to reduce the time required to develop additional tests (e.g., development limited to certain grades not previously covered).

 

Three to four years of lead time is also consistent with legal requirements for an adequate notice period and opportunity to learn (OTL) for tests with high stakes for individual students. OTL requires sufficient time for implementation of curricula and instruction that provides an adequate opportunity for students to have been taught the tested content before their initial attempt to pass high-stakes tests. If a state has developed sufficiently clear academic standards, notice requirements for OTL may be triggered by the publication date of the standards if a strong communication effort is undertaken and the state can demonstrate that schools have aligned instruction with the standards. When offered opportunities for input, potential vendors should alert states to unreasonable timelines and propose alternatives reasonably calculated to meet professional standards for best practice.

 

In addition to providing sufficient development time, legislation for a new testing program must provide adequate funding for agency preplanning activities, test development, test administration, and other program activities necessary to produce a quality test for each of the mandated grades and subjects. The required activities may be conducted by the state or an outside vendor, but funding must be sufficient so that no important steps are left out.

 

Development costs do not depend on the number of students to be tested. Therefore, a small testing program with limited funds and a relatively small number of students over which to spread the cost may only be able to develop tests for fewer grades and/or subjects than larger testing programs. Options for state cooperation to improve efficiency and lower costs are described in the accompanying innovation priorities document.

 

Where custom tests are to be designed on the basis of state academic standards, special care should be taken to develop high quality standards that are rigorous, clear and specific, consistent with sound research on curriculum and instruction, and well-organized to ensure that the lower levels serve as a sound foundation for the upper levels.

 

While sound standards-based tests must be well aligned with state standards, the standards should be designed so that they serve that purpose well. Consensus-building within a state is important in order to develop support for broad implementation, but consensus does not always or necessarily lead to quality. Where, for example, consensus is reached as a result of agreement on broad or vague standards statements, schools may focus excessively on the test itself for guidance on what to teach; tests are typically not designed to carry such a heavy load, leading to criticisms that teachers are "narrowing the curriculum" to what is being tested. More detailed criteria and models for high-quality standards exist and have been identified by other organizations.

 

The state agency responsible for testing should include staff with adequate knowledge and training in psychometrics, curriculum, communication, policy, special populations, and contracting to develop a comprehensive RFP, to complete all necessary activities not assigned to the vendor, and—especially—to monitor vendor performance throughout the contract period.

 

Agency staff must play an active role in the development of a quality testing program. In order to decide which services and products should be included in an RFP, agency staff must thoroughly understand the test development process and the requirements for a psychometrically and legally defensible test. Where knowledge and training are inadequate or lacking, staff should seek training and assistance from outside experts.

 

Agency staff must be prepared to complete all required steps and activities not specifically delegated by the state to a vendor and to competently monitor all contract activities. It is possible for a state agency to outsource some of the steps and activities required prior to the selection of the main test development vendor, though adequate expertise should always exist on staff to monitor the performance of all such additional vendors.

 

The staffing needs of the state agency to support a statewide assessment program are significant; meeting them will be one of the more serious tasks confronting states. At the minimalist end of the continuum, a state could theoretically designate one person as the assessment coordinator and simply allow the contractor to do everything. At the other end, a state agency can hire a sufficient number of staff members to coordinate the work of multiple contractors, assume primary responsibility for quality control, and provide data analysis and dissemination/training activities within the state.

 

State legislatures do not have to create many permanent positions in the state bureaucracy if the agency builds an office with a critical mass of permanent employees and outsources tasks requiring more personnel (e.g., test development, editing, and test form assembly).

 

The state agency responsible for testing should develop a comprehensive RFP that describes clearly and in detail both the end products to be provided by the vendor and the development process.

 

The RFP is the roadmap for the creation of a quality testing program. It should contain detailed specifications for all key areas, including, but not necessarily limited to, the totality of services and products to be provided, timelines for performance, quality criteria, responsiveness criteria, mid-course correction opportunities, and the process for evaluating proposals. When developing the RFP, agency staff should be aware of all required activities for a defensible testing program and should specifically assign responsibility for each activity to itself or to the vendor. It is occasionally the case that the state is not sure how to accomplish a certain goal of the testing program and wants vendors to propose a solution. In this case, the RFP should clearly separate those requirements that are firm, those that are aspirational, and those that are simply unknown. It should be very clear to vendors whether they are responding to specific requirements or proposals for implementing general requirements. The state should also clearly spell out timelines in the development process, both for test development and for scoring/administration.

 

To provide a fair comparison of proposals received in response to an RFP, states should require all vendors to either: (a) provide costs for a fixed set of services and products specified in detail in the RFP; or (b) specify in detail what services and products they could provide for fixed incremental costs. The method should be chosen in advance by the state and clearly specified in the RFP.

 

When vendors bid on different combinations of services and products, their proposals are not comparable and it is difficult for the state to evaluate cost effectiveness. The lowest bid may have to be accepted even when essential activities are missing or incomplete. When all vendors use the same method, a fairer evaluation of their proposals is possible. States with precise knowledge of the test products and services they will need are more likely to benefit from option (a), while states with less precise knowledge, or whose plans may change, are more likely to benefit from option (b).

 

Clearly, this document will apply differently in each of these different scenarios. It is hoped, however, that the model is sufficiently clear and well-defined that a state using a different arrangement would be able to determine the necessary adjustments in those sections that require adjustment. Even in cases where the relationship or tasks in a given state appear to fit perfectly with the model described here, this document is only intended to provide a framework to ensure that relevant issues are addressed. Important issues need to be resolved in a way that is consistent with a state's political process.  

 

 

Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate and Infopreneur having written many eBooks, articles and special reports.

 

Source: https://ebookschoice.com/state-reforms-to-develop-k-12-academic-standards/

Tuesday, March 12, 2024

Establishing Trust Between School Teachers and University Faculty - ebookschoice.com

The professional development school initiatives show the greatest promise in school reform due to collaborative efforts in teacher preparation. Educators in both public schools and in universities must work together in the preparation of teachers who are culturally, socially and instructionally responsive to student diversity. This lofty preparation aim begins with selecting the most promising teacher candidates for admittance into the program. The author describes an admissions procedure that has proven to be not only efficient and effective, but reflects the collaborative values of the program.

 

For over a decade, advocates of educational reform have supported professional development schools (PDSs) as a way for school and university partners to promote simultaneous renewal of both institutions. PDS aims are now commonplace: (a) provide exemplary education for preservice teachers, (b) support continuing professional development of experienced teachers, (c) engage in the renewal of curriculum and instruction, and (d) involve schools and universities in collaborative research.

 

Essential to these aims is the collaborative process. Establishing trust, recognizing cultural differences, and breaking down perceived role boundaries between school teachers and university faculty are key if partnerships are to be anything more than traditional in nature. University instructors, including teacher educators, are entering into cooperative working ventures with more frequency than ever before. Critical to the successful attainment of any partnership project are the people involved and a common commitment to program quality and coherence. In the ongoing process of developing, nurturing, and maintaining partnerships, one can expect to confront both predictable and unforeseen obstacles. Sharing information on program structures and systems will help advance the development of university and K-12 partnerships. The purpose of this article is twofold: (a) to describe and (b) to analyze an admissions procedure that reflects the values of the program and efficiently and effectively promotes the involvement of K-12 personnel in what is traditionally a university decision. To this end, we briefly discuss the history of this partnership and the key values that drive our work. Next, we elaborate on the admissions process and how it reflects those values in linking the university and schools. In taking stock of where we have made progress and where we have not, we examine the perceptions of major stakeholders in this process. We conclude with a discussion of recommendations to others considering similar efforts.

 

Description of Partnership

 

While the School of Education has a long history of establishing partnerships with schools and school districts that have benefited both parties, the teacher preparation program was, for the most part, not involved in these partnerships. The program was traditional in nature. Just a few School of Education faculty members, many of them adjunct or honorarium faculty, supervised all of the field experiences in schools across the state. Faculty taught methods classes in their field and rarely knew what was being covered in other classes. Students and faculty felt isolated, and neither group was satisfied with the skill level of the graduating students.

 

During the last school year, a planning committee of 29 School of Education faculty, public school personnel from a number of districts, and current students in the School of Education joined forces to redesign the Initial Teacher Education (ITE) program. The goals of this committee were to develop a strong program based on research, involve more of the School of Education faculty, and provide more meaningful experiences for our students. One of the major recommendations of this planning committee was the establishment of professional development schools. The program is graduate level and leads to a state teaching license and, after at least one year of teaching, a master's degree. During the last school year, the program courses and procedures were further developed and the first cohort of students was admitted.

 

Currently, 17 elementary and secondary schools in five metropolitan school districts are collaborating with the School of Education to engage in simultaneous renewal of the schools and the School of Education. The roles and responsibilities of those in the partnership are shaped by the four functions of a partner school: (a) teacher preparation, (b) professional development, (c) renewal of curriculum and instruction, and (d) inquiry/research. Partner school faculty and School of Education faculty collaborate to accomplish these functions, which are viewed as important in positively supporting student learning and well-being.

 

Prospective graduate students select an area of emphasis from an array of "leadership areas" within the Initial Teacher Education program. Teacher candidates (TCs) in the ITE program are aligned with one of seven possible areas of expertise: Bilingual/ESL, Inclusionary Practices, Information and Learning Technologies, Literacy, Math, Science and Social Studies, Teaching for Mental Health, or Young Child. All of these leadership areas cover K-12 except for Young Child.

 

Program Values

 

The planning committee developed a set of teaching responsibilities, which continue to be developed and refined as the program evolves. These responsibilities include the knowledge, beliefs, and practical skills we believe are essential to becoming a teacher.

 

In essence, teachers must understand and be able to learn about the subjects they teach; be able to use appropriate teaching and learning strategies; be supportive of students in their attempts at learning and growing; behave professionally and continue to grow as educators; and take on a leadership role in their school. The ITE program also stresses the importance of collaboration and reflection in attaining these goals.

Description of Admissions Process

 

The planning committee decided that a paper screening alone would not be sufficient for selecting students for the ITE program. An interview was called for, but conducting individual interviews with 183 applicants was beyond the resources of the program. A group process was used for the first cohort of applicants: all the applicants were guided through a series of activities in a single session while faculty members observed their interactions. Decisions to admit were then made by leadership area. This proved to be too large a group to be either effective or efficient. Beginning with the second group of applicants, interviews took place by leadership area. This process is described below.

 

Paper Screening

 

Once a year, applications are accepted for the ITE program. Applicants rank their first three choices in leadership areas. Each leadership area forms a committee consisting of leadership area professors (LAPs) and partner school teachers to review the applications submitted to the applicant's first choice of leadership area.

 

The criteria include grade point averages (GPA), standardized test scores, coursework in the teaching field the applicant wishes to pursue, letters of recommendation, verification of 30 hours working with school-age children and a goals statement. While all of these data are considered, the goals statement often carries the most weight. A tale of growth and change from a wild freshman 20 years ago into a responsible adult who is willing to turn his/her life upside down in order to become a teacher can make up for a very low undergraduate GPA, and the desire to have summers off can make a 4.0 GPA irrelevant.

 

The task of the committee members is to narrow the list down to close to the maximum number of students they can accept. A decision is made to either interview, reject immediately, or pass the application on to the next leadership area. Applicants are usually rejected without an interview because of incomplete documentation, lack of preparation, or other indicators that the applicant would be unlikely to succeed in the program. Often one leadership area has more qualified applicants than it can afford to interview. Applicants who are the best fit for the leadership area and have the highest qualifications are kept, and the others are passed on to their next choice of leadership area. The whole paper screening takes place in one long afternoon, over lunch, with all leadership area committee members in one large room. Each leadership area is limited in the number of students it may admit, so the decisions can be difficult.

 

Interview

 

Faculty members from across the eight different leadership areas have frequently decided to join together to conduct the interview process within various partner schools. These decisions are generally made based on the organization structure of divisions and program areas within the School of Education. They may also be based on the partner schools where the students will be placed. Each leadership area conducts its interviews in slightly different ways, but all contain three common elements.

 

-    Applicants read and react to short articles. These articles tend to be controversial, related to the leadership area, education in general, sensitivity toward students with diverse backgrounds, and are no more than two pages long.

-    Applicants plan and teach a short lesson. The topic may be chosen from a list of suggestions that range from whimsical (how to eat spaghetti without getting splattered) to practical (how to pack a suitcase) to school skills (how to find a word in the dictionary), or they may choose their own topic.

-    Applicants write a short essay. These topics vary, but include a reaction to the article they read, a reaction to the group interview process, or a topic specific to the leadership area they are applying to.

 

Most of the leadership areas arrange for the applicants to complete the article and teaching activities using a jigsaw format. The applicants are placed in groups for both activities. They read an article and discuss it or plan to teach a lesson in the first group. Then each group member goes to a second group and either explains the article or teaches the lesson to the new group of applicants.

 

During this time, partner school teachers, administrators, and sometimes ITE students or recent graduates circulate among the groups of applicants, evaluating their performance based on the program values described above. Each observer is assigned a group of applicants to follow; observers may also follow other groups once they feel they have enough information to make a decision. A rubric is used for each activity. These rubrics allow the reviewer to indicate the degree to which the applicants collaborate, cooperate, organize tasks, and plan. Some reviewers use the scores, and others use the rubrics to categorize their notes. The rubrics are intended to be used as guides for later discussions, not to obtain hard and fast scores. In addition, each reviewer is alert to behaviors indicating that an applicant would have challenges collaborating with adults or working with the K-12 students we serve.

 

Selection

 

At the end of the session, the evaluators review all of the applicants with the help of the pictures taken at the beginning of the session. Describing people from memory is surprisingly difficult, and the pictures help avoid errors in identification. Each applicant is reviewed, with those who observed the applicant sharing observations and ratings.

 

The applicants usually are sorted fairly quickly into three categories: (a) those who are clearly outstanding, (b) those who clearly do not belong in our program, and (c) those we have questions about. Most of our time is spent trying to decide why an applicant in the third category gave cause for concern and whether the concern is strong enough for the student to be denied admittance.

 

Once all the applicants are placed into one of the first two categories and ranked, the highest ones are admitted until the leadership area is filled. This is accomplished during the two or so hours following the interview. Those who we feel would be successful in the program, but were ranked too low to make it into their choice of leadership area, are passed on to leadership areas which have not reached their capacity. This process continues for a few weeks, until either all the leadership areas are filled or all the successful applicants are placed. Approximately 81% of all applicants participating in this process are finally admitted to the ITE program.
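The overflow placement described above, where each leadership area admits its highest-ranked applicants up to capacity and passes the rest to their next-choice area, can be sketched as a simple greedy allocation. This is an illustrative model only; the names, rankings, and capacities below are hypothetical, and the real process involves weeks of committee judgment rather than a single score.

```python
# Hypothetical sketch of the overflow placement described above.
# Each applicant is (name, rank_score, ordered list of leadership-area choices).
# Areas admit their strongest applicants up to capacity; overflow applicants
# are passed on to their next-choice area.
def place_applicants(applicants, capacities):
    admitted = {area: [] for area in capacities}
    # Consider stronger applicants first, mirroring the ranking step.
    for name, score, choices in sorted(applicants, key=lambda a: -a[1]):
        for area in choices:
            if len(admitted[area]) < capacities[area]:
                admitted[area].append(name)
                break  # placed; stop trying further choices
    return admitted

# Illustrative data, not drawn from the program described in the article.
capacities = {"Literacy": 2, "Science": 1}
applicants = [
    ("A", 95, ["Literacy", "Science"]),
    ("B", 90, ["Literacy", "Science"]),
    ("C", 85, ["Literacy", "Science"]),
]
placements = place_applicants(applicants, capacities)
print(placements)  # {'Literacy': ['A', 'B'], 'Science': ['C']}
```

In this toy run, applicant C is successful but is ranked too low for the Literacy capacity, so C is passed on to Science, which still has room, paralleling how successful applicants flow to unfilled leadership areas.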

 

Evaluation Process

 

To gain a more thorough understanding of the benefits and drawbacks of this admissions process, and of how well it reflects the values of the program, we examined multiple data sources. These sources included surveys from current teacher candidates, individual interviews with at least one university professor from each of the leadership areas, and detailed notes taken at a debriefing meeting with the interviewers following a recent interview.

One hundred sixty-eight teacher candidates completed anonymous surveys about the interview process. Additionally, hour-long individual meetings with representatives from each of the leadership areas were held; we used seven questions to frame those discussions. The third source of feedback was obtained from members of a recent interview team. One week following a day-long interview, members of the interview team met to discuss the process. This particular interview team consisted of school and university faculty representing the three leadership areas of Inclusionary Practices, Information and Learning Technologies, and Teaching for Mental Health. Those discussions were framed around three topics: what worked, what didn't, and recommendations for change.

 

Data Analysis

 

Direct transcriptions from all respondents were collated by question and by specific stakeholder group. Initially, multiple read-throughs were conducted of all sources of feedback by each of the reviewers. Following this period of becoming thoroughly acquainted with the feedback content, the reviewers identified several themes present in the responses.

 

Results

 

The following discussion provides a look at the results of these feedback measures. We wanted to understand if this process was an efficient way to manage the large number of interviewees, if it was effective in selecting the most qualified candidates and if it served as a vehicle to demonstrate the values of trust, professionalism, and collaboration that the program strives for. In reviewing the feedback, specific issues related to each of these themes emerged. These included the attention to logistics to enhance efficiency, the perceptions of staff regarding quality candidates, and the illustration of program values.

 

Interview Process Logistics

 

Serious attention to a wide array of logistical issues is critical to the successful implementation of an interview process of this scope. Members of the interview team highlighted key logistical factors that contributed to the efficiency of this process, identifying advance planning as the driving factor.

 

As both teachers and leadership area professors pointed out, the matrix planning that assigned staff to students was extremely helpful. The use of scoring rubrics was consistently noted as a positive feature of the planning process; prior to the interviews, color-coded rubrics were prepared and ready for each reviewer. Several teachers noted, however, that they would have benefited from an advance copy of the rubrics, as well as the articles, to better acquaint themselves with the material before the actual interview.

 

Responses from the interview team also expressed strong support for continuing to hold the interviews in one of the partner schools. Securing a large space such as the school's gym, along with a classroom for debriefing purposes, was seen not only as very convenient but also as sending the message that teacher preparation would not occur solely at the university.

 

Representatives from each stakeholder group responded positively to issues regarding time efficiency.

 

Perceptions Regarding Candidate Qualities

 

Reflections from both partner school and university faculty indicate strong consensus regarding the value of this admissions process in selecting quality candidates: students who do well during the interview process clearly rise to the top. Collectively, we are selecting students who will indeed be the most promising new teachers.

 

Opportunities to meet with and observe the applicants in the interview activities were noted as an invaluable aspect of the application process. One leadership area professor made the point, "Doubts in the paper screening are red-flagged to be confirmed or rejected at the interview."

 

Another reflected, "The interview allows me to see a whole other dimension of the person. I get to glimpse how this person interacts with others and that ability to interact with others is at the core of teaching."

 

The involvement of multiple professionals during the interview process ensures that applicants are seen by more than one person and in more than one activity. One leadership area professor noted, "I strongly feel that reliability is increased due to our high inter-observer agreement. When we sit down to make the actual decisions, there appears to be consistently high agreement among all observers."

Illustration of Program Values

 

Feedback from all stakeholders reveals an awareness of how aspects of the admissions process contribute to establishing trust and breaking down the perceived role boundaries between school teachers and university faculty. One applicant reported, "From the article jigsaw activity, I could see that from the get-go, this program was valuing communication and collaboration." Another student stated, "The teaching activities taught me how important it is to work together. As I now look at the actual program I see that that skill is practiced everywhere."

 

As leadership area professors involved in the partner school efforts, the authors have seen firsthand how critical the trust-building process is to maintaining and nurturing partnerships. This value was clearly noted as both an essential element and a benefit of this application process. During the debriefing meeting, one site coordinator reported, "I can tell you that site staff felt as though they were very much part of the team." A leadership area professor summarized her experiences: "The thing that feels best about this process is that we are continuing to learn to trust each other especially regarding judgements about students. I really rely on those judgements, just as I truly believe everyone else around that table is trusting my judgements." Another leadership area professor noted, "Having been involved in the program since its inception, I see a tremendous development of trust. Earlier, the university folk tended to dominate, but not anymore. Now we are all equal."

 

Discussion and Recommendations

 

The application process described here is but one of The Initial Teacher Education Program's efforts to enhance school and university partnerships. From our analysis of the reflections and feedback gathered over this five-year process, we now understand more fully its impact on our partnership. We are reaffirmed in our view that this process is fairly efficient, with most of the actual work completed in two full days. There is strong consensus that the multiple measures staff use to select applicants contribute to the selection of the most qualified students. Finally, our analysis suggests that this admissions process serves as an important vehicle for fostering the values of the collaborative process.

 

We contend that other teacher education programs, as well as individual school buildings, can readily replicate this teacher selection process. For those considering similar efforts, we offer the following suggestions. This process must, of course, begin by establishing relationships with staff in partner schools so that they can be involved in the application process. Together, identify the key features of your program, then develop an interview system that matches those beliefs, values, and attributes. For us, that meant designing three specific activities for the interactive interview. The article jigsaw provided an opportunity to observe students' ability to synthesize, collaborate, and react to controversial ideas. The teaching activity allowed students to be creative, to demonstrate organizational skills under time constraints, and to collaborate. The writing activity tapped a student's basic writing skills in an impromptu format, allowing us to examine spontaneous writing in contrast to the more deliberate goals statement.

 

We further suggest that teams critically select and review all articles in the jigsaw activity to ensure current and controversial themes. It is also important that clear scoring rubrics be established, discussed, and modeled for item clarity and a shared understanding of the scales. As we have learned, it is imperative that all interviewers have a copy of the agenda, activity descriptions, articles, and rubrics beforehand. Lastly, we suggest that teams proactively plan ways to accommodate the differing learning styles of the applicants; for example, completing the writing activity in a large room with background noise may be difficult for some learners and could easily be accommodated. We encourage other teacher educators to expand upon these inquiries so that best practice models can be developed, documented, and replicated.

 

 

Jeff C. Palmer is a teacher, success coach, trainer, Certified Master of Web Copywriting, and founder of https://Ebookschoice.com. Jeff is a prolific writer, Senior Research Associate, and Infopreneur who has written many eBooks, articles, and special reports.

 

Source: https://ebookschoice.com/establishing-trust-between-school-teachers-and-university-faculty/