What We’re Talking About, When We Talk About Research in Schools

Posted by Lisa Grable on February 25, 2015 at 7:21 p.m.

Tags: research, evaluation, experimental design, online surveys

//academicexperts.org/conf/site/2015/papers/45460/

This paper describes the continued challenges associated with school-based research on technology and teacher training programs, more than 10 years after the strong push for scientifically based research in education began at the U.S. federal level. The authors’ experiences with implementing experimental designs, using high-stakes test scores and online surveys as measures of program success, and negotiating data collection with wary teachers are discussed.


  • Over-arching questions to consider for this presentation:

    1. How do evaluators, working within limited budgets and often hostile conditions, capture the effects of an intervention in ways that are authentic, in a world obsessed with test scores?
    2. How do we communicate our findings in ways that acknowledge the complexities of school-based research, without undermining our credibility as researchers in the eyes of policymakers?

    Posted

    • You know, Lisa, I was thinking about our work, and I think that pushing back starts with making our voices heard. Conversations about educational research, accountability, testing, and teacher quality come up ALL the time in all kinds of venues, and I think a good first step (beyond just constantly emailing politicians and trying to produce good evaluation reports) is to chime in where we can and try to be a positive force in educating the general public. As John Wesley said, “Do all the good you can. By all the means you can. In all the ways you can. In all the places you can. At all the times you can. To all the people you can. As long as ever you can.”

      Posted in reply to Lisa Grable

      • Back when NCLB was in the works, I was able to convene folks for brown bags where we discussed what the proposals for the law would mean for school systems (no more Eisenhower funds) and for university work with teachers and schools. People thought I was crazy. But we were able to prepare state ed leaders for the RFPs they would have to craft. It can be hard to be proactive. All this work is part of an organic, dynamic system, and researchers have an important voice in seeing and reporting the evidence.

        Posted in reply to Amy Overbay

        • I think people still think we're crazy! :^D

          But I think that it is important to have the meta-conversations where we can talk about policy and what it means for us, and not just react to it or act as if it were unchangeable and inevitable.

          Posted in reply to Lisa Grable

  • We'd like some input from you, dear readers. We'll post some questions here; please hit Reply to This Post below if you'd like to join in!

    1. What are the on-the-ground problems with setting up control groups for a teacher or student treatment? Are there good solutions for randomization?
    2. In your location, how do you obtain access to large databases of teacher and student data? Are there costs? What are the issues with IRB approval?
    3. How have you tried to look at differences between groups? Have you tried rotating a group from no treatment into the treatment at a later time? (A minimal sketch of that kind of waitlist rotation follows this list.)
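
    Here's a minimal sketch, in Python, of what a randomized assignment with a delayed-treatment (waitlist) rotation could look like; the teacher IDs, group sizes, and two-semester schedule are all invented for illustration, not taken from one of our studies.

    ```python
    import random

    # Hypothetical roster of participating teachers (IDs invented for illustration).
    teachers = [f"T{i:03d}" for i in range(1, 21)]

    random.seed(2015)        # fixed seed so the assignment can be reproduced and audited
    random.shuffle(teachers)

    half = len(teachers) // 2
    immediate = teachers[:half]   # receives the intervention in semester 1
    waitlist = teachers[half:]    # serves as the comparison group first, then rotates in

    schedule = {
        "semester_1": {"treatment": immediate, "comparison": waitlist},
        "semester_2": {"treatment": waitlist, "comparison": []},  # waitlist rotates in
    }

    for term, groups in schedule.items():
        print(term, {name: len(ids) for name, ids in groups.items()})
    ```

    In practice the unit of randomization (teacher, classroom, or school) and any blocking by school would have to be negotiated with the district, but a waitlist rotation at least lets every group receive the program eventually.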

    Posted

    1. What sources have you used to find validated instruments for teachers and for students of various ages? How do you measure a technology intervention along with content areas such as science, math, or reading?
    2. How does budgetary scope influence your studies? For research, do you have to have grant funding? Are graduate students a limitless resource? For evaluation, how do you manage the scope of work with the person-hours and funds available?
    3. Have you tried to use standardized test results as a measure of the effects of a treatment or intervention? What issues have you encountered in finding information about the content of the tests? (A toy effect-size calculation follows this list.)
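
    As a toy illustration of the last question, a standardized mean difference (Cohen's d) between a treatment and a comparison group can be computed from test scores alone; the scores below are invented for the sketch.

    ```python
    import statistics

    # Invented end-of-grade scale scores, for illustration only.
    treatment = [352, 347, 360, 355, 349, 358, 351, 356]
    comparison = [348, 345, 350, 347, 352, 346, 349, 344]

    def cohens_d(a, b):
        """Standardized mean difference using the pooled sample standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * statistics.variance(a) +
                      (nb - 1) * statistics.variance(b)) / (na + nb - 2)
        return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

    print(f"Cohen's d = {cohens_d(treatment, comparison):.2f}")
    ```

    The arithmetic is the easy part; the harder issues are whether the test actually covers what the intervention taught and whether gain scores or growth models are more appropriate.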

    Posted

    1. Is it still important to observe classrooms in addition to administering online surveys? What do researchers and evaluators gain from on-site visits?
    2. Have you developed and validated a survey instrument for teachers or children? What is the time investment involved? How many participants in total were needed for the validation? (See the internal-consistency sketch after this list.)
    3. What is the youngest age of child for which you've successfully been able to use online instruments? What resources were needed?
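
    One small piece of validating a survey instrument is an internal-consistency check such as Cronbach's alpha; the sketch below uses invented pilot responses (6 respondents, 4 Likert items) purely for illustration.

    ```python
    def cronbach_alpha(responses):
        """responses: one list of item scores per respondent."""
        k = len(responses[0])                               # number of items

        def var(xs):                                        # sample variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        item_vars = [var([r[i] for r in responses]) for i in range(k)]
        total_var = var([sum(r) for r in responses])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Invented pilot data: 6 respondents x 4 Likert items.
    pilot = [
        [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
        [2, 3, 2, 3], [4, 4, 5, 4], [3, 2, 3, 3],
    ]
    print(f"alpha = {cronbach_alpha(pilot):.2f}")
    ```

    Alpha by itself is not validation: content review by teachers, read-alouds with younger students, and factor analysis on a larger sample all add to the time and participant counts the question asks about.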

    Posted

    1. How do you decide who applies for IRB approval and to which committee? University, school system, or outside group? Do all your school systems have IRB committees?
    2. What issues have you observed with data management and security? (A small de-identification sketch follows this list.)
    3. Are teachers' ratings or salaries tied to children's test performance in your area?
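
    On the data-security side, one common precaution is to replace student identifiers with salted hashes before analysis files leave the district. The field names and the STUDY_SALT environment variable below are assumptions for the sketch, not anyone's actual practice.

    ```python
    import hashlib
    import os

    # Illustration only: pseudonymize IDs before sharing analysis files;
    # the salt (and any re-identification key) stays with the district.
    salt = os.environ.get("STUDY_SALT", "change-me")   # hypothetical environment variable

    def pseudonymize(student_id: str) -> str:
        return hashlib.sha256((salt + student_id).encode("utf-8")).hexdigest()[:12]

    records = [
        {"student_id": "123456", "grade": 5, "score": 352},  # invented rows
        {"student_id": "654321", "grade": 5, "score": 347},
    ]
    deidentified = [{**r, "student_id": pseudonymize(r["student_id"])} for r in records]
    print(deidentified)
    ```

    Which IRB reviews that workflow, and where the key linking real IDs to pseudonyms lives, is usually the more contentious question.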

    Posted
