Recommendations for Hart Banach
Please contribute two recommendations to make to Dr. Banach.
1. The proportion of students who are “represented” by the funds of knowledge incorporated in a program may affect student success. These proportions should therefore be kept constant across programs. Alternatively, if enough programs are held, the proportions can be allowed to vary and treated as a covariate, telling us how this proportion affects student success (a sketch of such a covariate model follows below).
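A minimal sketch of that covariate model, assuming program-level data with a hypothetical numeric success score; the column names and values are illustrative, not from the study:

```python
# Hypothetical data: one row per program, with the proportion of
# students whose funds of knowledge the program incorporates.
import pandas as pd
import statsmodels.formula.api as smf

programs = pd.DataFrame({
    "success_score":    [72, 65, 80, 58, 77, 69],    # illustrative outcomes
    "prop_represented": [0.9, 0.5, 1.0, 0.3, 0.8, 0.6],
})

# Regress success on the proportion; the slope estimates how the
# "represented" proportion relates to student success.
model = smf.ols("success_score ~ prop_represented", data=programs).fit()
print(model.summary())
```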
2. A good measure of “student success” is enthusiasm about learning and school. A good way to quantify this is through student participation, namely the number of times a student “raises their hand.” This is similar to attendance data but has the benefits that (1) it is controlled by the student rather than influenced by the parents, (2) it can be collected over a shorter time frame, and (3) it has no practical upper limit.
1. It would be a good idea to clearly state the research question(s). From there, we can define the variable(s) we want to measure to answer those questions, which will in turn determine the statistical techniques we may want to use. The design and scope of the study can then be built around these specified goals, taking into account any constraints we may have. This was discussed in Tuesday's class, but it would be good to make everything as concrete as possible before moving forward.
2. It is very likely that some students will "drop out" of the program. Students who drop out of the study may be intrinsically different from those who choose to stay (e.g., students who drop out may be less likely to try in school than those who do not). The assumption that units drop out at random will therefore probably not hold in this study, and it would be wrong to base results only on the students who remained. Care should be taken to retain as many students as possible (see the baseline-comparison sketch below).
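A minimal sketch of one such check, assuming a baseline score exists for everyone; all numbers are simulated for illustration:

```python
# Compare dropouts with completers at baseline. A clear difference
# suggests the data are not missing completely at random.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_completers = rng.normal(70, 10, size=30)  # simulated scores
baseline_dropouts   = rng.normal(62, 10, size=10)

t, p = stats.ttest_ind(baseline_completers, baseline_dropouts,
                       equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```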
1. I don't think it's a good idea to hold several programs concurrently or spaced closely in time. Since the proportion of vulnerable students is (presumably) small, students may end up engaged in different activities at almost the same time, making it difficult to gauge which activity actually contributed to the outcomes we plan to measure.
2. Getting more schools involved seems to be a necessity. There might be school-specific effects that cannot be separated from the true experimental effect if only one school is considered. This still does not eliminate the teacher effect, though; how about treating it as a random effect (a sketch follows below)?
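A minimal sketch of such a model, with simulated data and illustrative variable names; with several schools, school could be added as a second grouping level:

```python
# Mixed-effects model: treatment as a fixed effect, teacher as a
# random intercept, separating teacher-to-teacher variation from
# the treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
teachers = ["t1", "t2", "t3", "t4", "t5", "t6"]
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),  # 0 = control, 1 = program
    "teacher": rng.choice(teachers, n),
})

# Simulated scores with a per-teacher offset plus noise.
teacher_fx = {t: rng.normal(0, 3) for t in teachers}
df["score"] = (60 + 5 * df["treatment"]
               + df["teacher"].map(teacher_fx) + rng.normal(0, 8, n))

model = smf.mixedlm("score ~ treatment", df, groups=df["teacher"]).fit()
print(model.summary())
```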
1. We discussed several designs in class, but not all designs answer the same research question. It is important to clearly define your research question so that the right design can be selected. For instance, if you hope to show that curricula built around students’ funds of knowledge improve their ability to learn (measured by some quantitative factor), then a design in which all participants partake in a program seems fitting.
2. It may be useful to consider quantitative measures in which you could reasonably expect to see a difference between the treatment and control groups. The larger the expected difference, the greater the statistical power of your analysis. Since you mentioned that your sample size is relatively fixed at around 40 students, choosing a metric that will reflect a difference between the two groups is important (a power calculation sketch follows below).
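A minimal sketch of the corresponding power calculation, assuming a two-arm comparison with roughly 20 students per group and standard effect-size benchmarks:

```python
# Power of a two-sample t-test with 20 students per arm, for small,
# medium, and large standardized differences (Cohen's d).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.3, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs1=20, alpha=0.05, ratio=1.0)
    print(f"effect size {d}: power = {power:.2f}")
```

With only 20 per arm, even a large standardized difference yields modest power, which supports choosing the most sensitive metric available.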
1. Since the study lasts quite a long time, quite a few subjects might drop out. Please keep a detailed record of the reasons they leave, since this information plays an important role in the statistical analysis of missing data.
2. In a typical treatment-and-control study, the subjects themselves do not know which group they have been assigned to, as in the acupuncture study we discussed. In this study, the researcher seems to pay little attention to this point. In my opinion, the design might therefore introduce bias, that is, differences produced by factors other than the treatment itself.
1. Since the main objective of the study is to take the necessary steps toward policy implementation, I think it would be better to include several schools in different areas rather than just one (stratified sampling).
2. Subjects should be selected using a probability sampling procedure, not subjective sampling.
1. Carefully and clearly choose and define one or more variables that directly reflect the research purpose. For example, if the purpose is to see whether students benefit from a certain teaching approach, then variables that measure students' performance would be good candidates.
2. Once the subjects (students) are voluntarily enrolled in the study, it is better to randomize them into the control and treatment groups rather than grouping them according to some criterion. It is also important to blind them to their group label (a randomization sketch follows below).
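A minimal sketch of such a randomization, with a hypothetical roster and a fixed seed for reproducibility:

```python
# Randomly assign 40 enrolled students to two equal groups.
import random

students = [f"student_{i:02d}" for i in range(1, 41)]  # hypothetical roster
random.seed(2024)  # fixed seed so the assignment can be reproduced
random.shuffle(students)

treatment, control = students[:20], students[20:]
print(f"{len(treatment)} in treatment, {len(control)} in control")
```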
1. Before designing the study and collecting data, it is better to clearly state the purpose of the research. Even if there are several potential topics you could explore, pick one as the primary research question and treat the others as secondary.
2. It may be better to choose some candidate funds of knowledge yourself and have the children check the ones that apply, instead of asking them to come up with their own, which may cause response bias. This way, you can design the program or curriculum beforehand and define better quantitative measures of the effect.
1. Design.
Two programs: a curriculum based on "funds of knowledge" (Program T) vs. a traditional curriculum (Program C).
Two types of subjects: children with (or with more) funds of knowledge (Group F) vs. children with no (or fewer) funds of knowledge (Group !F).
Implementation:
Round 1: Programs T and C run at the same time. Randomize equal numbers of Group F and Group !F subjects into each program. Conduct entry and exit measures related to learning (m_i and m_e; let d = m_e - m_i), balanced across groups and programs.
Analysis:
- Potential to compare the effect of Program T vs. Program C (i.e., compare d_T vs. d_C) --> hypothesis: a program developed around funds of knowledge is better for children on this particular measure.
- Potential to compare the effect of having more funds of knowledge vs. none (or fewer) (i.e., d_F vs. d_!F) --> stretching this a bit: children with more funds of knowledge tend to do better on this measure.
Round 2: Switch the program for each individual and conduct the same entry and exit measures. The same comparisons can be made (a sketch of the paired analysis follows below).
IN ADDITION, ask the question "Which program increased your interest in learning?". Since the order of the programs is randomized for all participants, I believe this could be a valid question.
(Am I too ambitious? Sample size could be a problem, and so could budget.)
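A minimal sketch of the paired crossover comparison described above, with simulated gains; a fuller analysis would also check for period and carryover effects before pooling the two rounds:

```python
# Each child has a gain d = m_e - m_i under Program T and under
# Program C, so T vs. C can be tested within child with a paired test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 40
d_T = rng.normal(6, 4, n)  # simulated gains under Program T
d_C = rng.normal(4, 4, n)  # simulated gains under Program C

t, p = stats.ttest_rel(d_T, d_C)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```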
1. Be careful with survey questions measured on a "Strongly disagree, Disagree, ..., Strongly agree" scale: kids are very whimsical and might not respond truthfully, or might not know how to answer this kind of question correctly.
2. Although the effect may vary with baseline performance, it should still be valid to use a quantitative measure. We could alleviate the ceiling problem by constructing an unbounded measure instead of the typical test scores, which have upper bounds (e.g., the number of flowers planted in one minute).