
Birth Attitudes Survey


Please post three questions

RollinBrant (talk)23:26, 22 March 2013

1. Is the average taken for each bullet point or for each "attitude/belief" category? The latter seems weird since the questions in a category are quite different and it doesn't make sense to average their responses.

2. Is "correlation structure of the average scores" referring to the correlation among the 15 categories?

3. In what way do you judge validity? Is there a "gold standard" for a valid survey for attitudes?
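One reading of "correlation structure of the average scores" is the 15 x 15 correlation matrix of each respondent's per-category averages. A minimal sketch of that computation, with made-up data (the shapes and names `n_respondents`, `category_means` are assumptions, not from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_categories = 100, 15

# Each row: one respondent's average score in each of the 15 categories
category_means = rng.uniform(1, 7, size=(n_respondents, n_categories))

# 15 x 15 correlation matrix across categories (columns)
corr = np.corrcoef(category_means, rowvar=False)
print(corr.shape)  # (15, 15)
```

Inspecting which categories correlate highly could then suggest underlying factors or groupings, as the researchers mention.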

DavidLee (talk)00:08, 23 March 2013
 

1. Should "no opinion" be included as a score of 4, or should it be excluded when calculating averages? The former results in less variation.

2. Is it reasonable to assume the same step size between different responses? E.g., people may be reluctant to choose "strongly agree", which may imply that "strongly agree" should be assigned a higher score.

3. Aren't there too many response categories? E.g., do people really distinguish between "mildly agree" and "agree"?
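The first point above can be illustrated with a toy example: coding "no opinion" as the midpoint 4 pulls responses toward the centre and shrinks the variance, relative to dropping those responses. The scores below are invented for illustration only:

```python
import numpy as np

scores = np.array([1, 7, 4, 4, 4, 2, 6])  # 4 marks "no opinion"

included = scores                # neutral responses kept as score 4
excluded = scores[scores != 4]   # neutral responses dropped

print(included.var(ddof=1))  # ≈ 4.33
print(excluded.var(ddof=1))  # ≈ 8.67
```

Including the neutral responses roughly halves the sample variance here, which could matter for any downstream comparison of provider groups.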

WooyongLee (talk)16:22, 23 March 2013
 

1. What is a "provider group"?

2. What do you mean by "establish the validity of the survey"?

3. Are the questions within each set equally important? If not, weighted averages should be used.

VincenzoCoia (talk)04:22, 25 March 2013
 

1. If the researchers wish to take an average over each set of questions, the questions in that set should be measuring the same thing. For instance, in the maternal choices section, some of the statements seem to represent contradictory views.

2. Are six questions enough to measure the attitudes on one topic? Increasing the number of questions that measure the same thing might improve the validity of the survey.

3. Since the data are correlated (multiple responses from one subject), a linear mixed effects model may be a good way to compare provider groups. In this case, the responses would be the raw scores (not averages), with question area and provider group as predictors.
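A sketch of the mixed-model idea above, using a random intercept per respondent to account for within-subject correlation. All variable names (`score`, `area`, `provider_group`, `subject_id`) and the simulated data are assumptions for illustration, not the survey's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_subjects, n_questions = 60, 6

df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n_subjects), n_questions),
    "provider_group": np.repeat(
        rng.choice(["midwife", "obstetrician", "family_md"], n_subjects),
        n_questions,
    ),
    "area": np.tile([f"q{i}" for i in range(n_questions)], n_subjects),
})
# Simulated 7-point Likert scores, one per subject-question pair
df["score"] = rng.integers(1, 8, len(df)).astype(float)

# Random intercept for each subject models the correlation among that
# subject's multiple responses; provider group and area are fixed effects
model = smf.mixedlm(
    "score ~ C(provider_group) + C(area)", df, groups=df["subject_id"]
)
result = model.fit()
print(result.summary())
```

The fixed-effect coefficients for `provider_group` would then estimate group differences while respecting the repeated-measures structure.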

ShannonErdelyi (talk)17:33, 25 March 2013
 
1. How do the responses align with the attitudes/beliefs of a set of questions? For example, if the average score for a group of providers is 7 (strongly agree) for the "maternal choices" section, what kind of attitude does this imply?

2. Can you clarify what you mean by "correlation structure of the average scores to possibly identify underlying factors or groupings"? What correlation are you interested in (correlation of average scores for a particular question, average scores for a particular section, etc.)?

3. What does it mean in this case to "validate the survey as a measure of attitudes"?
JonathanBaik (talk)23:05, 25 March 2013
 

1. Is the degree of internal consistency of the items in each of the 15 sets (e.g. the maternal choices set) large enough?

2. What are the Cronbach's alpha values for each set of questions?

3. Is it good to include highly similar items? For example, "For a woman, having a naturally managed birth is a more empowering experience than delivering by cesarean section." and "Women who deliver their baby by cesarean section miss an important life experience."
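Cronbach's alpha, mentioned above, is straightforward to compute from a respondents-by-questions score matrix. A minimal sketch using only NumPy, with made-up scores for a hypothetical four-question set:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of numeric scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering a four-question set on a 7-point scale
scores = np.array([
    [6, 7, 6, 5],
    [2, 1, 2, 3],
    [5, 5, 6, 5],
    [3, 2, 2, 2],
    [7, 6, 7, 6],
])
print(round(cronbach_alpha(scores), 3))  # → 0.974
```

An alpha this high would indicate strong internal consistency within the set; values are conventionally compared against thresholds such as 0.7, though what counts as "large enough" is a judgment call.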

GuohaiZhou (talk)04:33, 26 March 2013
 

1. If the researchers wish to use the survey as a measure of "attitude" via some scoring system derived from it, what kind of attitude does a perfect score correspond to?

2. How can two people's responses to the same question be compared fairly when what each person internally defines as "strongly agree" varies across individuals?

3. What does it mean to 'validate' a survey, and how do you do that?

VivianMeng (talk)05:07, 26 March 2013
 

1. When calculating the average for each set of questions, is it reasonable to give each question in the set the same weight? I.e., some questions may better reflect the attitude the set is meant to measure.

2. From the attached article, perhaps Cronbach's alpha could be used to test whether the questions within each set are related to each other. What, then, is an appropriate sample size for estimating this coefficient?

3. What is the cut-off score that distinguishes the overall attitude (disagree, agree, perhaps no opinion) for each set of questions? Would a chi-squared test be helpful here?

YumianHu (talk)05:53, 26 March 2013

1. They say they intend to compare provider groups. Do they compare these groups in each of the 15 areas, or just take an average over the 15 areas? Here, taking a simple average might lead to a misleading conclusion.

2. At the end, they talk about the validity of their survey. I'm quite confused about the concept of "validity" here. Does it mean they hope to show that this survey indeed reflects respondents' real attitudes towards surgical delivery?

3. Are the participants able to distinguish the seven options listed in the questionnaire? When I fill in such questionnaires, I am often unsure of the difference between "disagree" and "strongly disagree".

PeijunSang (talk)06:30, 26 March 2013

1. By using the average of these scores (i.e. agree, disagree, and so on), would it be realistic to validate this survey?

2. How could they identify groupings by using the correlation structure of these categories' average scores? I think it is sometimes difficult to distinguish among "agree", "strongly agree", and "mildly agree".

3. Besides these two sets of questions, is it possible to add some quantitative measurements? I think quantitative measurements would help in making a concrete decision about the safety of childbirth.

MdMahsin (talk)07:24, 26 March 2013
 
