Bias and Risk in Behavioral Polls and Studies – A Cautionary Tale for Public Policy


by James C. Sherlock

Here at BR, both the authors and commenters spend a great deal of time discussing the outcomes of behavioral polls and studies.

Taxes, mandates, and bans are behaviorally informed. As are most public policies.

But behavioral science carries risks and biases far more prevalent than those in the hard sciences.

As a citizenry, we generally understand that polls predicting future behavior can prove unreliable, because we have all watched political polling miss.

Most expect polls about how we feel about our lives to be imperfect, but not purposely so. Yet some polls are designed to support a specific political position.

We probably understand far less about the risks and biases in the behavioral studies that govern most public policy, because assessing them requires technical expertise that most of us, including most elected politicians and political observers, do not possess.

Which is a key reason such policies often go wrong.

Quality of Polling. Remember the red wave of 2022? I don’t either. And professional pollsters were generally trying to get it right. To predict future voting. Political pollsters know that both sides start with 47-48% of the voters in every election.

They are polling in order to understand the persuadable. And sort the wheat from the chaff in their poll results.

And, importantly, political pollsters are trying to get it right to preserve their reputations, and their incomes.

Questions and their design matter to outcomes of polls. So do various methods of trying to wring meaning out of them.

Sometimes, as in studies, the refs in observational poll design are players with a rooting interest in the outcome.

Six months ago I wrote an article exposing the purposeful and very public official corruption in 2020 of Virginia’s previously excellent, scientifically structured Authoritative School Climate Survey by people with political/dogmatic goals.

They destroyed the existing question base and shaped a new one in order to get results they wanted to support public policies that they had created.

Quality of Behavioral Studies. In 2013, the Proceedings of the National Academy of Sciences published a meta-analysis (a study of studies) that recommended caution in accepting behavioral studies, especially those authored in the United States.

That analysis, titled "US studies may overestimate effect sizes in softer research," urged caution:

We found that primary studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters.

Behavioral studies have lower methodological consensus and higher noise, which may make US researchers more likely to express an underlying propensity to report strong and significant findings.

Behavioral science-based studies were often assessed to be biased both by small sample sizes and by the confirmation bias of researchers, who tend to find what they start out looking for. As that study puts it, too many tend:

to deviate in the direction predicted by their experimental hypotheses.

Nice way of saying the authors cheated when studying the results of their own hypotheses.
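The small-sample problem described above can be illustrated with a short simulation of the statistical "winner's curse": when many underpowered studies are run and only those that clear a significance threshold get reported, the published effects systematically overstate the truth. The numbers below (true effect, sample size, threshold) are illustrative assumptions of mine, not figures from the PNAS analysis.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # assumed true standardized effect (small)
N_PER_GROUP = 20    # small samples, typical of behavioral studies
N_STUDIES = 2000    # many labs each running one small study

reported = []
for _ in range(N_STUDIES):
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    pooled_sd = statistics.pstdev(control + treated)
    d = diff / pooled_sd  # observed effect size for this study
    # "Publication filter": only effects large enough to look significant
    # get written up (roughly |d| > 0.64 for 20 per group, two-tailed .05)
    if abs(d) > 0.64:
        reported.append(d)

print("true effect:          ", TRUE_EFFECT)
print("mean published effect:", round(statistics.mean(reported), 2))
```

With these assumptions, the average published effect comes out several times larger than the true effect, even though no individual study was falsified; the filter alone does the damage.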

It was written by Daniele Fanelli and John P.A. Ioannidis, two renowned meta-research scholars. Both are now at Stanford.

They are also responsible for the famous report Meta-assessment of bias in science published in the same Proceedings in 2017.

That one was not limited to behavioral sciences.

If you remember the extensive discussions of the reproducibility crisis in hard and soft science studies that even made the popular press, that report was their primary source.

It concluded:

The social sciences, in particular, exhibited effects of equal or larger magnitude than the biological and the physical sciences for most of the biases and some of the risk factors.

Yet we cite behavioral studies all the time, and in the process give many of them far more credit in public policy and in debates than they deserve.

Public Policy. My personal focus in this blog is on Virginia education and public health policy, both subject primarily to behavioral analysis.

Education.  

I have pressed in this space for consideration, in the field of education, only of studies assessed by the Institute of Education Sciences What Works Clearinghouse to be both scientifically valid and supported by strong evidence.

We spent a lot of ink back and forth about the single major study on the effectiveness of Positive Behavioral Interventions and Supports (PBIS).

I used as reference the conclusions of the Institute of Education Sciences about that study, for the simple reason that it scientifically reviews the construct and evidence of studies, which I cannot do.

There are many other examples of public education policy changing based not on evidence, which is questionable or absent, but on politics.

Public health.

Admit it. You have been waiting for this discussion to turn to studies of masking and other physical interventions for airborne viruses as well as the COVID isolation recommended by the CDC and enforced by public policy.

Here is the latest multinational meta-analysis of physical interventions, published three weeks ago. It is the sixth version, going back to 2006. The conclusion that I find most interesting does not pick a side:

The high risk of bias in the trials, variation in outcome measurement, and relatively low adherence with the interventions during the studies hampers drawing firm conclusions.

People simply do not do, or do not do consistently or well, what they are told to do in the area of public health. Hardly shocking.

Yet firm conclusions were drawn in that meta-analysis about the efficacy of medical/surgical masks and N95/P2 respirators worn properly.

Which were by and large not the types of masks people wore. And even fewer wore them properly.

Public policy enforced masking during COVID anyway. Even on children, who were the least likely both to suffer ill effects from COVID and to wear masks properly.

And it kept them home. As directed by the teachers unions in direct contact with CDC.

But COVID was hardly the first pandemic.

The authors of Social isolation and its impact on child and adolescent development: a systematic review, a meta-analysis, screened 519 articles published worldwide between 1990 and 2000 on the effects of social isolation on child development.

Using prescreening to eliminate all but 83, and Agency for Healthcare Research and Quality (AHRQ) standards for the rest, the researchers found 12 that met high quality standards.

They showed the same results as COVID isolation.

So the results of COVID isolation were not just predictable but predicted.

Now comes CDC’s Understanding the Pandemic’s Impact on Children and Teens. (Understanding)

It claims to “describe the COVID-19 pandemic’s profound effect on the physical and mental well-being of children and teens” using data about pediatric emergency department visits.

The data should be solid. They are from the CDC’s National Syndromic Surveillance Program (NSSP).

But if you read Understanding, the impacts on pediatric health were less from the pandemic itself than from the preventive measures recommended by CDC.

For example, weekly visits among older children (5–11) and teens (12–17) increased for self-harm, drug poisoning, and psychosocial concerns during 2020, 2021, and 2022 when compared to 2019.

The other report shows that teenage girls may have experienced the largest overall increase in behavioral and psychosocial concerns. The proportion of ED visits for eating disorders doubled and tic disorders more than tripled in this population as well. Other studies have also noticed increases in tic-like symptoms among girls during the pandemic.

Note the use of “during the pandemic,” not “caused by the pandemic.”

Children and teens were largely at home and isolated for very long periods from their friends and extended families.

As recommended by the CDC.

Now they tell us about the mental health disaster. That was predicted before COVID.

They do not discuss its equally disastrous effects on education.

Bottom line. Both polling and behavioral studies are necessary, often headline-ready, and regularly flawed features of modern life.

Authors at BR, right and left, do the best we can to use them properly. But it is and will remain a crap shoot.

Some regular commenters disagree, often angrily and at length, from the other side of the culture wars.

But, for everyone, caveat emptor.

Updated Feb 18 at 1745 to insert reference to pre-COVID findings of the effects of social isolation on children.




Comments

13 responses to “Bias and Risk in Behavioral Polls and Studies – A Cautionary Tale for Public Policy”

  1. Dick Hall-Sizemore

    Of course, we should use or cite only reputable studies (I shy away from polls), preferably peer-reviewed. I commend you for preferring to use studies already cleared by a professional body.

    As has been the case with others, your criticism of the CDC and others regarding policies adopted in response are classic examples of Monday quarterbacking. Yes, studies are now showing the ill effects of isolation on children. But, those studies have the advantage of actual effects that have already occurred. Can you cite a study that the CDC could have relied upon to make policy for a new virus which was making people very ill and even killing a lot of people, but whose mode and rate of transmission was not well understood and how it affected the body was not well understood?

    1. James C. Sherlock

      See https://doi.org/10.1590/1984-0462/2022/40/2020385

      COVID was hardly the first pandemic.

      The authors of the linked meta analysis screened 519 articles published worldwide between 1990 and 2000 on the effects of social isolation on child development.

      Using prescreening to eliminate all but 83, and Agency for Healthcare Research and Quality (AHRQ) standards for the rest, the researchers found 12 that met high quality standards.

      They showed the same results as COVID isolation. So yes, the results were not only predictable but also predicted.

      1. James C. Sherlock

        Read Lessons From Europe, Where Cases Are Rising But Schools Are Open from November 2020. It was on NPR for Pete’s sake.
        https://www.npr.org/2020/11/13/934153674/lessons-from-europe-where-cases-are-rising-but-schools-are-open

        1. James C. Sherlock

          “The CDC is not the only agency in the world that deals with infectious disease and pandemics and it does not dictate to other countries what to do. They have their own agencies to do that and many recommended similar restrictions and closures in their respective countries.”

          And many did not. Face it, Larry. CDC bet on the wrong horse. And the teachers unions in the U.S. drove bad policy.

      2. James C. Sherlock

        Read them. Here is the link to the table of high quality reports. https://www.scielo.br/j/rpp/a/ZjJsQRsTFNYrs7fJKZSqgsv/?lang=en#ModalTablet1

      3. James C. Sherlock

        The U.S. is not unique. That is the point.

        CDC, whose job it was to prepare for pandemics, could have done the same research done in Brazil.

        If it had, perhaps its recommendations, at the time parroting and in coordination with those of the American Federation of Teachers, would have been different or at least told American parents of the social isolation effects.

      4. James McCarthy

        Is it your position that the CDC failed to prepare the US for the Covid pandemic?

  2. William O’Keefe

    To paraphrase the economist who said all models are wrong but some are useful: all studies are wrong (biased) but some are useful. Unfortunately, many public policy writers have agendas that influence their writings and, as a result, won’t take the time to suppress their biases, because objectivity interferes with promoting their agendas.
    There are three books worth reading as a way to sharpen the focus on critical thinking and understanding. All take hard work. The books are How To Lie With Statistics by Darrell Huff; The Image: A Guide to Pseudo-Events in America; and Thinking, Fast and Slow by Daniel Kahneman.
    So yes, caveat emptor!

    1. sherlockj

      I don’t agree that all studies are biased. That is overstated.

  3. LarrytheG

    Polls are risky business in general IMO. Studies are risky business if they are not peer-reviewed, which requires them to fully disclose their methodology.

    But more than that, no one study, and certainly no one poll should be the basis of a whole lot. There needs to be more than one and it’s the concurrence between them that leads to something that might be informing.

    The organization and its general operating principles matter. An organization like Pew is different from, say, The Federalist Society or Brookings, etc.

    In general – studies , models – can look like this:

    https://uploads.disquscdn.com/images/606f56fc1facd11b2b226a6f16e592e2435b5eaf14b600bd263ec184b317be34.jpg

    You can rightly say that none are correct and all are wrong.

    Yet… the truth is that we do rely on them and they do provide us with exceptionally valuable information that saves lives and money.

    These models over time, do get better also. As do polls and studies as they “learn” from each prior and add to the next.


  5. walter smith

    How to account for the mass psychosis we live in?
    Masks don’t work. We knew that prior to Covidiocy and had to comply (and still do)…why?
    Natural immunity is at least as good as the “vax” (it is better, but can’t admit that yet)
    Lockdowns were counterproductive
    When you have to change the definition of vaccine to call the Covid vaccine a vaccine, it is not a vaccine
    Maybe violating the Nuremberg Code was a bad idea…maybe crossing that line was bad…
    Damar Hamlin collapses on live TV and “experts” immediately pounce to control the narrative that it was commotio cordis…but it wasn’t. But ignore all the times the experts have been wrong and trust the experts because they say they are the experts!
    Oh, and men cannot get pregnant and there are only two sexes
    Wake up people. Use your brains.

  6. Nancy Naive

    There is no such thing as an unbiased observer. The act of observing either affects the subject, or the subject affects the observer.
