Course:CPSC522/November Assignment/2023/Patient-first

Rejecting Homogeneous Solutions, Avoiding Medical Labels, and Relational Design

Author: Celeste/Clair

This page explains why I, the author, reject the premise of the assignment. I have made an intentional choice to write this page in the first person; I explain why later.

Abstract

I argue that in trying to ascertain the "goals" and "preferences" of patients in the current "medical system", we will reconstruct systems that result in marginalization and medical discrimination. I am advocating against the idea that we should "optimize" preference and goal solicitation, especially within our current world state.

I begin by explaining that this problem requires long-term, large-scale change in society in the occupied areas of Turtle Island, colonially known as (c.k.a) Canada. I share why we need a relational worldview in medicine and healing to overcome the problem and offer suggestions of groups to work with to decolonize medicine.

Then, I advocate against using prior databases for making medical decisions due to their ingrained bias, showing a need to generate new "labelless" data. To do so, I explore the idea of labels and then use that understanding to show how they have harmed marginalized communities in a medical context.

Next, I explore some ways we could approach building the foundation of a labelless-based AI system and afterwards explore challenges that would need to be understood in undertaking the work.

I end by offering further dilemmas and giving our team options for how to go about starting this work, should we choose to pursue it.

Part 1: The Fallacy of Optimizing Ourselves Out of Oppression

The idea of ascertaining the goals and preferences of patients by optimization is one that stems from colonial and capitalist thinking. Optimization is a relatively new word, first recorded in the 19th century. Its modern use is also associated with the idea of "efficiency." Both terms come from the time of the Industrious Revolution (a broader idea beyond the Industrial Revolution), starting in the 17th century.[1] Self-development and self-worth are often tied to goals, and people without goals are often derogatorily thought of as lazy or unambitious.

In Laziness Does Not Exist, Devon Price argues that people do not need to earn the right to exist, and that inefficiency is not a terrible thing.[2] In the video essay You don't have to set goals[3], Linnea Ritland summarizes the ideas behind conscious and unconscious goals and adds "if you don't set conscious goals for yourself that doesn't mean you won't achieve anything in the future," and "by making your subconscious goals conscious you can spend your time doing stuff that aligns with your values."

The effort to create a "healthy" population is often prescribed in a very goal-oriented manner in Western medicine, where illness becomes a moral failing and justifies a multitude of discriminations, most notoriously in the field of phrenology. Phrenology is known for pathologizing queer, autistic, d/Deaf, and b/Blind[4] people as mentally ill, disabled, and in need of curing.[5] In the 1980s, Mike Oliver coined the idea of the social model of disability as "a political status, one that is created by the systems that surround us, not our minds and bodies."[6] In doing so, he turned the lens and failings back onto the system itself.

If we want to create anti-colonial and anti-oppressive change, we will need to accept that this problem is larger than one that our team can take on alone.[7] We may never see results in our lifetime. Even our thinking of what a "medical system" is or can be needs to be broken down. We can name and grow our ideas away from hegemonic (or dominant) understandings that cause medical marginalization. This sort of change is tied to the decolonization in c.k.a Canada. This will be a generational effort that requires the cooperation and/or the removal of ongoing colonial authorities.

Relationships are important in Indigenous methodologies[21] and have a broader meaning than the colonial definition. Taking on a relational worldview requires "explicit reference to personal preparations (Montgomery, 2012) involving motivations, purpose, inward knowing, observation, and the variety of ways that the researcher can relate her own process undertaken in the research."[21] The shift to a relational worldview would bring dramatic change to the current system. The idea of optimizing patient goals and preferences may seem alien in that new context.

To affect this change, we should partner with governmental and non-governmental groups. There are many already working towards these changes. A non-exhaustive list includes the First Nations Health Authority[8], Disability Alliance BC[9], MAiD to MAD[10], Disability Filibuster[11], Inclusion Alberta[12], and the Council of Canadians with Disability.[13]

Part 2: The Ubiquity of Medical Discrimination

It is widely recognized that AI is embedded with cultural bias.[14] Institutions such as the Distributed AI Research Institute (DAIR)[15] work on the topic of the harm AI has caused. They centre people with marginalized identities as leaders in their AI-based research projects. In a video from DAIR[16], Alex Hanna says that "AI might be a solution to a pretty narrow set of problems."

Hanna's idea is in opposition to today's zeitgeist. There is a belief that AI will solve everything so quickly that people will not be able to keep up.[17] Industries today are incentivized to want AI as a solution to their problems due to a depleted workforce, partially a result of the COVID-19 pandemic.[18] Worker wages lag behind inflation while industries have seen large profits.[19] The idea is that with AI, their workforce could be cut further to save money.

In this world state, AI is not positioned to be the most ethical solution to the problem at hand. Medicine is a field plagued by a racist, sexist, and queerphobic history. At its intersections with technology, even the most common and ubiquitous tests contain harmful medical bias. Spirometry, the most common lung function test, is performed with machines that rely on racially biased data.[20] The dataset used by the machines is not the newest available; newer datasets attempt to address this issue, but because this one is already widely used, it is considered "the dataset."

While talking with another UBC researcher, he shared that his work in spirometry research led him to learn that at Vancouver General Hospital (VGH), all participants' race is labelled as 'white,' regardless of race or ethnicity. This is an attempt to address an issue with the original dataset, which recorded only two options for race: white and Mexican. I share this example to illustrate two points. First, the existing data is not good enough, and this has had a domino effect on how medical practitioners use the machines to treat patients. Second, this case is grievous largely because it is so glaringly misguided, yet many issues like it are much subtler. These issues share a common thread, for which we must take a brief detour to a more abstract concept: labels.

Part 3: A Detour - AI Aren't Constructionists, Labels Construct AI

Data labelling is a fundamental idea behind AI. It is also a common way of understanding knowledge in much of Western science and culture. Constructionism is a subjectivist epistemology (a fancy word for how someone views what knowledge is) that believes meaning is "socially constructed" between people. From this point of view, the world around us is not objective. For example, without people ideas such as race, gender, socioeconomic status, and other identity labels would not exist. Labels are created from a shared knowledge and understanding of the objects and ideas we can observe.

Think of a label as a variable in a programming language to which we would assign data. The label and the data are independent of each other's meaning, and different people may use different labels for the same data. If I were to ask what "two" looks like, could you produce a satisfying answer? The question does not make sense. This example is an oversimplification but sufficient for this context.
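To make the variable analogy concrete, here is a minimal sketch in Python. The variable names and values are hypothetical, invented purely for illustration:

```python
# Two observers bind the very same data to different labels.
# The data carries no intrinsic label; a label is a socially agreed handle.
observation = {"heart_rate_bpm": 72, "mood_report": "flat"}

# One observer's label for this data:
baseline_reading = observation

# Another observer's label for the identical data:
tuesday_checkup = observation

# Both labels point at the same data; neither label lives "in" the data.
assert baseline_reading is tuesday_checkup
assert "baseline_reading" not in observation  # the label never entered the data
```

The point of the sketch is only that the binding between name and data is external to the data itself, which is the constructionist claim restated in code.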

In computer science, it is common to find people who hold an objectivist epistemology, one that views knowledge as something that is discovered or obtained through observation. From this perspective, identity is understood through deductive reasoning and the senses. Subjectivism and objectivism are not always incongruous, and they exist on a continuum in ontology, the theory of existence, reality, and being. It is becoming more common that research fields once dominated by positivist (or objectivist) thinking are warming to post-positivist (or subjectivist) ways of thinking.

From a constructionist point of view, an AI cannot create new meaning, only interpret what it is given. We must ask whose perspectives are represented in the meanings AI systems hold; it is often a singular perspective. Is there a singular perspective an AI could take on to solve the issues of medical and technological discrimination? To jump ahead of myself: the next section argues that no, there is not.

Part 4: The Over-Simplification of Labelling Relationships and Relational Experience

In Indigenous Methodologies, Kovach says[21], research is relationship. To gain knowledge from the world is to further our relationships with the objects and ideas in it and to experience the self in relationship. In this research, "data are more than things, they are living connections animated through the exchange of story."[21] Relationships are things that happen, things that are experienced in the moment. The knowledge gained from a written work will differ based on whether it is written in a passive third-person voice or an active first-person one. The knowledge would differ further across mediums like storytelling or conversing. To situate myself in the text as much as possible, I have elected to use first-person prose.

To better understand how the fluidity of knowledge relates to relationship, I present another passage from Kovach:

For contemporary scholars, acknowledging an Indigenous episteme is not without its challenges. The English language limits a full understanding of Indigenous knowledges. Oral tradition (as found within an Indigenous episteme) raises eyebrows of how it might, if ever, meet a standard of “truth” derived from a data-driven, text-based “objectivity.” Indigenous knowledges have a fluidity and motion that manifest in the distinctive structure of Indigenous languages meant to accentuate their animism (e.g., such as the use of ing in Nêhiyaw language). Oracy, as a form of knowledge sharing, relies upon the spoken word associated with a simultaneous witnessing other. Oral tradition is an expected knowledge dissemination approach within Indigenous communities. The written expression of Indigenous knowledges does not supersede the oral tradition in Indigenous societies. Written knowledge dissemination relies upon scribed symbols where author and reader are largely not in the same room.[21]

Building an understanding of ourselves is part of experiencing ourselves in relationships. Damage can be done when we are forcibly or coercively given understandings of ourselves. In medicine, the history of labels comes from a damaging origin in a puritanical and ableist culture where deviance was shunned.

To have depression is not the same as having depleted dopamine levels or experiencing self-harm ideation. Depression is the variable to which symptoms, as data, are assigned. However, we assign the same label to many different sets of data, and we could just as easily assign more states to the label of depression. We could assign depleted dopamine to other labels like ADHD, schizophrenia, or Parkinson's disease, but none of these labels tell us what the dopamine levels are.
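This many-to-many relationship between symptom data and labels can be sketched in code. The symptom sets and label assignments below are invented for illustration and are not clinical definitions:

```python
# Different sets of symptom data can be assigned to the same label,
# and one datum (e.g., low dopamine) can appear under several labels.
symptom_sets = {
    "person_a": {"low_dopamine", "insomnia", "self_harm_ideation"},
    "person_b": {"low_dopamine", "fatigue"},
    "person_c": {"low_dopamine", "tremor"},
}

# Hypothetical label assignments: the label abstracts away the actual data.
labels = {"person_a": "depression", "person_b": "depression", "person_c": "parkinsons"}

# The label "depression" covers two different symptom sets...
depression_sets = [s for p, s in symptom_sets.items() if labels[p] == "depression"]
assert depression_sets[0] != depression_sets[1]

# ...and "low_dopamine" appears under more than one label, so knowing the
# label alone never tells us what the underlying dopamine level is.
labels_with_low_dopamine = {labels[p] for p, s in symptom_sets.items() if "low_dopamine" in s}
assert len(labels_with_low_dopamine) > 1
```

The assertions hold because the mapping from data to label is lossy in both directions, which is exactly the abstraction the paragraph describes.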

Autism is often described by autistic activists as a sundae bar.[22] Not every autistic person has the same set of needs to the same degrees. This is a validating expansion of the understanding of being autistic, rejecting a monolithic experience. It also illustrates how autism, too, is a constructed term with different states. We do not need the "sundae bar" if we can identify each of the "toppings" we mean in the moment.

Self-identification of mental health is beginning to gain support. These terms are used as identity labels for people to understand their self in relation to the world. Even if the label changes, if the person has learned something about themself, it is a useful diagnosis. To receive an official diagnosis within a medical system requires validation by someone not in the relationship with the self: a doctor.

These labels, by their nature, create a level of abstraction over the story of people's lives. When we create systems around these abstractions and include a gatekeeping process, we narrow whose stories and experiences we account for. To use abstraction labels in an AI system to provide decisions would be to make it act as a gatekeeper, and thus discrimination and marginalization would occur.

Okay, so I have presented what cannot be done, but where do we go now?

Well, we need a “labelless” AI system!

Part 5: How Many Labels Would an AI Label if an AI Could Not Label

I believe that a method of acquiring the goals and preferences of a patient is the wrong road to go down; instead, we need a new way of using data and technology to understand and help people.

When we compare people against each other, we create systems focused on measuring their differences, which we then try to align. This results in medical discrimination. For example, the ableist notion of "curing" autistic, b/Blind, and d/Deaf people is still held by many today. Instead, we should centre the stories of individuals in relation to themself and their relationships with others - not the others themselves!

Our understanding of relationships needs to extend beyond just thinking about the people in the relationship. The relationship is a distinct actor, separate from the people. In this way, we can be in relationship to our dreams, ideas, and places where we are.

In the book More Than Two: A Practical Guide to Ethical Polyamory[23], Franklin Veaux and Eve Rickert put forward two axioms of non-monogamy:

  1. The people in the relationship are more important than the relationship
  2. Don't treat people as things

If we consider these axioms in conjunction with this definition of relationships, we can draw some conclusions. We should nurture the relationships we have with the self but not at the cost of the self. These relationships with the self and the broader world can be let go of if they become damaging. This is a frame in which we can understand “healing.”

I would say it also gives us a clear framing to apply the idea of ascertaining “goals” and “preferences” for these relationships. Most importantly, we must not treat people as things or invalidate their relation to the world!

Within this new framing of measuring health, we should then explore the answer to our previous problem: is it possible, or paradoxical, to measure something without labelling it?

As a suggestion of where to start, we might look at how to remove as many labels from existing data as possible. Instead of gender, race, sex, disability, and so on, we can identify a person's bodily features, functions, and needs directly.

We could read a genome, not to compare it against other people, but against past and future versions of the self. This would yield a directly observable record of where DNA replication has succeeded or failed over time. We could also consider brain and endocrinological activity and their changes over time.
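One hedged sketch of what "comparing a person only to themself" could look like: direct measurements indexed by time, with change computed against the person's own history rather than against a population reference. All field names and values here are hypothetical, chosen only to illustrate the comparison baseline:

```python
# Direct, label-free measurements of one person over time.
# The values are invented; the point is the baseline, not the numbers.
history = [
    {"time": "2021-06", "lung_volume_l": 4.1, "resting_hr_bpm": 68},
    {"time": "2022-06", "lung_volume_l": 4.0, "resting_hr_bpm": 70},
    {"time": "2023-06", "lung_volume_l": 3.9, "resting_hr_bpm": 69},
]

def change_over_time(history, key):
    """Compare the newest measurement to this person's own earliest one,
    instead of to a demographic reference table."""
    first, latest = history[0][key], history[-1][key]
    return latest - first

delta = change_over_time(history, "lung_volume_l")
# A negative delta flags a decline relative to the person's own past self,
# with no demographic label (race, sex, age bracket) anywhere in the data.
assert round(delta, 2) == -0.2
```

Contrast this with the spirometry machines discussed earlier, whose reference values are derived from racially biased population datasets; here the only reference is the person's own history.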

A hypothesis to test within this new framing might concern the impact of hereditary or social relationships on the accuracy of predictions about aspects of a person's relationship with the self.

These considerations come with constructed meanings to unpack. The concept of family would impact how individuals feel about measuring genetic relationships. The endocrine system's chemicals carry connotations of the construct of gender that could make gender minorities hesitant to trust such a system. Importantly, we should not consider any design to be above questioning and iteration.

We might ask ourselves, is what we are building even AI anymore? To that, I encourage remembering Alex Hanna's words, "AI might be a solution to a pretty narrow set of problems." We might want to look outside, or we might want to socially construct a new concept of what AI can look like.

Part 6: Move Slowly and Uncover New Complexity

There are going to be many challenges to this approach. The key is that we do not have to - and should not - do this alone as a single team. Even with more resources and people, there are a few challenges to flag here for consideration.

No "One Size Fits All"

The issues of accessibility will need to be addressed: not just ease of use, but how, when, and where people can access the help that the system could provide.

Whatever information is provided should be easily understandable. If there is jargon, there should be straightforward ways to learn what it means. The diversity of acceptable interaction styles should be large so that it can be highly personalized. Information should also be accessible when it is most helpful to the individual.

Methods of delivery might look like wearable technology, virtual reality, augmented reality, a brick-and-mortar building, a shared community space, and so on. The information we want to understand should be both obtainable and transferable. Data collected at separate times and with different methods should be connectable as needed. This way, not everything needs to be measured at the same time.
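The requirement that data collected at separate times, via different delivery methods, remain connectable might be sketched as records tagged with time and collection method, merged into a per-person timeline on demand. The record shape below is an assumption made for illustration, not a proposed schema:

```python
# Each observation notes when and how it was collected, so measurements
# from a wearable, a clinic, or a community space can still be joined.
observations = [
    {"person": "p1", "time": "2023-01-05", "method": "wearable", "resting_hr": 66},
    {"person": "p1", "time": "2023-03-12", "method": "clinic", "lung_volume": 4.2},
    {"person": "p2", "time": "2023-02-20", "method": "community_space", "resting_hr": 72},
]

def timeline(observations, person):
    """Collect one person's observations across all methods, ordered by time."""
    rows = [o for o in observations if o["person"] == person]
    # ISO-format date strings sort chronologically as plain strings.
    return sorted(rows, key=lambda o: o["time"])

p1 = timeline(observations, "p1")
# Two different collection methods connect into one per-person timeline,
# so nothing had to be measured at the same time or in the same place.
assert [o["method"] for o in p1] == ["wearable", "clinic"]
```

A real system would of course need consent, provenance, and far richer linking; this only shows that the "connectable as needed" property is a data-design choice, not an afterthought.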

We should not limit our help to Western concepts of relationships, and we should be critical in thinking about how our system would treat non-Western forms of medicine practice. Kovach says of Indigenous conceptual frameworks, "through word or metaphoric symbol, we must guard against the notion of 'one size fits all.'"[21] Similarly, we should not try to find a "one size fits all" idea of relationships and of helping someone with self-relationships.

Having this sort of adaptability will be a large challenge. It may be the most important one to address.

If You Can't Control Them, Commodify Them

As I have asserted, the medical system as it stands has deeply rooted issues of discrimination. The goal of many medical practices is to create a homogeneity of status. Ideas like making people "focus better," "sleep better," or "speak better" may sound innocuous but are not. These are examples of abstraction labels.

Often, when people talk about "healthy bodies" they are talking about bodies that look or act in a certain way. Often this meaning excludes disabled[24], fat[25][26], queer[27], and racialized[28] bodies (by racialized, I mean racially discriminated against). We must be careful about what we mean by "health" while designing a system motivated to "improve healthy living."

Often, "healthy" can also mean "productive."[29][30] The goal of Western medicine within capitalism is to get people back to work as fast as possible so to not lose "human capital."[31] There is a link between increased ADHD diagnosis and profit generation[32], as mental health moves from stigmatized to commodified.

Medicine is a resource gated by both money and stigma. Even when resources are accessed, there is no guarantee that the options available will serve the person in the ways that work best for them, rather than best for the system the person lives in. We should take heed not to replicate this issue.

Taxonomy of Autonomy and Monotony of Policy

Another complex issue is that of autonomy and decision making. It is easy to say that a decision should always rest with the person being helped, but sometimes that simply is not possible or true. There are also times autonomy can be used to hurt the individual.

Communication is not always possible - such as if the person is a child or does not share a language or mode of communication with the medical providers. If someone is unconscious, unless we know their preference beforehand, there is no way for them to decide in that state.

Autonomy can also be weaponized. In Canada there is an ongoing struggle against Medical Assistance in Dying (MAiD) from many disabled activist groups.[8][9][10][11][12][13] The idea behind this resistance is that people who face discrimination in a system will be the most likely to be coerced into using the service. Coercion can include enforcing capitalist requirements to prove the right to exist.

For people in Canada with a PWD designation (person with disability), their autonomy is highly limited. They receive an amount of money that makes a comfortable life impossible. For someone living in a 10-person household (the requirement for receiving the maximum amount), they currently receive $1,140 on a monthly basis.[33] On top of this, they are not allowed to hold assets with a combined net worth of $100,000 or more. If they enter a marriage or common-law relationship, the income is reduced.[34] This creates a hierarchical structure of dependence in disabled-able relationships. It also limits where and how it is possible to live, and how enjoyable life can be.

Disability-rights groups argue that introducing MAiD while these, and many other, restrictions on disabled lives exist is to send many disabled people to an early death. The restrictions on who can access MAiD also continue to loosen. In a world where threats of climate change and civil collapse affect the mental health of younger generations[35], it is deeply upsetting to think how many would be helped in ending their lives when hitting rough patches.

We will need to consider that we are creating something in an unjust system that will apathetically kill through policy.

Build Trust, Question Authority

The most marginalized in society are often the people with the most reason to distrust their government. If we want to be able to offer help to everyone, we should aim to understand and listen to this distrust to build something better.

As a society, we have the technology to run DNA tests in less than a month[36], but the ethics of the practice are questionable[37]. Such testing has also started being used to enforce borders.[38]

We should collaborate with groups globally to better account for structural issues that are unique to their location and cultural origin. To begin, however, we can start our work in Canada, by working with groups such as the First Nations Health Authority (who already work with the government).

Martyring Now for the Future

Technology and AI are large parts of today's colonial efforts. Elon Musk's effort to colonize Mars has a quickly accumulating number of injuries[39][40], and as colonial efforts continue, so will the body counts. The case is also an example of what Timnit Gebru calls "longtermism."[41]

Longtermism is the idea that people today are worth sacrificing if it means that people in the future may benefit. Musk's project is not the only example of such an endeavour. The c.k.a British Columbian government has been colonizing Wet’suwet’en land for years now to build a pipeline. Activists are still being charged and have upcoming trials where they may face jail time for peaceful protests.[42]

If our solution works well, it will hopefully improve over time as more relationships are learned. We should not, however, fall into the trap of thinking it is okay to sacrifice the people of today. A solution should help people in the future, but not at the expense of people in the present; a proper solution helps everyone.

Scope: Seven Generations[43]

What I am proposing may seem daunting; I acknowledge that. I am proposing a large swing away from the original premise. The effort's scope would be longer, broader, and harder.

The changes would need to touch many aspects of our world: medical, academic, industrial, political, socioeconomic, cultural, and so on.

The work may not come to fruition within a lifetime. We would be just one team among many working on this goal.

The question to ask yourself is this: is it worth creating a system in our lifetime that will perpetuate medical discrimination, or would you rather be among a movement that pushes back and rethinks what healing means?

Part 7: Muddying Artificial Waters

In this part, I leave questions to think about. These are dilemmas we would need to answer, or whose premises we would need to reject if a solution renders them contextually meaningless.

  • When and how would someone enter their information? How would this change per person?
  • When and how would someone decide? Who is involved in the decision-making process and at what stage?
  • Who oversees the decision when it affects someone unconscious and unable to make one?
    • Does this answer change if the person is intoxicated? To what degree of intoxication and what kind of intoxication?
    • Does this answer change if the person can communicate but no one present is able to understand?
    • Are there other situations that bring the idea of supporting autonomy into question?
    • Does the answer change based on the cultural context? How so? (e.g., MAID)
  • Are there situations where it is justifiable to override someone’s decision? What are they?
  • How do we transition away from the biased data, algorithms, and practices used today? How can we confirm we have done so?
  • What is labelless data? How would a labelless AI work?
  • Can people of any age or ability access this new system? How?
  • How should emergencies be handled?

There are many new questions to be asked beyond this, but this should give initial direction to our team's work.

Part 8: Conclusion, Question the Axioms

The medical system is rife with painful histories for marginalized identities. Science has been used to justify atrocities as "logical" and "just," and we need to be critical of our assumptions if we, as a team, are to commit our resources to creating something that will influence how well people are taken care of.

If we are to build something, it must not be on the foundation of a structure that creates marginalization. We need to start fresh, which in a way, gives us options.

I propose, as a set of non-exhaustive and not-mutually-exclusive options, the team could...

  1. Consider if AI and machine learning are the correct answer for this problem before continuing.
  2. Begin the process of building relationships with communities that have been subject to medical discrimination to better understand the nuances of the failures and how to mitigate that in any future work.
  3. Support the creation of a community-led research ethics board to help understand how to go about answering these questions.
  4. Define what a "labelless" machine learning model can look like and what problems it can help with.

If this work inspires additional proposals, this list should not be used as a restriction on what the team does; it is simply where I propose we start.

Annotated Bibliography

  1. De Vries, Jan (June 1994). "The Industrial Revolution and the Industrious Revolution". The Journal of Economic History. 54.
  2. Price, Devon (2021). Laziness Does Not Exist. Atria Books. ISBN 9781982140113.
  3. Ritland, Linnea (June 12, 2022). "You don't have to set goals". YouTube. Retrieved November 24, 2023.
  4. Conway, Megan A. (September 1, 2017). "When A Hyphen Matters: Reflections on Disability and Language". The Review of Disability Studies: An International Journal. Retrieved November 24, 2023.
  5. Price, Devon (2022). Unmasking Autism: Discovering the New Faces of Neurodiversity. Harmony/Rodale. ISBN 9780593235232.
  6. Oliver, Michael (1990). Politics Of Disablement. Red Globe Press London. ISBN 9780312046583.
  7. Moulton, Benjamin; King, Jaime S. (2010). "Aligning ethics with medical decision-making: The quest for informed patient choice". Journal of Law Medicine & Ethics. 38: 85–97.
  8. "First Nations Health Authority". First Nations Health Authority. Retrieved November 24, 2023.
  9. "DABC Statement on Medical Assistance in Dying (MAiD) and Bill C-7". Disability Alliance BC. June 24, 2022. Retrieved November 24, 2023.
  10. "MAiD to MAD threatens vulnerable Canadians in Bill C-7". MAiD to MAD. Retrieved November 24, 2023.
  11. "DisabilityFilibuster – Disability Filibuster against C-7". DisabilityFilibuster. Retrieved November 24, 2023.
  12. "UPDATED: We need your help to stop Bill C-7 - Inclusion Alberta". Inclusion Alberta. February 8, 2021. Retrieved November 24, 2023.
  13. "Disability-Rights Organizations' Public Statement on the Urgent Need to Rethink Bill C-7, The Proposed Amendment to Canada's Medical Aid in Dying Legislation | Council of Canadians with Disabilities". Council of Canadians with Disabilities. Retrieved November 24, 2023.
  14. Gupta, Manjul; Parra, Carlos M.; Dennehy, Denis (June 20, 2021). "Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter". Information Systems Frontiers. 24.
  15. "Distributed AI Research Institute | DAIR". Distributed AI Research Institute. November 24, 2023. Retrieved November 24, 2023.
  16. "Introducing DAIR". Distributed AI Research Institute. November 30, 2022. Retrieved November 24, 2023.
  17. Castro, Ethan (October 10, 2022). "They Lied; AI Won't Alleviate Us. The zeitgeist is that: No, Artificial… | by Ethan Castro | LatinXinAI | Medium". Medium. Retrieved November 24, 2023.
  18. Bochtis, Dionysis; Benos, Lefteris; Lampridi, Maria; Marinoudi, Vasso; Pearson, Simon; Sørensen, Claus G. (August 2020). "Agricultural Workforce Crisis in Light of the COVID-19 Pandemic". Sustainability. 19.
  19. Vanek Smith, Stacey (November 29, 2022). "The mystery of rising prices. Are greedy corporations to blame for inflation?". NPR. Retrieved November 24, 2023.
  20. Braun, Lundy (Autumn 2015). "Race, ethnicity and lung function: A brief history". Canadian Journal of Respiratory Therapy. 51.
  21. Kovach, Margaret (2021). Indigenous Methodologies: Characteristics, Conversations, and Contexts. University of Toronto Press. ISBN 9781487508036.
  22. "Irrevocably Illogical". Tumblr. July 18, 2014. Archived from the original on May 17, 2018. Retrieved November 24, 2023. |first= missing |last= (help)
  23. Veaux, Franklin; Rickert, Eve (2014). More Than Two: A Practical Guide to Ethical Polyamory. Thorntree Press, LLC. ISBN 9780991399727.
  24. Aas, Sean (June 22, 2016). "Disabled – therefore, Unhealthy?". Ethical Theory and Moral Practice. 19.
  25. Ardern, Christopher I.; Katzmarzyk, Peter T.; Janssen, Ian; Ross, Robert (September 6, 2012). "Discrimination of Health Risk by Combined Body Mass Index and Waist Circumference". Obesity Research. 11.
  26. Herndon, April (Autumn 2002). "Disparate but Disabled: Fat Embodiment and Disability Studies". Nwsa Journal. 14.
  27. Tabaac, Ariella; Perrin, Paul B.; Benotsch, Eric G. (December 26, 2017). "Discrimination, mental health, and body image among transgender and gender-non-binary individuals: Constructing a multiple mediational path model". Journal of Gay & Lesbian Social Services. 30.
  28. Ahmed, Ameena T.; Mohammed, Selina A.; Williams, David R. "Racial discrimination & health: pathways & evidence". Indian Journal of Medical Research. 126.
  29. Schwartz, Steven M.; Riedel, John (September 2010). "Productivity and Health: Best Practices for Better Measures of Productivity". Journal of Occupational and Environmental Medicine. 52.
  30. Wolf, Kirsten (2010). "Making the Link between Health and Productivity at the Workplace ―A Global Perspective". Industrial Health. 48.
  31. Zhang, Wei; Bansback, Nick; Anis, Aslam H. (January 2011). "Measuring and valuing productivity loss due to poor health: A critical review". Social Science & Medicine. 72.
  32. Hinshaw, Stephen P.; Scheffler, Richard M. (2014). The ADHD Explosion: Myths, Medication, Money, and Today's Push for Performance. Oxford University Press. ISBN 9780199790555.
  33. "Disability Assistance Rate Table - Province of British Columbia". Government of British Columbia. August 1, 2023. Retrieved November 24, 2023.
  34. "Assets & Exemptions - Province of British Columbia". Government of British Columbia. November 24, 2023. Retrieved November 24, 2023.
  35. Ma, Tianyi; Moore, Jane; Cleary, Anne (May 2022). "Climate change impacts on the mental health and wellbeing of young people: A scoping review of risk and protective factors". Social Science & Medicine. 301.
  36. "How Long Does a DNA Test Take? {Complete Guide} | Dynamic DNA". Dynamic DNA. November 24, 2023. Retrieved November 24, 2023.
  37. de Wert, Guido (September 1, 1999). "Ethics of predictive DNA-testing for hereditary breast and ovarian cancer". Patient Education and Counseling. 35.
  38. Makhlouf, Medha D. (January 1, 2021). "The Ethics of DNA Testing at the Border". American Journal of Law & Medicine. 46.
  39. "SpaceX - Missions: Mars". SpaceX. November 24, 2023. Retrieved November 24, 2023.
  40. Taylor, Marisa (November 10, 2023). "At SpaceX, worker injuries soar in Elon Musk's rush to Mars". Reuters. Retrieved November 24, 2023.
  41. Ahuja, Anjana (May 10, 2023). "We need to examine the beliefs of today's tech luminaries". Financial Times. Retrieved November 24, 2023.
  42. "Canada: Charges must be dropped against Wet'suwet'en land defenders and their supporters". Amnesty International. October 26, 2023. Retrieved November 24, 2023.
  43. "What is the Seventh Generation Principle?". Indigenous Corporate Training Inc. May 30, 2020. Retrieved November 24, 2023.