On a more practical angle, for related research you might find the work of Dr Sachil Singh at York University (the Canadian one) of interest: https://health.yorku.ca/health-profiles/?mid=2080700 . He's been looking at embedded racial (and other) bias in existing CPD systems that use "AI" to generate training material from academic publications, and he's found multiple problems in practice. Many medics treat these training systems as something close to decision-support systems (and the companies that produce them are obviously keen to move into that area properly). The underlying research projects may be biased (a lack of diversity in test patient data, for example), or the literature in a particular area may itself be biased. Finally, the systems' "AI" interpretation of the data seems to introduce new biases by misinterpreting statements or numbers in the literature about the racial make-up of a patient base, such as reading a study that was trying to address the systemic lack of study of non-white patients as meaning that non-white patients have a greater prevalence of certain conditions.
He gave a talk here a few months back, though I'd already met him at a workshop when he was a PhD student in surveillance studies.