Interview: Enthought – the trials and tribulations of AI

There is still a general sense of wariness in some sectors of the life sciences. We discussed why this might be, what businesses can do to reassure clients and staff, and why machines are not quite set to take people's jobs yet.

OSP: A lot of people are wary about artificial intelligence, including within the life sciences industry – is this justified?

MC:​ Any time a new technology emerges, people should absolutely be cautious about its implications. Two primary generative AI risks to be aware of involve intellectual property (IP) and bias. Safeguarding your IP—your R&D data—from being used to train models outside your company should be your first and foremost priority when integrating AI. Bias comes into play when a model systematically produces skewed results because the data and assumptions it was trained on were themselves biased.

Automation bias—the tendency for people to trust a computer's output over a person's judgment, even in the face of conflicting information—is another concern, especially when AI models are not transparent or explainable. Both forms of bias can be mitigated, however, first by staying alert to where bias can arise, and then by counterbalancing it with additional inputs, proactive training, workflow checkpoints, and quality control.

And while we should be wary of these risks, we shouldn't let them prevent the scientific community from embracing the positive and powerful impact AI can have on R&D. The human-centric nature of traditional drug discovery processes means variability and errors are inevitable, but AI offers a new opportunity to fine-tune those processes and enable a more precise approach.
