Modern healthcare largely depends on the effectiveness of clinical trials. This has become especially evident in the past two years, as the world faced the COVID-19 pandemic and needed to find the right medicines and develop vaccines as fast as possible. Less than a year passed between the moment the first patient in China was diagnosed with the new virus and the moment the first vaccine was licensed, a record time for the global pharmaceutical industry.
However, this seems to be just the tip of the iceberg: advanced information technologies are revolutionizing clinical trials, so the scientific system itself is expected to undergo breakthrough changes within the next decade.
The revolution has, in fact, already begun: until recently, clinical trials had only a 13.8% probability of success (PoS), yet the numbers have been rising since 2015. The estimated PoS of clinical trials had grown to 40.6% by June 2021 and skyrocketed to a record 98.6% for phase III trials. For the first time in history, medicine has reached a point where the launch of a phase III trial is itself a telltale sign of success.
Two years ago, Nature wrote that the major cause of this boost is the active involvement of artificial intelligence (AI) technologies in clinical research. The number of trials increased as the AI medical market expanded: according to IQVIA, the market reached $600 million in 2014 and is estimated to soar to $8 billion by 2022, and this is clearly not the limit.
There is abundant statistical evidence that this trend will continue and gain momentum. Still, I believe it is more interesting to look at the qualitative changes that are happening in medical science right now, and that will unfold in the future, thanks to artificial intelligence and IT in general. I can outline ten major trends and will provide an overview of each.
1. More trials and errors
Medical science — and clinical research in particular — is an endless story of trials and errors. Scientists and pharmacologists are testing more and more medications for their efficacy against various pathologies — the number of investigational drugs in development has increased dramatically in recent years, as my colleagues pointed out in their Clinical Reader column.
Manufacturers are competing both for people willing to take part in trials and for the chance to develop the first drug that can eliminate a given pathology.
The thing is that the nanotech progress of the early 21st century has radically changed the system of targeted therapy that emerged at the turn of the millennium, as well as the whole approach to developing new drugs. No one is working on a universal "cure for cancer" now; from a scientific standpoint, that would seem naïve or downright ridiculous. Instead, scientists are looking for medications that affect specific cell types, genome sequences, and the tiniest parts of the human body while having no impact on healthy cells and organs.
Targeted drugs have already demonstrated that they can treat various types of cancer and help patients live longer. Personalized targeted therapy is the future of medical science. The ideal that scientists are now striving for is a drug that would help a particular patient N treat their particular type of tumor without damaging a single healthy cell. But if patient N has a rare, one-in-ten-million type of cancer pathology, how is it possible to develop and test a cure for it? Traditional clinical trials, which must involve thousands of participants, leave patient N no chance of survival.
Artificial intelligence enables testing numerous promising medications in silico, i.e., using computer modeling. At its present stage of development, the in silico method cannot substitute for classical trials with their human and animal testing. But it can, for example, prescreen the efficacy of a targeted drug without losing time searching for candidates.
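To make the idea of in silico prescreening concrete, here is a minimal sketch. The compound names and affinity scores are invented placeholders standing in for what a real docking simulation or machine-learning model would produce; only the candidates passing a threshold proceed to wet-lab testing.

```python
# Toy in silico prescreening: rank candidate compounds by a (hypothetical)
# predicted affinity score and keep only the most promising ones for
# follow-up in classical trials. Scores are illustrative stand-ins for
# real model outputs.

def prescreen(candidates, threshold=0.7):
    """Return candidates whose predicted score passes the threshold,
    sorted from most to least promising."""
    passed = [(name, score) for name, score in candidates.items()
              if score >= threshold]
    return sorted(passed, key=lambda pair: pair[1], reverse=True)

# Hypothetical model outputs for four candidate compounds.
predicted_affinity = {
    "compound_A": 0.92,
    "compound_B": 0.41,
    "compound_C": 0.77,
    "compound_D": 0.65,
}

shortlist = prescreen(predicted_affinity)
# Only compound_A and compound_C survive the cut.
```

The point of the sketch is the funnel shape: a cheap computational filter discards weak candidates before any expensive laboratory time is spent.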
2. A search for candidates becomes more effective
For many scientists, enrolment can become a pain in the neck. Has it always been like this? In his Nature article, Marcus Woo told the story of a 1994 breast cancer study led by Kevin Hughes. Hughes sought female volunteers over the age of 70 with particular types of mammary gland tumors. Out of 40,000 potential candidates in the US, the researchers eventually enrolled only 636 patients, and it took five years to complete the search, negotiations, and paperwork.
It was a tremendous loss of time. The average survival rate for patients with early-stage mammary gland cancer is 90% in developed countries, but only 66% and 40% in India and South Africa, respectively. This means that while the research was stuck in an infrastructural quagmire for five years, one patient in ten in the US and three patients in five in South Africa lost their lives.
Until recently, many leaders of research teams had no way to speed up their search for candidates, and 86% of clinical trials worldwide had to be delayed. Nowadays, artificial intelligence can find the right candidates in seconds and even send them standard invitation letters. This is probably the most obvious part of all the work we have delegated to machines.
3. Openness to everyone
Modern systems for data collection and analysis work both ways. Not only is it easier for researchers to find potential candidates, but candidates also have better chances of finding a trial where they could be of great help. A single online resource, ClinicalTrials.gov, contains information on 300,000 clinical trials that are currently in progress. However, it can be very hard for people outside science or medicine to understand which trials they might qualify for.
My colleagues from DQueST designed a smart survey that helps candidates filter out clinical trials that are clearly not applicable to them. It takes only 50 simple questions to cross out 60-80% of trials, lowering the threshold for those interested in participation.
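The core mechanism behind such a survey can be sketched in a few lines. The trial records, identifiers, and criteria below are all invented for illustration; the real DQueST system is far richer, but the principle is the same: each answer acts as a hard filter that eliminates trials the candidate clearly cannot join.

```python
# Toy illustration of survey-based trial filtering (all trial data and
# criteria are invented). Each trial lists hard eligibility criteria;
# a candidate's answers rule out trials they cannot qualify for.

trials = [
    {"id": "NCT-001", "min_age": 70, "max_age": 99, "condition": "breast cancer"},
    {"id": "NCT-002", "min_age": 18, "max_age": 65, "condition": "diabetes"},
    {"id": "NCT-003", "min_age": 50, "max_age": 80, "condition": "breast cancer"},
]

def eligible_trials(answers, trials):
    """Keep only the trials not ruled out by the candidate's answers."""
    return [t["id"] for t in trials
            if t["min_age"] <= answers["age"] <= t["max_age"]
            and t["condition"] == answers["condition"]]

candidate = {"age": 85, "condition": "breast cancer"}
matches = eligible_trials(candidate, trials)
# Only NCT-001 remains: NCT-002 targets a different condition, and
# NCT-003's age window tops out at 80.
```

With 50 such questions applied in sequence, most of the 300,000 registered trials are eliminated after the first handful of answers, which is why the survey cuts the list by 60-80% so quickly.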
Incidentally, this reveals an important problem of our age, typical for science as well: a lack of information has turned into an overabundance of it. Intelligent tools that help sift out needless data are becoming more valuable, and I am sure they will be extremely useful for the science of the 2020s and 2030s.
4. Protocol design gets better
Every clinical trial must follow a pre-planned protocol, so any accidental deviations or external factors can suspend its activities for months. This leads not only to actual loss of money but also to potential loss of lives — especially in the case of new targeted drugs.
The more rigorously a clinical trial protocol is designed, the more reliable its results are. Meanwhile, scientists preparing the protocol must take multiple factors into consideration, from the specifics of the disease to published data on previous trials. ClinicalTrials.gov alone offers information on 300,000 studies being carried out in over 200 countries. Obviously, even the most brilliant leader of a research team cannot assess such large amounts of information. This is why, until recently, many trials partially duplicated one another, ignored the results of similar trials, lost time, or faced unpredictable obstacles because of the inability to process so much data.
These problems are not completely gone but are gradually being dealt with thanks to AI systems. Services like Trials.ai can collect all data on previous and current research from available open sources so that it can be used to design a trial protocol. AI can calculate the parameters relevant to the research and their ranges: for instance, a therapy for diabetes requires people with a specific hemoglobin level that is neither too high nor too low, but estimating these limits without the results of prior clinical trials can be a real challenge.
5. Monitoring at all stages
In the 1950s, it took a lot of effort for researchers to get a license for contraceptive pill trials. There were many obstacles to overcome: pushback from conservatives in the scientific community, government doubts about the reasonability of this type of medication, and the patriarchal views dominant in many families. However, unexpected trouble appeared during the final stages: many women who had agreed to join the trials often forgot to take their pills on time. Monitoring administration turned out to be problematic, leading to significant distortion of the initial results.
Anti-COVID vaccines, with their accelerated approval, faced similar complications: the "gold standard" of double-blind, randomized, placebo-controlled trials was thwarted by fear of COVID, with participants unblinding themselves (i.e., violating the rule that they must not know whether they received an authentic medication or a placebo) by taking tests for S-protein antibodies after vaccination. In fact, vaccine developers had no tools to control the situation, which exacerbated an ethical problem of modern science: does the "blinded" placebo group have the right to receive real treatment? And when, given the threat of a deadly pandemic?
This is where the latest technologies can be of use once again. For example, patients testing drugs for mental disorders often forget or even refuse to take their medication. Nowadays, there are platforms that not only remind patients to take their medicine but also verify its administration via smartphone cameras and computer vision. A study of one such service found that 90% of patients diagnosed with schizophrenia who used the system took their medication on schedule, compared to only 72% in the control group.
Further development of artificial intelligence, including facial, speech, and emotion recognition, as well as of "smart" gadgets that can measure multiple parameters, will improve participation control and make trial results more accurate. As soon as the data are collected, the placebo group can be unblinded and immediately treated with the real medicine.
6. NLP: systematization becomes easier
Another promising trend is natural language processing (NLP). NLP systems help computers analyze written and spoken language. In medicine and clinical trials, their main objective is to find relevant excerpts in medical charts and in records made by doctors and patients themselves, even including words uttered during a check-up.
The biggest challenge of introducing NLP into all clinical trials is the need to teach the neural network all the basic domain-specific terms and lexemes in advance. In other words, human intelligence (represented by data scientists, medics, and linguists) has a lot of work to do before its artificial counterpart takes over. This leaves great potential for development, with services gradually becoming more effective: the accuracy of their entity recognition and relation extraction has reached 79.5% and 80.5%, respectively. I believe the real revolution will begin when these values climb to 97-98%.
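To illustrate what the entity-recognition stage of such a pipeline does, here is a deliberately simple dictionary-based sketch. Real clinical NLP systems use trained statistical models rather than a hand-written lexicon, and the terms below are invented examples; the sketch only shows the input-output shape of the task.

```python
# Minimal, dictionary-based sketch of clinical entity recognition
# (real systems use trained models; the lexicon here is illustrative).
# It scans free-text notes for known drug and condition names.

import re

LEXICON = {
    "metformin": "DRUG",
    "insulin": "DRUG",
    "type 2 diabetes": "CONDITION",
    "hypertension": "CONDITION",
}

def extract_entities(note):
    """Return (term, label) pairs found in the note, longest terms first."""
    found = []
    lowered = note.lower()
    for term in sorted(LEXICON, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append((term, LEXICON[term]))
    return found

note = "Patient with type 2 diabetes, started on metformin last month."
entities = extract_entities(note)
# Finds the condition "type 2 diabetes" and the drug "metformin".
```

The hard part, and the reason the preparatory human work is so large, is that real notes are full of abbreviations, misspellings, and context-dependent phrasing that no fixed lexicon can cover, which is exactly where trained models earn their 79.5% and 80.5% scores.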
7. Quick analysis of collected data
A scientist rummaging through heaps of paper and scattered files: this image still dominates in mass culture. Yet it was the analysis of clinical trial results that pioneered the application of AI. For decades, systematizing large amounts of data was a major problem for science and resulted in numerous mistakes caused by evidence misinterpretation. Nowadays, any large-scale study can be quickly converted into elegant charts, tables, and mathematical models. Thanks to these technologies, for the first time ever, mankind has the opportunity to literally observe the spread of a new virus online.
8. Prospects of making a universal database
The next step in big data development is a single database for all the world's healthcare systems, patients, doctors, and studies. Overcoming language barriers and national borders, such a database would become the cornerstone of future medical progress. Scientists of the late 19th and early 20th centuries, when the modern approach to clinical trials was taking shape, could not even imagine having access to data on millions of cancer or diabetes patients. Back then, not only was it impossible to digitize all this information, but there were also fewer people. While overpopulation poses a clear threat to the climate, for medical science it is more of a blessing, creating the potential to find and test medications even for the most uncommon diseases. A "one case in a million" situation looks completely different depending on whether one or eight billion people live on this planet.
This obviously raises questions about data transparency and leaks of confidential information about patients' health. These are the two challenges the World Health Organization and the IT community are facing, and I want to believe that, because of the 2020-2021 COVID-19 pandemic, these questions have received the attention of the people in charge.
9. Double-blind, randomized, placebo-controlled clinical trials: a relic of the past?
CFD (computational fluid dynamics) technologies have been widely applied in auto racing and the aerospace industry. They are used to study continuum mechanics, helping to streamline racecars and spaceships and reduce the drag of airplanes and torpedoes. Everyone knows about wind tunnel tests, which still remain the industry's "gold standard". However, in recent decades CFD has become so advanced that a great share of the calculations is performed virtually, and their accuracy keeps growing. It is highly plausible that in the near future, no wind tunnels will be needed to check the results of computer calculations.
Medicine may also eventually introduce so-called "virtual patients". As of now, such technologies are used primarily for medical education, yet it is possible to create a digital "human model" to test the efficacy of various medications.
Virtual clinical trials will be — and, due to the COVID-19 pandemic, already are — at their most useful in several situations: firstly, when a drug targets life-threatening or poorly understood diseases like Ebola or rabies (so it is ethically impossible to establish a control group); secondly, when therapy is developed for rare nosologies (if a disease rate is very low, it is extremely hard to assemble a control group); and thirdly, when excessively large population groups are required to test a medication's efficacy. Compared to fluid dynamics, "virtual patient" technology is only taking its first steps, but in the coming decades it will surely see an influx of money, human resources, and scientists' attention.
Nevertheless, a huge share of clinical trials, around 80%, are now on hold because of a lack of participants and old-fashioned, "analogue" ways of looking for them.
10. Lower expenses on trials, higher revenues for pharma companies
Last, but definitely not least: money, human resources, and attention. Every trial involves thousands or millions of dollars, hundreds or thousands of people, and dozens or hundreds of organizations and countries. Transactional expenses alone have reached exorbitant amounts. Besides, as I mentioned at the beginning of this article, only about 40% of medical trials end successfully (six years ago, this rate was three times lower!), so billions of dollars and man-hours in the industry regularly go to waste.
Artificial intelligence cannot itself replace a human being at the helm of a clinical trial or the real patients taking part in the research, but it could save those billions. It already does, as it is gradually implemented in medicine and science. I truly hope that the funds saved by IT will be spent by pharmaceutical companies on developing new drugs, vaccines, and therapy methods. For us, investors and IT specialists, this is both important and inspiring.