Applied AI in Healthcare (IAM-AI)


Applied AI in Healthcare works with machine learning algorithms, neural networks, and deep learning systems.

Machine learning is an approach to data analysis in which models are built and adapted so that programs can "learn" from experience. It involves constructing algorithms that adjust their models to improve their ability to make predictions. Neural networks and deep learning systems, which are used in image analysis and other fields, apply a series of functions to process an input signal and translate it through multiple stages into the expected output; their structure is often compared to that of the human brain. The goal of these analyses in medical research is to improve the prediction of clinical phenotypes and treatment outcomes and to achieve better patient stratification. Both data from routine medical diagnostics and study data can be used for this purpose.
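
As a purely illustrative sketch of this kind of supervised-learning workflow (not one of the group's actual models), the following Python example trains a small neural network on synthetic tabular data to predict a binary clinical phenotype. The number of patients, the features, and the labels are all invented for the example.

```python
# Minimal sketch of supervised learning on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Invented routine-diagnostics features (e.g. age, lab values, imaging scores).
n_patients, n_features = 500, 8
X = rng.normal(size=(n_patients, n_features))
# Synthetic binary phenotype that depends loosely on the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small feed-forward neural network "learns" the mapping from features to phenotype.
scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

# How well does the adapted model predict the held-out phenotype?
probabilities = model.predict_proba(scaler.transform(X_test))[:, 1]
print(f"Test ROC AUC: {roc_auc_score(y_test, probabilities):.2f}")
```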

In Applied AI in Healthcare, the algorithms developed are regarded as precursors of AI-based medical devices. The focus is therefore always on proximity to users in the clinic and on the explainability of the models and their results (Explainable AI).

IAM-AI is a member of the bAIome.

Examples of currently running projects:

  • As the second most common degenerative disease of the nervous system, Parkinson's disease causes considerable suffering for those affected if left untreated. Our group is working on methods to support both patients and their treating physicians in the complex adjustment of the necessary medication. The goal is an automated dosage recommendation for levodopa based on verifiable, measurable factors. The basis for this is continuous monitoring of motor symptoms using low-cost, widely available smart wearables (a simplified sketch of this idea follows at the end of this item). This approach promises several advantages over current clinical practice:

    • Direct intervention is possible when symptoms change. This can happen quickly and does not depend on subjective assessments of the symptoms, which do not always agree with one another. The use of transparent machine learning techniques also promises insights that can be carried over to diagnostics performed without measurement devices.
    • Monitoring and adaptation can also take place away from the traditional clinical setting. In the spirit of telemedicine, patients can thus be cared for optimally in their familiar home environment. This promises a significant improvement in quality of life, and not only for patients with limited mobility.
    • By recording critical factors, optimized individual medication plans can also be predicted. Instead of having to rely on general population values, the individual circumstances of those affected can be taken into account from the outset.

    We are currently working with our pharmaceutical partners in the hospital pharmacy to integrate the knowledge gained regarding optimal treatment into everyday clinical practice. Optimized dosage is envisaged for preparations produced individually by means of a 3D printer. Further information on this so-called "closed-loop system" can also be found on the hospital pharmacy website.
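
    The following Python sketch only illustrates the general idea of wearable-based symptom monitoring; it is not the group's pipeline. It estimates a simple tremor score from synthetic wrist-accelerometer data via the power in the typical 4-6 Hz rest-tremor band and maps it to a placeholder dose-review flag. The sampling rate, signal, threshold, and decision rule are all assumptions made up for the example.

    ```python
    # Purely illustrative: estimate a tremor score from (synthetic) wrist
    # accelerometer data and map it to a placeholder dose-review flag.
    # Sampling rate, signal, threshold and the rule itself are invented.
    import numpy as np
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    fs = 50.0                     # assumed wearable sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)  # one minute of data

    # Synthetic wrist acceleration: 5 Hz tremor-like oscillation plus noise.
    rng = np.random.default_rng(0)
    accel = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)

    # Power spectral density; Parkinsonian rest tremor typically lies around 4-6 Hz.
    freqs, psd = welch(accel, fs=fs, nperseg=512)
    band = (freqs >= 4.0) & (freqs <= 6.0)
    tremor_score = trapezoid(psd[band], freqs[band])

    # Placeholder decision rule -- a real system would rely on a validated,
    # clinician-approved model rather than a hard-coded threshold.
    if tremor_score > 0.01:
        print(f"Tremor score {tremor_score:.3f}: flag medication for review")
    else:
        print(f"Tremor score {tremor_score:.3f}: no change suggested")
    ```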

  • The Artificial Intelligence for CAncer REgistration and Research (AI CARE) project is an innovation project funded by the German Federal Ministry of Health (BMG) under the call "Combining and intelligently using cancer registries". It addresses two questions:

    1) Can the complex cancer registry data be processed, improved and merged using artificial intelligence (AI) methods in such a way that they are more accessible than before for oncological quality assurance and research?

    2) How can evaluations of cancer registry data with AI methods meaningfully complement the classic evaluation spectrum of oncological health services research?

    The collaborative project involves 13 partners, consisting of experts in cancer registration, medical informatics and artificial intelligence. It started on September 1, 2022 and will run for three years.

  • Mapping between different formats of Electronic Medical Records is a challenging task. The data collected in individual studies are often very specific and not easily transferable to other data structures. This is a problem wherever comparisons with, or combinations of, data from other studies are required. The main solution is to harmonize the data using a common data model (CDM) such as OMOP (Observational Medical Outcomes Partnership).

    However, existing solutions for mapping data into OMOP are not highly accurate, involve a lot of manual work, and are difficult to integrate into other processes. Furthermore, they are currently only applicable to data in English.
    The current pilot project builds on a method used in one of these solutions, TF-IDF (term frequency-inverse document frequency, a weighting scheme that matches terms based on how characteristically they occur). We have extended it so that it can be integrated easily, works with different languages, and returns more than one suggested mapping to OMOP concepts without additional manual steps; a small sketch of this kind of matching follows below. Next, we will further improve accuracy using deep learning methods.
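
    As a rough sketch of this kind of TF-IDF matching (not the project's actual implementation), the following Python example ranks candidate concept names for free-text source terms using character n-grams, which keeps the matching usable across languages. The concept names, IDs, and source terms are invented placeholders, not real OMOP vocabulary entries.

    ```python
    # Rough sketch of TF-IDF term matching; concept names and IDs are invented
    # placeholders, not real OMOP vocabulary entries.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    concepts = {
        1001: "myocardial infarction",
        1002: "type 2 diabetes mellitus",
        1003: "essential hypertension",
    }
    source_terms = ["Herzinfarkt / myocard. Infarkt", "Diabetes Typ 2"]

    # Character n-grams keep the matching usable across languages and abbreviations.
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    concept_matrix = vectorizer.fit_transform(list(concepts.values()))

    # Rank every concept for each source term and keep the top two suggestions.
    similarities = cosine_similarity(vectorizer.transform(source_terms), concept_matrix)
    concept_ids = list(concepts.keys())
    for term, sims in zip(source_terms, similarities):
        top = sims.argsort()[::-1][:2]
        print(term, "->", [(concept_ids[i], round(float(sims[i]), 2)) for i in top])
    ```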

  • Following conventional therapy standards, patients are treated with a disease-specific drug therapy after a diagnosis has been made. Since people can react to drugs differently, patients often suffer severe side effects or show no response at all. Following a trial-and-error principle, the drug dosage is then changed, or a different therapy is tried, until a suitable one is found. The rate at which current therapies prove ineffective is very high.

    In contrast to this classical approach, Precision Medicine uses comprehensive molecular, cellular, and functional analyses to define the disease more precisely right from the start. In this way, individual predispositions can be factored into the therapy, the efficacy and dose of a drug treatment can be predicted more accurately, and side effects can be minimized in advance.

    Significant advances over recent decades in technologies for generating omics data have made these comprehensive analyses, and thus Personalized Medicine, possible. The cost of sequencing a human genome has dropped from roughly $2.7 billion for the first genome in 2003 to around $1,000 by the mid-2010s, and the $100 genome is within reach. While it took years to analyze a genome in 2003, today this can be done in a matter of hours. These developments have led to an unprecedented amount and variety of biomedical data, which is likely to grow further in the coming years.

    The biomedical research data needed to interpret novel findings is stored in various external sources: genetic databases, pathway databases, or publications. To make this data usable, it must be structured and merged with internal patient data. IT solutions are needed here to support physicians from the evaluation of the analysis through to the therapy decision and to automate this process as far as possible. Only with smart, digital knowledge management can individualized treatment be offered to as many patients as possible.
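
    As a minimal illustration of what such structuring and merging can look like in practice (not the group's actual system), the following Python sketch joins an invented internal patient-variant table with an equally invented external annotation table. All table names, columns, and values are assumptions made for the example.

    ```python
    # Illustrative only: join an internal patient-variant table with an external
    # annotation table. All table names, columns and values are invented.
    import pandas as pd

    patient_variants = pd.DataFrame({
        "patient_id": ["P001", "P001", "P002"],
        "gene": ["EGFR", "TP53", "BRCA1"],
        "variant": ["L858R", "R175H", "c.68_69del"],
    })

    external_knowledge = pd.DataFrame({
        "gene": ["EGFR", "BRCA1", "KRAS"],
        "pathway": ["RTK signalling", "DNA repair", "MAPK signalling"],
        "source": ["pathway_db", "pathway_db", "pathway_db"],
    })

    # Structured merge: each variant is annotated where a matching gene exists;
    # unmatched variants are kept (with NaN) so they can still be reviewed.
    annotated = patient_variants.merge(external_knowledge, on="gene", how="left")
    print(annotated)
    ```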