Government introduces data system to boost artificial intelligence in healthcare sector
In a significant move to bolster medical AI research, Taiwan's Ministry of Health and Welfare has launched a clinical data verification and certification system [1]. The system is designed to address two major challenges: avoiding bias in AI-based healthcare and ensuring data privacy.
To avoid AI bias, the system emphasizes training medical AI algorithms on Taiwan's own indigenous clinical data, which accurately reflect the local population's demographics and health conditions. This approach counters a key limitation that arises when models developed on one population are applied to another: for example, a diabetic retinopathy diagnosis AI deployed in Thailand proved ineffective because of biased and limited training data [1]. The system also tackles overrepresentation, in which privileged or urban populations dominate datasets, skewing AI outcomes and reducing effectiveness for underrepresented groups [1].
Additionally, the system incorporates federated learning, a privacy-preserving AI technique that enables hospitals to collaboratively train AI models without directly sharing raw patient data. Participating hospitals, such as Taichung Veterans General Hospital, can compare and use data securely while complying with standards such as Health Level Seven's Fast Healthcare Interoperability Resources (FHIR) [1]. Because training is decentralized and data processing stays local, the risk of patient data exposure is minimized.
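The federated approach described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of federated averaging (FedAvg), one common federated learning scheme; it is not the ministry's actual implementation. Each simulated "hospital" trains a simple logistic regression model on its own data, and only model weights, never patient records, are aggregated:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital's local training step: logistic regression
    via gradient descent on its own (private) data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_weights, hospital_datasets):
    """Aggregate locally trained weights, weighted by dataset size.
    Raw data never leaves each hospital's local_update call."""
    updates, sizes = [], []
    for X, y in hospital_datasets:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy example: three "hospitals" with synthetic data
rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float))
             for _ in range(3)]
weights = np.zeros(3)
for _ in range(10):  # ten communication rounds
    weights = federated_round(weights, hospitals)
```

In a real deployment, each `local_update` would run inside a hospital's own data center, with only the weight vectors transmitted to a central aggregator.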
Lee Chien-chang, Director of the Department of Information Management, spoke at the system's launch event. He emphasized the importance of rigorous and standardized verification of clinical data integrity for the nation's medical AI development [1]. Lee also stated that medical algorithms in Taiwan need to be trained on indigenous data to avoid AI bias in "smart" medicine [1].
The system is also designed to ensure digital privacy: data are de-identified before use, so developers can build and test models on data from multiple hospitals without ever accessing physical documents. The system's launch event took place in Taipei.
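One common building block for the kind of de-identification described above is pseudonymization, replacing direct identifiers with salted one-way hashes. The sketch below is purely illustrative; the field names, salt handling, and overall scheme are assumptions, not the ministry's actual method:

```python
import hashlib

# Illustrative salt; in practice this secret would be generated and
# kept inside the data center, never shipped to developers.
SALT = b"per-deployment-secret"

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with stable, non-reversible pseudonyms
    while leaving clinical values usable for model development."""
    out = dict(record)
    for field in ("national_id", "name"):  # hypothetical identifier fields
        if field in out:
            digest = hashlib.sha256(SALT + out[field].encode()).hexdigest()
            out[field] = digest[:16]
    return out

record = {"national_id": "A123456789", "name": "Lin", "hba1c": 7.2}
safe = pseudonymize(record)
```

Because the same identifier always maps to the same pseudonym, records for one patient can still be linked across hospital datasets without revealing who the patient is.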
The Department of Information Management and the Food and Drug Administration have subsidized clinical data centers at Taichung Veterans General Hospital, Kaohsiung Chang Gung Memorial Hospital, Tri-Service General Hospital, and Far Eastern Memorial Hospital. These centers feature an inter-hospital data comparison function and a federated learning model.
Each branch of the AI datacenter has a specific mission, such as ensuring data security and privacy, certifying AI safety and fairness, or recommending models for inclusion in the National Health Insurance system.
In summary, this systematic approach is a key advancement for ethical, effective AI in healthcare tailored to Taiwan's population. Bias avoidance is primarily ensured by training AI on verified, representative local clinical datasets and mitigating demographic imbalances. Data privacy is protected through federated learning models that enable collaborative AI development without sharing raw sensitive data outside institutions. Certification and verification bolster system-wide trust and standardization for medical AI in Taiwan.
References:
[1] Taiwan News, 2022. Taiwan's Ministry of Health and Welfare launches clinical data verification and certification system. [online] Available at: https://www.taiwannews.com.tw/en/news/4461514
[2] Chang, C. and Chang, C., 2022. Taiwan's AI datacenter to support medical AI development. [online] The China Post. Available at: https://www.chinapost.com.tw/business/technology/2022/03/16/586420/Taiwans-AI-datacenter-to-support-medical-AI-development.html
- Taiwan's clinical data verification and certification system, designed for medical AI research, aims to improve AI effectiveness by training algorithms on indigenous data that accurately reflect local medical conditions and health trends.
- To protect data privacy and avoid AI bias, the system incorporates federated learning, a privacy-preserving technique that enables multiple hospitals to collaboratively train AI models without sharing individual patient data.