The Trust Deficit in Automated Machine Learning
A systematic review of Automated Machine Learning (AutoML) research reveals a significant gap in trustworthiness. AutoML frameworks automate key data science tasks such as feature engineering, model selection, and hyperparameter tuning, making machine learning more accessible; yet the review found that only a small fraction of recent research focuses on ensuring these automated models are explainable, fair, privacy-preserving, and robust. An analysis of 86 peer-reviewed studies from 2019 to 2025 shows that while traditional AutoML methods are well established, innovations specifically targeting trustworthy-AI requirements remain scarce. The review identifies these critical gaps and proposes new strategies, including protection against adversarial attacks and multicriteria decision-making approaches, to build more reliable and ethical AutoML systems for practical data science applications.
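To make the scope of that automation concrete, the sketch below emulates, in plain scikit-learn, the model-selection and hyperparameter-tuning loop that AutoML frameworks run at much larger scale. The candidate models, grids, and synthetic dataset are illustrative assumptions, not taken from the reviewed study or any particular framework's API.

```python
# Minimal sketch of the model-selection + hyperparameter-tuning loop
# that AutoML frameworks automate. Uses scikit-learn only; the candidate
# models and search spaces are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models, each with a small hyperparameter grid: the "search space".
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200],
                                              "max_depth": [None, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # cross-validated tuning
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected: {best_model.__class__.__name__}, "
      f"cv accuracy: {best_score:.3f}, "
      f"test accuracy: {best_model.score(X_test, y_test):.3f}")
```

Production frameworks explore far larger search spaces than this fully automatically, which helps explain the review's concern: auditing the winning model for fairness, privacy, or robustness after the fact is harder than building those criteria into the search.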
Study Significance: For data scientists and engineers, this review highlights a pivotal shortcoming in the current AutoML ecosystem: the automation of model building has outpaced the integration of governance and ethical safeguards. Deploying AutoML-generated models in sensitive domains, where fairness, explainability, and data privacy are paramount, therefore carries inherent risk. The findings underscore an urgent need to treat trustworthiness as a core component of the MLOps lifecycle, shaping how teams select tools, validate models, and design monitoring systems to ensure responsible and reproducible data science outcomes.
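One proposed strategy, multicriteria decision-making, can be illustrated with a minimal weighted-sum ranking in which candidate models are scored on trust criteria alongside accuracy. The criteria, weights, and scores below are placeholder assumptions for illustration; they do not come from the reviewed studies, and real systems would use audited measurements and possibly richer aggregation schemes.

```python
# Hedged sketch of a multicriteria model-selection step: rank candidate
# models on several trust criteria instead of accuracy alone. All numbers
# here are illustrative placeholders, not values from the study.
candidates = {
    # name: (accuracy, fairness, robustness), each normalized to [0, 1]
    "model_a": (0.92, 0.60, 0.70),
    "model_b": (0.88, 0.85, 0.80),
    "model_c": (0.90, 0.75, 0.65),
}
weights = (0.5, 0.3, 0.2)  # assumed relative importance of each criterion

def weighted_score(scores, weights):
    """Weighted-sum aggregation, one common multicriteria approach."""
    return sum(s * w for s, w in zip(scores, weights))

# Rank candidates by composite score rather than accuracy alone.
ranked = sorted(candidates.items(),
                key=lambda kv: weighted_score(kv[1], weights),
                reverse=True)
for name, scores in ranked:
    print(f"{name}: composite = {weighted_score(scores, weights):.3f}")
```

Under these placeholder weights, the most accurate model is not the top-ranked one, which is the point of the approach: trade-offs between accuracy and trust criteria become explicit choices rather than afterthoughts.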
