A colleague and I have just released on arXiv a paper titled “Enabling Secure and Ephemeral AI Workloads in Data Mesh Environments”. The key innovation is pushing the now well-established idea of minimal, immutable data structures further up and down the software infrastructure stack than others have done, resulting in a system with a reduced attack surface (from a cybersecurity perspective), improved portability across environments, and minimal impact on functionality.
Here’s the abstract of the paper:
Many large enterprises that operate highly governed and complex ICT environments have no efficient and effective way to support their Data and AI teams in rapidly spinning up and tearing down self-service data and compute infrastructure, experimenting with new data analytics tools, and deploying data products into operational use. This paper proposes a key piece of the solution to the overall problem, in the form of an on-demand, self-service data-platform infrastructure that empowers decentralised data teams to build data products on top of centralised templates, policies and governance. The core innovation is an efficient method for leveraging immutable container operating systems and infrastructure-as-code methodologies to create, from scratch, vendor-neutral and short-lived Kubernetes clusters on-premises and in any cloud environment. Our proposed approach can serve as a repeatable, portable and cost-efficient alternative or complement to commercial Platform-as-a-Service (PaaS) offerings, which is particularly important for supporting interoperability in complex data mesh environments with a mix of modern and legacy compute infrastructure.
Here’s the link to the paper: https://arxiv.org/abs/2506.00352
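To make the ephemeral-cluster idea a little more concrete, here is a toy Python sketch of the create / deploy / tear-down lifecycle the abstract describes. It is not the provisioning pipeline from the paper: it uses the open-source `kind` CLI as a local stand-in for the immutable-OS, infrastructure-as-code machinery, and the cluster name and manifest path are made up for illustration.

```python
"""Toy sketch of an ephemeral Kubernetes cluster lifecycle:
create a short-lived cluster, deploy a data-product workload, tear it all down.
Uses `kind` as a local stand-in for the paper's provisioning approach;
the cluster name and manifest path below are hypothetical."""
import subprocess

CLUSTER_NAME = "ephemeral-data-product"   # illustrative name
WORKLOAD_MANIFEST = "data-product.yaml"   # illustrative manifest path


def run(cmd: list[str]) -> None:
    """Echo and run a CLI command, failing fast on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def create_cluster() -> None:
    # Spin up a throwaway Kubernetes cluster from scratch.
    run(["kind", "create", "cluster", "--name", CLUSTER_NAME])


def deploy_workload() -> None:
    # Apply the workload manifest against the new cluster's context
    # (kind names its kubeconfig context "kind-<cluster name>").
    run(["kubectl", "--context", f"kind-{CLUSTER_NAME}",
         "apply", "-f", WORKLOAD_MANIFEST])


def tear_down() -> None:
    # Destroy the cluster entirely; nothing persists after this call.
    run(["kind", "delete", "cluster", "--name", CLUSTER_NAME])


if __name__ == "__main__":
    try:
        create_cluster()
        deploy_workload()
    finally:
        tear_down()
```

The point of the sketch is simply that the whole cluster is treated as disposable: the `finally` block guarantees the infrastructure never outlives the run, which is the spirit of the short-lived clusters described in the abstract.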
Enjoy.