This is the latest in a series of successful ARDUOUS (Annotation of useR Data for UbiquitOUs Systems) workshops. In this workshop, we will explore key topics surrounding the role of user data, scoring, and ground truth data throughout the lifecycle of machine learning and artificial intelligence systems. This is particularly timely in light of the new EU AI Act, which will come into force over the coming months and years. The use of user data to evaluate the efficacy and performance of AI systems is well understood, but as the regulatory landscape increasingly accounts for the risks that automated decision-making and data analysis pose to participants, there is a growing need to better understand and characterise issues that may affect the fundamental rights of those whose data is processed by these systems. These include, but are not limited to, the characterisation of model bias, robustness, and sustainability issues, as well as the detection of security and privacy problems such as malicious training data, data poisoning, or leakage of source data. As full-lifecycle monitoring of AI systems becomes a priority for practitioners and system deployers, methods drawn from academic research must be translated into accessible real-world policy and practice. Transparency in AI also benefits significantly from the availability of ground truth data, which enables us to concretely understand and characterise the performance of automated systems in real-world use cases, and it is likewise in scope for this workshop.