Background
There is an increasing number of tools that use AI to assess published clinical research.

Objective
We set out to address the question: "Which AI-enhanced tools have been developed or used to help evaluate the trustworthiness of clinical trial publications?"

Methods
We searched five databases (Epistemonikos, Google Scholar, PubMed, Scopus, Web of Science) for publications of tools, checklists or methods, irrespective of how many items they contained. We excluded studies that did not apply the tool to publications of randomised clinical trials. The search was restricted to publications in English, and the last search was run on 27 March 2025. For each identified tool, we recorded the domains and questions for which it had been used and, where reported, extracted information on accuracy.

Results
We identified 16 publications describing 17 tools covering four domains (governance, plausibility, plagiarism, reporting). No papers or tools addressed specific questions in the domain of statistics, although one tool was used to prepare data for statistical trustworthiness assessment. Four papers checked adherence to the CONSORT and PRISMA guidelines; four looked for evidence of manipulation or duplication of images; seven used various tools to look for indications that a publication may not have been written by the named authors (e.g. AI-generated text); and one checked four other governance questions and two other reporting questions, and evaluated whether data could be extracted for statistical trustworthiness assessment.

Conclusion
Used in conjunction with traditional software-based and human trustworthiness checks, AI-assisted tools can be a useful aid to assessment. Assessors must, however, have realistic expectations about the capabilities and limitations of AI. Any AI-assisted assessment must align with established guidelines and research practices, and outputs must be checked carefully: only humans should make the final judgements on clinical and methodological relevance and plausibility. With this proviso, we predict that, given the increasing quality and user-friendliness of AI and the ever-growing demand for trustworthiness assessment, the use of AI in this area will grow rapidly.