Businesses have invested billions in artificial intelligence (AI) applications, leading to a sharp rise in the number of AI applications released to customers. Building on previous approaches to attacking machine learning models, this article aims ‘to devise a practical means of benchmarking performance in more realistic scenarios.’[9] We conduct a comparative analysis of adversarial attacks against large language models (LLMs) deployed through application programming interfaces (APIs) and the same attacks against locally deployed models. The article adapts adversarial attacks for remote model endpoints and uses them to construct a threat model that security organisations can apply when prioritising controls for AI systems deployed through APIs. This paper contributes: 1) a public repository of adversarial attacks adapted to handle remote models, available at https://github.com/l3ra/adversarial-ai, 2) benchmarking results comparing the effectiveness of attacks on remote models with that of attacks on local models, and 3) a framework for assessing controls for future AI system deployments. By providing a practical framework for benchmarking the security of remotely deployed AI systems, this study advances the understanding of adversarial attacks against natural language processing models deployed in production applications.
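To illustrate the kind of adaptation involved, the sketch below shows one way a remote API endpoint could be wrapped behind the simple query interface a black-box attack expects from a local model. This is a minimal illustration only: the class name, endpoint URL, request payload, and response field are assumptions for the example, not the actual interface of the linked repository.

```python
# Minimal sketch: exposing a remote LLM endpoint through a
# local-model-style query interface, so a black-box attack that
# only needs input -> output access can target it unchanged.
# The endpoint URL, payload schema, and "completion" field are
# hypothetical, not the repository's actual API.
import requests


class RemoteModelWrapper:
    """Wraps an API-deployed LLM behind a callable interface."""

    def __init__(self, endpoint_url: str, api_key: str):
        self.endpoint_url = endpoint_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def __call__(self, prompt: str) -> str:
        # One HTTP round trip per query; a black-box attack loop
        # would call this repeatedly with mutated prompts, exactly
        # as it would invoke a local model's forward pass.
        response = requests.post(
            self.endpoint_url,
            headers=self.headers,
            json={"prompt": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["completion"]


if __name__ == "__main__":
    model = RemoteModelWrapper("https://api.example.com/v1/generate", "KEY")
    print(model("Benign probe prompt"))
```

Because the wrapper hides network transport behind a plain call, the same attack code can be benchmarked against local and remote deployments, which is the comparison this paper performs.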