Microservice applications, composed of many independent containerized components, are well-suited for hybrid deployments spanning cloud and edge datacenters. Deciding which microservices should run at the edge and which in the cloud is a classic multi-dimensional optimization problem. In this paper, we address the deployment of synchronous microservice applications in a cloud-edge infrastructure with theoretically infinite resources and a pay-per-use cost model for CPU, memory, and network. Our objective is to devise a strategy that leverages the telemetry data and load-balancing capabilities provided by service mesh technologies to keep the average user delay below a configurable threshold at minimal cost. We propose a greedy algorithm that relies on an analytical model of user delay and evaluate its effectiveness both through simulations and through deployment within a novel open-source Kubernetes controller, named Geographical Microservice Autoplacer (GMA). The proposed controller is highly automated and can be used with any application able to export telemetry data to the Istio service mesh.