Autonomous dishwasher loading is a benchmark problem in robotics that highlights the challenges of robotic perception, planning, and manipulation in an unstructured environment. Current approaches rely on specialized solutions; however, these technologies are not viable in a domestic setting. Learning-based methods seem promising for a general-purpose solution, but they require large amounts of curated data to be applied in real-world scenarios. This paper presents a novel solution based on pre-trained object detection networks. By building a perception, planning, and manipulation framework around an off-the-shelf object detection network, we obtain a robust, general-purpose pick-and-place solution that is easy to deploy, requiring only RGB feedback and a pinch gripper. We first analyze real-world canteen tray data and use it to design our in-lab experimental setup. Our results from real-world scenarios indicate that such approaches are highly desirable for plug-and-play domestic applications with limited calibration. All data and code associated with this work are shared in a public repository.
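
The abstract does not detail the pipeline, but as a rough illustration of the kind of approach it describes, the sketch below runs an off-the-shelf, COCO-pretrained detector (here torchvision's Faster R-CNN, an assumed stand-in for the network used in the paper) on a single RGB frame and converts the most confident detection into a candidate pixel pick point. The detector choice, the score threshold, and the pixel-to-robot calibration step are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): turn one RGB frame into a
# naive pixel-space pick point using a pretrained object detector.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
class_names = weights.meta["categories"]  # COCO names, e.g. "cup", "bowl"


def detect_pick_point(rgb_path: str, score_threshold: float = 0.7):
    """Return (class_name, (u, v)) for the most confident detection, or None."""
    image = Image.open(rgb_path).convert("RGB")
    with torch.no_grad():
        prediction = detector([preprocess(image)])[0]  # detections sorted by score
    for box, label, score in zip(
        prediction["boxes"], prediction["labels"], prediction["scores"]
    ):
        if score >= score_threshold:
            x1, y1, x2, y2 = box.tolist()
            # Use the bounding-box centre as a naive grasp point in pixel space;
            # a hand-eye calibration (hypothetical here) would map (u, v) into
            # the robot frame for the pinch gripper.
            return class_names[int(label)], ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return None


# Example usage: print(detect_pick_point("tray.png"))
```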