Sound source localization is used in various applications such as industrial noise control, speech detection in mobile phones, and speech enhancement in hearing aids. Modern video conferencing setups also use sound source localization: the position of a speaker is estimated from the differences between the audio waves received by a microphone array, and after detection the camera focuses on the speaker's location. The human brain is likewise able to infer the location of a speaker from auditory signals. It uses, among other cues, the differences in amplitude and arrival time of the sound wave at the two ears, called the interaural level difference and the interaural time difference. However, the substrate and computational primitives of the brain differ from those of classical digital computers. Due to its low power consumption of around 20 W and its real-time performance, the human brain has become a great source of inspiration for emerging technologies. One of these technologies is neuromorphic hardware, which implements the fundamental principles of brain computation identified to date using \ac{CMOS} technologies and novel devices. In this work we propose the first neuromorphic closed-loop robotic system that uses the interaural time difference for sound source localization in real time. Our system can successfully locate sound sources such as human speech. In a closed-loop experiment, the binaural robotic platform turned immediately toward the sound source with a turning velocity proportional to the angular difference between the sound source and the pan-tilt unit. After this initial turn, the robotic platform remained oriented toward the sound source. Even though the system uses only a small fraction of the available hardware resources and was tuned only by hand, it already reaches a performance comparable to other neuromorphic approaches. The sound source localization system presented in this article brings us one step closer to neuromorphic event-based systems for robotics and embodied computing.
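For context, the interaural time difference exploited as a cue here follows, for a far-field source, a standard geometric approximation; the symbols below (microphone spacing $d$, speed of sound $c$, source azimuth $\theta$) are generic textbook quantities rather than parameters taken from this work:
% Textbook far-field ITD approximation (not a measurement from this system)
\begin{equation*}
    \Delta t \approx \frac{d \sin\theta}{c}
\end{equation*}
For example, a human-like spacing of $d \approx 0.2\,\mathrm{m}$ with $c \approx 343\,\mathrm{m/s}$ gives a maximum $\Delta t$ of roughly $0.6\,\mathrm{ms}$ at $\theta = 90^{\circ}$. The proportional turning behavior described above can likewise be summarized, under the assumption of a simple proportional controller with a hypothetical gain $k_{p}$, as
% Proportional control law sketch; k_p is a hypothetical gain, not a value reported in this work
\begin{equation*}
    \omega = k_{p}\,\bigl(\theta_{\mathrm{source}} - \theta_{\mathrm{pan}}\bigr).
\end{equation*}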