3D imaging data has come to the forefront in recent years across medical imaging, robotics, autonomous vehicles, and computer vision. Artificial intelligence, machine learning, and deep learning help us develop models for the recognition, classification, and translation of data. While these techniques have been used in industry for a decade, the complexity of working with 3D imaging data remains a major obstacle to processing such data and executing run-time applications that use formats such as 3D point clouds or meshes. A key representation of 3D image data is the depth map, in which the same data is expressed in a 2-dimensional format. This 2D representation of the 3D point cloud preserves the same data and is produced in a lossless manner, meaning that no data or state of the defined object is lost. In this paper we propose a novel method for converting 3D point clouds and meshes into 2D depth images, also called depth maps. This research focuses on a mathematical model that performs this conversion. The mathematical model that we developed carries out the conversion in minimal time. Thus, this research aims to solve the problem of converting 3D image data into 2D depth maps with no information loss, at high speed, and with low time complexity.
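To illustrate the depth-map idea discussed above, the sketch below projects a point cloud onto a fixed-resolution 2D grid using a simple orthographic projection. This is only a minimal baseline for intuition, assuming a NumPy array of (x, y, z) points and a hypothetical function name; it is not the mathematical model proposed in this paper.

```python
# Illustrative sketch only: a naive orthographic projection of an (N, 3) point
# cloud onto a fixed-resolution depth image. This is NOT the proposed method;
# the function and parameter names here are hypothetical.
import numpy as np

def point_cloud_to_depth_map(points: np.ndarray, resolution: int = 256) -> np.ndarray:
    """Project points (x, y, z) onto the XY plane, storing z as depth.

    Where several points fall into the same pixel, the nearest point
    (smallest z) is kept; pixels with no points remain NaN.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Normalise x and y into integer pixel coordinates of the target grid.
    u = ((x - x.min()) / (np.ptp(x) + 1e-9) * (resolution - 1)).astype(int)
    v = ((y - y.min()) / (np.ptp(y) + 1e-9) * (resolution - 1)).astype(int)

    depth = np.full((resolution, resolution), np.nan)
    # Write points in order of decreasing z so that, for each pixel,
    # the closest (smallest z) point is assigned last and survives.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth

# Usage example with a random synthetic cloud.
cloud = np.random.rand(10_000, 3)
depth_map = point_cloud_to_depth_map(cloud, resolution=128)
```

Note that such a naive projection discards occluded points that share a pixel, which is exactly the kind of information loss the method proposed in this paper is intended to avoid.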