Music source separation isolates individual components, such as vocals and instruments, from mixed audio tracks. The task has attracted significant attention because of its diverse applications, including music remixing, karaoke systems, audio restoration, music information retrieval (MIR), music education and practice, forensic audio analysis, music transcription, gaming and virtual reality (VR), performance analysis for musicians, and music sampling and licensing. This study investigates feature extraction from audio sources using DenseNet, drawing on the DSD100, MUSDB, and MUSDB18-HQ datasets, which offer a wide variety of musical compositions for benchmarking. Precision, Recall, and F1-score are used to assess the accuracy and reliability of the extracted features. Among these datasets, MUSDB18-HQ yields the best performance, particularly for features related to vocals and drums. By providing insight into the potential and challenges of music separation, this study contributes to the ongoing development of music technology and its applications.
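As a minimal illustration of the evaluation metrics named above, the following sketch computes Precision, Recall, and F1-score from binary source-activity labels. The per-frame vocal-activity labels and the framing of separation quality as a binary classification task are assumptions for illustration, not the study's actual evaluation pipeline:

```python
def precision_recall_f1(predicted, reference):
    """Compute Precision, Recall, and F1 from parallel binary label lists.

    predicted: model's per-frame decision (1 = target source active)
    reference: ground-truth per-frame activity labels
    """
    tp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 1)
    fp = sum(1 for p, r in zip(predicted, reference) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predicted, reference) if p == 0 and r == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical per-frame vocal-activity labels for an 8-frame excerpt.
pred = [1, 1, 0, 1, 0, 0, 1, 1]
ref = [1, 0, 0, 1, 1, 0, 1, 1]
p, r, f = precision_recall_f1(pred, ref)
print(round(p, 3), round(r, 3), round(f, 3))
```

In practice a library implementation such as scikit-learn's `precision_recall_fscore_support` would typically be used; the manual version above only makes the definitions of the three metrics explicit.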