With the rise of computer-based work, prolonged poor sitting posture can lead to adverse health effects such as scoliosis. However, existing sitting posture detection systems often require users to purchase additional hardware, and camera-based systems may compromise user privacy and are affected by varying lighting conditions. In this paper, we propose a sitting posture detection system based on acoustic signals generated by a smartphone. First, acoustic signals corresponding to different sitting postures are captured through the smartphone's built-in speaker and microphone. An innovative adaptive-threshold segmentation technique is then designed to extract the relevant signal segments, and a deep learning model is built for posture recognition. To meet the demand for a lightweight model, knowledge distillation is applied to compress the model while preserving its accuracy. Experimental results demonstrate that our sitting posture detection model is effective and robust, making it broadly applicable.
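To make the adaptive-threshold segmentation step concrete, the sketch below shows one common way such a scheme can work: frames whose short-time energy exceeds a threshold derived from the recording's own statistics (mean plus a multiple of the standard deviation) are treated as active and merged into segments. The frame length, hop size, and scaling factor `k` are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def segment_by_adaptive_threshold(signal, frame_len=256, hop=128, k=1.5):
    """Split a 1-D acoustic signal into active segments.

    The threshold adapts to the recording: frames whose short-time
    energy exceeds mean + k * std of all frame energies are kept.
    frame_len, hop, and k are illustrative choices, not the paper's.
    """
    # Short-time energy per frame
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    energy = np.array([
        np.sum(signal[i * hop:i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

    # Threshold adapts to this particular recording's energy statistics
    threshold = energy.mean() + k * energy.std()
    active = energy > threshold

    # Merge consecutive active frames into (start_sample, end_sample) spans
    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i * hop
        elif not is_active and start is not None:
            segments.append((start, i * hop + frame_len))
            start = None
    if start is not None:
        segments.append((start, n_frames * hop + frame_len))
    return segments
```

Because the threshold is computed from each recording, the same code tolerates differences in microphone gain and ambient noise level without a hand-tuned fixed cutoff.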