In Chinese grammatical error correction, the collaboration between error detection and error correction is an important research direction. This paper proposes a two-stage model that combines contrastive learning with sequence labeling on top of pre-trained models. The model consists of a detection stage and a correction stage. The detection stage uses the MacBERT architecture: an improved BIO sequence-labeling scheme, together with a contrastive learning mechanism, locates the error positions and extracts the error information. The correction stage uses the BART model: the error-information labels output by the detection stage are concatenated with the original text to form a new input, and the model's generative ability is used to correct the errors. To verify the model's performance, comparative experiments were conducted on the NLPCC2018 dataset, where its F0.5 and recall exceeded those of the baseline models.
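The data flow between the two stages can be sketched as follows. This is a minimal illustration of how detection-stage BIO labels might be turned into an augmented input for the correction model; the tag names (e.g. `B-S` for a substitution error), the `type:span` rendering of error information, and the `[SEP]` concatenation format are assumptions for illustration, not the paper's exact specification.

```python
def extract_error_spans(labels):
    """Collect (start, end) character spans marked as errors by BIO tags.

    `B-*` opens a span, `I-*` extends it, `O` closes it. The tag
    inventory here is hypothetical, not the paper's exact scheme.
    """
    spans, start = [], None
    for i, tag in enumerate(labels):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O":
            if start is not None:
                spans.append((start, i))
                start = None
        # I-* tags simply extend the currently open span
    if start is not None:
        spans.append((start, len(labels)))
    return spans


def build_correction_input(text, labels, sep="[SEP]"):
    """Concatenate detection-stage error information with the original
    text to form the correction-stage (seq2seq) input string."""
    assert len(text) == len(labels), "one BIO tag per character"
    spans = extract_error_spans(labels)
    # Render each span as "errortype:surface", e.g. "S:天气"
    info = ";".join(
        f"{labels[s].split('-', 1)[1]}:{text[s:e]}" for s, e in spans
    )
    return f"{info} {sep} {text}" if info else text
```

In a full pipeline, the resulting string would be fed to the BART encoder, so the generator conditions on both the sentence and where the detector found errors.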