This paper introduces DetectBERT, a cutting-edge deep learning framework for statement-level vulnerability detection (SVD) in source code, with a focus on Python. In contrast to conventional vulnerability detection methods that rely on graph-based representations such as Control Flow Graphs (CFGs) or Data Dependency Graphs (DDGs), DetectBERT uses the BERT transformer encoder to learn contextual relationships between code statements directly from the text. The model consists of two BERT-based layers: one dedicated to feature extraction and the other to classification. By eliminating the need for graph-based representations, DetectBERT handles the dynamic nature of programming languages and variations in code across versions, and it reduces reliance on language-specific graph generation tools, improving flexibility and scalability. Experimental results on publicly available datasets show that DetectBERT outperforms traditional graph-based methods in vulnerability detection. Moreover, the paper introduces a novel preprocessing pipeline that labels Python code by analyzing commit history, further improving the model's accuracy and robustness. DetectBERT's design is extendable to other programming languages, and the complete implementation is made available to foster continued research and practical applications in software security.
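
To make the two-layer design described above concrete, the following is a minimal sketch, not the authors' implementation, of how a BERT-based feature-extraction stage followed by a BERT-style classification stage over code statements could be wired together. It assumes PyTorch and Hugging Face Transformers; the encoder checkpoint (`bert-base-uncased`), the [CLS] pooling, the layer sizes, and the class name `StatementVulnClassifier` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' code): a pretrained BERT encoder
# produces one feature vector per code statement, and a second transformer layer
# plus a linear head classifies each statement as vulnerable or benign.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class StatementVulnClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden=768):
        super().__init__()
        # Layer 1: pretrained BERT encoder used as a statement feature extractor.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Layer 2: a transformer layer over statement embeddings, modelling
        # contextual relationships between statements of the same function.
        self.context_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True
        )
        self.classifier = nn.Linear(hidden, 2)  # vulnerable / not vulnerable

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_statements, seq_len) -- one row per code statement.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        stmt_vecs = out.last_hidden_state[:, 0, :]            # [CLS] vector per statement
        context = self.context_layer(stmt_vecs.unsqueeze(0))  # statements as one sequence
        return self.classifier(context.squeeze(0))            # logits per statement

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
statements = [
    "query = 'SELECT * FROM users WHERE id = ' + user_id",
    "cursor.execute(query)",
]
batch = tokenizer(statements, padding=True, truncation=True, return_tensors="pt")
model = StatementVulnClassifier()
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (2, 2)
```

A code-specific encoder could be substituted for the general-purpose BERT checkpoint assumed here; the key point illustrated is that the model operates on raw statement text, with no CFG or DDG construction anywhere in the pipeline.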