In the domain of continuous control, deep reinforcement learning (DRL) has demonstrated promising results. However, the dependence of DRL on deep neural networks (DNNs) leads to a demand for extensive data and increased computational complexity. To address this issue, a novel hybrid architecture for actor-critic reinforcement learning (RL) algorithms is introduced. The proposed architecture integrates the broad learning system (BLS) with a DNN, aiming to merge the strengths of the two distinct architectural paradigms. Specifically, the critic network is implemented with BLS, while the actor network is constructed as a DNN. The critic network parameters are estimated by ridge regression, and the actor network parameters are optimized through gradient descent. The effectiveness of the proposed algorithm is evaluated on two classic continuous control tasks, and its performance is compared with the widely recognized deep deterministic policy gradient (DDPG) algorithm. Numerical results show that the proposed algorithm surpasses DDPG in computational efficiency and exhibits an accelerated learning trajectory. Extending the proposed hybrid architecture to other actor-critic RL algorithms is suggested for future study.
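The core idea above — a critic whose output weights are obtained in closed form by ridge regression over broad (BLS-style) features, rather than by backpropagation — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact architecture: the feature construction, dimensions, and the placeholder regression targets are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_features(x, W_feat, W_enh):
    """Map raw (state, action) input to broad features: linear random
    feature nodes concatenated with nonlinear enhancement nodes
    (a common BLS construction; details here are illustrative)."""
    z = x @ W_feat            # feature nodes (random linear map)
    h = np.tanh(z @ W_enh)    # enhancement nodes (nonlinear map)
    return np.concatenate([z, h], axis=1)

# Illustrative dimensions (hypothetical, not from the paper)
state_dim, act_dim = 4, 1
n_feat, n_enh = 32, 64
W_feat = rng.standard_normal((state_dim + act_dim, n_feat))
W_enh = rng.standard_normal((n_feat, n_enh))

# A batch of (state, action) inputs and TD-style regression targets
# y = r + gamma * Q'(s', a'); random placeholders stand in for both.
X = rng.standard_normal((256, state_dim + act_dim))
y = rng.standard_normal(256)

# Ridge regression gives the critic's output weights in closed form:
#   beta = (A^T A + lam * I)^{-1} A^T y
A = bls_features(X, W_feat, W_enh)
lam = 1e-2
beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Critic estimates for the batch: Q(s, a) ≈ features(s, a) @ beta
q_values = A @ beta
```

Because the output weights are solved in one linear step instead of many gradient iterations, this kind of critic update is where the claimed computational advantage over a fully DNN-based critic (as in DDPG) would come from; the DNN actor would still be trained by gradient descent on the critic's estimates.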