The proliferation of Large Language Models (LLMs) has ushered in a transformative era in artificial intelligence, particularly in network security. These models demonstrate strong capabilities, from auditing code for vulnerabilities to proposing remediations, and increasingly inform security decision-making. When multiple LLMs are deployed for security purposes, ensuring the consistency and accuracy of the threat intelligence shared among them becomes a key challenge. This paper introduces the Dynamic Incentive Sharing Mechanism (DISM), an incentive mechanism designed to foster the sharing of network threat intelligence among LLMs. Grounded in evolutionary game theory, DISM addresses knowledge discrepancies among LLMs by promoting collaboration and seamless exchange of security threat intelligence, and it formulates incentive strategies that mitigate free-riding behavior among participating models. Experimental simulations and comparative analyses verify the effectiveness of DISM in promoting and regulating threat intelligence sharing.
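To make the free-riding dynamic concrete, the following is a minimal replicator-dynamics sketch of a two-strategy game (share vs. free-ride) with an external incentive. It illustrates only the general evolutionary-game intuition; the payoff parameters b, c, r and the function names are illustrative assumptions, not DISM's actual formulation.

```python
# Replicator dynamics for a 2-strategy game: Share vs. Free-ride.
# Illustrative sketch only; parameter values are hypothetical,
# not taken from the paper.
#   b: benefit a model gains from a peer that shares intelligence
#   c: cost a model pays to share
#   r: incentive (reward) paid to sharers by the mechanism

def replicator_step(x, b, c, r, dt=0.01):
    """One Euler step of dx/dt = x(1-x)(pi_share - pi_free),
    where x is the fraction of sharers in the population."""
    pi_share = x * b - c + r   # expected payoff of a sharer
    pi_free = x * b            # expected payoff of a free-rider
    return x + dt * x * (1 - x) * (pi_share - pi_free)

def simulate(x0, b, c, r, steps=5000):
    x = x0
    for _ in range(steps):
        x = replicator_step(x, b, c, r)
    return x

if __name__ == "__main__":
    # Without an incentive (r < c), free-riding takes over;
    # with r > c, sharing becomes the dominant strategy.
    print(simulate(x0=0.5, b=4.0, c=1.0, r=0.0))  # converges near 0.0
    print(simulate(x0=0.5, b=4.0, c=1.0, r=2.0))  # converges near 1.0
```

In this toy model the payoff difference reduces to r - c, so sharing spreads through the population exactly when the incentive exceeds the cost of sharing; clearing this kind of threshold is the role an incentive mechanism such as DISM plays.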