Text-guided texture generation for three-dimensional objects has developed rapidly with the proliferation of generative artificial intelligence. However, existing text-guided texture generation methods often suffer from artifacts such as inconsistent appearance across views, the Janus problem, and visible seams in texture maps. To address these issues, we propose WonderTex, a novel text-guided texture generation method that produces high-quality, view-consistent, and seamless texture maps. Specifically, we fine-tune a Stable Diffusion model on a large dataset to obtain a multi-view image diffusion model that generates a four-view grid. These four consistent views establish the base texture through back-projection. Subsequently, an automatic view selection and inpainting strategy fills and refines the remaining regions of the texture maps. Extensive experiments show that our method is effective and robust, generating high-quality textures for diverse meshes and prompts and outperforming baseline methods in texture detail, view consistency, and other metrics.
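
As an illustrative sketch only (not the paper's implementation), the back-projection step mentioned above can be viewed as fusing the colors of several rendered views into a shared UV texture, weighting each view's contribution by its visibility; all function and variable names here are hypothetical:

```python
import numpy as np

def back_project(views, weights):
    """Fuse per-view colors into one texture by normalized weighted average.

    views:   (V, H, W, 3) colors already rasterized into UV space, one per view
    weights: (V, H, W) visibility/view-angle weights (0 where a texel is unseen)

    Returns the fused texture and a mask of covered texels; uncovered texels
    would be filled later by the inpainting stage.
    """
    w = weights[..., None]                       # broadcast weights over RGB
    total = w.sum(axis=0)                        # per-texel weight sum
    fused = (views * w).sum(axis=0) / np.clip(total, 1e-8, None)
    mask = total[..., 0] > 0                     # texels seen by any view
    return fused, mask
```

Texels with zero total weight are never observed by the four views, which is precisely the gap the automatic view selection and inpainting strategy is meant to close.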