Hardware accelerators for artificial intelligence (AI) applications must often meet stringent constraints on accuracy and throughput. In addition to architecture/algorithm improvements, high-performance computational techniques such as mixed precision are also required. In this paper, a floating-point fused multiply-add (FMA) unit supporting mixed/multiple precision is proposed. The proposed design supports a wide range of conventional precision formats (such as the half- and single-precision formats) as well as emerging precision formats (including E4M3, E5M2, DLFloat, BFloat16, and TF32). Beyond these formats, the proposed design can flexibly adjust the exponent and mantissa lengths of 8-, 16-, and 32-bit floating-point numbers based on the needs of an application. The proposed FMA can be configured either to perform a normal FMA operation supporting multiple precisions or to perform mixed-precision operations in an ASIC implementation. The proposed FMA is fully pipelined; in each cycle, the input bit streams are processed according to the provided configuration, independently of previous cycles. For the normal FMA operation, the proposed design shares resources to parallelize several operations based on the available hardware and the required precision. Mixed-precision computation accumulates lower-precision dot products into a higher-precision result to avoid overflow/underflow. The proposed design improves computational accuracy by adding all possible dot products at the same time while reducing the number of rounding operations, thereby limiting rounding error. An innovative method to accumulate the dot products and the aligned addend is also proposed for the normal/mixed-precision FMA operation. By considering tradeoffs between reusing the available hardware and eliminating unnecessary complex units, a design that is more efficient and flexible in terms of hardware metrics and supported mixed-precision computation is attained compared to other designs found in the technical literature.
Extensive simulation results for a comparative analysis are provided.
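The accuracy benefit of accumulating lower-precision products into a higher-precision sum can be illustrated with a minimal software sketch (this is an assumption-laden illustration of the general technique, not the paper's hardware design; the vector length and values are arbitrary). A dot product accumulated entirely in half precision is compared against one whose half-precision products are accumulated in single precision:

```python
import numpy as np

# Hypothetical sketch (not the proposed FMA hardware): compare a dot product
# accumulated entirely in float16 against one whose float16 products are
# accumulated in float32, as mixed-precision designs do to avoid
# overflow/underflow and limit rounding error.
N = 4096
a = np.full(N, 0.1, dtype=np.float16)
b = np.full(N, 0.1, dtype=np.float16)
products = a * b  # element-wise products, each rounded to float16

# Naive: sequential accumulation in float16. The running sum stagnates once
# each product is smaller than half a unit in the last place of the
# accumulator, so further additions are rounded away.
acc16 = np.float16(0.0)
for p in products:
    acc16 = np.float16(acc16 + p)

# Mixed precision: the same float16 products accumulated in float32.
acc32 = np.float32(0.0)
for p in products:
    acc32 = acc32 + np.float32(p)

exact = 0.1 * 0.1 * N  # ideal real-valued result, 40.96
print(float(acc16), float(acc32), exact)
```

Run sequentially, the float16 accumulator stalls well below the true value of 40.96, while the float32 accumulator stays close to it; this accumulation-precision gap is what mixed-precision FMA units are designed to close in hardware.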