With advancements in automation and high-throughput techniques, materials discovery problems with multiple conflicting objectives can now be tackled in experimental laboratories. Because physical experimentation is severely constrained by evaluation budget, maximizing the efficiency of optimization becomes crucial. We discuss the limitations of using hypervolume as a performance indicator of optimality across an entire multi-objective optimization run, and propose new metrics tailored to experimentation: performance on complex, high-dimensional problems; minimal wastage of evaluations; consistency and robustness of optimization; and scalability to high throughputs. Using these metrics, we compare two conceptually different, state-of-the-art algorithms (Bayesian and evolutionary) on synthetic and real-world datasets. We discuss the merits of both approaches with respect to exploration and exploitation, noting that fully resolving the Pareto front may be the main aim when greater scientific value lies in understanding the materials space, and thereby provide a perspective for materials scientists implementing optimization on their platforms.