The work considers the $N$-server distributed computing scenario with $K$ users requesting functions that are linearly-decomposable over an arbitrary basis of $L$ real (potentially non-linear) subfunctions, and the aim is to receive the function outputs with zero error, reduced computing cost ($\gamma$; the fraction of subfunctions that each server must compute), and reduced communication cost ($\delta$; the fraction of users that each server must connect to). For a matrix $F \in \mathbb{R}^{K \times L}$ representing the linearly-decomposable form of the $K$ requested functions, our problem is shown to be equivalent to the open problem of zero-error sparse matrix factorization, which seeks $F = DE$ over a special subset of $\gamma$-sparse matrices $E \in \mathbb{R}^{N \times L}$ and $\delta$-sparse matrices $D \in \mathbb{R}^{K \times N}$ that respectively define which servers compute each subfunction and which users connect to each server. We here propose an achievable scheme that designs $E$ and $D$ by utilizing a fixed-support SVD-based matrix factorization method that first splits $F$ into properly sized and carefully positioned submatrices, and then decomposes these into properly designed submatrices of $D$ and $E$. For the zero-error case and under basic dimensionality and single-shot assumptions, this work reveals that the optimal number of servers can be upper-bounded as $N_{\mathrm{opt}} \le \min(\Delta, \Gamma)\lfloor K/\Delta \rfloor \lfloor L/\Gamma \rfloor + \min(\mathrm{mod}(K, \Delta), \Gamma)\lfloor L/\Gamma \rfloor + \min(\mathrm{mod}(L, \Gamma), \Delta)\lfloor K/\Delta \rfloor + \min(\mathrm{mod}(K, \Delta), \mathrm{mod}(L, \Gamma))$, where $\Delta = \delta K$ and $\Gamma = \gamma L$ respectively denote the number of users each server connects to and the number of subfunctions each server computes. In the special case of feasible single-shot schemes satisfying $\delta^{-1}, \gamma^{-1} \in \mathbb{N}$, it reveals that the maximum possible ratio $K/N$, i.e., the zero-error capacity of this system, is $C = \max\!\left(\tfrac{K}{L}\delta, \gamma\right)$.
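
To make the construction concrete, below is a minimal numpy sketch of the tiling-plus-SVD idea described above: $F$ is cut into $\Delta \times \Gamma$ blocks, each block is factored through a truncated SVD, and the per-block factors are embedded into globally sparse $D$ and $E$ with $F = DE$. The function name `tile_svd_factorize`, the rank threshold, and the assembly details are illustrative assumptions, not the paper's exact construction.

```python
# Sketch (assumed, not the paper's code): tile F into Delta x Gamma blocks,
# factor each block by SVD, and assemble sparse D (K x N) and E (N x L).
import numpy as np

def tile_svd_factorize(F, Delta, Gamma):
    """Factor F (K x L) as D @ E using one SVD per Delta x Gamma tile.

    Each tile contributes at most min(Delta, Gamma) servers (columns of D /
    rows of E); a server's D-column is supported only on the tile's <= Delta
    users, and its E-row only on the tile's <= Gamma subfunctions.
    """
    K, L = F.shape
    D_cols, E_rows = [], []
    for r0 in range(0, K, Delta):           # tile row ranges (users)
        for c0 in range(0, L, Gamma):       # tile column ranges (subfunctions)
            tile = F[r0:r0 + Delta, c0:c0 + Gamma]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            rank = int(np.sum(s > 1e-12))   # tile rank <= min(tile dimensions)
            for i in range(rank):
                d = np.zeros(K)
                e = np.zeros(L)
                d[r0:r0 + tile.shape[0]] = U[:, i] * s[i]   # delta-sparse column of D
                e[c0:c0 + tile.shape[1]] = Vt[i, :]         # gamma-sparse row of E
                D_cols.append(d)
                E_rows.append(e)
    D = np.column_stack(D_cols)             # K x N
    E = np.vstack(E_rows)                   # N x L
    return D, E

# Toy check: K = 5 users, L = 7 subfunctions, Delta = 2, Gamma = 3.
rng = np.random.default_rng(0)
F = rng.standard_normal((5, 7))
D, E = tile_svd_factorize(F, Delta=2, Gamma=3)
assert np.allclose(F, D @ E)                # zero-error recovery of all K functions
print(D.shape[1])                           # number of servers N used by the scheme
```

For a generic (full-rank) $F$, every tile attains rank equal to the smaller of its two dimensions, so the printed server count matches the $N_{\mathrm{opt}}$ upper bound above (here $8 + 2 + 2 + 1 = 13$).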