Hi everyone,
Is Flow Matching invertible like Normalizing Flows? Can I use the inverse of the generator G with the same parameters to retrieve the noise z from a given x?

Does Flow Matching lie between diffusion models and Normalizing Flows? I think the concept derives from Continuous Normalizing Flows and that it generalizes diffusion methods.

Thanks for clearing that up. Are you aware of any papers on Flow Matching that show generation results on facial recognition datasets, for example like in Glow?

In Flow Matching (FM), you learn a vector field at each time step. For sampling, you solve the simple ODE dx/dt = v_t(x_t), which represents the mapping from source to target. By reversing the sign of the vector field (equivalently, integrating backward in time), you obtain the mapping from target to source. FM generalizes diffusion models (DMs) and can also be shown to be equivalent to Continuous Normalizing Flows (CNFs).
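A minimal sketch of this invertibility, using a simple Euler solver. The vector field `v` here is a toy hand-written function standing in for a learned model (an assumption for illustration, not a trained FM network); the point is only that integrating the same field backward in time recovers the source point up to discretization error:

```python
import numpy as np

def v(x, t):
    # Toy stand-in for a learned vector field v_t(x); NOT a trained model.
    return np.sin(x) + t

def integrate(x, t0, t1, steps=1000):
    # Euler solver for dx/dt = v(x, t). When t1 < t0, dt is negative,
    # which is exactly the time-reversed integration that inverts the map.
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * v(x, t)
        t += dt
    return x

z = np.array([0.5, -1.2, 2.0])   # source ("noise") sample
x = integrate(z, 0.0, 1.0)       # source -> target
z_rec = integrate(x, 1.0, 0.0)   # target -> source (the inverse direction)
print(np.max(np.abs(z - z_rec))) # small Euler discretization error
```

In practice you would use a higher-order or adaptive ODE solver rather than fixed-step Euler, but the forward/backward symmetry is the same: one set of parameters gives both directions.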

What is your use case? If it is pure generation, then you should probably stick with a flow matching/diffusion model. They are much more efficient to train than simulation-based continuous normalizing flows (Neural ODEs), and they avoid the limiting downsides of the invertible architectures used in discrete normalizing flow models (e.g. Glow).