Sparse Dictionary Learning seeks to represent data as combinations of a small number of basic elements, or atoms, drawn from an overcomplete dictionary. When all observations are considered together, this framework can be viewed as a form of matrix factorization: the data matrix Y is decomposed into a dictionary D and a matrix of sparse coefficients X, so that Y = DX. Traditional algorithms such as MOD and K-SVD alternate between estimating the sparse coefficients (sparse coding) and refining the dictionary atoms (dictionary update). They are computationally efficient in practice but inherently non-convex, with convergence guarantees only when initialized near a minimum. I will describe a simple iterative scheme for atom recovery, Iterative Atom Refinement (IAR), that has convergence guarantees for any initialization. The analysis reveals a self-reinforcing selection rule: once the iterate has a small overlap with a true atom, subsequent updates amplify this alignment.
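The alternating structure described above can be sketched in a few lines of NumPy. The following is a minimal illustration of a MOD-style scheme, not the IAR method itself: the sparse-coding step here is a simple greedy correlation-and-least-squares heuristic standing in for OMP, and the dictionary update is the closed-form least-squares solve D = Y X⁺ that characterizes MOD. All function names and parameter choices are illustrative, not from the source.

```python
import numpy as np

def sparse_code(Y, D, k):
    """Greedy per-column sparse coding: keep the k atoms most
    correlated with the signal, then least-squares fit on that
    support (a simplified stand-in for OMP)."""
    n_atoms = D.shape[1]
    X = np.zeros((n_atoms, Y.shape[1]))
    for j in range(Y.shape[1]):
        corr = D.T @ Y[:, j]
        support = np.argsort(-np.abs(corr))[:k]
        coef, *_ = np.linalg.lstsq(D[:, support], Y[:, j], rcond=None)
        X[support, j] = coef
    return X

def mod_dictionary_learning(Y, n_atoms, k, n_iter=20, seed=0):
    """Alternating minimization in the style of MOD:
    sparse coding, then the closed-form update D = Y X^+."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
    for _ in range(n_iter):
        X = sparse_code(Y, D, k)                # fix D, solve for X
        D = Y @ np.linalg.pinv(X)               # fix X, solve for D
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms
    return D, X
```

Note that neither step is convex in (D, X) jointly, which is why such schemes depend on initialization, in contrast to the guarantees claimed for IAR.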