For a smooth function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, we are familiar with the first-order optimality condition: the gradient must vanish at a local extremum. For a constrained optimization problem, the corresponding condition is the Lagrange multiplier rule. In this talk, we shall see the analogues of these conditions in the nonsmooth setting and in much more general (Hilbert/Banach) spaces.
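As a brief sketch of the kind of analogue meant (a standard fact from convex analysis, supplied here for illustration rather than taken from the abstract): for a convex, possibly nonsmooth $f$ on a Banach space $X$, the gradient is replaced by the subdifferential, and the gradient condition becomes a subdifferential inclusion.

```latex
% Smooth case (Fermat's rule): at a local extremum \bar{x} of a
% differentiable f, one has \nabla f(\bar{x}) = 0.
%
% Nonsmooth convex case: define the subdifferential at \bar{x} by
\[
  \partial f(\bar{x})
  \;=\;
  \bigl\{\, x^{*} \in X^{*} \;:\;
    f(x) \,\ge\, f(\bar{x}) + \langle x^{*},\, x - \bar{x} \rangle
    \ \ \forall x \in X \,\bigr\},
\]
% and the analogue of the gradient condition reads
\[
  0 \,\in\, \partial f(\bar{x}),
\]
% which, for convex f, characterizes the global minimizers.
```

When $f$ is convex and differentiable, $\partial f(\bar{x}) = \{\nabla f(\bar{x})\}$, so the inclusion reduces to the familiar smooth condition.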