Regret minimization in the problem of prediction with expert advice under noisy feedback is a fundamental challenge in online learning and sequential decision making. A general framework is proposed for designing and analyzing no-regret algorithms in this setting. When specialized to several canonical channel models, the analysis yields tight bounds on the regret, characterizing how the noise level affects the regret and showing that in some cases the same regret as with noiseless feedback is achievable.
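For context, the noiseless baseline referenced above is typically instantiated by the classic exponential-weights (Hedge) forecaster for prediction with expert advice. The sketch below is an illustrative assumption, not the framework proposed here: it runs Hedge on a sequence of per-expert losses in [0, 1] and reports the regret against the best single expert.

```python
import math

def hedge(losses, eta):
    """Exponential-weights (Hedge) forecaster for prediction with expert advice.

    losses: list of rounds, each a list of per-expert losses in [0, 1]
    eta: learning rate
    Returns the forecaster's cumulative expected loss and each expert's
    cumulative loss.
    """
    n = len(losses[0])
    weights = [1.0] * n                      # unnormalized expert weights
    forecaster_loss = 0.0
    expert_loss = [0.0] * n
    for round_losses in losses:
        total = sum(weights)
        probs = [w / total for w in weights]  # play the weighted mixture
        forecaster_loss += sum(p * l for p, l in zip(probs, round_losses))
        for i, l in enumerate(round_losses):
            expert_loss[i] += l
            weights[i] *= math.exp(-eta * l)  # exponentially down-weight losers
    return forecaster_loss, expert_loss

if __name__ == "__main__":
    import random
    random.seed(0)
    T, n = 1000, 10
    seq = [[random.random() for _ in range(n)] for _ in range(T)]
    eta = math.sqrt(8 * math.log(n) / T)      # standard learning-rate tuning
    fl, el = hedge(seq, eta)
    # With this tuning, regret <= sqrt((T/2) ln n) for any losses in [0, 1].
    print(f"regret = {fl - min(el):.2f}, "
          f"bound = {math.sqrt(T * math.log(n) / 2):.2f}")
```

With the learning rate tuned as above, Hedge guarantees regret at most sqrt((T/2) ln n) against any loss sequence; noisy-feedback settings replace the observed `round_losses` with corrupted versions, which is what drives the regret gaps studied here.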