In online learning, a learner receives data over rounds $1\leq t \le T$ and in each round predicts a label, which is then compared to the true label, incurring a loss. The total loss over $T$ rounds, compared to the loss of the best expert from a class of experts, is called the regret. This paper focuses on \emph{logarithmic loss} over a class of experts $\CH_{p,\bw}$, represented by a probability distribution $p$ and parameterized by a $d$-dimensional weight vector $\bw$. Unlike previous work, which studied bounded weights, we allow the norm of the weight vector to be \emph{unbounded}. This unboundedness poses a challenging problem and leads to unexpected results. For this class of weighted experts we analyze the (fixed design) minimax regret, taken over the best predictor and the worst label sequence. This minimax regret turns out to be a universal lower bound for most regrets analyzed in the literature. For bounded weights it is known that the minimax regret can grow like $(d/2) \log(T R^2/d)$, where $R$ is an upper bound on the weight norm. In contrast, we show in this paper that for unbounded norm, $R=\infty$, the minimax regret is asymptotically $(d-1)\log (T/d)$ for a logistic-like expert class, a result we also extend to $R=\Omega(\sqrt{T})$. We prove this by introducing so-called splittable label sequences, which partition the weight space into $T^{d-1}$ regions with maximum sequence probability equal to $1$. Finally, for a general class of monotone experts we establish an upper bound of $2 d\log T$ on the minimax regret.
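The two regimes can be contrasted schematically as follows, where $r_T^*(\CH_{p,\bw})$ is placeholder notation for the minimax regret (the bounded-norm rate is the known result; the unbounded-norm rate is the one established here for the logistic-like class):
\[
% Schematic comparison of the asymptotic regret rates quoted above;
% r_T^* is illustrative notation, not necessarily the paper's own symbol.
  r_T^*(\CH_{p,\bw}) \;\approx\;
  \begin{cases}
    \dfrac{d}{2}\,\log\dfrac{T R^2}{d}, & \text{bounded weights, } \|\bw\|\le R \text{ (known)},\\[1.5ex]
    (d-1)\,\log\dfrac{T}{d}, & \text{unbounded weights, } R=\infty \text{ (this paper)}.
  \end{cases}
\]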