This paper studies information-theoretic secure aggregation in federated learning with K distributed users and a central server. "Secure" means that the server learns only the aggregate of the locally trained model updates, with no other information about the users' local data leaked to the server. In addition, we account for user dropouts: at most K-U users may drop out, and their identities cannot be predicted in advance. We consider the information-theoretic secure aggregation scenario with two phases: an offline key-sharing phase, in which users share keys independently of the models, and a model aggregation phase, in which users send their encrypted models to the server. The objective is to minimize the communication cost of the model aggregation phase. A secure aggregation scheme with uncoded groupwise keys (which can be shared more conveniently than coded keys), in which every set of S users shares an independent key, was recently proposed and achieves the same optimal communication cost as the best scheme with coded keys when S > K-U. In this paper, we additionally consider the impact of user collusion, where up to T users may collude with the server. For this setting, we propose a secure aggregation scheme with uncoded groupwise keys that guarantees secure aggregation with U non-dropped users and T colluding users provided that K-U+1 ≤ S ≤ K-T, and we prove that it achieves the optimal communication cost, where the converse holds without any constraint on the keys.
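The groupwise-key idea can be illustrated with a minimal sketch (ignoring dropouts and collusion, which the paper's scheme handles): every set of S users shares an independent key, split into shares that sum to zero, so that when the server adds the masked messages the keys cancel and only the aggregate model survives. The modulus, parameter values, and toy scalar "models" below are illustrative assumptions, not the paper's construction.

```python
import itertools
import secrets

P = 2**31 - 1  # prime modulus for arithmetic masking (illustrative choice)
K, S = 5, 3    # K users; every group of S users shares one independent key

# Offline key-sharing phase: for each size-S group of users, draw S shares
# that sum to zero mod P and give one share to each group member.
mask = [0] * K
for group in itertools.combinations(range(K), S):
    shares = [secrets.randbelow(P) for _ in range(S - 1)]
    shares.append((-sum(shares)) % P)  # force the shares to sum to 0 mod P
    for user, share in zip(group, shares):
        mask[user] = (mask[user] + share) % P

# Model aggregation phase: each user sends its (toy scalar) model update
# masked by the sum of its key shares.
models = [secrets.randbelow(1000) for _ in range(K)]
masked = [(w + m) % P for w, m in zip(models, mask)]

# The server sums the masked messages; all masks cancel, so the server
# recovers only the aggregate of the models, nothing about any individual one.
aggregate = sum(masked) % P
assert aggregate == sum(models) % P
```

Because every mask is a sum of uniformly random shares, each individual masked message is uniformly distributed, which is the one-time-pad intuition behind the information-theoretic security guarantee; tolerating dropouts and collusion requires the more careful key design analyzed in the paper.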