This paper considers the secure aggregation problem for federated learning under an information-theoretic cryptographic formulation, where distributed training nodes (referred to as users) train models on their own local data and a server aggregates the trained models without learning any other information about the users' local data. Secure aggregation generally comprises two phases, namely the key sharing phase and the model aggregation phase. Because user dropouts are common in federated learning, the model aggregation phase should contain two rounds: in the first round the users transmit masked models, and in the second round, based on the set of surviving users, the surviving users transmit further messages to help the server decrypt the sum of the users' trained models. The objective of the considered information-theoretic formulation is to characterize the capacity region of the communication rates from the users to the server in the two rounds of the model aggregation phase, assuming that key sharing has already been done offline in advance. If the keys shared by the users may be arbitrary random variables, the capacity region has been fully characterized in the literature. Recently, an additional constraint on the keys (referred to as uncoded groupwise keys) was introduced, where the system contains several independent keys and each key is shared by exactly S users, S being a system parameter. In this paper, we fully characterize the capacity region for this problem by matching new converse and achievable bounds.
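As a toy illustration of the masking idea underlying the first round (not the paper's uncoded groupwise key scheme, and ignoring dropouts), the following Python sketch shows how additive one-time-pad masks built from pairwise shared keys cancel in the sum, so the server recovers only the aggregate of the users' models. All names and the modulus here are illustrative assumptions.

```python
import random

q = 2**16  # illustrative working modulus for modular arithmetic


def masked_update(i, x, keys, n):
    """User i masks its scalar model x with the pairwise keys it shares.

    For each pair (i, j), the lower-indexed user adds the key and the
    higher-indexed user subtracts it, so the masks cancel in the sum.
    """
    m = x
    for j in range(n):
        if j == i:
            continue
        k = keys[(min(i, j), max(i, j))]
        m = (m + k) % q if i < j else (m - k) % q
    return m


n = 3
models = [5, 7, 11]  # toy scalar "trained models"
# Offline key sharing phase: one independent key per pair of users.
keys = {(i, j): random.randrange(q) for i in range(n) for j in range(i + 1, n)}

masked = [masked_update(i, models[i], keys, n) for i in range(n)]
aggregate = sum(masked) % q  # server sees only the sum of the models: 23
```

Each individual masked message is uniformly distributed thanks to the one-time-pad keys, yet the sum of all messages equals the sum of the models; handling dropouts is exactly what necessitates the second round described above.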