Experts are often asked to express their uncertainty as a subjective probability. Structured protocols offer a transparent and systematic way to elicit and combine probability judgements from multiple experts. As part of this process, each expert individually estimates a probability (e.g., of a future event), and these individual estimates must then be combined into a final group prediction. The experts' judgements can be aggregated behaviourally (by striving for consensus) or mathematically (by applying a mathematical rule to the individual estimates). Mathematical rules (e.g., weighted linear combinations of judgements) provide an objective approach to aggregation. However, the choice of rule is not straightforward, and the quality of the aggregated group probability judgement depends on it. The quality of an aggregation can be defined in terms of accuracy, calibration, and informativeness. These measures can be used to compare different aggregation approaches and to decide which aggregation produces the "best" final prediction. In the ideal case, individual experts' performance as probability assessors is scored, these scores are translated into performance-based weights, and a performance-based weighted aggregation is used. When this is not possible, however, several other aggregation methods, informed by measurable proxies for good performance, can be formulated and compared. We use several data sets to investigate the relative performance of multiple aggregation methods informed by previous experience and the available literature. Although the accuracy, calibration, and informativeness of most methods are very similar, two of the aggregation methods distinguish themselves as the best and the worst.
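
The performance-based weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the protocol used in the paper: it assumes, for concreteness, that each expert's performance is scored by the mean Brier score on past resolved events, that weights are obtained by normalising an accuracy-style score, and that the individual estimates are combined with a weighted linear opinion pool. All function names and the example numbers are hypothetical.

```python
import numpy as np

def brier_weights(past_probs, past_outcomes):
    """Derive performance-based weights from each expert's mean Brier
    score on past, resolved events (lower Brier score -> higher weight).
    past_probs: (n_experts, n_events) array of probability estimates.
    past_outcomes: (n_events,) array of 0/1 realised outcomes.
    """
    brier = np.mean((past_probs - past_outcomes) ** 2, axis=1)
    raw = 1.0 - brier          # accuracy-style score in [0, 1]
    return raw / raw.sum()     # normalise so the weights sum to 1

def linear_pool(probs, weights=None):
    """Weighted linear opinion pool: a convex combination of the
    experts' probability estimates for the event of interest."""
    n = len(probs)
    if weights is None:
        weights = np.full(n, 1.0 / n)   # equal weights as the fallback
    return float(np.dot(weights, probs))

# Hypothetical data: three experts judged five past events (rows are
# experts, columns are events); the events resolved as 1 or 0.
past = np.array([[0.8, 0.2, 0.7, 0.9, 0.1],
                 [0.6, 0.4, 0.5, 0.6, 0.5],
                 [0.9, 0.1, 0.8, 0.8, 0.2]])
outcomes = np.array([1, 0, 1, 1, 0])

w = brier_weights(past, outcomes)
# Aggregate the experts' estimates for a new event.
group = linear_pool(np.array([0.7, 0.5, 0.8]), weights=w)
```

The linear pool keeps the group probability inside the range spanned by the individual estimates, and better-calibrated experts (lower Brier score) receive proportionally more weight; other scoring rules or weighting schemes could be substituted without changing the structure of the sketch.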