Categories
aggregate dataframe r r-faq

How to sum a variable by group

472

I have a data frame with two columns. The first column contains categories such as “First”, “Second”, and “Third”, and the second column has numbers that represent the number of times I saw each specific group from “Category”.

For example:

Category     Frequency
First        10
First        15
First        5
Second       2
Third        14
Third        20
Second       3

I want to sort the data by Category and sum all the Frequencies:

Category     Frequency
First        30
Second       5
Third        34

How would I do this in R?

1

  • 4

    The fastest way in base R is rowsum.

    – Michael M

    Jan 4, 2019 at 18:58
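
    (For reference, a minimal sketch of the rowsum() approach mentioned in the comment above, using the example data x defined in the accepted answer below. rowsum() returns a matrix whose row names are the group labels:)

    rowsum(x$Frequency, x$Category)
    #        [,1]
    # First    30
    # Second    5
    # Third    34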


482

Using aggregate:

aggregate(x$Frequency, by=list(Category=x$Category), FUN=sum)
  Category  x
1    First 30
2   Second  5
3    Third 34

In the example above, multiple grouping dimensions can be specified in the list. Multiple aggregated metrics of the same data type can be incorporated via cbind:

aggregate(cbind(x$Frequency, x$Metric2, x$Metric3),  # Metric2, Metric3: placeholder metric columns
          by=list(Category=x$Category), FUN=sum)

(Embedding @thelatemail's comment:) aggregate has a formula interface too:

aggregate(Frequency ~ Category, x, sum)

Or, if you want to aggregate multiple columns, you can use the . notation (it works for one column too):

aggregate(. ~ Category, x, sum)

or tapply:

tapply(x$Frequency, x$Category, FUN=sum)
 First Second  Third 
    30      5     34 

Using this data:

x <- data.frame(Category = factor(c("First", "First", "First", "Second",
                                    "Third", "Third", "Second")),
                Frequency = c(10, 15, 5, 2, 14, 20, 3))

3

  • 4

    @AndrewMcKinlay, R uses the tilde to define symbolic formulae, for statistics and other functions. It can be interpreted as “model Frequency by Category” or “Frequency depending on Category”. Not all languages use a special operator to define a symbolic function, as done in R here. Perhaps with that “natural-language interpretation” of the tilde operator, it becomes more meaningful (and even intuitive). I personally find this symbolic formula representation better than some of the more verbose alternatives.

    – r2evans

    Dec 19, 2016 at 4:35

  • 1

    Being new to R (and asking the same sorts of questions as the OP), I would benefit from some more detail of the syntax behind each alternative. For instance, if I have a larger source table and want to subselect just two dimensions plus summed metrics, can I adapt any of these methods? Hard to tell.

    Oct 28, 2018 at 10:42
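
    (Responding to the comment above with a hedged sketch: the formula interface adapts naturally to picking out just the dimensions and metrics you want from a larger table. For example, with the built-in mtcars dataset, summing two metrics over two grouping columns while ignoring everything else:)

    # Sum just two metrics (mpg, hp) over two grouping dimensions (cyl, gear)
    aggregate(cbind(mpg, hp) ~ cyl + gear, data = mtcars, FUN = sum)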

  • Is there anyway of maintaining an ID column? Say the categories are ordered and the ID column is 1:nrow(df), is it possible to keep the starting position of each category after aggregating? So the ID column would end up as, for example, 1, 3, 4, 7 after collapsing with aggregate. In my case I like aggregate because it works over many columns automatically.

    – QAsena

    Jun 24, 2020 at 20:32
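
    (One possible sketch for QAsena's ID question, using dplyr rather than aggregate; it assumes the rows are already in the desired order, so first(ID) is the starting position of each category:)

    library(dplyr)
    x %>%
      mutate(ID = row_number()) %>%        # original row position
      group_by(Category) %>%
      summarise(ID = first(ID),            # starting position of each category
                Frequency = sum(Frequency))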

370

You can also use the dplyr package for that purpose:

library(dplyr)
x %>% 
  group_by(Category) %>% 
  summarise(Frequency = sum(Frequency))

#Source: local data frame [3 x 2]
#
#  Category Frequency
#1    First        30
#2   Second         5
#3    Third        34

Or, for multiple summary columns (works with one column too):

x %>% 
  group_by(Category) %>% 
  summarise(across(everything(), sum))

Here are some more examples of how to summarise data by group using dplyr functions, with the built-in dataset mtcars:

# several summary columns with arbitrary names
mtcars %>% 
  group_by(cyl, gear) %>%                            # multiple group columns
  summarise(max_hp = max(hp), mean_mpg = mean(mpg))  # multiple summary columns

# summarise all columns except grouping columns using "sum" 
mtcars %>% 
  group_by(cyl) %>% 
  summarise(across(everything(), sum))

# summarise all columns except grouping columns using "sum" and "mean"
mtcars %>% 
  group_by(cyl) %>% 
  summarise(across(everything(), list(mean = mean, sum = sum)))

# multiple grouping columns
mtcars %>% 
  group_by(cyl, gear) %>% 
  summarise(across(everything(), list(mean = mean, sum = sum)))

# summarise specific variables, not all
mtcars %>% 
  group_by(cyl, gear) %>% 
  summarise(across(c(qsec, mpg, wt), list(mean = mean, sum = sum)))

# summarise specific variables (numeric columns except grouping columns)
mtcars %>% 
  group_by(gear) %>% 
  summarise(across(where(is.numeric), list(mean = mean, sum = sum)))

For more information, including the %>% operator, see the introduction to dplyr.

5

  • 1

    How fast is it when compared to the data.table and aggregate alternatives presented in other answers?

    – asieira

    Jan 23, 2015 at 14:35

  • 10

@asieira, which one is fastest, and how big the difference is (or whether it is noticeable at all), will always depend on your data size. Typically, for large data sets, for example some GB, data.table will most likely be fastest. On smaller data sets, data.table and dplyr are often close, depending also on the number of groups. Both data.table and dplyr will be quite a lot faster than the base functions, however (it can well be 100-1000 times faster for some operations). Also see here

    – talat

    Jan 23, 2015 at 14:50
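
    (To measure this on your own data, a benchmarking sketch along these lines can help; it assumes the microbenchmark package is installed, and absolute timings will vary with data size and number of groups:)

    library(microbenchmark)
    library(data.table)
    library(dplyr)

    n  <- 1e6
    df <- data.frame(Category = sample(letters, n, replace = TRUE),
                     Frequency = rnorm(n))
    dt <- as.data.table(df)

    microbenchmark(
      aggregate  = aggregate(Frequency ~ Category, df, sum),
      dplyr      = df %>% group_by(Category) %>% summarise(Frequency = sum(Frequency)),
      data.table = dt[, sum(Frequency), by = Category],
      times = 10
    )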


  • 1

    What does the “funs” refer to in the second example?

    Oct 8, 2019 at 19:02

  • 1

@lauren.marietta you can specify the function(s) you want to apply as the summary inside funs() in summarise_all and its related functions (summarise_at, summarise_if). Note that in current dplyr versions funs() is deprecated and these functions are superseded by across(), as shown in the updated answer above.

    – talat

    Oct 9, 2019 at 11:52

In case the column name has spaces, it might not work. Using backticks would help. Ref. stackoverflow.com/questions/22842232/…

    Nov 2, 2020 at 7:57
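
    (To illustrate the backticks point with a small, made-up example; the data frame y and its column names are hypothetical:)

    library(dplyr)
    y <- data.frame(`My Category` = c("a", "a", "b"),
                    `My Frequency` = c(1, 2, 3),
                    check.names = FALSE)     # keep the spaces in the names
    y %>%
      group_by(`My Category`) %>%
      summarise(`Total Frequency` = sum(`My Frequency`))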

86

The answer provided by rcs works and is simple. However, if you are handling larger datasets and need a performance boost, there is a faster alternative:

library(data.table)
data = data.table(Category=c("First","First","First","Second","Third", "Third", "Second"), 
                  Frequency=c(10,15,5,2,14,20,3))
data[, sum(Frequency), by = Category]
#    Category V1
# 1:    First 30
# 2:   Second  5
# 3:    Third 34
system.time(data[, sum(Frequency), by = Category] )
# user    system   elapsed 
# 0.008     0.001     0.009 

Let’s compare that to the same thing using data.frame and the aggregate approach above:

data = data.frame(Category=c("First","First","First","Second","Third", "Third", "Second"),
                  Frequency=c(10,15,5,2,14,20,3))
system.time(aggregate(data$Frequency, by=list(Category=data$Category), FUN=sum))
# user    system   elapsed 
# 0.008     0.000     0.015 

And if you want to keep the column name, this is the syntax:

data[, list(Frequency = sum(Frequency)), by = Category]
#    Category Frequency
# 1:    First        30
# 2:   Second         5
# 3:    Third        34

The difference will become more noticeable with larger datasets, as the code below demonstrates:

data = data.table(Category=rep(c("First", "Second", "Third"), 100000),
                  Frequency=rnorm(300000))
system.time( data[,sum(Frequency),by=Category] )
# user    system   elapsed 
# 0.055     0.004     0.059 
data = data.frame(Category=rep(c("First", "Second", "Third"), 100000), 
                  Frequency=rnorm(300000))
system.time( aggregate(data$Frequency, by=list(Category=data$Category), FUN=sum) )
# user    system   elapsed 
# 0.287     0.010     0.296 

For multiple aggregations, you can combine lapply and .SD as follows:

data[, lapply(.SD, sum), by = Category]
#    Category Frequency
# 1:    First        30
# 2:   Second         5
# 3:    Third        34

5

  • 14

    +1 But 0.296 vs 0.059 isn’t particularly impressive. The data size needs to be much bigger than 300k rows, and with more than 3 groups, for data.table to shine. We’ll try and support more than 2 billion rows soon for example, since some data.table users have 250GB of RAM and GNU R now supports length > 2^31.

    Sep 9, 2013 at 10:05


  • 2

    True. Turns out I don’t have all that RAM though, and was simply trying to provide some evidence of data.table’s superior performance. I’m sure the difference would be even larger with more data.

    – asieira

    Oct 23, 2013 at 23:22

  • 1

I had 7 million observations; dplyr took 0.3 seconds and aggregate() took 22 seconds to complete the operation. I was going to post it on this topic and you beat me to it!

    – zazu

    Nov 14, 2015 at 19:10

  • 3

There is an even shorter way to write this: data[, sum(Frequency), by = Category]. You could use .N, which substitutes for the sum() function: data[, .N, by = Category]. Here is a useful cheatsheet: s3.amazonaws.com/assets.datacamp.com/img/blog/…

    – Stophface

    Feb 22, 2017 at 11:47

  • 4

    Using .N would be equivalent to sum(Frequency) only if all the values in the Frequency column were equal to 1, because .N counts the number of rows in each aggregated set (.SD). And that is not the case here.

    – asieira

    Mar 1, 2017 at 13:26
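
    (A quick illustration of that distinction, using the 7-row data.table defined at the top of this answer; .N gives per-group row counts, which match sum(Frequency) only when every Frequency is 1:)

    data[, .N, by = Category]                # row counts: 3, 2, 2
    data[, sum(Frequency), by = Category]    # sums: 30, 5, 34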