Categories: pivot, pivot-table, sql, sql-server

Convert Rows to columns using ‘Pivot’ in SQL Server

316 votes

I have read the stuff on MS pivot tables and I am still having problems getting this correct.

I have a temp table that is being created. Let's say column 1 is a store number, column 2 is a week number, and column 3 is a total of some type. The week numbers are dynamic; the store numbers are static.

Store      Week     xCount
-------    ----     ------
102        1        96
101        1        138
105        1        37
109        1        59
101        2        282
102        2        212
105        2        78
109        2        97
105        3        60
102        3        123
101        3        220
109        3        87

I would like it to come out as a pivot table, like this:

Store        1          2          3        4        5        6....
----- 
101        138        282        220
102         96        212        123
105         37        
109

Store numbers down the side and weeks across the top.

393 votes

If you are using SQL Server 2005+, then you can use the PIVOT function to transform the data from rows into columns.

It sounds like you will need to use dynamic SQL if the weeks are unknown, but it is easier to see the correct code using a hard-coded version initially.

First up, here are some quick table definitions and data for use:

CREATE TABLE yt 
(
  [Store] int, 
  [Week] int, 
  [xCount] int
);
    
INSERT INTO yt
(
  [Store], 
  [Week], [xCount]
)
VALUES
    (102, 1, 96),
    (101, 1, 138),
    (105, 1, 37),
    (109, 1, 59),
    (101, 2, 282),
    (102, 2, 212),
    (105, 2, 78),
    (109, 2, 97),
    (105, 3, 60),
    (102, 3, 123),
    (101, 3, 220),
    (109, 3, 87);

If your values are known, then you will hard-code the query:

select *
from 
(
  select store, week, xCount
  from yt 
) src
pivot
(
  sum(xcount)
  for week in ([1], [2], [3])
) piv;

See SQL Demo

Then, if you need to generate the week numbers dynamically, your code will be:

DECLARE @cols AS NVARCHAR(MAX),
    @query  AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(Week) 
                    from yt
                    group by Week
                    order by Week
            FOR XML PATH(''), TYPE
            ).value('.', 'NVARCHAR(MAX)') 
        ,1,1,'')
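-- for the sample data above, @cols now contains: [1],[2],[3]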

set @query = 'SELECT store,' + @cols + ' from 
             (
                select store, week, xCount
                from yt
            ) x
            pivot 
            (
                sum(xCount)
                for week in (' + @cols + ')
            ) p '

execute(@query);

See SQL Demo.
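If you want to double-check what the dynamic version builds before running it, you can PRINT the statement instead of executing it (just a quick debugging aid, not part of the pivot itself):

-- optional: inspect the generated statement before executing it
PRINT @query;
-- for the sample data it prints (whitespace aside):
-- SELECT store,[1],[2],[3] from ( select store, week, xCount from yt ) x
--   pivot ( sum(xCount) for week in ([1],[2],[3]) ) p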

The dynamic version generates the list of week numbers that should be converted to columns. Both versions give the same result:

| STORE |   1 |   2 |   3 |
---------------------------
|   101 | 138 | 282 | 220 |
|   102 |  96 | 212 | 123 |
|   105 |  37 |  78 |  60 |
|   109 |  59 |  97 |  87 |

  • Very nice! But how to eliminate column when all values of that column are NULL? – ZooZ, Jan 19, 2015 at 9:22

  • @ZooZ See answer below. Haven't tried it out verbatim, but the concept is sound. – ruffin, Jun 17, 2015 at 15:58

  • +1 "It sounds like you will need to use dynamic SQL if the weeks are unknown but it is easier to see the correct code using a hard-coded version initially." Unlike the Qlikview Generic function (community.qlik.com/blogs/qlikviewdesignblog/2014/03/31/generic), which doesn't require that you explicitly name the different "FOR ____ IN (…)" columns. – Aug 13, 2015 at 22:03

  • If you are building a pivot table with a CTE defined earlier (cte3 AS (select ... )) and then use the @cols and @query logic defined above, there is an error: `Invalid object name 'cte3'`. How do you fix that? – Elizabeth, Mar 1, 2016 at 18:59

  • This is fantastic - nice one @bluefeet. I'd never used STUFF(...) before (or the XML PATH either). For the benefit of other readers, all that is doing is joining the column names and chopping off the leading comma. Note I think the following is slightly simpler: select @cols = (SELECT DISTINCT QUOTENAME(Week) + ',' from yt order by 1 FOR XML PATH('')) set @cols = SUBSTRING(@cols, 1, LEN(@cols) - 1) … replacing the group by with distinct and order by 1, and manually chopping a trailing comma! – Jun 16, 2016 at 12:42


29 votes

This is for a dynamic number of weeks.

Full example here: SQL Dynamic Pivot
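The query below assumes a temp table #StoreSales with the same Store, Week and xCount columns as the sample data above; if you are following along with the yt table from the accepted answer, one quick way (an assumption on my part, not part of the original answer) to create it is:

-- populate the #StoreSales temp table from the sample yt table above
SELECT [Store], [Week], [xCount]
INTO #StoreSales
FROM yt;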

DECLARE @DynamicPivotQuery AS NVARCHAR(MAX)
DECLARE @ColumnName AS NVARCHAR(MAX)

--Get distinct values of the PIVOT Column 
SELECT @ColumnName= ISNULL(@ColumnName + ',','') + QUOTENAME(Week)
FROM (SELECT DISTINCT Week FROM #StoreSales) AS Weeks

--Prepare the PIVOT query using the dynamic column list
SET @DynamicPivotQuery = 
  N'SELECT Store, ' + @ColumnName + ' 
    FROM #StoreSales
    PIVOT(SUM(xCount) 
          FOR Week IN (' + @ColumnName + ')) AS PVTTable'
--Execute the Dynamic Pivot Query
EXEC sp_executesql @DynamicPivotQuery

  • Hey, I have a fiddle where I need to pivot tables dynamically; do you think you can help me with that? dbfiddle.uk/… – Oct 11, 2019 at 6:49

  • @SillyVolley here is one; you didn't specify what you wanted to pivot on. Also, I don't know if you can do this in Postgres, so I did it in SQL Server: dbfiddle.uk/… – Enkode, Oct 11, 2019 at 8:24

17 votes

I’ve achieved the same thing before by using subqueries. So if your original table was called StoreCountsByWeek, and you had a separate table that listed the Store IDs, then it would look like this:

SELECT StoreID, 
    Week1=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=1),
    Week2=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=2),
    Week3=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=3)
FROM Store
ORDER BY StoreID

One advantage of this method is that the syntax is clearer, and it makes it easier to join to other tables to pull other fields into the results too.
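For example, pulling an extra field straight from the Store table only takes one more line in the select list (StoreName here is just a hypothetical column, assuming your Store table has one):

SELECT Store.StoreID,
    Store.StoreName,  -- hypothetical extra column pulled straight from Store
    Week1=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=1),
    Week2=(SELECT ISNULL(SUM(xCount),0) FROM StoreCountsByWeek WHERE StoreCountsByWeek.StoreID=Store.StoreID AND Week=2)
FROM Store
ORDER BY Store.StoreID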

Anecdotally, running this query over a couple of thousand rows completed in less than one second, and I actually had 7 subqueries. But as noted in the comments, this approach is more computationally expensive, so be careful about using it if you expect it to run on large amounts of data.

  • It is easier, but it is a very expensive operation; those subqueries have to be executed once for each row returned from the table. – Greg, Jul 5, 2016 at 23:30