Categories
dataframe pandas python sql-function

How to filter Pandas dataframe using ‘in’ and ‘not in’ like in SQL

How can I achieve the equivalents of SQL’s IN and NOT IN?

I have a list with the required values.
Here’s the scenario:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
countries_to_keep = ['UK', 'China']

# pseudo-code:
df[df['country'] not in countries_to_keep]

My current way of doing this is as follows:

df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
df2 = pd.DataFrame({'country': ['UK', 'China'], 'matched': True})

# IN
df.merge(df2, how='inner', on='country')

# NOT IN
not_in = df.merge(df2, how='left', on='country')
not_in = not_in[pd.isnull(not_in['matched'])]

But this seems like a horrible kludge. Can anyone improve on it?

You can use pd.Series.isin.

For “IN” use: something.isin(somewhere)

Or for “NOT IN”: ~something.isin(somewhere)

As a worked example:

>>> import pandas as pd
>>> df = pd.DataFrame({'country': ['US', 'UK', 'Germany', 'China']})
>>> countries_to_keep = ['UK', 'China']
>>> df
   country
0       US
1       UK
2  Germany
3    China
>>> df.country.isin(countries_to_keep)
0    False
1     True
2    False
3     True
Name: country, dtype: bool
>>> df[df.country.isin(countries_to_keep)]
   country
1       UK
3    China
>>> df[~df.country.isin(countries_to_keep)]
   country
0       US
2  Germany

  • If you’re actually dealing with 1-dimensional arrays (like in your example) then on your first line use a Series instead of a DataFrame, like @DSM used: df = pd.Series({'countries':['US','UK','Germany','China']})

    Nov 13, 2013 at 19:41

  • @TomAugspurger: like usual, I’m probably missing something. df, both mine and his, is a DataFrame. countries is a list. df[~df.countries.isin(countries)] produces a DataFrame, not a Series, and seems to work even back in 0.11.0.dev-14a04dd.

    – DSM

    Nov 14, 2013 at 16:10

  • This answer is confusing because you keep reusing the countries variable. Well, the OP does it, and that’s inherited, but the fact that something was done badly before does not justify doing it badly now.

    – ifly6

    May 18, 2018 at 22:20


  • @ifly6: Agreed, I made the same mistake and realized it when I got an error: “‘DataFrame’ object has no attribute ‘countries’”

    – le_llama

    Apr 23, 2020 at 13:54

  • For people who are confused by the tilde (like me): stackoverflow.com/questions/8305199/…

    – bmurauer

    Mar 25, 2021 at 14:00

133

An alternative solution that uses the .query() method:

In [5]: df.query("countries in @countries_to_keep")
Out[5]:
  countries
1        UK
3     China

In [6]: df.query("countries not in @countries_to_keep")
Out[6]:
  countries
0        US
2   Germany

  • .query is so much more readable, especially for the “not in” scenario, versus a distant tilde. Thanks!

    Sep 3, 2020 at 10:48

  • What is @countries? Another dataframe? A list?

    Sep 7, 2021 at 6:10

  • @FlorianCastelain countries is the column you want to check on; the OP called this column countries.

    Sep 8, 2021 at 9:06

  • @FlorianCastelain, somebody has renamed a variable in the original question: countries -> countries_to_keep, so my answer has become invalid. I’ve updated my answer correspondingly. countries_to_keep is a list.

    Sep 8, 2021 at 10:17

  • The most readable solution indeed. I wonder if syntax exists to avoid creating countries_to_keep. Is it possible to specify the list of values inside the query directly?

    – Maxim.K

    Dec 9, 2021 at 14:10

How to implement ‘in’ and ‘not in’ for a pandas DataFrame?

Pandas offers two methods: Series.isin and DataFrame.isin for Series and DataFrames, respectively.


Filter DataFrame Based on ONE Column (also applies to Series)

The most common scenario is applying an isin condition on a specific column to filter rows in a DataFrame.

import numpy as np
import pandas as pd

df = pd.DataFrame({'countries': ['US', 'UK', 'Germany', np.nan, 'China']})
df
  countries
0        US
1        UK
2   Germany
3       NaN
4     China

c1 = ['UK', 'China']             # list
c2 = {'Germany'}                 # set
c3 = pd.Series(['China', 'US'])  # Series
c4 = np.array(['US', 'UK'])      # array

Series.isin accepts various types as inputs. The following are all valid ways of getting what you want:

df['countries'].isin(c1)

0    False
1     True
2    False
3    False
4     True
Name: countries, dtype: bool

# `in` operation
df[df['countries'].isin(c1)]

  countries
1        UK
4     China

# `not in` operation
df[~df['countries'].isin(c1)]

  countries
0        US
2   Germany
3       NaN

# Filter with `set` (tuples work too)
df[df['countries'].isin(c2)]

  countries
2   Germany

# Filter with another Series
df[df['countries'].isin(c3)]

  countries
0        US
4     China

# Filter with array
df[df['countries'].isin(c4)]

  countries
0        US
1        UK

Filter on MANY Columns

Sometimes, you will want to apply an ‘in’ membership check with some search terms over multiple columns:

df2 = pd.DataFrame({
    'A': ['x', 'y', 'z', 'q'], 'B': ['w', 'a', np.nan, 'x'], 'C': np.arange(4)})
df2

   A    B  C
0  x    w  0
1  y    a  1
2  z  NaN  2
3  q    x  3

c1 = ['x', 'w', 'p']

To apply the isin condition to both columns “A” and “B”, use DataFrame.isin:

df2[['A', 'B']].isin(c1)

      A      B
0   True   True
1  False  False
2  False  False
3  False   True

From this, to retain rows where at least one column is True, we can use any over the columns (axis=1):

df2[['A', 'B']].isin(c1).any(axis=1)

0     True
1    False
2    False
3     True
dtype: bool

df2[df2[['A', 'B']].isin(c1).any(axis=1)]

   A  B  C
0  x  w  0
3  q  x  3

Note that if you want to search every column, you’d just omit the column selection step and do

df2.isin(c1).any(axis=1)
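
For instance, combining the two steps on the same df2 and c1 (a small illustration; it follows the same pattern as above):

df2[df2.isin(c1).any(axis=1)]

   A  B  C
0  x  w  0
3  q  x  3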

Similarly, to retain rows where ALL columns are True, use all in the same manner as before.

df2[df2[['A', 'B']].isin(c1).all(axis=1)]

   A  B  C
0  x  w  0

Notable Mentions: numpy.isin, query, list comprehensions (string data)

In addition to the methods described above, you can also use the numpy equivalent: numpy.isin.

# `in` operation
df[np.isin(df['countries'], c1)]

  countries
1        UK
4     China

# `not in` operation
df[np.isin(df['countries'], c1, invert=True)]

  countries
0        US
2   Germany
3       NaN

Why is it worth considering? NumPy functions are usually a bit faster than their pandas equivalents because of lower overhead. Since this is an elementwise operation that does not depend on index alignment, there are very few situations where this method is not an appropriate replacement for pandas’ isin.
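
If you want to check this on your own data, a rough timing comparison along these lines can be used (a sketch; the Series contents and size here are made up for illustration, and the exact numbers will vary with versions and hardware):

import timeit

import numpy as np
import pandas as pd

# A larger Series so the per-call overhead is visible.
s = pd.Series(np.random.choice(['US', 'UK', 'Germany', 'China'], size=100_000))
c1 = ['UK', 'China']

# Both calls return a boolean mask over `s`.
t_pandas = timeit.timeit(lambda: s.isin(c1), number=100)
t_numpy = timeit.timeit(lambda: np.isin(s.to_numpy(), c1), number=100)
print(f"Series.isin: {t_pandas:.3f}s  np.isin: {t_numpy:.3f}s")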

Pandas routines are usually iterative when working with strings, because string operations are hard to vectorise. There is a lot of evidence to suggest that list comprehensions will be faster here, so we fall back to a plain in check.

c1_set = set(c1) # Using `in` with `sets` is a constant time operation... 
                 # This doesn't matter for pandas because the implementation differs.
# `in` operation
df[[x in c1_set for x in df['countries']]]

  countries
1        UK
4     China

# `not in` operation
df[[x not in c1_set for x in df['countries']]]

  countries
0        US
2   Germany
3       NaN

It is a lot more unwieldy to specify, however, so don’t use it unless you know what you’re doing.

Lastly, there’s also DataFrame.query, which has been covered in another answer above. numexpr FTW!
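
As a quick sketch of what that looks like with the same df and c1 from this answer, the query equivalents of the boolean masks above would be:

df.query("countries in @c1")        # `in`
df.query("countries not in @c1")    # `not in`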

  • I like it, but what if I want to compare a column in df3 that isin df1 column? What would that look like?

    Oct 21, 2019 at 1:24
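
A minimal sketch for that case (df1, df3, and the column name col are hypothetical; it relies on Series.isin accepting another Series, as shown above):

# keep rows of df3 whose 'col' value appears anywhere in df1['col']
df3[df3['col'].isin(df1['col'])]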