bit-manipulation bitwise-operators c c++

How do I set, clear, and toggle a single bit?





Setting a bit

Use the bitwise OR operator (|) to set a bit.

number |= 1UL << n;

That will set the nth bit of number. n should be zero if you want to set the 1st bit, and so on up to n-1 if you want to set the nth bit.

Use 1ULL if number is wider than unsigned long; promotion of 1UL << n doesn’t happen until after evaluating 1UL << n, where it’s undefined behaviour to shift by more than the width of a long. The same applies to all the rest of the examples.

Clearing a bit

Use the bitwise AND operator (&) to clear a bit.

number &= ~(1UL << n);

That will clear the nth bit of number. You must invert the bit string with the bitwise NOT operator (~), then AND it.

Toggling a bit

The XOR operator (^) can be used to toggle a bit.

number ^= 1UL << n;

That will toggle the nth bit of number.

Checking a bit

You didn’t ask for this, but I might as well add it.

To check a bit, shift the number n to the right, then bitwise AND it:

bit = (number >> n) & 1U;

That will put the value of the nth bit of number into the variable bit.

Changing the nth bit to x

Setting the nth bit to either 1 or 0 can be achieved with the following on a 2’s complement C++ implementation:

number ^= (-x ^ number) & (1UL << n);

Bit n will be set if x is 1, and cleared if x is 0. If x has some other value, you get garbage. x = !!x will booleanize it to 0 or 1.

To make this independent of 2’s complement negation behaviour (where -1 has all bits set, unlike on a 1’s complement or sign/magnitude C++ implementation), use unsigned negation.

number ^= (-(unsigned long)x ^ number) & (1UL << n);

or

unsigned long newbit = !!x;    // Also booleanize to force 0 or 1
number ^= (-newbit ^ number) & (1UL << n);

It’s generally a good idea to use unsigned types for portable bit manipulation.


Alternatively, clear the bit first and then OR in the new value:

number = (number & ~(1UL << n)) | ((unsigned long)x << n);

(number & ~(1UL << n)) will clear the nth bit and ((unsigned long)x << n) will set the nth bit to x.

It’s also generally a good idea not to copy/paste code, and so many people use preprocessor macros (like the community wiki answer further down) or some sort of encapsulation.


  • 150

    I would like to note that on platforms that have native support for bit set/clear (ex, AVR microcontrollers), compilers will often translate ‘myByte |= (1 << x)’ into the native bit set/clear instructions whenever x is a constant, ex: (1 << 5), or const unsigned x = 5.

    – Aaron

    Sep 17, 2008 at 17:13

  • 54

bit = number & (1 << x); will not put the value of bit x into bit unless bit has type _Bool (<stdbool.h>). Otherwise, bit = !!(number & (1 << x)); will.

    Nov 16, 2008 at 7:49

  • 24

    why don’t you change the last one to bit = (number >> x) & 1

    – aaronman

    Jun 26, 2013 at 18:47

  • 48

1 is an int literal, which is signed. So all the operations here operate on signed numbers, which is not well defined by the standards. The standard does not guarantee two’s complement or arithmetic shift, so it is better to use 1U.

    Dec 10, 2013 at 8:53

  • 64

    I prefer number = number & ~(1 << n) | (x << n); for Changing the n-th bit to x.

    – leoly

    Mar 24, 2015 at 0:38


Using the Standard C++ Library: std::bitset<N>.

Or the Boost version: boost::dynamic_bitset.

There is no need to roll your own:

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<5> x;

    x[1] = 1;
    x[2] = 0;
    // Note x[0-4]  valid

    std::cout << x << std::endl;
}

[Alpha:] > ./a.out
00010

The Boost version allows a runtime-sized bitset, compared with the standard library’s compile-time-sized bitset.


  • 38

    +1. Not that std::bitset is usable from “C”, but as the author tagged his/her question with “C++”, AFAIK, your answer is the best around here… std::vector<bool> is another way, if one knows its pros and its cons

    – paercebal

    Sep 19, 2008 at 18:16

  • 28

    @andrewdotnich: vector<bool> is (unfortunately) a specialization that stores the values as bits. See for more info…

    – Niklas

    Dec 12, 2008 at 20:40

  • 85

    Maybe nobody mentioned it because this was tagged embedded. In most embedded systems you avoid STL like the plague. And boost support is likely a very rare bird to spot among most embedded compilers.

    – Lundin

    Aug 18, 2011 at 19:47

  • 19

    @Martin It is very true. Besides specific performance killers like STL and templates, many embedded systems even avoid the whole standard libraries entirely, because they are such a pain to verify. Most of the embedded branch is embracing standards like MISRA, that requires static code analysis tools (any software professionals should be using such tools btw, not just embedded folks). Generally people have better things to do than run static analysis through the whole standard library – if its source code is even available to them on the specific compiler.

    – Lundin

    Aug 19, 2011 at 6:26

  • 45

@Lundin: Your statements are excessively broad (thus useless to argue about). I am sure that I can find situations where they are true. This does not change my initial point. Both of these classes are perfectly fine for use in embedded systems (and I know for a fact that they are used). Your initial point about STL/Boost not being used on embedded systems is also wrong. I am sure there are systems that don’t use them and even the systems that do use them they are used judiciously but saying they are not used is just not correct (because there are systems where they are used).

    Aug 19, 2011 at 6:41


The other option is to use bit fields:

struct bits {
    unsigned int a:1;
    unsigned int b:1;
    unsigned int c:1;
};

struct bits mybits;

defines a 3-bit field (actually, it’s three 1-bit fields). Bit operations now become a bit (haha) simpler:

To set or clear a bit:

mybits.b = 1;
mybits.c = 0;

To toggle a bit:

mybits.a = !mybits.a;
mybits.b = ~mybits.b;
mybits.c ^= 1;  /* all work */

Checking a bit:

if (mybits.c)  // if mybits.c is non-zero, the next line below will execute

This only works with fixed-size bit fields. Otherwise you have to resort to the bit-twiddling techniques described in previous posts.


  • 81

    I’ve always found using bitfields is a bad idea. You have no control over the order in which bits are allocated (from the top or the bottom), which makes it impossible to serialize the value in a stable/portable way except bit-at-a-time. It’s also impossible to mix DIY bit arithmetic with bitfields, for example making a mask that tests for several bits at once. You can of course use && and hope the compiler will optimize it correctly…

    Jun 28, 2010 at 6:17

  • 43

    Bit fields are bad in so many ways, I could almost write a book about it. In fact I almost had to do that for a bit field program that needed MISRA-C compliance. MISRA-C enforces all implementation-defined behavior to be documented, so I ended up writing quite an essay about everything that can go wrong in bit fields. Bit order, endianess, padding bits, padding bytes, various other alignment issues, implicit and explicit type conversions to and from a bit field, UB if int isn’t used and so on. Instead, use bitwise-operators for less bugs and portable code. Bit fields are completely redundant.

    – Lundin

    Aug 18, 2011 at 19:19

  • 49

    Like most language features, bit fields can be used correctly or they can be abused. If you need to pack several small values into a single int, bit fields can be very useful. On the other hand, if you start making assumptions about how the bit fields map to the actual containing int, you’re just asking for trouble.

    – Ferruccio

    Aug 18, 2011 at 19:35

  • 5

    @endolith: That would not be a good idea. You could make it work, but it wouldn’t necessarily be portable to a different processor, or to a different compiler or even to the next release of the same compiler.

    – Ferruccio

    Mar 8, 2012 at 21:02

  • 4

    @Yasky and Ferruccio getting different answers to a sizeof() for this approach should illustrate the problems with compatibility not just across compilers but across hardware. We sometimes fool ourselves that we’ve solved these issues with languages or defined runtimes but it really comes down to ‘will it work on my machine?’. You embedded guys have my respect (and sympathies).

    Dec 8, 2016 at 16:11