This is the question. Given an integer n, return an array ans of length n + 1 such that for each i (0 <= i <= n), ans[i] is the number of 1's in the binary representation of i.

https://leetcode.com/problems/counting-bits/

And this is my solution below. If the input is 2, expected output should be [0,1,1] but I keep getting [0,2,2]. Why is that???

var countBits = function(n) {
    //n=3. [0,1,2,3]
    var arr=[0];
     for (var i=1; i<=n; i++){
         var sum = 0;
         var value = i;
         while(value != 0){
             sum += value%2;
             value /= 2;
         }
         arr.push(sum);
     }
    return arr;
};

console.log(countBits(3));

3 Comments
    value /= 2 does actual floating point division, not integer division as you seem to think. (Basic debugging should have told you this much.) Commented Jun 23, 2022 at 21:30
  • You need to set value to the floor of value/2 (to account for the remainder you've already handled). Commented Jun 23, 2022 at 21:30
  • Thanks a lot! I totally forgot about that Commented Jun 24, 2022 at 8:52

5 Answers

4

You're doing way too much work.

Suppose b is the largest power of 2 corresponding to the first bit in i. Evidently, i has exactly one more 1 in its binary representation than does i - b. But since you're generating the counts in order, you've already worked out how many 1s there are in i - b.

The only trick is how to figure out what b is. And to do that, we use another iterative technique: as you list numbers, b changes exactly at the moment that i becomes twice the previous value of b:

const countBits = function(n) {
    let arr = [0], bit = 1;
    for (let i = 1; i <= n; i++){
        if (i == bit + bit) bit += bit;
        arr.push(arr[i - bit] + 1);
    }
    return arr;
};

console.log(countBits(20));

This technique is usually called "dynamic programming". In essence, it takes a recursive definition and computes it bottom-up: instead of starting at the desired argument and recursing down to the base case, it starts at the base and then computes each intermediate value which will be needed until it reaches the target. In this case, all intermediate values are needed, saving us from having to think about how to compute only the minimum number of intermediate values necessary.
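For contrast, the same counts can be computed top-down from an equivalent recurrence (a sketch, not the answer's code; the names are mine): each number has the bit count of the number with its low bit dropped, plus that low bit.

    // Recurrence: bits(0) = 0; bits(i) = bits(i >> 1) + (i & 1).
    // i >> 1 drops the lowest bit, i & 1 is that bit.
    const bits = (i) => i === 0 ? 0 : bits(i >> 1) + (i & 1);

    const countBits = (n) =>
      Array.from({ length: n + 1 }, (_, i) => bits(i));

    console.log(countBits(7)); // [0, 1, 1, 2, 1, 2, 2, 3]

The bottom-up version above does the same work without the recursion, visiting each value exactly once.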


1 Comment

Voted up. You might appreciate gog's answer.
3

Think of it this way: if you know how many ones are there in a number X, then you immediately know how many ones are there in X*2 (the same) and X*2+1 (one more). Since you're processing numbers in order, you can just push both derived counts to the result and skip to the next number:

let b = [0, 1]

for (let i = 1; i <= N / 2; i++) {
    b.push(b[i])
    b.push(b[i] + 1)
}

Since we push two numbers at once, the result will be one element too long for even N, so you have to pop the last number afterwards.
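Wrapped into a complete function (a sketch; the function name and the final trim are mine), that looks like:

    function countBits(N) {
      const b = [0, 1];
      // b[i] is known, so the counts for 2*i and 2*i + 1 follow directly.
      for (let i = 1; i <= N / 2; i++) {
        b.push(b[i]);
        b.push(b[i] + 1);
      }
      // We may have pushed one entry past N; trim it.
      if (b.length > N + 1) b.pop();
      return b;
    }

    console.log(countBits(4)); // [0, 1, 1, 2, 1]

Note the trim also handles N = 0, where the seed array [0, 1] is already one element too long.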

1 Comment

Voted up. I believe you could make it even more succinct by having one push call with two parameters.
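That suggestion would look like this (a sketch; push accepts any number of arguments, and the pop handles even N):

    let b = [0, 1];
    const N = 6;
    for (let i = 1; i <= N / 2; i++) {
      b.push(b[i], b[i] + 1); // both derived counts in one call
    }
    if (b.length > N + 1) b.pop();
    console.log(b); // [0, 1, 1, 2, 1, 2, 2]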
0

use floor():

sum += Math.floor(value%2);
value = Math.floor(value/2);

I guess your algorithm works in some typed language where integer division results in an integer.
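Applying that fix to the asker's function gives a working version (a sketch; the floor on the remainder is not strictly needed once the quotient is floored, since value then stays an integer):

    var countBits = function(n) {
        var arr = [0];
        for (var i = 1; i <= n; i++) {
            var sum = 0;
            var value = i;
            while (value != 0) {
                sum += value % 2;              // lowest bit
                value = Math.floor(value / 2); // integer division
            }
            arr.push(sum);
        }
        return arr;
    };

    console.log(countBits(2)); // [0, 1, 1]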

Comments

0

Here's a very different approach, using the opposite of a fold (such as Array.prototype.reduce) typically called unfold. In this case, we start with a seed array, perform some operation on it to yield the next value, and recur, until we decide to stop.

We write a generic unfold and then use it with a callback which accepts the entire array we've found so far plus next and done callbacks, and then chooses whether to stop (if we've reached our limit) or continue. In either case, it calls one of the two callbacks.

It looks like this:

const _unfold = (fn, init) =>
  fn (init, (x) => _unfold (fn, [...init, x]), () => init)

// Number of 1's in the binary representation of each integer in [`0 ... n`]
const oneBits = (n) => _unfold (
  (xs, next, done) => xs .length < n ? next (xs .length % 2 + xs [xs .length >> 1]) : done(),
  [0]
)

console .log (oneBits (20))

I have a GitHub Gist which shows a few more examples of this pattern.

An interesting possible extension would be to encapsulate the handling of the array-to-length-n bit, and make this function trivial. That's not the only use of such an _unfold, but it's probably a common one. It could look like this:

const _unfold = (fn, init) =>
  fn (init, (x) => _unfold (fn, [...init, x]), () => init)

const unfoldN = (fn, init) => (n) => _unfold (
  (xs, next, done) => xs .length < n ? next (fn (xs)) : done (),
  init
)

const oneBits = unfoldN (
  (xs) => xs .length % 2 + xs [xs .length >> 1], 
  [0]
)

console .log (oneBits (20))

Here we have two helper functions that make oneBits quite trivial to write. And those helpers have many potential uses.
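For instance, reusing the same helpers for a different sequence (a hypothetical second use; the fibs name is mine):

    const _unfold = (fn, init) =>
      fn (init, (x) => _unfold (fn, [...init, x]), () => init)

    const unfoldN = (fn, init) => (n) => _unfold (
      (xs, next, done) => xs .length < n ? next (fn (xs)) : done (),
      init
    )

    // Each new element is the sum of the last two already computed.
    const fibs = unfoldN (
      (xs) => xs [xs .length - 1] + xs [xs .length - 2],
      [0, 1]
    )

    console .log (fibs (8)) // [0, 1, 1, 2, 3, 5, 8, 13]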

Comments

0

I've done it like this, with bitwise operators:

function countBits(n) {
    const arr = new Array(n + 1).fill(0);
    for (let i = 0; i <= n; i++) {
        // i >>> 1 drops the lowest bit; i & 1 is that bit,
        // so each count reuses the already-computed count for i >>> 1.
        arr[i] = arr[i >>> 1] + (i & 1);
    }
    return arr;
}

2 Comments

Welcome to Stack Overflow! Please take the tour, visit the help center and read up on answering questions well. While this code looks like it answers the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value.
As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
