Every programming language I can think of does an optimization where the boolean operators determine flow. It seems like a good idea at first, but it's also inconsistent. Here's the phenomenon I'm talking about, starting with C:

C

#include <stdio.h>

int foo()
{
    printf("foo\n");
    return 1;
}

int bar()
{
    printf("bar\n");
    return 1;
}

int main(int argc, char** args)
{
    printf("%d\n", foo() || bar());
    return 0;
}

Output:

foo
1

The question is: why doesn't it print this?

foo
bar
1

And the answer is: bar() didn't run, because C said to itself:

foo() || bar()... that's two things or-ed together. The first thing is true, which means that no matter what the second thing is, the whole expression is true, so we don't have to evaluate it.

If you've never seen this before, you might be forgiven for thinking it's a C thing. But weirdly, it seems fairly universal.

Python

def foo():
    print "foo"
    return True

def bar():
    print "bar"
    return True

print(foo() or bar())

Output:

foo
True

Lua

function foo()
    print("foo")
    return true
end

function bar()
    print("bar")
    return true
end

print(foo() or bar())

Output:

foo
true

JavaScript

function foo()
{
    console.log("foo")
    return true
}

function bar()
{
    console.log("bar")
    return true
}

console.log(foo() || bar())

Output:

foo
true

Java

class MyFooBar
{
    public static boolean foo()
    {
        System.out.println("foo");
        return true;
    }

    public static boolean bar()
    {
        System.out.println("bar");
        return true;
    }

    public static void main(String[] args)
    {
        System.out.println(foo() || bar());
    }
}

Output:

foo
true

Rust

fn foo() -> bool
{
    println!("foo");
    true
}

fn bar() -> bool
{
    println!("bar");
    true
}

fn main()
{
    println!("{}", foo() || bar());
}

Output:

foo
true

Ruby

def foo
    print "foo\n"
    true
end

def bar
    print "bar\n"
    true
end

print (foo() or bar())

Output:

foo
true

Perl

sub foo {
    print "foo";
    return 1
}

sub bar {
    print "bar";
    return 1
}

print (foo or bar)

Output:

foo1

It's kinda remarkable to me that all those languages made the same call, especially when they disagree about so many other things. But I guess I can see why: it comes in handy. In Perl, you often see this:

open(my $fh, "<", "somefile.txt") or die "error!";

So it kinda says "open or die", which is whimsical, and maybe reads naturally to English speakers.

In JavaScript and Lua you'll sometimes see a similar trick for getting a default value when a variable is undefined. In JavaScript, it might look like this:

function resize(height, width)
{
    height = height || 600;
    width = width || 800;
    // ...
}
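
The Lua version of the same trick reads almost the same, with "or" supplying the default whenever an argument is nil:

local function resize(height, width)
    -- nil arguments fall through to the defaults
    height = height or 600
    width = width or 800
    -- ...
end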

In Lua, there's also a convention of using "and" and "or" to hand-roll a ternary operator, so you'll see this:

local width = isBig() and 300 or 30

instead of:

local width
if isBig() then
    width = 300
else
    width = 30
end

It's just one line, so that's better, right?
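
Well, watch out: the one-liner has a classic catch. If the middle value is false or nil, the "or" arm wins no matter what the condition says. A contrived example:

local function isBig() return true end

-- reads like "if isBig() then false else true", but it isn't:
-- "isBig() and false" is always false, so the "or" arm always wins
local enabled = isBig() and false or true
print(enabled)  --> true, regardless of what isBig() returned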

Eh... I don't know about all this. It's neat how these short expressions read, but the underlying optimization represents a tiny inconsistency in the language. "and" and "or" are binary operators, yet they have something that no other binary operator has: they control flow. With other binary operators...

foo() + bar()
foo() - bar()
foo() * bar()
foo() / bar()
foo() | bar()
foo() & bar()
foo() == bar()
foo() != bar()

...you can depend on foo() and bar() to both execute. But not with boolean operators? Huh?

Consider this C code:

#include <stdio.h>

int foo()
{
    printf("foo\n");
    return 0;
}

int bar()
{
    printf("bar\n");
    return 0;
}

int main(int argc, char** args)
{
    printf("%d\n", foo() * bar());
    return 0;
}

Output:

foo
bar
0

Both calls ran, even though a similar optimization could have worked here. Remember the rationale with this:

foo() or bar()

If foo() is true, we needn't bother running bar() because we know the value of the whole expression already.

That's also the case here when foo() is 0:

foo() * bar()

and yet, the language does not cull the call to bar().

I don't like it. To me, it would be cleaner if I could think of boolean operators as two-argument functions. Instead, they're not like functions at all; they're more like if-statements, because they determine what code executes.

What if we made a language without this exceptional behavior? What if "and" and "or" were just functions? Come on, let's dream big!
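
For a taste of what that might look like, here's a sketch in Lua with a hypothetical my_or function standing in for the operator. Like any other function, it gets both of its arguments evaluated before it's called:

-- hypothetical "or" as a plain two-argument function
local function my_or(a, b)
    if a then return a end
    return b
end

local function foo()
    print("foo")
    return true
end

local function bar()
    print("bar")
    return true
end

print(my_or(foo(), bar()))

Output:

foo
bar
true

No special cases: foo() and bar() both run, just like they would with any other binary operator.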