Zeros, Ones, and Twenty-First Century Reasoning

Of the handful of books that have changed my life, The Next 100 Years, written in 2009 by George Friedman, remains one of my favorites.  Not only do I think it is an extraordinary book, it is also the only book for which I can name a favorite chapter (Chapter 3)1.  That chapter contains a straightforward observation about computers, from which Friedman draws some extraordinary conclusions.

“The computer is based on binary logic… To a computer, everything is a number, from a letter on a screen to a bit of music. Everything is reduced to zeros and ones... It is a powerful and seductive tool. Yet it operates using a logic that lacks other, more complex, elements of reason. The computer focuses ruthlessly on things that can be represented in numbers. By doing so, it also seduces people into thinking that other aspects of knowledge are either unreal or unimportant. The computer treats reason as an instrument for achieving things, not for contemplating things. It narrows dramatically what we mean and intend by reason. [much later] Computing culture is also, by definition, barbaric. The essence of barbarism is the reduction of culture to a simple, driving force that will tolerate no diversion or competition. The way the computer is designed, the manner in which it is programmed, and the way it has evolved represent a powerful, reductionist force. It constitutes not reason contemplating its complexity, but reason reducing itself to its simplest expression and justifying itself through practical achievement. Pragmatism, computers, and Microsoft (or any other American corporation) are ruthlessly focused, utterly instrumental, and highly effective. The fragmentation of American culture is real, but it is slowly resolving itself into the barbarism of the computer.”

A computer can, or it cannot.  If code is wrongly written, it does not work.  Occasionally we will enjoy the result anyway, but a computer isn't capable of correcting itself.  Give a computer flawed inputs and it proceeds without hesitation, reinterpretation, or correction.  It executes exactly what it is given.  This is the essence of I, Robot: the robots follow the Three Laws of Robotics, yet the results are still unexpected.
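A trivial sketch of that literalism, in Python (the function, the bug, and the numbers are all mine, invented for illustration):

```python
def average(scores):
    # Intended: sum divided by the number of scores.
    # The bug divides by a hard-coded 10 instead -- and the
    # computer executes it exactly as written, without objection.
    return sum(scores) / 10

print(average([80, 90, 100]))  # prints 27.0, not the intended 90.0
```

The machine raises no error and offers no correction; the mistake is ours to notice.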

That's fine as far as computers go.  But because we engage so much with computers - and now, through LLMs, have conversations with them - they've begun to change us.  We have been led to believe knowledge is useful only when it can be applied.  But humanity should not behave like computers.  The world is not zeros and ones, and we should not experience it as such.

Human language and behavior are very different from the language of computers.  Meaning shifts with tone, context, and history.  Words can mean different things to different people, and even different things to the same person at different times.  Recognizing this forces us to confront competing views and priorities, which forces us to acknowledge that the world cannot be so easily defined and categorized.

But the more we live alongside computer systems that provide clean outputs, the more we start expecting clean outputs everywhere else. Something is either the best or the worst. Five stars or zero. Genius or fraud. Hero or villain. “It depends” starts to sound evasive.

When the world out there can only be one thing or the other, there's less justification for understanding the other side.  What is extremism, after all, but binary thinking with moral passion attached?  Once we internalize the computer's structure, everything is either a zero or a one.  0.5, or 2, or -1 become impossible to comprehend.  Nuance starts to seem like corruption, not wisdom.

You can hear this register everywhere.  Donald Trump, despite being famously uninterested in technology, often talks in binary superlatives and catastrophes: everything is “the best,” “a disaster,” “perfect,” “a total failure.”  It is computational language.  The grammar of the machine has become the grammar of public life, which means public life has become a battle of 1 versus 0.  Nuance is rarely tolerated.

As an experiment, I drafted this essay with ChatGPT.  It’s unusual for me to write something collaboratively.  My intention is to take what the machine produces and add nuance through more prompts or direct edits.

(The parts I underlined are direct quotes from my prompt about this collaboration.  It reorganized my thoughts, but to say it created this paragraph - much less this essay - would be hard to argue.  Surprisingly, it kept trying to insert the same phrases over and over into other parts of this essay.  I'd remove them and it'd insert them again.  It never tried to rework them in some attempt to persuade me they actually fit.  It simply thought it was the right phrase.)

But the reduction isn’t always flattening.  Computers work precisely because they force ambiguity out of the system.  They demand definitions.  Maybe the program doesn't do what you want, but it does what it was told.  That can help us determine whether we are saying what we really mean, or whether we should revisit our approach and be more precise.  But we need opportunities for those tests.  The scientific method doesn't prevent false hypotheses - it very much requires them.  You just can't get too attached.

To build a bridge or write a law, you need to have a nuanced discussion about use and popular support and funds and safety, etc.  But ultimately you need to decide on your materials, placement, contractor, etc.  You need the ones and zeros for it to come into being.

The real goal isn’t wholesale rejecting the reduction. It’s sequencing it properly.  Plan, then do.  First, we think in gradients. We wrestle. We qualify. We hold competing values in tension.  We understand the context.  We make a decision with those nuances in mind.  But when we move on to the step of doing, then we allow the 1s and 0s to become important.  Bearing in mind, of course, we are not computers, and can always go back and adjust our plan.


I would estimate ChatGPT is responsible for 33% of this essay.  I gave them some ideas and provided the source text, and they wrote it into paragraphs, which I rearranged, edited, and cut, adding in some original passages of my own.  When I asked Chat to estimate their responsibility, they said 25%-35%, citing that the idea was mine and the final essay was mine (by which I think they mean they had edits I continually rejected).

1  For this post I reread chapter 3 of The Next 100 Years.  I realized the section on computers occupies only pages 61–64. I briefly considered dog-earing just those pages to revisit in the future.  "Considered" makes it seem I really thought about it.  I mean the thought popped into my head. But that would be a narrowing.  I should read those pages in context. Reading one chapter of a 250-page book is already a narrowing, but reading a dozen pages instead of 250 feels like a reasonable sacrifice to efficiency. Slicing it down further feels cheap.  Given the point here is that we need more nuance, not less, it felt like an unbecoming impulse - and one to reject.  But I had the impulse, nonetheless, and wanted to share that.

