Friday, October 3, 2008

Meeting Barack Obama

I had the privilege of briefly meeting Barack Obama at a fundraiser in early September. Seeing and hearing him speak first-hand stripped away all the layers of media and punditry that have made it so difficult to assess which candidate is best for our country. Meeting him not only confirmed for me that Obama is the best choice, it also confirmed for me that he has what it takes to be a great leader. One of the major epiphanies I had following the meeting centered on his economic plan: how it would impact me personally and how it would impact America as a nation if he executes his plan.

One of the many components of the Obama economic plan is to roll back the Bush tax cuts on the wealthiest Americans, back to the level they were under Bill Clinton. We happen to fall within this group, yet we fully support it. Why would we advocate a tax plan that costs us more money? Because it has a better return on investment than the alternative, which calls for further tax cuts despite rampant federal spending, thus putting the stability and security of our country at great risk. Let me begin with a crash course on federal budget economics.

Tax revenue is the primary source of income for the federal government. When it spends more than it receives, it has a budget deficit. To fund a deficit, it borrows money by issuing government bonds to domestic lenders and sovereign bonds to foreign lenders. These bonds are considered risk-free because the government can raise taxes, reduce spending, or even print more money if necessary to ensure the bond can be redeemed (paid back in full) at maturity.

Therefore, with a budget deficit, part of the tax revenue received is spent on interest payments to debt holders. New debt issued at higher interest rates due to inflation costs more. Interest payments made to foreign debt holders fluctuate in cost as the value of the US Dollar changes. This is because sovereign bonds are denominated in the foreign currency, and a cheaper dollar means more dollars are required to buy the foreign currency for interest payments. The worst-case scenario is when tax revenue decreases and spending increases over a long period of time, because it pushes the national debt to levels that could be catastrophic to the economy and the value of our currency.
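To make the currency effect concrete, here's a quick back-of-the-envelope sketch. All figures are made up for illustration; the point is only the mechanics:

```python
# Hypothetical figures: interest owed on a foreign-denominated bond.
def usd_interest_cost(coupon_foreign, usd_per_foreign):
    """Dollars needed to pay a coupon denominated in a foreign currency."""
    return coupon_foreign * usd_per_foreign

# Suppose a bond obliges us to pay 1,000,000 units of a foreign currency.
coupon = 1_000_000

# If the dollar weakens (more dollars per unit of foreign currency),
# the same coupon costs more in dollars.
cost_strong_dollar = usd_interest_cost(coupon, 1.00)  # 1 USD per unit
cost_weak_dollar = usd_interest_cost(coupon, 1.25)    # dollar is 25% weaker

assert cost_weak_dollar > cost_strong_dollar
```

The coupon obligation never changes; only the number of dollars needed to meet it does, which is why a falling dollar quietly raises the cost of servicing foreign-held debt.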

As of March 2008, our national debt was about $9.6 trillion, of which $5.4 trillion was held by the public. Public debt is lent by individuals, businesses, and both local and foreign governments. Almost half of this (over $2.6 trillion) is sovereign debt held by foreign countries. Russia owned $60 billion, oil exporting nations owned $153 billion, and China owned a whopping $502 billion in US debt. Altogether, we spent more than $250 billion in FY08 on interest alone on our national debt. Only defense, Social Security, and Medicare cost more.

Since then, we’ve spent over $60 billion in Iraq. The Bear Stearns bailout and the nationalization of Fannie, Freddie, and AIG will cost more than $300 billion. Now Washington is asking for $700 billion more to absorb a fraction of the toxic debt behind the currently unfolding financial crisis. So we’re looking at more than a trillion dollars in additional national debt, and we haven’t even addressed the potential spending burden of the growing crises in Afghanistan, Pakistan, and Georgia!

As a result of all this spending, our national debt has surged to record levels. A year ago, Congress raised the debt limit to $9.8 trillion. In July, they raised it to $10.6 trillion. With the bailout package currently being legislated, it is now expected to rise to $11.3 trillion, and there appears to be nothing stopping it from going to $12 or $14 or even $15 trillion in the next few years. This would be sustainable if tax revenue were growing with it, but it is not. Tax revenue is constrained by the high unemployment rate, the lack of income growth, and by businesses that exploit loopholes or move revenue offshore. And the Republicans in power over most of the last seven years have been cutting taxes (i.e., reducing tax revenue) in the belief that it will stimulate the economy, which has further exacerbated the ballooning deficit and accumulating national debt.

Imagine you have a stable job and live in a house that you financed with a $400,000 loan from a bank in Canada. You pay your bills on time, generally paying more than the minimum on your credit card each month, but the balance has increased dramatically this year. One day you get a throbbing toothache that requires emergency dental surgery, but dental insurance only covers a fraction of the cost due to a high deductible. High gas prices have doubled how much you spend to get to and from work every day. The dental surgery, high gas prices, and general inflation have made it difficult for you to make even the minimum payment. And since the value of the US Dollar has dropped considerably, monthly payments on the Canadian loan have almost doubled because of the required conversion to the Canadian Dollar.

Suddenly, you find yourself in a cash flow negative situation. You try to trade your car in for a more fuel-efficient model, but you don’t have the money saved up to make the down payment, nor the capacity for the monthly payments. You look for other ways to cut costs, such as cancelling memberships, turning down the heat in your house, using Netflix instead of going to the movies, and downgrading your cable service, but it’s not enough. You explore refinancing your home loan and/or establishing a home equity line of credit, but your debt-to-equity ratio is too high to qualify. The only alternative is to get another credit card, but due to inflation and a lower credit score from the existing credit card balance, the new card carries a high APR.

Now imagine a solution to your income problem taking the Republican approach: rather than ask for a raise or find another income source, you show up at work and ask for a salary cut. Your rationale is that it will allow your employer to develop more business, and thus it will increase your chances for an excellent review and higher annual bonus. Despite best intentions, less income only accelerates your descent into bankruptcy, so it is clearly not a feasible approach. Our federal government now finds itself in a similar conundrum: unforeseen natural disasters and economic problems, inflation, higher interest rates, and exposure to currency risk on foreign debt redemptions all contribute to our ballooning national debt. It is not feasible for the federal government to do the equivalent of a salary cut and lower tax revenue in hopes that it will stimulate the economy.

The Obama economic plan calls for addressing this problem by optimizing both taxation and spending to pull us out of this rapid descent into economic oblivion. Despite widespread belief, Obama’s plan does not raise taxes across the board. Specifically, the plan states that if your family’s adjusted gross income is $90,000 or less, you’ll get a tax cut. If it is between $90,000 and $250,000, your income tax stays the same. Anybody above $250,000 gets rolled back to the rates in effect under Clinton: 36% for the second-highest tax bracket and 39.6% for the highest. Effectively, Obama is calling for redistributing the tax cuts given to high-income taxpayers under the Bush administration to over 150 million taxpayers who fall below the top two income tax brackets.
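As a rough illustration of what the rollback means in dollars, here's a deliberately simplified sketch. This is not the actual tax code: it applies only a single top-bracket rate change above an assumed $250,000 threshold and ignores deductions, filing status, and the second bracket entirely.

```python
# A deliberately simplified sketch (not the actual tax code): estimate the
# extra tax owed when the top marginal rate rolls back from 35% to 39.6%.
# The threshold and single-rate simplification are assumptions.
def extra_tax(agi, threshold=250_000, old_rate=0.35, new_rate=0.396):
    """Additional tax on income above the threshold after the rollback."""
    taxable_above = max(0, agi - threshold)
    return taxable_above * (new_rate - old_rate)

# A family with $250,000 or less sees no increase under this sketch.
assert extra_tax(250_000) == 0

# A family with $350,000 AGI pays more only on the last $100,000:
print(round(extra_tax(350_000)))  # roughly $4,600 under these assumptions
```

The key property, which the rhetoric often obscures, is that the increase is marginal: income below the threshold is untouched no matter how much you earn.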

Despite the fact that Obama calls for more of my money to be taken from me—money to feed my kids and put clothes on their backs, or perhaps go on a family vacation—I think it is an essential component to restoring economic stability. More money in the hands of those who really need it brings back that extra cappuccino, brings people out to the movies again, affords higher car payments for more fuel efficient automobiles, etc. 150 million people consuming more stimulates the economy by growing businesses, whose additional sales result in more tax revenue for the very government that must reduce the federal budget deficit to avert a major financial crisis. I’d rather suck it up with a moderate income tax increase than the alternative – watching everything I own go down the tubes along with the economy.

Friday, August 15, 2008

No Rest for the Wicked

Representational State Transfer (REST) is a popular architectural style used in the construction of systems whose components are distributed across a network. First conceived by Roy T. Fielding in his famous doctoral dissertation Architectural Styles and the Design of Network-based Software Architectures, REST as an architectural style has become popular through its application in Web architectures with three pervasive technologies: HTTP, XML, and URI.

Developers have embraced these technologies, which are elements of the Web architecture, as REST itself. The result, unfortunately, is that REST has now become synonymous with building applications that use HTTP to transfer XML representations of resources identified by URIs around a network. Developers are arguing the pros and cons of something wacky like tunneling RMI calls over SOAP with GET vs. POST as if it were a problem of REST philosophy! This is way off the mark. While some of these ideas can be put to use in a way that results in well-designed systems, the problem is that developers jumped on the bandwagon too early and left behind the true nature and timeless value of the REST architectural style.

Bloggers seem to be saying that to be RESTful, you must transfer content around the system using HTTP GET, PUT, POST, and DELETE. You must identify resources with URIs, and representations of those resources must be transferred in XML. So here we go again: the Majors of the world have socially engineered the Boxer masses once again. But the solution doesn't always fit the problem, does it? One could argue that solutions never fit the problem - that it is a subjective notion. But true practitioners know what I mean. Solutions forced into problem contexts can quickly metastasize into poor architecture decisions and signal the beginning of a brutish Hobbesian future for REST proper, RESTful system development, and all those involved (or at least those who get blamed for it).

The classic problem rears its head once again. The business ends up committing far more time and resources than originally budgeted, and there is no going back. Critical design flaws emerge and the system becomes very costly to maintain. Well into production, when its progenitors are long gone, the remaining team struggles to maintain a poorly performing system that is difficult and frustrating to use. It is so brittle and hard-coded that, outside of the depressing task of maintenance, a complete rewrite is necessary. Unfortunately, many companies find themselves in this situation, and all of it could have been prevented if a little more thinking had gone into it before the project started. Instead, REST gets blamed as a past trend by sellers of the latest snake oils in the market.

Introduction to REST

REST is an architectural style composed of a collection of recurring architectural themes that transcend the constraints of any specific set of technologies or protocols used to build a system. According to Fielding in section 5.2 of his dissertation, the REST style is an abstraction of the architectural elements within a distributed hypermedia system.

REST ignores the details of component implementation and protocol syntax in order to focus on the roles of components, the constraints upon their interaction with other components, and their interpretation of significant data elements.

The Web architecture is a single application—just one of infinitely many possible examples—of the REST architectural style. To dig a little deeper into what this means, let's explore the metaphysics of REST, look at a message passing architecture as another example of applied REST, and then use parts of Fielding's dissertation and subsequent writings to show beyond a reasonable doubt that I'm not completely off my rocker.

REST as Applied Perdurantism

Okay, okay, don't let the word scare you away. Resources are a key abstraction in the REST style. More specifically, a resource R is a temporally varying membership function MR(t), which at time t maps to a set of values. The values are resource representations and/or identifiers. See? No mention of XML files on Web servers or HTTP to communicate representations of them. Great, this is music to my ears! I happen to be a fan of the school of metaphysical perdurantism. Just like Web architecture is applied REST, REST is applied perdurantism. Let's waste a little time exploring this concept further.

The idea behind perdurantism is that material objects extend through space, having different spatial parts in different places, and also persist through time via temporal parts. So objects are four-dimensional entities: their spatial parts occupy the three dimensions of space, and the collection of those spatial parts at any given instant forms a temporal part along the fourth dimension, time.

Object identity takes on an interesting characteristic with this approach: noticeable changes in the set of spatial parts making up an object at any given instant can be indexed over time and identified via epochs. So under perdurantism, person X from the moment of conception until wisdom teeth are extracted, and person X from that point until now, are considered the same person, despite the fact that many changes have occurred before, during, and after the extraction.

How about an example that doesn't make you cringe: a piece of code (say, a file foo.c) maintains the same identity from conception through all its revisions to the most recent version. We still call it foo.c. Referencing a specific revision or epoch is what Fielding is getting at with his "temporally varying membership function MR(t), where revision r or time t maps to a set of spatial parts" stuff. In short, line 15 of foo.c is just as much a part as version 15 of foo.c; they just reference different subsets of its set of parts (one spatial and one temporal).
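Fielding's membership function is easy to sketch in code. Here's a toy model (the file name and its contents are hypothetical) where a file resource keeps one identity across revisions, and MR(t) maps a point in time—here, a revision number—to a set of values:

```python
# A sketch of Fielding's "temporally varying membership function" MR(t):
# a resource maps a time (here, a revision number) to a set of values.
# The file name and revision contents are hypothetical.
class FileResource:
    """A source file as a perdurant: its identity spans all revisions."""

    def __init__(self, name):
        self.name = name
        self.revisions = []  # temporal parts, indexed by revision number

    def commit(self, lines):
        self.revisions.append(list(lines))

    def M(self, t):
        """MR(t): the set of values the resource maps to at time t."""
        return self.revisions[t]

    def line(self, t, n):
        """A spatial part: line n within temporal part (revision) t."""
        return self.M(t)[n]

foo = FileResource("foo.c")
foo.commit(["int main() {", "  return 0;", "}"])
foo.commit(["int main() {", "  return 1;", "}"])

# Same identity ("foo.c"), different temporal parts:
assert foo.M(0) != foo.M(1)
assert foo.line(1, 1) == "  return 1;"
```

Notice that nothing here mentions HTTP or XML: the resource abstraction stands on its own, exactly as the dissertation defines it.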

How does this concept apply to resources in REST? Resources are a composite of the set of all spatiotemporal parts for a material object, and representations are immutable reflections on a subset of the set of parts that have identity in a resource, and whose content can take on different forms when transferred between components. Moreover, resource identifiers are abstractions that allow components involved in a transfer to identify and select subsets of the resource's parts to be transferred.

Looking back at section 5.2 of the dissertation, Fielding says a resource identifier is used to identify the particular resource involved in an interaction between components. Rather than enforce a particular standard, the author chooses a resource identifier that best fits the nature of the concept being identified. So maybe the author chooses to identify line 15 of the file resource with something like foo.c#L15, and version 15 of it with something like foo.c?rev=15.

This fits nicely with perdurance, since each resource has its own composition of spatial and temporal parts that can be identified. Components exchanging representations of these resources need a priori knowledge of the resource and how to identify parts of it. While REST does not mandate use of a particular standard, use of the URI as a standard for identification is certainly useful, especially since its adoption is somewhat universal and has far transcended the Web.

There is an ongoing debate in the philosophical community about the notion of an object, its parts, and identity in terms of perdurantism vs. competing schools such as endurantism. The discourse is not limited to architectural styles for building distributed systems, nor to objects materially composed of atoms, bits, or any hybrid thereof.

Just as the REST architectural style for distributed systems can be seen as an application of perdurantism, and just as the Web architecture is an application of REST, a message passing architecture can be an application of REST as well.

Message Passing Architecture as Applied REST

Message passing architectures, also referred to as publish-subscribe or pub/sub architectures, have become popular over the years in large, event-driven distributed applications. Rather than require components to actively identify and pull representations of resources from other components, the data is more efficiently pushed out to them based on a previously established indication of interest, usually via subscription. We'll use this as our non-Web example of applied REST.

Subscribers typically subscribe to a subject (i.e., the resource identifier), and publishers publish notifications, events, or data updates through this subject. An example would be a real-time market data feed, where the subject is composed of segments that identify a particular spatial part of the service (e.g., execution venue and symbol), and temporal parts published are real-time quote and trade data updates. Some publishers, especially those providing trade and quote feed services, communicate session-level sequence numbers and allow subscribers to submit sequence inquiries if updates are missed.
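A bare-bones sketch of this subject-based pub/sub pattern might look like the following. The subject names and payload fields are hypothetical, and a real message bus would add wildcard matching, sequence recovery, and network transport:

```python
# A minimal subject-based pub/sub sketch. Subject segments identify
# spatial parts of the service (the venue and symbol are hypothetical).
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        """Register interest in a subject (the resource identifier)."""
        self.subscribers[subject].append(callback)

    def publish(self, subject, update):
        # Push the temporal part (a real-time update) to every subscriber
        # that registered interest in this subject.
        for callback in self.subscribers[subject]:
            callback(update)

bus = MessageBus()
received = []
bus.subscribe("NASDAQ.MSFT.QUOTE", received.append)

bus.publish("NASDAQ.MSFT.QUOTE", {"bid": 26.10, "ask": 26.12, "seq": 42})
bus.publish("NYSE.IBM.QUOTE", {"bid": 115.0, "ask": 115.1, "seq": 7})  # ignored

assert len(received) == 1 and received[0]["seq"] == 42
```

The inversion relative to the Web is the interesting part: instead of a client pulling a representation via a request, the publisher pushes representations to every component that declared interest in the identifier.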

This message passing architecture can meet the constraints of the REST architectural style. Fielding's dissertation provides web examples. Let us look at the architectural elements of REST and explore some examples for our message passing architecture:

Data Elements
  • Resource: order, basket, order book, trade, trade blotter, execution report, symbol, montage, exchange, NBBO, etc.
  • Resource Identifier: URI to identify the spatiotemporal "shape" of data to be transferred between components, as a subset of all parts that comprise a given resource.
  • Resource Metadata: caching semantics, version info, sequence number, identity of new resource, content type mappings; made available in a metadata directory service that exists in a system configuration manager.
  • Representation: full initial or current data snapshots; delta updates communicated to subscribers; updates with before and after images that include in and/or out of focus disposition.
  • Representation Metadata: content type, sending time, sequence number, checksum, encryption method, content length, etc.
  • Control Data: message type, computer or location identifier for sender or target (or any intermediate component in between), retransmission semantics such as possible duplicate, possible resend, original sending time, last sequence number processed.
  • Client: facilitates connectivity for initiator of communication such as a subscriber or requester; hides details such as use of connection retry policies to handle reconnects.
  • Server: same as client connector, except facilitates connectivity for receiver of client communication such as a publisher or request handler.
  • Cache: located in both the client and server to reduce latency for data dependencies that require injection of external content to send or process requests.
  • Resolver: resolves computer or location identifiers into IP addresses and ports of target or intermediate components.
  • Tunnel: SOCKS proxy creates an SSL tunnel to a message router on behalf of any clients behind a firewall.
  • User Agent: initiator of communication such as front-end GUIs for front office sales & position traders, middle office, system administrators, and test simulators.
  • Proxy: instance of a content-based router selected by initiator components to broker queued requests or perform pub/sub message routing.
  • Gateway: sits within same physical machine as the origin server and encapsulates process configuration and management of fail-over routing of content to backup message routers as well as routing requests to be processed by components operating at various nice levels.
  • Origin Server: the ultimate resource managers such as the system configuration manager, order manager, position manager, master blotter, market data server, market manager, etc.
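To tie a few of these data elements together, here's a hypothetical sketch of how control data, representation metadata, and a representation might be composed into a single message in this architecture. All field names are assumptions for illustration, not a real wire format:

```python
# A sketch of how the REST data elements above might shape a message
# in this message passing architecture. All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Message:
    # Control data: message type, routing, and retransmission semantics.
    msg_type: str                 # e.g., "SNAPSHOT" or "DELTA"
    sender: str
    target: str
    possible_duplicate: bool = False
    # Representation metadata: describes the payload that follows.
    content_type: str = "application/json"
    sequence: int = 0
    # Representation: the full snapshot or the delta update itself.
    payload: dict = field(default_factory=dict)

snapshot = Message(
    msg_type="SNAPSHOT",
    sender="market-data-server",
    target="trader-gui-7",
    sequence=1,
    payload={"symbol": "IBM", "last": 115.05},
)

delta = Message(
    msg_type="DELTA", sender="market-data-server", target="trader-gui-7",
    sequence=2, payload={"symbol": "IBM", "last": 115.07},
)

assert delta.sequence == snapshot.sequence + 1
```

Separating control data from representation metadata, as REST prescribes, is what lets intermediaries like the content-based router or gateway make routing and retransmission decisions without ever inspecting the payload.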


The above should provide enough substance to show that a message passing architecture can fit within the constraints of the REST architectural style. Sure, we could perform an exhaustive analysis of the actual constraints and confirm that the above architecture conforms; however, my focus is not REST purity. It's exploring how distributed systems can be architected using the REST style. If you have any doubt remaining, look at what Fielding himself has to say about this topic.

In future blog postings, I hope to explore this message passing architecture in the form of a content-based router: a key component of a RESTful architecture that transfers representations between REST components with no dependency on HTTP GET or POST. I'll address how requests for representations are queued and dispatched to request handlers, and how representations are published to interested subscribers, without jumping on the REST bandwagon and confining the REST architectural style to the Web architecture.

Thursday, May 25, 2000

Do Web Sites Dream of Electric Sheep?


The Web became a name used to describe a common information space in which people communicate and share information in the form of web pages. These pages can contain links that point to private, local, or global web pages that may or may not be finished or even exist at all. The end user uses a web browser to view a document and click on hyperlinks that lead to other documents on the Web. Its definition represents a moment in history that will forever affect how we live our lives.

The Web has become an organism manufactured by humans that not only carries out work that is too boring, dangerous, or distasteful for humans, but also allows for rich communication and collaboration between its participants. As the sum total of all web sites in existence and the technologies used on them "come alive", we can imagine them becoming more and more like people, and in fact in many ways replacing them.

This paper peeks at a futuristic Web and examines the changing nature of personal and human identity from the context of man vs. machine.

A Digital History

The first successful run of a stored program on a computer occurred in 1948 on a small-scale experimental machine (SSEM) known as the "Baby". The program was written by Tom Kilburn, who designed and built the machine along with F.C. Williams. It had a 32-bit word length, serial binary arithmetic using 2's complement integers, a single-address format order code, a random access main store of 32 words extendable up to 8,192 words, and a computing speed of around 1.2 milliseconds per instruction.

Over 20 years later, Ed Mosher, Ray Lorie, and Charles Goldfarb invented Generalized Mark-up Language (GML) at IBM. In 1974, Goldfarb invented the Standard Generalized Mark-up Language (SGML). The idea was to create a mark-up language that could manage information too complex for any one person to understand, such as the documentation for nuclear plants, aircraft, and government laws and regulations, where accuracy had life-or-death significance.

The Internet was designed in 1973, published in 1974, and rolled out in 1983. It used TCP/IP as its key protocols for host-to-host communication. Vint Cerf and Bob Kahn defined the Transmission Control Protocol (TCP), which allowed for reliable connections between computers, and the Internet Protocol (IP), which routed packets of information sent from one computer to another. David Clark of the MIT Laboratory for Computer Science later served as the Internet's chief protocol architect.

Together, the Internet and SGML led to the invention of the World Wide Web (WWW, or the "Web") by Sir Tim Berners-Lee in 1990. It was the ultimate killer app – you never knew how much you needed it until you saw it. It used the Hypertext Transfer Protocol (HTTP) to communicate Hypertext Mark-up Language (HTML) documents, also known as "web pages", from a web server to a web client using a standard naming scheme called the Uniform Resource Locator (URL).


A global and rapidly increasing inertia facilitated the piecemeal evolution of the Web as a tool to connect people, computers, and devices together. It allowed them to communicate and collaborate in ways never imagined before. The global integration of rich content, social fabric, and electronic commerce has led to what is often called the connected economy. But it had one major issue – it wasn't well suited for non-graphical applications, such as data interchange between integrated businesses.

This led to the invention of the Extensible Mark-up Language (XML) in 1996 by a group of SGML experts. XML is an extremely simple dialect of SGML whose main purpose is to allow generic SGML documents to be served, received, and processed on the Web in a way that is as simple and widespread as HTML.

While the evolution of SGML, the Internet, the Web, HTML, and now XML has dramatically helped businesses communicate with their customers and partners, it only represents the first major step in the evolution toward a totally connected digital economy.

The Humans

Why weren't humans satisfied enough with life in caves? Since the beginning of time, both natural and artificial creation have facilitated biological and sociological evolution, and experiencing life has become both easier and more advanced. Newer inventions tend to be created from existing ones. As society became more and more advanced, the notion of value for some things became a commodity, and we began to trade value with others, often hoping to gain a competitive advantage. More than two millennia ago, a major economic paradigm shift occurred: trading value got organized.

As far back as 1800 BC, the Mesopotamians created institutionalized trading exchanges. Around 1400 AD, guilds were created to provide a context for self-regulation and cartel-like trading. And two years after the first American stock exchange was created in Philadelphia, 24 traders got together under a buttonwood tree at 68 Wall Street in New York City. They agreed to give each other preferential treatment in deals, creating what became the New York Stock Exchange. Eventually, technology facilitated more efficient exchanges, and the Internet has taken them to a whole new level.

Ticker machines were used to communicate price information to stockbrokers. The invention of the telephone facilitated richer communication between brokers and their customers. Exchanges themselves became more efficient when computers were used to perform analysis, and financial institutions all around them took on a whole new level of efficiency by using computers for their own analysis, processing, and to communicate with other institutions.

As the Web started to become mainstream, online discount trading firms emerged that allowed their customers to receive quotes and research, execute trades, and view trade history through the Web. They became popular so rapidly that online trade execution had traditional brokerages such as Merrill Lynch scrambling for an online presence with a competitive advantage. Within a few years, a large majority of all retail equities trading was executed online.

Today, products such as wholesale and retail energy are being traded online. Traditional financial institutions are beginning to announce clearing services for these products. And consortiums of financial institutions are creating scalable, reliable, organic financial networks that will allow participants and service providers to interact in ways that will totally redefine the global financial marketplace.

This—in conjunction with a global movement by all industry segments to buy and sell raw materials, parts, MRO, and other goods and services online through exchanges, auctions, and aggregation platforms—creates a global environment that will eventually blur the lines between traditional and non-traditional financial products and currencies. A global ecosystem of dynamic trading communities will eventually emerge that naturally follow liquidity and price elasticity in a frictionless, organic manner.

And the Servers that Serve Them

Even further back than trading exchanges—relatively near to the creation of humankind—people started using tools to make it easier to do things. What seems to have motivated this is the ability to perform activities like hunting, building shelter, and agriculture more efficiently.

As societies became more and more organized, newer tools and techniques were developed that allowed people to establish a craft and perform services in exchange for value. This value could consist of things like food, animals, and eventually derivatives such as currency. Currency, which served as a proxy for value, was an important component in the evolution of our economy.

It allowed for third parties such as exchanges and banks to emerge that provided a trusted, neutral platform for this value exchange.

Dramatic increases in population led to a demand for goods and services that could not be met by the creation of handcrafted goods. Resources such as tools and workers, who had established crafts qualifying them to perform specific tasks, were utilized and structured into manufacturing organizations that performed mass production. Manufacturing companies were able to produce goods much quicker, at a lesser cost, and in greater quantities. Then, these manufacturers would distribute them through a supply chain to retailers who would sell them directly to customers.

Mass production burgeoned and created the industrial era. Workers who manned assembly lines at manufacturing plants were definitely valuable in the production process. But machines were a key factor that facilitated mass production. They allowed companies to continue to innovate in ways that increased efficiency or made it easier for assemblers to be more productive.

As the industrial era gained critical mass, companies used capital to acquire physical assets that would allow them to increase efficiency and capacity. This led to the creation of capital markets such as the New York Stock Exchange that allowed companies to receive capital in exchange for sharing partial ownership. Since the industrial era was booming, capital markets attracted speculators that bought and sold shares in trading exchanges.

Exchanges gained so much momentum that price elasticity made it even more attractive to analyze and speculate on the future of these companies and the market overall. Machines such as stock tickers were used to make it easier for exchanges to transmit price information to traders and brokers at remote locations. Eventually, machines were used to make a wide variety of things easier: transportation, mass production, trading on capital markets, national defense, etc.

In 1948, a key breakthrough occurred: machines became smart. The first successful run of a stored program on a computer was the beginning of a whole new era. It would not only facilitate dramatic increases in industrial efficiency in the mass production of goods and services, but it allowed capital markets to become much more sophisticated in how stock market behavior was analyzed and forecasted. Many innovations in all other industry segments have led to increased efficiency and other benefits.

Today, people and businesses use private and public networks to disseminate real-time market data all over the world almost instantaneously. Most retail trading is done online through the Internet. People use automatic teller machines to withdraw money any time of the day or night.

Commercial aircraft like the Boeing 757 are capable of landing automatically with minimal human assistance. Neural networks, Monte Carlo simulations, and other analytical techniques are used by financial institutions to help traders make informed decisions that would be virtually impossible for most humans to reach alone.
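To make the Monte Carlo idea concrete, here is a minimal sketch of the kind of simulation a trading desk might run: estimating the probability that a stock finishes above a target price, assuming the stock follows geometric Brownian motion. All parameter values and the function name are illustrative, not any institution's actual model.

```python
import math
import random

def prob_above_target(s0, target, mu, sigma, t, n_paths=100_000, seed=42):
    """Estimate P(S_t > target) by simulating many terminal prices."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)  # one standard normal draw per path
        # closed-form terminal price under geometric Brownian motion
        s_t = s0 * math.exp((mu - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        if s_t > target:
            hits += 1
    return hits / n_paths

# Illustrative inputs: $100 stock, $110 target, 5% drift, 20% volatility, 1 year
p = prob_above_target(s0=100.0, target=110.0, mu=0.05, sigma=0.2, t=1.0)
```

The estimate converges toward the true probability as the number of simulated paths grows, which is exactly why this brute-force approach only became practical once machines could crunch the numbers.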

The explosion of computing machines in the 1950s made it easier to process large amounts of data. But the explosion of the Internet into the consumer and business space in the 1990s set the stage for a new era that would yet again change the way humans live. The Web and the Internet have turned computers into connected devices that let people collaborate, communicate, and coordinate activities with others regardless of their location on planet Earth.

While the mass adoption of telephones in the early 1900s allowed a person to communicate with others at specific locations, the mass adoption of cellular phones in the late 1900s allows a person to communicate with others regardless of their location. The notion of smart appliances emerged as well. And recent innovations in technology indicate that we are not far off from having households full of wirelessly connected appliances and devices which constantly communicate, collaborate, and coordinate to make our lives easier.

Since the beginning of mankind, people have striven to make doing things and living life easier and more efficient. Suddenly, a world containing billions upon billions of connected devices—tracking people’s location and every action they perform in order to make their lives easier—has become a reality. A pervasively connected future consisting of this kind of distributed, complex system of objects suggests that our perception of reality, identity, relationships, and privacy will experience a dramatic, perhaps frightening change.

The (Dark?) Future

Oil and electricity powered the Industrial Age, and bandwidth will power the Digital Age. Billions of objects around the world will come to life, discover, connect, and interact with other objects, and then be retired. Digital behavior in this massively distributed system will occur over a totally pervasive, part-wired and part-wireless infrastructure. The platform will comprise ubiquitous devices such as cars, refrigerators, televisions, computers, cellular telephones, and vending machines.

Eventually (and to a large extent, it's happening now) there will be a transparent but global electronic blizzard of light waves, radio waves, electric current, and who knows what else. As bandwidth, storage, and processing requirements increase exponentially, other mediums will be used to transmit signals and store memory. Even today, research groups at major corporations are investigating quantum systems, biological processing, molecular computing, and other exotic means of data crunching (see Corporate Logic by Robert Buderi), all driven by our insatiable desire to make life easier.

At the time this essay was written, computers and networks have already made working and living tremendously easier. As a consequence, some major side effects have emerged. Every time people use the Internet or create a new account, their identity and behavior are tracked, analyzed, and sold to mass marketers. Privacy has become a major issue. Corporations all around the world are attempting to utilize technology to automate, integrate, and dominate. This digital renaissance has changed the market climate in dramatic ways that have both hurt and helped.

Billions of dollars were spent to remediate machines suspected of not being Y2K compliant. An MCI network meltdown caused the Chicago Board of Trade to halt trading of derivatives for half a day. Viruses virtually paralyzed e-mail communication in entire corporations for days. The dot-com gold rush turned Internet-savvy kids into overnight millionaires. And quiz shows like Who Wants to be a Millionaire? drew sky-high ratings by intensifying the sense of greed in those who had missed out.

Already today, application service providers let companies put a link on their consumer-oriented web sites that customers can click to initiate a real-time chat session. But who says humans need to exist on the other end? As artificial intelligence is applied to collaborative filtering, and as more and more data about a company's customers is gathered, it will become increasingly possible to predict what a customer will do next.
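The collaborative-filtering idea can be sketched in a few lines: predict what a customer might want next by finding the most similar other customer (measured by purchase overlap) and recommending what that customer bought. The customer names, products, and function names below are all hypothetical, a toy illustration rather than any real recommender system.

```python
def jaccard(a, b):
    """Similarity between two purchase sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def recommend(purchases, customer):
    """Suggest items bought by the most similar other customer."""
    target = purchases[customer]
    best, best_sim = None, -1.0
    for other, items in purchases.items():
        if other == customer:
            continue
        sim = jaccard(target, items)
        if sim > best_sim:
            best, best_sim = other, sim
    # Recommend what the nearest neighbor owns that the customer doesn't
    return sorted(purchases[best] - target)

# Hypothetical purchase histories
history = {
    "alice": {"modem", "router", "webcam"},
    "bob":   {"modem", "router", "printer"},
    "carol": {"keyboard", "monitor"},
}
suggestions = recommend(history, "alice")  # bob is most similar to alice
```

Real systems use far richer signals and models, but the core step is the same: measure similarity across customers, then extrapolate from the behavior of the most similar ones.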

The art of cross-selling and up-selling will be computerized. Intelligent agents won't be human unless a customer's problem is too complicated for them to solve, and the transition from machine to human will be seamless. Everything will be relationship-centric, and technology will increasingly serve as a value enabler. As technology becomes more effective, and as bandwidth, storage capacity, and processing power grow, more and more of the customer-facing side of a business will be replaced. Customer service will become commoditized.

Perception of reality and identity will change. Cops will use intelligent agents that pose as 13-year-old girls in order to find pedophiles. Dynamic personalization of a consumer's context with automated, value-added services will occur through mutual discovery of mobile devices such as cars and cellular telephones via short-range radio. Everywhere a consumer goes, devices, agents, and services will be fighting for a piece of the action, like sharks preying on your existence.

Humans will eventually live in such a deep sea of illusion that the concept of where reality begins and ends, the authenticity of life experience, and our ability to discern human from machine will change us. In fact, our predisposition for waxing nostalgic, reliving memories, and keeping part of life private could blur into non-existence.

Baby Boomers will feel powerless through ignorance but attracted to how technology allows them to connect with others. Generation X will both fear and loathe this trend as its velocity increases. Generation Why will have a natural advantage in adapting to its digital evolution. But Generation Z may not even have the ability to understand how it could be different.


As the sum total of these ubiquitous devices and the infrastructure technologies that connect them "come alive", they will become more and more like people, and in many ways replace them. In twenty years, the Industrial Age will seem like the Medieval Age. Fear will walk hand in hand with the benefits the Digital Age will bring to our lives.

Will the struggle between machines and humans ultimately become The Immortal Game of the Digital Age? Too bad we won't live to know. But then again, who does?