Information Unbound Archive 1998 - 2000
An On-line Column by Erick Von Schweber

# 1: Microsoft's Millennium approach to Computing Fabrics presents weighty challenge to CORBA and Java

# 2: Computing Fabrics compared with distributed and parallel software technologies

# 3: Computing Fabrics - The BIGGER Picture
(a 4-part series)

# 3-1: Computing Fabrics extend the cell phone paradigm
# 3-2: Virtual Personal Supercomputers (VPSCs)
# 3-3: Computing Fabrics and next generation Interface Devices
# 3-4: Hyper-Reality - A Wild and Crazy GUI
# 4: Is History Repeating Itself?
The low-level procedurality of the past returns to haunt IT.
# 5: Object Based Semantic Networks™
Toward a Fusion of Knowledge Representation and the Relational Model

Information UNBOUND # 3-4
Also see: Computing Fabrics

Computing Fabrics: The BIGGER Picture

Part 4: Hyper-Reality - A Wild and Crazy GUI
Why 3D will become essential for human computer interface

January 8, 1999

Executive Summary: The command line required us to break the context of our work to issue computer instructions. The 2D GUI, realized with the desktop metaphor, integrated information access and computation into our work context. The next step in the evolution of human computer interface will integrate information access and computation into the many other contexts of daily life. Seamlessly integrating digital information with the real world will require bringing the former closer in form to the latter, a task that demands a true 3D interface.

The VPSC-powered Info "WalkMan"

Visualize an Info WalkMan: the processing power of a supercomputer in the form factor of a pair of sunglasses connected to a minidisc player, always connected to the Internet. The hardware and software to accomplish this will abound, but how will it all come together in the user experience?

Imagine a cocktail party circa 2010. As you wander about, equipped with your wearable, you see guests "visually highlighted" according to your own personal preferences and predilections, subject of course to personal security constraints. See someone you want to approach, but you don't know what to say? Not a problem, as your wearable provides their name, their interests, perhaps even the potential for a relationship (drawn from a wide variety of relationship types). The opportunities for forming both personal and professional relationships are considerable. From a distance you can tell what drinks are being served at the e-cash bar and how much they cost, where the bathrooms and fire exits are, what bands the bass player previously performed in, and the ingredients of the strange-smelling hors d'oeuvres offered to you on a serving tray.

Is such a scenario possible just over 10 years out? Consider the potential technological developments we've discussed in this series of columns that could well make this a reality. The flexibility and reconfigurability of Computing Fabrics present a new take on the issues of local vs. remote services and processing, joining (and extending) the ranks of cellular phones, network computing, and ubiquitous computing.

VPSCs, Virtual Personal Supercomputers, provide the mobile user with the computing resources their ever-changing needs and tasks demand, delivered over a low-latency connection to resources dynamically assembled from the nearby fabric. Such VPSCs will drive personal interface devices: wearable computers that provide a human computer interface via a personal display in a mobile form factor similar to sunglasses, superior to the user interfaces mediated by today's desktops and workstations, and supporting immersive experiences as well as augmented and enhanced reality.
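
To make the idea concrete, here is a minimal sketch of how a wearable might assemble a VPSC from nearby fabric nodes: discover what is advertised locally, then recruit the lowest-latency nodes until the task's compute demand is met. The node records, the latency threshold, and the greedy selection policy are all illustrative assumptions, not an actual Computing Fabrics API.

```python
# Hypothetical sketch: assembling a Virtual Personal Supercomputer (VPSC)
# from nearby fabric nodes. All names (FabricNode, assemble_vpsc, the
# sample nodes) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FabricNode:
    node_id: str
    gflops: float      # spare compute the node advertises
    latency_ms: float  # round-trip latency from the wearable

def assemble_vpsc(nodes, required_gflops, max_latency_ms=5.0):
    """Greedily recruit the lowest-latency nodes until demand is met."""
    usable = [n for n in nodes if n.latency_ms <= max_latency_ms]
    usable.sort(key=lambda n: n.latency_ms)
    vpsc, total = [], 0.0
    for node in usable:
        vpsc.append(node)
        total += node.gflops
        if total >= required_gflops:
            return vpsc
    return None  # the local fabric cannot satisfy the request

nearby = [
    FabricNode("kiosk-7", gflops=40.0, latency_ms=1.2),
    FabricNode("settop-3", gflops=25.0, latency_ms=2.8),
    FabricNode("tower-a", gflops=120.0, latency_ms=9.5),  # too distant
]
print(assemble_vpsc(nearby, required_gflops=60.0))
```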

Information in Context

The web-based information and services we have grown accustomed to will likely continue to expand and deepen, but the context in which we access this information and utilize these services will change dramatically, owing to our newly acquired mobility. Consider for a moment the range and variety of contexts you personally pass through in the course of a day: driving through familiar or unfamiliar territory, riding one or more forms of public transportation, shopping in physical environments, doing your job in one or more work environments, eating (including selecting what you eat, where you eat, and how your food is prepared), exercising, entertaining, socializing... you get the point.

No longer bound to the desk at work or the sofa at home, we will access information and web-based services throughout our typical and not-so-typical days, and we will want this information and these services integrated as seamlessly as possible into our routines.

There's a selective pressure at work here. If future wearables merely provide mobile access to expanding web content but lack tight integration with a user's changing contexts, then the requirement to break one's current context, such as shopping, in order to enter a computing context will restrict the size of the market for such wearables. How so? Only those people who are both comfortable in a computing context (e.g., those using the web today) and amenable to back-and-forth context shifting constitute a potential user base.

On the other hand, if future wearables provide a seamless integration of web content within the multiple contexts of a user's day, without breaking any of those contexts, then the market potential becomes far, far larger. The big money is in this latter group, not the former. This opportunity should encourage the development and widespread adoption of a seamless integration between one's daily contexts and information. Since physical reality is unlikely to radically shift its form to support this integration, the onus is on information and services to shift theirs.

Enhancing Reality

How can web content be integrated into daily life? Simply overlaying current web pages onto what we see around us will not do. It is obvious, in a trivial sense, that current web content, and information displays in general, are designed to be viewed as a unit. Superimposing that content over the varying and distracting background of whatever we happen to be looking at does not constitute integration. Even if we redesign 2D web pages for overlay viewing through our personal displays, they won't be suited to the task of proper integration with the world around us because, lacking the dimension of depth, they'll be unable to direct or attach their content to the objects of the real 3D world.
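
To see why depth is indispensable, consider the minimum machinery needed to attach even a single label to a real object: the display must know where the object lies in three dimensions relative to the viewer and project that position into the field of view. A minimal sketch using standard pinhole-camera projection; the pose, focal length, and display geometry are illustrative assumptions.

```python
# Minimal sketch of why depth matters: anchoring a label to a real-world
# point requires projecting its 3D position into the display. A flat 2D
# page has no such anchor. Simple pinhole-camera math; all numbers are
# illustrative.

import numpy as np

def project_to_display(point_world, eye_pose, focal_px=800, center=(640, 480)):
    """Project a 3D world point onto a 1280x960 display.
    Returns pixel coordinates, or None if the point is behind the viewer."""
    # Transform world coordinates into the eye's frame (4x4 pose matrix).
    p = eye_pose @ np.append(point_world, 1.0)
    x, y, z = p[:3]
    if z <= 0:
        return None  # behind the viewer; nothing to draw
    u = center[0] + focal_px * x / z
    v = center[1] - focal_px * y / z
    return (u, v)

# A wall stud located 2 m ahead and 0.3 m to the right of the viewer:
eye_pose = np.eye(4)  # identity: eye frame coincides with world frame
print(project_to_display(np.array([0.3, 0.0, 2.0]), eye_pose))
```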

How about displaying content to peripheral vision, somewhat in the manner of James Cameron's Terminator, or, for that matter, the view through a feature-laden camera's viewfinder, where information readouts surround your view of the world in a bezel or frame? While this approach should prove useful for selected information and some applications, it is not a general solution: peripheral vision simply does not provide sufficient display space for all the content we want to view at any one time. Additionally, a great preponderance of information requires our focus at or near the center of our visual attention. Peripheral display can extend our visualization with context but does not diminish or offset the need for focus.

A few words are also in order concerning the current vogue of directing a display to only one eye, as with the recent IBM wearable and Virtual Vision announcements. Steve Mann of MIT, who arguably has had more augmented reality experience than any other human being, has told me in private conversation that this one-eye approach has caused significant changes to his own visual system over more than 15 years of daily use. Physically reorganizing the structure of the lateral geniculate nuclei, occipital cortex, and associative brain regions is not what the mass-market consumer will have in mind (and neither, I suspect, will OSHA, regarding the health of users in the workplace). So even in overlay mode both eyes should be engaged.

That leaves but one major option in my opinion - web content and information displays of the future must become more like the physical world with which they are to be integrated. Since we live in a participatory world of three spatial dimensions and one time dimension, our informational displays and interfaces must also be interactive, 3D, and temporal, and needless to say, directed to both eyes.

Another example may help to illustrate this. Consider the varied tasks of home improvement and maintenance. Equipped with a 3D wearable (wirelessly connected to a local VPSC accessing the requisite data sources), a homeowner or repairman could see the location of wall studs behind the wallboard, where wiring and plumbing are run, the manufacturer and style codes of wallpaper (and coordinated paint colors too!), the history of repairs made to the furnace, central AC, and appliances, and on and on, potentially just by gazing at the desired object(s) and selecting the type of information to be revealed, as sketched below. From ubiquitous computing, where processors abound, we enter ubiquitous information, where information resides in and enhances nearly all things (or at least appears to do so). This is just the beginning, with similar examples arising naturally in commerce, transportation, recreation, travel, and so forth.
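
The interaction this suggests (gaze at an object, then select the type of information to reveal) can be sketched as a simple pick-and-lookup: cast a ray along the gaze direction, find the tracked object nearest that ray, and retrieve the requested layer. The object registry, its contents, and the angular threshold below are hypothetical.

```python
# Sketch of "gaze at an object, select an info layer". The registry of
# tracked objects and their data layers is a hypothetical stand-in for
# the data sources a VPSC would query.

import numpy as np

REGISTRY = {
    "furnace": {"position": np.array([1.0, 0.0, 3.0]),
                "repairs": ["1997: igniter replaced", "1998: blower motor"]},
    "north-wall": {"position": np.array([-2.0, 0.0, 4.0]),
                   "studs": "16-inch centers, first stud at 8 inches"},
}

def gaze_pick(origin, direction, max_angle_deg=5.0):
    """Return the registered object closest to the gaze direction."""
    direction = direction / np.linalg.norm(direction)
    best, best_angle = None, max_angle_deg
    for name, obj in REGISTRY.items():
        to_obj = obj["position"] - origin
        cosang = to_obj @ direction / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

picked = gaze_pick(np.zeros(3), np.array([0.3, 0.0, 1.0]))
print(picked, "->", REGISTRY[picked]["repairs"])
```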

These examples, however, rely not only on the availability of wearables and the deployment of flexible and interoperable networking (e.g., Computing Fabrics) but also on the general deployment of 3D tracking and active IDs embedded in many items. This is not so far-fetched as it first sounds. The commoditization of GPS circuitry has already begun, resolving the location of many objects, including wearables (and by proximity their wearers), to within meters. More local tracking systems, operating within a room context, for example, will provide additional resolution, likely down to the centimeter (see Intersense at www.isense.com and http://www.pinpointco.com for PinPoint's 3D-iD system).
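
Combining a coarse GPS fix (meters) with a room-scale tracker (centimeters) is a sensor-fusion problem. As a hedged sketch, a variance-weighted average of two estimates conveys the basic idea; a real system would use something like a Kalman filter, and the numbers here are illustrative.

```python
# Hedged sketch: fusing a coarse GPS fix with a room-scale tracker by
# weighting each estimate inversely to its variance. A stand-in for a
# proper sensor-fusion filter; all figures are illustrative.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Variance-weighted fusion of two scalar position estimates."""
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    fused = w_a * estimate_a + w_b * estimate_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# East coordinate: GPS says 12.0 m (sigma ~3 m); the local tracker
# says 10.42 m (sigma ~2 cm).
pos, var = fuse(12.0, 3.0**2, 10.42, 0.02**2)
print(f"fused: {pos:.3f} m, sigma ~{var**0.5:.3f} m")  # hugs the local tracker
```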

Additionally, new packaging technologies for integrated circuits (see www.iButton.com and www.dalsemi.com) greatly reduce space consumption and may provide inexpensive digital identity to a wide range of goods. With VPSC processing power available, not all objects will need to be tagged and tracked in order to be identified and located. However, the local presence of a digital ID could well reduce the ambiguities involved in feature extraction and object recognition, allowing nearby objects to be identified and attributed with greater accuracy.
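
One way to picture how a digital ID reduces recognition ambiguity: treat the IDs heard nearby as a filter on the visual recognizer's candidate set. The recognizer's scores and the beacon list below are hypothetical.

```python
# Illustrative sketch: a nearby digital ID narrows the candidate set for
# visual object recognition. Scores and beacon IDs are hypothetical.

def identify(vision_scores, beacon_ids):
    """Pick the best visual match, restricted to objects whose embedded
    IDs were actually heard nearby; fall back to vision alone."""
    candidates = {obj: s for obj, s in vision_scores.items() if obj in beacon_ids}
    pool = candidates or vision_scores
    return max(pool, key=pool.get)

scores = {"furnace-GE-1987": 0.41, "furnace-GE-1992": 0.39, "water-heater": 0.20}
print(identify(scores, beacon_ids={"furnace-GE-1992"}))  # the beacon breaks the tie
```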

Hyper-Reality

For millennia, the human knowledge base has remained apart from the rest of the man-made world, residing in libraries, archives, and, most recently, web servers. The technologies we've been discussing in this series could in due course spawn a hyper-reality, re-integrating Man's knowledge with his artifacts, an appropriate accompaniment to the escalating convergence between theories of mind and physics of matter.

Hyper-reality, by enhancing our vision, can provide us with both a deepened and a more abstract view of the world. Without enhancement we view objects in our world from a single point of view at a time, and only in their current state. With enhancement we can potentially visualize attributes of an object that are within it, behind it, latent, implicit, dormant, relational, or historical. A real-time interactive 3D interface permits objects in the real world to be attributed wherever and whenever they reside - up, down, left, right, near, far, inside, outside, past, present, or future. In effect, hyper-reality brings our conceptual knowledge down to the perceptual level. Language, spoken and written, was first to cross this chasm between the conceptual and the perceptual, and we well know the impact that had on the world. Hyper-reality may take us even farther.

While much of the evolution of hyper-reality will occur on its own, without pushing and prodding, there's much to be gained by providing examples and prototypes that exert a shaping influence and ensure that the human dimension is well represented. To this end, Infomaniacs, working with Raytheon Systems Company, assembled a project in mid-1998 called "Distributed Multiuser Visualization Infrastructure and 3D HCI Toolkits" for DARPA's "Command Post of the Future" program. The 3D HCI toolkits component of the project provides an integrated development environment for the declarative creation of hyper-reality applications. Sitting beneath these toolkits (beneath in an architectural sense) is a multiuser, real-time 3D infrastructure we call SQL3D, and beneath that are suitably extended distributed objects that make the whole ensemble not only multiuser and collaborative but also distributed.

In the next Information UNBOUND we'll take a look at this project, including SQL3D. This will begin our transition in this column to the representation and exploitation of knowledge and information that will support hyper-realities.

Erick Von Schweber

Information UNBOUND is produced by Infomaniacs.
(C) Infomaniacs 1998. All Rights Reserved.
