Julian Gómez with Peter Kaminski, 2024-07-03

Author: Peter Kaminski | Issue: 2024-07-03


A Plex Conversation

Pete: Tell me about Augmented World Expo.

Julian: It was in Long Beach, about two weeks ago, in the middle of June. The first thing is I'm not sure why it's in Long Beach. It didn't need to be. It could have been in many other places, like Santa Clara, where it historically has been. Apart from that, it's clear that AWE has become the go-to place for modern XR tech. You can expect to see all kinds of stuff there, from different kinds of no-code authoring systems to hardcore optics and waveguides described with lots of words that even I don't understand. The exposure to companies doing all kinds of stuff was terrific. Korea actually had two pavilions showing off companies. Taiwan had one pavilion. There were a lot of interesting startups there.

The sessions were also useful for getting perspectives from people who have been doing this work. I think one of my favorites was the USC reconstruction of the original Chinatown in Los Angeles, which was razed to build Union Station back in the late '30s. They reconstructed it from photographs and personal accounts of what that Chinatown looked like. They actually have an AR display that you can go see. It's going to be at Union Station until the end of September, I think. If you happen to be in LA, you can go experience it. I liked that one because of the meticulous way they recreated a historical context; it showed how to do things properly.

There were a bunch of personal recollections from, as I said, people who were doing things. For example, Tom Furness talked about the work he did years and years ago, developing some of the foundational work for VR and AR. Interestingly, I've seen a similar pattern elsewhere: if you follow Doug Engelbart's career, he created lots of tech that's essential to the modern world, and I've noticed with people like Doug and others that in the later parts of their lives, they start looking toward trying to make society better. I saw the same thing happening with Tom, who helped create the Virtual World Society, where they're looking at the social aspects of using all of this tech.

I was there for the first 48 hours; I missed the last 12. I think I got the gist of it all because I was able to go through and experience everything I needed. One of the things about XR tech is that you can't just hear about it. You can't just see pictures of it. You have to do it. There were plenty of companies demonstrating different types of gloves. These date back to the original VPL DataGloves, which had sensors for all of your finger joints. That was the original, and there have been lots and lots of attempts since at making gloves that are more lightweight, more accurate, et cetera, plus experiments with putting things like vibrating crystals at the fingertips to get some haptic feedback.

Actually, the state of gloves has regressed in the last few years, which I find surprising. The state of haptics really has not progressed. The only two technologies around are vibrations at the fingertips, through whatever mechanism, and UltraLeap, which uses focused ultrasound to create the sensation of pressure on the hand. I'm hoping that in the research section of the upcoming SIGGRAPH conference at the end of the month, we'll see some more experimental ways of getting haptic feedback, because sight and sound are fairly well covered, but humans have other senses, in addition to the ways humans cognitively perceive things beyond the primary five.

Overall, AWE is the go-to place if you're into any kind of XR tech.

Pete: Awesome. It's kind of taken over from SIGGRAPH in that respect. Is that right?

Julian: Yes, in terms of products, because AWE is a product expo. It's not a research conference. If you wanted to go and buy something, you'd go to AWE. Even if it's really advanced, just about everything there is at the point where you can buy it right now. Whereas at SIGGRAPH you'll find research and some of these things may not hit the product shelf for another five years, but it's good to know what kinds of ideas are being worked on. Both are important. For someone like me, I'm involved on both sides of it. So I go to both.

Pete: Yeah. Do you want to tell Plex readers a little bit about SIGGRAPH this year or not yet?

Julian: Sure, although I can't give a comprehensive overview, because I've been really nose down in my part of it. I'm in charge of the Retrospective Program. This year I'm focusing not on art history in general, but on computer graphics art history. I have one panel with David Em, Francesca Franco, and Tamiko Thiel. These people are really eminent in computer graphics and, in fact, in computer tech. For example, Tamiko designed the Connection Machine; not the circuitry, but the look of the actual computer. David Em is in the Smithsonian. Francesca is an art historian and curator extraordinaire. I have these three people on one of the panels. Then I have Theodora and Daniel talking about using computation to create art. That's not computer-generated art, but rather art where computers were involved somehow. Of course, this applies to everybody, but they've done a lot of research and survey work on how that's happening.

Pete: Anything special with SIGGRAPH?

Julian: One thing I'm hoping to have ready at SIGGRAPH is the Center for Computer Graphics History. This is not a museum. It has a multi-line mission statement. The first goal is to get computer graphics history down. In this field, we've already lost a whole bunch of people, and we're going to keep losing people. The intent is to get the first-person accounts down, regardless of the technology. The second goal is research into digital models of history. When people talk to me about this, it quickly becomes clear that it's a graph database problem; in fact, an RDF graph database, because all kinds of abstractions are necessary to represent history. I think this hasn't been explored because historians generally think about history a lot, try to make these connections, and then put it all into prose. My contention is that it doesn't need to be prose. I want to see relationships.

An interesting anecdote from my undergraduate career: to graduate with honors, you had to take a special class and write a thesis. The class was conducted by Professor Bill Kahan. If you look into him, he's the primary architect of IEEE 754, the floating-point standard. One day in class, I made a comment about the Munich Agreement and boom, the entire class was all about Hitler and Czechoslovakia, etc. It turns out that he's a real nut on history, especially World War II history. We got a real history lesson that day from somebody who was passionate about it. How did this happen? This was a hardcore computer science class, and we were talking about the 1938 Munich Agreement. It became clear to me many years later that there are all these connections, and that history is really about connections. We get taught history as isolated events, like the War of 1812, but there's all this stuff that went into them. The Boston Tea Party wasn't just a bunch of guys dressed up as natives throwing tea into the harbor. There were a whole lot of factors that went into it.

This is why I say it's clearly a graph database. That's an easy enough thing to say, but what are the data abstractions that you put into it? One good one I'd like to point out: any attribute you put into whatever technology you're using is actually time dependent. Take something as simple as, what's your name? Well, lots of people use their married names, but they had a different name before that. Whatever attribute you talk about needs to have a time dependency ingrained into the digital model. That, of course, means that as you query the knowledge base, you have to keep these data abstractions in mind.
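To make that time dependency concrete, here is a minimal sketch in Python using the rdflib library. The vocabulary (ex:hasName, ex:validFrom, ex:validUntil) is invented for illustration and is not the Center's actual schema; the point is only that a name is reified as a node carrying a validity interval, and that queries have to filter on that interval.

```python
# Minimal sketch of a time-dependent attribute in RDF, using rdflib.
# The ex: vocabulary below is hypothetical, invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/cghistory#")
g = Graph()
g.bind("ex", EX)

person = EX.person42

# Reify the name as a time-bounded node instead of a bare literal,
# so the same person can carry different names over different periods.
maiden = EX.person42_name_1
g.add((person, EX.hasName, maiden))
g.add((maiden, RDF.value, Literal("Jane Smith")))
g.add((maiden, EX.validFrom, Literal("1950-01-01", datatype=XSD.date)))
g.add((maiden, EX.validUntil, Literal("1975-06-15", datatype=XSD.date)))

married = EX.person42_name_2
g.add((person, EX.hasName, married))
g.add((married, RDF.value, Literal("Jane Doe")))
g.add((married, EX.validFrom, Literal("1975-06-15", datatype=XSD.date)))

# Query with the time dependency in mind: what was the name in 1960?
answer = g.query("""
    SELECT ?name WHERE {
        ex:person42 ex:hasName ?n .
        ?n rdf:value ?name ; ex:validFrom ?from .
        OPTIONAL { ?n ex:validUntil ?until }
        FILTER (?from <= "1960-01-01"^^xsd:date &&
                (!bound(?until) || ?until > "1960-01-01"^^xsd:date))
    }
""")
for row in answer:
    print(row.name)  # Jane Smith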

The bigger issue is that graph databases get real complex, real fast, and you can't put them on a screen. In fact, you can't put them on anything flat. This is where the XR technology comes in. The idea is that you will be able to use your cognitive abilities to manage and access the Center's knowledge base. You don't type in queries; you use your hands, your eyes, even your feet if you want, and you pick up the knowledge as you would pick up a fork. This will be used both to create and manage the knowledge as it flows in from all the various sources, and also to query it and try to understand relationships.

I can cite a current reference: the Apple Vision Pro. Go to an Apple Store and you can get a free demo. One of the things they will have you do is zoom in on something: you grab space with your hands and move them apart. This is how you zoom in on whatever you're looking at. This kind of input mechanism was actually developed by Paul Mlyniec 30 years ago at a company called MultiGen, and it's a construct that works. Now imagine you're looking at a complex knowledge base and you need to zoom in on a particular portion of it. This is all in virtual 3D using XR displays, not 3D on a flat display. You just grab that area of knowledge with your hands and zoom in on it.
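As an illustration of the geometry behind that two-handed gesture, here is a minimal sketch: the zoom factor is just the ratio of the current inter-hand distance to the distance when the grab began. This is an invented example, not Apple's or MultiGen's implementation, and the (x, y, z) hand positions are assumed to come from whatever tracker the XR runtime provides.

```python
# Minimal sketch of two-handed "grab space and stretch" zoom.
# Invented illustration; hand positions are hypothetical (x, y, z) tuples.
import math

class TwoHandZoom:
    def __init__(self, scale=1.0):
        self.scale = scale          # current zoom factor for the scene
        self._grab_dist = None      # inter-hand distance when grab began

    def begin_grab(self, left, right):
        """Both hands pinch: remember how far apart they are."""
        self._grab_dist = math.dist(left, right)

    def update(self, left, right):
        """Hands move: scale by the change in inter-hand distance.
        Moving the hands apart zooms in; bringing them together zooms out."""
        if self._grab_dist:
            d = math.dist(left, right)
            self.scale *= d / self._grab_dist
            self._grab_dist = d
        return self.scale

    def end_grab(self):
        """Pinch released: stop scaling."""
        self._grab_dist = None

# Example: hands 0.4 m apart at grab, stretched to 0.6 m -> ~1.5x zoom.
zoom = TwoHandZoom()
zoom.begin_grab((-0.2, 0.0, 0.5), (0.2, 0.0, 0.5))
print(zoom.update((-0.3, 0.0, 0.5), (0.3, 0.0, 0.5)))  # ~1.5
```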

Another part of the mission statement for the Center is that XR will be the primary mechanism for managing and accessing the knowledge base. This relates to SIGGRAPH because the Center will be launched at SIGGRAPH; it applies to computer graphics because it's computer graphics history, but also to XR. Of course, one of the things I was doing down in Long Beach was looking for any technology that would be potentially useful to the Center. You can see the common basis for everything I'm talking about this afternoon: we go beyond flat. I'm done with flat.

Pete: Nice. Could you tell us a little more about the Center for Computer Graphics History?

Julian: The Center is a California corporation. It's a nonprofit 501(c)(3), and I'm gradually building up the board of directors. Its primary input stream will be research projects in collaboration with universities. The Naval Postgraduate School in Monterey and UMBC, the University of Maryland, Baltimore County, have already signed on. UMBC is interested in the knowledge management aspects and NPS in the XR aspects. Then, as the paperwork gets done, I expect to set up even more affiliations. This is why I want to launch at SIGGRAPH: there will be so many academics there to communicate with.

Pete: What else is going on?

Julian: I have a startup. Should I talk about that?

Pete: Yes, please.

Julian: Okay. A long time ago, I was the chief scientist of LEGO. Last year I got together with the former chief visionary of LEGO. Using my ideas about how people interact with tech, we decided that we should start building play apps: apps on iPads, but not the kind of thing where you slide your finger around on the screen. You'll use your cognitive abilities to interact, because the technology is smart enough now that we can pick up on how people are behaving instead of forcing them back into the paradigm of being flat.

Nuon Play is a corporation where we are developing apps based on these principles, starting with play. One good way of playing is to just play with blocks. Little kids play with wooden blocks and, depending on how much money their parents have, with plastic bricks. If you study child development psychology, these play mechanisms are essential for development. My contention is that we need to follow the same kind of approach in the digital world if we're going to work well with humans.

Nuon Play is following the same line of “being done with flat.” It's following the same line of looking at the future of how to interact with tech. But it is a commercial startup as opposed to the Center, which is a nonprofit startup.

Pete: Congratulations!

Julian: Thanks!

