Viv: Siri Mark II


Two co-founders of Siri, the virtual assistant in Apple’s iPhone and iPad, have just announced their latest startup: Viv. It’s another virtual assistant.

RicMac’s Two Cents: Siri is a big part of the book I’m currently writing. I see it as the first widely popular consumer AI product. I’m writing about “augmented intelligence” and Siri is, to my mind, the prototypical app of that kind in the current era. By the sounds of the Wired profile, Viv is the next big step for Siri-like functionality. It’s going to be an open platform, meaning that hundreds of different apps will hook into it. The company is calling it “the global brain” and “an intelligent interface to everything.” Those are huge claims, even in an industry that thrives on hype. But if anyone can deliver that vision, it’s Adam Cheyer and Dag Kittlaus. I interviewed Cheyer for my book, along with Siri’s other co-founder Tom Gruber (he’s still at Apple). If Cheyer’s track record is anything to go by, Viv will be a very significant product in the future of AI. One to watch!

IBM’s Brain Inspired Computer Chip


IBM is touting a “new type of computer chip, SyNAPSE, whose architecture is inspired by the human brain.”

RicMac’s Two Cents: Traditional computers, built on the von Neumann architecture, rely on ‘brute force’ computation to act intelligent, but this new chip is modeled on how the brain works. A key point is that this technology will complement the brute force approach, not replace it. As the Forbes article put it: “…to crunch big numbers and do heavy computational lifting, we’ll still need conventional computers. Where these ‘cognitive’ computers come in is in analyzing and discerning patterns in that data.” It’s early days, and IBM’s technology, which is backed by DARPA, has yet to prove itself in the real world. But I like that the focus is on complementing the traditional computer chip, which in turn complements our own brains. So the ideal result would be having both kinds of chips augmenting our native intelligence. I’ll be tracking this with interest.

Further Facts:

The chip attempts to mimic the way brains recognize patterns, relying on densely interconnected webs of transistors similar to the brain’s neural networks. (New York Times)

Each core of the chip is modeled on a simplified version of the brain’s neural architecture. The core contains 256 “neurons” (processors), 256 “axons” (memory) and 64,000 “synapses” (communications between neurons and axons). This structure is a radical departure from the von Neumann architecture that’s the basis of virtually every computer today. (Forbes) A toy code sketch of this core layout appears after these facts.

In a von Neumann computer, the storage and handling of data is divvied up between the machine’s main memory and its central processing unit. To do their work, computers carry out a set of instructions, or programs, sequentially by shuttling data from memory (where it’s stored) to the CPU (where it’s crunched). Because the memory and CPU are separated, data needs to be transferred constantly. (Wired)
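To make the core layout described in these facts a little more concrete, here is a toy sketch in Python (assuming NumPy): 256 spike inputs (“axons”), 256 simple integrate-and-fire “neurons”, and a sparse 256 × 256 crossbar of binary “synapses”. The spiking rule, threshold, leak and random connectivity are placeholders of my own for illustration only, not IBM’s actual chip design.

```python
import numpy as np

NUM_AXONS = 256      # spike inputs arriving at the core
NUM_NEURONS = 256    # simple integrate-and-fire units
THRESHOLD = 8.0      # placeholder firing threshold
LEAK = 0.5           # placeholder per-step leak

rng = np.random.default_rng(0)

# Sparse binary crossbar: synapses[a, n] == 1 means axon a feeds neuron n.
# The 5% density is arbitrary; it is only meant to illustrate the layout.
synapses = (rng.random((NUM_AXONS, NUM_NEURONS)) < 0.05).astype(int)

potentials = np.zeros(NUM_NEURONS)

def tick(axon_spikes):
    """Advance the core one time step, given a 0/1 vector of axon spikes."""
    global potentials
    potentials += axon_spikes @ synapses             # integrate incoming spikes
    potentials = np.maximum(potentials - LEAK, 0.0)  # apply a constant leak
    fired = potentials >= THRESHOLD
    potentials[fired] = 0.0                          # reset neurons that fired
    return fired.astype(int)                         # spikes to route onward

for step in range(5):
    spikes_in = rng.integers(0, 2, size=NUM_AXONS)
    out = tick(spikes_in)
    print(f"step {step}: {int(out.sum())} of {NUM_NEURONS} neurons fired")
```

Contrast this with the von Neumann setup described in the previous fact: there is no separate memory and CPU shuttling data back and forth, just local state updated by whatever spikes arrive.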

Augmented Chat: Emu Acquired By Google


Today Google bought Emu, an iPhone app which TechCrunch described as “an IM client with Siri-like intelligence.”

RicMac’s Two Cents: Emu is a great example of how apps and Web services are getting smarter, taking some of the cognitive load off us humans. While messaging apps are currently a dime a dozen, adding virtual assistant technology to messaging was an inspired idea. I expect to see this ‘smart’ functionality in the next versions of the big IM services: Apple’s iMessage, Facebook’s Messenger, Skype, etc. And of course, what a smart move by Google – yet another AI (artificial intelligence) technology it has snapped up.
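To give a flavor of what ‘Siri-like intelligence’ inside a messaging app can mean in practice, here is a minimal, hypothetical sketch in Python: scan an incoming message for a time and an activity, then propose a follow-up action such as a calendar entry. The patterns, keywords and action names are my own placeholders for illustration, not how Emu actually works.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    action: str   # hypothetical action name the chat client would handle
    detail: str

TIME_PATTERN = re.compile(r"\b(?:at\s+)?(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b", re.I)
ACTIVITIES = {"lunch": "Lunch", "dinner": "Dinner", "coffee": "Coffee", "movie": "Movie"}

def suggest(message: str) -> Optional[Suggestion]:
    """Return a proposed action if the message looks like it's arranging plans."""
    time_match = TIME_PATTERN.search(message)
    activity = next((label for word, label in ACTIVITIES.items()
                     if word in message.lower()), None)
    if time_match and activity:
        return Suggestion("add_calendar_event", f"{activity} at {time_match.group(1)}")
    if activity:
        return Suggestion("share_nearby_places", f"Nearby spots for {activity.lower()}")
    return None

print(suggest("Want to grab lunch tomorrow at 12:30pm?"))
# -> Suggestion(action='add_calendar_event', detail='Lunch at 12:30pm')
print(suggest("Movie this weekend?"))
# -> Suggestion(action='share_nearby_places', detail='Nearby spots for movie')
```

The point is that the assistant sits quietly inside the conversation and surfaces an action only when it spots something useful, rather than waiting to be summoned.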

Further Facts: The ties to Siri are deep. Emu co-founder Gummi Hafsteinsson was VP of Product at Siri prior to its acquisition by Apple in 2010. He continued working on the Siri team at Apple until January 2012, then co-founded Emu a month later with Yahoo alumnus Dave Feldman. Hafsteinsson told TechCrunch in April 2014 how Emu was inspired by Siri: “I felt that you kind of had to engage Siri all the time, and I wanted to create an assistant that was more in the background and proactive.” Interestingly, he described this as a “Google Now-like” concept – implying that Google Now is more sophisticated than Siri.

Here’s an Emu company video that shows how clunky our current texting and messaging interfaces are, compared to Emu’s automation of background tasks.

Vernor Vinge & Cognitive Computing

Science fiction author Vernor Vinge, interviewed by Reason magazine in May 2007:

Reason: You dedicate Rainbows End “to the Internet-based cognitive tools that are changing our lives—Wikipedia, Google, eBay, and the others of their kind, now and in the future.” What’s the story behind this dedication?

Vinge: I regard the current Internet as a test bed for the cognitive coordination of people and databases and computers. Tools such as Google, eBay, and Wikipedia are—I hope—harbingers of much more spectacular developments.

RicMac: That was 7 years ago and, as usual, Vinge’s foresight was razor sharp. The cutting edge of the 2014 era is machine learning technology, such as IBM’s Watson and the many AI startups that Google has acquired recently. These developments are enabling a more advanced version of what Vinge called “the cognitive coordination of people and databases and computers.” We haven’t reached spectacular yet, but it’s heading in that direction…

“Can Technology Think? Watson Can.”


So says IBM in its promo video for Watson, released earlier this year (see below). Watson is “a cognitive technology that processes information more like a human than a computer,” according to its homepage.

In January IBM launched the Watson Group and funded it to the tune of $1 billion. Back in November it opened up an API, so developers can build on the technology. Now IBM has entered into a partnership with Apple, raising the intriguing possibility that Watson might talk with Siri.
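For developers, opening up an API boils down to something simple: post Watson a question over HTTP and get structured candidate answers back. The sketch below is hypothetical; the endpoint URL, credentials and JSON fields are placeholders I have invented for illustration, not IBM’s actual Watson API.

```python
import requests  # third-party HTTP library

# Placeholder endpoint and payload, NOT the real Watson API; this only
# illustrates the shape of "ask a question, get ranked answers back".
WATSON_ENDPOINT = "https://example.com/watson/api/v1/question"

def ask_watson(question_text: str, api_key: str = "YOUR_API_KEY") -> dict:
    payload = {"question": {"questionText": question_text}}
    response = requests.post(
        WATSON_ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. a ranked list of candidate answers

# answers = ask_watson("Who won the 2011 Jeopardy! exhibition match?")
```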

Watson is certainly the most impressive large-scale AI project around these days. I think it has a great chance at being the Next Big Platform. At the very least, IBM is shaking things up again technologically – which is good to see.

Desirable Futures 100 Years From Now


One of my favorite technology writers, Kevin Kelly, recently tweeted: “I’ll pay $100 for the best 100-word description of a plausible technological future in 100 years that I would like to live in.” He’s written up a post on Medium with the answers he got, together with his own musings on “a desirable future scenario.” Kelly chose this one, by John Hanacek, as the winning entry:

Physical and virtual realities are meshed together with no distinction. Ideas are given sovereignty with their creators rewarded fairly and directly. The world itself does the drudgery of assembling itself across all sectors that information science has been applied, which is limited only by the quantum information underpinnings of the universe. Humans have taken up their primary purpose of creativity and now work with other intelligences of any kind to ask questions and achieve answers, with an eye toward more questions. “Human” has taken on flourishing new meanings. Imagination has been unleashed upon the world in a literal sense.

I like how Hanacek subtly portrays machine intelligence working with us humans, instead of replacing us…or worse! This is a topic I’m exploring in my second book. Also, the “ideas are given sovereignty” line sounds a lot like Ted Nelson’s pre-Web vision of Xanadu. Wouldn’t it be amazing if that were the WWW in 100 years’ time?

I also liked Kelly’s own answer:

2121: Population 4 billion; 85% urban. Cities boom, empty suburbs struggle. Agriculture acreage reduced with GMOs. Nature monitored quantitatively; green lands expand with genetic engineering. Solar, fusion, mini nukes generate cheap power. Climate change adapted. Creative middle class the new majority, globally mobile. Computer pilots make travel common internationally. Eco and heritage tourism primary income for poorest. Robots take over remaining blue-collar jobs in Asia and Africa. Internet of everything physical continued. Universal library, and universal lifelong education for free. All humans always on the net anywhere. Brain interface, wearables. Co-veillent tracking ubiquitous. Quantified self for personalized medicine. Techno-literacy (managing) skills mandatory.

The brain interface and wearables touch on the theme of my second book too.

Both of these “desirable futures” by Hanacek and Kelly are compelling. Of course, what makes them desirable is that humans are center stage.

The Human Element In Freestyle Chess


“When humans team up with computers to play chess, the humans who do best are not necessarily the strongest players. They’re the ones who are modest, and who know when to listen to the computer. Often, what the human adds is knowledge of when the computer needs to look more deeply.”

Great quote from Tyler Cowen, author of a book called ‘Average Is Over: Powering America Beyond the Age of the Great Stagnation’. He’s referring to “freestyle” chess competitions, in which humans and computers play on teams together. He goes on to say:

“Today, the human-plus-machine teams are better than machines by themselves. It shows how there may always be room for a human element.”

Yes, although I hope the “may” turns out to be a “will”!
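Cowen’s point about knowing “when the computer needs to look more deeply” translates naturally into a human-in-the-loop sketch. The code below assumes the python-chess library and a local Stockfish binary (both are my assumptions for illustration, nothing from Cowen’s book): the engine proposes a move at a shallow search depth, and the human decides whether to accept it, override it, or send the engine back for a deeper look.

```python
import chess
import chess.engine

# Minimal freestyle-chess loop: the engine suggests, the human judges whether
# the position deserves a deeper search. Paths and depths are placeholders.
STOCKFISH_PATH = "/usr/local/bin/stockfish"  # adjust for your system

def freestyle_turn(board: chess.Board, engine: chess.engine.SimpleEngine) -> chess.Move:
    depth = 10
    while True:
        result = engine.play(board, chess.engine.Limit(depth=depth))
        print(f"Engine (depth {depth}) suggests: {board.san(result.move)}")
        answer = input("Accept [a], search deeper [d], or type your own move: ").strip()
        if answer == "a":
            return result.move
        if answer == "d":
            depth += 8  # the human judges this position needs more thought
            continue
        try:
            return board.parse_san(answer)  # the human overrides the engine
        except ValueError:
            print("Could not parse that move; try again.")

if __name__ == "__main__":
    board = chess.Board()
    with chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH) as engine:
        move = freestyle_turn(board, engine)
        board.push(move)
        print(board)
```

The modest human Cowen describes is the one who mostly types “a”, and knows the handful of positions where “d” is worth the wait.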

Image credit: ChessBase
