Friday, April 17, 2015

Robotics & AI - Silicon, Part 2

Source: engadget.com

We should reward people, not ridicule them, for thinking the impossible. - Nassim Taleb

Last week we lingered on Qualcomm, one of the absolute technology leaders in semiconductors and an organization with a prescient long-term strategic outlook. Though Qualcomm's level of sophistication in AI is enough to put even major systems houses to shame, they are not alone in the silicon sector in pursuing a defining role for themselves in the future of AI and robotics. Today we'll look at the aspirations of two more: CEVA and Nvidia. (NB: I have absolutely zero financial interest in either of these companies and no one is paying me to write these analyses.)

CEVA

We first looked at this California-Israeli embedded DSP provider earlier in the year while studying Machine Vision (http://vigilfuturi.blogspot.com/2015/01/machine-vision-part-2-argus.html). The company is rather tiny - only $50M in licensing and royalty revenues in 2014. Yet the concentration of brainpower at the firm is completely out of proportion to its size.

Back in 2012, CEVA released the MM3101 imaging processor to support Machine Vision applications such as image/facial recognition, feature detection and so forth. Based on customer feedback, the company has developed its successor - the XM4. Some comparative specifications are presented below.

Comparative specification tables: CEVA-MM3101 vs. CEVA-XM4
Source: bdti.com

A quick perusal of the tables indicates that the XM4 has a lot more processing muscle in every respect: bandwidth, throughput, pipeline depth, support for double-precision floating-point operations and a significantly improved L1 cache setup. As applications for machine vision have blossomed, CEVA has evolved its capabilities to accommodate the increased data load and the growing sophistication of computer vision algorithms. Most revealingly, the ISA for the XM4 has expanded to include non-linear instructions.

CEVA is plainly not a large enough company to play a defining role in the development and growth of the IoT, AI and Robotics. It is an enterprise of a size that requires a short delay between R&D and ROI and does not have resources to spare. Nevertheless, CEVA's technical talent clearly has great insight into the needs of its customers in these embryonic markets. The firm is demonstrating the aptitude to understand the technologies under development across its client base and to position itself as a major provider of the 'nuts and bolts' needed to bring its customers' system-level products to full fruition.

Nvidia


Dreams are the touchstones of our characters. - Thoreau

There is no word-level programmable architecture with more raw processing muscle than a GPU. As the developer of the most potent GPU IP in the chip industry, Nvidia has been searching for over a decade for ways to expand the application potential of its technology and, as a consequence, has come to resemble a research institution with an ever-renewing endowment (stemming from its GPU business in the PC space).

The applied researchers at Nvidia are in an enviable position, as they are getting the opportunity (an astonishingly rare one in Silicon Valley) to roam across an almost unrestricted expanse of uncharted territory. A major part of that work is, of course, targeted at sectors where the computational brawn of GPUs is particularly well suited: Robotics and AI.

The Tesla architecture has long been popular for R&D applications and has found a natural home in image & speech recognition, language translation and deep learning in general. Computationally it's a natural fit, as the algorithms require an abundance of both cascading and parallel processing horsepower, with Tesla cores agglomerated in groups of up to 2,500 at a time. The real trick, however, is in developing algorithms that take advantage of those computing resources efficiently.
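
To make the 'cascading and parallel' point concrete, here is a minimal sketch in Python/NumPy (the layer widths, random weights and ReLU activation are invented purely for illustration, not taken from any Nvidia material): each layer of a deep network is one large matrix multiply that parallelizes naturally across thousands of GPU cores, and the layers cascade one after another.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer widths; real networks are far wider and deeper.
sizes = [784, 512, 256, 10]

# Random weights stand in for a trained model (illustration only).
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    """Cascade of layers; each step is one large, highly parallel matrix multiply."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)   # matrix multiply followed by a ReLU nonlinearity
    return x

batch = rng.standard_normal((64, sizes[0]))   # a batch of 64 inputs
print(forward(batch, weights).shape)          # -> (64, 10)
```

Every element of each output matrix can be computed independently of the others, which is exactly the sort of work that thousands of GPU cores absorb effortlessly.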

The primary reason why GPUs are so well suited to the various subsystems for AI and Robotics stems from the research and development approaches currently employed. Machine Learning, as discussed before, currently depends on repetition. An item/image/word/sound/text string is introduced to the machine with a label attached to it. The machine constructs a mathematical model and refines it with every successive instance. After many instances (often tens of thousands or even millions), the machine is then given unlabeled data and is tasked with accurately identifying it. Considering the types of data and the mathematical approaches employed on them, the need for cascading, parallelism and floating-point support becomes obvious.
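
A minimal sketch of that labeled-repetition loop, again in Python/NumPy (the two-cluster data, the logistic 'model' and the learning rate are all toy choices for illustration, not any particular vendor's method): every labeled example nudges the model's parameters, and only after many passes is the model asked to identify data for which no labels are given.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy data: two clusters of 2-D points, labeled 0 and 1.
x = np.vstack([rng.normal(-1.0, 0.5, (200, 2)), rng.normal(1.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

w, b = np.zeros(2), 0.0   # the "mathematical model" being refined
lr = 0.1                  # learning rate (arbitrary choice)

def predict(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # logistic model

# Repetition: many passes over the labeled instances, refining the model each time.
for epoch in range(50):
    for xi, yi in zip(x, y):
        err = predict(xi, w, b) - yi   # how wrong is the current model on this instance?
        w -= lr * err * xi             # nudge the parameters to reduce the error
        b -= lr * err

# Now present unlabeled data and ask the model to identify it.
unlabeled = np.array([[-1.2, -0.8], [0.9, 1.1]])
print((predict(unlabeled, w, b) > 0.5).astype(int))   # -> [0 1]
```

Scale that same pattern up to millions of images and millions of parameters and the appetite for massively parallel floating-point hardware becomes self-evident.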

Nvidia has demonstrated some prescience in its support of research in this area, not simply by providing the necessary hardware but also by facilitating system software development. cuDNN is a library of primitives for Deep Neural Networks based on the CUDA programming platform/model. It supports convolution and other computational forms, including the activation functions that mathematically model behaviors observed in actual neurons.

The library is optimized for CUDA-capable Nvidia GPU arrays in terms of performance, efficiency, memory usage and, naturally, multithreading. The primitives are flexible enough to support a wide range of data sets, in both volume and structure, and can be used both when the machine is learning and when it is executing tasks - that is, for training as well as inference.

In its efforts to keep pace with its clientele, Nvidia is now also exploring Convolutional Neural Networks. A variant of Deep Learning/Deep Neural Networks, the CNN is an attempt to evolve beyond the simple provision of greater hardware and software computational 'mass' and to more directly emulate the connectivity patterns of neurons observed in the visual and auditory pathways of the human brain.
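
The 'convolutional' part can be made concrete with a short sketch (a naive Python/NumPy version with made-up values; real CNNs learn their filters rather than hand-coding them, and use optimized primitives such as those in cuDNN): a small filter slides across the image, and each output value depends only on a local patch, loosely mirroring the local receptive fields of visual neurons.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution: each output pixel is computed from a small local patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
edge_filter = np.array([[1.0, 0.0, -1.0],          # a simple vertical-edge detector
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

print(convolve2d(image, edge_filter))   # a 4x4 feature map
```

A CNN stacks many such filters, layer upon layer, letting the network learn which local patterns (edges, textures, shapes) matter for the task at hand.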

However, though this latest initiative by Nvidia is indeed the appropriate thing to do to support their customers, it betrays a conceptual weakness in their entire approach to the fledgling AI and Robotics markets. The heart of the problem is that they're following instead of leading. 

The company appears content to learn from its systems house and R&D customers, supporting them in their research directions rather than taking steps to figure things out on its own. This is in marked contrast to Qualcomm's approach: explore other directions, go farther than its customers, and add superior value to product developments and customer engagements through independently gained, deeper understanding. Stated differently, Nvidia is demonstrating the behaviors of a fast follower, while Qualcomm is acting like a leader. One can understand CEVA limiting its efforts to closely supporting and following, but not a powerful and prosperous SoC company like Nvidia.

The exception is the Jetson TK1: the best thing Nvidia is doing here is targeting this platform at Robotics. The latest version of the hardware development/prototyping board uses the Tegra K1 mobile applications processor, pairing a 192-core Kepler GPU (their lowest-end core) with an ARM Cortex-A15 to manage control plane functions. At a cost of $192, such a platform is indeed difficult to match, let alone beat.

In this area, Nvidia is showing some real smarts - like Qualcomm, they are focused on mobile computing platforms as the driver for future AI, IoT and Robotics advancements, all of which would be governed and interacted with by an independent AI node on a portable personal processor. This is a deliberate rejection of the Google and Microsoft approach that employs gargantuan server farms to support AI, machine vision and voice recognition applications. The rest of Nvidia should use the Jetson group as an example of how to move forward with genuine strategic vision.


The explorers of the past were great men and we should honour them. But let us not forget that their spirit lives on. It is still not hard to find a man who will adventure for the sake of a dream or one who will search, for the pleasure of searching, not for what he may find. - Edmund Hillary

As we've gathered from the string of editorials posted since the beginning of the year, there is an enormous frontier opening up in technology with many a company great and small trying to stake their claim in IoT, Robotics and AI. It has also become plain to see that these three markets are already intertwined, with their entanglement guaranteed to increase over time.

Are all of their endeavors already paying off? Hardly.

Are various companies making errors along the way? Sure.

Is the effort worth it, or is it all hype and hysteria? Well, parts of it clearly are (as evidenced by the laughable reactions of media talking heads/drooling idiots whenever they hear the phrase "smart watch"). We are, nonetheless, only at the infant stages of the next massive wave of technological change. I have no qualms in proclaiming that what may seem fantastical and magical today will be downright mundane 20 years from now.

It's going to be FUN watching all this unfold. ;-)
___________________________________________________

Dear readers,
I must apologize for missing my posting installment last week on the 10th. Between a critical family medical emergency and my own spectacular bout with stomach flu, I simply could not devote sufficient time or energy to the blog. Again, my apologies.
___________________________________________________
Dear readers,
Don't forget the Amazon and FlipKart banners! Many thanks to those who already have clicked through them to do their shopping, and I hope more will join in to help keep this blog going. :-)
p.s. If any of you would like those banners to highlight particular products or categories, please let me know.
