Thomson Reuters StreetEvents Event Transcript
E D I T E D V E R S I O N
Q3 2018 Nvidia Corp Earnings Call
NOVEMBER 09, 2017 / 10:00PM GMT
================================================================================
Corporate Participants
================================================================================
* Jensen Hsun Huang
NVIDIA Corporation - Co-Founder, CEO, President & Director
* Colette M. Kress
NVIDIA Corporation - Executive VP & CFO
* Simona Jankowski
NVIDIA Corporation - Vice President of Investor Relations
================================================================================
Conference Call Participants
================================================================================
* Toshiya Hari
Goldman Sachs Group Inc., Research Division - MD
* Christopher Caso
Raymond James & Associates, Inc., Research Division - Research Analyst
* Vivek Arya
BofA Merrill Lynch, Research Division - Director
* Joseph Lawrence Moore
Morgan Stanley, Research Division - Executive Director
* Stacy Aaron Rasgon
Sanford C. Bernstein & Co., LLC., Research Division - Senior Analyst
* Atif Malik
Citigroup Inc, Research Division - VP and Semiconductor Capital Equipment and Specialty Semiconductor Analyst
* Christopher James Muse
Evercore ISI, Research Division - Senior MD, Senior Equity Research Analyst and Fundamental Research Analyst
* Craig Andrew Ellis
B. Riley & Co., LLC, Research Division - Senior MD & Director of Research
* Matthew D. Ramsay
Canaccord Genuity Limited, Research Division - MD
* Hans Carl Mosesmann
Rosenblatt Securities Inc., Research Division - Senior Research Analyst
================================================================================
Presentation
================================================================================
--------------------------------------------------------------------------------
Operator [1]
--------------------------------------------------------------------------------
Good afternoon. My name is Victoria, and I'm your conference operator for today. Welcome to NVIDIA's financial results conference call. (Operator Instructions) I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.
--------------------------------------------------------------------------------
Simona Jankowski, NVIDIA Corporation - Vice President of Investor Relations [2]
--------------------------------------------------------------------------------
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2018. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until November 16, 2017. The webcast will be available for replay up until next quarter's conference call to discuss Q4 and full year fiscal 2018 financial results. The contents of today's call are NVIDIA's property. They can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 9, 2017, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website.
With that, let me turn the call over to Colette.
--------------------------------------------------------------------------------
Colette M. Kress, NVIDIA Corporation - Executive VP & CFO [3]
--------------------------------------------------------------------------------
Thanks, Simona. We had an excellent quarter with record revenue in each of our 4 market platforms. And every measure of profit hit record levels, reflecting the leverage of our model. Data center revenue of $501 million more than doubled from a year ago, driven by strong adoption of our Volta platform and early traction with our inferencing portfolio.
Q3 revenue reached $2.64 billion, up 32% from a year earlier, up 18% sequentially and well above our outlook of $2.35 billion. From a reporting segment perspective, GPU revenue grew 31% from last year to $2.22 billion. Tegra processor revenue rose 74% to $419 million.
Let's start with our gaming business. Gaming revenue was $1.56 billion, up 25% year-on-year and up 32% sequentially. We saw robust demand across all regions and form factors. Our Pascal-based GPUs remained the platform of choice for gamers as evidenced by our strong demand for GeForce GTX 10-Series products. We introduced the GeForce GTX 1070 Ti, which became available last week. It complements our strong holiday lineup ranging from the entry-level GTX 1050 to our flagship GTX 1080 Ti.
A wave of great titles is arriving for the holidays, driving enthusiasm in the market. We collaborated with Activision to bring Destiny 2 to the PC earlier in the month. PlayerUnknown's Battlegrounds, popularly known as PUBG, continues to be one of the year's most successful titles. We are closely aligned with PUBG to ensure that GeForce is the best way to play the game, including bringing shadow play highlights to its 20 million players. Last weekend, Call of Duty: WWII had a strong debut, and Star Wars Battlefront II will be out [soon].
eSports remains one of the most important secular growth drivers in the gaming market, with a fan base that now exceeds 350 million. Last weekend, the League of Legends World Championship was held in Beijing's National Stadium, the Bird's Nest, where the 2008 Olympic Games were held. More than 40,000 fans attended live, and online viewers were set to break last year's record of 43 million, following in 18 languages.
GPU sales also benefited from continued cryptocurrency mining. We met some of this demand with a dedicated board in our OEM business and a portion with GeForce GTX boards, though it's difficult to quantify. We remain nimble in our approach to the cryptocurrency market. It is volatile, and it does not and will not distract us from focusing on our core gaming market. Lastly, the Nintendo Switch console continues to gain momentum since launching in March and also contributed to growth.
Moving to data center. Our data center business had an outstanding quarter. Revenue of $501 million more than doubled from last year and rose 20% on the quarter amid strong traction of the new Volta architecture. Shipments of the Tesla V100 GPU began in Q2 and ramped significantly in Q3, driven primarily by demand from cloud service providers and high-performance computing. As we have noted before, Volta delivers 10x the deep learning performance of our Pascal architecture, which had been introduced just a year earlier, far outpacing Moore's Law.
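The "far outpacing Moore's Law" claim can be made concrete with a quick back-of-envelope comparison. The sketch below takes the 10x figure from the remarks above and assumes the conventional ~2-year doubling period for Moore's Law; the doubling period is an assumption for illustration, not a figure from the call.

```python
# Rough sketch: a 10x generational deep learning gain in one year (Volta
# over Pascal, per the remarks above) vs. the ~2x-every-2-years pace
# usually attributed to Moore's Law. The doubling period is an assumption.

def moores_law_gain(years, doubling_period_years=2.0):
    """Throughput multiple expected from transistor scaling alone."""
    return 2.0 ** (years / doubling_period_years)

volta_gain = 10.0                    # deep learning perf vs. Pascal, one year later
scaling_gain = moores_law_gain(1.0)  # ~1.41x from Moore's Law over the same year

print(f"Volta gain in 1 year:      {volta_gain:.1f}x")
print(f"Moore's Law in 1 year:     {scaling_gain:.2f}x")
print(f"Architecture vs. scaling:  {volta_gain / scaling_gain:.1f}x")
```

Under these assumptions, the architectural generation delivers roughly 7x more than transistor scaling alone would have provided over the same year.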
The V100 is being broadly adopted by every major server OEM and cloud provider. In China, Alibaba, Baidu and Tencent announced that they are incorporating V100 in their data centers and cloud service infrastructures. In the U.S., Amazon Web Services announced that V100 instances are now available in 4 of its regions. Oracle Cloud has just added Tesla P100 GPUs to its infrastructure offerings and plans to expand to V100 GPUs. We expect support for V100 from other major cloud providers as well.
In addition, all major server OEMs announced support for the V100. Dell EMC, Hewlett Packard Enterprise, IBM and Supermicro are incorporating it in servers. And China's top server OEMs, Huawei, Inspur and Lenovo, have adopted our HGX server architecture to build a new generation of accelerated data centers with V100 GPUs.
Our new offerings for the AI inference market are also gaining momentum. The recently launched TensorRT 3 programmable inference acceleration platform opens a new market opportunity for us, improving the performance and reducing the cost of AI inferencing by orders of magnitude compared with CPUs. It supports every major deep learning framework, every network architecture and any level of network complexity. More than 1,200 companies are already using our inference platform, including Amazon, Microsoft, Facebook, Google, Alibaba, Baidu, JD.com, iFLYTEK, Hikvision and Tencent.
During the quarter, we announced that the NVIDIA GPU Cloud container registry, or NGC, is now available through Amazon's cloud and will be supported soon by other cloud platforms. NGC helps developers get started with deep learning development through no-cost access to a comprehensive, easy-to-use, fully optimized deep learning software stack. It enables instant access to the most widely used GPU-accelerated frameworks.
We also continue to see robust growth in our HPC business. Next-generation supercomputers, such as the U.S. Department of Energy's Sierra and Summit systems expected to come online next year, leverage Volta's industry-leading performance, and our pipeline is strong. The past weeks have been exceptionally busy for us. We have hosted 5 major GPU Technology Conferences in Beijing, Munich, Taipei, Tel Aviv and Washington, with another next month in Tokyo. In a strong indication of the growing importance of GPU-accelerated computing, more than 22,000 developers, data scientists and others will come this year to our GTCs, including the main event in Silicon Valley. That's up 10x in just 5 years. Other key metrics show similar gains. Over the same period, the number of NVIDIA GPU developers has grown 15x to 645,000, and the number of CUDA downloads this year is up 5x to 1.8 million.
Moving to professional visualization. Third quarter revenue grew to $239 million, up 15% from a year ago and up 2% sequentially, driven by demand for high-end real-time rendering, simulation and more powerful mobile workstations. The defense and automotive industries grew strongly, as did demand for professional VR solutions, driven by Quadro P5000 and P6000 GPUs. Among key customers, Audi and BMW are deploying VR in auto showrooms. And the U.S. Army, Navy and Homeland Security are using VR for mission training.
Last month, we announced early access to NVIDIA Holodeck, the intelligent VR collaboration platform. Holodeck enables designers, developers and their customers to come together virtually from anywhere in the world in a highly realistic, collaborative and physically simulated environment. Future updates will address the growing demand for the development of deep learning techniques in virtual environments.
In automotive, revenue grew to $144 million, up 13% year-over-year and up slightly from last quarter. Among key developments this quarter, we announced DRIVE PX Pegasus, the world's first AI computer for enabling Level 5 driverless vehicles. Pegasus will deliver over 320 trillion operations per second, more than 10x its predecessor. It's powered by 4 high-performance AI processors in a supercomputer that is the size of a license plate. NVIDIA DRIVE is being used by over 25 companies to develop fully autonomous robotaxis, and DRIVE PX Pegasus will become the path to production. It is designed for ASIL D certification, the industry's highest safety level and will be available in the second half of 2018.
We also introduced the DRIVE IX SDK for delivering intelligent experiences inside the vehicle. DRIVE IX provides a platform for car companies to create an always-engaged AI co-pilot. It uses deep learning networks to track head movement and gaze, and it will have a conversation with the driver using advanced speech recognition, lipreading and natural language understanding. We believe this will set the standard for the next generation of infotainment systems, a market that is just beginning to develop.
Finally, we announced that DHL, the world's largest mail and package delivery service, and ZF, one of the world's leading automotive suppliers, will deploy a test fleet of autonomous delivery trucks next year using the NVIDIA DRIVE PX platform. DHL will outfit electric light trucks with the ZF ProAI self-driving system based on our technology.
Now turning to the rest of the income statement. Q3 GAAP gross margin was 59.5% and non-GAAP gross margin was 59.7%, both up sequentially and year-over-year, reflecting continued growth in value-added platforms. GAAP operating expenses were $674 million, and non-GAAP operating expenses were $570 million, consistent with our outlook and up 19% year-on-year. Investing in our key market opportunities, including gaming, AI and self-driving cars, is essential to our future.
GAAP operating income was a record $895 million, up 40% from a year ago. Non-GAAP operating income was $1.01 billion, up 42% from a year ago. GAAP net income was a record $838 million, and EPS was $1.33, up 55% and 60%, respectively, from a year earlier. Non-GAAP net income was $833 million, and EPS was $1.33, up 46% and 41%, respectively from a year earlier, reflecting revenue strength as well as gross margin and operating margin expansion.
We have returned $1.16 billion to shareholders so far this fiscal year through a combination of quarterly dividends and share repurchases. We have announced an increase to our quarterly dividend of $0.01, to an annualized $0.60, effective with our Q4 fiscal year '18 dividend. We are also pleased to announce that we intend to return another $1.25 billion to shareholders in fiscal 2019 through quarterly dividends and share repurchases. Our quarterly cash flow from operations reached record levels, surpassing $1 billion for the first time at $1.16 billion.
Now turning to the outlook for the fourth quarter of fiscal 2018. We expect revenue to be $2.65 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 59.7% and 60%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $722 million and $600 million, respectively. GAAP and non-GAAP OI&E are both expected to be nominal. GAAP and non-GAAP tax rates are both expected to be 17.5%, plus or minus 1%, excluding discrete items. Further financial details are included in the CFO Commentary and other information available on our website.
We will now open the call for questions. (Operator Instructions) Operator, would you please poll for questions? Thank you.
================================================================================
Questions and Answers
================================================================================
--------------------------------------------------------------------------------
Operator [1]
--------------------------------------------------------------------------------
(Operator Instructions) Your first question comes from the line of Toshiya Hari with Goldman Sachs.
--------------------------------------------------------------------------------
Toshiya Hari, Goldman Sachs Group Inc., Research Division - MD [2]
--------------------------------------------------------------------------------
Jensen, 3 months ago you described the July quarter as a transition quarter for your data center business. And clearly, you guys have ramped very well into October. But if you can talk a little bit about the outlook for the next couple of quarters in data center and particularly on the inferencing side. I know you guys are really excited about that opportunity. So if you can share customer feedback and what your expectations are into the next year in inferencing, that would be great.
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [3]
--------------------------------------------------------------------------------
Yes, as you know, we started ramping Volta very strongly this last quarter, and we started the ramp the quarter before. And since then, every major cloud provider, from Amazon, Microsoft and Google to Baidu, Alibaba, Tencent and, even recently, Oracle, has announced support for Volta and will be providing Volta for internal use of deep learning as well as external public cloud services. We also announced that every major server computer maker in the world now supports Volta and is in the process of taking Volta out to market. HP and Dell and IBM and Cisco, and Huawei in China, Inspur in China, and Lenovo have all announced that they will be building families of servers around the Volta GPU. And so this ramp is just the first part of supporting the build-out of GPU-accelerated servers from our company for data centers all over the world as well as cloud service providers all over the world. The applications for these GPU servers have now grown to many markets. I've spoken about the primary segments for our Tesla GPUs. There are 5 of them that I talk about regularly. The first one is high-performance computing, where the market is $11 billion or so. It is one of the faster-growing parts of the IT industry because more and more people are using high-performance computing for doing their product development or looking for insights or predicting the market or whatever it is. And today, we represent about 15% of the world's top 500 supercomputers. And I've repeatedly said, and I believe this completely, and I think it's becoming increasingly true, that every single supercomputer in the future will be accelerated somehow. So this is a fairly significant growth opportunity for us. The second is deep learning training, which is very, very much like high-performance computing. You need to do computing at a very large scale. You're performing trillions and trillions of iterations. The models are getting larger and larger.
Every single year, the amount of data that we're training with is increasing. And the difference between a computing platform that's fast versus one that's not could mean the difference between building a $20 million data center of high-performance computing servers for training and a $200 million one. And so with the money that we save and the capability we provide, the value is incredible. The third segment, and this is the segment that you just mentioned, has to do with inference, which is when you're done with developing this network, you have to put it down into the hyperscale data centers to support the billions and billions of queries that consumers make to the Internet every day. And this is a brand-new market for us. 100% of the world's inference is done on CPUs today. We announced very recently, this last quarter in fact, the TensorRT 3 inference acceleration platform, and in combination with our Tensor Core GPU instruction set architecture, we're able to speed up networks by a factor of 100. Now the way to think about that is, imagine whatever amount of workload you've got: if you could speed it up using our platform by a factor of 100, how much could you save? The other way to think about it is that the networks are getting larger and larger, and they're so complex now. And we know that every network on the planet will run on our architecture because they were trained on our architecture today. And so whether it's CNNs or RNNs or GANs or autoencoders or all of the variations of those, irrespective of the precision that you need to support or the size of the network, we have the ability to support them. And so you could either scale out your hyperscale data center to support more traffic, or you can reduce your cost tremendously, or both simultaneously.
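The cost argument in this answer is simple fleet arithmetic: if each server handles 100x the queries, the fleet for a fixed workload shrinks by roughly the same factor. The sketch below illustrates that logic; the workload and per-server throughput numbers are invented for illustration and ignore utilization, redundancy and networking overheads. Only the 100x factor comes from the remarks above.

```python
# Back-of-envelope sketch of the inference savings argument: a fixed query
# workload served by CPU-only servers vs. servers sped up by a factor of
# 100. All absolute numbers here are illustrative assumptions.

def servers_needed(queries_per_sec, per_server_qps):
    # Ceiling division: you can only deploy whole servers.
    return -(-queries_per_sec // per_server_qps)

workload_qps = 1_000_000   # hypothetical aggregate query load
cpu_server_qps = 100       # hypothetical per-server throughput on CPUs
speedup = 100              # factor cited in the remarks above

cpu_fleet = servers_needed(workload_qps, cpu_server_qps)
accel_fleet = servers_needed(workload_qps, cpu_server_qps * speedup)

print(f"CPU-only fleet:    {cpu_fleet} servers")
print(f"Accelerated fleet: {accel_fleet} servers")
```

With these toy numbers, the same workload drops from 10,000 servers to 100, which is the "reduce your cost tremendously, or scale out, or both" trade-off described above.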
The fourth segment of our data center business is providing all of that capability, whether it's HPC, training or inference, turning it inside out and making it available in the public cloud. There are thousands of start-ups now that were started because of AI. Everybody recognizes the importance of this new computing model. And as a result of this new tool, this new capability, all these previously unsolvable problems are now, interestingly, solvable. And so you can see start-ups cropping up all over the West, all over the East; there are thousands of them. And these companies either would rather not use their scarce financial resources to go build high-performance computing centers, or they don't have the skill to build out a high-performance platform the way these Internet companies can. And so these cloud platforms are just a fantastic resource for them because they can rent them by the hour. In conjunction with that, and I mentioned all the cloud service providers have taken it to market, we created a registry in the cloud that containerizes these really complicated software stacks. For every one of these software frameworks, with the different versions of our GPUs and different acceleration layers and different optimization techniques, we've containerized all of that for every single version and every single type of framework in the marketplace. And we put that up in a cloud registry called the NVIDIA GPU Cloud. And so all you have to do is download that into the cloud service provider that we've certified and tested for, and with just one click, you're doing deep learning. And so that's the cloud service providers. The way to estimate that opportunity is that there are obviously tens of billions of dollars being invested in these AI start-ups.
And some large proportion of the funds they raise will ultimately have to go towards high-performance computing, whether they build it themselves or they rent it in the cloud. And so I think that's a multibillion-dollar opportunity for us. And then lastly, and this is probably the largest of all the opportunities, there are the vertical industries: whether it's automotive companies that are developing their supercomputers to get ready for self-driving cars, or health care companies that are now taking advantage of artificial intelligence to do better diagnosis of disease, or manufacturing companies doing in-line inspection, or robotics, or large logistics companies. Colette mentioned DHL earlier. The way to think about that is that all of these companies doing planning to deliver products to you through this large network of delivery systems face the world's largest planning problem. And whether it's Uber or Didi or Lyft or Amazon or DHL or UPS or FedEx, they all have high-performance computing problems that are now moving to deep learning. And so those are really exciting opportunities for us. So the last one is just the vertical industries. All of these segments we're now in a position to start addressing because we've put our GPUs in the cloud, all of our OEMs are in the process of taking these platforms out to market, and we have the ability now to address high-performance computing and deep learning training as well as inference using one common platform. And so we've been steadfast in our excitement about accelerated computing for data centers, and I think this is just the beginning of it all.
--------------------------------------------------------------------------------
Operator [4]
--------------------------------------------------------------------------------
Your next question comes from the line of Stacy Rasgon with Bernstein Research.
--------------------------------------------------------------------------------
Stacy Aaron Rasgon, Sanford C. Bernstein & Co., LLC., Research Division - Senior Analyst [5]
--------------------------------------------------------------------------------
I had a question on your gaming seasonality into Q4. It's usually up a bit. I was wondering, do you see any, I guess, drivers that would drive a lack of normal seasonal trends given how strong it's been sequentially and year-over-year? And I guess as a related question, do you see your Volta volumes in Q4 exceeding Q3?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [6]
--------------------------------------------------------------------------------
Let's see. I'll answer the last one first and then work towards the first one. I think the guidance that we provided, we feel comfortable with. But if you think about Volta, it is just at the beginning of the ramp, and it's going to ramp into the market opportunities I talked about. And so my hope is that we continue to grow, and there's every evidence that the markets that we serve, that we're addressing with Volta, are very large markets. And so there's a lot of reason to be hopeful about the future growth opportunities for Volta. We've primed the pump. The cloud service providers have either announced the availability of Volta, or they have announced that it is coming soon. They're all racing to get Volta to their cloud because customers are clamoring for it. We've primed the pump with the OEMs, and some of them are sampling now and some of them are racing to get Volta to production in the marketplace. And so the foundation, the demand, is there. The urgent need for accelerated computing is there because Moore's Law is not scaling anymore, and we've primed the pump. So the demand is there, the need is there, and the foundations for getting Volta to market are primed. With respect to gaming, what drives our gaming business? Remember, our gaming business is sold one unit at a time to millions and millions of people. And what drives our gaming business is several things. As you know, eSports is incredibly, incredibly vibrant, and the reason why eSports is so unique is because people want to win, and having better gear helps. The latency that they expect is incredibly low; performance drives down latency, and they want to be able to react as fast as they can. People want to win, and they want to make sure that the gear that they use is not the reason why they didn't win. The second growth driver for us is content, the quality of content.
And boy, if you look at Call of Duty or Destiny 2 or PUBG, the content just looks amazing. The AAA content looks amazing. And one of the things that's really unique about video games is that in order to enjoy the content and the fidelity of the content, the quality of the production value at its fullest, you need the best gear. It's very different from streaming video, it's very different from watching movies, where the stream is what it is. But for video games, of course, it's not. And so when AAA titles come out in the later part of the year, it helps to drive platform adoption. And then lastly, increasingly, social is becoming a huge part of the growth dynamics of gaming. People recognize how beautiful these video games are, and so they want to share their brightest moments with people. They want to share the levels they discover. They want to take pictures of the amazing graphics inside. And it is one of the primary drivers, the leading driver, in fact, of YouTube and of people watching other people play video games, these broadcasters. And now with Ansel, the world's first in-game virtual reality, surround and digital camera, we have the ability to take pictures and share them with people. And so I think all of these different drivers are helping our gaming business, and I'm optimistic about Q4. It looks like it's going to be a great quarter.
--------------------------------------------------------------------------------
Operator [7]
--------------------------------------------------------------------------------
Your next question comes from the line of C.J. Muse from Evercore.
--------------------------------------------------------------------------------
Christopher James Muse, Evercore ISI, Research Division - Senior MD, Senior Equity Research Analyst and Fundamental Research Analyst [8]
--------------------------------------------------------------------------------
I was hoping to sneak in a near-term and a longer-term question. On the near term, you talked about the health on demand side for Volta. Curious if you're seeing any sort of restrictions on the supply side, whether it's wafers or access to high-bandwidth memory, et cetera. And then the longer-term question really revolves around CUDA, and you've talked about that as being a sustainable competitive advantage for you guys entering the year. And now that we've moved beyond HPC and hyperscale training to more into inference and GPU as a service and you've hosted GTC around the world, curious if you could extrapolate on how you're seeing that advantage and how you've seen it evolve over the year and how you're thinking about CUDA as the AI standard.
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [9]
--------------------------------------------------------------------------------
Yes, thanks a lot, C.J. Well, everything that we build is complicated. Volta is the single largest processor that humanity has ever made: 21 billion transistors, 3D packaging, the fastest memories on the planet, and all of that in a couple hundred watts, which basically says it's the most energy-efficient form of computing that the world has ever known. And one single Volta replaces hundreds of CPUs. And so it's energy-efficient, it saves an enormous amount of money, and it gets the job done really, really fast, which is one of the reasons why GPU-accelerated computing is so popular now. With respect to the outlook for our architecture, as you know, we are a one-architecture company, and that's so vitally important. And the reason for that is because there is so much software and so many tools created on top of this one architecture. On the training side, we have a whole stack of software and optimizing compilers and numerics libraries that are completely optimized for one architecture called CUDA. On the inference side, the optimizing compilers take these large, huge computational graphs that come out of all of these frameworks, and these computational graphs are getting larger and larger, and their numerical precision differs from one type of network to another, from one type of application to another. The numerical precision requirements for a self-driving car, where lives are at stake, differ from those for counting the number of people crossing the street; counting something versus trying to detect and track something very subtle in all kinds of weather conditions is a very, very different problem. And so the types of networks are changing all the time. They're getting larger all the time. The numerical precision is different for different applications.
And we have different computer performance levels as well as energy availability levels to target, so these inference compilers are likely to be some of the most complex software in the world. And so the fact that we have one singular architecture to optimize for, whether it's HPC, with molecular dynamics and computational chemistry and biology and astrophysics, all the way to training to inference, gives us just enormous leverage. And that's the reason why NVIDIA can be an 11,000-person company and, arguably, perform at a level that is 10x that. And the reason for that is because we have one singular architecture that is accruing benefits over time, instead of 3, 4, 5 different architectures where your software organization is broken up into all these different small, subcritical-mass pieces. And so it's a huge advantage for us, and it's a huge advantage for the industry. People who support CUDA know that the next-generation architecture will just get a benefit and go along for the ride that technology advancement provides them and affords them. Okay. So I think it's an advantage that is growing exponentially, frankly, and I'm excited about it.
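The energy-efficiency argument at the start of this answer, that one accelerator in a couple hundred watts can displace a large fleet of CPU servers, is a performance-per-watt comparison. The sketch below shows the shape of that calculation; every number in it is an illustrative assumption chosen for the example, not a measured or official figure from the call.

```python
# Toy performance-per-watt comparison behind the "one Volta replaces
# hundreds of CPUs" style of argument. All numbers are illustrative
# assumptions, not NVIDIA specifications.

def perf_per_watt(throughput, watts):
    """Throughput (arbitrary units) delivered per watt of power."""
    return throughput / watts

# Hypothetical deep learning throughput for one node of each kind.
cpu_node = {"throughput": 1.0, "watts": 200.0}    # assumed CPU server
gpu_node = {"throughput": 100.0, "watts": 300.0}  # assumed GPU accelerator

cpu_eff = perf_per_watt(**cpu_node)
gpu_eff = perf_per_watt(**gpu_node)

# How many CPU nodes match one GPU node, and what would that fleet draw?
cpu_nodes_needed = gpu_node["throughput"] / cpu_node["throughput"]
cpu_fleet_watts = cpu_nodes_needed * cpu_node["watts"]

print(f"Efficiency ratio: {gpu_eff / cpu_eff:.1f}x per watt")
print(f"CPU nodes to match one GPU node: {cpu_nodes_needed:.0f}, "
      f"drawing {cpu_fleet_watts:.0f} W vs {gpu_node['watts']:.0f} W")
```

With these toy numbers, matching one accelerated node takes 100 CPU servers drawing 20 kW against a single 300 W device, which is the kind of energy and cost gap the answer describes.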
--------------------------------------------------------------------------------
Operator [10]
--------------------------------------------------------------------------------
Your next question comes from the line of Vivek Arya with Bank of America.
--------------------------------------------------------------------------------
Vivek Arya, BofA Merrill Lynch, Research Division - Director [11]
--------------------------------------------------------------------------------
Congratulations on the strong results and the consistent execution. Jensen, in the last few months, we have seen a lot of announcements from Intel, from Xilinx and others describing other approaches to the AI market. My question is how does a customer decide whether to use a GPU or an FPGA or an ASIC? What can remain your competitive differentiator over the longer term? And does your position in the training market also give you a leg up when they consider a solution for the inference part of the problem?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [12]
--------------------------------------------------------------------------------
Yes, thank you, Vivek. So first of all, we have one architecture, and people know our commitment to our GPUs, our commitment to CUDA, our commitment to all of the software stacks that run on top of our GPUs -- every single one of the 500 applications, every numerical solver, every CUDA compiler, every tool chain, across every single operating system and every single computing platform. We are completely dedicated to it. We support the software for as long as we shall live, and as a result, the benefits of their investment in CUDA just continue to accrue. You have no idea how many people send me notes about how they literally take out their old GPU, put in a new GPU and, without lifting a finger, things got 2x, 3x, 4x faster than what they were doing before -- incredible value to customers. The fact that we are singularly focused and completely dedicated to this one architecture in an unwavering way allows everybody to trust us and know that we will support it for as long as we shall live. And that is the benefit of an architectural strategy. When you have 4 or 5 different architectures to support, and you offer them to your customers and ask them to pick the one that they like the best, you're essentially saying that you're not sure which one is the best. And we all know that nobody's going to be able to support 5 architectures forever. And as a result, something has to give, and it would be really unfortunate for a customer to have chosen the wrong one. And if there are 5 architectures, surely, over time, 80% of them will be wrong. And so I think that our advantage is that we're singularly focused. With respect to FPGAs, I think FPGAs have their place, and we use FPGAs here at NVIDIA to prototype things. But an FPGA is a chip design; it's incredibly good at being a flexible substrate that can become any chip, and so that's its advantage. 
Our advantage is that we have a programming environment, and writing software is a lot easier than designing chips. And if it's within the domain that we focus on, like, for example, we're not focused on network packet processing, but we are very focused on deep learning. We're very focused on high performance and parallel numerics analysis. If we're focused on those domains, our platform is really quite unbeatable. And so that's how you think through that. I hope that was helpful.
--------------------------------------------------------------------------------
Operator [13]
--------------------------------------------------------------------------------
Your next question comes from Atif Malik with Citi.
--------------------------------------------------------------------------------
Atif Malik, Citigroup Inc, Research Division - VP and Semiconductor Capital Equipment and Specialty Semiconductor Analyst [14]
--------------------------------------------------------------------------------
Colette, on the last call you mentioned crypto was $150 million in the OEM line in the July quarter. Can you quantify how much crypto was in the October quarter and expectations in the January quarter directionally? And just longer term, why should we think that crypto won't impact the gaming demand in the future? If you can just talk about the steps NVIDIA has taken with respect to having a different mode and all that.
--------------------------------------------------------------------------------
Colette M. Kress, NVIDIA Corporation - Executive VP & CFO [15]
--------------------------------------------------------------------------------
So in our results, in the OEM results, our specific crypto [boards] equated to about $70 million of revenue, which is comparable to the $150 million that we saw last quarter.
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [16]
--------------------------------------------------------------------------------
Yes, longer term, Atif -- well, first of all, thank you for that. Longer term, the way to think about that is: crypto is small for us but not 0. And I believe that crypto will be around for some time, kind of like today. There will be new currencies emerging. Existing currencies will grow in value. The interest in mining the crypto algorithms of these new emerging currencies is going to continue. And so I think for some time, we're going to see that crypto will be a small but not 0 part of our business. When you think about crypto in the context of our company overall, the thing to remember is that we're the largest GPU computing company in the world. And our overall GPU business is really sizable, and we have multiple segments. There's data center, and I've already talked about the 5 different segments within data center. There's ProVis, and even that has multiple segments within it -- whether it's rendering or computer-aided design or broadcast, in a workstation, in a laptop or in a data center, the architectures are rather different. And of course, you know that we have high-performance computing. You know that we have our autonomous machine business, self-driving cars and robotics. And you know, of course, that we have gaming. And so these different segments are all quite large and growing. And so my sense is that although crypto will be here to stay, it will remain small but not 0.
--------------------------------------------------------------------------------
Operator [17]
--------------------------------------------------------------------------------
Your next question comes from the line of Joe Moore with Morgan Stanley.
--------------------------------------------------------------------------------
Joseph Lawrence Moore, Morgan Stanley, Research Division - Executive Director [18]
--------------------------------------------------------------------------------
Just following up on that last question. You mentioned that some of the crypto market had moved to traditional gaming. What drives that? Is there a lack of availability of the specialized crypto product? Or is it just that there's a preference being driven for the gaming-oriented crypto solutions?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [19]
--------------------------------------------------------------------------------
Yes, Joe, I appreciate you asking that. Here's the reason why. What happens is, when a digital currency market becomes very large, it entices somebody to build a custom ASIC for it. And of course, Bitcoin is the perfect example of that. Bitcoin is incredibly easy to design in a specialized chip form. But then what happens is a couple of different players start to monopolize the marketplace, and as a result, it chases everybody out of the mining market and encourages a new currency to emerge. And for the new currency, the only way to get people to mine it is if it's hard to mine -- okay, you've got to put some effort into it. However, you want a lot of people to try to mine it. And so the ideal platform for new emerging digital currencies turns out to be a CUDA GPU. And the reason is that there are several hundred million NVIDIA GPUs in the marketplace. If you want to create a new cryptocurrency algorithm, optimizing for our GPUs is really quite ideal. It's hard to do, and therefore you need a lot of computation to do it. And yet there are enough GPUs in the marketplace, and it's such an open platform, that the barriers to entry for somebody to get in and start mining are very low. And so those are the cycles of these digital currencies, and that's the reason why I say that crypto usage of GPUs will be small but not 0 for some time. And it's small because when it gets big, somebody will go and build a custom ASIC. But if somebody builds a custom ASIC, there will be a new emerging cryptocurrency, so it ebbs and flows.
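The mining cycle described above rests on hash-based proof-of-work, which can be sketched in a toy loop (illustrative only: real GPU-mined currencies use memory-hard hash functions rather than plain SHA-256, and `mine` is a hypothetical helper, but the difficulty mechanism is the same). Anyone with a commodity processor can participate, and the expected work doubles with each added difficulty bit:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value win
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof that ~2**difficulty_bits hashes of work were done
        nonce += 1

# Raising difficulty_bits makes the search exponentially harder,
# which is the "you've got to put some effort into it" property.
nonce = mine(b"block header", difficulty_bits=16)
```

Bitcoin's SHA-256 workload maps cleanly onto a fixed-function ASIC; currencies hoping to stay ASIC-resistant instead pick functions that favor the large installed base of programmable GPUs.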
--------------------------------------------------------------------------------
Operator [20]
--------------------------------------------------------------------------------
Your next question comes from the line of Craig Ellis with B. Riley.
--------------------------------------------------------------------------------
Craig Andrew Ellis, B. Riley & Co., LLC, Research Division - Senior MD & Director of Research [21]
--------------------------------------------------------------------------------
Jensen, congratulations on data center annualizing at $2 billion. It's a huge milestone. I wanted to follow up with a question on some of your comments regarding data center partners because as I look back over the last 5 years, I just don't see any precedent for the momentum that you have in the marketplace right now between your server partners, white box partners, hyperscale partners that are deploying it, hosted, et cetera. And so my question is, relative to the doubling that we've seen year-on-year in each of the last 2 years, what does that partner expansion mean for data center's growth? And then if I could sneak one more in: 2 new products just announced in the gaming platform, the 1070 Ti and a Collector's Edition of the TITAN Xp. What do those mean for the gaming platform?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [22]
--------------------------------------------------------------------------------
Yes, Craig, thanks a lot. Let's see. We have never created a product that is as broadly supported by the industry, that has grown 9 consecutive quarters and doubled year-over-year, and with partnerships of the scale that we're looking at. We have just never created a product like that before, and I think the reason is severalfold. The first is that it is true that CPU scaling has come to an end. That's just laws of physics. The end of Moore's Law is just laws of physics. And yet the world of software development and the problems that computing can help solve are growing faster than at any time before. Nobody's ever seen a large-scale planning problem like Amazon before. Nobody's ever seen a large-scale planning problem like Didi before. The number of millions of taxi rides per week is just staggering. And so nobody's ever seen large-scale problems like these before. And so high-performance computing and accelerated computing using GPUs has become recognized as the path forward. And I think that, at the highest level, that's the most important factor. Second is artificial intelligence -- its emergence and its application to solving problems that we historically thought were unsolvable. Solving the unsolvable problems is a real realization. I mean, this is happening across just about every industry we know, whether it's Internet service providers to health care to manufacturing to transportation and logistics, you just name it, financial services. And so I think artificial intelligence is a real tool, deep learning is a real tool, that can help solve some of the world's unsolvable problems. 
And I think that our dedication to high-performance computing and this one singular architecture, our 7-year head start, if you will, in deep learning and our early recognition of the importance of this new computing approach -- both the timing of it, the fact that it was naturally a perfect fit for the skills that we have, and the incredible effectiveness of this approach -- has really created the perfect conditions for our architecture. And so I really appreciate you noticing that, but this is definitely the most successful product line in the history of our company.
--------------------------------------------------------------------------------
Operator [23]
--------------------------------------------------------------------------------
Your next question comes from the line of Chris Caso with Raymond James.
--------------------------------------------------------------------------------
Christopher Caso, Raymond James & Associates, Inc., Research Division - Research Analyst [24]
--------------------------------------------------------------------------------
I have a question on the automotive market and the outlook there. And interestingly, with the other segments growing as quickly as they are, auto is becoming a smaller percentage of revenue now. And certainly, the design traction seems very positive. Can you talk about the ramp in terms of when the auto revenue, when we could see that as getting back to a similar percentage of revenue? Is that growing more quickly? Do you think that is likely to happen over the next year with some of these design wins coming out? Or is that something we should -- we'll be waiting for over several years?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [25]
--------------------------------------------------------------------------------
I appreciate that, Chris. So the way to think about that is, as you know, we've really, really reduced our emphasis on infotainment, even though that's the primary part of our revenues, so that we could have literally hundreds of engineers -- including those working on the processors we're building now, some 2,000 to 3,000 engineers -- working on our autonomous machine and artificial intelligence platform for this marketplace, to take advantage of the position we have and to go after this amazing revolution that's about to happen. I happen to believe that everything that moves will be autonomous someday, and it could be a bus, a truck, a shuttle, a car. Everything that moves will be autonomous someday. It could be a delivery vehicle. It could be little robots that are moving around warehouses. It could be delivering a pizza to you. And we felt that this was such an incredibly great challenge and such a great computing problem that we decided to dedicate ourselves to it. If you look at our DRIVE PX platform today, there are over 200 companies working on it; 125 start-ups are working on it. And these companies are mapping companies. They're Tier 1s. They're OEMs. They're shuttle companies, car companies, trucking companies, taxi companies. And this last quarter, we announced an extension of our DRIVE PX platform to include DRIVE PX Pegasus, which is now the world's first auto-grade, full ASIL D platform for robotaxis. And so I think our position is really excellent, and the investment has proven to be one of the best ever. And so in terms of revenues, my expectation is that this coming year, we'll enjoy revenues as a result of the supercomputers that customers will have to buy for training their networks and for simulating all these autonomous vehicles driving as they develop their self-driving cars. And we'll see fairly large quantities of development systems being sold this coming year. 
The year after that, I think, is the year when you're going to see the robotaxis ramping, and our economics in every robotaxi is several thousand dollars. And then starting, I would say, late 2020 to 2021, you're going to start to see the first fully autonomous cars, what people call Level 4 cars, starting to hit the road. And so that's kind of how I see it. Next year is simulation environments, development systems, supercomputers; the year after that is robotaxis; and then a year or 2 after that will be all the self-driving cars.
--------------------------------------------------------------------------------
Operator [26]
--------------------------------------------------------------------------------
Your next question comes from the line of Matt Ramsay with Canaccord Genuity.
--------------------------------------------------------------------------------
Matthew D. Ramsay, Canaccord Genuity Limited, Research Division - MD [27]
--------------------------------------------------------------------------------
I have, I guess, a 2-part question on gross margin. Colette, I remember, I don't know, maybe 3 years ago, 3.5 years ago at an analyst day, you guys were talking about gross margins in the mid-50s and that was inclusive of the Intel payment. And now you're hitting numbers at 60% excluding that. I want -- if you could talk a little bit about how mix of the data center business and some others drives gross margin going forward. And maybe, Jensen, you could talk a little bit about -- you mentioned Volta being such a huge chip in terms of transistor count. How you're thinking about taking costs out of that product as you ramp it into gaming next year and the effects on gross margin.
--------------------------------------------------------------------------------
Colette M. Kress, NVIDIA Corporation - Executive VP & CFO [28]
--------------------------------------------------------------------------------
Okay. Thanks, Matt, for the question. Yes, we've been on a steady stream of increasing gross margins over the years. But this is the evolution of the entire model -- the model of the value-added platforms that we sell, inclusive of the entire ecosystem of work that we do and the software that we enable in so many of the platforms that we bring to market. Data center is one of them. Our ProVis is another, and think about all of the work that we have in terms of gaming and that overall expansion of the ecosystem. So this has been continuing to increase our gross margin. Mix is more a quarter-to-quarter statement: each quarter we have a different mix of products. Some of them have a little bit of seasonality, and depending on when some of those platforms come to market, we can have a mix change within some of those subsets. It's still going to be our focus, as we go forward, to grow gross margins as best as we can, as you can see in our guidance into Q4; we feel comfortable with that guidance and that we will increase it as well.
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [29]
--------------------------------------------------------------------------------
Yes, with respect to yield enhancement, the way to think about that is we do it in several ways. The first thing is I'm just incredibly proud of the technology group that we have in VLSI, and they get us ready for these brand-new nodes, whether it's process readiness, all the circuit readiness, the packaging, the memory readiness. The readiness is incredibly important for us because these processors that we're creating are really, really hard. They're the largest things in the world. And so we get one shot at it. And so the team does everything they can to essentially prepare us. And by the time that we tape out a product for real, we know for certain that we can build it. And so the technology team in our company is just world-class, absolutely world-class. There's nothing like it. Then once we go into production, we have the benefit of ramping up the products, and as yields improve, we'll surely benefit on cost. But that's not really where the focus is. I mean, in the final analysis, the real focus for us is to continue to improve the software stack on top of our processors. And the reason for that is each one of our processors carries with it an enormous amount of memory and systems and networking and the whole data center. For most of our data center products, if we can improve the throughput of a data center by another 50% -- or, in our case, oftentimes we'll improve something from 2x to 4x -- the way to think about that is that a billion-dollar data center just improved its productivity by that factor. And all of the software work that we do on top of CUDA, and the incredible work that we do with optimizing compilers and graph analytics -- all of that then all of a sudden translates to value to our customers, measured not in dollars but in hundreds of millions of dollars. And that's really the leverage of accelerated computing.
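The back-of-envelope arithmetic behind that claim can be made explicit (the $1 billion cost is the illustrative figure from the remarks, and the 50% gain matches the example given):

```python
# Illustrative figures from the remarks, not reported financials.
datacenter_cost = 1_000_000_000  # capital cost of the data center, in dollars
speedup = 1.5                    # a 50% software-driven throughput gain

# Matching that extra capacity with hardware alone would take
# (speedup - 1) more data centers' worth of spend, so the software
# improvement is worth roughly:
hardware_equivalent_value = datacenter_cost * (speedup - 1)
assert hardware_equivalent_value == 500_000_000  # "hundreds of millions of dollars"
```

A 2x to 4x software gain on the same installed base scales the same arithmetic up accordingly, which is the leverage being described.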
--------------------------------------------------------------------------------
Operator [30]
--------------------------------------------------------------------------------
Your next question comes from the line of Hans Mosesmann with Rosenblatt.
--------------------------------------------------------------------------------
Hans Carl Mosesmann, Rosenblatt Securities Inc., Research Division - Senior Research Analyst [31]
--------------------------------------------------------------------------------
Jensen, can you comment on some of the issues this week regarding Intel and their renewed interest in getting into the graphics space and their relationship at the chip level with AMD?
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [32]
--------------------------------------------------------------------------------
Yes, thanks, Hans. Listen, there's a lot of news out there. I guess some of the things I take away: first of all, Raja leaving AMD is a great loss for AMD, and it's a recognition by Intel, probably, that the GPU is just incredibly important now. And the modern GPU is not a graphics accelerator; we just left the letter G in there. These processors are domain-specific parallel accelerators, and they're enormously complex. They're the most complex processors built by anybody on the planet today. And that's the reason why IBM uses our processors for the world's largest supercomputers. That's the reason why every major cloud, every major server maker in the world has adopted NVIDIA GPUs: It's just incredibly hard to do. The amount of software engineering that goes on top of it is significant as well. And so if you look at the way we do things, we plan a road map about 5 years out. It takes about 3 years to build a new generation, and we build multiple GPUs at the same time. And on top of that, there are some 5,000 engineers working on systems software and numerics libraries and solvers and compilers and graph analytics and cloud platforms and virtualization stacks in order to make this computing architecture useful to all of the people that we serve. And so when you think about it from that perspective, it's just an enormous undertaking -- arguably, the most significant undertaking of any processor in the world today. And that's the reason why we're able to speed up applications by a factor of 100. You don't walk in with a new widget and a few transistors and all of a sudden speed up applications by a factor of 100 or 50 or 20. That's just something that's inconceivable unless you do the type of innovation that we do. 
And then lastly, with respect to the chip that they built together, I think it goes without saying now that the energy efficiency of Pascal GeForce and the Max-Q design technology and all of the software that we created has really set a new design point for the industry. It is now possible to build a state-of-the-art gaming notebook with the most leading-edge GeForce processors and be able to deliver gaming experiences that are many times greater than a console in 4K and have that be in a laptop that's 18 millimeters thin. The combination of Pascal and Max-Q has really raised the bar, and I think that that's really the essence of it.
--------------------------------------------------------------------------------
Operator [33]
--------------------------------------------------------------------------------
Unfortunately, we have run out of time. Presenters, I'll now turn the call over to you for closing remarks.
--------------------------------------------------------------------------------
Jensen Hsun Huang, NVIDIA Corporation - Co-Founder, CEO, President & Director [34]
--------------------------------------------------------------------------------
We had another great quarter. Gaming is one of the fastest-growing entertainment industries, and we are well positioned for the holidays. AI is becoming increasingly widespread in many industries throughout the world, and we're hoping to lead the way with all major cloud providers and computer makers moving to deploy Volta. And we're building the future of autonomous driving. We expect robotaxis, using our technology, to hit the road in just a couple of years. We look forward to seeing many of you at SC17 next week, and thank you for joining us.
--------------------------------------------------------------------------------
Operator [35]
--------------------------------------------------------------------------------
This concludes today's conference call. You may now disconnect.
--------------------------------------------------------------------------------
Definitions
--------------------------------------------------------------------------------
PRELIMINARY TRANSCRIPT: "Preliminary Transcript" indicates that the
Transcript has been published in near real-time by an experienced
professional transcriber. While the Preliminary Transcript is highly
accurate, it has not been edited to ensure the entire transcription
represents a verbatim report of the call.
EDITED TRANSCRIPT: "Edited Transcript" indicates that a team of professional
editors have listened to the event a second time to confirm that the
content of the call has been transcribed accurately and in full.
--------------------------------------------------------------------------------
Disclaimer
--------------------------------------------------------------------------------
Thomson Reuters reserves the right to make changes to documents, content, or other
information on this web site without obligation to notify any person of
such changes.
In the conference calls upon which Event Transcripts are based, companies
may make projections or other forward-looking statements regarding a variety
of items. Such forward-looking statements are based upon current
expectations and involve risks and uncertainties. Actual results may differ
materially from those stated in any forward-looking statement based on a
number of important factors and risks, which are more specifically
identified in the companies' most recent SEC filings. Although the companies
may indicate and believe that the assumptions underlying the forward-looking
statements are reasonable, any of the assumptions could prove inaccurate or
incorrect and, therefore, there can be no assurance that the results
contemplated in the forward-looking statements will be realized.
THE INFORMATION CONTAINED IN EVENT TRANSCRIPTS IS A TEXTUAL REPRESENTATION
OF THE APPLICABLE COMPANY'S CONFERENCE CALL AND WHILE EFFORTS ARE MADE TO
PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS,
OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE CONFERENCE CALLS.
IN NO WAY DOES THOMSON REUTERS OR THE APPLICABLE COMPANY ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER
DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN
ANY EVENT TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S
CONFERENCE CALL ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE
MAKING ANY INVESTMENT OR OTHER DECISIONS.
--------------------------------------------------------------------------------
Copyright 2019 Thomson Reuters. All Rights Reserved.
--------------------------------------------------------------------------------