Spaces:
Sleeping
Sleeping
File size: 69,421 Bytes
c49f0b0 |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 |
Thomson Reuters StreetEvents Event Transcript E D I T E D V E R S I O N Q1 2017 NVIDIA Corp Earnings Call MAY 12, 2016 / 9:00PM GMT ================================================================================ Corporate Participants ================================================================================ * Arnab Chanda NVIDIA Corporation - VP of IR * Jen-Hsun Huang NVIDIA Corporation - President & CEO * Colette Kress NVIDIA Corporation - EVP & CFO ================================================================================ Conference Call Participiants ================================================================================ * Deepon Nag Macquarie Research Equities - Analyst * Harlan Sur JPMorgan - Analyst * Blayne Curtis Barclays Capital - Analyst * C.J. Muse Evercore ISI - Analyst * Steven Chin UBS - Analyst * Craig Ellis B. Riley & Company - Analyst * Vivek Arya BofA Merrill Lynch - Analyst * Gabriel Ho BMO Capital Markets - Analyst * David Wong Wells Fargo Securities, LLC - Analyst * Joe Moore Morgan Stanley - Analyst * Suji Desilva Topeka Capital Markets - Analyst * Mark Lipacis Jefferies LLC - Analyst * Ross Seymore Deutsche Bank - Analyst * Romit Shah Nomura Research - Analyst * Ian Ing MKM Partners - Analyst ================================================================================ Presentation ================================================================================ -------------------------------------------------------------------------------- Operator [1] -------------------------------------------------------------------------------- Good afternoon, my name is Claudine, and I will be your conference coordinator today. I would like to welcome everyone to the NVIDIA financial results conference call. (Operator Instructions) This conference is being recorded, Thursday, May 12, 2016. I would now like to turn the call over to Arnab Chanda, Vice President of Investor Relations at NVIDIA. Please go ahead, sir. -------------------------------------------------------------------------------- Arnab Chanda, NVIDIA Corporation - VP of IR [2] -------------------------------------------------------------------------------- Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of FY17. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that today's call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until May 19, 2016. The webcast will be available for replay up until next quarter's conference call to discuss Q2 financial results. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without our prior written consent. During the course of this call, we may make forward-looking statements based on current expectations. These forward-looking statements are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 12, 2016 based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette. -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [3] -------------------------------------------------------------------------------- Thanks, Arnab. In March, we introduced our newest GPU architecture, Pascal. This extraordinary scalable design built on the16 nanometer finFET process provides massive performance and exceptional power efficiency. It will enable us to extend our leadership across our four specialized platforms, gaming, professional visualization, datacenter and automotive. Year-on-year revenue growth continued to accelerate, increasing 13% to $1.3 billion. Our GPU business grew 15% to $1.08 billion from a year ago. Tegra processor business was up 10% to $160 million. Growth continued to be broad-based across all four platforms. Record performance in datacenter was driven by the adoption of deep learning across multiple industries. In Q1, our four platforms contributed nearly 87% of revenue, up from 81% a year earlier. They collectively increased 21% year-over-year. Let's start out with our gaming platform. Gaming revenue increased 17% year-on-year to $687 million, momentum carried forward from the holiday season, helped by the continued strength of Maxwell-based GTX processors. Last weekend at DreamHack Austin, we unveiled GeForce GTX 1080 and GTX 1070, our first Pascal GPUs for gamers. They represent a quantum leap for gaming and immersive VR experiences, delivering the biggest performance gains from the previous generation architect in a decade. Media reports and gamers have been unanimously enthusiastic. The Verge wrote, what NVIDIA is doing with its new GTX 1000 series is bringing yesteryear's insane high-end into 2016's mainstream. We also extended our VR platform, by adding spatial acoustics to our VRWorks software development kit that helps provided an even greater sense of presence within VR. We introduced simultaneous multi-projection, enabling accurate efficient projection of the real world to surround monitors, VR headsets, as well as future displays. To showcase these technologies, we created our own amazing open-source game called NVIDIA VR Funhouse available on Steam. In addition, we've announced Dazzle, an in-game photography system which enables gamers to capture high-resolution and VR scenes within their favorite games. Moving to professional visualization. Quadro grew year-on-year for the second consecutive quarter. Revenue rose 4% to $189 million. Growth came from higher-end products and mobile workstations. We've launched the M6000 24 gig, and are seeing good success among multiple customers, including Toyota and Pixar. [Roche] is using the M6000 to speed its DNA sequencing pipeline by 8 times, enabling more affordable genetic testing. We see exciting opportunities for our Quadro platform with Virtual Reality, and NVIDIA Iray, a photorealistic rendering tool that enables designers effectively to walk around their creations, and make real-time adjustments. Moving to datacenter, revenue was a record $143 million, up 63% year-on-year, and up [40%] sequentially, reflecting enormous growth in deep learning. In just a few years, deep learning has moved from academia, and is now being adopted across the hyperscale landscape. We expect growing deployment in the coming year, among large enterprises. GPUs have become the accelerator of choice for hyperscale datacenters due to their superior programmability, computational performance and power efficiency. Our Tesla M4 is over 50% more power efficient than other programmable accelerators for applications such as real-time image classification for AlexNet, a deep learning framework. Hyperscale companies are the fastest adopters of deep learning, accelerating their growth in our Tesla business. Starting from infancy three years ago, hyperscale revenue is now similar to that from high-performing computing. NVIDIA GPUs today accelerate every major deep learning framework in the world. We power IBM Watson and Facebook's Big Sur server [through our] AI, and we are in AI platforms at hyperscale giants such as Microsoft, Amazon, AliBaba and Baidu, for both training and real-time imprints. Twitter has recently said, they used NVIDIA GPUs to help users discover the right content among the millions of images and videos shared every day. During the quarter, we hosted our seventh annual GPU Technology Conference. The event drew record attendance, with 5,500 scientists, engineers, designers and others across a wide range of fields, and featured 600 sessions and 200 exhibitors. At GTC, we unveiled the Tesla P100, the world's advanced GPU accelerator based on the Pascal architecture. The P100 utilizes a combination of technologies including NVLink, a high-speed interconnect [allowing] application performance to scale on multiple GPUs, high memory bandwidth, and multiple hardware features designed to natively accelerate AI applications. The Next Platform, an enterprise IT site, called it a beast, in all of the good sense of that word. Among the first customers for our Pascal accelerator with the Swiss National Computer Center, which will use it to double its speed of Europe's fastest supercomputer. And GTC, we also announced the DGX-1, the world's first deep learning supercomputer. Loaded with eight P100s in a single box, interconnected with MVLink, it provides the deep learning performance equivalent to 250 traditional servers. DGX-1 comes loaded with a suite of software designed to aid AI and application developers. Universities, hyperscale vendors and large enterprises developing AI-based applications are showing strong interest in the system. Among the first to get DGX-1, will be the Massachusetts General Hospital. It launched an initiative that applies AI techniques to improve the detection, diagnosis, treatment and management of diseases, drawing on its database of some10 billion medical images. In our GRID graphics virtualization business, we are seeing interest across a variety of industries, ranging from manufacturing, energy, education, government and financial services. Finally in automotive, revenue continued to grow reaching $113 million, up 47% year-over-year, and up 22% sequentially, reflecting the growing popularity of premium infotainment features in mainstream cars. NVIDIA is working closely with partners to develop self-driving cars, using our end-to-end platform with Tesla in the datacenter, and extends with the deployment with DRIVE PX 2. Since we have unveiled DRIVE PX 2 earlier this year, worldwide interest has continued to grow among car makers, tier 1 suppliers, and others. We are now collaborating with more than 80 companies, using the open architecture of DRIVE PX to develop their own software and driving experiences. At GTC, we demonstrated the world's first self-driving car, trained using deep learning, and showed its ability to navigate on roads without lane markings, even in bad weather. Additionally, we announced that DRIVE PX 2 will serve as the brain behind the new Roborace initiative in the Formula E racing circuit. The circuit will include 10 teams, racing identical cars, all using DRIVE PX 2. Beyond our four platforms, our OEM IP business was $173 million, down 21% year-on-year, reflecting weak PC demand. Now turning to the rest of the income statement. We had record GAAP and non-GAAP gross margins for the first quarter, at 57.5% and 58.6%, respectively. Driving these margins was the strength of our Maxwell GPUs, the success of our platform approach, and strong demand for deep learning. GAAP operating expenses for the first quarter were $506 million, and declined from $539 million in Q4 on lower restructuring charges. Non-GAAP operating expenses were $443 million, flat sequentially and up 4% from a year earlier, reflecting increased hiring for our growth initiatives, and development-related expenses associated with Pascal. GAAP operating income for the first quarter was $245 million, up 39% from a year earlier. Non-GAAP operating income was $322 million, also up 39%. Non-GAAP operating margins improved more than 470 basis points from a year ago to 24.7%. For the first quarter, GAAP net income was $196 million. Non-GAAP net income was $263 million, up 41% fueled by the strong revenue growth, and improved gross and operating margins. During the first quarter, we'd entered into a $500 million accelerated share repurchase agreement, and paid $62 million in quarterly cash dividends. Since the restart of our capital return program in the fourth quarter of FY13, we have returned over $3.5 billion to shareholders. This represents over 100% of our cumulative free cash flow for FY13 through this Q1. For FY17, we intend to return approximately $1 billion to shareholders through share repurchases and quarterly cash dividends. Now turning to the outlook for the second quarter of FY17. We expect revenue to be $1.35 billion, plus or minus 2%. Our GAAP and non-GAAP gross margins are expected to be 57.7% and 58.0%, respectively, plus or minus 50 basis points. GAAP operating expenses are expected to be approximately $500 million. Non-GAAP operating expenses are expected to be approximately $445 million. GAAP and non-GAAP tax rates for the second quarter FY17 are both expected to be 20%, plus or minus 1%. Further financial details are included in the CFO commentary, and other information available on our IR website. We will now open the call for questions. Operator, could you please poll for questions? Thank you. ================================================================================ Questions and Answers ================================================================================ -------------------------------------------------------------------------------- Operator [1] -------------------------------------------------------------------------------- (Operator Instructions) First question, Vivek Arya, Bank of America. -------------------------------------------------------------------------------- Vivek Arya, BofA Merrill Lynch - Analyst [2] -------------------------------------------------------------------------------- Thank you for taking my question, and good job on the results and the guidance. Maybe as my first one, Jen-Hsun, how do you assess the competitive landscape in PC gaming? AMD recently claimed to be taking a lot of share, and they are launching a Polaris soon. If you could just walk us through, what does NVIDIA to better than AMD? So that helps you maintain your competitive edge in this market, and what impact does Pascal have in that? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [3] -------------------------------------------------------------------------------- Yes, Vivek, thank you. Our PC gaming platform, GeForce is strong and getting stronger than ever, and I think the reason for that is several-folds. First of all, our GPU architecture is just superior, and we dedicated an enormous amount of effort to advancing our GPU architecture. I think the engineering of NVIDIA is exquisite, and our craftsmanship is really unrivaled anywhere. The scale of our company in building GPUs is the highest and the largest of any company in the world. This is what we do, this is the one job that we do. And so, it is not surprising to me that NVIDIA's GPU technology is further ahead than any time in its history. The second thing, however, it's just so much more than just chips anymore as you know. Over the last 10 years, we've started to evolve our Company to much more of a platform company. And it's about developing all of the algorithms that sit on top of our GPUs. GPU is a general purpose processor. It's a general-purpose processor that is dedicated to a particular field of computing, such that it is computer graphics here, physics simulation, et cetera. But the thing that's really important all of the algorithms that sit on top of it. And we have a really fantastic team computational of mathematicians that captures our algorithms and our know-how into GameWorks, into the physics engine, and recently the really amazing work that we're doing in VR that we have embodied into VRWorks. And then, lastly, lastly it is about making sure that the experience always just works. We have a huge investment, in working with game developers all over the world. From the moment that the game is being conceived of, all the way to the point that it is launched. And we optimize the games on our platform. We make sure that our drivers run perfectly. And even before a gamer downloads or buys into a game, we've already updated their software so that it works perfectly when they install the game. And we call that GFE, GeForce Experience. And so, Vivek, it's really about a top to bottom approach. And I haven't even started talking about all of the marketing work that we do, in engaging the developers, and engaging the gamers all over the world. This is really a network platform, and all of our platform partners that take our platform to market. And so it's a pretty extensive network, and it's a pretty extensive platform. And it's so much more than chips anymore. -------------------------------------------------------------------------------- Vivek Arya, BofA Merrill Lynch - Analyst [4] -------------------------------------------------------------------------------- Got it. Thank you, Jen-Hsun. And as my follow up, so things like data center products were the big upside surprise in Q1, grew over 60% from last year. Could you give us some more color on what drove that upside? Was it the initial Pascal launch? Is that impact still to come? And just broadly, what trends are you seeing there in HPC, versus some of these new AI projects that you're involved with? -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [5] -------------------------------------------------------------------------------- Yes, thanks. You know that I have been rather enthusiastic about high-performance computing for some time. We've been evolving our GPU platforms, so that it's better at general-purpose computing than ever. And almost every single data center in the world, every single server company in the world are working with us, to build servers that are based on GPUs, based on video GPUs and high-performance computing. One of the most important areas of high-performance computing has been this area called deep learning. And this deep learning -- deep learning as you know, as you are probably starting to hear, is a brand-new computing model that takes advantage of the massively parallel processing capability of the GPU, along with the big data that many companies have, to essentially have software write algorithms by itself. Deep learning is a very important field of machine learning, and machine learning is now the process of revolutionizing artificial intelligence, making machines more and more intelligent, and using it to discover insights that quite frankly, is impossible otherwise. And so, this particular field was first adopted by hyperscale companies, so that they could find insight, and make recommendations, and make predictions from the billions of customer transactions they have every day. Now it is in the process of moving into enterprises, but in the meantime hyperscale companies are now the process of deploying our GPUs, and deep learning applications into production. And so, we've been talking about this area for some time, and now we're starting to see the broad deployment in production. So we're quite excited about that. -------------------------------------------------------------------------------- Operator [6] -------------------------------------------------------------------------------- Next question, Mark Lipacis, Jefferies. -------------------------------------------------------------------------------- Mark Lipacis, Jefferies LLC - Analyst [7] -------------------------------------------------------------------------------- Thanks for taking my questions. A first question, the growth in the Tesla business is impressive, and in looking back, it seemed like that business actually decelerated in 2015, which was a head-scratcher for me. And I wonder do you think that your customers in that business paused in anticipation of Pascal? Or do you think it is the AI apps and deep learning applications that are just hitting their stride right now? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [8] -------------------------------------------------------------------------------- Well, decelerate, I guess, I'm not sure I recall that. The thing about HPC, about GPU computing is, as you know this is a new computing model, and we've been promoting this computing model for close to seven years. And a new computing model doesn't come along very frequently. In fact, as I know it, I do not know if there's a new computing model that's used anywhere, that has been revolutionary in the last 20 years. And so, GPU computing took some time to develop. We've been evangelizing it for quite some time. We developed robust tools, so that would make it easier for people to take advantage of our GPUs. We have industry expertise in a large number of industries now. We have APIs that have been created for each one of the industries. We've been working with the ecosystem in each one of the industries, and developers in each one of the industries. And as of this time, we have quite a large handful, quite a large number of industries that we accelerate applications for. And so, I think that -- I guess, my recommendation -- my recollection would be is that, that it has taken a long time, in fact, to have made GPU computing into a major, new computing model. But I think at this point, it is pretty clear that it's going mainstream. It is really one of the best ways to achieve post Moore's law error of computing acceleration, and a lot of (inaudible) competition. And the one that, of course, that is very big deal is deep learning and machine learning. This particular field is a brand-new, new way of doing computing for a large number of companies, and we're seeing traction all over the place. -------------------------------------------------------------------------------- Operator [9] -------------------------------------------------------------------------------- Next question, Steven Chin, UBS. -------------------------------------------------------------------------------- Steven Chin, UBS - Analyst [10] -------------------------------------------------------------------------------- Hey, thanks for taking my questions. Jen-Hsun or Colette, first of all I want to see if you can help provide some color on some of the drivers of growth for fiscal 2Q, whether most of it is coming from Pascal possibly in the gaming market, or in the Tesla products, or if there was also some outgrowth and Tegra automotive as called for fiscal 2Q? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [11] -------------------------------------------------------------------------------- Yes, Stephen, I would expect that all of our businesses grow in Q2. And so, it is across the board. We are seeing great traction in gaming. Gaming as you know, has multiple growth drivers covers. But partly of the gaming is growing, because the production value of game is growing, partly because the number of people who are playing is growing. E-sports is more popular sport than ever, sports spectatorship is more popular than ever. And so, gaming is just a larger and larger market, and it's surprising everybody. And the quality of games is going up, which means that the complexity of GPUs has to go up. High-performance computing has grown, and the killer app is machine learning and deep learning. And that's going to continue to go into production from the hyperscale companies, as we expand our region to enterprises all over the world now. Companies who have a great deal of data that they would like to (inaudible). Automotive is growing, and we're delighted to see that the enterprise is growing as well. -------------------------------------------------------------------------------- Steven Chin, UBS - Analyst [12] -------------------------------------------------------------------------------- Great. As my follow-up, maybe for Colette. On the gross margin side of things, you guys are guiding margins up nicely for the quarter. And just kind wondering looking further across the year, whether you have -- whether or not that the levers that you have available to you currently, if there's further room for expansion, whether it is from product mix, higher ASPs, and or maybe some of the platform-related elements such as software services? And I was kind wondering, especially on the software side, how much that can continue to help the margins from a platform perspective? -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [13] -------------------------------------------------------------------------------- Sure. Thanks, Stephen. Yes, our gross margins within the quarter for Q1 did hit record levels, just due to very strong mix across our products, on the Maxwell side both from a gaming perspective, as well as what we have in enterprise for pro visualization and data center. As we look to Q2, a good review of where we also see gross margins, and those are looking at a non-GAAP at about 58%. So, again, be a strong component of that. As the launch of Pascal will come out with high-end gaming and with datacenter, and the growth essentially across all of our platforms will help our overall gross margins. As we go forward, there is still continued work to do. We're here to guide just one quarter out, but we do have a large TAM in front of us on many of these different markets, and the mix will certainly help us. We're in the initial stages of rolling out what we have in software services, our overall systems. So I don't expect it to be a material part of the overall gross margin, but it will definitely be a great value proposition for us, for what we put forth. -------------------------------------------------------------------------------- Operator [14] -------------------------------------------------------------------------------- Next question, Deepon Nag, Macquarie. -------------------------------------------------------------------------------- Deepon Nag, Macquarie Research Equities - Analyst [15] -------------------------------------------------------------------------------- Yes, thanks, guys, and congratulations on the great quarter. For Q2, can you kind of talk about how much of a contribution you expect from Pascal? And also maybe give us an update on where you think that yields are progressing right now? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [16] -------------------------------------------------------------------------------- Yes, thanks a lot, Deepon. We're expecting a lot out of Pascal. Pascal was just announced with 1080 and 1070, and both of those products are in full production. We're in production with Tesla P100. And so, all of our Pascal products that have we've already announced are in full production, so we're expecting a lot. Yields are good and building these semiconductor devices are always hard, but we're very good at it. And this is now -- a year behind when the first 16 nanometer finFET products went into production at TSMC. They have yields under great control. TSMC is the world's best manufacturer of semiconductors, and we work very closely with them, to make sure that we are ready for production. And we surely wouldn't have announced it, if we didn't have manufacturing under control. So we're in great shape. -------------------------------------------------------------------------------- Operator [17] -------------------------------------------------------------------------------- Next question, Ambrish Srivastava, BMO Capital Markets. -------------------------------------------------------------------------------- Gabriel Ho, BMO Capital Markets - Analyst [18] -------------------------------------------------------------------------------- This is Gabriel calling in for Ambrish. Thanks for taking my question. I think when you recently launched a new GPU products, looks like your pricing, your MSRP appears to be higher than your prior generation. And how should we think about your ASP, and even gross margin trend as you are ramping this product for the rest of the year? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [19] -------------------------------------------------------------------------------- Yes, thanks. The thing that's most important, is that the value is greater than ever. And one of the things that we know is games are becoming richer than ever, production value has become richer than ever. And gamers want to play these games with all of the settings maxed out. They would like to play at a very high resolution, and they want to play it at very high frame rates. When I announced 1080, I was showing all of the latest, the most demanding games running at twice the resolution of a game console, at twice the frame rate of the game console, and it was barely even breathing hard. And so, I think one of the most important things is for customers of this segment, they want to buy a product that they can count on, and that they can rely on to be ready for the future generation games. And some of the most important future generation games are going to be in VR. And so, the resolution is going to be even higher, the frame rate expectation is 90 hertz, and the latency has to be incredibly low, so that you feel a sense of presence. And so, I think the net of it all, is that the value proposition we deliver with 1080 and 1070 is just through the roof. And if you look at the early response on the web, and from analysts, they're quite excited about the value proposition that we brought. -------------------------------------------------------------------------------- Operator [20] -------------------------------------------------------------------------------- Next question, CJ Muse, Evercore. -------------------------------------------------------------------------------- C.J. Muse, Evercore ISI - Analyst [21] -------------------------------------------------------------------------------- Yes, good afternoon. Thank you for taking my question. I guess, two questions around the data center. I guess, first part, how is the visibility here today? And I guess how do you see perhaps the transition from hyperscale to ramp in HPC? And then, I know you guys are not like to forecast over the next couple of quarters. But looking out over the next 12 to 24 months, this part of your business has grown from 8% to 11% year over year. And curious as you look at one to two years, what do you think this could be as a percentage of your overall company? Thank you. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [22] -------------------------------------------------------------------------------- Yes, CJ, thanks a lot. I think a lot -- the answer to a lot of your questions is I don't know. However, there are some things I do know very well. One of the things that we do know is that high-performance computing is an essential approach for one of the most important computing models that we know today, which is machine learning and deep learning. Hyperscale datacenters all over the world is relying on this new model of computing, so that it can harvest, it could study all of the vast amounts of data that we're getting to find insight for individual customers, make the perfect recommendation, predict when somebody would anticipate, would look forward to in terms of news or products or whatever it is. And so, this approach of using computing is really unprecedented, and this is a new computing model, and the GPU is really ideal for it. And we have been working on this for coming up on a decade. And it explains one of the reasons why we have such a great lead in this particular aspect. The GPU is really the ideal processor, if you use massively parallel problems. And we've optimized our entire stack of platforms from the architecture, to the design, to the system, to the middleware, to the system software, all the way to the work that we do with developers all over the world, so that we can optimize the entire experience to deliver the best performance. And so, this is something that has taken a long time to do. I have a great deal of confidence that machine learning is not a fad. I have a great deal of confidence that machine learning is going to be the future computing model for a lot of very large and complicated problems. And I think that all of the stories that you see, whether it's the groundbreaking work that's done at Google, and Google DeepMind on AlphaGo, to self driving cars, to the work that people are talking about, and artificial intelligence recommendation chatbots to -- boy, the list just goes on and on. And I think that it goes without saying, that this new computing model in the last couple of years has really started to deliver very promising results, and I would characterize results as being superhuman results. And now they're going into production. And we're seeing production deployments, not just in one or two customers, but basically in every single hyperscale data center in the world, in every single country. And so, I think this is a very big deal. And I don't think it's a short-term phenomenon, and the amount of data that we process is just going to grow. And so, that those are some of the things I do know. -------------------------------------------------------------------------------- Operator [23] -------------------------------------------------------------------------------- Next question, Mark Lipacis, Jefferies. -------------------------------------------------------------------------------- Mark Lipacis, Jefferies LLC - Analyst [24] -------------------------------------------------------------------------------- Hi, thanks for cycling me back in for a follow up. Sometimes in -- when you introduce a new product, and this is broadly for technology, there's kind of a hiccup as the transition happens, where the supply chain blows out the older inventory, and before the new products can ramp in, so people call that an air pocket. So I was wondering is that something that you can manage, how do you try to manage that? Did you account for it, when you think about the outlook for this quarter? Thank you. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [25] -------------------------------------------------------------------------------- Yes, thanks, Mark. Well, part of transitions are always tricky, and we take it very seriously, and there's several things that we do know. We have a great deal of visibility to the channel. And so, we know how much inventory is where, and of which kind. And secondarily, we have perfect visibility into our supply chain, and both of those matters, we've taken into account, when we launch a new product. And so, anything could happen. The fact of the matter is we are in a high tech business, and high tech is hard. The work that we do is hard. The team is -- doesn't take it for granted and we're not complacent about our work. And so, I think that I can't imagine a better team in the world that is to manage this transition. We manage transitions all the time. And so, we do not take it lightly, however, you're absolutely right. I mean, it requires care, and the only thing I can tell you is that we're very careful. -------------------------------------------------------------------------------- Operator [26] -------------------------------------------------------------------------------- Next question, Joe Moore, Morgan Stanley. -------------------------------------------------------------------------------- Joe Moore, Morgan Stanley - Analyst [27] -------------------------------------------------------------------------------- Great, thank you. I guess, along the same lines, can you talk a little bit about the founders edition of the new gaming products? And how does that differ from sort of previous reference designs that you've done, and is there any kind of difference in economics to NVIDIA if you sell founders edition? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [28] -------------------------------------------------------------------------------- The founders edition is something we did, as a result of demand from the end-user base. The founders edition is basically designed by -- a wholly designed by NVIDIA product. A reference design is really not designed to be an end product. It's really designed to be a reference for manufacturers to use as a starting point. But a founders edition is designed so that it could be manufactured, it could be marketed, and customers can continue to buy it from us, for as long as they desire. Now our strategy -- our strategy is to support our global network of adding card partners, and we're going to continue to do that. And we gave them -- we gave everybody reference designs like we did before. And, in this particular case, we created the founders edition so that people who like to buy directly from us, people who like our industrial design, and people who would like the exquisite design and quality that comes with our products that we can do. And so, it's designed to be extremely [over clockable]. It's designed with all the best possible components. And if somebody would like to buy products directly from us, they have the ability to do that. I expect that the vast majority of the add-in cards will continue to be manufactured by our add-in card partners, and that's our expectation, and that's our hope. And I don't expect any dramatic change in the amount of shifting of that. So that's basically it, Founders edition, the most exquisitely engineered add-in card the world has ever seen, directly from NVIDIA. -------------------------------------------------------------------------------- Operator [29] -------------------------------------------------------------------------------- Next question, Harlan Sur, JPMorgan. -------------------------------------------------------------------------------- Harlan Sur, JPMorgan - Analyst [30] -------------------------------------------------------------------------------- Good afternoon, and solid job on the execution. At the recent Analyst Day, I think articulated its exposure to developed and emerging markets and the unit and ASP growth opportunities around EM. Just wondering, what are the current demand dynamics that you're seeing in the emerging markets? Clearly, I think macro-wise, they're still pretty weak, but on the flip side, gaming has shown to be fairly macro-insensitive. Would be great to get your views here. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [31] -------------------------------------------------------------------------------- Well, I think you just said it. Depending on which one of our businesses that you're talking about, gaming is rather macro-insensitive for some reason. People enjoy gaming whether the economy is good or not, whether the oil price is high or not, people seem to enjoy gaming. Don't forget gaming is not something that people do once a month, like going out to a movie theater or something like that. People game every day, and the gamers that use our products are gaming every day. It's their way of engaging with their friends, they hang out with their friends that way. It's a platform for chatting. Don't forget that the number one messaging company in China, is actually a gaming company. And the reason for that is, because while people are gaming, they're hanging out with their friends and they're chatting with their friends. And so, it is really a medium for all kinds of things, whether it's entertaining, or hanging out, or expressing your artistic capabilities or whatnot. And so, gaming for one, appears to be doing quite well, in all aspects of the market. The second thing is, enterprise, however, is largely or hyperscale is largely a US dynamic. And the reason for that is, because a US dynamic as well as a China dynamic, because that's where most of the world's hyperscale companies happen to be and so. And then, automotive, most of our automotive success to date has been from the European car companies, and we're seeing robust demand from the premium segments of the marketplace. However, in the future we're going to see a lot more success with automotive here in the United States, here in Silicon Valley, in China. We're going to see a lot more global penetration because of our self-driving car platform. -------------------------------------------------------------------------------- Operator [32] -------------------------------------------------------------------------------- Next question, Ian Ing, MKM Partners. -------------------------------------------------------------------------------- Ian Ing, MKM Partners - Analyst [33] -------------------------------------------------------------------------------- Yes, thank you. So for July, it looks like you've got some operating expense discipline. Given some hiring activity in April, you're down sequentially. Is that related to the timing of some tape out activity? And as Pascal rolls out, what should the shape of tape outs be, do you think of the upcoming quarters? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [34] -------------------------------------------------------------------------------- Well, all of the Pascal chips have been taped out. And so -- but we still have a lot of engineering work to do. The difference -- differences are minor. We're a large company, and we have a lot of things that we're doing. I wouldn't over-study the small deltas in OpEx. We don't manage things a dollar at a time, and we're trying to invest in the important things. On the other hand, this Company is really good about not wasting money. And so, we want to make sure that, on the one hand, we invest into opportunities that are very important to our company, but we just have a culture of frugality that permeates our company. And then lastly, from an operational perspective, we unified everything in our company behind one architecture. And whether you're talking about the cloud or workstations, or datacenters, or PCs, or cars, or embedded systems, or autonomous machines, you name it, everything is exactly one architecture. And the benefit of one architecture is that we can leverage one common stack of software, and that -- a base software, it really streamlines our execution. And so, it's an incredibly efficient approach for leveraging our one architecture into multiple markets. And so, those three aspects of how we run the company really helps. -------------------------------------------------------------------------------- Operator [35] -------------------------------------------------------------------------------- Next question, Blayne Curtis, Barclays. -------------------------------------------------------------------------------- Blayne Curtis, Barclays Capital - Analyst [36] -------------------------------------------------------------------------------- Hey, guys, thanks for taking my question, and nice results. I was just curious, two questions. Jen-Hsun, you talked about the ramp of deep learning, and you talked about that you are going to use GPUs for both learning, as well as applying the inferences. Just curious, what stages, you mentioned all these consumers, what stages are all these customers? Are they actually deploying it in volume, or are they still more of sales for learning? And then, you said all segments up. Just curious, OEM is finally hitting some easy compares. Is that also going to be up year over year? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [37] -------------------------------------------------------------------------------- I think this question I should -- -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [38] -------------------------------------------------------------------------------- I think OEM business, will that be up year over year? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [39] -------------------------------------------------------------------------------- I think the OEM business is down year over year, isn't it? -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [40] -------------------------------------------------------------------------------- Right. And so, on Q2 we'll probably follow along in Q2, along with the overall PC demand, which is not expected to grow. So we'll look at that as our side product, and probably would not be a growth business in Q2. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [41] -------------------------------------------------------------------------------- Yes. So Blayne, you know that our OEM business is a declining part of our company's overall business. And not to mention that the margins are also significantly low on the corporate average. And so, I would suggest that it's just increasingly a less important part of the way that we go to market. Now what I don't mean by that, is that we don't partner with the world's large OEMs, HP, Dell, IBM, Cisco, Lenovo, all of the world large enterprise companies are our partners. We partner with them to take our platforms, our differentiated platforms, or specialty platforms to the world's market, and most of them are related to enterprise. We just do less and less volume, high-volume components devices. Generic devices like cell phones that we got out of, generic PCs that we've gotten out of. Largely we tend not to do business like that anymore. We tend to focus on our differentiated platforms. Now you mentioned some, you mentioned learning and training and inferencing. First of all, training is production. You can't train a network just once, you have to train your network all the time. And every single hyperscale company in the world is in the process of scaling out their training. Because the networks are getting bigger, they want their networks to do even better. The difference between a 95% accurate network, and a 98% accurate network, or a 99% accurate network could mean billions of dollars of differences to internet companies. And so, this is a very big deal. And so, they want their networks to be larger, they want to deploy their networks across more applications, and they want to train their network with new data all the time. And so, training is a production matter. It is probably the largest HPC, high-performance computing application on the planet that we know of at the moment. And so, we're scaling, we're ramping up training for production for hyperscale companies. On the other hand, I really appreciate you asking about the inferencing We, recently -- well, this year, several months ago we announced the Tesla M4 that was designed for inferencing. And it's a little tiny graphics card, a little tiny processor, and it's less than 50 watts. It's called the M4. And at GTC, I announced a brand-new compiler called the GPU inference engine, GIE. And GIE recompiles the network that was trained, so that it can be optimumly inferenced at the lowest possible energy. And so, not only are we already 50 watts, which is low power, we can also now inference at a higher energy efficiency, than any processor that we know today, better than any CPU by a very long shot, better than any FPGA. And so, now hyperscale companies could use our GPUs for both training, and use exactly the same architecture for inferencing, and the energy efficiency is really fantastic. Now the benefit of using GPU for inferencing, is that you're not just trying to inference only. You're trying to, often times, decode the image, or you could be decoding the video, you inference on it. And you might even want to use it for transcoding, which is to re-encode that video, and stream it to whoever it is, that is -- that wants to share live video with. And so the [proxy] that you want to do on the images and the video and the data, is more than just inferencing, and the benefit of our GPU is that it's really great for all the other stuff too. And so, we're seeing a lot of success in M4. I expect M4 to be quite a successful product, and hyperscale datacenters, my expectation will start to ramp that into production Q2, Q3, Q4 time frame. -------------------------------------------------------------------------------- Operator [42] -------------------------------------------------------------------------------- Next question, Ross Seymore, Deutsche Bank. -------------------------------------------------------------------------------- Ross Seymore, Deutsche Bank - Analyst [43] -------------------------------------------------------------------------------- Hi, thanks for letting me ask a question. On the automotive side, I just wondered, Colette, in your CFO commentary, you mentioned by product development contracts as part of the reason it was increasing. Can you give us a little bit of an indication what those are? And is the percentage of revenue coming from those increasing? And then maybe finally, is that activity indicative of future growth in any way, that can be meaningful for us to track? -------------------------------------------------------------------------------- Colette Kress, NVIDIA Corporation - EVP & CFO [44] -------------------------------------------------------------------------------- Sure, thanks for the question. So in our automotive business, there's definitely a process even before we're shipping platforms into the overall cars, that we're working jointly with the auto manufacturers, start ups and others on what may be a future product. Many of those agreements continue, and will likely continue going forward, and that is what you see incorporated in our automotive business. So, yes, you probably will see this continue, and go forward. It is not necessarily consistent, it's starts in some quarters, are bigger in other quarters. But that's what is incorporated in our automotive. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [45] -------------------------------------------------------------------------------- And Colette, let me just add one thing. The thing to remember is that we're not selling chips into a car. We're not selling -- you know that DRIVE PX is the world's first autonomous driving car computer that's powered by AI, it's powered by deep learning, and we're seeing a lot of success with DRIVE PX. And as Colette mentioned earlier, there's some 80 companies that we're working with, whether it's tier 1s, or OEMs, or start-up companies all over the world that we're working with, in this area of autonomous vehicles. And the thing to realize is, you're not selling a chip into that car. You're working with a car company to build an autonomous driving car. And so that process requires a fair amount of engineering. And so, we have a mechanism, we have a development mechanism that allows car companies to work with our engineers to collaborate, to develop these self-driving cars. And that's most of that stuff that Colette was talking about. -------------------------------------------------------------------------------- Operator [46] -------------------------------------------------------------------------------- Next question, Craig Ellis, B Riley & Company. -------------------------------------------------------------------------------- Craig Ellis, B. Riley & Company - Analyst [47] -------------------------------------------------------------------------------- Thanks for taking the question, and congratulations on the revenue and the margin performance. Jen-Hsun, I wanted to follow up on one of the comments that you made regarding Pascal. I think you indicated that all Pascal parts had taped out. So the question is, if that is the case, will we see refresh activity across all of the platform groups in FY17, or in fact will some of the refresh activity taking place FY18? So what's the duration of the refresh that we are looking at? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [48] -------------------------------------------------------------------------------- Well first, let's -- thanks for the question, and we don't comment on unannounced products as you know. I hate to ruin all of the surprises for you, but Pascal is the single most ambitious GPU architecture we have ever undertaken. And this is really the first GPU that was designed from the ground up, for applications that are quite well beyond computer graphics and high-performance computing. It was designed to take into consideration, all of the things that we've learned about deep learning, all of the things that we've learned about VR. For example, it has a brand-new graphics pipeline that allows Pascal to simultaneously project into multiple surfaces at the same time, with no performance penalty, that otherwise, it would degrade your performance in VR by factor by a factor of 2, just because you have two surfaces you're projecting into. And then, we can do all kinds of amazing things for augmented reality, other types of virtual reality displays, surround displays, curved displays, domed displays, I mean there's all kind of -- holographic displays, all kind of display that are being invented at the moment. And we have the ability to now support those type of displays, with a much more elegant architecture, without degrading performance. And so, Pascal is, whether it's AI, whether it's gaming, whether it's VR, is really the most ambitious project we have ever undertaken, and it's going to go through all of our markets. The application for self-driving cars is going to be pretty exciting. And so, it's going to go through all of our market. And so, we are in -- of course, we have plenty to announce in the future, but we've announced what we announced. -------------------------------------------------------------------------------- Operator [49] -------------------------------------------------------------------------------- Next question, Romit Shaw, Nomura Research. -------------------------------------------------------------------------------- Romit Shah, Nomura Research - Analyst [50] -------------------------------------------------------------------------------- Yes, thanks very much. Jen-Hsun, I was hoping you could just share your view today on fully autonomous driving, because your mobilized chairman has said very recently, that the technology basically isn't ready. And that fully autonomous cars won't be available until -- I think he was saying 2019. And I guess, my question is well, one, I'd love your view on that? And two, whether the cars are fully autonomous, or autonomous in certain environments say, one or two years out, does it impact the trajectory of your automotive business? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [51] -------------------------------------------------------------------------------- First of all, to -- working on full autonomy is a great endeavor. And whether we get there 100%, 90%, 92%, 93%, is in my mind completely irrelevant. The endeavor of getting there, and making your car more and more autonomous -- initially, of course, we would like to have a virtual copilot. Having a virtual copilot is the way I get to work every day. I mean, every single day I drive my Model S. And every single day I put it into autonomous mode, and every single day it brings me joy. And I'm not confessing necessarily, texting a little bit is okay. And so, I think that the path to full autonomy is going to be paved by amazing capabilities along the way. And so, we're not waiting around for 2019, we'll ship autonomous vehicles by the end of this year. And so I understand that we're three years ahead of other people's schedules. However, we also know that DRIVE PX 2 is the most advanced autonomous computing car computer in the world today. And it's powered by AI fully. And DRIVE PX 2 will be a DRIVE PX 3, there will be a DRIVE PX 4. And then by 2019, I guess, we'll be shipping DRIVE PX 5. And so, those -- our road map just like that. That's how we work, as you guys know very well. And so, I think there's a point, there 's a lot of work to be done, which is the exciting part. The thing about a technology company, a thing about any company, unless there's great problems and great challenges that we can help solve, what value do we bring? And what NVIDIA does for a living, is to do what-- to build computers that no other company in the world can build, whether it's high-performance computers that are used to power a nation's supercomputers, or a deep learning supercomputer so that we can gain insight from data, or self-driving car computers, so that autonomous cars can save people's lives, and make people's lives more convenient. That's what we do. This is the work that we do. And I am delighted here, that we're three years ahead of the competition. -------------------------------------------------------------------------------- Operator [52] -------------------------------------------------------------------------------- Next question, Suji Desilva, Topeka Capital Markets. -------------------------------------------------------------------------------- Suji Desilva, Topeka Capital Markets - Analyst [53] -------------------------------------------------------------------------------- Hi, Jen-Hsun, hi, Colette. Congratulations on the impressive results here. On the data center business, is there an inflection going on, with deep learning, with the software maturity that's driving some at this point? And can you give us any metrics. Jen-Hsun, for how to think about the size of this opportunity before you? And I know it's hard, but things like server attach rates, what percent of servers you could attach? Will it be an M4 in an high end in every box? Or was it -- or maybe the number of GPUs a single deep learning implementation has? Something like that. That would help. Thanks. -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [54] -------------------------------------------------------------------------------- Yes, the truth is that nobody really knows how big, this deep learning market is going to be. Until a couple of two or three years ago, it was really even hard to imagine how good the results were going to be. And if it wasn't because the groundbreaking work that was done at Google and Facebook, and in other researchers around the world, how would we have discovered, that it was going to be superhuman? The work that recently was done by Microsoft Research. They've achieved superhuman levels of inferencing that -- of image recognition and voice recognition that's really kind of hard to imagine. And these networks are now huge The Microsoft Research network, super deep network is 1,000 layers deep. And so, training such a network is quite a chore, it's quite endeavor, and this is a problem that high-performance computing will have to deployed, and this is why our GPUs so sought after. In terms of how big that's going to be, my sense is that almost no transaction -- my sense is that almost no transaction with the internet will be without deep learning, or some machine learning inference in the future. I just can't imagine that. There's no recommendation of a movie, no recommendation of a purchase, no search, no image search, no text that won't somehow had passed through some smart chatbot, or smartbot, or some machine learning algorithms, so that they could make the transaction more -- make the inference, or more request, more useful to you. And so, I think this is going to be a very big thing. And then, on the other hand, the enterprises, we use deep learning all over our Company today. And we're not -- we had the benefit of being early, because we saw the power of this technology early on. But we are seeing deep learning being used now in medical imaging all over the world. We're seeing it being used in manufacturing, it's going to be used for scientific computing. More data is generated by high-performance computers and supercomputers than just about anything. They generate it through simulation. They generate so much data, that they to throw the vast majority of it away. For example, Hadron Collider, whenever the protons collide, they throw away 99% of the data, and they're barely able to keep up with just that 1%. And so, by using machine learning, and our GPUs, they could find insight in the rest of the 99%. And so, there's just -- applications go on and on and on. And people are now starting to understand this deep learning. It really puts machine learning and puts artificial intelligence in the hands of engineers, is understandable. And that's one of the reasons why it's growing so fast. And so, I don't know exactly how big it's going to be. But here's my proposition, and this is going to be the next big computing model, the way that people compute. That in the past, software programmers wrote programs, compiled it. And in the future, we're going to have algorithms write the software for us. And so, that's a [very good way of computing] and I think it's a very good deal. -------------------------------------------------------------------------------- Operator [55] -------------------------------------------------------------------------------- Next question, David Wong, Wells Fargo. -------------------------------------------------------------------------------- David Wong, Wells Fargo Securities, LLC - Analyst [56] -------------------------------------------------------------------------------- Thanks very much. In automotive, what product are your revenues coming from currently? Is DRIVE PX at all significant, or are your sales primarily DRIVE CX, or something else? -------------------------------------------------------------------------------- Jen-Hsun Huang, NVIDIA Corporation - President & CEO [57] -------------------------------------------------------------------------------- The primary parts of our automotive business today comes from infotainment, in the premier infotainment systems. For example, the virtual cockpit that Audi ships. And the vast majority of our development projects today come from DRIVE [PX] on those projects. We probably have 10 times as many autonomous driving projects, as we have infotainment projects today. And we have a fair number of infotainment projects. And so, that gives you a sense of where we were in the past, and where we're going in the future. -------------------------------------------------------------------------------- Operator [58] -------------------------------------------------------------------------------- I'm showing no further questions at this time. Mr. Chanda, please I will turn the call over to you. -------------------------------------------------------------------------------- Arnab Chanda, NVIDIA Corporation - VP of IR [59] -------------------------------------------------------------------------------- We had a great start to the year, with strong revenue growth and profitability. Pascal is a quantum leap in performance for AI, gaming, and VR and is in full production. Deep learning is spreading across every industry, making datacenter our fastest growing business. With growing worldwide adoption of AI, the arrival of VR, and the rise of self-driving cars, we're really excited about the future. Thanks for tuning in. -------------------------------------------------------------------------------- Operator [60] -------------------------------------------------------------------------------- Ladies and gentlemen, that concludes today's conference call. We thank you for your participation, and we ask that you please disconnect your line. Have a great day, everyone. -------------------------------------------------------------------------------- Definitions -------------------------------------------------------------------------------- PRELIMINARY TRANSCRIPT: "Preliminary Transcript" indicates that the Transcript has been published in near real-time by an experienced professional transcriber. While the Preliminary Transcript is highly accurate, it has not been edited to ensure the entire transcription represents a verbatim report of the call. EDITED TRANSCRIPT: "Edited Transcript" indicates that a team of professional editors have listened to the event a second time to confirm that the content of the call has been transcribed accurately and in full. -------------------------------------------------------------------------------- Disclaimer -------------------------------------------------------------------------------- Thomson Reuters reserves the right to make changes to documents, content, or other information on this web site without obligation to notify any person of such changes. In the conference calls upon which Event Transcripts are based, companies may make projections or other forward-looking statements regarding a variety of items. Such forward-looking statements are based upon current expectations and involve risks and uncertainties. Actual results may differ materially from those stated in any forward-looking statement based on a number of important factors and risks, which are more specifically identified in the companies' most recent SEC filings. Although the companies may indicate and believe that the assumptions underlying the forward-looking statements are reasonable, any of the assumptions could prove inaccurate or incorrect and, therefore, there can be no assurance that the results contemplated in the forward-looking statements will be realized. THE INFORMATION CONTAINED IN EVENT TRANSCRIPTS IS A TEXTUAL REPRESENTATION OF THE APPLICABLE COMPANY'S CONFERENCE CALL AND WHILE EFFORTS ARE MADE TO PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS, OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE CONFERENCE CALLS. IN NO WAY DOES THOMSON REUTERS OR THE APPLICABLE COMPANY ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN ANY EVENT TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S CONFERENCE CALL ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE MAKING ANY INVESTMENT OR OTHER DECISIONS. -------------------------------------------------------------------------------- Copyright 2019 Thomson Reuters. All Rights Reserved. -------------------------------------------------------------------------------- |