Nvidia Corporation (officially written as NVIDIA, and stylized in its logo as nVIDIA with the lowercase "n" the same height as the uppercase "VIDIA"; formerly stylized with a large italicized lowercase "n" on products from the mid-1990s to the early-to-mid 2000s; the unofficial second-letter capitalization nVidia may also be found in enthusiast communities and publications) is an American multinational technology company incorporated in Delaware and based in Santa Clara, California. It is a software and fabless hardware company that designs graphics processing units (GPUs), application programming interfaces (APIs) for data science and high-performance computing, and system-on-a-chip units (SoCs) for the mobile computing and automotive markets. Nvidia is a global leader in artificial intelligence hardware and software from edge to cloud computing, and it has expanded its presence in the gaming industry with its handheld game console Shield Portable, the Shield Tablet and Shield Android TV, and its cloud gaming service GeForce Now. Its professional line of GPUs is used in workstations for applications in fields such as architecture, engineering and construction, media and entertainment, automotive, scientific research, and manufacturing design.
In addition to GPU manufacturing, Nvidia provides an API called CUDA that allows the creation of massively parallel programs which utilize GPUs.
On September 13, 2020, Nvidia announced plans to acquire Arm Ltd. from SoftBank Group, pending regulatory approval, for US$40 billion in stock and cash, which would have been the largest semiconductor acquisition to date. Under the deal, SoftBank Group would acquire slightly less than a 10% stake in Nvidia, and Arm would maintain its headquarters in Cambridge.
On February 7, 2022, facing increased regulatory hurdles, Nvidia signaled that it was dropping its acquisition of Arm. The deal, which would have been the largest ever in the chip sector, was valued at US$66 billion at the time of its collapse.
In 1993, the three co-founders believed that the proper direction for the next wave of computing was accelerated or graphics-based computing, because it could solve problems that general-purpose computing could not. They also observed that video games were simultaneously one of the most computationally challenging problems and a market with incredibly high sales volume. Video games became the company's flywheel for reaching large markets and funding the huge R&D needed to solve massive computational problems. With only $40,000 in the bank, the company was born. The company subsequently received $20 million of venture capital funding from Sequoia Capital and others. Nvidia initially had no name, and the co-founders named all their files NV, as in "next version". The need to incorporate the company prompted the co-founders to review all words with those two letters, leading them to "invidia", the Latin word for "envy". Nvidia went public on January 22, 1999.
Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project took many of its best engineers away from other projects. In the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its one-time rival 3dfx, a pioneer in consumer 3D graphics technology that had led the field from the mid-1990s until 2000. The acquisition process was finalized in April 2002.
In July 2002, Nvidia acquired Exluna for an undisclosed sum. Exluna made software-rendering tools, and its personnel were merged into the Cg project. In August 2003, Nvidia acquired MediaQ for approximately US$70 million. On April 22, 2004, Nvidia acquired iReady, a provider of high-performance TCP/IP and iSCSI offload solutions. In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor (RSX) in the PlayStation 3 game console. On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party southbridge parts to ATI Technologies, Nvidia's competitor. In March 2006, Nvidia acquired Hybrid Graphics. In December 2006, Nvidia, along with its main rival in the graphics industry, AMD (which had acquired ATI), received subpoenas from the U.S. Department of Justice regarding possible antitrust violations in the graphics card industry.
Forbes named Nvidia its Company of the Year for 2007, citing the accomplishments it made during that period as well as during the previous five years. On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc. In February 2008, Nvidia acquired Ageia, developer of the PhysX physics engine and physics processing unit, and announced that it planned to integrate the PhysX technology into its future GPU products.
In July 2008, Nvidia took a write-down of approximately $200 million on its first-quarter revenue, after reporting that certain mobile chipsets and GPUs produced by the company had "abnormal failure rates" due to manufacturing defects. Nvidia, however, did not reveal the affected products. In September 2008, Nvidia became the subject of a class action lawsuit over the defects, claiming that the faulty GPUs had been incorporated into certain laptop models manufactured by Apple Inc., Dell, and Hewlett-Packard. In September 2010, Nvidia reached a settlement, in which it would reimburse owners of the affected laptops for repairs or, in some cases, replacement. On January 10, 2011, Nvidia signed a six-year, $1.5 billion cross-licensing agreement with Intel, ending all litigation between the two companies.
In November 2011, after initially unveiling it at Mobile World Congress, Nvidia released its Tegra 3 ARM system-on-chip for mobile devices. Nvidia claimed that the chip featured the first-ever quad-core mobile CPU. In May 2011, it was announced that Nvidia had agreed to acquire Icera, a baseband chip making company in the UK, for $367 million. In January 2013, Nvidia unveiled the Tegra 4, as well as the Shield Portable, an Android-based handheld game console powered by the new system-on-chip. On July 29, 2013, Nvidia announced that it had acquired PGI from STMicroelectronics.
On May 6, 2016, Nvidia unveiled the first GPUs of the GeForce 10 series, the GTX 1080 and 1070, based on the company's new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X model; the models incorporate GDDR5X and GDDR5 memory respectively, and use a 16 nm manufacturing process. The architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), which is designed to improve the quality of multi-monitor and virtual reality rendering. Laptops that include these GPUs and are sufficiently thin (as of late 2017, under a thickness specified by Nvidia) have been designated as meeting Nvidia's "Max-Q" design standard.
In July 2016, Nvidia agreed to a settlement for a false advertising lawsuit regarding its GTX 970 model, as the cards were unable to use all of their advertised 4 GB of RAM due to limitations of the hardware design. In May 2017, Nvidia announced a partnership with Toyota, which would use Nvidia's Drive PX-series artificial intelligence platform for its autonomous vehicles. In July 2017, Nvidia and the Chinese search giant Baidu announced a far-reaching AI partnership covering cloud computing, autonomous driving, consumer devices, and Baidu's open-source AI framework PaddlePaddle. Baidu announced that Nvidia's Drive PX 2 would be the foundation of its autonomous-vehicle platform.
Nvidia officially released the Nvidia Quadro GV100 on March 27, 2018, and the RTX 2080 GPUs on September 27, 2018. In 2018, Google announced that Nvidia's Tesla P4 graphics cards would be integrated into Google Cloud's artificial intelligence services.
In May 2018, a thread was started on the Nvidia user forum asking the company to update users on when it would release web drivers for its cards installed on legacy Mac Pro machines (up to the mid-2012 5,1 model) running the macOS Mojave operating system, version 10.14. Device drivers are required to enable the graphics acceleration and multiple-monitor capabilities of the GPU. On its Mojave update website, Apple stated that macOS Mojave would run on legacy machines with "Metal compatible" graphics cards and listed Metal-compatible GPUs, including some manufactured by Nvidia. However, this list did not include Metal-compatible cards that worked in macOS High Sierra using Nvidia-developed web drivers. In September, Nvidia responded: "Apple fully control drivers for Mac OS. But if Apple allows, our engineers are ready and eager to help Apple deliver great drivers for Mac OS 10.14 (Mojave)." In October, Nvidia followed this up with another public announcement: "Apple fully controls drivers for Mac OS. Unfortunately, Nvidia currently cannot release a driver unless it is approved by Apple," suggesting a possible rift between the two companies. By January 2019, with still no sign of the enabling web drivers, AppleInsider weighed into the controversy with a claim that Apple management "doesn't want Nvidia support in macOS".
The following month, AppleInsider followed this up with another claim that Nvidia support was abandoned because of "relational issues in the past", and that Apple was developing its own GPU technology. Without Apple-approved Nvidia web drivers, Apple users are faced with replacing their Nvidia cards with a supported competing brand, such as AMD Radeon, from the list recommended by Apple.
On March 11, 2019, Nvidia announced a deal to buy Mellanox Technologies for $6.9 billion to substantially expand its footprint in the high-performance computing market. In May 2019, Nvidia announced new RTX Studio laptops, which it said would be seven times faster than a top-end MacBook Pro with a Core i9 and AMD's Radeon Pro Vega 20 graphics in apps such as Autodesk Maya and RedCine-X Pro. In August 2019, Nvidia announced Minecraft RTX, an official Nvidia-developed patch for the game Minecraft adding real-time DXR raytracing exclusively to the Windows 10 version of the game. The whole game is, in Nvidia's words, "refit" with path tracing, which dramatically affects the way light, reflections, and shadows work inside the engine.
In May 2020, Nvidia's top scientists developed an open-source ventilator to address the shortage resulting from the global coronavirus pandemic. On May 14, 2020, Nvidia officially announced its Ampere GPU microarchitecture and the Nvidia A100 GPU accelerator. In July 2020, it was reported that Nvidia was in talks with SoftBank Group to buy Arm Holdings, a UK-based chip designer, for $32 billion.
In October 2020, Nvidia announced its plan to build the UK's most powerful supercomputer in Cambridge, England. Named Cambridge-1, the computer will employ AI to support healthcare research, with an expected completion by the end of 2020, at a cost of approximately £40 million. According to Jensen Huang, "The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."
Also in October 2020, along with the release of the Nvidia RTX A6000, Nvidia announced that it was retiring its workstation GPU brand Quadro, shifting to the Nvidia RTX name for future products, which would be based on the Nvidia Ampere architecture.
In August 2021, the proposed takeover of Arm Holdings stalled after the UK's Competition and Markets Authority raised "significant competition concerns". In October 2021, the European Commission opened a competition investigation into the takeover. The Commission stated that Nvidia's acquisition could restrict competitors' access to Arm's products and provide Nvidia with too much internal information on its competitors due to their deals with Arm. SoftBank (the parent company of Arm) and Nvidia announced in early February 2022 that they "had agreed not to move forward with the transaction 'because of significant regulatory challenges'"; the Commission's investigation had been set to conclude on March 15, 2022. Also in February 2022, Nvidia was reportedly compromised by a cyberattack. The attack coincided with the 2022 Russian invasion of Ukraine, though there is no indication that it came from Russia or Russian hackers.
In March 2022, Nvidia CEO Jensen Huang said the company was open to having Intel manufacture its chips in the future. This was the first time the company indicated it would consider working with Intel's upcoming foundry services.
For the second quarter of 2020, Nvidia reported sales of $3.87 billion, a 50% rise from the same period in 2019. The surge in sales was attributed to heightened demand for computer technology during the pandemic. According to the company's financial chief, Colette Kress, the effects of the pandemic will "likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as Nvidia laptops and virtual workstations, that enable remote work and virtual collaboration."
Some families are listed below:
Instead, Nvidia provides its own binary GeForce graphics drivers for X.Org, along with an open-source library that interfaces with the Linux, FreeBSD, or Solaris kernels and the proprietary graphics software. Nvidia also provided, but has stopped supporting, an obfuscated open-source driver that only supports two-dimensional hardware acceleration and ships with the X.Org distribution.
The proprietary nature of Nvidia's drivers has generated dissatisfaction within free-software communities. Some Linux and BSD users insist on using only open-source drivers and regard Nvidia's insistence on providing nothing more than a binary-only driver as inadequate, given that competing manufacturers such as Intel offer support and documentation for open-source developers and that others (like AMD) release partial documentation and provide some active development. An overview of graphic card manufacturers and how well they work with Ubuntu Ubuntu Gamer, January 10, 2011 (Article by Luke Benstead)
Because of the closed nature of the drivers, Nvidia video cards cannot deliver adequate features on some platforms and architectures given that the company only provides x86/x64 and ARMv7-A driver builds. As a result, support for 3D graphics acceleration in Linux on PowerPC does not exist, nor does support for Linux on the hypervisor-restricted PlayStation 3 console.
Some users claim that Nvidia's Linux drivers impose artificial restrictions, like limiting the number of monitors that can be used at the same time, but the company has not commented on these accusations.
In 2014, beginning with its Maxwell GPUs, Nvidia started to require signed firmware to unlock all features of its graphics cards. As of 2019, this had not changed, and it continues to make writing open-source drivers difficult. NVIDIA Begins Requiring Signed GPU Firmware Images, slashdot, 2014-09-27. Linux-Firmware Adds Signed NVIDIA Firmware Binaries For Turing's Type-C Controller, phoronix, 2019-02-13. The Open-Source NVIDIA "Nouveau" Driver Gets A Batch Of Fixes For Linux 5.3, phoronix, 2019-07-19.
On May 12, 2022, Nvidia announced that it was open-sourcing its GPU kernel drivers. The user-space components remain closed source, however, leaving users still dependent on Nvidia's proprietary software.
In April 2016, Nvidia produced the DGX-1, based on an 8-GPU cluster, to improve users' ability to apply deep learning by combining GPUs with integrated deep learning software. It also developed Nvidia Tesla K80 and P100 GPU-based virtual machines, which became available through Google Cloud in November 2016. Microsoft added GPU servers in a preview offering of its N series based on Nvidia's Tesla K80s, each containing 4992 processing cores. Later that year, AWS's P2 instance was produced using up to 16 Nvidia Tesla K80 GPUs. That same month, Nvidia also partnered with IBM to create a software kit called IBM PowerAI that boosts the AI capabilities of Watson. Nvidia also offers its own NVIDIA Deep Learning software development kit. In 2017, the GPUs were also brought online at the Riken Center for Advanced Intelligence Project for Fujitsu. The company's deep learning technology led to a boost in its 2017 earnings.
In May 2018, researchers at Nvidia's artificial intelligence department demonstrated the possibility that a robot could learn to perform a job simply by observing a person doing the same job. They created a system that, after brief review and testing, could already be used to control next-generation universal robots. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists that allow them to efficiently run high-performance applications. "Robot see, robot do: Nvidia system lets robots learn by watching humans" New Atlas, May 23, 2018
The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB section, access to the latter being seven times slower than to the former. The company then promised a specific driver modification to alleviate the performance issues produced by the cutbacks suffered by the card. However, Nvidia later clarified that the promise had been a miscommunication and that there would be no specific driver update for the GTX 970. Nvidia stated that it would assist customers who wanted refunds in obtaining them. On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident. In February 2015, a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.
Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers. This comes at the cost of dividing the memory bus into high speed and low speed segments that cannot be accessed at the same time unless one segment is reading while the other segment is writing because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the two GDDR5 controllers and itself. This is used in the GeForce GTX 970, which therefore can be described as having 3.5 GB in its high speed segment on a 224-bit bus and 0.5 GB in a low speed segment on a 32-bit bus.
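The arithmetic behind this partitioning can be sketched as follows; this is an illustrative calculation based only on the figures in the text above (eight 32-bit GDDR5 controllers, 0.5 GB per controller, one L2/ROP unit disabled), not an official Nvidia specification:

```python
# GeForce GTX 970 memory segmentation, reconstructed from the stated figures.
controllers = 8            # 32-bit GDDR5 memory controllers on the full GM204 die
per_controller_gb = 0.5    # memory attached to each controller
disabled_units = 1         # L2/ROP units disabled on the GTX 970

# One controller loses its dedicated L2/ROP unit, so it forms the slow segment.
fast_controllers = controllers - disabled_units
slow_controllers = disabled_units

fast_bus_bits = fast_controllers * 32          # high-speed bus width
slow_bus_bits = slow_controllers * 32          # low-speed bus width
fast_gb = fast_controllers * per_controller_gb # high-speed memory
slow_gb = slow_controllers * per_controller_gb # low-speed memory

print(fast_gb, "GB on a", fast_bus_bits, "bit bus;",
      slow_gb, "GB on a", slow_bus_bits, "bit bus")
# 3.5 GB on a 224 bit bus; 0.5 GB on a 32 bit bus
```

This reproduces the 3.5 GB / 224-bit and 0.5 GB / 32-bit split described above; the two segments cannot be accessed simultaneously except in the read-while-write case noted in the text.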
On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class-action lawsuit, offering a $30 refund on GTX 970 purchases. The agreed-upon refund represented the portion of the card's cost corresponding to the storage and performance capabilities that consumers assumed they were obtaining when they purchased the card.
It appears that while this core feature is in fact exposed by the driver, Nvidia partially implemented it through a driver-based shim, at a high performance cost. Unlike AMD's competing GCN-based graphics cards, which include a full implementation of hardware-based asynchronous compute, Nvidia planned to rely on the driver to implement a software queue and a software distributor to forward asynchronous tasks to the hardware schedulers, which distribute the workload to the correct units. Asynchronous compute on Maxwell therefore requires that both a game and the GPU driver be specifically coded for asynchronous compute on Maxwell in order to enable this capability. The 3DMark Time Spy benchmark shows no noticeable performance difference between asynchronous compute being enabled or disabled. Asynchronous compute is disabled by the driver for Maxwell.
Oxide claims that this led to Nvidia pressuring them not to include the asynchronous compute feature in their benchmark at all, so that the 900 series would not be at a disadvantage against AMD's products which implement asynchronous compute in hardware.
Maxwell requires that the GPU be statically partitioned for asynchronous compute to allow tasks to run concurrently. Each partition is assigned to a hardware queue. If any of the queues assigned to a partition empties out or is unable to submit work for any reason (e.g. a task in the queue must be delayed until a hazard is resolved), the partition and all of the resources in that partition reserved for that queue will idle. Asynchronous compute could therefore easily hurt performance on Maxwell if software is not coded to work with Maxwell's static scheduler. Furthermore, graphics tasks saturate Nvidia GPUs much more easily than they do AMD's GCN-based GPUs, which are much more heavily weighted towards compute, so Nvidia GPUs have fewer scheduling holes that could be filled by asynchronous compute than AMD's. For these reasons, the driver forces a Maxwell GPU to place all tasks into one queue, execute each task in serial, and give each task the undivided resources of the GPU, regardless of whether each task can saturate the GPU.
In emails that were disclosed by Walton from Nvidia Senior PR Manager Bryan Del Rizzo, Nvidia had said:
...your GPU reviews and recommendations have continued to focus singularly on rasterization performance, and you have largely discounted all of the other technologies we offer gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.
TechSpot, a partner site of Hardware Unboxed, said, "this and other related incidents raise serious questions around journalistic independence and what they are expecting of reviewers when they are sent products for an unbiased opinion."
A number of prominent technology reviewers came out strongly against Nvidia's move. Linus Sebastian, of Linus Tech Tips, titled the episode of his popular weekly WAN Show "NVIDIA might ACTUALLY be EVIL..." and was highly critical of the company's move to dictate specific outcomes of technology reviews. The popular review site Gamers Nexus called it "Nvidia's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA."
Two days later, Nvidia reversed their stance. Hardware Unboxed sent out a Twitter message, "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back." On December 14, Hardware Unboxed released a video explaining the controversy from their viewpoint. Via Twitter, they also shared a second apology sent by Nvidia's Del Rizzo that said "to withhold samples because I didn't agree with your commentary is simply inexcusable and crossed the line."