Symbiosis Of AI & Humans:
Computers Are Better Than Humans At Certain Tasks:
Lowering AI Costs:
Symbiosis Of AI & Humans:
The symbiosis of AI & humans will make workers far more efficient and productive.
According to our research, during the next eight years AI software could boost the productivity of the average knowledge worker by nearly 140%, adding approximately $50,000 in value per worker, or $56 trillion globally, as shown below.[2]
We expect the value of a knowledge worker empowered with AI to increase ~15% at an annual rate, compared to the 2.7% consensus expectation for annualized wage growth through 2030.
As companies generate profound productivity gains from artificial intelligence, we estimate that software spend per year could grow at a 42% compound annual rate from just under $1 trillion today to $14 trillion in 2030.[1]
According to our estimates for 2030, propelled by AI software, total IT spend could increase 20% at an annual rate and surpass $20 trillion, which is four times the ~$5 trillion consensus estimate, as shown below.
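Because these are all compound-growth claims, the arithmetic behind them is easy to check. Below is a minimal sketch in Python using only the endpoints quoted above; the ~$0.9 trillion software-spend baseline is our approximation of "just under $1 trillion."

```python
# Sanity-checking the compound-growth claims above. Baseline values are
# approximations taken from the text ("just under $1 trillion", etc.).

def implied_cagr(start, end, years):
    """Compound annual growth rate implied by two endpoints."""
    return (end / start) ** (1 / years) - 1

YEARS = 8  # roughly 2022 -> 2030

# Software spend: "just under $1 trillion today" -> $14 trillion in 2030
print(f"Software spend CAGR: {implied_cagr(0.9e12, 14e12, YEARS):.0%}")  # ~41%

# IT spend: ~$20 trillion in 2030 vs. the ~$5 trillion consensus
print(f"IT spend multiple vs. consensus: {20e12 / 5e12:.0f}x")

# Knowledge-worker value: ~15%/yr vs. the 2.7% consensus for wage growth
print(f"Value at 15.0%/yr over {YEARS} yrs: {1.15 ** YEARS:.2f}x")   # ~3.1x
print(f"Value at  2.7%/yr over {YEARS} yrs: {1.027 ** YEARS:.2f}x")  # ~1.2x
```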
History suggests that new technologies do not simply replace human workers: instead, their introduction tends to increase productivity.
Agriculture has been an important case study in automation. The tractor, for example, transformed agriculture from a human- and horse-powered industry into a mechanized industry during the past hundred years. If agriculture is any indication, other industries in the process of automation are likely to enjoy increased productivity and margins, both of which should boost their returns on invested capital over time. The tractor is a good proxy for the level of automation in the agriculture industry.
As shown below, the adoption of the tractor followed the traditional S-curve. While displacing horses and mules, it hit an inflection point in the 20-25% range in 1940 and within 30 years leveled off at roughly 80% in 1969.
Thanks to the tractor, farm productivity increased dramatically. Between 1950 and 2015 total farm output nearly tripled while employment dropped by more than two thirds, as shown below.
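The adoption path described above can be approximated with a simple logistic function. The sketch below is illustrative only: the ceiling, midpoint, and steepness are assumptions tuned to the figures quoted above (roughly 20% adoption in 1940, leveling off near 80% by 1969), not parameters fitted to historical data.

```python
import math

# Illustrative logistic model of tractor adoption. Parameters are assumptions
# tuned to the figures above, not fitted to historical data.
CEILING = 0.80    # saturation share of farms
MIDPOINT = 1945   # assumed year adoption reaches half its ceiling
STEEPNESS = 0.22  # assumed growth rate, so adoption levels off by ~1969

def adoption(year):
    """Share of farms using tractors in a given year, per the toy model."""
    return CEILING / (1 + math.exp(-STEEPNESS * (year - MIDPOINT)))

for year in (1930, 1940, 1950, 1960, 1969):
    print(year, f"{adoption(year):.0%}")  # 3%, 20%, 60%, 77%, 80%
```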
We can see a first glimpse of this productivity multiplier at work in the introduction of Palantir’s software:
This is the whole proposition of Palantir: a symbiosis between software and human. The result is workers who increase their productivity drastically despite not being technical – Palantir empowers non-technical workers.
“I believe that Silicon Valley is creating innovation without jobs, and it’s really hurting our world. And what’s very special about Foundry is that it creates innovation with jobs. So there’s factory workers – there are 1,500 people at Chrysler using our product, 1,500 people who are technical but are not PhDs in computer science or math.”
“And so, on the commercial side, making sure that workers can use the product. And once the worker starts using the product, they’re really valuable, because the worker on the frontline can begin to do something that heretofore no worker could do – actually, a computer alone can’t do it, because it’s a factorial problem. Computers are not capable of actually doing this yet – maybe they will be in ten years, but by then the worker will be so good that maybe they’re irreplaceable.”
Computers Are Better Than Humans At Certain Tasks:
Computers will take over certain tasks entirely, increasing the overall productivity of the worker.
Today, OpenAI’s Codex coding tool uses neural networks to autocomplete coding tasks and can complete more than 30% of coding problems associated with simple tasks, like creating website forms, in minutes instead of hours. Given current rates of improvement, tools like Codex are likely to take over specific tasks in nearly every job category, from accounting to engineering.
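To make that concrete, here is a minimal sketch of asking a Codex-style model for a website form. It assumes the legacy OpenAI Python client (openai < 1.0) and API access to a Codex-era completions model; the model name and prompt are illustrative, not a description of any particular product.

```python
# A hedged sketch: generating an HTML form with a Codex-style model.
# Assumes the legacy OpenAI Python client and a Codex-era completions model.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="code-davinci-002",  # Codex-family model; name is illustrative
    prompt="<!-- An HTML contact form with name, email, and message fields -->\n",
    max_tokens=256,
    temperature=0,
)
print(response["choices"][0]["text"])  # the generated HTML
```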
Another recent example is DALL·E 2, which produces strikingly creative images from text prompts. The Twitter account @Dalle2Pics documents many examples of DALL·E-generated images such as the one below, its response to the following prompt: “Early designs of the iPhone by Leonardo da Vinci.” We plan to publish additional research related to DALL·E 2 and its impact on productivity in the near future.
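Image generation follows the same prompt-in, artifact-out pattern. A sketch assuming the legacy OpenAI image endpoint, reusing the prompt quoted above:

```python
# A hedged sketch: generating an image from a text prompt.
# Assumes the legacy OpenAI Python client and the DALL·E image endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="Early designs of the iPhone by Leonardo da Vinci",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated image
```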
These systems point toward the future of AI: flexible, reusable models that can be applied to just about any domain or industry task. IBM notes that “the next wave of AI looks to replace the task-specific models that have dominated the AI landscape to date”. The future lies in models trained on broad sets of unlabelled data that can be applied to different tasks with minimal fine-tuning.
Foundation models are not merely theoretical; real-world results have already been seen. GPT-3, BERT, and DALL·E 2 offer the first glimmers: one can input a short prompt and the system generates an entire essay, or a complex image, based on the given parameters – even if it was not explicitly trained to execute that particular task.
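The “one model, many tasks” idea is straightforward to demonstrate: the same family of pretrained checkpoints can be pointed at different tasks with no task-specific training at all. A sketch using Hugging Face’s transformers library with standard public checkpoints (BERT, and GPT-2 as a small GPT-3 precursor):

```python
# Reusing pretrained foundation-style models across tasks with zero
# task-specific training, via Hugging Face transformers pipelines.
from transformers import pipeline

# Task 1: fill in a masked word with BERT
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("AI will [MASK] the productivity of knowledge workers.")[0]["token_str"])

# Task 2: open-ended text generation with GPT-2
generate = pipeline("text-generation", model="gpt2")
print(generate("The next wave of AI looks to", max_new_tokens=20)[0]["generated_text"])
```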
GPT-3, for example, creates articles, poetry, stories, news reports, and dialogue from just a small amount of input text, producing large volumes of quality copy. The model has over 175 billion parameters. “Others have found that GPT-3 can generate any kind of text, including guitar tabs or computer code.
For example, by tweaking GPT-3 so that it produced HTML rather than natural language, web developer Sharif Shameem showed that he could make it create web-page layouts”. This new partnership, though overlooked by many, shows the innovation under way at Palantir: eventually, through the use of these OpenAI language models in Foundry, users with no coding experience will be able to speak to the AI and have code written for them, or write a request in natural language and have it translated into code.
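Palantir’s Foundry integration itself is not public, so the sketch below illustrates only the underlying pattern: a GPT-3-family completions model translating a plain-English request into code. The model name, prompt format, and choice of SQL as the target language are all assumptions.

```python
# A hedged sketch of natural-language-to-code, the pattern described above.
# The Foundry integration is not public; this is illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

request = "Show the average order value per region for the last quarter."
response = openai.Completion.create(
    model="text-davinci-002",  # GPT-3-family model; name is illustrative
    prompt=f"Translate the request into a SQL query.\nRequest: {request}\nSQL:",
    max_tokens=128,
    temperature=0,
)
print(response["choices"][0]["text"].strip())  # the generated SQL
```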
Undoubtedly, there will be certain tasks in which AI replaces humans entirely.
McKinsey’s mid-point scenario suggests that AI could automate 15% of labor tasks by 2030. According to our research, however, the displacement could be much higher for two reasons: technological advancement and the adoption curve.
The current slope of AI advancement is steep because innovations in hardware, training methods, and neural network architecture are compounding to accelerate progress beyond Moore’s Law.
OpenAI, DeepMind, and other organizations have demonstrated that AI models should be able to achieve near human-level proficiency in many narrow knowledge worker tasks.
Lowering AI Costs:
Our research also suggests that AI training costs are dropping at a rate of ~60% per year, potentially breaking down barriers to many exciting large model projects.[3]
In our view, higher allocations of human and financial capital to AI projects will continue to accelerate the rate of innovation, as shown below.
The cost to train an artificial intelligence (AI) system is improving at 50x the pace of Moore’s Law. For many use cases, the cost to run an AI inference system has collapsed to almost nil. After just five years of development, deep learning – the modern incarnation of AI – seems to have reached a tipping point in both cost and performance, paving the way for widespread adoption over the next decade.
During the past ten years, the computing resources devoted to AI training models have exploded. After doubling every two years from 1960 to 2010, AI compute complexity has soared 10x every year, as shown below.
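Put side by side, the two regimes diverge almost immediately:

```python
# Cumulative compute growth: doubling every two years (the 1960-2010 regime)
# versus 10x every year (the regime described above).
for years in (2, 4, 6, 10):
    doubling = 2 ** (years / 2)  # Moore's-Law-style doubling every two years
    tenfold = 10 ** years        # 10x every year
    print(f"{years:>2} yrs: {doubling:>6,.0f}x vs {tenfold:>14,.0f}x")
```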
Just as important, AI training costs have dropped roughly 10x every year. In 2017, for example, the cost to train an image recognition network like ResNet-50 on a public cloud was ~$1,000. In 2019, the cost dropped to ~$10, as shown below.
At the current rate of improvement, the cost should fall to $1 by the end of this year.[2] The cost of inference—running a trained neural network in production—has dropped even more precipitously. During the past two years, for example, the cost to classify one billion images has fallen from $10,000 to just $0.03.
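The annual rates of decline implied by those endpoints can be checked in a few lines:

```python
# Annual cost-decline rates implied by the endpoints quoted above.

def annual_decline(start, end, years):
    """Fraction by which cost falls each year, given two endpoints."""
    return 1 - (end / start) ** (1 / years)

# Training: ResNet-50 on a public cloud, ~$1,000 (2017) -> ~$10 (2019)
print(f"Training:  {annual_decline(1_000, 10, 2):.0%}/yr")     # 90%/yr, i.e. 10x

# Inference: classifying one billion images, $10,000 -> $0.03 over two years
print(f"Inference: {annual_decline(10_000, 0.03, 2):.1%}/yr")  # ~99.8%/yr
```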
Breakthroughs in both hardware and software have enabled these cost declines. In the past three years, chip and system design have evolved to add dedicated hardware for deep learning, resulting in a 16x performance improvement, as shown in the left chart below.
Holding hardware improvements constant, newer versions of TensorFlow and PyTorch AI frameworks in concert with novel training methods combine to generate an 8x performance gain.
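Because the hardware and software gains are largely independent, they compound multiplicatively:

```python
# Hardware and software gains compound multiplicatively.
hardware_gain = 16  # dedicated deep-learning hardware (chip and system design)
software_gain = 8   # newer TensorFlow/PyTorch releases plus novel training methods
print(f"Combined performance gain: {hardware_gain * software_gain}x")  # 128x
```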
Curiously, these AI training cost declines have not been a function of falling chip prices. The price of Nvidia’s data center GPU, for example, has tripled over the last three generations. In fact, Amazon Web Services has not lowered the price of Nvidia’s V100 GPU instances since it introduced them in 2017.
Competition from independent and hyperscale AI chip designs could erode Nvidia’s pricing power but, so far, no company has fielded a chip comparable to Nvidia’s V100 GPU with the same breadth of software and developer support.
While thus far AI has added roughly $1 trillion to the global equity market cap, it is poised to scale to $30 trillion by 2037, becoming the first foundational technology to dwarf the internet.
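Scaling from roughly $1 trillion to $30 trillion by 2037 implies a compound annual growth rate of about 25%, assuming a 2022 baseline:

```python
# CAGR implied by AI equity market cap growing from ~$1T (assumed 2022
# baseline) to $30T by 2037.
start, end, years = 1e12, 30e12, 2037 - 2022
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~25.4%
```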