Politics & World Order:
AI and ML technologies have already played a fundamental role in politics and foreign policy. Generally speaking, deterrence is a positive factor for world peace and stability; over the past 50 years, this notion of deterrence has been key to ensuring stability and harmonisation across the globe.
However, technologies now in early development are shifting what conflict will look like. These technologies are centred primarily on AI and ML capabilities, which are intersectional and therefore play crucial roles in cybersecurity and warfare, to name just two.
When comparing the prior paradigm of conflict to future conflicts, a clear shift is apparent. Conflict used to be purely physical; we are now moving towards a state in which conflicts are fought, and sometimes resolved, digitally before they turn physical.
The greatest macro shift is that conflict is moving from physical to digital. The greatest micro shift is the race to possess the best AI technology: the nations with the strongest AI capabilities will gain the greatest contributor of power in terms of defence and deterrence.
On the geopolitical plane, we are entering a power-competitive world in which the world order is less clear, or perhaps shifting. For leading nations, being good at technology is fundamental, because AI has compounding effects on every other technology a nation could develop.
As for the common analogies between AI and nuclear weaponry: AI is a multifaceted technology, more analogous to the internet or to software than to one specific weapon. The power it confers enables far better – perhaps 10X better – decision making, and thus generates huge influence and dominance.
Public & Private Sector:
AI has thus far been developed primarily in the private sector, alongside significant research achieved in China. China has rapidly integrated that research through technological adoption – for example, facial recognition. In the US, by contrast, there has not been a clear relationship between the private and public sectors regarding AI.
The level of public-private partnership on AI in the US is not on the same level as that seen within China.
Policy leaders and decision makers must clearly identify and lay out the methods currently being used, and compare them against what competitors are doing. This is key to success.
Regarding broad-scale AI, 3 critical components are apparent:
- Compute and computational power
- Data and data scale
- Talent
These three vectors fundamentally matter: the power of AI systems scales along these three axes.
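As a purely illustrative sketch (not from the source), this kind of scaling is often modelled as a power law: capability improves smoothly as compute and data grow, scaled by team quality. The function name and exponents below are entirely hypothetical.

```python
def model_performance(compute, data, talent_factor=1.0, alpha=0.3, beta=0.3):
    """Toy power-law sketch: capability grows with compute and data,
    scaled by the quality of the team. The exponents are illustrative
    placeholders, not empirical scaling-law values."""
    return talent_factor * (compute ** alpha) * (data ** beta)

# Doubling compute (or data) yields a smooth, compounding improvement.
baseline = model_performance(1e6, 1e9)
more_compute = model_performance(2e6, 1e9)
```

The point of the sketch is only that all three vectors enter multiplicatively: starving any one of them caps the whole system.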
Fascinatingly, on the talent front, companies such as OpenAI have only a few hundred employees – 250, to be specific. This is a small number relative to their impact.
Talent within AI therefore matters a great deal.
General Misconceptions On AI:
The intuitive belief is that the things that are easy for humans to do will be the areas in which AI easily succeeds. This is not necessarily the case. It is likely to be a very long time before society creates robots that can perform simple tasks such as:
- Folding clothes
- Washing dishes
Yet today, we have AI that can perform copywriting better than most individuals.
In a general sense, one way to think about the impact of AI is this: the ability to scale repetitive human tasks.
Regarding humans versus algorithms, it boils down to data availability: where one can find large pools of digital data for algorithms to learn from. Either these pools of data come from the past, or they can easily be collected in the future.
These are the problems algorithms can learn easily – namely, tasks for which data pools have been, or can be, easily collected.
Conversely, in areas where data is hard or expensive to collect, or where little data exists on a topic, automation will come last.
For example, the secret behind language models is two decades of human data generated through use of the internet – that is the pool of digital data used. For home robots, by contrast, there is little-to-no data on folding towels or cooking.
Topics with little data, or with hard-to-capture data – the data that would allow an algorithm to understand and perform the task – are the hard hills we must climb.
It is likely that knowledge workers – those in coding or other "complex" roles – will be more susceptible to initial displacement, whereas blue-collar labour will not be automated first. This comes down solely to the ability to capture data.
Stages Of Building Models:
Fundamentally, everything starts with data. Data is to algorithms what ingredients are to a dish. Data is the new code.
Comparing traditional software against AI software, a clear difference is apparent. In traditional software, the lifeblood is the code; in AI software, the lifeblood is the data. This is a major difference that has emerged recently.
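To make the contrast concrete, here is a minimal, purely illustrative sketch (the function names and the trivial "learner" are my own invention): in traditional software the behaviour is written by hand as rules, whereas in AI software the behaviour is induced from labelled data.

```python
# Traditional software: the behaviour lives in hand-written rules (code).
def is_spam_rules(text):
    return "free money" in text.lower()

# AI software: the behaviour lives in the data the model learns from.
def train_spam_model(examples):
    """examples: list of (text, label) pairs. A deliberately trivial
    keyword learner -- illustrative of 'data as the source of behaviour',
    not a real spam filter."""
    spam_words = set()
    for text, label in examples:
        if label == "spam":
            spam_words.update(text.lower().split())
    def predict(text):
        return any(word in spam_words for word in text.lower().split())
    return predict
```

Changing the rules of the first function requires editing code; changing the behaviour of the second requires only supplying different data.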
Lifecycle for algorithms:
- Collection of raw data
- Annotation (conversion of unstructured to structured – via labelling)
- Training process (algorithms look through data, and learn)
- Production (running on real world data, and these algorithms produce predictions)
This is not a one-off process, but a loop.
The critical practice is constantly replenishing the data and going through the cycle again. This is what creates very high quality algorithms.
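The lifecycle above can be sketched as a loop; the function names and structure here are my own illustration of the cycle described, not an implementation from the source.

```python
def data_flywheel(raw_source, label_fn, train_fn, deploy_fn, cycles=3):
    """Sketch of the collect -> annotate -> train -> production loop.
    Each cycle grows the dataset, retrains, and redeploys, so the
    algorithm improves as the loop is replenished."""
    model = None
    dataset = []
    for _ in range(cycles):
        raw = raw_source()                           # 1. collect raw data
        labelled = [(x, label_fn(x)) for x in raw]   # 2. annotate (unstructured -> structured)
        dataset.extend(labelled)
        model = train_fn(dataset)                    # 3. train on all data collected so far
        deploy_fn(model)                             # 4. production: predictions on real-world data
    return model
```

With toy stand-ins for each stage, one can see the dataset (and hence the model) grow on every pass through the loop.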
Code VS Data:
When one looks at a range of high performing algorithms across domains, such as:
- Speech recognition
- Image recognition
- Summary of text
Under the hood, these all use the same codebase.
Thus, a major shift has happened.
Code has become, more or less, a commodity. What enables differentiation is the data – the datasets that power these algorithms.
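As an illustration of "same code, different data" (the trainer and the datasets below are invented for the example): a single generic nearest-centroid trainer is identical whether the feature vectors came from audio, images, or text. Only the dataset changes.

```python
def train_nearest_centroid(dataset):
    """Generic training code: one codebase, any domain.
    dataset: list of (feature_vector, label) pairs."""
    centroids, counts = {}, {}
    for features, label in dataset:
        if label not in centroids:
            centroids[label] = [0.0] * len(features)
            counts[label] = 0
        centroids[label] = [c + f for c, f in zip(centroids[label], features)]
        counts[label] += 1
    for label in centroids:
        centroids[label] = [c / counts[label] for c in centroids[label]]

    def predict(features):
        # Nearest centroid by squared Euclidean distance.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda l: dist(centroids[l], features))
    return predict
```

The same function, fed audio-derived vectors, yields a speech classifier; fed image-derived vectors, an image classifier. The differentiation sits entirely in the data.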
For companies, a strategic asset is something that enables differentiation against competitors. As more and more software shifts towards AI software, the vector of differentiation shifts from code to data, and to the data one has access to.
The strategic differentiator is existing data, in conjunction with the engine one uses to produce new and insightful data that powers core algorithms.
In the future, there will be algorithms built around a range of novel activities, including:
- Customer recommendations
- Economic transactions
The fundamental physics of what the "best" business looks like will change.
The truth is, the magic of software is the ability to collect datasets in a coordinated manner and to build data tooling on top of that data, giving these systems near-infinite scalability.
This has allowed SaaS to produce value; however, these forms of "alchemy" have caps on the value they can create.
With AI, one can take repetitive tasks and automate them.
Fundamentally, most large companies spend a great deal of cash on people doing repetitive tasks. The magic created through AI is not solely the automation of that work, but the ability to go further – to do more within these tasks than many humans could.
In terms of the S-curve for software, the S-curve of conventional software is clearly in its mature phase, whereas a new, youthful S-curve is apparent in the productisation of AI systems. Before long, there will be massive new use-case generation via AI within organisations – an increase more impactful than conventional software has been in the past.
The business value generated by AI systems will be more than 10X the value achieved via deployment of a CRM or ERP system.