Chinese company DeepSeek has taken the AI world by storm with its recent unveiling of cutting-edge large language models (LLMs), specifically DeepSeek-V3 and its reasoning-focused variant DeepSeek-R1. These open-source models demonstrate performance comparable to leading competitors (Figure A) at roughly 1/10th the training cost and significantly lower inference cost. In simple terms, training cost refers to the expenses incurred in developing and fine-tuning an AI model, including the computational resources, data preparation, and model optimisation required to build it. Inference cost, on the other hand, refers to the ongoing cost of running the trained model to make predictions or decisions, for example when ChatGPT answers users’ queries.
By releasing its models under an open-source license, DeepSeek allows other organisations to replicate and build upon its work. This democratises access to advanced AI technologies, enabling a broader range of entities to develop and deploy AI solutions at a much lower cost. The development raises fundamental questions about the economics of AI: While improved efficiency could accelerate AI adoption, it challenges assumptions about near-term semiconductor demand for training capex, cloud infrastructure spending, and the distribution of value across the technology landscape.
In this article, we examine DeepSeek’s key technical innovations, explain how they achieved frontier model performance despite US chip export restrictions, and analyse the broader implications for the technology industry.
Figure A: Benchmark Performance of DeepSeek-R1
Source: DeepSeek-R1 Technical Report
Market reaction and initial impact
The US stock market awakened to DeepSeek’s disruption in the third week of January. That coincided with the release of DeepSeek-R1 (on 20 January 2025). The model, released under the MIT licence along with its source code, demonstrated exceptional capabilities in reasoning, mathematics, and coding tasks, matching the performance of OpenAI’s o1 “reasoning” model at a fraction of the cost. DeepSeek complemented this release with a web interface for free access and launched an iOS application that quickly reached the top of the App Store charts.
Given the US export ban on top-end GPUs to China, which many argued was necessary for the US to maintain its lead in the AI race, the market was shocked to see a China-based frontier model rival the capability of US-based models! By 27 January, as news of DeepSeek’s breakthrough gained widespread attention, technology stocks experienced sharp declines, with Nvidia falling 16%, Oracle 12% and smaller AI infrastructure stocks declining 20-30% in a single trading session!
While R1’s release finally got the market’s attention, the foundation was laid a month earlier when DeepSeek-V3 was released around Christmas. On 26 December 2024, Andrej Karpathy (formerly Director of AI at Tesla and founding member of OpenAI) highlighted the remarkable achievement: DeepSeek-V3 had reached frontier-grade capabilities using just 2,048 GPUs over two months – a task that traditionally required clusters of 16,000+ GPUs (Figure B). This efficiency gain fundamentally challenged assumptions regarding the computational resources required for developing advanced AI models.
Figure B: Andrej Karpathy on DeepSeek-V3

Source: X
Technical innovations
DeepSeek’s efficiency gains arise from several key architectural innovations that fundamentally rethink how large language models process and generate text.
At its core, DeepSeek employs a Mixture-of-Experts (MoE) approach – similar to relying on the most relevant specialist instead of the entire team of doctors when treating a patient. By selectively activating only 37B of 671B parameters for each piece of text (token), the system achieves remarkable efficiency gains. For perspective, Meta’s Llama 3 405B required 30.8M GPU-hours, while DeepSeek-V3 used just 2.8M GPU-hours – an 11x efficiency gain. At an assumed cost of US$4 per GPU hour, this translates to roughly US$11.2 million in training costs versus US$123.2 million for Llama 3 405B.
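To make the arithmetic concrete, here is a minimal sketch reproducing the comparison above; the US$4 per GPU-hour rate and the GPU-hour figures are those quoted in the text, and the activation share simply divides active by total parameters.

```python
# Back-of-the-envelope training-cost comparison using the figures quoted above.
# The US$4/GPU-hour rate is the same assumption used in the text.
GPU_HOUR_COST_USD = 4.0

models = {
    "DeepSeek-V3 (MoE)": {"gpu_hours": 2.8e6, "total_params_b": 671, "active_params_b": 37},
    "Llama 3 405B (dense)": {"gpu_hours": 30.8e6, "total_params_b": 405, "active_params_b": 405},
}

for name, m in models.items():
    cost_musd = m["gpu_hours"] * GPU_HOUR_COST_USD / 1e6
    active_share = m["active_params_b"] / m["total_params_b"]
    print(f"{name}: ~US${cost_musd:.1f}M training cost, "
          f"{active_share:.1%} of parameters active per token")
```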
There are several technical innovations worth calling out that focus on efficiency, including:
- Multi-head Latent Attention (MLA): MLA compresses the information required in the “attention” mechanism through low-rank projections. MLA is analogous to a very good summary of a large document. This technique drastically reduces memory demands during inference while retaining the full performance of standard multi-head attention.
- Multi-Token Prediction (MTP): Rather than generating one word (token) at a time, MTP attempts to predict multiple future words simultaneously. This achieves an 85-90% success rate in predicting upcoming tokens, resulting in 1.8x faster text generation.
- Auxiliary-loss-free load balancing: Traditional MoE architectures struggle to balance workload across the “experts” – akin to a medical emergency department where some doctors are overworked while others sit idle! DeepSeek-V3 introduced a novel “dynamic bias adjustment” mechanism that ensures balanced expert workloads without compromising performance, achieving 90% expert utilisation (a simplified toy sketch follows this list).
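To give a flavour of how such bias-based balancing can work, here is a deliberately simplified toy sketch – our own illustration, not DeepSeek’s implementation – in which each expert carries a bias that is nudged up when it is underused and down when it is overloaded:

```python
import numpy as np

# Toy illustration of bias-based load balancing for MoE routing: random numbers
# stand in for router logits, and the bias steers future routing decisions
# without adding an auxiliary loss term to the training objective.
rng = np.random.default_rng(0)
num_experts, top_k, bias_step = 8, 2, 0.01
bias = np.zeros(num_experts)

for step in range(1000):
    scores = rng.normal(size=(256, num_experts))             # stand-in for router logits
    chosen = np.argsort(scores + bias, axis=1)[:, -top_k:]   # top-k experts per token
    load = np.bincount(chosen.ravel(), minlength=num_experts)
    target = load.mean()
    bias += bias_step * np.sign(target - load)               # boost underloaded, damp overloaded

print("expert load after balancing:", load)
```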
Prior to DeepSeek’s arrival, many of these architectural and design choices were already known in the research community. For instance, it was well understood that the MoE approach yields 3x to 7x efficiency gains compared to dense models. Nonetheless, DeepSeek managed to push the boundaries of efficiency even further without compromising model quality. By building its models efficiently without relying on state-of-the-art GPUs, the company demonstrated that necessity is the mother of invention.
Reinforcement learning breakthrough
DeepSeek’s R1 technical paper reveals a significant step change in how reasoning capabilities can be developed in LLMs. The widely accepted standard had been to use supervised learning with human-curated datasets followed by reinforcement learning with human feedback (RLHF). DeepSeek, however, showed that sophisticated reasoning can emerge primarily through reinforcement learning alone. In its technical report, DeepSeek notes that its work is “the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT [Supervised Fine-Tuning].”
The significance of this approach lies in its ability to transcend the limitations of human-demonstrated problem-solving patterns. Traditional supervised learning methods can only replicate reasoning strategies present in the training data. In contrast, DeepSeek’s reinforcement learning approach allows the model to discover novel problem-solving strategies through systematic exploration. This is analogous to the difference between learning chess by studying grandmaster games versus learning by playing millions of games and discovering new strategies independently.
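As a flavour of how reinforcement learning can reward reasoning on verifiable problems, here is a toy reward function loosely in the spirit of the rule-based accuracy and format rewards described in the R1 report; the tag names and reward values are illustrative assumptions, not DeepSeek’s exact scheme.

```python
import re

def reasoning_reward(model_output: str, reference_answer: str) -> float:
    """Toy rule-based reward: a small bonus for following the output format,
    plus a larger reward when the final answer matches the reference."""
    match = re.search(r"<answer>(.*?)</answer>", model_output, re.DOTALL)
    if match is None:
        return 0.0                      # no parseable answer: no reward
    format_reward = 0.1                 # illustrative bonus for well-formed output
    correct = match.group(1).strip() == reference_answer.strip()
    return format_reward + (1.0 if correct else 0.0)

print(reasoning_reward("<think>2+2=4</think><answer>4</answer>", "4"))  # 1.1
```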
Overcoming the US chip ban
DeepSeek’s technical reports also shed light on how the company navigated the US export ban on top-end GPUs like the Nvidia H100. The H800 GPUs (legally) available to DeepSeek come with significantly reduced NVLink bandwidth and double-precision computing capabilities. To overcome these limitations, DeepSeek implemented optimisations such as:
- Restricting token processing to groups of 4 GPUs, thus minimising data-transfer bottlenecks.
- Developing techniques to handle internal (NVLink) and external (InfiniBand) communication concurrently.
- Implementing FP8 mixed-precision training, halving memory requirements compared to traditional approaches (a rough illustration follows this list).
- Developing custom kernel software for efficient local expert forwarding.
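As a rough illustration of why the FP8 point matters, the sketch below compares the raw weight-storage footprint of a 671B-parameter model at different precisions; it ignores activations, gradients, and optimiser state, which real training must also store.

```python
# Rough memory footprint of raw model weights at different numeric precisions.
# 671B parameters is DeepSeek-V3's published size; this is a simplification.
params = 671e9
bytes_per_value = {"FP32": 4, "BF16/FP16": 2, "FP8": 1}

for fmt, nbytes in bytes_per_value.items():
    print(f"{fmt}: ~{params * nbytes / 1e9:.0f} GB of weight storage")
```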
Investment implications
DeepSeek’s innovations will potentially upend how value is created and captured in the AI industry. In this section, we consider some of the potential ramifications of DeepSeek’s innovations, cost efficiency, and release under an MIT license.
At a high level, DeepSeek’s innovations and cost efficiency are likely to force a rethink of the ravenous appetite for high-end AI accelerators. However, technology history suggests that improved efficiency often leads to increased total resource consumption – a phenomenon known as the Jevons paradox. As barriers to entry fall, more organisations can experiment with and deploy AI solutions, potentially driving higher aggregate demand for AI infrastructure.
Naturally, for semiconductor companies, especially those focused on AI acceleration, these trends present both challenges and opportunities. DeepSeek’s ability to produce a top-grade frontier LLM without access to NVIDIA’s high-end GPUs suggests that some companies might have been complacent with respect to engineering and architectural optimisations, and have instead chosen to use a sledgehammer capex approach to develop increasingly sophisticated models. It is possible that DeepSeek’s success will cause some rethink on this front, which might in turn moderate high-end GPU demand for training capex. This means that companies might be able to do more with their existing GPUs and their useful lives might even be extended. However, the fundamental dynamics of AI development remain compelling: As long as scaling laws hold – where model performance improves as a power-law function of size, dataset, and compute resources – we can expect compute-hungry AI algorithms to require ever more processing power to solve increasingly challenging problems. Moreover, as AI models become more widely available, the demand for inference-optimised semiconductors (inference capex) is likely to grow substantially.
Major cloud providers’ current capital expenditure plans suggest confidence in this longer-term vision despite efficiency gains. Microsoft has reiterated its US$80 billion capex commitment, while Alphabet projects approximately US$75 billion in CapEx for 2025, up significantly from US$52.5 billion in 2024. Amazon expects to maintain its Q4 2024 quarterly investment rate of US$26.3 billion through 2025. While it is potentially too early to see strategic shifts in response to DeepSeek’s innovations, these investment levels suggest hyperscalers anticipate growing demand for AI compute, and particularly inference workloads. Notably, hyperscalers are positioning themselves as model-agnostic platform providers – evidenced by Microsoft making DeepSeek’s R1 available on GitHub and Azure AI Foundry despite its close partnership with OpenAI, and Amazon integrating DeepSeek R1 into its Bedrock and SageMaker platforms.
This evolution signals a broader shift in competitive advantage from raw model capabilities towards proprietary data assets, distribution channels, and specialised applications. Enterprise software businesses, which typically focus on building applications around models rather than developing models themselves, stand to benefit from the widespread availability of high-quality open-source foundation models. As these models improve and become more accessible, competitive advantage will increasingly derive from unique data assets and distribution channels. Enterprises with specialised data repositories, such as Salesforce (CRM data) or Bloomberg (financial data), could develop highly targeted AI agents that leverage their proprietary data advantages.
The platform landscape presents varying implications for different players. Meta appears well-positioned to benefit from LLM commoditisation (a strategy it has followed with its own Llama models), as it can deploy AI innovations to enhance content discovery and user engagement across its vast social networks. Alphabet presents a more complex case: while its Google search advertising model faces potential disruption from AI-powered “answer engines,” its YouTube platform and cloud business could benefit substantially from increased AI adoption and deployment.
An important question is where the inference task will actually run. DeepSeek’s approach of providing “distilled” models (1.5B–70B parameters) opens new possibilities for edge computing. These smaller models enable resource-constrained devices to leverage advanced AI capabilities while navigating practical constraints like battery life and thermal limitations. Apple’s current hybrid approach – running on-device LLMs for certain tasks while routing others to private cloud infrastructure built using Apple’s M-series silicon – could become the standard blueprint, particularly in consumer applications. This hybrid model effectively balances privacy considerations and response times with computational capabilities.
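One hedged way to picture this hybrid split is a simple router that keeps short, latency-sensitive requests on a local distilled model and sends harder ones to a larger cloud-hosted model; the function and thresholds below are purely illustrative assumptions, not Apple’s or DeepSeek’s actual logic.

```python
def route_request(prompt: str, on_device_max_tokens: int = 512) -> str:
    """Illustrative (hypothetical) router for a hybrid edge/cloud deployment:
    short, simple prompts stay on a small distilled model; longer or more
    complex ones go to a larger model running in the cloud."""
    looks_complex = len(prompt.split()) > on_device_max_tokens or "prove" in prompt.lower()
    return "cloud-hosted large model" if looks_complex else "on-device distilled model"

print(route_request("What's the weather like?"))                        # on-device distilled model
print(route_request("Prove that the square root of 2 is irrational."))  # cloud-hosted large model
```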
As the industry evolves, significant uncertainties remain regarding the distribution of economic value between frontier model developers, infrastructure providers, platform companies, and application developers. While investors will need to keep a close eye on these shifting value propositions, and the winners and losers are likely to emerge only in the fullness of time, one aspect appears increasingly clear: We are entering a fascinating phase of AI development. The improved efficiency and accessibility demonstrated by innovations like DeepSeek could democratise access to AI capabilities, potentially making businesses and consumers the ultimate beneficiaries of this technological revolution.
Conclusion
DeepSeek’s innovations are not just a technical achievement; they signal a potential restructuring of the AI industry’s competitive dynamics. By achieving frontier model performance at roughly 1/10th the traditional training cost and releasing the model under the MIT license, DeepSeek has challenged fundamental assumptions about AI development and deployment.
In this article, we have explored three key implications of this development. First, we have considered the balance between training and inference capex through the lens of the Jevons paradox, which suggests that lower barriers to entry could drive higher aggregate compute consumption over time. Second, we have considered the potential shift in value creation from raw model capabilities towards data assets, distribution platforms, and specialised applications. Finally, we have contemplated how the emergence of efficient “distilled” models enables new deployment paradigms, from edge computing to hybrid architectures, potentially expanding AI’s practical applications.
For investors, there are many nuances to consider including:
- The near-term risks versus medium to long-term opportunities for semiconductor businesses.
- The position of hyperscalers as model-agnostic compute and inference platforms.
- Democratisation of AI models and its effects on enterprise software businesses.
- The pros and cons for platform companies with access to unique data and/or strong distribution channels.
While the geopolitical implications of DeepSeek are still unfolding, the development clearly demonstrates that multiple paths to AI progress exist beyond simply scaling up computing resources, and that a path to more democratised AI models is likely feasible. We believe organisations across the spectrum will be reassessing their AI strategies in light of these developments, perhaps with their lens focused on “value addition” and “Return on Investment” rather than raw model size or computing power. In a nutshell, DeepSeek is excellent news for the users of AI: by lowering both training and inference costs, it has massively increased the return on investment (ROI) of AI.
At AlphaTarget, we invest our capital in some of the most promising disruptive businesses at the forefront of secular trends; and utilise stage analysis and other technical tools to continuously monitor our holdings and manage our investment portfolio. AlphaTarget produces cutting edge research and our subscribers gain exclusive access to information such as the holdings in our investment portfolio, our in-depth fundamental and technical analysis of each company, our portfolio management moves and details of our proprietary systematic trend following hedging strategy to reduce portfolio drawdowns.
To learn more about our research service, please visit: https://alphatarget.com/subscriptions/.
Autonomous driving technology is rapidly advancing, reshaping the landscape of transportation with its potential to enhance efficiency, convenience and safety on the roads.
The history of autonomous driving technology traces back to the mid-20th century, with early concepts appearing in the 1950s and 1960s. Initial developments were largely theoretical, but by the 1980s, more practical steps began to emerge. In 1986, Carnegie Mellon University’s NavLab project demonstrated a vehicle that could drive autonomously on highways.
The 2000s marked a significant leap with the advent of DARPA’s Grand Challenges, which spurred innovation by challenging teams to build autonomous vehicles capable of navigating complex environments. Google’s entrance into the field in 2009, with its self-driving car project (now Waymo), accelerated advancements, leveraging advanced sensors and AI to enhance safety and functionality. The 2010s brought increased industry investment, with major automotive manufacturers and tech companies investing heavily in autonomous vehicle technology.
As of 2024, self-driving vehicles, powered by sophisticated sensors, artificial intelligence, and machine learning algorithms, are progressively becoming more reliable and integrated into everyday use.
Fully autonomous Level 4 vehicles are already on the road in select regions, marking a significant milestone in the industry’s progress. Despite this achievement, technological hurdles, combined with regulatory and ethical challenges, continue to be obstacles on the road to large-scale deployment.
Levels of autonomy
The Society of Automotive Engineers (SAE) defines six levels of driving automation, from Level 0 to Level 5, each representing a different degree of autonomy in vehicles:
- SAE Level 0 (No Automation): The human driver performs all driving tasks. Any systems present are limited to warnings or momentary assistance, such as automatic emergency braking or blind-spot alerts.
- SAE Level 1 (Driver Assistance): This level includes basic automation features such as adaptive cruise control or lane-keeping assistance. While the vehicle can assist with specific tasks, the driver is still responsible for most driving functions and must remain engaged.
- SAE Level 2 (Partial Automation): Vehicles at this level can control both steering and acceleration/deceleration simultaneously. However, the driver must remain actively involved, monitor the driving environment, and be prepared to take over if necessary. Tesla’s Autopilot and Full Self Driving (Supervised) are examples of Level 2 automation.
- SAE Level 3 (Conditional Automation): At this stage, the vehicle can handle all aspects of driving in certain conditions, such as highway driving, without driver intervention. The driver must be ready to take control if the system requests, but does not need to monitor the driving environment continuously.
- SAE Level 4 (High Automation): Vehicles at Level 4 can operate autonomously within their operational design domain (ODD) – the specific conditions or areas where the vehicle is designed to operate, such as campuses, dedicated routes within a city, or even an entire city. Within its ODD, no driver is required, and some Level 4 vehicles may not even have manual controls. Waymo, operating in certain US cities, is an example of Level 4 autonomy; Tesla’s Cybercab, demonstrated within Warner Bros. Discovery’s Burbank, California studio, is another.
- SAE Level 5 (Full Automation): At the highest level, the vehicle is fully autonomous in all conditions and environments. This represents autonomous driving with no ODD restrictions. As of 2024, there are no Level 5 autonomy solutions deployed anywhere in the world, and it remains (thus far) an elusive goal.
These levels represent a continuum of increasing automation, with Level 5 signifying the ultimate goal of fully autonomous driving.
Autonomous transport technology landscape
At the core of autonomous driving technology are artificial intelligence (AI) and machine learning (ML), which enable vehicles to perceive and navigate their surroundings without human intervention. These technologies process data from sensors to understand the vehicle’s environment, make real-time decisions, and adapt to changing conditions such as varying weather or unexpected obstacles. Machine learning models are typically trained on diverse datasets to improve their ability to recognise patterns, and over time they learn from new scenarios and edge cases, enhancing their performance and reliability. Reinforcement learning also plays a crucial role in improving decision-making by simulating real-world driving scenarios and continuously optimising vehicle behaviour. This dynamic learning capability is essential for achieving the high level of flexibility and safety required for autonomous driving. As AI continues to advance, its role in ensuring safety, improving driving efficiency, and handling edge cases will become even more essential, making it a key driver of both technological innovation and long-term investment in the autonomous vehicle sector.
In addition to AI, autonomous driving hardware is crucial for enabling vehicles to perceive their surroundings and make informed decisions. Here is an overview of the key components:
Ultrasonic Sensors
- Function: These are used primarily for close-range detection, such as parking assistance and low-speed manoeuvring. They emit sound waves and measure the time it takes for the echoes to return, helping detect nearby objects.
- Limitations: Limited range (usually a few metres) and low resolution make them less suitable for high-speed or long-range scenarios.
Radar (Radio Detection and Ranging)
- Function: Radar systems transmit radio waves and detect objects by analysing the reflected signals. They are effective in various weather conditions and can measure the speed and distance of objects. Radar systems are often used in adaptive cruise control and collision avoidance systems.
- Limitations: Radar typically has lower resolution compared to cameras and LiDAR, making it less effective at identifying and classifying objects.
Cameras
- Function: Cameras capture high-resolution visual data, allowing the system to detect lane markings and traffic signs, and to recognise objects such as pedestrians and vehicles. Cameras are critical for tasks like lane-keeping, traffic sign recognition, and object classification.
- Limitations: Performance can be affected by poor lighting conditions, glare, and inclement weather conditions such as rain or fog. They also require sophisticated algorithms for processing and interpreting the visual data.
LiDAR (Light Detection and Ranging)
- Function: LiDAR systems emit laser beams and measure the time it takes for the light to return after hitting an object. This allows the creation of detailed 3D maps of the environment, which are crucial for precise localisation, obstacle detection, and path planning.
- Limitations: LiDAR is typically more expensive and less effective in certain conditions (e.g., heavy rain or snow) compared to other sensors. The data processing requirements are also high.
In an autonomous driving system, these hardware components are often used in combination to complement each other’s strengths and mitigate their weaknesses. This sensor fusion approach enhances the vehicle’s ability to perceive its environment accurately and operate safely under a wide range of conditions.
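As a toy illustration of sensor fusion, the sketch below combines independent distance estimates from radar, camera, and LiDAR by weighting each sensor by the inverse of its assumed measurement variance – a standard textbook approach; all numbers are invented for illustration, not real sensor specifications.

```python
# Toy sensor fusion: combine independent distance estimates by weighting each
# sensor with the inverse of its assumed measurement variance.
readings = {
    "radar":  {"distance_m": 48.2, "variance": 1.0},   # robust in bad weather, coarse
    "camera": {"distance_m": 47.1, "variance": 4.0},   # rich detail, weather-sensitive
    "lidar":  {"distance_m": 47.8, "variance": 0.25},  # precise 3D, expensive
}

weights = {name: 1.0 / r["variance"] for name, r in readings.items()}
fused = sum(weights[n] * readings[n]["distance_m"] for n in readings) / sum(weights.values())
print(f"fused distance estimate: {fused:.2f} m")
```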
Autonomous driving technology often relies on a sophisticated integration of various systems to ensure safe and efficient navigation. Many current implementations use High-definition (HD) mapping in their solution. These maps provide detailed and precise representations of roadways, including lane markings, traffic signs, and topographical features. They offer a static reference against which real-time data from sensors can be compared, enabling vehicles to understand their precise location and navigate complex environments with high accuracy. The rich detail of HD maps enhances a vehicle’s ability to anticipate road conditions and obstacles, thereby improving decision-making and safety.
Vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) communication frameworks have also been proposed to further augment the capabilities of autonomous driving systems. V2V communication allows vehicles to exchange information about their speed, direction, and position, helping them coordinate movements and avoid collisions. For instance, if one vehicle suddenly brakes, nearby vehicles can receive this information in real-time and adjust their behaviour accordingly. V2X communication extends this concept to include interactions with infrastructure elements like traffic signals and road signs, as well as other entities such as pedestrians and cyclists. As of 2024, V2V/V2X technology is primarily limited to pilot projects and specific vehicle models, rather than being a standard feature across the automotive industry.
Hurdles on the road to Full Autonomy
The path to fully autonomous (“Level 5”) driving involves a complex and incremental journey, marked by technological advancements, rigorous testing, and gradual integration into everyday transportation.
One key concept in this progression is the “rollout,” which refers to the phased introduction of autonomous driving features. Initially, these features are introduced in a limited scope, often focusing on specific driving environments such as highways or well-mapped urban areas. This phased approach allows for the refinement of technology and safety measures through real-world testing and user feedback.
The “March of Nines” is a framework used to describe the incremental safety improvements in autonomous driving technology. It measures safety levels using reliability metrics such as the number of accidents or incidents per billion miles driven. Each “nine” added to the reliability rate (e.g., 99% to 99.9%) represents a significant risk reduction, with the ultimate goal being near-zero accidents. This gradual approach helps ensure that each step forward is backed by rigorous testing and validation, mitigating risks as the technology progresses.
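To put the “March of Nines” into numbers, the sketch below translates each added nine of reliability into expected incidents per billion miles, under the simplifying assumption that reliability is measured per mile driven.

```python
# Each additional "nine" of per-mile reliability cuts expected incidents by 10x.
# Treating reliability as a per-mile success rate is a simplifying assumption.
miles = 1_000_000_000
for nines in range(2, 7):                      # 99% ... 99.9999%
    reliability = 1 - 10 ** (-nines)
    incidents = miles * (1 - reliability)
    print(f"{reliability:.6%} reliable -> ~{incidents:,.0f} incidents per billion miles")
```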
In addition to the technological hurdles towards achieving Level 5 autonomy, there are several other important considerations, which we discuss below.
Public Perception of Autonomous Driving: Public perception of autonomous driving is a mixed landscape of excitement and scepticism. Many people are enthusiastic about the potential benefits, such as reduced traffic accidents, increased mobility for the elderly and disabled, and the convenience of hands-free driving. However, there are significant concerns about safety, reliability, and trust in the technology. High-profile accidents involving autonomous vehicles have fueled doubts and fears, leading to calls for more transparency and stringent testing before widespread adoption. Building public confidence involves not only demonstrating the technology’s safety and effectiveness but also addressing these concerns through clear communication and ongoing education.
Security Considerations: Security is a critical concern for autonomous driving systems, given their reliance on complex software and data communications. Protecting vehicles from cyberattacks is paramount, as vulnerabilities could lead to unauthorised control or manipulation of the vehicle’s systems, posing severe safety risks. Robust cybersecurity measures, including encryption, secure software updates, and regular vulnerability assessments, are essential to safeguard both the vehicle’s internal systems and the communication networks they rely on. Additionally, ensuring that data privacy is maintained is crucial to prevent misuse of sensitive information collected during vehicle operation.
Ethical/Societal Implications: The rise of autonomous driving technology raises significant ethical and societal questions. Key issues include the decision-making algorithms used in critical situations – such as how a vehicle should react in unavoidable accident scenarios – and the potential impact on employment, particularly for drivers in sectors like trucking and ride-hailing. There are also concerns about equity and accessibility: will the benefits of autonomous driving be widely distributed, or will they exacerbate existing social inequalities? Addressing these ethical challenges involves creating transparent guidelines and engaging diverse stakeholders in discussions about the technology’s broader implications.
Regulatory and Legal Issues: Navigating the regulatory and legal landscape for autonomous driving is complex and evolving. Governments and regulatory bodies are tasked with developing standards and guidelines that ensure the safety and efficacy of autonomous vehicles while fostering innovation. This involves creating frameworks for testing and certification, defining liability in the event of accidents involving autonomous vehicles, and establishing data privacy regulations. The lack of uniformity in regulations across different regions can also create challenges for manufacturers and developers seeking to deploy autonomous vehicles on a global scale. Effective regulation requires collaboration between policymakers, industry leaders, and technology experts to create a balanced approach that supports technological advancement while protecting public safety and interests.
Sizing the opportunity
Autonomous vehicles will potentially impact multiple trillion-dollar markets including transportation, logistics, and urban infrastructure. This potential stems from fundamental changes to cost structures and asset utilisation across these sectors, although the early stage of the technology makes precise market sizing challenging.
Since autonomous technologies are still in the very early stages of their rollout, the estimates for total addressable market (TAM) vary significantly across forecasts. For instance, according to Fortune Business Insights, the global autonomous vehicle market was valued at just over US$1.9 trillion in 2023 and is expected to grow to US$13.6 trillion by 2030 (representing a CAGR of 32.3%)! On the other end of the spectrum, Precedence Research has a relatively conservative estimate for the Autonomous Vehicle Market at US$158.3 billion in 2023, set to grow to US$2.75 trillion by 2033 (Figure A).
Figure A: Autonomous Vehicle Market Size, 2023 to 2033
Source: Precedence Research
Although the autonomous driving TAM forecasts vary depending on the source, our work suggests that this is indeed a very large opportunity encompassing both passenger and freight mobility.
In this regard, consider just the US trucking market. According to the American Trucking Association, in 2022 trucks moved 11.4 billion tons of freight and generated more than US$940 billion in freight revenue!
Autonomous trucking software developer Aurora pegs the current cost per mile at ~US$2.27, with the human driver component being the single largest cost at US$0.97 (Figure B). The potential for autonomous technology to eliminate this cost while potentially increasing fleet utilisation through 24/7 operation capability presents a clear economic incentive for widespread adoption.
Figure B: Cost Per Mile for Trucking in the US
Source: Aurora July 2024 Investor Presentation
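Using Aurora’s figures quoted above, the following back-of-the-envelope sketch shows the driver’s share of per-mile cost and the implied cost if that line item were removed; it ignores any new costs the autonomy system itself would introduce.

```python
# Aurora's figures as quoted above: ~US$2.27 total cost per mile, of which
# ~US$0.97 is the human driver. The autonomy system's own cost is ignored here.
total_cost_per_mile = 2.27
driver_cost_per_mile = 0.97

driver_share = driver_cost_per_mile / total_cost_per_mile
cost_without_driver = total_cost_per_mile - driver_cost_per_mile
print(f"driver share of cost: {driver_share:.0%}")                     # ~43%
print(f"cost per mile without driver: US${cost_without_driver:.2f}")   # ~US$1.30
```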
Similarly, ride-hailing giant Uber’s financials also demonstrate the latent potential of autonomy. In Q3 2024, Uber’s mobility gross bookings were US$21 billion. After paying drivers, the company’s revenue for the quarter was US$6.4 billion. In other words, approximately 70% of gross bookings went to driver-related costs (wages, vehicle costs, fuel, etc.). As with the trucking statistics above, the single largest cost factor in ride-hailing is driver-related.
According to various estimates, the average cost (paid by riders) per mile is in the US$1 to US$3 range. Autonomous vehicle operators are currently targeting costs below US$1 per mile. At the unveiling of Tesla’s Cybercab (the no-pedal, two-seat specialised robotaxi), Musk indicated that the Cybercab at scale will probably cost US$0.30 to US$0.40 per mile (inclusive of taxes), which is well under the US$1 cost for a city bus.
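A similar sanity check on the ride-hailing figures above: the driver-related share of Uber’s mobility gross bookings, alongside the quoted per-mile price points (the Cybercab figure is Musk’s stated at-scale target, not a realised cost).

```python
# Uber Q3 2024 figures as quoted above (US$ billions).
mobility_gross_bookings = 21.0
uber_revenue = 6.4
driver_related_share = (mobility_gross_bookings - uber_revenue) / mobility_gross_bookings
print(f"driver-related share of gross bookings: ~{driver_related_share:.0%}")  # ~70%

# Quoted per-mile price points (US$): today's ride-hailing vs. the Cybercab target.
ride_hail_per_mile = (1.0, 3.0)           # typical range paid by riders
cybercab_target_per_mile = (0.30, 0.40)   # Musk's stated at-scale target
print(f"ride-hailing today: US${ride_hail_per_mile[0]:.2f}-{ride_hail_per_mile[1]:.2f} per mile")
print(f"Cybercab target:    US${cybercab_target_per_mile[0]:.2f}-{cybercab_target_per_mile[1]:.2f} per mile")
```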
Given the glaring cost advantages, it is no surprise that this is a race that is attracting so much interest. Furthermore, beyond impacting the transportation and freight markets, we expect autonomy to have broader implications, including on personal vehicle ownership, insurance, and urban planning. For instance, as autonomous vehicles become more common, consumers may rely less on personal car ownership, favouring shared autonomous vehicles instead. This could impact vehicle sales, especially in urban areas.
Infrastructure may need to adapt to accommodate autonomous vehicles, including the integration of smart roads, sensors, and traffic systems. This could lead to new investments in highway systems and urban design to ensure safe interaction between human-driven and autonomous vehicles. Autonomous vehicles could reduce the need for parking spaces, particularly in urban areas, leading to repurposing of land previously allocated for parking lots and garages. This could also affect real estate values and urban density.
Autonomous driving will shift the focus of liability from human drivers to manufacturers and software providers. Insurance companies may need to develop new products to cover the risks associated with software failures, cybersecurity breaches, and product liability claims.
As autonomous vehicles are expected to reduce human error, the number of accidents could decrease, which may lower overall premiums but also reduce the volume of claims handled by insurers.
Insurers will likely need to adjust to new regulations and policies governing autonomous vehicle operation, while pricing models may shift towards covering technology-related risks rather than individual driver behaviour.
In summary, although the rise of autonomous driving will create opportunities for innovation, it will also force industries to adapt, as traditional models of transportation, logistics, infrastructure, and insurance get disrupted.
Conclusion
The autonomous vehicle market is rapidly evolving, fueled by significant investments and technological advancements that have sparked growing consumer and investor interest. Yet, substantial challenges remain, including regulatory hurdles, insurance complexities, and safety concerns, especially in regions with limited infrastructure or minimal traffic laws. These barriers are particularly pronounced in developing markets, where the absence of consistent road infrastructure and traffic regulations may slow adoption.
Despite these obstacles, the potential benefits of autonomous vehicles—from enhanced safety and cost savings to reduced congestion—continue to drive the industry forward. Key players in automotive, technology, and mobility sectors are actively shaping the future of transportation, creating fresh opportunities for innovation, growth, and investment in this high-risk, high-reward space. While this remains a nascent industry, we believe autonomous driving presents a major opportunity.
The autonomous driving industry offers a highly attractive, high-margin, recurring revenue model. Providers of autonomous driving technologies are likely to licence their software to automotive OEMs and fleet operators, creating a lucrative business through ongoing royalties and partnerships. In addition to licensing, companies are expected to own and operate their own robotaxi fleets, which will also bring in high-margin, recurring revenue. By directly managing these fleets, these companies will capture a significant share of the growing mobility-as-a-service market, generating income through ride-hailing services while minimising operational costs. This combination of technology licensing and fleet ownership positions autonomous driving companies for long-term, scalable growth and significant profitability. Following extensive research, we have invested in what we see as the most promising companies positioned to lead this transformative shift.
At AlphaTarget, we invest our capital in some of the most promising disruptive businesses at the forefront of secular trends; and utilise stage analysis and other technical tools to continuously monitor our holdings and manage our investment portfolio. AlphaTarget produces cutting-edge research and subscribers to our research gain exclusive access to information such as the holdings in our investment portfolio, our in-depth fundamental and technical analysis of each company, our portfolio management moves and details of our proprietary systematic trend following hedging strategy to reduce portfolio drawdowns.
To learn more about our research service, please visit https://alphatarget.com/subscriptions/.
Cybersecurity is an umbrella term for a broad range of technologies and IT practices designed to protect computing systems, applications, networks, and data from unauthorised access, data breaches, and attacks.
In today’s interconnected world, all businesses need a cybersecurity plan to defend against, detect, and respond to cyber threats in order to maintain the integrity and availability of digital assets. After all, the stakes are high, as even a single breach can be costly. IBM’s Cost of a Data Breach Report 2024 found that the global average cost of a data breach reached US$4.88 million in 2024, up 10% over the prior year. According to Statista, the global cost of cybercrime is expected to reach a staggering US$13.82 trillion by 2028, up from US$860 billion in 2018 (Figure A).
Figure A: The cost of cybercrime
Source: Statista
The escalation in cyberattacks can be attributed to several factors. Hostile state actors, organised criminal syndicates, and opportunistic hackers are all ramping up their activities. The exponential growth of digital infrastructure – including the proliferation of connected devices – has expanded the surface area available to malicious actors, making it easier for them to find vulnerabilities. The COVID-19 pandemic accelerated this trend, with the FBI reporting a 300% spike in cybercrime since the onset of the pandemic. According to the White House’s National Cybersecurity Strategy document, state-sponsored attacks have also become more prevalent, with China, Russia, Iran, and North Korea noted to be “aggressively using advanced cyber capabilities to pursue objectives that run counter to our interests.”
Moreover, the emergence of ransomware as a service (RaaS) has lowered the barrier to entry for cybercriminals, particularly as the broader adoption of cryptocurrencies has enabled attackers to monetise their exploits anonymously. According to the recently released Zscaler ThreatLabz 2024 Ransomware Report, ransomware attacks foiled by the Zscaler cloud increased 18% year over year globally in the 12 months ended April 2024, including a staggering 93% increase in attacks against US-based organisations (where nearly 50% of all ransomware attacks occurred).
Overall, inadequate cybersecurity measures, combined with exponential digitisation, have created a perfect storm for the proliferation of cyberattacks across all sectors of the economy. It is therefore not a surprise that robust cybersecurity solutions have evolved into a critical business imperative – one that is becoming ever more important with each passing day.
Common Cyber Threats
Next we will discuss some of the most common forms of cyber threats:
Malware
Malware (short for malicious software) is a category of programs designed to infiltrate and exploit computing devices such as laptops, smartphones, and servers. There are various types of malware:
- Viruses are self-replicating programs that spread by attaching to other files or programs, often propagating through email attachments or infected websites.
- Trojans are malicious programs disguised as legitimate software that trick users into willingly installing them on their devices; once installed, they open a backdoor for threat actors. Trojans are common amongst pirated software and illegitimate mobile apps.
- Spyware operates covertly in the background, gathering sensitive information without the user’s knowledge and potentially leading to identity theft or financial fraud.
- Ransomware encrypts files or locks computer access, demanding payment (often in cryptocurrency) for the decryption key.
One of the largest known malware attacks was the WannaCry ransomware outbreak of 2017, which affected over 200,000 computers across 150 countries and caused billions of dollars of damage. WannaCry exploited a vulnerability in Windows machines (that weren’t updated with the latest security patches) to encrypt users’ files; the attackers demanded payment in Bitcoin for decrypting the files. Among the high-profile organisations impacted was the UK’s National Health Service, where the attack caused widespread disruption to health services across the country.
Zscaler’s ThreatLabz 2024 Ransomware Report more recently highlighted a record-breaking US$75 million ransomware payment made by an organisation to the Dark Angels ransomware group – nearly double the previous highest publicly known ransomware payout – a massive windfall that will likely only encourage other bad actors to ramp up their own illicit efforts.
Distributed Denial of Service (DDoS) Attacks
DDoS attacks harness networks of compromised computers (“botnets”) to overwhelm a target service or network with the goal of making it inaccessible to its users. These attacks come in various forms such as flooding a system with requests, exploiting protocol-level vulnerabilities, or targeting specific application layer services. The primary goal of these attacks is to disrupt service availability, thus potentially causing financial and/or reputation damage to the impacted organisation.
There have been a number of high-profile DDoS attacks in recent years. In September 2017, Google was the victim of a massive attack that manipulated 180,000 web servers to send their responses to Google. This attack reached a colossal 2.54Tbps. The following year, GitHub was hit by an attack that peaked at 1.3Tbps, with perpetrators leveraging the amplification effect of a popular database caching system. In February 2020, Amazon Web Services (AWS) reported mitigating a staggering 2.3 Tbps DDoS attack, where malicious actors exploited hijacked Connection-less Lightweight Directory Access Protocol (CLDAP) web servers. These high-profile incidents underscore the evolving and ever-present nature of DDoS threats, driving continued innovation in mitigation strategies and technologies within the cybersecurity industry.
Identity-based Attacks
Identity-based attacks often exploit stolen or weak passwords to gain unauthorised access to systems. Common forms include credential stuffing, where attackers use automated tools to test large numbers of stolen username/password combinations across various websites, and password spraying, which attempts to access numerous accounts using a few commonly used passwords.
Identity attacks can have far-reaching implications. In 2012, 6.5 million hashed passwords were stolen from LinkedIn and later cracked. As users tend to reuse passwords, in 2016 Netflix observed a surge in fraudulent logins thanks to perpetrators leveraging the LinkedIn leak.
Organisations are mitigating these risks by adopting multi-factor authentication (MFA) and passwordless authentication models such as biometric logins.
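To make the MFA point concrete, here is a minimal sketch of a time-based one-time password (TOTP) check of the kind used by authenticator apps, written against the public RFC 6238 algorithm using only the Python standard library; the shared secret shown is purely illustrative.

```python
import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Illustrative shared secret (base32); in practice this is provisioned per user.
secret = "JBSWY3DPEHPK3PXP"
print("current one-time code:", totp(secret))
```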
Code Injection Attacks
Code injection attacks involve inserting malicious code into vulnerable applications to alter their function or gain unauthorised access to systems and data. Common types include SQL injection (inserting malicious SQL code into input fields), cross-site scripting (injecting malicious scripts into websites), and remote code execution. These attacks can lead to data breaches, system outages, and financial loss.
One infamous example of this type of attack was the breach experienced by Equifax – one of the world’s largest credit reporting agencies – in 2017. Attackers discovered that one of Equifax’s servers was running an unpatched version of Apache Struts software, and they leveraged the vulnerability to gain access to sensitive data of 147 million Americans. In 2019, Capital One fell victim to a major data breach affecting over 100 million customers, which was facilitated by a server-side request forgery (SSRF) attack, a form of code injection, and a weakness (at that time) in Amazon Web Services’ EC2 service infrastructure.
Secure coding practices and regular patching of IT infrastructure can go a long way toward mitigating these risks.
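To make the secure-coding point concrete, the sketch below contrasts a vulnerable string-built SQL query with a parameterised one using Python’s built-in sqlite3 module; the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL string,
# so the payload rewrites the WHERE clause and returns every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE username = '" + user_input + "'").fetchall()

# Safer: a parameterised query treats the input as a literal value, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE username = ?", (user_input,)).fetchall()

print("unsafe query returned:", unsafe)        # leaks the whole table
print("parameterised query returned:", safe)   # returns nothing
```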
Social Engineering Attacks
Social engineering attacks exploit human psychology to manipulate individuals into divulging confidential information or performing actions that compromise security. These attacks often rely on creating a false sense of trust or urgency, exploiting human tendencies rather than technical vulnerabilities.
Phishing is the most prevalent form of social engineering attacks. Phishing commonly occurs via emails, text messages, or websites that appear to be legitimate (but are not), with the victim falling for the masquerade and willingly providing sensitive information to the attacker. The goal is usually to steal credentials, financial information, or install malware on the victim’s device. While phishing attacks generally cast a wide net, “whaling” targets high-value individuals specifically.
Other common social engineering approaches include baiting and voice phishing.
As social engineering attacks become more prevalent and sophisticated, organisations will need to consider the human-in-the-loop factor when designing security measures. In particular, enterprises that intertwine technical defences with human-oriented security measures are likely going to be more successful at thwarting threat vectors than those that primarily focus on the former.
Evolution of Enterprise Cybersecurity
In the early days of the Internet, security was mostly limited to protocol design and access control. The late 1980s saw the emergence of antivirus software, which perhaps can be considered to mark the beginning of dedicated security software solutions.
The consideration of IT infrastructure security gained prominence with the invention of the World Wide Web (“Web”) and the widespread adoption of the Internet starting in the 1990s. As businesses started deploying their corporate networks, the idea of building a defensive perimeter (or “moat”) around the corporate IT infrastructure (or “castle”) started to take hold. The resulting birth of castle-and-moat security saw firewalls emerge as the bulwark of the security measures.
Initially, firewalls were packet filters between the trusted internal and untrusted external network. With cyberthreats becoming more sophisticated, the mid-1990s saw adoption of intrusion detection systems (i.e., security appliances or software that monitor network traffic for suspicious activity and policy violations). As technology matured, next-generation firewalls incorporated solutions such as deep packet inspection and intrusion detection systems.
The increase in threats and their sophistication resulted in the birth of security information and event management (SIEM) systems in the early 2000s. SIEM systems collect and analyse security related event data from across an organisation’s IT infrastructure. Typical SIEM systems include capabilities such as log management, event correlation and analytics, and automated incident response capabilities. These capabilities enable enterprise security teams to identify anomalies and deploy automated threat remediation strategies.
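As a toy illustration of the correlation capability described above, the sketch below implements the kind of simple rule a SIEM might run: several failed logins from one source IP followed shortly by a success gets flagged for review. The event format and threshold are invented for illustration.

```python
from collections import defaultdict

# Toy SIEM-style correlation rule: N failed logins from one source IP followed
# by a successful login is flagged for review.
events = [
    {"ip": "203.0.113.7", "user": "jsmith", "outcome": "failure"},
    {"ip": "203.0.113.7", "user": "jsmith", "outcome": "failure"},
    {"ip": "203.0.113.7", "user": "jsmith", "outcome": "failure"},
    {"ip": "203.0.113.7", "user": "jsmith", "outcome": "success"},
    {"ip": "198.51.100.9", "user": "adoe", "outcome": "success"},
]

FAILURE_THRESHOLD = 3
failures = defaultdict(int)
for e in events:
    if e["outcome"] == "failure":
        failures[e["ip"]] += 1
    elif failures[e["ip"]] >= FAILURE_THRESHOLD:
        print(f"ALERT: possible credential attack on {e['user']} from {e['ip']}")
```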
Then, the late 2000s saw another paradigm shift caused by the smartphone revolution and the widespread adoption of cloud computing. AWS launched its first cloud services in 2006, and the iPhone was introduced in 2007. While these technologies transformed how people lived and worked, the shift also dramatically increased the attack surface (e.g., cloud and mobile technologies introduced new apps, anytime-anywhere access, and distributed computing and storage infrastructure). These developments pushed forward the advancement of identity management solutions, the introduction of new cloud security frameworks, and the development of cloud-native security solutions and mobile device management tools.
For instance, with increasing complexity in technology stacks, we saw cloud-based identity and access management (IAM) solutions gain ground. IAM solutions help manage digital identities and user access to data, systems, and resources. They include features like single sign-on (SSO), multi-factor authentication (MFA), and privileged access management (PAM).
Moreover, with no clear delineation of the corporate perimeter in the cloud computing world, a new Zero-Trust security framework started to take hold in the 2010s. Unlike traditional perimeter-based security models, Zero-Trust assumes that threats exist both inside and outside traditional network boundaries. This approach requires all users, whether inside or outside the organisation’s network, to be continuously validated before being granted access to applications and data.
Ideas such as Cloud Access Security Brokers (CASBs) emerged, providing visibility and control over cloud applications. Cloud Security Posture Management (CSPM) tools also gained prominence as organisations focussed on assessing and managing their cloud security risks.
More recently, AI and machine learning are being leveraged to detect and respond to threats more quickly and effectively, enabling predictive security measures. To cope with the sheer volume of security events, organisations are increasingly turning to security orchestration, automation, and response (SOAR) platforms. Extended Detection and Response (XDR) has evolved as a framework for unifying endpoint, network, and cloud data to provide holistic protection and faster threat detection and response.
It is important to realise that cybersecurity is an arms race of sorts. Cybersecurity specialists are continually working to secure systems, while threat actors are always on the lookout for new, sophisticated attacks – with both sides increasingly leveraging AI and machine learning in their respective efforts. In this race, only those companies that have the tenacity to stay at the forefront of innovation can thrive over the long term.
Cybersecurity is a good business
Like enterprise software companies, cybersecurity businesses often exhibit attractive characteristics that make them compelling investment opportunities.
The market for cybersecurity is rapidly expanding, propelled by the increasing frequency and sophistication of cyber threats. This growth potential makes this sector particularly interesting to investors seeking high-growth technology exposure.
Further, modern security solutions are offered as subscription-based services, thus providing predictable and recurring revenue streams, a trait favoured by investors. In addition, cloud-based cybersecurity solutions can easily scale to meet growing customer demands without significant capital investments. In other words, modern cybersecurity businesses can be asset-light compounders.
It is also important to note that security solutions often fall in the “must have” rather than the “good to have” category. Thus, even in difficult macroeconomic conditions, cybersecurity spending remains mission critical and is likely to be cut to a lesser extent than other areas of spending.
There are many listed behemoths and several up-and-coming innovative cybersecurity players in the public markets. Enterprise software behemoth Microsoft offers a comprehensive range of security solutions, including identity and access management, unified XDR and SIEM platforms, as well as firewall and DDoS protection services. Networking giant Cisco offers a wide range of networking and security solutions, including endpoint security and cloud security, and recently bolstered its position with the completion of its acquisition of security and observability leader Splunk in 2024.
Among the security specialists, Palo Alto Networks is a veteran security vendor that has its roots in selling hardware security appliances. The company has more recently migrated to selling cloud-based security solutions.
There are also many next-generation, rapidly growing cybersecurity pure-plays to consider.
While cybersecurity offers plenty of growth potential, investors should not overlook the risks. The lucrative nature of this industry results in intense competition. Technological shifts can also cause dislocation and alter competitive dynamics. Finally, the attractive nature of the industry tends to push valuations up, which might crimp investors’ future returns.
Sizing the opportunity
The cybersecurity market is large and rapidly growing, thrust forward by factors such as digital transformation and the increasing sophistication of threats.
According to Precedence Research, the global cybersecurity market is expected to compound at 12.6% annually over their forecast period, 2024 to 2034, reaching US$878 billion by 2034 (Figure B). According to Grand View Research, IT investments in 5G, the Internet of Things (IoT), and the Bring Your Own Device (BYOD) trend are expected to significantly increase the number of endpoints, which is likely to benefit businesses focussed on cloud security solutions.
Figure B: Cybersecurity market size forecast
Source: Precedence Research
In our view, as cyber threats continue to evolve and proliferate, high-quality cybersecurity businesses with innovative solutions, scalable platforms, and efficient operational models are well-positioned to thrive. These companies are likely to be at the forefront of developing cutting-edge solutions to combat emerging cyber threats, thereby strengthening their market positions and financial performance. However, as with any rapidly evolving industry, not all cybersecurity businesses are created equal, and there will inevitably be winners and losers.
Unlike certain markets that tend to be of the “winner takes all” kind or support only a few big winners, we think cybersecurity has the opportunity to support multiple winners. This is because enterprises often adopt a layered security model, implementing a variety of security solutions for different parts of their IT infrastructure. A layered approach can reduce vulnerability as malicious actors need to breach multiple defences to wreak havoc. Further, this mindset often results in companies becoming specialists in their own chosen arena.
The dynamic threat landscape, rapid technological advancements, and shifting regulatory environments mean that some companies in this space may struggle to keep pace or fail to differentiate their offerings effectively. Therefore, investors will need to carefully evaluate potential cybersecurity investments before committing their capital, considering factors such as rate of technological innovation, adaptability to new threats, scalability of solutions (including efficiency of go-to-market strategies), and the company’s track record in protecting against breaches. After carrying out in-depth research, our firm has identified and invested in the most promising, rapidly growing cybersecurity companies in the public markets.
At AlphaTarget, we invest our capital in some of the most promising disruptive businesses at the forefront of secular trends; and utilise stage analysis and other technical tools to continuously monitor our holdings and manage our investment portfolio. AlphaTarget produces cutting edge research and those who subscribe to our research service gain exclusive access to information such as the holdings in our investment portfolio, our in-depth fundamental and technical analysis of each company, our portfolio management moves and details of our proprietary systematic trend following hedging strategy to reduce portfolio drawdowns.
To learn more about our research service, please visit https://alphatarget.com/subscriptions/.