When China’s DeepSeek first claimed that it spent less than $6 million developing its AI, Western tech firms were skeptical. On March 1, the AI start-up published cost and revenue figures, claiming a theoretical cost-profit ratio of 545% per day.
The figure comes with a caveat: actual revenue is significantly lower than this estimate implies. U.S. tech firms, by comparison, run their chatbots at far higher cost than DeepSeek’s R1 and V3 models, which lets chip suppliers sell powerful hardware to customers building AI.
DeepSeek demonstrated that it could build a chatbot despite Biden-era chip restrictions. The AI runs on Nvidia’s H800, a chip slower than Nvidia’s (NVDA) more recent AI hardware. In its cost estimate, DeepSeek assumed a rental cost of $2 per hour for each H800 chip, which works out to roughly $87,000 in total daily inference cost. By comparison, the theoretical daily revenue of the V3 and R1 models is around $560,000.
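As a rough sanity check on those figures, the sketch below back-solves the implied H800 count and the daily cost-profit ratio from the article’s rounded numbers. The flat $2-per-hour rental rate and the ratio definition (profit divided by cost) are assumptions based on the disclosure; the chip count is an estimate, not a figure DeepSeek reported.

```python
# Rough sanity check of DeepSeek's disclosed figures, using the article's rounded values.
# Assumptions: a flat $2/hour rental per H800 chip, and the cost-profit ratio defined as
# (revenue - cost) / cost. The implied chip count is an estimate, not a disclosed number.

RENTAL_PER_HOUR = 2.00      # USD per H800 chip per hour (from the article)
DAILY_COST = 87_000         # approximate total daily inference cost (USD)
DAILY_REVENUE = 560_000     # approximate theoretical daily revenue (USD)

# Implied average number of H800 chips serving inference over a 24-hour day.
implied_chips = DAILY_COST / (RENTAL_PER_HOUR * 24)

# Daily cost-profit ratio: profit relative to cost.
cost_profit_ratio = (DAILY_REVENUE - DAILY_COST) / DAILY_COST

print(f"Implied H800 chips in use: ~{implied_chips:,.0f}")
print(f"Cost-profit ratio: ~{cost_profit_ratio:.0%}")
```

With the rounded inputs this prints roughly 1,800 chips and a ratio just under 545%; the small gap from DeepSeek’s headline 545% figure comes from rounding the cost and revenue numbers.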
Risks to the Estimate
DeepSeek acknowledged that actual revenue would be lower because usage is discounted during off-peak hours. In addition, the firm is still in a user-acquisition growth phase and is offering free access to its AI for now.
Be wary of the growth prospects of GPU makers like Nvidia (NVDA). The AI field is growing increasingly crowded, and only a few AI models will survive while the rest shut down.