
Authors: Kung Chan and Xia Ri*

With Apple and Google in discussions over a major collaboration agreement to integrate Google’s Gemini artificial intelligence (AI) engine into the iPhone, the prospective partnership could reshape the landscape of the AI industry.

In the early stages, enterprises building large-scale models competed chiefly on parameters and computational power; the subsequent phase has shifted towards commercial applications. Ultimately, it is how AI is embodied in products that determines competitiveness and marks the dividing line between technological generations. South Korea has already brought such products to market. On January 17 this year, Samsung unveiled the Galaxy S24 series, billed as the world’s first AI smartphone, at its annual flagship launch event in San Jose, California. One of its standout features, Galaxy AI, offers real-time voice and text translation; Samsung says the feature currently supports 13 languages, including Chinese, Korean, English, French, German, and Japanese, going some way towards overcoming language barriers. Recognizing that Samsung’s product had set a new generational benchmark, Apple sought a partnership to avoid falling behind in a rapidly evolving landscape.

By this measure, AI development in China clearly trails the trend of productization, even as the country remains under the illusion of technological superiority. In 2023, propelled by ChatGPT, large-scale AI models in China experienced a surge in growth. Yet given the complexity of such models and the absence of a standardized evaluation method, leading enterprises boast of success while falling short in actual output capability. Many multimodal large-scale models from Chinese enterprises, such as Tencent’s Hunyuan, Baidu’s ERNIE Bot, Huawei’s Pangu, Alibaba’s Tongyi Qianwen, and 360 Zhinao, integrate multiple modalities mainly at the input stage and lack genuine multimodal content generation at the output. Even where they are applied, these models mostly serve leisure activities and simple, fault-tolerant tasks. Significant gaps persist in critical sectors and professional settings, underscoring the deficiency in Chinese AI product development and pointing to a substantial developmental lag.

Additionally, the dominance of technological monopolies outside of China underscores the gap in the country’s AI advancement. OpenAI, with a vast talent pool, substantial financial backing, and years of dedicated investment, took more than eight years to develop its breakthrough product, GPT-4, and continues to enhance its capabilities. China’s foray into large-scale model development, by contrast, began only recently and falls broadly into three categories: original large-scale models, models derived from overseas open-source projects, and assembled models. Despite the ostensible competition to create original models, many players are wary of the associated risks, resulting in a proliferation of derivative and assembled models. The vulnerability of this landscape becomes evident whenever advanced foreign open-source models restrict access, hindering Chinese enterprises’ ability to train their own. China’s AI sector also faces constraints on computational power: the NVIDIA A100 and H100 chips widely relied upon in the Chinese market are barred from sale to China under U.S. policy, leaving only the lower-performance H800 and A800 versions available for export, and limited production capacity exacerbates the shortfall.

It should also be noted that China’s unique information environment poses significant obstacles to the development of AI within the country. The “technical level” of AI largely reflects the proficiency of large-scale models, which depend on prolonged self-learning and therefore require vast amounts of information to mature and to drive advancement across industries. At present, large-scale models in China rely primarily on domestically sourced data. Web crawlers were traditionally used for collection, and open-source datasets are now also available, but despite the abundance of data held by Chinese internet companies from e-commerce, social media, search engines, and other platforms, this data lacks comprehensive diversity. Professional data services in China remain nascent, high-quality datasets tailored for AI model training are relatively scarce, and the absence of robust data protection measures in the Chinese market compounds these challenges.

While efforts are underway to address these issues, a more pressing constraint is the Chinese internet’s firewall system, which restricts access to the extensive, diverse, and open information crucial for the long-term learning and training of large-scale models. Model training and learning are therefore confined to what is available within China; information beyond its borders, though abundant, remains largely inaccessible. Without specialized, standardized, and openly accessible data for continual optimization and refinement, significant AI breakthroughs in China will be hard to achieve, and the practical application of AI within the country remains highly uncertain.

On this trajectory, Chinese smartphone brands such as Huawei and Xiaomi could well fall a technological generation behind because of AI, much as Nokia’s phones once did.

* Xia Ri is a researcher at ANBOUND.
