Seeing is believing, or is it? Artificial intelligence tests our mettle
2024-12-31
The rapid integration of artificial intelligence (AI) into our daily lives promises a future of unprecedented benefits, but it also brings a cascade of risks.
While society debates the larger question of whether the new technology will someday become more intelligent than us and take over, disturbing consequences of AI are already making themselves felt – from intelligent vehicles crippled by automakers' financial woes to deceptive deepfakes and robotic trickery.
Several recent cases highlight current controversies surrounding the rapid growth of AI and what that bodes for human society.
AI and misinformation: the threat of deepfakes
AI has ushered in the era of deepfakes, or AI-generated synthetic media, which pose a big threat to trust and authenticity in digital communications.
One example is the misuse of AI to create fake videos featuring the renowned Dr Zhang Wenhong during the COVID-19 pandemic.
Zhang, a Chinese infectious disease expert who became a household name in China during the pandemic, was shown promoting protein bars in uploaded videos, but the doctor denied it was him in the footage. He reported the fake videos to streaming platforms, and although some were removed, others emerged, according to Shanghai TV last week.
In another example, Hong Kong filmmaker and actor Raymond Wong Pak-ming issued a statement debunking deepfake videos of him promoting an ointment brand.
This proliferation of deepfakes has created a crisis of trust, challenging the long-held belief that "seeing is believing." With the emergence of advanced AI tools like OpenAI's Sora, detecting fabricated videos becomes increasingly difficult, particularly for less tech-savvy individuals, like the elderly.
Deepfakes are also exploited for financial fraud, using fabricated endorsements from celebrities and experts to lure victims into investment scams. Realistic impersonations can trick victims into divulging sensitive personal information or transferring funds. Scammers frequently target high-profile figures like Elon Musk, the world's richest man, with deepfakes.
Industry reports predict fraud losses linked to AI deepfakes will more than triple to US$40 billion in the next three years.
Most streaming platforms and government regulators struggle to keep pace with the proliferation of deepfakes that undermine the credibility of information people need and rely on.
AI and the automotive industry: a case of financial instability
The electric automotive industry, once a symbol of industrial progress, is being reshaped by AI – not always to the good. A case in point is Jidu Auto (Ji Yue), a Baidu-Geely joint venture.
Due to the company's financial difficulties, thousands of Jidu Auto owners have been left stranded, some unable to use intelligent driving and connected features. This effectively rendered their vehicles partially inoperable, posing potential safety risks.
Jidu Auto's new car, released in September, starts at 199,999 yuan (US$27,777), with an optional "advanced driver-assistance system" costing an extra 4,999 yuan.
Jidu Auto, which marketed its cars as "robot vehicles," heavily relies on advanced driver-assistance systems and other connected features.
Media reports surfaced of drivers experiencing connectivity and navigation problems, with smart driving and remote-control features rendered useless without a stable network infrastructure backing them up.
The problem was that debt-laden Jidu Auto was struggling to pay network carriers like China Mobile. Fortunately, a recent injection of capital from parent companies Baidu and Geely offered some respite, ensuring continued services for existing owners.
However, the operational snafus raised concerns about the long-term viability of AI-dependent technologies in the automotive sector.
In China's fiercely competitive automotive market, Jidu Auto's troubles are not an isolated case. Many smaller firms marketing AI-reliant autonomous driving and smart cabin features face layoffs, business closures, salary cuts and payment delays.
Jidu's case serves as a potent reminder of the vulnerabilities that emerge when technological ambition is undercut by financial instability.
Despite these challenges, the Chinese automotive AI market is projected to be valued at US$1.7 billion by 2030, more than five times its 2023 value. This growth encompasses applications like autonomous driving, advanced driver assistance, predictive maintenance, in-vehicle voice recognition and related technologies. At the same time, the entry of technology giants like Baidu, Huawei and Xiaomi into the auto market is expected to further accelerate the integration of AI into vehicles.
AI and ethical considerations: the robotic shark controversy
The ethical implications of AI were highlighted by the controversy surrounding a robotic shark exhibited at an aquarium in the southern city of Shenzhen.
The exhibit was intended to showcase the whale shark, the world's largest fish and an endangered species. But visitors, who paid substantial entry fees of 240 yuan for adults and 150 yuan for children, felt cheated by the non-living replica and reacted with outrage.
The aquarium operator cited animal protection as the reason for using a robotic shark, noting that whale sharks are not available for sale and that the aquarium paid several million yuan for the robotic facsimile.
The backlash from visitors, who demanded refunds, reflects a broader social debate about the acceptability of AI in recreating or substituting real-world experiences or in mimicking nature.
Regulation and the future of AI
These incidents underscore the dual nature of AI. While the technology offers transformative potential in areas such as health care and manufacturing, it also presents challenges and risks in areas such as cyberattacks, fraud, data privacy, energy-intensive computations, weapons development and lack of transparency. The need for public wariness and government oversight is clear.
China is currently developing a series of AI industry standards, including guidelines for large language models and AI risk assessment, to better regulate the sector, industry officials said early this month.
One key focus is the regulation of generative AI content, with proposed measures requiring streaming platforms to label generated content – potentially an effective tool in combating deepfake fraud.