Deepfake Proliferation: Safeguarding Political Integrity and Public Trust

28th January, 2024

How can political institutions and civil society prepare for the increasing inevitability of deepfake technology being used to influence elections and public opinion?

First Layer

Investigation into the Regulation of Deepfake Technology and Its Impact on Political Processes

The rapid proliferation of deepfake technology has introduced a formidable vector for misinformation, with political processes increasingly at risk of exploitation. The technology relies on Generative Adversarial Networks (GANs), which pit two machine learning models against each other: one generates content while the other evaluates it. The escalation of deepfake videos from under 15,000 in 2019 to over 50,000 by late 2020 (Deeptrace Labs) illustrates a profound potential for political manipulation as the technology grows more sophisticated.
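The two-model dynamic described above can be illustrated in miniature. The following toy sketch uses scalars standing in for neural networks, with illustrative learning rates and distributions chosen for this example; it demonstrates the adversarial training loop in which generator and discriminator update against each other, not a usable or stably converging deepfake model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_toy_gan(steps=2000, lr=0.05, seed=42):
    """Toy GAN loop: real 'data' are numbers near 4.0; the generator
    learns one parameter (where its samples cluster) and the
    discriminator one parameter (a decision threshold)."""
    rng = random.Random(seed)
    g_centre = 0.0   # generator parameter: centre of its fake samples
    d_thresh = 0.0   # discriminator parameter: "real vs fake" threshold
    for _ in range(steps):
        real = rng.gauss(4.0, 0.5)
        fake = g_centre + rng.gauss(0.0, 0.5)
        d_real = sigmoid(real - d_thresh)   # score for the real sample
        d_fake = sigmoid(fake - d_thresh)   # score for the fake sample
        # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
        d_thresh += lr * (d_fake - (1.0 - d_real))
        # Generator step: gradient ascent on log D(fake), i.e. move the
        # fakes toward where the discriminator currently says "real".
        g_centre += lr * (1.0 - d_fake)
    return g_centre, d_thresh

centre, thresh = train_toy_gan()
# The generator's samples have moved well away from their start at 0
# toward (and under adversarial pressure, past) the "real" region.
```

Even this toy exhibits the key property the text describes: each model's improvement forces the other to adapt, which is what drives generated content toward realism.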

Deepfake technology not only threatens election integrity and the core pillar of democracy - public trust - but also poses considerable national security concerns. Case in point: during the presidency of Marcos Jr. in the Philippines, a notable political influence operation leveraged deepfake technology, as reported by the South China Morning Post. This example starkly demonstrates how fabrications can intertwine with geopolitical agendas, circumvent traditional fact-checking mechanisms, and foster profitable ecosystems that compound the spread of misinformation.

Legal frameworks established by entities like China with the Administrative Provisions on Deep Synthesis for Internet Information Service (Cyberspace Administration of China, 2021), alongside Singapore’s active engagement with digital misinformation through the Protection from Online Falsehoods and Manipulation Act (POFMA), are emergent regulatory steps taken to contend with artificial content. The European Union, actively spearheading the AI Act, is setting regulatory precedents that could prove to be functional templates for other nations formulating their counter-deepfake strategies. However, the existing legal standards show a disparity in regulatory velocity and enforcement capabilities across different jurisdictions.

Advancing the detection paradigm is as critical as legislative measures. Behavioral biometric analytic tools, as referenced by Adobe Inc., serve as an essential tier in detection frameworks. Preliminary criteria for assessing their efficacy must include false positive rates, adaptability to evolving deepfake sophistication, and feasibility of integration within media and political communication workflows.
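The first of those criteria, the false positive rate, is straightforward to compute from a labelled evaluation set. A minimal sketch, with an illustrative four-clip example:

```python
def detector_metrics(labels, predictions):
    """Return (false_positive_rate, true_positive_rate) for a binary
    deepfake detector. True label means 'is a deepfake'."""
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    tn = sum(1 for y, p in zip(labels, predictions) if not y and not p)
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    return fpr, tpr

# Example: 2 authentic clips, 2 deepfakes; the detector wrongly
# flags one authentic clip.
labels      = [False, False, True, True]
predictions = [True,  False, True, True]
fpr, tpr = detector_metrics(labels, predictions)  # fpr=0.5, tpr=1.0
```

For deployment in newsrooms or electoral bodies, the false positive rate matters most: wrongly branding authentic footage as fake is itself a form of trust erosion.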

Preparing political institutions and civil society for the increasing inevitability of deepfake-based interference necessitates a multi-layered strategy. Immediate investments should emphasize research and development of detection technology, embedding cryptographic watermarking (such as the tools offered by Truepic) into media workflows to preserve content authenticity, and the careful crafting of educational campaigns aimed at enhancing public media literacy. These campaigns are pivotal because they inoculate audiences against digital manipulation, fostering a cognizant and questioning electorate capable of discerning between authentic and maliciously doctored content.

For instance, civil society can champion models like Taiwan's concerted efforts against misinformation, utilizing government-backed initiatives and cultivating an environment for indigenous tools like Cofacts, thus fostering societal resilience against disinformation. Technology companies bear the onus of hardening platforms against the replication and spread of deepfakes through content policies and algorithmic adjustments, akin to the steps taken by platforms such as Twitter and Meta to address and mitigate the spread of illegitimate content.

Furthermore, alliances between states on cybersecurity and mutual agreements on threat-intelligence sharing can enable collective mitigation. These strategic moves would support international norms against the production and dissemination of fabricated political content, strengthening the trustworthiness and reliability of shared information. A collaborative approach between international tech companies and political institutions can foster information integrity and serve as a counterbalance to threat actors seeking to compromise the information space.

Timelines for these initiatives should include the immediate commencement of strategic consultations, crystallizing into actionable policies within a 6-month timeframe, with successive benchmarking at 12 and 24 months to ensure adaptability to the evolving landscape of AI-generated content. As part of these strategies, political institutions must engage in vigorous dialogue beyond national frontiers to forge a universally coherent understanding and uniform enforcement against the ever-mutable challenge posed by deepfake technology.

Cascading impacts of remaining passive in the face of surging deepfake technologies could lead to a deterioration of political discourse and degradation of electoral integrity, thus impacting both national tranquility and international relations. The triggers for such events include rapid technological progress unmitigated by adequate governance or political will, coupled with public apathy or unawareness.

Most likely, if comprehensive and coordinated responses are not developed, civil societies can expect intensified political subversion gravitating toward states with weaker technological and legislative fortifications. The most immediate actions to be taken must encompass:

  • Rapid development and implementation of AI detection software within media and political entities

  • Standardized legal reformation following successful models to safeguard informational integrity

  • Formulation and dissemination of comprehensive media literacy programs across educational platforms

  • Coalescence of domestic and international protocols for technology firms in regulating content propagation

Recognition of the multifaceted influence of deepfake technology demands responses that are intricate, informed, and swift, anchoring decisions in established empirical insights and projected trends. As deepfakes continue to evolve, so too must the vigilance, readiness, and ingenuity of the political institutions and civil society vested in protecting the sanctity of political processes and democratic life from being undermined by the specter of digital deception.

Second Layer

Investigation into the Regulation of Deepfake Technology and Its Impact on Political Processes

With the ever-accelerating advances in deepfake technology, political institutions and civil society face unprecedented challenges in safeguarding the integrity of elections and public opinion. The arsenal of artificial intelligence, specifically Generative Adversarial Networks (GANs), has matured rapidly; a stark example is the exponential increase in deepfakes online, from under 15,000 in 2019 to over 50,000 towards the end of 2020 (Deeptrace Labs). Deepfakes' potency as a tool for political deceit became evident during Marcos Jr.'s presidency in the Philippines, where deepfake technology was instrumental in amplifying political narratives, as reported by the South China Morning Post. This incident underscores the technology's capacity to upend traditional fact-checking norms and the pressing need to galvanize legal and technological defenses.

Emerging regulatory efforts, such as China's Administrative Provisions on Deep Synthesis for Internet Information Service (Cyberspace Administration of China, 2021), signal a global awareness and proactive stance. Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) exemplifies a legislative front on misinformation, while the EU's AI Act constructs a scaffold for risk assessment and behavioral guidelines for artificial intelligence deployment.

In parallel, technological innovations in deepfake detection, leveraging behavioral biometrics, exemplified by research from entities like Adobe Inc., are advancing toward viable solutions counteracting deepfake production. These sophisticated AI models are tasked with discerning between genuine human-device interaction patterns and those indicative of AI generation, offering mechanisms to challenge the escalating verisimilitude of deepfakes.

Political institutions and civil society face the daunting task of preempting the encroachment of deepfake technology on the electoral process and public belief systems. Comprehensive education initiatives capable of fostering a critically thinking electorate are one prong in a multi-faceted approach. These initiatives, while indispensable, must also grapple with the diverse literacy levels across global populations to ensure inclusive and effective reach.

The technology sector, particularly platforms central to content dissemination such as Twitter and Meta, bears the responsibility to reinforce algorithms and content policies to weed out inauthentic content. This vigilance is paramount in supporting content integrity and fostering user trust. Strategic alliances across national and sectoral lines serve as pillars for deterring deepfake misuse, strengthening international norms against political manipulation via fakes.

In terms of implementation, a systematic rollout earmarking milestones at 6-month intervals, commencing with the immediate initiation of policy dialogues, offers a measured approach, extending to a 24-month horizon for a full-scale evaluation of enacted measures. These dialogues should culminate in actionable frameworks that not only reinforce detection and response to deepfakes but also proactively cultivate resilience within political processes and civil society discourse.

The societal tremors of disregarding the escalation of deepfakes could culminate in a corroded fabric of political dialogue, erosion of public trust, and compromised electoral integrity. Immediate action should encompass:

  • Deployment of robust AI detection landscapes within media and political frameworks.

  • Legal reforms, benchmarked against successful models for content integrity.

  • Commencement of media literacy enhancement across educational infrastructures.

  • Consolidation of global principles for regulating AI deployment and content circulation by technology corporations.

The battle against the tide of deepfake technology charges political institutions and civil society with the task of crafting nuanced, knowledgeable, and immediate strategies. Any strategy espoused must stand on empirical data and informed projections, and embrace the dynamism of artificial intelligence. As deepfakes evolve in complexity, so should the stewardship of those entrusted with the integrity of public discourse and democratic systems: ever vigilant, ever proactive, and ever innovative in their guardianship against digital duplicity.

NA Preparation

Material Facts

Deepfake technology

Deepfake creation employs Generative Adversarial Networks (GANs), which consist of two dueling neural networks. The synthetic media output has seen a significant quantitative increase, with data from Deeptrace Labs reporting fewer than 15,000 deepfake videos online in 2019, which ballooned to over 50,000 by late 2020. This figure is indicative of swift technological escalation, manifesting a tangible cybersecurity threat.

Deepfake detection methodologies

Research developments in detection include utilizing biometric indicators and deep learning techniques to identify irregular behavioral patterns, such as inconsistent head poses or improper blinking, as explored in studies from the Computer Vision Foundation. Advancements also involve audio analysis, spotting anomalies in speech patterns, which could be synthesized.
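A cue such as improper blinking can be reduced to a simple heuristic over a per-frame eye-openness signal. The thresholds and the "typical human range" below are illustrative assumptions for the sketch, not calibrated values from the cited studies:

```python
def count_blinks(eye_openness, closed_below=0.2):
    """Count open-to-closed transitions in a per-frame eye-openness
    signal (0.0 = fully closed, 1.0 = fully open)."""
    blinks, closed = 0, False
    for v in eye_openness:
        if v < closed_below and not closed:
            blinks += 1
            closed = True
        elif v >= closed_below:
            closed = False
    return blinks

def flag_abnormal_blinking(eye_openness, fps, min_bpm=8, max_bpm=30):
    """Flag a clip whose blink rate falls outside an assumed human
    range of min_bpm..max_bpm blinks per minute."""
    minutes = len(eye_openness) / fps / 60.0
    rate = count_blinks(eye_openness) / minutes if minutes else 0.0
    return not (min_bpm <= rate <= max_bpm)

# A 10-second clip with no blink at all, a pattern early deepfakes
# often showed, gets flagged.
flagged = flag_abnormal_blinking([0.9] * 10, fps=1)
```

Real detectors combine many such weak cues (head pose, blinking, audio anomalies) in learned models, since any single heuristic is easy for generators to defeat.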

Legislations against deepfakes

China, for instance, has implemented regulatory measures targeting information manipulation by online video and audio providers, as seen in the Administrative Provisions on Deep Synthesis for Internet Information Service (Cyberspace Administration of China, 2021). The legislation mandates content flagging, representing a model approach towards legal countermeasures.

Watermarking technologies

Electronic watermarking, as discussed in scholarly articles from the IEEE, involves cryptographic algorithms integrated within digital media as a marker of authenticity, which resists tampering or removal. Companies such as Truepic have commercialized watermarking tools to introduce verifiability into digital media workflows.
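The tamper-evidence property at the heart of such schemes can be sketched with a keyed hash over the media bytes. This illustrates only the verification logic: real watermarking embeds the mark inside the media signal itself and production provenance systems typically use public-key signatures, so this sketch does not represent Truepic's or any vendor's actual design:

```python
import hmac
import hashlib

def mark(media: bytes, key: bytes) -> str:
    """Compute an authenticity tag over the raw media bytes."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify(media: bytes, key: bytes, tag: str) -> bool:
    """Check the tag in constant time; any edit to the bytes fails."""
    return hmac.compare_digest(mark(media, key), tag)

key = b"device-secret"          # illustrative key held by the capture device
original = b"frame-bytes-0001"  # stand-in for real image/video data
tag = mark(original, key)

assert verify(original, key, tag)                   # untouched media passes
assert not verify(original + b"x", key, tag)        # any alteration fails
```

The design point is that verification requires no judgment about content plausibility: a single flipped byte breaks the tag, which is why embedding such marks at capture time strengthens downstream fact-checking.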

ChatGPT's implications

OpenAI has documented over 100 million users engaging with ChatGPT, making the potential for AI-enabled text generation to augment political disinformation campaigns material. Rigorous societal and legal barriers need to be established to prevent such generative models from being harnessed for malicious narrative manipulation.

Political deepfake incidents

Notably, the Philippines' political landscape witnessed an influence operation utilizing deepfake technology during the Marcos Jr. presidency, aligning with broader patterns of utilization in geopolitical contexts as detailed by the South China Morning Post. Such incidents necessitate inclusion in policy discussions to galvanize awareness and defensive strategies.

Detection paradigms

An emerging detection paradigm involves behavioral biometrics, as elucidated in research by Adobe Inc., which deploys machine learning models that analyze device interaction patterns for signs of artificial media. This contributes to a matrix of potential solutions combating the sophistication of deepfake production.
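A toy version of such interaction-pattern analysis: human input timing is jittery, while scripted interaction is often suspiciously uniform. The variance threshold below is an illustrative assumption, not a value from the Adobe research:

```python
import statistics

def looks_scripted(event_times, min_stdev=0.02):
    """Flag an interaction timeline (event timestamps in seconds)
    whose inter-event intervals are implausibly regular."""
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(intervals) < 2:
        return False  # too little data to judge
    return statistics.stdev(intervals) < min_stdev

human = [0.0, 0.21, 0.35, 0.62, 0.80]   # jittery, human-like timing
bot   = [0.0, 0.10, 0.20, 0.30, 0.40]   # metronomic, script-like timing
```

Production behavioral-biometric systems learn far richer features (pressure, trajectory, device sensors), but the underlying principle is the same: distinguishing organic interaction patterns from synthetic ones.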

Regulatory frameworks

Singapore's IMDA demonstrates a proactive engagement with AI governance models necessitated by deepfake technology advancements. The EU's regulatory response includes the AI Act, which classifies AI tools based on associated risks and provides usage guidelines, as found in the European Commission's legal texts. Such frameworks may be instructive for jurisdictions seeking to establish their regulatory systems.

Deepfake's impact on political processes

Public figures have emerged as prime targets for deepfake exploitation, with implications for the trustworthiness of media during elections and political negotiations. The rise in synthetic media necessitates robust strategic responses to ensure media authenticity and defend democratic institutions from such subversive tactics.

Political institutions' and civil society's preparedness

Strategies for confronting deepfake technology's utilization in political arenas require an amalgam of technical solutions (such as AI-based detection software), legislative efforts (like the proposed bills on misinformation), and educational campaigns fostering information literacy. This collective preparedness aims to maintain the sanctity of political processes and public discourse worldwide.

Force Catalysts

Deepfake Technology and Political Prognosis

Leadership: Navigating Deepfake Technology with Cultural Nuance and Technological Governance

Leadership in the era of deepfake technology mandates acute cultural sensitivity and astute policy-making finesse. Leaders must proactively engage with both traditional and online stakeholders to formulate cohesive responses that address the multiplicity of threats posed by deepfakes while safeguarding digital freedoms. Cross-cultural competencies become crucial, as the interpretation of leadership varies significantly across cultures, influencing public reception of policies.

Cultural Distinctions in Leadership Responses to Deepfakes

Culturally attuned leadership necessitates understanding distinct frameworks of honor, shame, and face-saving prevalent in various societies and how these affect public perception of deepfake incidents. For instance, in collectivist cultures, leaders might prioritize maintaining social harmony when addressing deepfake incidents publicly, contrasting with individualist societies that may emphasize personal accountability and transparency.

Technological Governance and Leadership Precedence

A nuanced understanding of technological governance emerges from studying the trajectories traversed by past leaders in response to burgeoning media technologies. With deepfake technology being a disruptor akin to how television reshaped public engagement, current leaders could derive strategic insights from past successes and missteps—ranging from Eisenhower's pioneering of televised presidential campaigns to the delayed responses to Cambridge Analytica’s exploitation of data privacy lapses.

Psychological Insights and Governance Styles

A leader's psychological orientation—ranging from risk-taking innovators to conservative preservers—forms an underpinning for their strategic alignment with technology governance. A comparative assessment of national leaders' stance on digital privacy and their implications on deepfake regulation can be notably observed in the varied approaches between EU's General Data Protection Regulation (GDPR) and the US's principle of Section 230 of the Communications Decency Act.

Resolve: Institutional Determination and Societal Perseverance

Resolve reflects the multifaceted tenacity of institutions and society in addressing technological malignancies such as deepfakes. It bifurcates into institutional commitment to long-term policy advancements and societal willingness to preserve democratic processes.

Legislative Commitment and Adaptive Innovation

Institutional resolve finds expression in legislation such as Singapore’s Protection from Online Falsehoods and Manipulation Act (POFMA), which represents an adaptive legislative approach, blending proactive measures and reactive enforcement to navigate the intricacies of digital misinformation.

Societal Resolve and Cultural Adaptability

The strength of societal resolve is evidenced across cultures, such as in Taiwan’s collective mobilization against disinformation through government-backed initiatives and grassroots movements, tailored to cultural contexts and historical experiences with state-led propaganda and censorship efforts.

Initiative: Proactive Policy Responses and Community Participation

Initiative surfaces in both legislative foresight and community engagement. Proactive policy-makers and citizen groups exhibit nimbleness in navigating the challenges posed by deepfake technology, creating strategies that address immediate threats while laying the groundwork for future resilience.

Legislative Anticipation and Strategic Posturing

Foreseeing potential deepfake abuse within electoral contexts has prompted nations like Brazil to consider comprehensive cyber legislation. States with strong initiatives, like California with its AB-730, tackle immediate concerns while simultaneously prioritizing research into long-term solutions, including deepfake detection tools and authentication protocols that preserve digital integrity.

Community Empowerment and Collaborative Actions

Emerging tools and platforms offer space for community initiative, as seen in open-source intelligence communities' collaborations to harness sensors, geo-temporal data, and user-generated content for the trustworthy identification of deepfake content. These grassroots mobilizations illustrate how community-driven initiative shapes the societal response to technological hazards.

Entrepreneurship: Technological Antidotes and Adaptive Market Responses

Entrepreneurial waters are charted by innovative solutions and strategic foresight. Entrepreneurs, legislators, and nonprofit organizations embrace technological challenges with creativity, leveraging opportunities to develop deepfake countermeasures that maintain trust in digital communications while fostering economic vitality and ethical accountability.

Innovation and Ethical Propulsion

In the venture against deepfakes, technological entrepreneurship is manifested through the development of blockchain infrastructure for media validation. Industry leaders in software development, such as Adobe with its Content Authenticity Initiative, exemplify how the entrepreneurial spirit intersects with ethical considerations, setting an industry standard for content veracity and transparency.

Economic Underpinning and Adaptive Ventures

The surge in deepfake detection start-ups, triggered by high-profile deepfake incidents such as the notorious 'election-meddling' cases, reflects economic and entrepreneurial adaptability. As economic stalwarts continue to invest in AI verification technologies, we perceive stark parallels with historical market adaptations, resonant with the 'Napster moment' for the recording industry.

Conclusive Strategic Imagination

Laying the future foundations against deepfake-induced perturbations necessitates a visionary strategic imagination that synthesizes historical analogies, institutional resolve, communal initiatives, and entrepreneurial vigor. As we mobilize a response encompassing legislative firmness with technological innovation, leveraging insights from diverse cultural perspectives and psycho-socio-political variations ensures comprehensive defenses against multifaceted threats in a rapidly evolving digital panorama.

Political integrity, thus, is sustained not by static defense but by dynamic and globally cognizant strategies that navigate the treacherous waters of technological abuse. With a flashlight cast upon digital authenticity, strategic planners and society at large must devise a holistic blueprint that fortifies existing institutions, empowers community watchdogs, stimulates market innovation, and, most critically, anchors democracy in the steadfast bedrock of the informational truth.

Constraints and Frictions

Given the weighty implications of deepfake technology on political processes and public opinion, it is critical to perform a granular analysis of the constraints and frictions that might influence entities' capacities to respond effectively to the imminent challenges deepfakes present.

Constraints Analysis

Technological Infrastructure

Different levels of innovation and capabilities, particularly within computational technologies, create stratification among entities. High-resource entities, often in technologically advanced nations, have a stronger infrastructure to both produce and detect high-quality deepfakes, thereby influencing global narratives. Conversely, entities in developing nations may lack both the computational power and the technical expertise necessary to create sophisticated countermeasures. This disparity could lead to unequal influence operations where less advanced nations become consumers rather than producers of deepfakes, potentially altering domestic political dynamics.

Economic Resources

Financial constraints are acute, particularly in less affluent countries, which may lack the capital to invest in advanced AI and deepfake detection tools. This not only hinders their technological development but also their ability to combat the influx of deepfakes during critical periods such as elections.

Legal and Regulatory Frameworks

Regulatory constraints emerge from a lack of harmonized global standards and disparate legal statutes. Many countries are yet to enact comprehensive policies addressing deepfake technology. Additionally, enforcement mechanisms often cannot keep pace with the rapid development of AI, resulting in a reactionary rather than a preventive stance on misinformation and its consequences.

Human Capital

Institutional capacity to understand, identify, and mitigate the risks associated with deepfakes is unevenly distributed. High-education entities with access to global talent pools might be well-prepared, in contrast with institutions in low-income countries, which may not have such specialized training.

Sociocultural Norms

Cultural dispositions toward new technologies can foster environments either resistant or susceptible to deepfakes. In cultures where skepticism of media is prominent, deepfakes may be less effective; in others, where authority and tradition hold sway, deepfakes could undermine trust on a grand scale.

Frictions Analysis

Technological Evolution

The rapid advancement of generative AI introduces friction as entities struggle to adapt. Deepfakes are becoming more sophisticated, outpacing current detection tools' effectiveness. The fluctuating state of technology creates an environment ripe for manipulation, as evidenced by both increased fidelity in impersonation attempts and enhanced capabilities for misinformation campaigns.

Global Internet Dynamics

The flow of information online is both a connector and disruptor. While global connectivity can increase awareness and facilitate international cooperation, it can inversely accelerate the spread of deepfakes across borders. This transnational diffusion is an unpredictable variable, complicating the formulation of cohesive strategies to counter the multiplicative nature of such content.

Social Media Algorithms

Content amplification mechanisms inherent in social media platforms incur frictions as biased and provocative content, including fabricated materials, reaches larger audiences. The opacity of algorithmic curation further complicates efforts to anticipate and counteract the dissemination of deepfakes.

Political Repercussions

The fabric of political discourse is sensitive to the introduction of deepfakes, which may serve to undermine political figures, manipulate electoral outcomes, and destabilize trust in public institutions. Unpredictable reactions to deepfakes by the electorate and political bodies can precipitate a cascade of unintended consequences, fomenting discord and volatility in already charged political environments.

Integrative Analysis and Iterative Developments

The current geopolitical landscape is irrefutably transformed by the digital age's offerings, with deepfake technology embodying a central role in shaping future narrative warfare. Lessons from previous electoral interference cases underscore how deepfakes could both leverage existing fractures within societies and create new divides. A retrospective examination reveals a proliferation of misinformation efforts, from the AI-generated Facebook profiles uncovered in 2019 to doctored statements affecting high-stakes political arenas, each representing unique threads of social fabric that deepfakes further unravel.

Drawing from recent elections worldwide, the probability of deepfakes being used by both state and non-state actors to sway public opinion is high. Scenarios must consider this duality of actors with differing objectives, capacities, and reach. State-sponsored deepfakes harness substantial resources and institutional support, potentially causing significant disruptions, whereas non-state actors wield ingenuity and stealth, aiming to reshape public perception or cast doubt on the integrity of political processes.

The complexity of such scenarios warrants the construction of robust feedback mechanisms. These systems entail multi-stakeholder engagements inclusive of civil societies, technologists, and policy-makers, with the objective of constructing a resilient infrastructure capable of swift identification, response, and rectification of deepfake incidents. The feedback process itself is subject to constant refinement—analyzing effectiveness, adjusting to technological evolution, including emerging AI governance and ethics standards, and incorporating public sentiment regarding political trust and media aptitude.

To fortify the analysis, stakeholders must be made aware of the nuanced limitations that directly and indirectly shape the strategic environment created by deepfakes. By dissecting these constraints and friction points with specificity, delving into the historical narratives, and accounting for the temporal dynamism these technologies exhibit, entities can engage in sophisticated foresight that anticipates and mitigates the impact of deepfakes on political processes. Through incremental policy adaptation, holistic resource allocation, and vigilant public discourse, the global community can forge a defensive posture toward a technology that exists in the tension between innovation and manipulation.

Alliances and Laws

In considering the regulation of deepfake technology and its impact on political processes, a multifaceted approach accounting for alliances and laws is essential. Alliances may include partnerships between nations for cybersecurity, agreements between tech companies and governments for information sharing and detection, as well as collaborations among civil society groups to raise awareness and educate the public.

Laws relevant to mitigating the threats posed by deepfakes in political contexts would involve a combination of domestic regulations and international frameworks that address misinformation, data privacy, cybercrimes, and election integrity. These could include laws related to:

  • Cybersecurity: Provisions that aim to secure IT infrastructures against deepfake dissemination.

  • Privacy: Regulations ensuring personal data protection may limit the unauthorized use of data in creating deepfakes.

  • Copyright and intellectual property laws: Safeguards against unauthorized use of individuals' likenesses and image rights.

  • Election laws: Stipulations designed to maintain the integrity of elections might include specific provisions against the use of deepfakes as a means of disinformation.

  • Defamation and libel standards: Laws that hold parties accountable for distributing harmful content that damages a person's reputation.

  • Transparency and disclosure requirements: Rules mandating disclosure when digital content has been altered or is synthetic in nature.

  • International treaties and conventions: Agreements between countries to jointly address the transnational nature of cyber threats, including the spread of deepfakes.

Deepfake technology's potential use for political disinformation represents a direct challenge to the trust and reliability of electoral processes, demanding adaptability and strategic deterrence through alliances and legislative responses. Regulatory frameworks must evolve to manage the risk of deepfakes while respecting the rights to free speech and innovation. The strategic deterrent aspect of alliances could support collective action against cyber threats, reinforcing international norms against their use in political manipulation.

As deepfakes become ever more sophisticated, it will be necessary to integrate these regulatory measures with proactive strategies including:

  • Investment in deepfake detection technologies, fostering a synergy between AI advancements and anti-fraud efforts.

  • Encouraging multi-factor authentication and "liveness" checks to authenticate the source and veracity of political communications.

  • Engaging in public education campaigns to promote media literacy, enabling voters to critically assess the content they encounter.

  • Developing a legal responsibility for AI companies and social media platforms to prevent the production and sharing of maliciously deceptive content.

  • Fostering international dialogue and shared best practices to counter disinformation without stifling technological progress or global innovation.

In essence, comprehensive net assessment demands that we recognize not only the alliance structures and legal frameworks in place but also the adaptive capabilities of societies and institutions in responding to these technological evolutions. Continuous scanning of the landscape for emerging trends and potential risks is key, as well as fostering alliances that cross national and sectoral boundaries to maintain strategic deterrence and preserve the integrity of political processes.

Information

- Vloggers gained access to the palace media corps during the Marcos presidency, raising concerns.

- Pro-Marcos social media pages, creators, and posts surged before elections, fostering a profitable ecosystem where engagement led to financial rewards.

- Social media content from these creators often included historical distortion and misinformation, particularly relating to the 1972-1986 dictatorship.

- Marcoses engaged in "celebrification," sharing personal stories and everyday life, which avoided traditional fact-checking.

- The Marcos family leveraged social media for influence operations beyond disinformation.

- Marcos Jr. won the presidency with over 31 million votes.

- Post-election, social media reflected continued polarization with boycott calls for Marcos-associated brands and figures.

- Social media's influence clashed with Marcos Jr.'s campaign message of unity, presenting unforeseen challenges for politicians.

- Twitter is used as a real-time communication tool and a research medium for public attitudes and open-source intelligence (OSINT).

- Twitter data aids in understanding public sentiment on various issues, correlating geographic information with events, and tracking public health trends.

- Twitter has been instrumental in identifying war crimes using crowdsourced OSINT, especially in situations like the Przewodow, Poland missile incident.

- Prior to Elon Musk's ownership, Twitter's verification system helped confirm the authenticity of public figures' accounts.
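The sentiment-analysis use of Twitter data noted above is often prototyped with simple lexicon scoring before heavier models are applied. A crude sketch follows; the word lists are illustrative assumptions, not a research-grade lexicon.

```python
# Toy lexicon-based sentiment scorer for short posts.
# The word lists are illustrative assumptions only.
POSITIVE = {"good", "great", "safe", "trust", "support"}
NEGATIVE = {"bad", "fake", "scam", "fraud", "distrust"}

def sentiment(text: str) -> int:
    """Positive score => net-positive wording; negative => net-negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("great support for the campaign"))  # → 2
print(sentiment("this fake video is a scam"))       # → -2
```

Real OSINT pipelines add negation handling, sarcasm detection, and geolocation joins, but the counting principle is the same.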

- Australian Chinese community news groups self-censor to avoid Beijing's repercussions, based on a Lowy Institute report.

- A proposed bill seeks to extend the reach of corrections to misinformation, suggesting corrections be crafted clearly and address deep-seated biases.

- The news industry's evolution, reshaped by technology, is ironically returning to an interactive and discursive state as in pre-industrial times.

- China released rules banning deepfake technologies for creating fake news, under the new Administrative Provisions on Deep Synthesis for Internet Information Service.

- In Singapore, MPs discussed the need for stronger interventions against digital scams, the risks of AI and deepfakes, and suggested more active roles for banks and government in cybersecurity.

- The adoption of generative AI in Singapore's business sector is rising but raises concerns about privacy and data security.

- Taiwan's government, led by Tsai, is actively combating disinformation and promoting information literacy among citizens through various initiatives.

- Meta's 2019 investigation found a network of fake personas on Facebook using AI-generated profile photos as part of an influence campaign.

- In the context of two ongoing wars and at least 40 countries with elections, the threat of manipulated content is high, and such tactics could undermine trust in electoral processes and democracy.

- Taiwan's president-elect accused China of disinformation and interference in the elections.

- Deepfake technology can destabilize societies by swaying public opinion and could be used in microtargeted disinformation campaigns.

- MPs in Singapore debated a motion for a whole-of-nation approach to maintain trust in digital society, with 13 demands for the government.

- Banks were encouraged to enhance protections against scams; MPs also discussed the risks of AI misuse and ways to preempt the threat posed by deepfakes.

- Suggestions included better anti-phishing solutions and vigilance against unusual transactions.

- Deepfakes are becoming more sophisticated, making them harder to detect; some proposed electronically watermarking content to verify authenticity.
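The watermarking proposal above can be approximated cryptographically: a publisher attaches a keyed tag to content, and a verifier confirms the content was not altered. Real provenance schemes (such as C2PA) use public-key signatures; this stdlib sketch uses a shared-secret HMAC for brevity, and the key and message bytes are illustrative.

```python
import hashlib
import hmac

# Illustrative shared secret; real provenance schemes use public-key signatures.
SECRET = b"publisher-signing-key"

def sign(content: bytes) -> str:
    """Produce a keyed authentication tag over the content bytes."""
    return hmac.new(SECRET, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the content."""
    return hmac.compare_digest(sign(content), tag)

clip = b"official campaign video bytes"
tag = sign(clip)
print(verify(clip, tag))                     # → True
print(verify(b"tampered video bytes", tag))  # → False
```

Any single-bit change to the content invalidates the tag, which is the property an authenticity watermark relies on.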

- Singapore's Infocomm Media Development Authority (IMDA) has been proactive in the AI field, but it's a continuous challenge to regulate evolving technologies.

- Approximately two-thirds of SMEs in Singapore are using or planning to use AI; benefits include greater productivity and time savings in tasks like writing proposals.

- AI technology raises privacy concerns as sensitive data could be exposed; companies are taking precautions like limiting uploaded information.

- Sri Lanka's lawmakers are considering a social media regulation bill with concerns about free speech suppression, and potential consequences for the IT sector investments.

- President Trump banned transactions with Alipay, WeChat Pay, and other apps over national security concerns, escalating tensions with China.

- China responded by accusing the US of bullying and hypocrisy, with promises to protect its companies; this tension follows other US bans and investment restrictions on Chinese firms.

- Singapore announces a S$30 million co-production fund to foster international media collaborations and showcase local talent globally.

- A S$25 million virtual production fund aims to develop talent and technologies in the industry; initiatives will include training and curriculum integration.

- The rise of 'killer apps' has made it possible for bad actors to create high-quality deepfakes quickly and at no cost, reducing the effectiveness of some detection tools.

- Millions of deepfakes are now online, a significant increase from fewer than 15,000 in 2019.

- Approximately 80% of companies surveyed by Regula view deepfake voice or video as real threats to their operations.

- Matthew Moynahan from OneSpan highlights the shift in cybersecurity from issues of confidentiality and availability to authenticity.

- Transmit Security's June report suggests that AI-generated deepfakes can bypass biometric security systems, such as facial recognition, and are used to create counterfeit ID documents.

- The use of chatbots mimicking trusted individuals to extract personal information for attacks has been noted.

- Behavioral biometrics is suggested as a solution by LexisNexis Risk Solutions Government Group, by analyzing user behavior on devices to flag suspicious activities.

- Start-ups like BioCatch and large companies such as LexisNexis are developing real-time user verification technology.
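The behavioral-biometrics idea above can be sketched minimally: compare a session's keystroke timing against a stored profile of the user's usual rhythm and flag large deviations. The data, threshold, and single-feature design are illustrative assumptions; commercial systems model many behavioral features at once.

```python
import statistics

def is_suspicious(profile_intervals: list, session_intervals: list,
                  z_threshold: float = 3.0) -> bool:
    """Flag a session whose mean keystroke interval deviates strongly
    from the stored profile (a simple z-score test)."""
    mu = statistics.mean(profile_intervals)
    sigma = statistics.stdev(profile_intervals)
    z = abs(statistics.mean(session_intervals) - mu) / sigma
    return z > z_threshold

# The user's usual typing rhythm, in seconds between keystrokes (illustrative).
profile = [0.21, 0.19, 0.22, 0.20, 0.18, 0.21]
print(is_suspicious(profile, [0.20, 0.21, 0.19]))  # → False (same rhythm)
print(is_suspicious(profile, [0.05, 0.04, 0.06]))  # → True  (bot-like speed)
```

The appeal of this approach is that behavior is continuously observable, so verification does not rest on a single login event that a deepfake could spoof.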

- AI-powered fraudsters could use adversarial attacks to trick fraud detection systems into classifying illegitimate activities as legitimate.
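The evasion attack described above can be illustrated on a toy linear fraud scorer: nudging each input feature against the sign of the model's weight (an FGSM-style perturbation) lowers the fraud score until a flagged transaction passes. The weights, features, and step size are made-up values for illustration.

```python
# Toy FGSM-style evasion of a linear fraud scorer.
# Weights, features, bias, and epsilon are illustrative assumptions.

def sgn(v: float) -> int:
    return (v > 0) - (v < 0)

def fraud_score(x: list, w: list, b: float) -> float:
    """Linear model: positive score means 'classify as fraudulent'."""
    return sum(xi * wi for xi, wi in zip(x, w)) + b

def evade(x: list, w: list, eps: float) -> list:
    """Shift each feature against the weight's sign to lower the score."""
    return [xi - eps * sgn(wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0, 3.0], -1.0
x = [0.9, 0.2, 0.6]                  # transaction the model flags
print(fraud_score(x, w, b) > 0)      # → True (classified fraudulent)
x_adv = evade(x, w, eps=0.5)
print(fraud_score(x_adv, w, b) > 0)  # → False (slips through)
```

Against a neural detector the attacker estimates gradients rather than reading weights, but the principle — small, targeted input shifts that flip the classification — is identical.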

- Multi-factor authentication and device assessment tools, along with "liveness" checks in verification processes offer other approaches to prevent identity theft.
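The multi-factor piece above can be sketched with the standard TOTP construction (RFC 6238, built on HOTP from RFC 4226), which many authenticator apps implement: a one-time code derived from a shared secret and the current time window.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP over the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 Appendix D test vector: counter 0 over the ASCII secret below.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the code changes every 30 seconds, a replayed recording of a victim's earlier login — the kind of artifact a deepfake attack produces — is useless to the attacker.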

- Many US state labor departments still rely solely on facial recognition for security.

- China has banned the use of deep learning to create fake news by online video and audio providers.

- Biometric security systems are increasingly used but vulnerable to spoofs, and generative AI might be able to hack these systems.

- Security technologist Bruce Schneier, among others, discusses the potential for biometric data breaches and practical cybersecurity solutions.

- A deepfake incident involving a fake recording of mayoral candidate Paul Vallas exemplifies concerns about political manipulation, with the 2024 elections considered particularly vulnerable.

- Facebook has implemented strict criteria for political ads in Singapore to combat misinformation.

- Deepfakes are a concern for politicians, businessmen, and journalists due to their high online visibility.

- Deepfakes have been used for investment scams in Asia, defrauding companies, and spreading disinformation.

- Instances include a deepfake video during Russia's invasion of Ukraine and one of US President Joe Biden announcing military conscription inaccurately.

- Deepfakes pose a threat to elections and could influence public opinions and perceptions.

- In Singapore, misinformation about COVID-19 has impacted personal relationships and could threaten national vaccination strategies and pandemic management.

- Increase in COVID-19-related misinformation incidents observed, with POFMA being invoked in Singapore.

- Allegations of state-led disinformation campaigns raise concerns about vaccine diplomacy and the promotion or discrediting of specific vaccines.

- Social media discussions highlight misinformation about the efficacy of different types of COVID-19 vaccines.

- Singapore's media ecosystem shows confidence and readiness for growth, with an investment that signals belief in local talent and industry.

- The Asia-Pacific media market is projected to increase from S$1.65 trillion in 2023 to S$2 trillion by 2028.

- Singapore aims to capitalize on this growth by fusing technology and media, leveraging its competitive edge.

- Media plays a crucial role in fostering Singapore's cultural identity and economic expansion beyond its borders.

- The UK government plans to update media rules for the streaming era, ensuring that on-demand services from the BBC, ITV, and others are easily accessible on smart TVs and set-top boxes.

- New legislation will regulate streaming services such as Netflix, Amazon Prime Video, and Disney+ to protect audiences; overseen by regulator Ofcom.

- The changes will allow public service broadcasters to maintain visibility for their on-demand content amid evolving consumer habits.

- Smart speakers will be mandated to provide access to all licensed UK radio stations without additional charges or overlay advertisements.

- The UK media reforms are still in consultation with the industry, published as a draft bill.

- The industry faces challenges with the digital consumption of media content, including significant changes in consumer habits for newspapers, TV, and music.

- Media companies must adapt by offering user-friendly cross-platform subscriptions and enhancing the consumer experience.

- Firms are encouraged to innovate and utilize consumer feedback to remain competitive in the new media landscape.

- Canadian news industry groups urge antitrust action against Meta Platforms for blocking news content in response to legislation requiring tech firms to pay for news articles.

- Meta aims to weaken the competitive capabilities of Canadian news organizations and strengthen its dominant position in advertising and social media distribution.

- Canada's Competition Bureau is reviewing the news industry groups' complaint under the Competition Act.

- FIFA extends its agreement with the European Broadcasting Union (EBU) ensuring broadcast coverage of the Women's World Cup across several European countries.

- The EBU commits to promoting women's football with dedicated weekly content, offering wide exposure for the sport.

- The 2019 Women's World Cup achieved 1.12 billion viewers, indicating high global interest and setting expectations for the upcoming tournament.

- Broadcasters' initial bids for the Women's World Cup rights were considered too low by FIFA President Gianni Infantino, but an agreement has now been reached.

- Formula One's OTT streaming service, F1 TV, is set to launch in India, leveraging the strong mobile market and high digital engagement.

- F1 TV offers fans in-depth race weekend coverage, including in-car cameras and sessions for practice, qualifying, races, and sprint events.

- December retail sales growth was predicted at 8.0%, a slowdown from November's 10.1%.

- Factory output forecast to grow 6.6% in December year-on-year, steady with November.

- The People's Bank of China (PBOC) commits to policy support for economy and price rebound in the coming year.

- On a recent Monday, the PBOC kept the medium-term policy rate steady, against market expectations of a cut, due to pressure on the yuan.

- Analysts expected a cut in the one-year loan prime rate (LPR) by 10 basis points in Q1; Nomura anticipates two policy rate cuts and one RRR cut in H1 2024.

- The Chinese government is expected to continue fiscal spending to stimulate growth, following a 1 trillion yuan ($139.22 billion) sovereign bond issuance for investment projects in October.

- Current exchange rate mentioned: $1 = 7.1829 Chinese yuan renminbi.

- Despite sanctions, Russian political elite publicly maintains loyalty to President Vladimir Putin post-Ukraine invasion.

- Some Russian artists, media figures, and oligarchs have expressed dissent against the war.

- Deepfake technology poses significant risks for politicians, businessmen, and journalists due to ample data available for AI training.

- Rise in investment scam victims due to deepfake videos in Asia; use of deepfake audio for CEO impersonation in frauds.

- Deepfakes have been employed for disinformation, as seen with manipulated videos of leaders like Zelenskyy and Biden, and a Slovakian pre-election audio.

- Meta uncovered fake AI-generated Facebook profiles in 2019.

- Deepfake threats particularly concerning with 40 countries holding elections, potential electoral interference, and trust in electoral integrity at risk.

- OpenAI's ChatGPT rapidly garnered a user base, attracting 100 million users within two months post-launch on November 30, 2022.

- ChatGPT, despite occasional inaccuracies, spurred excitement and dread, drawing a $27 billion investment into generative AI startups in 2023.

- A call emerged to pause the training of more powerful AI systems pending impact assessment, drawing comparisons to atomic bomb concerns.

- AI's projected economic impact is $15.7 trillion globally by 2030, with adoption across various industries.

- Nvidia has become a notable beneficiary in the AI industry.

- OpenAI's CEO Sam Altman faced temporary ousting, reflecting the debate on AI advancement speed.

- AI's societal impact under debate as EU plans regulations with the EU AI Act.

- AI-generated misinformation is a concern heading into a historic election year with numerous elections worldwide.

- ChatGPT usage ramped up to 13 million daily visitors by January, demonstrating human adaptation to new technologies.

- Generative AI systems like ChatGPT are increasingly ubiquitous, with OpenAI claiming 100 million weekly users.

- Both individuals and business professionals across various levels utilize ChatGPT.

- Generative AI is regarded as a significant platform, comparable to the iPhone in 2007, leading to major investments and growth in AI startups.

- ChatGPT has raised concerns over disinformation, fraud, intellectual property issues, and discrimination.

- Concerns in higher education focus on cheating, a subject of research interest.

- ChatGPT's success is attributed to its user-friendly interface, making AI more accessible.

- The chat-based interface is a natural way for people to engage with AI, demonstrating the importance of design in technology adoption.

- The technology's capacity to generate convincing language also enhances risks of fraud and misinformation.

- The debate around AI includes its potential to transform society juxtaposed with fears of being an existential risk.

- Intellectual property rights, job displacement, biases in AI-powered systems, and concerns of misuse are prominent issues.

- Global AI regulation efforts to balance AI's benefits and risks are underway, with the first-ever AI Safety Summit held in the UK.

- The EU is finalizing its AI Act; ASEAN plans to introduce AI governance and ethics guidelines.

- A Goldman Sachs report suggests up to 300 million jobs could be impacted by AI automation, with customer service jobs particularly at risk in countries like India and the Philippines.

- The rise of generative AI has prompted dialogue about its creative uses, risks to jobs, and societal transformations.

- Battle for global influence involves US and allies advocating for the rules-based international order (RBIO) and Russia and China promoting a "multipolar" world.

- Russia accuses the US of hypocritically imposing RBIO; the US and EU reject multipolar demands as excuses for autocratic influence.

- Conflicts in Ukraine, Gaza, and the South China Sea are part of the struggle to define the world order.

- Russia's actions in Ukraine have led to global sanctions, and China's economic support is crucial to Russia.

- India's nuanced position reflects both multipolarity and RBIO interests, while China and Russia seek changes to global order.

- The dynamics of world opinion are influenced by national interests and values, with democracies leaning towards the RBIO.

- Speculations on the impact of a potential Donald Trump re-election on RBIO are a point of international concern.

- Lifelong learning is emphasized for Singaporeans to remain relevant in the workforce, with initiatives like SkillsFuture and MOOCs providing opportunities for upskilling.

- The growing prevalence of deepfakes targets public figures and raises concerns about cybercrime and disinformation.

- Deepfakes have been used in investment scams, fraud, and to spread false messages during sensitive times like the Russia-Ukraine conflict and elections.

- In 2018, contentious referendum rumors led to the establishment of Fake News Cleaner in Taiwan.

- Fact-checking in Taiwan is challenging; recent cases involved altered memos and a manipulated claim spread from an English-language forum to Mandarin social media.

- Rand Corp research highlights China's disinformation's measurable effects on Taiwan, like worsening polarization and generational divides.

- Taiwan's government created a task force to combat election-related fake news.

- Meta removed a large Chinese influence operation in August, with 7,704 Facebook accounts targeting Taiwan.

- An audio deepfake that appeared to feature a Taiwanese politician criticizing Lai was flagged by Taiwan's Ministry of Justice and Reality Defender.

- Chinese disinformation is becoming more subtle, using content farms and social media to spread false narratives organically.

- Rand Corp reported on China attempting to purchase Taiwanese social media accounts and secretly paying influencers.

- Misinformation expert Samuel Woolley found that Donald Trump's social media follower counts included bots, skewing public perception.

- Web-based political manipulation is prevalent, accessible, and potent due to AI advancements like deepfakes.

- The rise of Bitcoin and its acceptance as legal tender in El Salvador and Cuba signify cryptocurrencies' staying power.

- Trust in currency depends on the belief in future acceptance, applicable to both traditional currency and Bitcoin.

- Cryptocurrencies lack stable institutions needed for consistent public trust, leading to volatility as seen in Bitcoin price fluctuations.

- Proof-of-work mechanism for Bitcoin necessitates energy-intensive mining operations.

- Bitcoin mining consumes more energy than countries like Malaysia or Sweden.
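The energy cost above stems from the brute-force search that proof-of-work requires: miners hash candidate blocks until the digest falls below a target. A minimal sketch at trivial difficulty (real Bitcoin difficulty requires many orders of magnitude more hashes per block):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Try nonces until SHA-256(block_data || nonce) starts with
    `difficulty` hex zeros. Illustrative toy difficulty only."""
    nonce = 0
    while True:
        h = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"toy block header", difficulty=3)  # ~4,096 hashes on average
digest = hashlib.sha256(b"toy block header" + nonce.to_bytes(8, "big")).hexdigest()
print(digest[:3])  # → 000
```

Each extra hex zero multiplies the expected work by 16, which is why network-scale difficulty translates directly into country-scale electricity consumption.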

- Five factors make Bitcoin appealing: its political narrative, criminal utility, distributed seigniorage, techno-optimism, and quick wealth prospects.

- President Joe Biden condemned social media misinformation on Covid-19 as lethal.

- The spread of misinformation is aggravating the Covid-19 crisis, causing panic and division.

- US, Indian, and EU elections have significant global implications, and the results may affect international relations and policies.

- Joe Biden and Donald Trump may rematch, with potentially starkly different US policies on foreign relations and climate.

- The EU election surge of far-right parties could shift parliamentary policies and influence the selection of the next European Commission president.

- The dominance of incumbents in this year's elections raises concerns about democratic norms weakening in Asia.

- Political stability risks stem from autocrats rejecting unfavorable election results and controlled media environments leading to misinformation.

- Singapore's TOCA has been declared a Declared Online Location (DOL), prohibiting financial benefits due to multiple false statements and misinformation.

- The DOL notice will inform visitors of TOCA's history of communicating falsehoods.

- Previous pages operated by Alex Tan were also marked as DOLs, and Facebook access to them was disabled in Singapore.

- Terry Xu, chief editor of TOCA, has a history of legal issues surrounding the communication of misinformation.

- Deepfake technology poses risks for misinformation and fraud, as generative AI advancements make deepfakes more sophisticated and available.

- A market for deepfake detection tools has developed, using AI to spot content inconsistencies indicating fakes.

- Frontier AI refers to early mainstream AI applications such as ChatGPT.

- EU finalizing AI Act to classify AI systems by risk and set development/use requirements.

- ASEAN developing AI governance and ethics guidelines to suggest risk mitigation "safeguards."

- Guidelines expected to inspire national laws on AI regulation in ASEAN member states.

- Sharing of AI knowledge to benefit countries behind in AI adoption.

- AI pervasiveness seen in healthcare, education, transport, and crime fighting is of public concern.

- As many as 300 million jobs might be impacted by AI, per Goldman Sachs report.

- AI-powered chatbots may replace call centers, specifically threatening jobs in India and Philippines.

- Twitter to label/remove manipulated media, including deepfakes and edited photos/videos, starting the following month.

- MPs debate to adopt a whole-of-nation approach for a trusted, safe digital society.

- MPs suggest banks improve protections against fraud, address AI misuse, and deal with deepfakes.

- Digital banking perceived as progressing towards a "crisis of confidence" without stronger regulation.

- Banks urged to better verify authenticity and monitor abnormal transactions to secure accounts.

- Deepfakes seen as threats to democracy due to the challenge of discerning authenticity.

- Ancient China and Rome cited as historical examples of foreign influence.

- The 2020 U.S. elections and COVID-19 vaccine campaigns were targets of foreign interference.

- Technological advancements such as machine learning make hostile campaigns harder to detect.

- Foreign agents use social media and local proxies for political manipulation.

- Singapore's FICA Act responds to foreign interference targeting domestic politics both online and offline.

- FICA requires transparency from political entities and can involve social media companies in government investigations.

- Educators and social media platforms play roles in countering foreign interference; legislation offers protection.

- Individuals urged to be critical of online content to counter foreign interference.

- To identify scam calls, look for urgency and evasion; hang up and verify via known numbers.

- Avoid clicking links from messages; instead, navigate via a browser and look for HTTPS-secured sites.

- Post-scam actions include turning off the phone and avoiding further engagement with scammers.
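The link-checking advice above can be partly mechanized. A toy sketch follows; the trusted-host allow-list is an illustrative assumption, and passing this check is a heuristic, not proof of legitimacy (HTTPS only guarantees an encrypted connection, not an honest site).

```python
from urllib.parse import urlparse

# Illustrative allow-list; a real deployment would maintain this centrally.
TRUSTED_HOSTS = {"www.gov.sg", "www.police.gov.sg"}

def link_looks_safe(url: str) -> bool:
    """Heuristic only: require HTTPS and a known host before opening a link."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(link_looks_safe("https://www.gov.sg/article"))        # → True
print(link_looks_safe("http://www.gov.sg/article"))         # → False (not HTTPS)
print(link_looks_safe("https://gov-sg.example.com/login"))  # → False (unknown host)
```

The third case shows the common look-alike-domain trick: the hostname check catches it even though the scheme is HTTPS.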

- President Biden met with AI company CEOs for a discussion on ensuring AI product safety.

- Biden emphasized the need for companies to be transparent, evaluate safety, and secure AI against attacks.

- The White House meeting stressed the legal responsibility of AI companies to ensure product safety.

- The administration is open to advancing AI regulations and legislation.

- The National Science Foundation is investing $140 million in seven new AI research institutes.

- The White House's Office of Management and Budget will issue AI use guidance for the federal government.

- AI developers like Anthropic, Google, Hugging Face, NVIDIA, OpenAI, and Stability AI will participate in public AI system evaluations.

- Republican National Committee released a video using AI to portray a dystopian future under Biden.

- AI political ads are becoming more prevalent.

- U.S. AI tech regulation is less stringent compared to Europe.

- The U.S.-EU Trade & Technology Council collaborates with the administration on AI issues.

- February: Biden signed an executive order to eliminate AI bias in federal agencies.

- Administration released an AI Bill of Rights and a risk management framework.

- The FTC and DOJ's Civil Rights Division will use legal authority to combat AI-related harm.

- Tech giants repeatedly fail to address misinformation and harmful content effectively.

- AI was used to replicate Paul Walker’s face in the “Fast and the Furious” films.

- Deepfake AI can swap faces and manipulate expressions realistically.

- Deep neural networks provide highly convincing manipulated imagery and videos.

- Deepfake technology can now be used to create high-quality deepfakes in real-time.

- States use deepfake technology for political disruption.

- Facebook developed AI to identify and trace deepfake sources.

- AI Singapore competition with a S$100,000 prize aims to detect fake media.

- Deepfake technologists are advised to work with regulators on legal frameworks.

- Deepfakes have positive applications like enhancing visual effects in videos.

- Research shows deepfakes influence public perception and politics and are used for financial scams.

- Tech giants escalate fight against deepfake technology misuse.

- China's Cyberspace Administration proposes user identity verification and promotion of Chinese socialist values for AI services.

- Hong Kong police arrest a fraudulent syndicate using deepfake AI for financial scams.

- China prohibits deep learning to produce fake news.

- Meta will require disclosures for altered political, social, or election-related advertisements on Facebook and Instagram.

- Western social media accounts have been used for covert influence operations against China.

- The Philippines struggle with political disinformation campaigns on social media platforms.

- Marcos Jr's social media presence influences Filipino politics and bypasses traditional media bias.

- Estimates of the global economic value added by technology's commercial penetration vary:

  - 7% of GDP ($7tn) over 10 years as per Joseph Briggs of Goldman Sachs

  - McKinsey predicts $2.6tn to $4.4tn per year

  - UK GDP in 2022 was $2.7tn

- Biggest gains in sectors: banking, life sciences, and high tech

- Marketing, sales, and customer experience profoundly affected by AI

- AI's natural language processing helps in creative fields: producing text, audio, video, music, code, images, and design

- Generative AI creates images from text, featuring realistic 3D perspectives and shadows

- In audio/music, potential exists for AI to generate new work with existing artists' styles

- Creativity, once considered uniquely human, is now within AI's scope, contrary to earlier expectations

Generative AI could automate:

  - 25% of work tasks in the US and Europe

  - Over 40% of tasks in administration and legal professions

  - Less than 10% in physically intensive jobs like construction and maintenance

What is Generative AI?

- Generative AI uses foundation models, which are trained on broad data for a wide range of tasks

- Training data includes internet-sourced content, such as 650 million images for OpenAI's DALL·E 2

- It uses machine learning for creating analytical models from new data

- Generative AI "creates" content through predictive sequencing, pattern recognition, and reproduction from text to images, video, and audio
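The "predictive sequencing" described above can be illustrated with a toy bigram model: count which word follows which, then emit the most frequent successor. Real systems use neural networks over subword tokens, but the predict-the-next-token principle is the same; the training sentence here is an arbitrary example.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model: dict, word: str):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # → cat ("the" is followed by "cat" twice, "mat" once)
```

This also makes the limitation noted below concrete: the model reproduces statistical patterns in its training text and has no mechanism for checking whether a continuation is true.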

Adaptation and Limitations:

- Generative AI is fine-tuned for specific tasks using narrower or specialized inputs

- It is not capable of reasoning and is statistically driven, functioning like autocomplete without fact-checking

- Acumen Research and Consulting identifies leading players: D-ID, Genie AI, Rephrase.ai, Amazon, Adobe, Google, IBM, and Microsoft, among others

- Investment in generative AI startups increased to $14bn in the first half of 2023 compared to $2.5bn in all of 2022

- Even excluding OpenAI, fundraising is significant; Conviction's accelerator program received over 1,000 applications

Strategy:

- BCG advises companies to develop a generative AI strategy to be industry leaders in 5 years

- Companies with significant proprietary data will benefit by developing specialized in-house tools

- Deloitte suggests that data-rich industries will integrate AI faster

- ManMohan Sodhi emphasizes identifying problems suitable for AI technology

- Gartner advises realistic goals for generative AI's value

Fine-tuning vs. building from scratch:

  - Fine-tuning can be cost effective

  - Building proprietary models can be secure but expensive

- Human oversight is essential to manage errors and quality control

Use Cases: Consumer-Facing

- Marketing content enhanced by generative AI, with potential for 30% of messages by 2025 to be AI-generated (Gartner prediction)

- Customer service via AI-driven chatbot responses

- Film and video creation at lower costs, with potential for AI-created blockbusters by 2030

- AI assists in teaching with personalized lesson plans and course design (e.g., Duolingo)

- AI enhances search functions for product comparisons and recommendations

- Journalism's use of AI criticized for its inability to discern fact from fiction

Use Cases: Internal

- Generative AI aids organizations in brainstorming, producing numerous suggestions

- AI assists in product design and could inspire novel ideas

- Code writing and optimization using AI requires realistic expectations

- Medical innovations accelerated by AI, with the first AI-designed drugs underway

- AI-generated internal knowledge banks improve accessibility of company knowledge

- AI used for data analysis to uncover unseen trends

- Presentation creation streamlined through AI programs

Sector Specific:

- Academic papers written by AI raise concerns on quality and factual accuracy

- AI-generated scientific papers could benefit from improved readability

**Consumer Protection on Shopee**:

  - Shopee Mall offers official goods from authorized retailers like Under Armour and Samsung.

  - Verified sellers provide authentic products.

  - If authenticity is in doubt, buyers may be refunded twice the purchase amount.

  - Shopee Guarantee withholds seller payment until buyer satisfaction.

  - Preferred Sellers are high-rated and offer exceptional service.

Returns and Refunds:

  - 15-day return period for Shopee Mall, 6 days for others.

  - Free product returns policy.

  - Shopee targets a 2.5-day resolution time for returns/refunds.

  - Customer service is reachable via chat, hotline (6206 6610), or feedback form.

Scam Awareness:

  - Educational scam awareness content shared through social media channels.

  - Alerts and collaborations on scam prevention with Singapore Police Force (SPF).

  - SPF awarded Shopee with Community Partnership Award for scam prevention contributions.

  - Campaigns and in-app resources (Shopee Help Centre) to educate and report scams.

Deepfake Detection and Misinformation Combating Efforts:

  - Trusted Media Challenge by AI Singapore to create AI solutions against fake media.

  - ByteDance research scientist won the competition, plans to integrate AI model into BytePlus platform.

  - The Challenge saw participation from 470 teams globally, top three received S$700,000 in prizes and grants.

  - AI models were given a dataset including video clips from Mediacorp's CNA and The Straits Times.

  - AI Singapore aims to commercialize media-focused AI innovations.

Deepfake Threat and Prime Targets:

  - Public figures are prime targets for deepfakes due to available data.

  - Victims of investment scams and businesses impacted by CEO-voice deepfakes.

  - Deepfake disinformation used during Russia's invasion of Ukraine and Israel-Hamas conflict.

Global Political Landscape and Disinformation:

  - With 40+ elections in a year, deepfakes could influence public opinion and disrupt electoral integrity.

  - Taiwanese government and citizens combat disinformation, educational initiatives growing.

  - Chinese methods of spreading disinformation becoming more subtle and organic.

  - TikTok removed thousands of videos for policy violations and maintains moderators and fact-checking partners.

Response to Social Media Misinformation:

  - The European Commission is addressing misinformation with Meta and TikTok.

  - Singapore's Ministry of Communications emphasizes content moderation.

  - TikTok actively moderates conflict-related content and misinformation, removing large numbers of videos and fake accounts.

- A TikTok spokesperson calls the idea that the platform benefits from shock-value content "baseless", saying it removes content that violates its policies against harmful misinformation and shocking content.

- Content that gains popularity is further reviewed to prevent recommending videos that violate TikTok's rules.
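
The popularity-triggered review described above can be pictured as a threshold-based escalation pipeline. This is an illustrative toy model only (the tier names and view thresholds are invented assumptions, not TikTok's actual system): content whose view count crosses successive thresholds is routed to progressively stricter review before it keeps being recommended.

```python
# Toy model of popularity-triggered review escalation (hypothetical tiers):
# (view threshold, review tier) pairs, checked from strictest downward.
REVIEW_TIERS = [
    (1_000_000, "senior-moderator"),
    (100_000, "human"),
    (10_000, "automated"),
]

def review_tier(view_count):
    """Return the strictest review tier a video's popularity triggers, or None."""
    for threshold, tier in REVIEW_TIERS:
        if view_count >= threshold:
            return tier
    return None
```

Under this sketch, a video at 50,000 views would get automated re-review, while one at two million views would be escalated to a senior moderator.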

- TikTok does not support keyword or topic-based advertising and blocks search terms against its policies, including "Israel" and "Palestine" from advertising.

- Fake videos related to Israel-Hamas content on TikTok were observed; some were removed but others remained, illustrating challenges in content moderation.

- Mr. Harris from Malaysia urges more regulatory scrutiny and transparency from social media apps in content moderation.

- US President Joe Biden asserts social media misinformation on COVID-19 and vaccinations is "killing people", particularly among the unvaccinated.

- Individuals like Mr. Goh and artist Zelda suffer strained relationships and social isolation due to their beliefs in COVID-19 misinformation and conspiracy theories.

- Ms Tin Pei Ling, MP for MacPherson, stresses the importance of engaging with people holding falsehood beliefs to provide correct facts and understand each other better.

- Experts warn that unchecked misinformation could undermine Singapore's pandemic strategy and its vaccination exercise.

- Instances involving misinformation have led to the invocation of Singapore’s fake news law, POFMA, and notable spikes in vaccine-related Google searches.

- Concerns arise over state-led disinformation campaigns, potentially linked to vaccine geopolitics and to discredit vaccines from other countries.

- Misinformation spotlighted includes claims about mRNA-based vaccines' ineffectiveness against COVID-19 variants and biased vaccine comparisons on social media.

- Rebuttals come from experts and authorities, emphasizing the efficacy of mRNA vaccines and refuting claims that favor certain vaccines.

- Senior Minister Teo Chee Hean, discussing vaccine nationalism, emphasizes scientific and medical facts over geopolitical influences.

- Public communication campaigns, experts, and media work together to dispel myths and build resilience against vaccine misinformation and foster public trust in science.

- EU's Digital Services Act (DSA) set to take effect, shifting how large online platforms are regulated and directly influencing online user experience globally.

- Online video and audio providers in China banned from using deep learning to create fake news.

- China proposes regulations requiring AI service providers to verify user identities and promote socialist values, open for public consultation until February 28.

- AI technology's potential for misuse, such as generating misleading text via pre-trained transformer models or creating deepfake content, is recognized.

- The debate over AI and copyright concerns artists, highlighting fears of job loss and AI misuse alongside opportunities such as automation and illness prediction.

- A race for AI regulation unfolds globally, with the first AI Safety Summit held in the UK emphasizing safe and responsible AI use.

- Dr. David Lye, director at the National Centre for Infectious Diseases, refutes claims about mRNA vaccines, stating they are the most effective against variants, with limited data on Sinovac's efficacy.

- The expert committee on COVID-19 vaccination under Singapore's Ministry of Health emphasizes the high efficacy of Pfizer-BioNTech and Moderna mRNA vaccines, around 90%, against severe disease and hospitalization.

- Studies and roll-outs in the U.S., U.K., and Israel show mRNA vaccines' effectiveness.

- A U.K. study finds the Pfizer-BioNTech mRNA vaccine provides about 88% protection against symptomatic disease from the B.1.617.2 (Delta) variant.

- Senior Minister Teo Chee Hean addresses vaccine nationalism, urging reliance on science and facts for vaccine choices in Singapore.

- Dr. Jayakumar from CENS notes the impact of vaccine diplomacy on national soft power and acknowledges the spread of fear about vaccines on social media.

- Singapore's vaccine choices are observed globally; questions may arise on why certain vaccines are chosen over others.

- Myths around vaccines dispelled through mainstream media and infectious disease experts; public service campaigns, like Phua Chu Kang, promote vaccination awareness.

- Efforts in Singapore include the "The Covid Chronicles" comic series to explain COVID-19 science engagingly.

- According to Dr. Lim, public communication should present diverse views, acknowledge differing interpretations of data, and explain disagreements.

- The Financial Times discusses the negative impact of social media on politics, spreading misinformation, and partisanship, contrasting with the positive potential envisioned for the platforms.

- Between January 2015 and August of the year following the election, Russian-linked misinformation reached 146 million Facebook users; YouTube reported 1,108 Russian-linked videos, and Twitter identified 36,746 related accounts.

- Social media platforms, through their content delivery algorithms, create an “attention economy” leading to bias reinforcement rather than wisdom and truth.

- The erosion of horse-trading politics by social media may hurt liberty by corroding conditions for civil discourse and compromise.

- Regulatory and legal measures are considered for social media accountability, along with potential algorithm adjustments to reduce misinformation spread.

- The SCMP reports on the Thai political situation, where parliament is leading efforts to reconcile political differences, with the rejection of protestor participation in the committee and the progression of constitutional amendment procedures.

- The monarchy's involvement in politics is discouraged, emphasizing its moral authority and the need for parliament to find an acceptable solution through dialogue and compromise.

- Generative AI has seen rapid growth in usage and sophistication, with over one million users trying OpenAI's ChatGPT within five days of release.

- Insider Intelligence predicts that by the end of 2023, 25% of U.S. internet users will use generative AI at least monthly, with a rise expected.

- A Salesforce survey indicates widespread use of generative AI, particularly in India.

- Improvements in generative AI functionality include better understanding and replication of natural language.

- The potential transformative impact of generative AI on various sectors is likened to the introduction of the microprocessor.

- Digital misinformation expert Samuel Woolley noted that online follower counts can be misleading, as they may include bots, impacting public perception.

- Political manipulation is now more prevalent and powerful online due to lower costs and the reach of the internet, with deepfakes making falsification of video and audio easier.

- Taiwanese President Tsai has tackled Beijing's disinformation campaign and refuted claims that her policies suppress free speech from rivals.

- Tsai promotes public education on discerning false information while carefully balancing information freedom and rejection of manipulation.

- Melody Hsieh's organization, Fake News Cleaner, with 22 lecturers and 160 volunteers, educates on discerning disinformation in Taiwan, using incentives like handmade soap.

- Taiwanese fact-check groups include Cofacts with Line app integration, Doublethink Lab, and MyGoPen; they address disinformation and improve public skepticism.

- Fact-checkers in Taiwan often encounter complex situations, as seen with manipulated claims against critic Lai and his visit to Paraguay.

- Rand Corp research indicates China's disinformation affects Taiwan, deepening social and political divides and generational perceptions.

- Taiwan recently created a task force to address election-related fake news.

- Meta dismantled a major Chinese influence campaign in August, deleting 7,704 Facebook accounts targeting Taiwan, among other regions.

- A deepfake audio clip targeting a politician in Taiwan exemplifies sophisticated disinformation efforts, confirmed by Taiwan's Ministry of Justice and AI-detection firm Reality Defender.

- Chinese disinformation tactics have shifted towards more subtle narratives, often starting on content farms, spread by various agents, and boosted on social media.

- Rand Corp reports that China has attempted to purchase Taiwanese social media accounts and possibly paid local influencers to share pro-Beijing content.

- Formula One engaged in talks with Star and other platforms but chose F1 TV, believing others undervalued their rights.

- There's a notable increase in F1's global popularity, partly due to Netflix's 'Drive to Survive', attracting younger and more female fans.

- Holmes credits the series for not just increasing fans but enriching the sport's demographic.

- F1 is focused on expanding its audience beyond traditional fans by adding exciting races and content on social media.

- The 2023 F1 season begins on March 5 in Bahrain.

- Perpetual Evolution reports a critical digital skills gap in marketing, with 74% of surveyed executives recognizing a talent shortage.

- Recruitment and reskilling are priorities, with 47% focusing on the former and 40% equally on both.

- Customer experience is considered the leading success driver by marketers, despite lagging technology adoption.

- Skills for future marketing include openness to change, adaptability, broader business knowledge, and technological capabilities.

- Customer experience, strategy, brand management, and data analytics are essential today and in the next five years for marketing.

- AI is seen as the most influential technology for the next five years in marketing, followed by mobile apps and voice/digital assistants.

- The Economist Group is known for high-quality, independent analysis and produces various products, including The Economist newspaper.

- Deepfakes pose a significant threat, targeting politicians and public figures and being used for scams and disinformation.

- Deepfake audio and videos have been used to spread misinformation and influence politics, including during elections.

- Concerns about deepfakes and misinformation have led to initiatives like Fake News Cleaner in Taiwan, which educates the public.

- Increased yet subtle Chinese disinformation efforts have been identified, often organically woven into online narratives.

- The weaponization of deepfake technology can have a destabilizing impact on societies through influence on opinions and perceptions.

- In Thailand, a digital cash handout worth 10,000 baht from the new government could disproportionately benefit the rich over lower-income earners.

- The digital wallet program is available to Thai nationals over 16 and is met with skepticism regarding long-term financial implications.

- Critics suggest the digital cash handout, set to disburse next February, could increase taxes or inflation as repayments for the handout.

- The distribution method may not benefit those living far from business centers or who have their registered address in remote areas.

- Upcoming elections might be influenced by such financial aid policies, hinting at a strategy to win votes for the ruling party.

- AI technology can make errors and introduce falsehoods, raising concerns about academic accuracy.

In the legal profession:

  - AI can assist with analyzing and summarizing large volumes of documents.

  - Useful for contract drafting and compliance, it's already being utilized by firms.

  - PwC UK partnered with OpenAI's Harvey for generative AI development.

  - Other companies use AI for creating proposals and analyzing contracts for revenue insights.

- Using AI for legal cases without verification is not recommended.

In finance:

  - Generative AI aids financial analysis and app development for competitor insights.

  - Fund managers may use AI for screening and analyzing investment decisions.

  - AI assists internal tasks like summarizing non-standardized bookkeeping.

Concerns about generative AI include:

  - Prone to errors, with content being statistically probable rather than factual.

  - Risk of misuse in coding, threatening data security and requiring thorough checks.

  - High reputational risk with companies not adequately mitigating it.

Public perception:

  - Rising consumer distrust, though familiarity may improve attitudes.

  - Young, educated, managerial classes, and people from emerging economies view AI more positively.

Skills obsolescence:

  - Demand shifting towards AI quality control and oversight, away from original content creation.

  - Rapid changes in the tech landscape could render certain tech occupations redundant.

  - AI can help in retraining and promoting critical thinking.

IP and copyright concerns:

  - Generative AI's ability to mimic styles may infringe on IP rights, prompting lawsuits from authors such as Jodi Picoult against OpenAI.

  - Microsoft assures legal protection for users against copyright breaches.

Potential for misuse:

  - AI could facilitate state-level disinformation and individual-level scams.

  - Regulation suggested to mitigate risks, not relying on big tech self-regulation.

- Future developments are uncertain but will require business adaptation.

- Hong Kong has added focus on developing ethical technology usage and training within industries.

- HKUST students are participating in global data governance research, particularly in financial ethics.

- The project helps students navigate dilemmas like ethically monetizing customer data while maintaining privacy.

- Hong Kong's "technology-neutral" regulations do not yet address specific AI risks.

- CHEN Xin, Ezekiel CHIM, Valeria STADALNINKAS, and WANG Can shared their diverse backgrounds and hopes stemming from the project, emphasizing the importance of ethical considerations in the AI-enabled economy.

- Hong Kong aims for progressive fintech development and talent cultivation with an open approach to innovative technology.

- Reuters reports on AI's impact on professional sectors and details users’ experiences participating in related projects.

- China's economy data:

  - Q4 GDP expected growth of 5.3% year-over-year, with a 1.0% quarterly increase.

  - 2023 likely to see 5.2% expansion, a rebound from 3% growth in 2022.

  - Growth is forecast at 4.6% in 2024 and 4.5% in 2025, underscoring doubts about the strength of China's economic recovery.

  - Local debt, property crisis, and global dynamics contribute to uncertain outlook.

  - Investors and policymakers anticipate upcoming data releases for future sentiment.
