Project Nexus: Governors see potential to enable instant cross-border payments

Project Nexus aims to connect domestic instant payment systems to improve the speed, cost, and transparency of cross-border payments, as well as access to them. The BIS Innovation Hub is now working with the central banks of India, Malaysia, the Philippines, Singapore and Thailand as they move towards live implementation of Nexus.

There's a growing concern about an AI bubble

Despite massive investment and hype, AI has yet to deliver on its promised transformative impact. Many experts believe it will take much longer than anticipated for AI to produce significant changes in daily life and the economy.

Key issues:

Overhyped Expectations

  • Massive investments: Tech giants and startups are pouring billions into AI research, development, and infrastructure. This includes acquiring AI startups, building specialized AI chips, and constructing massive data centers.

  • Inflated valuations: The stock market has rewarded companies that integrate AI into their business plans, leading to inflated valuations and a fear of missing out (FOMO) among investors.

  • Unrealistic timelines: There's a tendency to overestimate the speed at which AI will revolutionize industries and daily life, leading to unrealistic expectations about its near-term impact.

Limited Practical Applications

  • Narrow intelligence: While AI excels at specific tasks like image recognition and language translation, it struggles with broader reasoning, understanding context, and general intelligence.

  • Complex problem-solving: Many real-world problems require human judgment, creativity, and adaptability, which AI currently lacks.

  • Data limitations: AI models heavily rely on vast amounts of high-quality data, which can be difficult and expensive to obtain, especially for niche or complex domains.

High Costs

  • Expensive hardware: Developing and training advanced AI models requires specialized hardware like GPUs and TPUs, which are costly and in high demand.

  • Energy consumption: AI data centers consume massive amounts of electricity, driving up operational costs and environmental concerns.

  • Ongoing expenses: Maintaining and updating AI models is an ongoing expense, as new data, algorithms, and hardware are required to keep up with the competition.

Potential for Disappointment

  • Investor backlash: If AI fails to deliver on its promised returns, investors may lose confidence in the technology and pull back funding.

  • Economic slowdown: Overinvestment in AI could lead to a misallocation of resources and hinder economic growth if the technology doesn't pan out.

  • Job displacement concerns: While AI has the potential to create new jobs, it could also lead to job losses in certain sectors, causing social and economic disruption.

It's important to note that these are potential challenges and not definitive predictions. AI is a rapidly evolving field, and there's a chance that these obstacles will be overcome. However, understanding the risks is crucial for making informed decisions about AI investments and development.


How the Financial Action Task Force (FATF) Is Being Abused by Autocrats and Dictators

The Financial Action Task Force (FATF), originally created to combat money laundering, has been increasingly weaponized by authoritarian regimes to silence dissent and suppress opposition. By exploiting the vaguely worded FATF standards, autocrats can freeze assets, harass activists, and even imprison critics under the guise of fighting financial crime.

Key tactics employed by these regimes include:

  • Data collection: Governments amass financial information on citizens and opposition figures, often using it to build cases against them.

  • Asset freezing: Banks, fearing repercussions, comply with government requests to freeze accounts, leaving individuals financially crippled.

  • Politically motivated arrests: Critics are detained on spurious financial crime charges, with lengthy pre-trial detentions becoming commonplace.

  • Targeting exiles: Authoritarian states collaborate to pressure Western countries into freezing assets and extraditing dissidents living abroad.

While the FATF has made efforts to address these abuses, such as revising Recommendation 8 to protect charities, its primary focus remains on intensifying the fight against money laundering rather than on preventing misuse of its own framework. Critics argue that the FATF needs to implement stricter standards, establish a reporting mechanism for abuses, and develop safeguards that stop member countries from exploiting the regime.

Essentially, while the FATF was designed as a tool for financial integrity, it has become a potent weapon in the hands of autocrats, allowing them to erode democratic freedoms and suppress opposition under the guise of fighting crime.


Beyond the Firewall: Creative Uses of AI in Banking Operational Risk Management

Artificial intelligence (AI) is transforming the banking industry, not just in customer-facing applications but also behind the scenes in operational risk management. While traditional methods focus on compliance and rule-based systems, AI offers a new frontier for proactive risk mitigation and intelligent response.

This article explores five unconventional approaches that leverage AI's power to create a more dynamic and comprehensive risk management strategy:

1. The Conversational Comrade: AI Chatbots for Incident Response

Imagine a tireless assistant, always available to guide staff through the initial stages of a security incident. AI-powered chatbots can be trained on historical data, regulations, and best practices to become valuable assets during critical moments. These chatbots can triage incoming reports, categorize them by severity, and offer step-by-step guidance on initial response protocols. Furthermore, they can facilitate root cause analysis by asking focused questions, searching internal databases for similar events, and suggesting potential causes based on learned patterns. Finally, AI chatbots can streamline post-incident reporting by generating draft reports based on user input, saving valuable time and ensuring consistency in reporting formats.
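
To make the triage step concrete, here is a minimal Python sketch of the kind of first-pass rule such a chatbot might apply. The severity categories, keywords, and response steps are invented placeholders rather than a real incident taxonomy, and a production assistant would layer a trained language model over this logic.

    # Minimal triage sketch: map a free-text incident report to a severity
    # level and a first response step. All categories, keywords, and steps
    # below are illustrative placeholders.

    SEVERITY_KEYWORDS = {
        "critical": ["ransomware", "data exfiltration", "wire fraud"],
        "high": ["phishing", "unauthorized access", "malware"],
        "medium": ["suspicious login", "policy violation"],
    }

    FIRST_STEPS = {
        "critical": "Isolate affected systems and page the on-call security lead.",
        "high": "Lock the affected accounts and open an incident ticket.",
        "medium": "Log the event and notify the operational risk team.",
        "low": "Record the report for weekly review.",
    }

    def triage(report: str) -> tuple[str, str]:
        """Return (severity, first response step) for a free-text report."""
        text = report.lower()
        for severity, keywords in SEVERITY_KEYWORDS.items():
            if any(kw in text for kw in keywords):
                return severity, FIRST_STEPS[severity]
        return "low", FIRST_STEPS["low"]

    severity, step = triage("Employee reports a phishing email with a credential link")
    print(f"{severity}: {step}")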

2. Gamified Risk Detection: Empowering Employees with AI

Banks often rely on employees to flag suspicious activity. However, traditional reporting methods can be cumbersome and lack real-time engagement. Here's where gamification steps in. Imagine a system where employees can flag anomalies in transactions, customer behavior, or system performance through a user-friendly interface that incorporates game mechanics like points and leaderboards. This not only incentivizes participation but also fosters a culture of collective vigilance. The power of AI comes into play when these flagged activities are analyzed. The AI can prioritize them based on risk factors and severity, and even provide investigative tools for deeper analysis. Furthermore, the AI can continuously learn from employee feedback on flagged activities, refining its ability to detect anomalies over time. This creates a powerful feedback loop where human intuition is amplified by AI's analytical muscle.
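
A sketch of how the flagging loop might fit together, assuming a points-based leaderboard and a toy prioritization heuristic; the field names, weights, and point values are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Flag:
        reporter: str
        description: str
        amount: float          # size of the transaction involved
        confirmed: bool = False

    leaderboard: dict[str, int] = {}

    def risk_score(flag: Flag) -> float:
        # Toy heuristic: larger amounts and certain phrases raise priority.
        score = min(flag.amount / 10_000, 5.0)
        if "offshore" in flag.description.lower():
            score += 2.0
        return score

    def award_points(flag: Flag) -> None:
        # Confirmed flags earn more points, aligning incentives with accuracy.
        leaderboard[flag.reporter] = leaderboard.get(flag.reporter, 0) + (10 if flag.confirmed else 1)

    flags = [
        Flag("alice", "wire transfer to new offshore account", 250_000),
        Flag("bob", "duplicate vendor invoice", 3_000, confirmed=True),
    ]
    for f in sorted(flags, key=risk_score, reverse=True):  # highest risk first
        award_points(f)
        print(f"{risk_score(f):.1f}  {f.description} (reported by {f.reporter})")
    print(leaderboard)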

3. The Friendly Adversary: AI-Powered Penetration Testing

Traditional penetration testing involves security professionals attempting to breach a bank's systems. While valuable, this approach can be time-consuming and limited in scope. AI offers a new approach: a constantly learning "friendly adversary." This AI can be trained on a bank's security protocols and continuously attempt to breach them, mimicking real-world hacking attempts. By constantly testing systems and processes for weaknesses, the AI can identify vulnerabilities that might be missed by traditional methods. Even more importantly, the AI can rank these vulnerabilities based on potential impact and exploitability, guiding security teams towards the most critical areas for remediation. Finally, because the AI can adapt its attacks based on the bank's evolving security posture, it ensures a more comprehensive evaluation and reduces the chance of blind spots.
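
The ranking step is the most mechanical part of this idea. A minimal sketch, assuming findings arrive with impact and exploitability ratings on a 1-5 scale; the findings below are fabricated examples, and a real system would consume scanner or fuzzer output.

    # Rank vulnerabilities by impact x exploitability so security teams
    # see the most critical items first. Ratings and findings are invented.
    findings = [
        {"name": "outdated TLS configuration", "impact": 3, "exploitability": 4},
        {"name": "SQL injection in reporting API", "impact": 5, "exploitability": 5},
        {"name": "verbose error messages", "impact": 2, "exploitability": 3},
    ]

    for f in sorted(findings, key=lambda f: f["impact"] * f["exploitability"], reverse=True):
        print(f"{f['impact'] * f['exploitability']:>2}  {f['name']}")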

4. Simulating the Future: Generative AI for Scenario Planning

Imagine a crystal ball that shows not only potential futures, but also their likelihood and impact. Generative AI can be harnessed to create such a tool for operational risk management. By training a generative AI model on historical data, regulations, and industry trends, banks can create realistic scenarios that depict potential operational risks, such as cyberattacks, natural disasters, or economic downturns. These scenarios can then be used to "stress test" the bank's response plans, identifying gaps in procedures and refining mitigation strategies. Perhaps even more importantly, generative AI can be used to identify emerging risks on the horizon, allowing banks to take proactive measures before they materialize.
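
As a minimal illustration of scenario generation, the stub below samples stress-test narratives from hand-written templates; in practice the event, scope, and duration text would come from a trained generative model (whose API varies by vendor, so none is shown here).

    import random

    # Template fragments stand in for generative-model output in this sketch.
    EVENTS = ["a ransomware outbreak", "a regional flood", "a core-banking outage"]
    SCOPES = ["a single branch", "the payments platform", "all retail channels"]
    DURATIONS = ["two hours", "24 hours", "a full week"]

    def generate_scenario(rng: random.Random) -> str:
        return (f"Scenario: {rng.choice(EVENTS)} affects {rng.choice(SCOPES)} "
                f"for {rng.choice(DURATIONS)}.")

    rng = random.Random(42)  # fixed seed so the drill set is reproducible
    for _ in range(3):
        print(generate_scenario(rng))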

5. Reading Between the Lines: Emotion Recognition for Customer Interactions

Customer interactions are a treasure trove of data, and AI can help banks unlock valuable insights related to operational risk. By integrating AI with call centers or chatbots, banks can analyze customer sentiment during interactions. This can be particularly useful in identifying potential issues early on. For instance, the AI can recognize signs of distress or anxiety that might indicate fraudulent activity on a customer's account. This allows for a swifter response and potentially prevents financial losses. Furthermore, AI-powered sentiment analysis can help identify frustrated customers and flag them for priority service, improving customer satisfaction and reducing churn. Finally, by analyzing customer sentiment data, banks can identify areas where customer service representatives need additional training to better manage difficult interactions, leading to a more positive customer experience overall.
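
A toy version of the screening step, using a hand-written lexicon in place of a trained sentiment model; the terms, routing rules, and sample transcript are all illustrative.

    # Screen a call transcript for distress (possible fraud) and frustration
    # (churn risk). A production system would use a trained sentiment model.
    DISTRESS_TERMS = {"unauthorized", "stolen", "didn't make", "locked out"}
    FRUSTRATION_TERMS = {"third time", "ridiculous", "cancel my account"}

    def screen_transcript(transcript: str) -> list[str]:
        text = transcript.lower()
        alerts = []
        if any(term in text for term in DISTRESS_TERMS):
            alerts.append("possible fraud distress -> route to fraud team")
        if any(term in text for term in FRUSTRATION_TERMS):
            alerts.append("frustrated customer -> flag for priority service")
        return alerts

    print(screen_transcript("This is the third time I'm calling about a payment I didn't make"))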

Conclusion

These are just a few examples of how AI can be harnessed to move beyond traditional risk management approaches. By embracing these creative applications, banks can foster a more proactive and intelligent risk management environment, ultimately safeguarding their operations and building trust with their customers. As AI technology continues to evolve, the possibilities for even more innovative risk mitigation strategies are limitless.


Steering the Ship: Operational vs. Strategic Risk

Every organization, from a bustling startup to a well-established corporation, navigates a sea of uncertainty. This uncertainty manifests as risk, the potential for events to disrupt operations and impact success. But not all risks are created equal. Understanding the difference between operational risk and strategic risk is crucial for effective risk management.

Operational Risk: The Engine Room

Imagine the engine room of a ship. Here, a network of pipes, valves, and machinery keeps the vessel moving. Operational risks are like leaks, malfunctions, or human error in the engine room. They arise from the day-to-day functions of a business and can disrupt its core operations.

Examples:
  • System failures (IT outages, power disruptions)
  • Human error (accidents, negligence)
  • Compliance issues (regulatory violations)
  • Third-party disruptions (supplier delays, transportation problems)
  • Natural disasters (floods, fires)

Operational risks tend to be more frequent but have a lower impact on the organization. However, they can snowball if left unchecked, leading to significant financial losses and reputational damage.

Strategic Risk: Charting the Course

Now, consider the captain's cabin on the ship. Here, the captain and crew pore over charts, plan their route, and make critical decisions about the ship's direction. Strategic risks are like sudden storms, uncharted territories, or misreading the map. They stem from the organization's long-term goals and can significantly impact its future success.

Examples:
  • Technological advancements that render a product obsolete
  • Shifting customer preferences
  • Entry of new competitors
  • Mergers and acquisitions gone wrong
  • Economic downturns

Strategic risks are typically less frequent but carry a much higher potential impact. They can derail an organization's entire business model or even lead to its demise.

Managing the Risks: Calm Seas Ahead

An effective risk management strategy addresses both operational and strategic risks. Here's how:

Operational Risk: Focus on prevention and mitigation. Implement robust procedures, invest in training, and have contingency plans in place.

Strategic Risk: Continuously scan the environment, identify potential threats and opportunities, and adapt the organization's course accordingly.

By understanding and managing both operational and strategic risks, organizations can navigate the uncertain seas of business with greater confidence and reach their desired destinations.



8 AI Risks Lurking in the Shadows of Business Innovation

Artificial intelligence (AI) is revolutionizing businesses, but along with the benefits come significant risks. Here are 8 top risks to consider before diving into the world of AI:

1. Biased Algorithms, Unequal Outcomes: AI systems learn from data, and biased data leads to biased algorithms. This can perpetuate discrimination in areas like hiring, loan approvals, or criminal justice.
  • How it Happens: Biased training data can reflect societal prejudices or incomplete information. For example, an AI resume screener trained on past hires might favor resumes with keywords used by a specific demographic.
  • Mitigate it: Scrutinize training data for bias, ensure diversity in data sets, and implement human oversight in critical decision-making processes.
  • Tell-Tale Signs: Unexplained disparities in AI outputs across different demographics; a minimal disparity check is sketched below.
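
One simple way to surface such disparities is to compare approval rates across groups. A minimal sketch with toy data; the group labels are placeholders, and the 80% ratio used as a red flag is a common rule of thumb, not a legal threshold.

    from collections import defaultdict

    decisions = [  # (group, approved) -- toy records, not real data
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1

    rates = {group: approved / total for group, (approved, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    print(rates, f"ratio: {ratio:.2f}")  # a ratio well below 0.8 warrants review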

2. Job Automation Anxiety: AI can automate tasks, leading to job displacement. While new jobs will be created, there's a fear of a skills gap leaving some workers behind.
  • How it Happens: Repetitive tasks are prime targets for automation. This can disrupt industries like manufacturing, transportation, and data entry.
  • Mitigate it: Invest in employee retraining programs, focus on AI-human collaboration for complex tasks, and create clear communication plans about automation.
  • Tell-Tale Signs: Repetitive tasks being phased out, increased focus on automation in company strategy discussions.

3. Security Vulnerabilities: AI systems can be vulnerable to hacking, potentially exposing sensitive data or manipulating AI outputs for malicious purposes.
  • How it Happens: Complex AI systems can have hidden vulnerabilities. Hackers might exploit these to steal data, disrupt operations, or even cause physical harm (e.g., in AI-powered autonomous vehicles).
  • Mitigate it: Implement robust cybersecurity measures, conduct regular security audits of AI systems, and prioritize data privacy.
  • Tell-Tale Signs: Unusual behavior in AI outputs, unexplained system crashes, or data breaches; a toy output monitor is sketched below.
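
A toy version of such a monitor, tracking the average of a model's output scores per batch and alerting on large deviations; the window of historical values and the three-sigma threshold are arbitrary illustrative choices.

    from statistics import mean, stdev

    history = [0.61, 0.58, 0.63, 0.60, 0.59, 0.62, 0.57, 0.61]  # past batch averages

    def drifted(new_batch_avg: float, history: list[float], n_sigma: float = 3.0) -> bool:
        # Flag a batch whose average strays too far from the historical norm.
        mu, sigma = mean(history), stdev(history)
        return abs(new_batch_avg - mu) > n_sigma * sigma

    print(drifted(0.60, history))  # False: within the normal range
    print(drifted(0.90, history))  # True: investigate tampering or drift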

4. Algorithmic Black Boxes: Some AI systems are complex and opaque ("black boxes"), making it difficult to understand how they reach decisions. This lack of transparency can be problematic, especially for high-stakes decisions.
  • How it Happens: Deep learning models can be intricate, with decision-making processes not easily explained. This can lead to a lack of trust and accountability.
  • Mitigate it: Develop explainable AI (XAI) techniques (one is sketched below), document decision-making processes, and involve human experts in the loop for critical choices.
  • Tell-Tale Signs: Inability to explain AI outputs, difficulty in debugging errors, and a feeling of unease about the rationale behind AI decisions.
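
One widely used XAI technique is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. A minimal sketch using scikit-learn on synthetic data, constructed so that feature 0 dominates by design.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                  # three candidate features
    y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)  # labels driven mainly by feature 0

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: importance {importance:.3f}")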

5. Privacy Infiltration: AI relies on data, and businesses need to be mindful of privacy concerns when collecting and using customer data.
  • How it Happens: Over-collection of data, inadequate data security, and lack of user control over data can lead to privacy breaches.
  • Mitigate it: Obtain explicit user consent for data collection, implement data anonymization techniques (a simple pseudonymization sketch follows), and be transparent about how data is used.
  • Tell-Tale Signs: Vague data privacy policies, lack of user control over data settings, and customer complaints about data misuse.
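
One basic anonymization building block is keyed pseudonymization: replacing a direct identifier with a keyed hash so records can still be linked without exposing the raw value. A deliberately simplified sketch; real deployments need proper key management and a broader de-identification strategy.

    import hashlib
    import hmac

    SECRET_KEY = b"rotate-me-and-store-me-in-a-vault"  # placeholder key

    def pseudonymize(identifier: str) -> str:
        # HMAC-SHA256 keeps pseudonyms consistent per key but unguessable
        # without it; truncating to 16 hex chars is a readability choice.
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    print(pseudonymize("customer-12345"))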

6. Over-Reliance and Misplaced Trust: Over-reliance on AI without human oversight can lead to missed nuances and potentially risky decisions.
  • How it Happens: Blind faith in AI outputs without critical evaluation can lead to overlooking errors or biases.
  • Mitigate it: Develop clear human-AI collaboration frameworks, prioritize human expertise for critical tasks, and foster a culture of questioning AI outputs.
  • Tell-Tale Signs: Important decisions being made solely on AI recommendations, lack of human involvement in AI projects, and a general belief that AI is infallible.

7. Unforeseen Consequences: AI is a rapidly evolving field, and the long-term consequences of certain applications are not fully understood.
  • How it Happens: The complexity of AI systems can lead to unintended consequences, especially when dealing with novel situations.
  • Mitigate it: Conduct thorough risk assessments before deploying AI, prioritize ethical considerations in development, and foster a culture of continuous learning and adaptation.
  • Tell-Tale Signs: AI outputs that seem illogical or unexpected, emergence of unintended biases, and difficulty in predicting the long-term impact of AI systems.

8. The "AI Singularity" (Existential Risk): While this is a hypothetical scenario, some experts warn of a future where super-intelligent AI surpasses human control.
  • How it Happens: Unforeseen advancements in AI could lead to a scenario where machines become self-aware and pose an existential threat.
  • Mitigate it: Focus on developing safe and beneficial AI, prioritize human-centered design, and support ongoing research into AI safety and alignment.


AI presents a powerful toolkit for businesses, but with great power comes great responsibility. By acknowledging these risks and taking proactive steps to mitigate them, businesses can harness the potential of AI while ensuring ethical and responsible use. Remember, AI is a tool, and like any tool, its impact depends on the hands that wield it. By fostering a culture of transparency, collaboration, and responsible development, businesses can navigate the exciting future of AI with confidence.

What Has Happened to Electric Vehicle Sales?

Sales growth of electric vehicles has slowed dramatically this year. Tesla delivered 20% fewer cars in the first quarter of 2024 than in the prior quarter, and BYD, which had recently been the world's biggest EV maker, saw its sales decline by more than 40% over the same period.

BYD’s EV sales were still up 13% when compared to the same quarter a year earlier, while Tesla’s sales were down 9%. Both companies have been slashing prices to stimulate demand.
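
The quarter-over-quarter and year-over-year figures can point in opposite directions because they use different baselines. A quick arithmetic sketch; the unit counts are illustrative round numbers chosen only to reproduce the percentages above, not actual BYD deliveries.

    def pct_change(new: float, old: float) -> float:
        return (new - old) / old * 100

    q1_2023, q4_2023, q1_2024 = 265_000, 525_000, 300_000  # hypothetical units
    print(f"QoQ: {pct_change(q1_2024, q4_2023):+.0f}%")  # about -43%: steep drop from a strong Q4
    print(f"YoY: {pct_change(q1_2024, q1_2023):+.0f}%")  # about +13%: still above a year earlier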

While EV sales overall are still rising, they are rising at a slower rate than before. On top of that, the space has become more competitive: legacy automakers have introduced new EVs, and Chinese manufacturers have ramped up exports, with China overtaking Japan as the world's biggest vehicle exporter last year. Apple, which spent a decade and ten billion dollars on research, decided in February to end its effort to build an electric car. The Apple car would likely have cost over $100,000 and would have had lower profit margins than the company's core consumer electronics business. Apple's stock price rose on the announcement that it was abandoning the project.
