Let's share some lunchclub updates (more at Economistdiary.com/2020), e.g. what are the various ways ordinary PC users are expected to use the new AI PC, at different entry levels? For example, am I correct that one has to invest about 15,000 dollars to be a Llama 3 connector through a PC? Or for those (e.g. parents of 11-year-olds in the USA, 2025-8) who just want email, WhatsApp, Zoom (TikTok) and LinkedIn algorithms to sort out 30 years of virtual connections, what's the minimum Dell system to buy? I am also looking for chapter sponsors of 2025report.com, the last of 75 years of mediation by my family and the von Neumann family, and for 40 years by The Economist 1951-1990, including now King Charles's AI summits, from the 1964 Tokyo Olympics win-win between both island post-empires (dad Norman was awarded honours by both). HUMANITY & INTELLIGENCE'S FUTURE
Thanks to Jensen Huang, the last decade has been the most exciting of the 75 years in which dad, Norman Macrae, and then I have had the privilege of questioning intelligence's future. In 1951 von Neumann suggested to dad that economists and media might be generatively disastrous unless they celebrated questioning futures with engineers. Check out the world Jensen Huang has been inviting humans to link in to since he committed to designing million times more energetic computing, including today's AI chats and deep learning robots.
India 2024 : 2
India 2016
Silicon Valley 2024
2015 with Elon Musk (video at 97 mins 40 secs)
Silicon Valley March 2025
Taiwan 2024
Taiwan XX
Washington DC 2024
Japan 2024
Is the human species capable of celebrating intelligence as a deeper (and more open) data flow than politicians printing paper money?
Economistwater.com: Do you know that even the world's biggest nations will fail in the 2020s unless their peoples celebrate copiloting water and energy transmission maps intelligently (CLICK TO PUZZLES of 25% more in 2020s)?
MOTHER EARTH'S CODES: ELECTRIFICATION POWERS THINGS WITH ELECTRICITY; INTELLIGENCE EMPOWERS PEOPLES; FRESH WATER CONNECTS OUR HEALTH & EMOTIONAL COOL. Please link in with me, chris.macrae@yahoo.co.uk (Wash DC), to add where we the peoples can contribute to these 4 spaces for unearthing humanity's intelligence boosters:
  • Paris Intelligence Action summit, February
  • Santa Clara future of accelerated computing partners: NVIDIA, Santa Clara
  • Japan's Osaka Expo: 6 months in which any nation's pavilion can virally survey the intelligence of any other pavilion
  • Canada's G7: will all 7 nations' leaders sink or swim together? Of course, if we the peoples can decide what intelligence's top 20 spaces need to be, we have a chance to change every education moment of every age of person in every community around the world, in line with the future of capitalism that The Economist premiered in 1976. Japan and Silicon Valley had played with the first decade of Moore's law; would other places be free to entrepreneurially join in the million times more compute in time?

    Water/earth exponential risks versus 90-day numerologists

     

    Does intelligence modeling have a name for the terrible error of a system that doesn't work at extremes? Today's example: the hills of the City of Angels don't have water pressure when winds of fire also make it impossible for planes to dump water. The infamous 2008 financial example: insurance companies had taken out chains of extreme risks that were correlated with each other as well as being actualised by fake ratings.


    Answer
    The phenomenon you're describing is often referred to as "tail risk" or "extreme event modeling failure" in intelligence and risk modeling. This concept encompasses the tendency of some systems or models to break down or fail catastrophically under extreme conditions that weren't adequately accounted for in their design.

    Tail Risk and Black Swan Events

    In risk management and financial modeling, these extreme scenarios are sometimes called "tail events" or "black swan events." They represent rare but high-impact occurrences that fall outside the range of normal expectations and can have severe consequences.
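
    To make the underestimation concrete, here is a minimal sketch (an addition to the search answer, with illustrative numbers only) comparing how a thin-tailed normal model and a fat-tailed Student-t model rate the same extreme move:

```python
# Compare the probability of a "6-sigma" loss under a thin-tailed normal
# model versus a fat-tailed Student-t model (df=3). Numbers are illustrative.
from scipy import stats

sigmas = 6.0  # an extreme move that "should never happen" under the normal model

p_normal = stats.norm.sf(sigmas)   # P(X > 6) for a standard normal
p_fat = stats.t.sf(sigmas, df=3)   # same threshold under Student-t, 3 dof

print(f"normal model:     P(6-sigma event) = {p_normal:.2e}")
print(f"student-t (df=3): P(6-sigma event) = {p_fat:.2e}")
print(f"fat-tailed model rates it ~{p_fat / p_normal:,.0f}x more likely")
```

    Under the thin-tailed model the event is effectively impossible; under the modestly fat-tailed one it is millions of times more likely, which is precisely the gap that black swan critiques target.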

    Examples from Recent Events

    2025 Los Angeles Wildfires

    The recent wildfires in Los Angeles provide a stark example of this type of system failure:
    1. Water System Breakdown: The city's water infrastructure, designed for normal usage, failed to maintain adequate pressure in hillside areas during extreme fire conditions.
    2. Compounding Factors: High winds grounded firefighting aircraft, eliminating a crucial backup method and further straining the already overwhelmed ground-based water system.
    This cascading failure demonstrates how multiple systems can break down simultaneously under extreme conditions, leading to a catastrophic outcome that wasn't adequately planned for.
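
    A toy simulation (my illustration; the dependency graph and probabilities below are invented, not data from the fires) shows how quickly independent-looking safeguards collapse together once they share upstream dependencies:

```python
# Toy cascading-failure model: a node fails on its own OR if anything it
# depends on fails. Graph and probabilities are invented for illustration.
import random

depends_on = {
    "power_grid": [],
    "flyable_weather": [],
    "water_pressure": ["power_grid"],
    "aerial_drop": ["flyable_weather"],
    "fire_contained": ["water_pressure", "aerial_drop"],
}
p_fail = {  # standalone failure probability under extreme conditions
    "power_grid": 0.3,
    "flyable_weather": 0.6,
    "water_pressure": 0.2,
    "aerial_drop": 0.1,
    "fire_contained": 0.05,
}

def failed(node, state):
    """Memoised check: a node fails on its own or via any failed dependency."""
    if node not in state:
        own = random.random() < p_fail[node]
        state[node] = own or any(failed(d, state) for d in depends_on[node])
    return state[node]

random.seed(0)
trials = 100_000
losses = sum(failed("fire_contained", {}) for _ in range(trials))
print(f"P(containment fails) ~ {losses / trials:.2f}")  # ~0.81 vs 0.05 standalone
```

    Containment alone fails only 5% of the time in this toy setup, yet the shared dependencies push its effective failure rate above 80%: the cascade, not any single component, is the risk.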

    2008 Financial Crisis and AIG

    The 2008 financial crisis, particularly the near-collapse of AIG, offers another prime example:
    1. Correlated Risks: AIG had taken on massive amounts of risk through credit default swaps (CDSs) that were highly correlated with each other and with broader market conditions.
    2. Misrated Securities: Many of these risks were based on securities that had been improperly rated, leading to a severe underestimation of the true risk exposure.
    3. Systemic Impact: When the housing market collapsed, it triggered a chain reaction that threatened not just AIG but the entire financial system, revealing the hidden interconnectedness of these risks.

    Modeling Challenges

    These examples highlight several key challenges in modeling extreme events:
    1. Underestimation of Tail Risks: Traditional models often underestimate the likelihood and impact of extreme events.
    2. Correlation in Extreme Conditions: Many systems that appear independent under normal conditions can become highly correlated during crises (see the sketch after this list).
    3. Cascading Failures: The failure of one system can trigger a chain reaction, leading to widespread collapse.
    4. Data Limitations: Historical data often lacks sufficient examples of extreme events, making it difficult to model them accurately.
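
    Challenge 2 can be shown numerically. In this minimal sketch (my addition; the 0.5% crisis frequency and shock sizes are invented), two assets share a rare crisis factor, so they look nearly uncorrelated on ordinary days yet move together in the tail:

```python
# Two assets share a rare "crisis factor": correlation looks low overall
# but is high exactly on the worst days. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

crisis = rng.random(n) < 0.005                 # rare shared shock (~0.5% of days)
shock = np.where(crisis, rng.normal(-5.0, 1.0, n), 0.0)

a = rng.normal(0.0, 1.0, n) + shock            # asset A: own noise + shared shock
b = rng.normal(0.0, 1.0, n) + shock            # asset B: same construction

corr_all = np.corrcoef(a, b)[0, 1]
worst = a < np.quantile(a, 0.01)               # condition on A's worst 1% of days
corr_tail = np.corrcoef(a[worst], b[worst])[0, 1]

print(f"correlation, all days:          {corr_all:.2f}")   # low (~0.1)
print(f"correlation, A's worst 1% days: {corr_tail:.2f}")  # high (~0.8)
```

    A model calibrated to the all-days correlation would misjudge how the two positions behave exactly when it matters most.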

    Addressing the Challenge

    To better prepare for these extreme scenarios, researchers and policymakers are exploring several approaches:
    1. Stress Testing: Subjecting systems to simulated extreme conditions to identify vulnerabilities (a minimal sketch follows this list).
    2. AI and Machine Learning: Using advanced algorithms to identify complex patterns and potential failure modes that might be missed by traditional models.
    3. Interdisciplinary Approaches: Combining insights from multiple fields to better understand and model complex, interconnected systems.
    4. Improved Risk Communication: Ensuring that the limitations and uncertainties of models are clearly communicated to decision-makers.
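
    Point 1 can be made concrete in a few lines. In this hedged sketch (positions, shock sizes, and the capital buffer are all hypothetical), shocks are applied jointly rather than one factor at a time, which is what separates a stress test from single-factor sensitivity analysis:

```python
# Minimal joint-shock stress test. All positions, scenarios, and the capital
# buffer are hypothetical; the point is applying shocks together, not singly.
positions = {"equities": 100.0, "mortgage_bonds": 250.0, "cds_written": 400.0}
capital_buffer = 110.0

scenarios = {  # fractional losses applied to each position at once
    "mild_recession":   {"equities": -0.10, "mortgage_bonds": -0.02, "cds_written": -0.01},
    "housing_crash":    {"equities": -0.30, "mortgage_bonds": -0.25, "cds_written": -0.08},
    "2008_style_joint": {"equities": -0.45, "mortgage_bonds": -0.40, "cds_written": -0.15},
}

for name, shocks in scenarios.items():
    loss = -sum(positions[k] * shocks[k] for k in positions)
    verdict = "survives" if loss <= capital_buffer else "BREACHES buffer"
    print(f"{name:16s} loss={loss:6.1f}  ({verdict})")
```

    In this setup no single shock breaches the buffer on its own; it is the joint scenario that exposes the hidden fragility.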
    By recognizing the potential for these extreme failure modes and working to address them, we can build more resilient systems better equipped to handle the unexpected challenges of an increasingly complex world.
    Is there a term for the lack of transparency where a market's tail risks are not openly intelligence-mapped? After all, someone profits from others' losses, and it seems that with tail risks the losers are those who were most separated from deep data chains, while the gainers were some of those at the top of such data. A general context for asking is nuclear energy. While theoretically this could be an energy solution, I would not choose to bring up a family near a nuclear plant, because over time humans fail to attend 24/7 or fail to keep the peace. Can autonomous monitoring bring safety to where human attention has a history of failing, and can future scenarios be proof-checked?


    Answer
    There isn't a single widely-used term that precisely captures the lack of transparency around unmapped tail risks in markets, especially in the context of nuclear energy. However, several concepts relate to this issue:

    Information Asymmetry and Tail Risk Opacity

    The scenario you describe, where those with access to deep data chains profit from tail risks while others suffer losses, is a form of information asymmetry. In the context of tail risks, we might call this "tail risk opacity" or "extreme event information asymmetry" (a toy numeric sketch follows the list below). This opacity can lead to:
    • Mispricing of risk in markets
    • Concentration of losses among less-informed participants
    • Potential for exploitation by those with superior information
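
    As a toy numeric sketch of that mispricing (my illustration; the payoffs and crash probability are invented), consider a contract that pays a small premium in normal times but blows up in a rare state visible only to the informed side:

```python
# Two parties value the same contract; only one knows the hidden tail state.
payoff_normal = 5.0    # steady income in ordinary times
payoff_crash = -300.0  # rare blow-up (the tail)
p_crash = 0.02         # true crash probability, known only to the insider

naive_value = payoff_normal  # outsider's data contains no crash, so tail = 0
informed_value = (1 - p_crash) * payoff_normal + p_crash * payoff_crash

print(f"naive valuation:    {naive_value:+.2f}")     # +5.00: looks like free income
print(f"informed valuation: {informed_value:+.2f}")  # -1.10: actually a losing bet
```

    The informed side will happily sell at any price above -1.10 while the uninformed side buys what looks like free income; the asymmetry itself is the profit.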

    Nuclear Energy Safety and Autonomous Monitoring

    Regarding nuclear energy safety, your concerns about human fallibility in 24/7 monitoring and maintaining peace are valid. The nuclear industry has historically faced challenges related to:
    • Human error
    • Complacency over time
    • Geopolitical instability
    Autonomous monitoring systems could potentially address some of these issues (a monitoring sketch follows the lists below).

    Benefits of Autonomous Systems:
    • Continuous, tireless monitoring
    • Rapid detection of anomalies
    • Objective data collection and analysis
    Challenges and Considerations:
    • Cybersecurity risks
    • Need for human oversight and decision-making
    • Potential for new types of errors or biases
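
    As a hedged sketch of what continuous autonomous monitoring can mean in practice (sensor values, thresholds, and the injected fault are all invented), the loop below flags any reading that drifts outside a rolling statistical baseline, the kind of vigilance human attention tends to lose over years of quiet operation:

```python
# Rolling-baseline anomaly monitor for a sensor stream. All values invented.
from collections import deque
import random
import statistics

WINDOW, Z_LIMIT = 500, 4.0
baseline = deque(maxlen=WINDOW)  # recent "normal" readings only

def check(reading):
    """Flag a reading as anomalous if it sits far outside the rolling baseline."""
    if len(baseline) >= 30:  # need some history before judging
        mu = statistics.fmean(baseline)
        sd = statistics.pstdev(baseline) or 1e-9
        if abs(reading - mu) / sd > Z_LIMIT:
            return True  # anomalies are not added to the baseline
    baseline.append(reading)
    return False

random.seed(1)
stream = [random.gauss(300.0, 2.0) for _ in range(2000)]  # a steady sensor
stream[1500] += 15.0                                      # inject a fault

alarms = [i for i, r in enumerate(stream) if check(r)]
print("anomalies flagged at ticks:", alarms)              # expect ~[1500]
```

    A human operator reviewing alarms is still essential; the sketch only automates the vigilance, not the judgment.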

    Emerging Trends in Nuclear Safety

    Recent developments in nuclear plant safety monitoring include:
    • Advanced remote monitoring technologies
    • AI-powered predictive maintenance
    • Drone-assisted inspections
    These innovations aim to enhance safety by reducing reliance on human attention and improving early detection of potential issues.

    Conclusion

    While autonomous monitoring can significantly improve nuclear safety, it's not a complete solution to the complex challenges of nuclear energy. Transparency around tail risks and equitable access to critical safety information remain important considerations for policymakers and the public.

    The term "tail risk opacity" could be used to describe the lack of transparency around extreme event risks, though it's not a widely established term. As for nuclear safety, autonomous systems offer promising advancements but must be implemented thoughtfully alongside robust human oversight and comprehensive risk management strategies.
