Safety risk assessments are becoming a preferred regulatory tool around the world. Online safety laws in Australia, Ireland, the United Kingdom, and the United States will require a range of providers to evaluate the safety and user-generated content risks associated with their online services. While the specific assessment requirements vary across jurisdictions, the common thread is that providers will need to establish routine processes to assess, document, and mitigate safety risks.
Tailored to the needs of risk leaders, this executive summary of the 2024 Global Risks Report highlights the key findings to support decision-makers in balancing current crises and longer-term priorities.
Set against the backdrop of rapidly accelerating technological change and economic uncertainty, this year's Global Risks Report is a comprehensive analysis of the most significant risks facing the world today. With its forward-thinking approach and survey of nearly 1,500 risk experts, it provides insights into potential challenges and opportunities for risk leaders in various industries.
For leaders of founder-owned companies, simply making the decision to sell or bring in an outside investor can be anxiety-inducing. The transaction process itself is often filled with apprehensive moments—arguably none more so than the potential of sensitive information leaking. This primer helps business owners understand how leaks might emerge, how to avoid them, and how to handle them when they occur. It details three common scenarios: (1) when there are signs of a possible leak; (2) when signs of a leak are clearer; and (3) when media coverage appears imminent.
Consistently revisiting potential liquidity risk is important work for family investors, as many of these risks can lie silent for prolonged periods and become easy to overlook. In fact, unexpected liquidity demands can undo a lot of hard work and, in a worst-case scenario, force a fire sale of assets.
The growing use of video and automated technology, including artificial intelligence (AI), in employment practices—and the concern that the technology may foster discrimination and bias—has triggered a wide array of regulatory efforts. At least 11 bills have been introduced targeting the use of AI-related technology to assist with employment decisions. Employers should take note of enacted and proposed legislation and consult with legal counsel before implementing automated employment technologies.
Wealthy families have always faced complex risk management issues, but doing so is particularly challenging amid soaring inflation, regulatory uncertainty, rising cybercrime rates, and increasingly severe natural disasters. These market stressors impact all sectors of the insurance market, making it more expensive and challenging for affluent families to secure property, cyber, auto, and specialty coverages.
With climate disclosure frameworks such as the TCFD (Task Force on Climate-related Financial Disclosures) recommendations being mandated across the globe, it’s time for risk management professionals to prepare. This playbook explores climate-related disclosures through the lens of risk and insurance, providing the information, specialist insight, geography-specific disclosure requirements, strategies, tools, and tips you will need to prepare for and navigate climate-change risks. It’s a resource for leaders at different stages of their climate and environmental change journey.
The use of artificial intelligence (AI) continues to spread at a staggering speed as it reshapes industries through improved efficiency, productivity, and decision-making. However, the meteoric rise and adoption of AI technology—including ChatGPT—can overshadow some valid concerns around security and privacy. Addressing those concerns, this report offers insights from industry use cases for AI and delves into the cybersecurity risks, privacy regulations and compliance, mitigation strategies, and immediate actions that security teams can take to reduce the risk from generative AI.
Many employers have begun using artificial intelligence (AI) tools supplied by third-party vendors. On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) issued guidance indicating that, in its view, employers are generally liable for the outcomes of using selection tools to make employment decisions. Learn more about which tools are covered by the EEOC guidance, which clarifies an employer’s responsibility for discrimination arising from AI employment tools.