
Canadian Ingenuity: Artist captured ‘beauty and hardness’ of Arctic life

The artist won the hearts and confidence of the Indigenous people living at Eskimo Point in the Northwest Territories (now Arviat, Nunavut). In the mid-1900s, Winifred Petchey Marsh recorded their activities, the landscapes and their daily lives in watercolours. During the more than 40 years she lived in the North with her pastor husband, the family home became a community hub. Petchey Marsh’s paintings depicted Indigenous culture before northern ways of life were lost to modern society.

AI is changing the shape of leadership – how can business leaders prepare? – Source: www.cybertalk.org


Source: www.cybertalk.org – Author: slandau. By Ana Paula Assis, Chairman, Europe, Middle East and Africa, IBM. EXECUTIVE SUMMARY: From the shop floor to the boardroom, artificial intelligence (AI) has emerged as a transformative force in the business landscape, granting organizations the power to revolutionize processes and ramp up productivity. The scale and scope of this […]

The post AI is changing the shape of leadership – how can business leaders prepare? – Source: www.cybertalk.org appeared first on CISO2CISO.COM & CYBER SECURITY GROUP.

Saving our common home

9 May 2024 at 11:04
In a recent Whig-Standard editorial, I was intrigued by Doug Cuthand’s opening commentary on the carbon tax debate. He wrote: “Future generations will look at the carbon tax opposition and ask, ‘What were they thinking? Why did they protest one of the most cost-effective methods of lowering carbon dioxide in the atmosphere?’” The fact that he called out those politicians who axe the facts and have only a short-term view is commendable, but it is not what piqued my interest. Rather, it was the way his remarks reflected what Pope Francis expressed in his recent apostolic exhortation, Laudate Deum. In the same way Cuthand expressed his alarm about the short-term view politicians are taking on the climate crisis, the pope expressed his disappointment with the inadequate response to his 2015 environmental encyclical, even though “the world in which we live is collapsing and may be nearing the breaking point.” (Laudate Deum, 2)

Nursing Week in Canada: Elevating nursing to a skilled profession

“Easy” has never described a nurse’s job. As science and technology have progressed, the ability to care for complex patients has improved. Nursing duties in the late 1800s were more about housekeeping and less about patient care. By taking the coal scuttle out of the hands of trained nurses, nursing superintendent Nora Livingston at Montreal General Hospital (MGH) transformed nursing from drudgery into a respected career.

Biden Delays Ban on Menthol Cigarettes

The proposal had been years in the making, in an effort to curb death rates of Black smokers targeted by Big Tobacco. In an election year, the president’s worries about support among Black voters may have influenced the postponement.

© Mario Tama/Getty Images

Public health groups supporting the ban of menthol cigarettes cited years of data suggesting that the cigarettes, long marketed to African American smokers, make it more palatable to start smoking and more difficult to stop.

On TikTok, Potential Ban of App Leads to Resignation and Frustration

By: Yiwen Lu
24 April 2024 at 14:28
While Congress says the social app is a security threat, critics of the law targeting it say it shows how out of step lawmakers are with young people.

© Kent Nishimura for The New York Times

Supporters of TikTok gathered near the Capitol last month as the House of Representatives voted to pass a bill to force TikTok to cut ties with its Chinese parent company, ByteDance, or risk being banned in U.S. app stores.

Why new proposals to restrict geoengineering are misguided

23 April 2024 at 06:00

The public debate over whether we should consider intentionally altering the climate system is heating up, as the dangers of climate instability rise and more groups look to study technologies that could cool the planet.

Such interventions, commonly known as solar geoengineering, may include releasing sulfur dioxide in the stratosphere to reflect away more sunlight, or spraying salt particles along coastlines to create denser, more reflective marine clouds.

The growing interest in studying the potential of these tools, particularly through small-scale outdoor experiments, has triggered corresponding calls to shut down the research field, or at least to restrict it more tightly. But such rules would halt or hinder scientific exploration of technologies that could save lives and ease suffering as global warming accelerates—and they might also be far harder to define and implement than their proponents appreciate.

Earlier this month, Tennessee’s governor signed into law a bill banning the “intentional injection, release, or dispersion” of chemicals into the atmosphere for the “express purpose of affecting temperature, weather, or the intensity of the sunlight.” The legislation seems to have been primarily motivated by debunked conspiracy theories about chemtrails. 

Meanwhile, at the March meeting of the United Nations Environment Assembly, a bloc of African nations called for a resolution that would establish a moratorium, if not a ban, on all geoengineering activities, including outdoor tests. Mexican officials have also proposed restrictions on experiments within their borders.

To be clear, I’m not a disinterested observer but a climate researcher focused on solar geoengineering and coordinating international modeling studies on the issue. As I stated in a letter I coauthored last year, I believe that it’s important to conduct more research on these technologies because they might significantly reduce certain climate risks.

This doesn’t mean I support unilateral efforts today, or forging ahead in this space without broader societal engagement and consent. But some of these proposed restrictions on solar geoengineering leave vague what would constitute an acceptable, “small” test as opposed to an unacceptable “intervention.” Such vagueness is problematic, and its consequences would reach much further than the well-intentioned proponents of regulation might wish.

Consider the “intentional” standard of the Tennessee bill. While it is true that the intentionality of any such effort matters, defining it is tough. If knowing that an activity will affect the atmosphere is enough for it to be considered geoengineering, even driving a car—since you know its emissions warm up the climate—could fall under the banner. Or, to pick an example operating on a much larger scale, a utility might run afoul of the bill, since operating a power plant produces both carbon dioxide that warms up the planet and sulfur dioxide pollution that can exert a cooling effect.

Indeed, a single coal-fired plant can pump out more than 40,000 tons of the latter gas a year, dwarfing the few kilograms proposed for some stratospheric experiments. That includes the Harvard project recently scrapped in light of concerns from environmental and Indigenous groups. 
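To put that gap in numbers, here is a quick back-of-the-envelope comparison sketched in Python; the 2 kg experiment figure is an illustrative assumption (the text above says only “a few kilograms”), not a number taken from any specific proposal.

```python
# Rough order-of-magnitude comparison of sulfur dioxide amounts.
# The 2 kg figure for a stratospheric experiment is an illustrative assumption.
coal_plant_so2_tonnes_per_year = 40_000   # "more than 40,000 tons" from a single plant
experiment_so2_kg = 2                     # assumed size of a "few kilograms" release

ratio = (coal_plant_so2_tonnes_per_year * 1_000) / experiment_so2_kg
print(f"A single coal plant emits roughly {ratio:,.0f} times more SO2 per year "
      "than such an experiment would release.")
```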

Of course, one might say that in all those other cases, the climate-altering impact of emissions is only a side effect of another activity (going somewhere, producing energy, having fun). But then, outdoor tests of solar geoengineering can be framed as efforts to gain further knowledge for societal or scientific benefit. More stringent regulations suggest that, of all intentional activities, it is those focused on knowledge-seeking that need to be subjected to the highest scrutiny—while joyrides, international flights, or bitcoin mining are all fine.

There could be similar challenges even with more modest proposals to require greater transparency around geoengineering research. In a submission to federal officials in March, a group of scholars suggested, among other sensible updates, that any group proposing to conduct outdoor research on weather modification anywhere in the world should have to notify the National Oceanic and Atmospheric Administration in advance.

But creating a standard that would require notifications from anyone, anywhere who “foreseeably or intentionally seeks to cause effects within the United States” could be taken to mean that nations can’t modify any kind of emissions (or convert forests to farmland) before consulting with other countries. For instance, in 2020, the International Maritime Organization introduced rules that cut sulfate emissions from the shipping sector by more than 80%, all at once. The benefits for air quality and human health are pretty clear, but research also suggested that the change would unmask additional global warming, because such pollution can reflect away sunlight either directly or by producing clouds. Would this qualify?

It is worth noting that both those clamoring for more regulations and those itching to just go out and “do something” claim to have, as their guiding principle, a genuine concern for the climate and human welfare. But again, this does not necessarily justify a “Ban first—ask questions later” approach, just as it doesn’t justify “Do something first—ask permission later.”

Those demanding bans are right in saying that there are risks in geoengineering. Those include potential side effects in certain parts of the world—possibilities that need to be better studied—as well as vexing questions about how the technology could be fairly and responsibly governed in a fractured world that’s full of competing interests.

The more recent entrance of venture-backed companies into the field, selling dubious cooling credits or playing up their “proprietary particles,” certainly isn’t helping its reputation with a public that’s rightly wary of how profit motives could influence the use of technologies with the power to alter the entire planet’s climate. Nor is the risk that rogue actors will take it upon themselves to carry out these sorts of interventions. 

But burdensome regulation isn’t guaranteed to deter bad actors. If anything, they’ll just go work in the shadows. It is, however, a surefire way to discourage responsible researchers from engaging in the field. 

All those concerned about “meddling with the climate” should be in favor of open, public, science-informed strategies to talk more, not less, about geoengineering, and to foster transparent research across disciplines. And yes, this will include not just “harmless” modeling studies but also outdoor tests to understand the feasibility of such approaches and narrow down uncertainties. There’s really no way around that. 

In environmental sciences, tests involving dispersing substances are already performed for many other reasons, as long as they’re deemed safe by some reasonable standard. Similar experiments aimed at better understanding solar geoengineering should not be treated differently just because some people (but certainly not all of them) object on moral or environmental grounds. In fact, we should forcefully defend such experiments both because freedom of research is a worthy principle and because more information leads to better decision-making.

At the same time, scientists can’t ignore all the concerns and fears of the general public. We need to build more trust around solar geoengineering research and confidence in researchers. And we must encourage people to consider the issue from multiple perspectives and in relation to the rising risks of climate change.

This can be done, in part, through thoughtful scientific oversight efforts that aim to steer research toward beneficial outcomes by fostering transparency, international collaborations, and public engagement without imposing excessive burdens and blanket prohibitions.

Yes, this issue is complicated. Solar geoengineering may present risks and unknowns, and it raises profound, sometimes uncomfortable questions about humanity’s role in nature. 

But we also know for sure that we are the cause of climate change—and that it is exacerbating the dangers of heat waves, wildfires, flooding, famines, and storms that will inflict human suffering on staggering scales. If there are possible interventions that could limit that death and destruction, we have an obligation to evaluate them carefully, and to weigh any trade-offs with open and informed minds. 

Daniele Visioni is a climate scientist and assistant professor at Cornell University.

Let’s not make the same mistakes with AI that we made with social media

Oh, how the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and beyond. Now front pages are splashed with stories of social platforms’ role in misinformation, business conspiracy, malfeasance, and risks to mental health. In a 2022 survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the increase in partisan polarization.

Today, tech’s darling is artificial intelligence. Like social media, it has the potential to change the world in many ways, some favorable to democracy. But at the same time, it has the potential to do incredible damage to society.

There is a lot we can learn about social media’s unregulated evolution over the past decade that directly applies to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we did with social media.

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

#1: Advertising

The role advertising plays in the internet arose more by accident than anything else. When commercialization first came to the internet, there was no easy way for users to make micropayments to do things like viewing a web page. Moreover, users were accustomed to free access and wouldn’t accept subscription models for services. Advertising was the obvious business model, if never the best one. And it’s the model that social media also relies on, which leads it to prioritize engagement over anything else. 

Both Google and Facebook believe that AI will help them keep their stranglehold on an 11-figure online ad market (yep, 11 figures), and the tech giants that are traditionally less dependent on advertising, like Microsoft and Amazon, believe that AI will help them seize a bigger piece of that market.

Big Tech needs something to persuade advertisers to keep spending on their platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads really have an impact. When major brands like Uber and Procter & Gamble recently slashed their digital ad spending by hundreds of millions of dollars, they proclaimed that it made no dent at all in their sales.

AI-powered ads, industry leaders say, will be much better. Google assures you that AI can tweak your ad copy in response to what users search for, and that its AI algorithms will configure your campaigns to maximize success. Amazon wants you to use its image generation AI to make your toaster product pages look cooler. And IBM is confident its Watson AI will make your ads better.

These techniques border on the manipulative, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in your search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten pretty good at scrolling past the ads in Amazon and Google results pages, it will be much harder to determine whether an AI chatbot is mentioning a product because it’s a good answer to your question or because the AI developer got a kickback from the manufacturer.

#2: Surveillance

Social media’s reliance on advertising as the primary way to monetize websites led to personalization, which led to ever-increasing surveillance. To convince advertisers that social platforms can tweak ads to be maximally appealing to individual people, the platforms must demonstrate that they can collect as much information about those people as possible. 

It’s hard to exaggerate how much spying is going on. A recent analysis by Consumer Reports about Facebook—just Facebook—showed that, for every user, more than 2,200 different companies track web activity on the platform’s behalf.

AI-powered platforms that are supported by advertisers will face all the same perverse and powerful market incentives that social platforms do. It’s easy to imagine that a chatbot operator could charge a premium if it were able to claim that its chatbot could target users on the basis of their location, preference data, or past chat history and persuade them to buy products.

The possibility of manipulation is only going to get greater as we rely on AI for personal services. One of the promises of generative AI is the prospect of creating a personal digital assistant advanced enough to act as your advocate with others and as a butler to you. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’re going to want it with you constantly, and to most effectively work on your behalf, it will need to know everything about you. It will act as a friend, and you are likely to treat it as such, mistakenly trusting its discretion.

Even if you choose not to willingly acquaint an AI assistant with your lifestyle and preferences, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking you mundane questions. And with chatbots increasingly being integrated with everything from customer service systems to basic search interfaces on websites, exposure to this kind of inferential data harvesting may become unavoidable.

#3: Virality

Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker standing on a soapbox can spread ideas to maybe a few hundred people on a good night. A kid with the right amount of snark on Facebook can reach a few hundred million people within a few minutes.

A decade ago, technologists hoped this sort of virality would bring people together and guarantee access to suppressed truths. But as a structural matter, it is in a social network’s interest to show you the things you are most likely to click on and share, and the things that will keep you on the platform. 

As it happens, this often means outrageous, lurid, and triggering content. Researchers have found that content expressing maximal animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation. 

As Jonathan Swift once wrote, “Falsehood flies, and the Truth comes limping after it.” Academics seem to have proved this in the case of social media; people are more likely to share false information—perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.

AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate unending numbers of falsehoods about any individual or theme, some of which go viral. And those lies could be propelled by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.

Remarkably powerful AI text generators and autonomous agents are already starting to make their presence felt in social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 Twitter accounts that appeared to be operated using ChatGPT. 

AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the mistaken impression that an idea, or a political position, or use of a product, is more common than it really is. What we might perceive to be vibrant political debate could be bots talking to bots. And these capabilities won’t be available just to those with money and power; the AI tools necessary for all of this will be easily available to us all.

#4: Lock-in

Social media companies spend a lot of effort making it hard for you to leave their platforms. It’s not just that you’ll miss out on conversations with your friends. They make it hard for you to take your saved data—connections, posts, photos—and port it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your follows on a social platform adds a brick to the wall you’d have to climb over to go to another platform.

This concept of lock-in isn’t unique to social media. Microsoft cultivated proprietary document formats for years to keep you using its flagship Office product. Your music service or e-book reader makes it hard for you to take the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might mock you for sending text messages in green bubbles. But social media takes this to a new level. No matter how bad it is, it’s very hard to leave Facebook if all your friends are there. Coordinating everyone to leave for a new platform is impossibly hard, so no one does.

Similarly, companies creating AI-powered personal digital assistants will make it hard for users to transfer that personalization to another AI. If AI personal assistants succeed in becoming massively useful time-savers, it will be because they know the ins and outs of your life as well as a good human assistant; would you want to give that up to make a fresh start on another company’s service? In extreme examples, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or therapist, that can be a powerful form of lock-in.

Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the more poorly a company can treat you. Absent any way to force interoperability, AI companies have less incentive to innovate in features or compete on price, and fewer qualms about engaging in surveillance or other bad behaviors.

#5: Monopolization

Social platforms often start off as great products, truly useful and revelatory for their consumers, before they eventually start monetizing and exploiting those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has powerfully written about and traced through the history of Facebook, Twitter, and more recently TikTok.

The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set records for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI. 

Mitigating the risks

The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

The good news is that we have a whole category of tools to modulate the risk that corporate actions pose for our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on what kinds of businesses and products are allowed to incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of what data sets are used to train AI models or what new preproduction-phase models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties in cases where companies disregard the rules.

The single biggest point of leverage governments have when it comes to tech companies is antitrust law. Despite what many lobbyists want you to think, one of the primary roles of regulation is to preserve competition—not to make life harder for businesses. It is not inevitable for OpenAI to become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition-enabling standards specific to the technology sector, such as data portability and device interoperability. This is another core strategy for resisting monopoly and corporate control.

Additionally, governments can enforce existing regulations on advertising. Just as the US regulates what media can and cannot host advertisements for sensitive products like cigarettes, and just as many other jurisdictions exercise strict control over the time and manner of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.

Lastly, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful to an extent that startled corporate players, is proof of this. And we can go further, calling on our government to build public-option AI tools developed with political oversight and accountability under our democratic system, where the dictatorship of the profit motive does not apply.

Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are lots of paths to a good outcome.

The problem is that this isn’t happening now, particularly in the US. And with a looming presidential election, conflict spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t get our arms around AI any faster than we have (not) with social media. But it’s not too late. These are still the early years for practical consumer AI applications. We must and can do better.

Nathan E. Sanders is a data scientist and an affiliate with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.

The SEC’s new climate rules were a missed opportunity to accelerate corporate action

8 March 2024 at 14:19

This week, the US Securities and Exchange Commission enacted a set of long-awaited climate rules, requiring most publicly traded companies to disclose their greenhouse-gas emissions and the climate risks building up on their balance sheets. 

Unfortunately, the federal agency watered down the regulations amid intense lobbying from business interests, undermining their ultimate effectiveness—and missing the best shot the US may have for some time at forcing companies to reckon with the rising dangers of a warming world. 

These new regulations were driven by the growing realization that climate risks are financial risks. Global corporations now face climate-related supply chain disruptions. Their physical assets are vulnerable to storms, their workers will be exposed to extreme heat events, and some of their customers may be forced to relocate. There are fossil-fuel assets on their balance sheets that they may never be able to sell, and their business models will be challenged by a rapidly changing planet.

These are not just coal and oil companies. They are utilities, transportation companies, material producers, consumer product companies, even food companies. And investors—you, me, your aunt’s pension—are buying and holding these fossilized stocks, often unknowingly.

Investors, policymakers, and the general public all need clearer, better information on how businesses are accelerating climate change, what they are doing to address those impacts, and what the cascading effects could mean for their bottom line.

The new SEC rules formalize and mandate what has essentially been a voluntary system of corporate carbon governance, now requiring corporations to report how climate-related risks may affect their business.

They also must disclose their “direct emissions” from sources they own or control, as well as their indirect emissions from the generation of “purchased energy,” which generally means their use of electricity and heat. 

But crucially, companies will have to do so only when they determine that the information is financially “material,” providing companies considerable latitude over whether they do or don’t provide those details.

The original draft of the SEC rules would have also required corporations to report emissions from “upstream and downstream activities” in their value chains. That generally refers to the associated emissions from their suppliers and customers, which can often make up 80% of a company’s total climate pollution.  
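To make these reporting categories concrete, here is a minimal sketch in Python of how the three commonly used emissions scopes are tallied; all facility names, figures, and the grid emission factor are hypothetical, chosen only to illustrate how value-chain emissions can account for something like 80% of a company’s total.

```python
# Hypothetical illustration of the three common corporate emissions categories.
# All names, figures, and emission factors below are invented for illustration.

# Direct emissions from sources the company owns or controls
direct_tonnes_co2e = {
    "company vehicle fleet": 1_200,
    "on-site natural gas boilers": 3_800,
}

# Indirect emissions from the generation of purchased energy (electricity, heat)
purchased_electricity_mwh = 50_000
grid_emission_factor = 0.4  # tonnes CO2e per MWh (hypothetical grid average)
purchased_energy_tonnes_co2e = purchased_electricity_mwh * grid_emission_factor

# Upstream and downstream value-chain emissions (the category the SEC dropped)
value_chain_tonnes_co2e = {
    "purchased goods and services": 60_000,
    "upstream transport": 15_000,
    "use of sold products": 45_000,
}

direct_total = sum(direct_tonnes_co2e.values())
value_chain_total = sum(value_chain_tonnes_co2e.values())
total = direct_total + purchased_energy_tonnes_co2e + value_chain_total

print(f"Direct emissions:  {direct_total:>9,.0f} t CO2e")
print(f"Purchased energy:  {purchased_energy_tonnes_co2e:>9,.0f} t CO2e")
print(f"Value chain:       {value_chain_total:>9,.0f} t CO2e "
      f"({value_chain_total / total:.0%} of the total)")
```

In a hypothetical breakdown like this one, dropping the value-chain category removes the bulk of what the company would otherwise have to report.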

The loss of that requirement and the addition of the “materiality” standard both seem attributable to intense pressure from business groups. 

To be sure, these rules should help make it clearer how some companies are grappling with climate change and their contributions to it. Out of legal caution, plenty of businesses are likely to determine that emissions are material.

And clearer information will help accelerate corporate climate action as firms concerned about their reputation increasingly feel pressure from customers, competitors, and some investors to reduce their emissions. 

But the SEC could and should have gone much further. 

After all, the EU’s similar policies are much more comprehensive and stringent. California’s emissions disclosure law, signed this past October, goes further still, requiring both public and private corporations with revenues over $1 billion to report every category of emissions, and then to have this data audited by a third party.

Unfortunately, the SEC rules merely move corporations to the starting line of the process required to decarbonize the economy, at a time when they should already be deep into the race. We know these rules don’t go far enough, because firms already following similar voluntary protocols have shown minimal progress in reducing their greenhouse-gas emissions. 

The disclosure system upon which the SEC rules are based faces two underlying problems that have limited how much and how effectively any carbon accounting and reporting can be put to use. 

First: problems with the data itself. The SEC rules grant firms significant latitude in carbon accounting, allowing them to set different boundaries for their “carbon footprint,” model and measure emissions differently, and even vary how they report their emissions. In aggregate, what we will end up with are corporate reports of the previous year’s partial emissions, without any way to know what a company actually did to reduce its carbon pollution.

Second: limitations in how stakeholders can use this data. As we’ve seen with voluntary corporate climate commitments, the wide variations in reporting make it impossible to compare firms accurately. Or as the New Climate Institute argues, “The rapid acceleration in the volume of corporate climate pledges, combined with the fragmentation of approaches and the general lack of regulation or oversight, means that it is more difficult than ever to distinguish between real climate leadership and unsubstantiated greenwashing.”

Investor efforts to evaluate carbon emissions, decarbonization plans, and climate risks through ESG (environmental, social, and governance) rating schemes have merely produced what some academics call “aggregate confusion.” And corporations have faced few penalties for failing to clearly disclose emissions or even meet their own standards. 

All of which is to say that a new set of SEC carbon accounting and reporting rules that largely replicate the problems with voluntary corporate action, by failing to require consistent and actionable disclosures, isn’t going to drive the changes we need, at the speed we need. 

Companies, investors, and the public require rules that drive changes inside companies and that can be properly assessed from outside them. 

This system needs to track the main sources of corporate emissions and incentivize companies to make real investments in efforts to achieve deep emissions cuts, both within the company and across its supply chain.

The good news is that even though the rules in place are limited and flawed, regulators, regions, and companies themselves can build upon them to move toward more meaningful climate action.

The smartest firms and investors are already going beyond the SEC regulations. They’re developing better systems to track the drivers and costs of carbon emissions, and taking concrete steps to address them: reducing fuel use, building energy-efficient infrastructure, and adopting lower-carbon materials, products, and processes. 

It is now just good business to look for carbon reductions that actually save money.

The SEC has taken an important, albeit flawed, first step in nudging our financial laws to recognize climate impacts and risks. But regulators and corporations need to pick up the pace from here, ensuring that they’re providing a clear picture of how quickly or slowly companies are moving as they take the steps and make the investments needed to thrive in a transitioning economy—and on an increasingly risky planet.

Dara O’Rourke is an associate professor and co-director of the master of climate solutions program at the University of California, Berkeley.
