Opinion | OpenAI Could Be a Force for Good if It Can Answer These Questions First

OpenAI is now worth as much as Goldman Sachs or AT&T. The artificial intelligence start-up behind ChatGPT has also said it intends to shed its status as a nonprofit to become a for-profit business within two years. Outside experts and OpenAI employees have expressed concern that as a result, the company will shy away from its founding purpose — to build safe A.I. to “benefit all of humanity” — in favor of earning profits for investors.

Artificial intelligence may be the most consequential technological advance in our lifetime, and OpenAI is unique in the breadth of its potential impact. Its products could displace workers in disparate industries, from customer service to radiology to film production. Its work is so energy-hungry that it could knock the planet’s progress on climate change off track.

I’m not a defense expert or a science-fiction writer, but it’s clear that the effect A.I. will have on our democracy, national security and privacy will be profound. That means how we structure the business of A.I. is a decision of great significance.

OpenAI has responded to these concerns by saying it will become a public benefit corporation. A benefit corporation is a traditional for-profit company with one key difference: It is legally obligated to balance profit with purpose. Public benefit corporation leaders and boards must consider workers, customers, communities and the environment, not just shareholders, as in a standard corporation.

This idea — some call it “stakeholder governance” — has caught the imagination of business leaders, with an estimated 15,000 companies globally adopting the new legal form. Think of Patagonia, Allbirds, Chobani and Warby Parker.

I helped write the model benefit corporation legislation as a co-founder of the B Corp movement, a community of over 9,000 companies dedicated to using business as a force for good. I championed its passage alongside many business leaders, including Patagonia’s Yvon Chouinard.

That’s why I know OpenAI’s approach is insufficient.

Yes, becoming a public benefit corporation will give OpenAI’s board the ability to make decisions that consider the long-term interests of society and the planet, in addition to its balance sheet. But that should be table stakes for any A.I. company. Public benefit corporations are required to balance the impact of their business decisions with a broad set of interests.

This might mean choosing to invest in solar and wind energy, which have higher upfront costs and take time to build but pollute less. It might mean declining to offer products or services to clients who pose a risk to worker safety, even if that sacrifices short-term profits. That’s a good start. But this structure alone does not ensure that OpenAI will be held accountable.

Before proposing some practical solutions, let me be clear: I am deeply invested in the success of A.I. I have advanced metastatic prostate cancer, and, selfishly, I am rooting for OpenAI and its competitors to help accelerate drug development that could save my life, among many other possible benefits to society.

However, the company can’t do that if it is beholden to investors whose main measure of success is their investment return. And the future of humanity shouldn’t depend on taking unaccountable executives such as Sam Altman, OpenAI’s chief executive, at their word that the company is living up to its stated principles.

As a first priority, the company has to get its purpose right. In other words, who is OpenAI working for? OpenAI must spell out in its corporate charter, not just in its marketing materials, how it will serve each of its stakeholders and with whom it won’t do business. This might mean committing the company to doing its fair share to help reach global climate goals by sharply reducing the amount of fossil fuel energy used to power its servers.

Second, we can’t hold OpenAI accountable for its commitments if we don’t know what impact it is having or what’s in its source code. Current law requires public benefit corporations registered in Delaware to report on their social impact only once every two years, using measurements of their own choosing (a much lower bar than the annual, audited financial reporting most companies must meet).

Given the A.I. industry’s history of nondisclosure agreements, it is clear that we cannot trust companies to regulate themselves. And even a high-functioning government will not be able to stay ahead of a fast-moving industry.

OpenAI needs to commit to transparent, annual, audited impact reporting using independent third-party standards that are as rigorous as its financial reporting requirements. These must be developed in cooperation with organizations and individuals with expertise. If OpenAI is truly serious about serving society and wants to maintain its social license to operate, then I would expect the company to welcome this.

Lastly, OpenAI needs a “belt and suspenders” legal structure that ensures its commitments can be enforced. Enforcement falls largely to investors, who could sue the company for failing to fulfill its purpose, but that outcome is unlikely because they are focused on maximizing their own financial returns.

OpenAI recently created what it says is an independent Safety and Security Committee, but the company retains the power to dissolve it whenever that becomes inconvenient, just as Microsoft laid off its entire A.I. ethics and society team in 2023.

One way to protect and balance these competing interests is through a trust with special decision-making rights. The Guardian news organization in Britain uses its trust to ensure its journalists are free to report without influence from its advertisers. Anthropic, an OpenAI competitor, set up a long-term benefit trust to hold a separate class of stock with expert trustees who will have the right to appoint a majority of Anthropic’s board.

Perhaps Mr. Altman could take a cue from Patagonia, a brand he’s often been spotted wearing. Patagonia’s purpose trust owns all of the company’s voting stock, meaning that the decision makers are obliged to advance Patagonia’s commitment to protecting the earth and its natural resources.

It’s not a utopian fantasy to believe that OpenAI can solve some of our greatest societal challenges. This can be true only if it is structured and governed to do so. Every day, by applying these same principles, thousands of certified B Corps show that business can be a force for good to create high-quality jobs, rebuild strong communities and solve environmental crises — all while making money for investors. Surely OpenAI, with its ingenuity and newfound resources, can match their efforts.

Andrew Kassoy is a co-founder and co-chair of B Lab Global.

(The Times sued OpenAI and Microsoft in December for copyright infringement of news content related to A.I. systems.)

