Itemoids

US

The Order That Defines the Future of AI in America

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 10 › biden-white-house-ai-executive-order › 675837

Earlier today, President Joe Biden signed the most sweeping set of regulatory principles on artificial intelligence in America to date: a lengthy executive order that directs all types of government agencies to make sure America is leading the way in developing the technology while also addressing the many dangers that it poses. The order explicitly pushes agencies to establish rules and guidelines, write reports, and create funding and research initiatives for AI—“the most consequential technology of our time,” in the president’s own words.

The scope of the order is impressive, especially given that the generative-AI boom began just about a year ago. But the document’s many parts—and there are many—are at times in tension, revealing a broader confusion over what, exactly, America’s primary attitude toward AI should be: Is it a threat to national security or to a just society? Is it a geopolitical weapon? Is it a way to help people?

The Biden administration has answered “all of the above,” demonstrating a belief that the technology will soon be everywhere. “This is a big deal,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as acting director of the White House Office of Science and Technology Policy, told us. AI will be “as ubiquitous as operating systems in our cellphones,” Nelson said, which means that regulating it will involve “the whole policy space itself.” That very scale almost necessitates ambivalence, and it is as if the Biden administration has taken into account conflicting views without deciding on one approach.

One section of the order adopts wholesale the talking points of a handful of influential AI companies such as OpenAI and Google, while others center the concerns of workers, vulnerable and underserved communities, and civil-rights groups most critical of Big Tech. The order also makes clear that the government is concerned that AI will exacerbate misinformation, privacy violations, and copyright infringement. Even as it heeds the recommendations of Big AI, the order additionally outlines approaches to support smaller AI developers and researchers. And there are plenty of nods toward the potential benefits of the technology as well: AI, the executive order notes, has the “potential to solve some of society’s most difficult challenges.” It could be a boon for small businesses and entrepreneurs, create new categories of employment, develop new medicines, improve health care, and much more.  

If the document reads like a smashing-together of papers written by completely different groups, that’s because it likely is. The president and vice president have held meetings with AI-company executives, civil-rights leaders, and consumer advocates to discuss regulating the technology, and the Biden administration published a Blueprint for an AI Bill of Rights before the launch of ChatGPT last November. That document called for advancing civil rights, racial justice, and privacy protections, among other things. Today’s executive order cites and expands that earlier proposal—it directly addresses AI’s demonstrated ability to contribute to discrimination in contexts such as health care and hiring, the risks of using AI in sentencing and policing, and more. These issues existed long before the arrival of generative AI, a subcategory of artificial intelligence that creates new—or at least compellingly remixed—material based on training data, but those older AI programs stir the collective imagination less than ChatGPT, with its alarmingly humanlike language.

[Read: The future of AI is GOMA]

The executive order, then, is naturally fixated to a great extent on the kind of ultrapowerful and computationally intensive software that underpins that newer technology. At particular issue are so-called dual-use foundation models, which have also been called “frontier AI” models—a term for future generations of the technology with supposedly devastating potential. The phrase was popularized by many of the companies that intend to build these models, and chunks of the executive order match the regulatory framing that these companies have recommended. One influential policy paper from this summer, co-authored in part by staff at OpenAI and Google DeepMind, suggested defining frontier-AI models as including those that would make designing biological or chemical weapons easier, those that would be able to evade human control “through means of deception and obfuscation,” and those that are trained above a threshold of computational power. The executive order uses almost exactly the same language and the same threshold.

A senior administration official speaking to reporters framed the sprawling nature of the document as a feature, not a bug. “AI policy is like running a decathlon,” the official said. “We don’t have the luxury of just picking, of saying, ‘We’re just going to do safety,’ or ‘We’re just going to do equity,’ or ‘We’re just going to do privacy.’ We have to do all of these things.” After all, the order has huge “signaling power,” Suresh Venkatasubramanian, a computer-science professor at Brown University who helped co-author the earlier AI Bill of Rights, told us. “I can tell you Congress is going to look at this, states are going to look at this, governors are going to look at this.”

Anyone looking at the order for guidance will come away with a mixed impression of the technology—which has about as many possible uses as a book has possible subjects—and likely also confusion about what the president decided to focus on or omit. The order spends quite a lot of words detailing how different agencies should prepare to address the theoretical impact of AI on chemical, biological, radiological, and nuclear threats, a framing drawn directly from the policy paper supported by OpenAI and Google. In contrast, the administration spends far fewer on the use of AI in education, a massive application for the technology that is already happening. The document acknowledges the role that AI can play in boosting resilience against climate change—such as by enhancing grid reliability and enabling clean-energy deployment, a common industry talking point—but it doesn’t once mention the enormous energy and water resources required to develop and deploy large AI models, nor the carbon emissions they produce. And it discusses the possibility of using federal resources to support workers whose jobs may be disrupted by AI but does not mention workers who are arguably exploited by the AI economy: for example, people who are paid very little to manually give feedback to chatbots.

[Read: America already has an AI underclass]

International concerns are also a major presence in the order. Among the most aggressive actions the order takes is directing the secretary of commerce to propose new regulations that would require U.S. cloud-service providers, such as Microsoft and Google, to notify the government if foreign individuals or entities who use their services start training large AI models that could be used for malicious purposes. The order also directs the secretary of state and the secretary of homeland security to streamline visa approval for AI talent, and urges several other agencies, including the Department of Defense, to prepare recommendations for streamlining the approval process for noncitizens with AI expertise seeking to work within national labs and access classified information.

While the surveillance of foreign entities is an implicit nod to the U.S.’s fierce competition with, and concerns about, China in AI development, China is also the No. 1 source of foreign AI talent in the U.S. In 2019, 27 percent of top-tier U.S.-based AI researchers received their undergraduate education in China, compared with 31 percent who were educated in the U.S., according to a study from Macro Polo, a Chicago-based think tank that studies China’s economy. The document, in other words, suggests actions against foreign agents developing AI while underscoring the importance of international workers to the development of AI in the U.S.

[Read: The new AI panic]

The order’s international focus is no accident; it is being delivered right before a major U.K. AI Safety Summit this week, where Vice President Kamala Harris will be delivering a speech on the administration’s vision for AI. Unlike the U.S.’s broad approach, or that of the EU’s AI Act, the U.K. has been almost entirely focused on those frontier models—“a fairly narrow lane,” Nelson told us. In contrast, the U.S. executive order considers a full range of AI and automated decision-making technologies, and seeks to balance national security, equity, and innovation. The U.S. is trying to model a different approach to the world, she said.

The Biden administration is likely also using the order to make a final push on its AI-policy positions before the 2024 election consumes Washington and a new administration potentially comes in, Paul Triolo, an associate partner for China and a technology-policy lead at the consulting firm Albright Stonebridge, told us. The document expects most agencies to complete their tasks before the end of this term. The resulting reports and regulatory positions could shape any AI legislation brewing in Congress, which will likely take much longer to pass, and preempt a potential Trump administration that, if the past is any indication, may focus its AI policy almost exclusively on America’s global competitiveness.

Still, given that only 11 months have passed since the release of ChatGPT, and its upgrade to GPT-4 came less than five months after that, many of those tasks and timelines appear somewhat vague and distant. The order gives 180 days for the secretaries of defense and homeland security to complete a cybersecurity pilot project, 270 days for the secretary of commerce to launch an initiative to create guidance in another area, and 365 days for the attorney general to submit a report on something else. The senior administration official told reporters that a newly formed AI Council among the agency heads, chaired by Bruce Reed, a White House deputy chief of staff, would ensure that each agency makes progress at a steady clip. Once the final deadline passes, perhaps the federal government’s position on AI will have crystallized.

But perhaps its stance and policies cannot, or even should not, settle. Like the internet itself, artificial intelligence is a capacious technology that could be developed, and deployed, in a dizzying combination of ways; Congress is still trying to figure out how copyright and privacy laws, as well as the First Amendment, apply to the decades-old web, and every few years the terms of those regulatory conversations seem to shift again.

A year ago, few people could have imagined how chatbots and image generators would change the basic way we think about the internet’s effects on elections, education, labor, or work; only months ago, the deployment of AI in search engines seemed like a fever dream. All of that, and much more in the nascent AI revolution, has begun in earnest. The executive order’s internal conflict over, and openness to, different values and approaches to AI may have been inevitable, then—the result of an attempt to chart a path for a technology when nobody has a reliable map of where it’s going.

Illinois man pleads not guilty in killing of Palestinian-American boy

Al Jazeera English

www.aljazeera.com › news › 2023 › 10 › 30 › illinois-man-pleads-not-guilty-in-killing-of-palestinian-american-boy

Joseph Czuba is charged in the US with the fatal stabbing of six-year-old Wadea Al-Fayoume and the wounding of Hanaan Shahin.

Ukraine war in maps: Russian military units generally undermanned, says ISW

Euronews

www.euronews.com › 2023 › 10 › 30 › ukraine-war-in-maps-russian-military-units-generally-undermanned-says-isw

Ukraine’s defence minister Rustem Umerov told the US defence secretary Lloyd Austin that Russian losses in Avdiivka amount to approximately 4,000 soldiers.

Why Congress Keeps Failing to Protect Kids Online

The Atlantic

www.theatlantic.com › technology › archive › 2023 › 10 › protect-children-online-social-media-internet › 675825

Roughly a decade has passed since experts began to appreciate that social media may be truly hazardous for children, and especially for teenagers. As with teenage smoking, the evidence has accumulated slowly, but leads in clear directions. The heightened rates of depression, anxiety, and suicide among young people are measurable and disheartening. When I worked for the White House on technology policy, I would hear from the parents of children who had suffered exploitation or who died by suicide after terrible experiences online. They were asking us to do something.

The severity and novelty of the problem suggest the need for a federal legislative response, and Congress can’t be said to have ignored the issue. In fact, by my count, it has held 39 hearings since 2017 that have addressed children and social media, nine of them devoted wholly to that topic. Congress gave Frances Haugen, the Facebook whistleblower, a hero’s welcome. Executives from Facebook, YouTube, and other firms have been duly summoned and blasted by angry representatives.

But just what has Congress actually done? The answer is: nothing.

Everyone knows that Congress struggles with polarizing issues such as immigration and gun control. But this is a failure on a different level: an inability to do something urgent and overwhelmingly popular, despite the agreement of both major parties, the president, and the large majority of the American population.

[Read: The Perils of ‘Sharenting’]

As someone who witnessed this failure firsthand, I am pained to admit that our government is failing parents, teenagers, and children. Congressional dysfunction cannot be reduced to any one thing. But one fact stands out: For a decade and counting, not a single bill seeking to protect children has reached a full vote in the House or Senate.

It is easy to read this and want to give up on Congress entirely. But what we voters and citizens need is a mechanism to force congressional leadership to make hard commitments to holding votes on overwhelmingly popular legislation. Whatever power public opinion and moral duty may have once had, they are no longer working.

The story of child-protection legislation in recent years could be taught as a reverse civics lesson, where bills that have the support of the president, the public, and both houses of Congress fail to become law. It would almost be reassuring if we could blame partisanship or corporate lobbyists for the outcome. But this is a story of culture war, personal grievance, and petty beefs so indefensible as to be a disgrace to the Republic.

During my time in the White House, no meetings were more painful than those with parents whose children had been killed or had died by suicide after online bullying or online sexual exploitation. Parents, in more pain than any parent should have to endure, would come in bearing photos of their dead children. Kids like Carson Bride, a 16-year-old who died by suicide after online bullying, or Erik Robinson, a 12-year-old who died after trying out a choking game featured on TikTok.

The case for legislative action is overwhelming. It is insanity to imagine that platforms, which see children and teenagers as target markets, will fix these problems themselves. Teenagers often act self-assured, but their still-developing brains are bad at self-control and vulnerable to exploitation. Youth need stronger privacy protections against the collection and distribution of their personal information, which can be used for targeting. In addition, the platforms need to be pushed to do more to prevent young girls and boys from being connected to sexual predators, or served content promoting eating disorders, substance abuse, or suicide. And the sites need to hire more staff whose job it is to respond to families under attack.

All of these ideas were once what was known, politically, as low-hanging fruit. Even people who work or worked at the platforms will admit that the U.S. federal government should apply more pressure. An acquaintance who works in trust and safety at one of the platforms put it to me bluntly over drinks one evening: “The U.S. government doesn’t actually force us to do anything. Sure, Congress calls us in to yell at us every so often, but there’s no follow-up.”

“What you need to do,” she said, “is actually get on our backs and force us to spend money to protect children online. We could do more. But without pressure, we won’t.”

Alex Stamos, the former chief security officer of Facebook, made a similar point to me. Government, he says, is too focused on online problems with intangible harms that are inherently difficult for the platforms to combat, like “fighting misinformation.” In contrast, government does far too little to force platforms to combat real and visceral harms, like the online exploitation of minors, that the platforms could do more about if pushed. This is not to let the platforms off the hook—but government needs to do its job too.

Some of the bills that emerged in the 117th Congress, in 2021 and 2022, sought to strengthen the protection of teenagers’ privacy online. The case for such legislation is not hard to make—lack of privacy makes targeting possible. Senators Ed Markey (a Democrat from Massachusetts) and Bill Cassidy (a Republican from Louisiana) were prominent sponsors of one such bill, named the Children and Teens’ Online Privacy Protection Act.

Enacting a stronger children’s-privacy bill also seemed a good fallback if Congress should, once again, fail to pass a general privacy law protecting everyone. Whatever promise there may have been for passing such a law last year began to disappear after a nasty fight between Senator Maria Cantwell, chair of the Senate Commerce Committee, and her three counterparts: Frank Pallone of New Jersey, the chair of the House Commerce Committee; Roger Wicker, the ranking Republican on the Senate committee; and Cathy McMorris Rodgers, the Republican ranking member on the House committee. The latter three co-drafted a privacy bill, with special protections for children, but they did it without Cantwell, and she opposed the bill and refused to introduce it in the Senate. The bill was then promptly roadblocked in the House by the State of California, which feared the elimination of its own privacy law and did not want to lose its ability to pass future laws on the matter. California convinced then-Speaker Nancy Pelosi, in early September, to announce her opposition, all but ending any chance of passing a general privacy bill. The deadlock over general privacy was its own tragedy, but it made a children’s bill a natural and seemingly attainable alternative.

A bolder approach to protecting children online sought to require that social-media platforms be safer for children, similar to what we require of other products that children use. In 2022, the most important such bill was the Kids Online Safety Act (KOSA), co-sponsored by Senators Richard Blumenthal of Connecticut and Marsha Blackburn of Tennessee. KOSA came directly out of the Frances Haugen hearings in the fall of 2021, and particularly the revelation that social-media sites were serving content that promoted eating disorders, suicide, and substance abuse to teenagers. In an alarming demonstration, Blumenthal revealed that his office had created a test Instagram account for a 13-year-old girl, which was, within one day, served content promoting eating disorders. (Instagram has acknowledged that this is an ongoing issue on its site.)

[Read: Facebook Is a Doomsday Machine]

The KOSA bill would have imposed a general duty on platforms to prevent and mitigate harms to children, specifically those stemming from self-harm, suicide, addictive behaviors, and eating disorders. It would have forced platforms to install safeguards to protect children and tools to enable parental supervision. In my view, the most important thing the bill would have done is simply force the platforms to spend more money and more ongoing attention on protecting children, or risk serious liability.

But KOSA became a casualty of the great American culture war. The law would give parents more control over what their children do and see online, which was enough for some groups to transform the whole thing into a fight over transgender issues. Some on the right, unhelpfully, argued that the law should be used to protect children from trans-related content. That triggered civil-rights groups, who took up the cause of teenage privacy and speech rights. A joint letter condemned KOSA for “enabl[ing] parental supervision of minors’ use of [platforms]” and “cutting off another vital avenue of access to information for vulnerable youth.”

It got ugly. I recall an angry meeting in which the Eating Disorders Coalition (in favor of the law) fought with LGBTQ groups (opposed to it) in what felt like a very dark Veep episode, except with real lives at stake. Critics like Evan Greer, a digital-rights advocate, charged that attorneys general in red states could attempt to use the law to target platforms as part of a broader agenda against trans rights. That risk is exaggerated: The bill’s list of harms is specific and discrete; it does not include, say, “learning about transgenderism,” but it does provide that “nothing shall be construed [to require a platform to prevent] any minor from deliberately and independently searching for, or specifically requesting, content.” Nonetheless, the charge had a powerful resonance and was widely disseminated.

Sometime in the late fall of 2022, Chairman Pallone made the decision not to advance children’s privacy or children’s protection bills out of his committee, effectively killing both in regular session. Pallone (and his Republican counterparts) argued that passing a children’s privacy law would take the wind out of the sails of some future effort to pass a comprehensive privacy bill (for which, I note, we are still waiting). When it came to his reasoning for killing KOSA, Pallone mentioned the concerns of the special-interest groups—his spokesman pointed out to me that “nearly 100 civil rights organizations had substantive policy concerns with the bills.” There was, finally, as his staffers freely admitted, a form of payback involved—a desire, shared by McMorris Rodgers, to punish Cantwell for having blocked the adult-privacy bill. A spokesman for Pallone insisted to me recently that “there was never a path forward for either COPPA or KOSA” based on the opposition of unnamed members of Congress and the civil-rights groups, and that “young people will quickly age out of age-specific protections” anyway. (I note that civil-rights groups don’t actually have voting rights in Congress.)

There was, in fact, one last path forward in 2022. Senator Blumenthal managed to get KOSA inserted in the early draft of an end-of-year spending bill, subject to the sign-off of House and Senate leadership. It was, however, promptly and shamelessly removed by Mitch McConnell, presumably to avoid giving Democrats the win. This mess of infighting, myopic strategy, and political maneuvering meant Congress failed to do anything to protect children online last year.

There was and is, to be sure, a serious, substantive debate to be had over KOSA. Teenagers do have privacy and speech interests, but parents have interests as well. As a teenager, I resented anything that seemed like censorship or parental oversight; as a parent, I feel differently. Reasonable people can and do disagree over the balance that should be struck. But at some point, in a democracy, the vote needs to be called. Polls show that 70 percent of Americans and about 91 percent of parents want stronger legal protections for children online. If a majority, indeed a supermajority, of Americans want stronger protection for teenagers online, it is simply wrong to never call a vote.

I am well aware that part of the power of leadership and committee chairs lies in their control over the holding of votes. But that doesn’t make the practice any less horribly undemocratic, and it is in these “non-votes” that the power of corporate lobbyists and special interests really makes its mark. That’s why what we need is some mechanism for a popular override—say, if legislation attracts more than 50 co-sponsors, leadership must hold a floor vote, win or lose.

It doesn’t help that there has been no political accountability for the members of Congress who were happy to grandstand about children online and then do nothing. No one outside a tiny bubble knows that Wicker voted for KOSA in public but helped kill it in private, or that infighting between Cantwell and Pallone helped kill children’s privacy. I know this only because I had to for my job. The press loves to cover members of Congress yelling at tech executives. But its coverage of the killing of popular bills is rare to nonexistent, in part because Congress hides its tracks. Say what you want about the Supreme Court or the president, but at least their big decisions are directly attributable to the justices or the chief executive. Congressmen like Frank Pallone or Roger Wicker don’t want to be known as the men who killed Congress’s efforts to protect children online, so we rarely find out who actually fired the bullet.

The American public has the right to be angry: Things are not okay. That said, other parts of government have done what they can. The White House and FTC have tightened oversight using existing authorities. Some states have passed their own child-protection legislation, and this fall, 44 state attorneys general sued Meta, alleging that the company knew Instagram was dangerous but promoted it as safe and appropriate anyhow. Both the children’s-privacy bill and KOSA were reintroduced this year, and the latter has picked up 48 co-sponsors, including prominent progressives like Elizabeth Warren. While vocal detractors remain, the major LGBTQ groups no longer oppose the legislation.

At this point, both parties, the president, and the public want a law passed—which is why we need a commitment to hold a floor vote in both chambers. Protecting children is a fundamental role of any civilized state, and by that measure we are failing badly.

The Secretive Industry Devouring the U.S. Economy

The Atlantic

www.theatlantic.com › ideas › archive › 2023 › 10 › private-equity-publicly-traded-companies › 675788

The publicly traded company is disappearing. In 1996, about 8,000 firms were listed in the U.S. stock market. Since then, the national economy has grown by nearly $20 trillion. The population has increased by 70 million people. And yet, today, the number of American public companies stands at fewer than 4,000. How can that be?

One answer is that the private-equity industry is devouring them. When a private-equity fund buys a publicly traded company, it takes the company private—hence the name. (If the company has not yet gone public, the acquisition keeps that from happening.) This gives the fund total control, which in theory allows it to find ways to boost profits so that it can sell the company for a big payday a few years later. In practice, going private can have more troubling consequences. The thing about public companies is that they’re, well, public. By law, they have to disclose information about their finances, operations, business risks, and legal liabilities. Taking a company private exempts it from those requirements.

That may not have been such a big deal when private equity was a niche industry. Today, however, it’s anything but. In 2000, private-equity firms managed about 4 percent of total U.S. corporate equity. By 2021, that number was closer to 20 percent. In other words, private equity has been growing nearly five times faster than the U.S. economy as a whole.

[James Surowiecki: The method in the market’s madness]

Elisabeth de Fontenay, a law professor at Duke University who studies corporate finance, told me that if current trends continue, “we could end up with a completely opaque economy.”

This should alarm you even if you’ve never bought a stock in your life. One-fifth of the market has been made effectively invisible to investors, the media, and regulators. Information as basic as who actually owns a company, how it makes its money, or whether it is profitable is “disappearing indefinitely into private equity darkness,” as the Harvard Law professor John Coates writes in his book The Problem of Twelve. This is not a recipe for corporate responsibility or economic stability. A private economy is one in which companies can more easily get away with wrongdoing and an economic crisis can take everyone by surprise. And to a startling degree, a private economy is what we already have.

America learned the hard way what happens when corporations operate in the dark. Before the Great Depression, the whole U.S. economy functioned sort of like the crypto market in 2021. Companies could raise however much money they wanted from whomever they wanted. They could claim almost anything about their finances or business model. Investors often had no good way of knowing whether they were being defrauded, let alone whether to expect a good return.

Then came the worst economic crisis in U.S. history. From October to December of 1929, the stock market lost 50 percent of its value, with more losses to come. Thousands of banks collapsed, wiping out the savings of millions of Americans. Unemployment spiked to 25 percent. The Great Depression generated a crisis of confidence for American capitalism. Public hearings revealed just how rampant corporate fraud had become before the crash. In response, Congress passed the Securities Act of 1933 and the Securities Exchange Act of 1934. These laws launched a regime of “full and fair disclosure” and created a new government agency, the Securities and Exchange Commission, to enforce it. Now if companies wanted to raise money from the public, they would have to disclose a wide array of information to the public. This would include basic details about the company’s operations and finances, plus a comprehensive list of major risks facing the company, plans for complying with current and future regulations, and documentation of outstanding legal liabilities. All of these disclosures would be reviewed for accuracy by the SEC.

This regime created a new social contract for American capitalism: scale in exchange for transparency. Private companies were limited to 100 investors, putting a hard limit on how quickly they could grow. Any business that wanted to raise serious capital from the public had to submit itself to the new reporting laws. Over the next half century, this disclosure regime would underwrite the longest period of economic growth and prosperity in U.S. history. But it didn’t last. Beginning in the “Greed Is Good” 1980s, a wave of deregulatory reforms made it easier for private companies to raise capital. Most important was the National Securities Markets Improvement Act of 1996, which allowed private funds to raise an unlimited amount of money from an unlimited number of institutional investors. The law created a loophole that effectively broke the scale-for-transparency bargain. Tellingly, 1997 was the year the number of public companies in America peaked.

[From the November 2018 issue: The death of the IPO]

“Suddenly, private companies could raise all the money they want without even thinking about an IPO,” De Fontenay said. “That completely undermined the incentives companies had to go public.” Indeed, from 1980 to 2000, an average of 310 companies went public every year; from 2001 to 2022, only 118 did. The number briefly shot up during the coronavirus pandemic but has since fallen. (Over the same time period, the rate of mergers and acquisitions soared, which also helps explain the decline in public companies.)

Meanwhile, private equity has matured into a multitrillion-dollar industry, devoted to making short-term profits from highly leveraged transactions, operating with almost no regulatory or public scrutiny. Not all private-equity deals end in calamity, of course, and not all public companies are paragons of civic virtue. But the secrecy in which private-equity firms operate emboldens them to act more recklessly—and makes it much harder to hold them accountable when they do. Private-equity investment in nursing homes, to take just one example, has grown from about $5 billion at the turn of the century to more than $100 billion today. The results have not been pretty. The industry seems to have recognized that it could improve profit margins by cutting back on staffing while relying more on psychoactive medication. Stories abound of patients being rushed to the hospital after being overprescribed opioids, of bedside call buttons so poorly attended that residents suffer in silence while waiting for help, of nurses being pressured to work while sick with COVID. A 2021 study concluded that private-equity ownership was associated with about 22,500 premature nursing-home deaths from 2005 to 2017—before the wave of death and misery wrought by the pandemic.

Eventually, the public got wind of what was happening. The pandemic death count focused attention on the industry. Journalists and watchdog groups exposed the worst of the behaviors. Policy makers and regulators, at long last, began to take action. But by then, much of the damage had been done. “If we had some form of disclosure, we probably would have seen regulatory action a decade earlier,” Coates told me. “But instead, we’ve had 10-plus years of experimentation and abuse without anyone knowing.”

Something similar could be said about any number of industries, including higher education, newspapers, retail, and grocery stores. Across the economy, private-equity firms are known for laying off workers, evading regulations, reducing the quality of services, and bankrupting companies while ensuring that their own partners are paid handsomely. The veil of secrecy makes all of this easier to execute and harder to stop.

Private-equity funds dispute many of the criticisms of the industry. They argue that the horror stories are exaggerated and that a handful of problematic firms shouldn’t tarnish the rest of the industry, which is doing great work. Freed from onerous disclosure requirements, they claim, private companies can build more dynamic, flexible businesses that generate greater returns for shareholders. But the lack of public information makes verifying these claims difficult. Most careful academic studies find that although private-equity funds slightly outperformed the stock market on average prior to the early 2000s, they no longer do so. When you take into account their high fees, they appear to be a worse investment than a simple index fund.

“These companies basically get to write their own stories,” says Alyssa Giachino, the research director at the Private Equity Stakeholder Project. “They produce their own reports. They come up with their own numbers. And there’s no one making sure they are telling the truth.”

In the Roaring ’20s, the lack of corporate disclosure allowed a massive financial crisis to build up without anyone noticing. A century later, the growth of a new shadow economy could pose similar risks.

The hallmark of a private-equity deal is the so-called leveraged buyout. Funds take on massive amounts of debt to buy companies, with the goal of reselling in a few years at a profit. If all of that debt becomes hard to pay back—because of, say, an economic downturn or rising interest rates—a wave of defaults could ripple through the financial system. In fact, this has happened before: The original leveraged buyout mania of the 1980s helped spark the 1989 stock-market crash. Since then, private equity has grown into a $12 trillion industry and has begun raising much of its money from unregulated, nonbank lenders, many of which are owned by the same private-equity funds taking out loans in the first place.

Meanwhile, interest rates have reached a 20-year high, posing a direct threat to private equity’s debt-heavy business model. In response, many private-equity funds have migrated toward even riskier forms of backroom financing. Many of these involve taking on even more debt on the assumption that market conditions will soon improve enough to restore profitability. If that doesn’t happen—and many of these big deals fail—the implications could be massive.

[Joe Nocera and Bethany McLean: What financial engineering does to hospitals]

The industry counters that private markets are a better place for risky deals precisely because they have fewer ties to the real economy. A traditional bank has a bunch of ordinary depositors, whereas if a private-equity firm goes bust, the losers are institutional investors: pension funds, university endowments, wealthy fund managers. Bad, but not catastrophic. The problem, once again, is that no one knows how true that story is. Banks have to disclose information to regulators about how much they’re lending, how much capital they’re holding, and how their loans are performing. Private lenders sidestep all of that, meaning that regulators can’t know what risks exist in the system or how tied they are to the real economy.

“Everything could be just fine,” says Ana Arsov, a managing director at Moody’s Analytics who specializes in private lending. “But the point is that we don’t have the information we need to assess risk. Who is making these loans? How big are they? What are the terms? We just don’t know. So the worry is that the leverage in the system might grow and grow and grow without anyone noticing. And we really don’t know what the effects could be if something goes wrong.”

The government appears to be at least somewhat aware of this problem. In August, the SEC proposed a new rule requiring private-equity fund advisers to give more information to their investors. That’s better than nothing, but it hardly addresses the bad behavior or systemic risk. Nearly a century ago, Congress concluded that the nation’s economic system could not survive as long as its most powerful companies were left to operate in the shadows. It took the worst economic cataclysm in American history to learn that lesson. The question now is what it will take to learn it again.