It was about trust and accountability

We chat about her time in Beijing, where she delved into the complexities of AI and its intersection with national security. “The Chinese are very pragmatic,” she says. “They do not want to be a follower. They want to be a leader.”

As we finish our meal with the last of the lychee juice and jasmine tea, Toner reflects on her experience at OpenAI and the lessons she learned from the tumultuous events that unfolded. “It was a challenging time, but I wouldn’t change it for anything,” she says. “It taught me a lot about power dynamics, corporate governance, and the importance of transparency and accountability in the tech industry.”

With a sense of determination in her voice, Toner adds, “I may have lost the battle at OpenAI, but the war for ethical AI is far from over. And I plan to continue fighting for a future where AI is developed responsibly and ethically.”

As we leave the restaurant and step out into the bustling streets of London, I can’t help but admire Helen Toner’s resilience and unwavering commitment to her principles. She may have faced setbacks, but her passion for ensuring the responsible development of AI remains unshakeable.

The aubergine has hints of miso that I savour.

“Ma is part of the Chinese word for anaesthesia or paralysis, and that’s because the Sichuan peppercorn numbs your tongue and your lips,” she explains. “I’m kinda addicted to that flavour.”

The conversation turns back to OpenAI, and Toner’s relationship with the company over the two years she sat on its board. When she first joined, there were nine members, including LinkedIn co-founder Reid Hoffman, Shivon Zilis, an executive at Elon Musk’s neurotechnology company Neuralink, and Republican congressman Will Hurd. It was a collegiate atmosphere, she says, though in 2023 those three members all stepped down, leaving three non-execs on the board, including Toner, tech entrepreneur Tasha McCauley and Adam D’Angelo, the chief executive of website Quora, alongside Altman and the company’s co-founders Greg Brockman and Ilya Sutskever.

“I came on as the company was going through a clear shift,” Toner says. “Certainly when I joined, it was much more comparable to being on the board of a VC-funded start-up, where you’re just there to help out [and] do what the CEO thinks is right. You don’t want to be meddling or you don’t want to be getting in the way of anything.”

The transition at the company, she says, was precipitated by the launch of ChatGPT — which Toner and the rest of the board found out about on Twitter — but also of the company’s most advanced AI model, GPT-4. OpenAI went from being a research lab, where scientists were working on nascent and blue-sky research projects not designed to be used by the masses, to a far more commercial entity with powerful underlying technology that had far-reaching impacts.


I ask Toner what she thinks of Altman, the person and leader. “We’ve always had a friendly relationship, he’s a friendly guy,” she says. Toner still has legal duties of confidentiality to the company, and is limited in what she can reveal. But speaking on the Ted AI podcast in May, she was vocal in claiming that Altman had misled the board “on multiple occasions” about its existing safety processes. According to her, he had withheld information, wilfully misrepresented things that were happening at the company, and in some cases outright lied to the board.

She pointed to the fact that Altman hadn’t informed the board about the launch of ChatGPT, or that he owned the OpenAI Startup Fund, a venture capital fund he had raised from external limited partners and made investment decisions on — even though, says Toner, he claimed “to be an independent board member with no financial interest in the company”. Altman stepped down from the fund in April this year.

In the weeks leading up to the November firing, Altman and Toner had also clashed over a paper she had co-authored on public perceptions of various AI developments, which included some criticism of the ChatGPT launch. Altman felt that it reflected badly on the company. “If I had wanted to critique OpenAI, there would have been many more effective ways to do that,” Toner says. “It’s honestly not clear to me if it actually got to him or if he was looking for an excuse to try and get me off the board.”

Today, she says those are all merely illustrative examples to point to long-term patterns of untrustworthy behaviour that Altman exhibited, with the board but also with his own colleagues. “What changed it was conversations with senior executives that we had in the fall of 2023,” she says. “That is where we started thinking and talking more actively about [doing] something about Sam specifically.”

Public criticisms of the board’s decision have ranged from personal attacks on Toner and her co-directors — with many describing her as a “decel”, someone who is anti-technological progress — to disapproval of how the board handled the fallout. Some noted that the board’s timing had been poor, given the concurrent share sale at OpenAI, potentially jeopardising employees’ payouts.

Last March, an independent review conducted by an external law firm into the events concluded that Altman’s behaviour “did not mandate removal”. The entrepreneur rejoined the board the same month. At the time he said he was “pleased this whole thing is over”, adding: “Over these last few months it’s been disheartening to see some people with an agenda trying to tease leaks in the press to try and hurt the company and hurt the mission. They have not worked.”


In Toner’s view, the review’s outcome sounded like the new board had posed the question of whether it had to fire Altman. “Which I think gets interpreted as: ‘Did he do something illegal?’ And that is not how I think the board should necessarily be evaluating his conduct,” she says.

“They’ve not disputed anywhere any of the actual claims that we’ve made about what went wrong or why we fired him . . . which was about trust and accountability and oversight.”

In a statement to the FT, chair of OpenAI’s board Bret Taylor said that “over 95% of employees, including senior leadership, asked for Sam’s reinstatement”. Toner can’t explain — and didn’t anticipate — defections by senior staff, including by board member Sutskever, who went from criticising to supporting Altman within days. “I learnt a lot about how different people react to pressure in different situations.”

We’re making our way through the feast with efficiency, in agreement that the tingly and fragrant ma po tofu is the star of the show. I ask Toner how life has changed for her since November, and she insists that it hasn’t. She has kept her full-time job at CSET, where she advises senior government officials on AI policy and national security, makes her own rye bread at home with her husband, a German scientist, and deals daily with the exertions of toddler-parenting.

We shouldn’t let things be set up such that a small number of people get to be the ones that get to decide what happens

At the time, when the OpenAI crisis turned into a long weekend of sleepless negotiations and damage control, she admits it gave her a new appreciation for her community in DC. Since many of her colleagues worked in national security, they had faced “real crises, where people were dying or there were wars going on, so that put things into perspective”, she says. “A few sleepless nights are not so bad.”

Her biggest lesson was about the future of AI governance. For her, the events at OpenAI raised the stakes for getting proper external oversight of the small group of companies racing to build powerful AI systems. “It could mean government regulation, but it could also mean . . . industry standards, public pressure, public expectations,” she says.

This is not just the case for OpenAI, she stresses, but also for companies such as Anthropic, Google and Meta. Establishing legal requirements around transparency is crucial, she believes, to avoid building a tool that is dangerous for humanity.

“The companies are also in a tough spot, where they’re all trying to compete with each other. And so you talk to people inside these companies, and they almost beg you to intervene from the outside,” she says. “It shouldn’t just be about trusting the benevolence and judgment of specific people. We shouldn’t let things be set up such that a small number of people get to be the ones who decide what happens, no matter how good those people are.”


Toner came to AI policy by an unusual route. As an undergraduate in Melbourne, she was introduced to effective altruism (EA). She had been won over by the community’s ideas about helping to improve the world in a way that required thinking with both head and heart, she says.

The EA community, and its troubled workings, were thrust into the public eye in 2022 by its most prominent promoter and donor, Sam Bankman-Fried, the disgraced founder of the cryptocurrency trading firm FTX. Toner says she knew him “a little, not well”, and had met him “once or twice”.

“I’ve been much less involved in recent years, mostly because of this groupthink, this kind of hero worship. [Bankman-Fried] is a symptom of that,” she says. “The last thing I wrote [about it] was about becoming disillusioned with EA, both how I experienced it and how I saw others experience it.”

By this point, we are sated but cannot resist picking at the leftovers for another hit of that numbing peppercorn flavour. A full stomach feels like the right moment to ask the dystopian question about the next wave of AI systems. “One thing [effective altruists] did really well is take seriously the possibility that we could see very advanced AI systems in our lifetimes, and that that could matter for what happens in the world,” she says. “In 2013, 2014, when I started hearing these kinds of ideas, it seemed very countercultural, and now . . . it certainly feels more mainstream.”

Despite this, she has faith in humanity’s capacity to adapt. “Overall, I feel somewhat hopeful that we will have breathing room to prepare,” she says.

Throughout our conversation, Toner has been careful in recounting her attempts to challenge one of tech’s most powerful chief executives. Much of the personal criticism and attention she was forced to absorb could perhaps have been avoided had she acted differently, prepared better for the fallout, or sought more advice. I feel compelled to ask whether she ever questions herself, her actions or her methods of last November.

“I mean, all the time,” she says, grinning broadly. “If you’re not questioning yourself, how are you making good decisions?”

Madhumita Murgia is the FT’s AI editor

Find out about our latest stories first: follow FT Weekend on Instagram and X, and subscribe to our podcast Life & Art wherever you listen