Suchir Balaji had criticized the makers of ChatGPT
Warning: This article contains discussion of suicide which some readers may find distressing.
A former OpenAI researcher who had spoken out against the tech giant in recent months was found dead by police.
Suchir Balaji, 26, was found in his San Francisco apartment on November 26, after police received a call asking officers to check in on his wellbeing.
Balaji was a researcher at the artificial intelligence company for four years, after studying computer science at the University of California, Berkeley.
But after his departure from the AI giant, he spoke out against the methods the company was using to train its AI models.
The whistleblower alleged that OpenAI had violated US copyright law while developing its popular ChatGPT online chatbot.
The former researcher left OpenAI in August (SUCHIR BALAJI/X)
ChatGPT can create a one-week itinerary for a trip abroad, or give advice to someone struggling with a particular issue.
However, the former researcher was not on board with practices that were allegedly going on behind the scenes.
OpenAI has been fighting a number of lawsuits relating to its data-gathering practices, after stating that its system is trained on ‘publicly available data’.
The San Francisco medical examiner’s office determined that Balaji’s death was a suicide, and that no foul play had taken place.
In the wake of the news, a spokesperson for OpenAI said: “We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time.”
Upon leaving the company, the 26-year-old told all in a bombshell interview with the New York Times.
There is some controversy about how OpenAI gathers data (Jaque Silva/NurPhoto via Getty Images)
And he issued a warning to all his former colleagues, in which he stated: “If you believe what I believe, you have to just leave the company.”
However, in what turned out to be his final post on X, he cleared some things up.
Balaji wrote: “I recently participated in a NYT story about fair use and generative AI, and why I’m skeptical ‘fair use’ would be a plausible defense for a lot of generative AI products.
“That being said, I don’t want this to read as a critique of ChatGPT or OpenAI per se, because fair use and generative AI is a much broader issue than any one product or company.”
Since leaving OpenAI in August, the researcher had been working on personal projects.
If you or someone you know is struggling or in a mental health crisis, help is available through Mental Health America. Call or text 988 or chat 988lifeline.org. You can also reach the Crisis Text Line by texting MHA to 741741.
If you have experienced a bereavement and would like to speak with someone in confidence, contact The Compassionate Friends on (877) 969-0010.